Almost half of CEOs fear A.I. could destroy humanity 5-10 years from now—but one A.I. ‘Godfather’ says an existential threat is ‘preposterously ridiculous’

Business leaders, technologists and A.I. experts are divided on whether the technology of the moment will serve as a “renaissance” for humanity or the source of its downfall.

At the invitation-only Yale CEO summit this week, 42% of CEOs surveyed at the event said they believed A.I. has the potential to destroy humanity within the next five to 10 years.

The results of the survey were exclusively shared with CNN, to whom Yale professor Jeffrey Sonnenfeld described the findings as “pretty dark and alarming.”

Respondents included Walmart CEO Doug McMillon, Coca-Cola CEO James Quincey, and the leaders of businesses in industries ranging from IT and pharmaceuticals to media and manufacturing. A total of 119 CEOs took part in the survey.

Breaking down the numbers: 34% said A.I. had the potential to wipe out mankind within a decade, and 8% said the dystopian outcome could occur in as little as five years. However, 58% of polled CEOs said that this could never happen and that they were “not worried.”

It isn’t just CEOs who are concerned about what rapidly developing artificial intelligence might unleash upon the world.

Back in March, 1,100 prominent technologists and A.I. researchers, including Elon Musk and Apple cofounder Steve Wozniak, signed an open letter calling for a six-month pause on the development of powerful A.I. systems.

As well as raising concerns about the impact of A.I. on the workforce, the letter’s signatories pointed to the possibility of these systems already being on a path to superintelligence that could threaten human civilization.

Meanwhile, Musk, cofounder of Tesla and SpaceX and the world’s richest person, separately said the tech will hit people “like an asteroid” and that there is a chance that it will “go Terminator.”

Even Sam Altman, CEO of OpenAI—the company behind chatbot phenomenon ChatGPT—has painted a bleak picture of what he thinks could happen if the technology goes wrong.

“The bad case—and I think this is important to say—is, like, lights-out for all of us,” he said in an interview with StrictlyVC earlier this year.

A ‘Godfather of A.I.’ begs to differ

Yann LeCun has a different opinion.

LeCun, along with Yoshua Bengio and Geoffrey Hinton, became known as the “godfathers of A.I.” after they won the prestigious $1 million Turing Award in 2018 for their pioneering work in artificial intelligence.

Two of these three so-called “Godfathers” have, in light of the recent buzz around the technology, publicly stated that they have regrets about their life’s work and are fearful about artificial intelligence being misused.

In a recent interview, Bengio said seeing A.I. mutate into a possible threat had left him feeling “lost,” while Hinton—who resigned from Google to speak openly about the risks posed by A.I.—has been warning about a “nightmare scenario” advanced artificial intelligence could create.

LeCun, however, is more optimistic.

Unlike his fellow A.I. pioneers, he does not see artificial intelligence triggering Doomsday.

Speaking at a press event in Paris on Tuesday, LeCun—who is now the chief A.I. scientist at Facebook parent company Meta—labeled the concept of A.I. posing a grave threat to humanity “preposterously ridiculous.”

While he conceded that there was “no question” machines would eventually outsmart people, he argued that this would not happen for many years and that experts could be trusted to keep A.I. safe.

“Will A.I. take over the world? No, this is a projection of human nature on machines,” said LeCun, who is also a professor at NYU. “It's still going to run on a data center somewhere with an off switch ... and if you realize it's not safe, you just don't build it.”

He said that anxieties around A.I. were surfacing because people struggled to imagine how technology that does not yet exist could be safe.

“It's as if you asked in 1930 ‘how are you going to make a turbo-jet safe?’” he explained. “Turbojets were not invented yet in 1930, same as human level A.I. has not been invented yet. Turbojets were eventually made incredibly reliable and safe.”

LeCun also rejected the notion of regulations being introduced to stall A.I. developments, asserting that it would be a mistake to keep research “under lock and key.”

On Wednesday—after LeCun’s talk—EU lawmakers approved rules aimed at regulating A.I. technology. Officials will now craft the finer details of the regulation before the draft rules become law.
