In an open letter, dozens of industry experts have warned that artificial intelligence could lead to an extinction event. The letter was published on May 30 by the Center for AI Safety (CAIS), an advocacy group that, according to its website, aims to reduce societal-scale risks from AI. It compares the potential effects of AI to those of pandemics and nuclear war.
Sam Altman—the billionaire co-founder of ChatGPT maker OpenAI—signed the letter, along with the CEOs of AI firms Google DeepMind and Anthropic.
It was also signed by Dr. Geoffrey Hinton and Dr. Yoshua Bengio, two of the computer scientists often described as the “godfathers of AI” for their extensive work developing the field of deep learning. Along with Dr. Yann LeCun (who did not sign the statement), they won the prestigious Turing Award in 2018 for those efforts.
In addition to those industry heavyweights, a diverse range of celebrities and professors signed the letter, including the singer Grimes (who has previously used AI for creative exploration) and the popular podcaster and neuroscientist Sam Harris.
Quotable: The (brief) statement
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the one-sentence statement said. Its brevity and relatively broad scope allowed for signatories with a wide range of viewpoints.
What does more regulation of AI mean to Sam Altman?
Altman—one of the most recognizable faces in AI—has alternated between saying he’s “scared” of AI and championing it as “the greatest technology humanity has yet developed” (and that was in the same interview).
The OpenAI CEO met with US President Joe Biden and testified before the Senate Judiciary Committee earlier this month, asking lawmakers for increased regulation of his industry.
“My worst fear is we cause significant harm to the world,” Altman said during the testimony. “If this technology goes wrong, it can go quite wrong.”
Altman outlined what such regulations could look like in a blog post published last week with two of OpenAI’s other co-founders. He called for three major reforms, including increased coordination among AI developers around the world and the development of technical capabilities that could rein in a potential “superintelligence.”
He also encouraged the formation of a global regulatory group for AI technology similar in structure to the International Atomic Energy Agency, with powers to inspect systems, require audits, and test for compliance with safety standards.