Tech Experts, Leaders Warn Of AI Extinction Risk

Leading figures in AI development and research have once again joined forces to raise the alarm about the risks posed by artificial intelligence. Organized by the Center for AI Safety — a San Francisco-based nonprofit that aims to counter the "societal-scale risks associated with AI" through responsible research and advocacy — the statement likens the threat of AI to that of pandemics and nuclear war.

Titled "Statement on AI Risk," the letter argues that AI poses an extinction risk and should therefore be a global priority. The ultimate objective is to bring together all stakeholders — from scientists and policymakers to journalists and the organizations profiting from AI — to foster a productive discussion and reach a meaningful solution.

Among the signatories are OpenAI CEO Sam Altman, Google DeepMind chief Demis Hassabis, Stability AI head Emad Mostaque, Quora CEO Adam D'Angelo, and Microsoft CTO Kevin Scott. Notably absent from the statement are leaders from Apple, Meta, and Nvidia.

This isn't the first time a collective voice has been raised against AI risks, but it is the first time that top AI executives like Altman have joined the chorus. Over the past few weeks, multiple notable voices have called for a regulatory oversight mechanism to ensure that AI development happens responsibly and doesn't harm human prospects. The lingering fear, however, is that AI could open the same kind of Pandora's box that the advent of social media and the internet did.

What else is happening with AI regulation?

In March, the likes of Elon Musk and Apple co-founder Steve Wozniak, alongside scientists and academics, signed "Pause Giant AI Experiments: An Open Letter," asking for a six-month pause on the development of advanced AI products like OpenAI's GPT-4 model. Musk subsequently sat down with former Fox News host Tucker Carlson and warned that AI has "the potential of civilizational destruction."

"The world needs to establish the rules of the road so that any downsides of artificial intelligence are far outweighed by its benefits," wrote Microsoft co-founder Bill Gates in a blog post. Over the past few months, calls for AI regulation have picked up from both sides of the aisle.

In May, Altman sat before Congress and pitched the idea of forming a new agency that regulates AI and "licenses any effort above a certain threshold of capabilities." Altman, who recently toured Europe, initially warned that OpenAI would be forced to cease operations in the EU bloc if regulations crossed a certain line, but reversed course two days later.

Microsoft President Brad Smith said in a CBS interview that he expects government-led AI regulation to materialize in the coming years, and he has written an in-depth article on setting up a framework to achieve those goals. As calls for AI regulation intensify, lawsuits over AI misuse have already started pouring in, even as big tech continues to push AI integration into its products at a feverish pace.