AI Experts & Celebrities Are Sounding The Alarm Bell - They Want A 'Superintelligence' Ban

Though it has only reached the mainstream in the past few years, generative artificial intelligence is basically unavoidable. Whether it's AI-generated images and videos or lengthy "conversations" with AI chatbots, AI has become an increasingly pervasive part of modern life. While many passionately support this tech, there's an ever-growing desire to see generative AI reined in before it causes more damage than it already has (look no further than how AI is killing the job market for young coders). In fact, even AI experts and well-known celebrities are pushing back as tech giants like OpenAI and xAI seek to usher in a new AI era.

Efforts are underway to develop what has been dubbed AI "superintelligence," which would surpass human cognitive ability in virtually every respect. In response, a wide range of celebrities and tech personalities have come together to launch the Statement on Superintelligence, urging those pursuing such ventures to reconsider and pause their research and development. The statement asks that such AI be introduced only once there is a "broad scientific consensus that it will be done safely and controllably" and "strong public buy-in" behind it.

Virgin Group founder Richard Branson and Apple co-founder Steve Wozniak both added their names. Other notable signatories include Prince Harry and his wife, Meghan, Duchess of Sussex, actor Joseph Gordon-Levitt, and rapper will.i.am. Even AI luminaries such as Yoshua Bengio and Stuart Russell are on board. But what is AI superintelligence exactly, and why are so many notable names against it?

Superintelligent AI could come with catastrophic consequences

To understand the growing concern about AI superintelligence, we first need to define what it might look like. According to IBM, superintelligent AI is an advanced form of AI software capable of cognition beyond that of a human being. Unlike current AI, which relies on pre-programmed algorithms and some level of human intervention to complete tasks, superintelligence would need no such hand-holding. It would learn continuously, constantly improving itself without external guardrails. Theoretically, it would be capable of reasoning, independent problem-solving, and flexible thinking, effectively replicating and then growing beyond the abilities of the human brain.

As if that didn't sound scary enough, the risks of an AI superintelligence coming to fruition could be catastrophic in more ways than one. Not only might it be developed and controlled by those with bad intentions, but it's entirely possible that it breaks free of its inferior human creators. Such an entity would likely be goal-driven and, much like a human, willing to go to any lengths necessary to achieve its goals, albeit with vastly greater intellectual resources. Mass elimination of jobs, far more thorough technological surveillance, outright takeover, and even human extinction could all be on the table. This may sound like science fiction, but we're in uncharted waters, where it's paramount to consider the absolute worst outcomes.

For now, superintelligent AI seems to be a ways off, but the clock is ticking. Time will tell what the future holds for the different types of AI, as well as the species that created them, and whether this statement will have any influence on that.