AI Poses 99% Risk to Humanity, but Elon Musk Doesn't Care
Musk fears AI could become superintelligent and threaten humanity
An AI safety researcher has warned that the probability of AI posing an existential threat to humanity is very high, close to 100 percent. Tech mogul Elon Musk, by contrast, puts the risk at around 10 to 20 percent and argues that development should continue regardless.
Generative AI can be a double-edged sword. It is no secret that AI has paved the way for advances in medicine, computing, education and other fields. However, serious concerns have also been raised about the technology's potential downsides.
Recent incidents, like Microsoft's Copilot generating inappropriate content and research suggesting ChatGPT's potential for malicious use, highlight the dangers associated with AI tools. Elon Musk has been outspoken about his views on AI, sparking significant debate about the technology.
Musk recently called AI the "biggest technology revolution" but warned that a shortage of power infrastructure by 2025 could hinder its development. At the Abundance Summit, he also suggested that "there's some chance that it will end humanity."
According to Business Insider, Elon Musk estimated a 10 to 20 percent chance of AI posing an existential threat. He did not elaborate on his reasoning. The 52-year-old tech mogul believes the potential benefits of AI outweigh the risks.
He has argued that "the probable positive scenario outweighs the negative scenario," suggesting further exploration is crucial.
The concept of p(doom) paints a bleak picture of AI's future
AI safety researcher Roman Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville, recently told Business Insider that the probability of AI posing an existential threat is far higher than Musk's 10 to 20 percent estimate, calling that view "too conservative."
"In my view, the true probability of doom from AI is significantly higher," Yampolskiy said. He elaborated on "p(doom)" as the risk of AI causing an existential threat, such as taking control of humanity or triggering a catastrophe through novel weapons, cyberattacks, or nuclear war.
The New York Times has dubbed "p(doom)," the probability of AI posing an existential threat, a "morbid new statistic" gaining traction in Silicon Valley. Tech executives cited by the Times offer estimates ranging from a 5 to a 50 percent chance of an AI apocalypse. Yampolskiy places the risk at 99.999999 percent.
Given the inherent difficulty of controlling advanced AI, Yampolskiy argues, the safest option may be to avoid building such powerful systems altogether.
"Not sure why he thinks it is a good idea to pursue this technology anyway," Yamploskiy added. "If he is concerned about competitors getting there first, it doesn't matter as uncontrolled superintelligence is equally bad, no matter who makes it come into existence."
Musk raised similar concerns about the risks of powerful AI in his lawsuit against OpenAI and its CEO, Sam Altman, explicitly citing their GPT-4 model. The X owner believes the model may be approaching Artificial General Intelligence (AGI) and argues for open access to its research for public scrutiny and collaboration.