Some AI experts have warned about risks from fast-growing AI. Pixabay

According to a few industry leaders, the AI (artificial intelligence) technology they are currently developing could lead to dire consequences. Notably, the heads of Google DeepMind and OpenAI have previously warned that AI could lead to the extinction of humanity.

Top AI CEOs, as well as other tech experts, recently supported a statement published by the Centre for AI Safety. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement reads.

While OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Anthropic's Dario Amodei have supported the statement, others claim the fears have been overblown. Nevertheless, the Centre for AI Safety website has highlighted a few possible disaster scenarios:

  • AI can be weaponised. For instance, chemical weapons can be made using drug-discovery tools.
  • AI can be used to spread misinformation. Some observers believe AI technology can be used to spread misinformation in political campaigns.
  • Only a few people could end up having access to AI technology. This could enable "regimes to enforce narrow values through pervasive surveillance and oppressive censorship."
  • Also, humans could become dependent on AI, much like the scenario depicted in the movie "WALL-E."

Dr. Geoffrey Hinton has also supported the Centre for AI Safety's call. The British-Canadian computer scientist, widely known as the godfather of artificial intelligence, had previously issued a warning about risks from AI. The statement was also signed by Yoshua Bengio, a professor of computer science at the University of Montreal.

What do other experts believe?

Like Dr. Hinton, NYU Professor Yann LeCun and Prof Bengio are known for their groundbreaking work in AI; the trio jointly won the 2018 Turing Award for their outstanding contributions to computer science. However, Prof LeCun believes these warnings are overblown. Earlier this month, he tweeted, "The most common reaction by AI researchers to these prophecies of doom is face palming."

Likewise, a slew of other experts suggest that fears of AI ending humanity are unrealistic. They believe these fears divert attention from real issues such as bias in existing systems. Arvind Narayanan, a computer scientist at Princeton University, told the BBC that disaster scenarios like those depicted in sci-fi films are unrealistic.

Similarly, Elizabeth Renieris, a senior research associate at Oxford's Institute for Ethics in AI, told the outlet that she is more concerned about the near-term harms of AI. According to Ms. Renieris, a considerable number of AI tools are free-riding on the "whole of human experience to date."

Centre for AI Safety director Dan Hendrycks believes we should not view present concerns and future risks antagonistically. Hendrycks claims it will be easier to handle future risks if we address some of the existing issues now. To recap, reports surrounding the threat from AI began making the rounds online back in March, when Twitter CEO Elon Musk signed an open letter calling for a pause on AI training.

The letter questioned whether it is necessary to create non-human minds that could eventually replace, outsmart, and even outnumber humans. The new campaign, on the other hand, comprises a short statement designed to "open up discussion." Furthermore, the statement says the risks posed by AI are similar to those posed by nuclear war.

OpenAI, the company behind the widely popular ChatGPT, recently indicated that superintelligence will probably need to be regulated like nuclear energy. The firm suggests we will eventually need something like the International Atomic Energy Agency (IAEA) for superintelligence efforts.

Should we be worried about AI?

Technology leaders including Google chief executive Sundar Pichai, Microsoft CEO Satya Nadella, and OpenAI boss Sam Altman have already discussed AI regulation with US President Joe Biden. Moreover, UK Prime Minister Rishi Sunak highlighted the benefits of AI to the economy and society while speaking to reporters earlier this year.

Acknowledging that people will be concerned by reports that AI poses existential risks, Sunak assured the public that the government is looking at the issue very carefully.