Artificial Intelligence
Rishi Sunak will fly to Washington this week for talks with US President Joe Biden, as the prime minister seeks to exert British “leadership” in the debate over the development and regulation of artificial intelligence. DADO RUVIC/Reuters

Artificial intelligence (AI) could power advances that "kill many humans" within just two years, Rishi Sunak's adviser has warned.

Matt Clifford, chairman of the Advanced Research and Invention Agency (ARIA), is advising the prime minister on the development of the government's Foundation Model Taskforce, which investigates AI language models such as ChatGPT and Google Bard.

Although the tech expert acknowledged that two years is a "bullish" timescale, he warned of the existential risk posed once AI surpasses human intelligence.

Elaborating on the claim during an appearance on Monday's First Edition programme, he was asked what probability he would put on humanity being wiped out by AI.

"I think it is not zero," Clifford said.

The tech expert added that policymakers should prepare for threats ranging from cyberattacks to the creation of bioweapons if humanity fails to find a way to control the technology.

Last week, Rishi Sunak declared Britain was prepared to lead on "guard rails" to limit the dangers of AI, signalling that the government may seek to adopt a more cautious approach to the development of the technology.

"We have taken a deliberately iterative approach because the technology is evolving quickly and we want to make sure that our regulation can evolve as it does as well," the Prime Minister added.

His fears were echoed in a recent statement signed by 350 AI experts.

Signatories included Sam Altman, chief executive of OpenAI, the software company behind ChatGPT, as well as Geoffrey Hinton, popularly known as the "godfather of AI".

It warned that in the longer term, there was a genuine risk the technology could lead to the extinction of humanity.

The application of artificial intelligence first rose to mainstream prominence with the development of ChatGPT, released at the end of November last year.

Owned and developed by the AI research and deployment company OpenAI, ChatGPT is an artificial intelligence chatbot trained to follow instructions and provide detailed responses.

According to the latest available data, ChatGPT has more than 100 million users, and its website attracts around 1 billion visits per month.

The program can produce an entire website layout, or write an easy-to-understand explanation of dark matter, in a few seconds. ChatGPT can also be used for essay writing, relationship advice and mathematical problem-solving.

And there is no doubt that "if harnessed in the right way, AI could be a force for good", said Clifford.

"You can imagine AI curing diseases, making the economy more productive, helping us get to a carbon-neutral economy" added the tech expert.

However, despite the obvious benefits of utilising the power of artificial intelligence, the rapid pace at which the technology is evolving has sparked fears over its potential to do harm.

In March, Italy temporarily blocked ChatGPT over data privacy concerns, becoming the first Western country to take such action against the popular artificial intelligence chatbot.

Such fears even prompted billionaire Tesla and Twitter boss Elon Musk to join hundreds of experts in voicing concern about the advancement of powerful AI systems.

A letter issued by the Future of Life Institute set out growing concerns over AI's advancement, stating of powerful systems: "They should be developed only once we are confident that their effects will be positive and their risks will be manageable."

Despite calls for at least a six-month pause in AI development, major tech companies continue to push ahead with plans to incorporate the technology into their products.

In a company tweet at the beginning of the year, Microsoft CEO Satya Nadella announced a "multiyear, multibillion-dollar investment to accelerate AI breakthroughs" that would be "broadly shared with the world."

In February, Google unveiled Bard, a ChatGPT-style chatbot powered by its large language model, LaMDA.

And last month, StoryKit, a major video software company, announced a "video AI" product that turns product descriptions into full-funnel video marketing campaigns.

Artificial intelligence-powered apps such as ChatGPT may increasingly eat into companies' bottom lines.
ChatGPT has gone from being a fun tool for AI enthusiasts to becoming one of the world's favourite talking points. AFP News

One of the main concerns raised by experts is the lack of regulation surrounding the use of the technology.

The Labour Party has been urging ministers to bar technology developers from working on advanced AI tools unless they have been granted a licence.

Shadow digital secretary Lucy Powell, who is set to speak at TechUK's conference today, said AI should be licensed in a similar way to medicines or nuclear power.

"That is the kind of model we should be thinking about, where you have to have a license in order to build these models," she told The Guardian.

However, some companies are already taking measures to safeguard against the danger AI poses.

Last week, ChatGPT maker OpenAI announced it had found a new approach to help prevent "hallucinations" in artificial intelligence, after its large language model (LLM) produced fabricated legal cases for a New York lawyer.

Models will now be rewarded for each step of the process where they reach a correct conclusion, rather than only for the final answer, researchers at OpenAI said in a recent statement. The company calls the new approach "process supervision", as opposed to its previous approach, "outcome supervision".
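The distinction can be illustrated with a toy sketch, in which a model's solution is treated as a list of reasoning steps and a checker scores each one. The function names and rule-based checker below are hypothetical stand-ins for illustration only; OpenAI's actual reward models are learned from human feedback, not hard-coded.

# Toy contrast between outcome and process supervision.
# All names here are illustrative, not OpenAI's implementation.

def outcome_reward(steps, final_answer_correct):
    """Outcome supervision: a single reward for the whole solution."""
    return 1.0 if final_answer_correct else 0.0

def process_reward(steps, step_is_correct):
    """Process supervision: reward each correct intermediate step,
    so sound reasoning earns credit even if the final answer fails."""
    return sum(1.0 for s in steps if step_is_correct(s)) / len(steps)

# A three-step solution whose second step contains an error.
solution = ["restate the problem", "apply the wrong formula", "report the answer"]
checker = lambda s: "wrong" not in s  # stand-in for a learned step verifier

print(outcome_reward(solution, final_answer_correct=False))  # 0.0
print(process_reward(solution, checker))                     # roughly 0.67

Under outcome supervision the model earns nothing for this attempt; under process supervision it is still rewarded for its two valid steps, which is what encourages correct step-by-step reasoning.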

The news comes as lawmakers across the world continue to work on speeding up plans for artificial intelligence regulation.

Last month, the Biden administration met Sam Altman, CEO of OpenAI, and three other chief executives of companies leading the AI industry.

The meetings were said to have focused on three major concerns surrounding AI: transparency with policymakers and the public, security against malicious attacks, and validation of the safety and efficacy of AI systems.

And Clifford's ominous prediction about AI's potentially fatal impact on humanity will only strengthen the desire of governments around the world to enact legislation limiting the development of such a powerful technology.