Artificial Intelligence
The rapid development of artificial intelligence presents a key challenge to policymakers who must prioritise both competitiveness and safety. DADO RUVIC/Reuters

Dario Amodei, CEO of research company Anthropic, says the chance of AI "destroying the world" could be as high as 25 per cent.

According to the tech boss, it is entirely possible for something to go "catastrophically wrong on the scale of human civilisation".

Amodei is a co-founder of Anthropic and previously worked for OpenAI, the company that developed ChatGPT.

His current research focuses on building reliable, interpretable, and steerable AI systems.

But he told the Logan Bartlett Show tech podcast last week that we must consider "the risk of something going wrong with the model itself alongside something going wrong with people or organisations or nation states misusing the model or it inducing conflict among them".

Since the launch of the natural language processing tool ChatGPT in November 2022 thrust artificial intelligence into mainstream discourse, the technology has caused significant excitement as well as alarm at its potential to harm the human race.

According to the latest available data, ChatGPT currently has over 100 million users, and the website generates one billion visits per month.

But in March, Italy announced it was temporarily blocking ChatGPT over data privacy concerns, the first Western country to take such action against the popular artificial intelligence chatbot.

Fears over the rapid development of artificial intelligence systems even prompted billionaire Tesla and Twitter boss Elon Musk to join hundreds of experts in expressing concern at the advancement of powerful AI systems.

A letter issued by the Future of Life Institute expressed growing concerns over AI's advancement, stating: "They should be developed only once we are confident that their effects will be positive and their risks will be manageable."

However, recent applications of the technology have also led to groundbreaking scientific and medical advances.

For example, researchers have developed an AI system that may help doctors detect cancer.

The use of AI technology in diagnosis and treatment across the NHS is becoming increasingly frequent. Last month, the UK government announced £21 million will be invested in AI developments across the NHS.

The funding, which will be made fully available by the end of 2023, will allow NHS Trusts to accelerate the deployment of the most promising AI tools across hospitals, to help treat patients this coming winter.

The integration of new technology will help diagnose patients more quickly for conditions such as cancers, strokes, and heart conditions, by utilising AI imaging and decision support tools.

Amodei agrees that if used correctly, AI has enormous potential to be a force for good.

"If we can avoid the downsides then this stuff about curing cancer, extending human lifespan, solving problems like mental illness... I don't think it's outside the scope of what this can do."

Amodei added: "I do believe that there is a 75 per cent to 90 per cent chance that this technology is developed and everything goes fine."

However, his comments have faced criticism from some campaign groups.

Control AI, an organisation focused on mitigating the risks of the technology, said that "the same firms that thought there was a chance that their products will kill you all were the ones calling the shots when it comes to regulation."

Struggling at home, the British prime minister will hope his drive for AI governance can leave a lasting impact on the world stage. AFP News

When it comes to regulation, the UK government has, so far, supported a fairly tolerant approach to the use of AI.

In March, it released a White Paper outlining its stance on the technology.

It said that rather than enacting legislation it was preparing to require companies to abide by five "principles" when developing AI. Individual regulators would then be left to develop rules and practices.

However, this position appeared to set the UK at odds with other regulatory regimes, including that of the EU, which set out a more centralised approach, classifying certain types of AI as "high risk".

And at the G7 Summit earlier this year, Sunak signalled his government may take a more cautious approach to the technology, by leading on "guard rails" to limit the dangers of AI.

Next week, Sunak will host the first AI Safety Summit, at Bletchley Park in Buckinghamshire.

Several senior Silicon Valley tech leaders are expected to attend, including the CEOs of leading AI labs OpenAI, Anthropic, and Google DeepMind - Sam Altman, Dario Amodei and Demis Hassabis respectively.

A representative from Musk's xAI start-up is also expected to attend the two-day summit, along with the CEOs of AI companies Stability (Emad Mostaque), Cohere (Aidan Gomez) and Palantir (Alex Karp).

Microsoft and Meta are also expected to send their policy chiefs, Brad Smith and Sir Nick Clegg, to the summit, while Google will be represented by its Head of Technology and Society, James Manyika.

The government says the summit aims to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action.

But one of the executives invited has warned that the conference risks achieving very little, accusing powerful tech companies of attempting to "capture" the landmark meeting.

Connor Leahy, the chief executive of the AI safety research company Conjecture, said he believed heads of government were poised to agree on a style of regulation that would allow companies to continue developing "god-like" AI almost unchecked.

Leahy is one of just 100 people, including foreign government ministers, tech executives and civil society figures, who have been invited to November's summit at Bletchley Park, which Downing Street is hoping will mark a turning point in how advanced AI technology is developed.

In the run-up to the summit, companies have been asked to approve a statement expected to be signed by world leaders at the event.

It is expected to be finalised in the coming days and will include a warning that AI risks causing "catastrophic harm" if it is left unchecked.

Earlier this year, leading figures from the world of artificial intelligence met in Derry to discuss the impact of the technology on education.

Sunak has previously endorsed the use of AI in education, to provide "personalised learning" to children at school.

Speaking at London Tech Week, a seven-day festival dubbed a "global celebration" of tech, Sunak said education was one of the public services he was most excited about AI's potential to transform, adding the technology could "reduce teachers' workloads" by assisting with lesson planning and marking.

Whilst the Prime Minister chose to address the positive impact AI could have on the education sector, schools and universities have struggled with the rapid development of the technology, which is becoming increasingly powerful and more accessible to students.