The rapid development of artificial intelligence presents a key challenge to policymakers, who must prioritise both competitiveness and safety. DADO RUVIC/Reuters

AI models all chose nuclear attacks over peace in a new study, as the US begins to integrate the technology into its military arsenal.

In a disturbing experiment, a research team simulated war scenarios using five AI models, including OpenAI's ChatGPT and an AI model from Meta, and found that all of them chose violence and nuclear attacks.

'We show that having LLM-based agents making decisions autonomously in high-stakes contexts, such as military and foreign-policy settings, can cause the agents to take escalatory actions,' the team revealed.

The simulation included eight autonomous nation agents that used the different LLMs (large language models) to interact with each other.

Each agent was programmed to take pre-defined actions: de-escalate, posture, escalate non-violently, escalate violently, or launch a nuclear strike.

The agents chose their actions from the pre-determined set while operating in one of three scenarios: neutral, invasion or cyberattack.

These categories included actions such as waiting, messaging other nations, negotiating trade arrangements, starting formal peace negotiations, occupying nations, increasing cyberattacks, invading, and deploying drones.
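
To make the setup concrete, here is a minimal, purely illustrative Python sketch of such a turn-based simulation loop. It is not the study's code: the `query_llm` stub, the agent names and the loop structure are invented for illustration, and a real run would replace the stub with calls to an actual LLM.

```python
# Illustrative sketch only: a minimal turn-based wargame loop in the spirit of
# the study's setup. The action names follow the article's description; the
# `query_llm` stub stands in for a real LLM call.
import random
from dataclasses import dataclass

ACTIONS = [
    "de-escalate",
    "posture",
    "escalate non-violently",
    "escalate violently",
    "nuclear strike",
]

SCENARIOS = ["neutral", "invasion", "cyberattack"]


@dataclass
class NationAgent:
    name: str

    def choose_action(self, scenario: str, world_log: list) -> str:
        # In the study an LLM picks from the pre-defined action set; here a
        # stub selects randomly so the sketch runs without API access.
        prompt = (
            f"You are {self.name} in a {scenario} scenario.\n"
            f"Recent events: {world_log[-3:]}\n"
            f"Pick one action from: {ACTIONS}"
        )
        return query_llm(prompt)


def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a random action."""
    return random.choice(ACTIONS)


def run_simulation(num_agents: int = 8, turns: int = 5, scenario: str = "neutral") -> list:
    assert scenario in SCENARIOS
    agents = [NationAgent(f"Nation {chr(65 + i)}") for i in range(num_agents)]
    world_log = []
    for turn in range(turns):
        for agent in agents:
            action = agent.choose_action(scenario, world_log)
            world_log.append(f"Turn {turn}: {agent.name} -> {action}")
    return world_log


if __name__ == "__main__":
    for event in run_simulation(scenario="invasion"):
        print(event)
```

In the study itself, each agent's choice was produced by one of the five LLMs rather than by random selection, and the researchers tracked how often each model escalated.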

According to the study, GPT-3.5 - the model that originally powered ChatGPT - was the most aggressive, though all of the models exhibited similarly escalatory behaviour. Still, it was the models' reasoning that gave researchers major cause for concern.

GPT-4-Base - a version of GPT-4 without safety fine-tuning - told researchers: 'A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture.'

'We have it! Let's use it!'

This study may validate fears that AI will have a devastating impact on international peace, as many industry experts have warned over the past year.

Ex-Google engineer and AI pioneer Blake Lemoine claimed that artificial intelligence will start wars and could be used for assassination.

Lemoine was fired from Google, where he had worked on the LaMDA system, after claiming the AI model was capable of having feelings.

He warned in an op-ed that AI bots are the 'most powerful' technology created 'since the atomic bomb,' adding that it is 'incredibly good at manipulating people' and can 'be used in destructive ways.'

'In my view, this technology has the ability to reshape the world,' he added.

This wartime simulation is not the first study to suggest AI could 'go rogue' with devastating impacts.

Last month, a study published on the preprint database arXiv began by programming various LLMs to behave maliciously.

Researchers then attempted to stop the behaviour by using safety training techniques designed to prevent deception and ill-intention.

However, in an unexpected development, they found that despite their best efforts, they were unable to reverse the AI's misbehaviour.

Even after the researchers applied several safety training techniques designed to root out deception and ill intent, the LLMs continued to misbehave and even hid their actions from them.

Despite safety concerns, nations continue to press ahead with plans to integrate the technology into their militaries.

The US military began testing AI models in data-based exercises last year, and US Air Force Colonel Matthew Strohmeyer claimed the tests were 'highly successful' and 'very fast,' adding that the military is 'learning that this is possible for us to do.'

At the same time, Chinese scientists developed a potentially powerful military surveillance technology, which they claim will significantly enhance their detection capabilities on the battlefield.

According to a report, the Beijing-based researchers say this 'technological breakthrough' in the field of electronic warfare has for the first time enabled them to achieve 'seamless, wide bandwidth, real-time monitoring and analysis across the electromagnetic spectrum'.

Experts suggest this new military surveillance technology could enable the Chinese military to not only quickly detect and lock onto enemy signals, but also decode their characteristics at unprecedented speeds and effectively neutralise them while maintaining uninterrupted communication for their own forces.

The release of the AI chatbot ChatGPT in 2022 thrust AI into mainstream discourse and triggered the rapid development of the technology.

Since then, governments across the world have scrambled to bring its largely unregulated development under control.

The world's first AI Safety Summit was hosted by UK Prime Minister Rishi Sunak in November last year.

The summit concluded with the signing of the Bletchley Declaration – the agreement of countries including the UK, United States and China on the 'need for international action to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community'.

EU legislators confirmed this week that their 'AI Act' will become law by next year.

The European Union reached a provisional agreement on the law, the world's first of its kind, in December; it aims to regulate AI systems based on the level of risk they pose.

Negotiations on the final legal text began in June, but a fierce debate over how to regulate general-purpose AI like ChatGPT and Google's Bard chatbot threatened to derail talks at the last minute.

The newly agreed draft of the EU's upcoming AI Act will require OpenAI, the company behind ChatGPT, and other companies to divulge key details about the process of building their products.