The capabilities of ChatGPT indicate the potential of AI development going forward. Dado Ruvic/Reuters

Back in November 2022, OpenAI released ChatGPT, a chatbot built on its foundation large language models (initially GPT-3.5 and later GPT-4). Trained on vast amounts of text data, ChatGPT can produce intelligent, human-like responses to carefully worded questions, and its launch has provoked competition from other AI developers, who are racing to catch up.

While there is agreement that this is a genuine breakthrough, there is disagreement about whether it will benefit or destroy human civilisation. As Elon Musk put it, "AI will be the best or worst thing ever for humanity."

Whether AI turns out to be a blessing or a curse for humanity is contingent on how it is developed, used and managed. This leads to the issue of AI regulation, which this article examines through three questions. Firstly, why should AI development be regulated? Secondly, how should AI be regulated, and what is the British government's approach? Thirdly, why is the current international climate dangerous for AI development?

An open letter signed by Elon Musk and over 27,000 others proposes that "AI labs... immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". The letter argues that the level of planning and management necessary to ensure AI develops safely is "not happening".

The letter references the Asilomar AI Principles, which state that "advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources". The principles call for research into how legal systems can keep pace with AI development, and into the ethical values that should underpin it.

These are key questions that AI regulators must address if they are to remain relevant in an unfolding story that could define the future success or failure of human civilisation.

AI: a different kind of risk?

Firstly, why regulate the development of AI? To answer this question, we have to understand what makes AI unique as a security risk.

Consider the following three security risks to the future survival and prosperity of human civilisation.

Firstly, the possibility of conventional war. With tensions between the US and China rising over Taiwan, and with the war in Ukraine as an obvious example, conventional war between great powers remains a possibility despite the horrific humanitarian losses of the 20th century.

Secondly, the possibility of nuclear war. Any possibility of conventional war between great powers carries the risk of escalation into nuclear conflict. Indeed, in principle, nuclear war could arise directly from a failure of international diplomacy between rival powers.

Thirdly, the consequences of climate change. According to the UN, "the impacts of climate change are global in scope and unprecedented in scale," with 3.3 to 3.6 billion people living in situations that are "highly vulnerable" to its consequences. The consequences of human-induced climate change already include extremes such as "heatwaves, heavy precipitation, droughts, and tropical cyclones". Reducing greenhouse gas emissions therefore demands "major transitions" in the energy sector, challenging the will and competence of governments and nations to adopt rapid societal and economic change.

However, despite the urgency and significance of these three security risks, the challenges presented by the current and future development of AI are arguably greater.

AI is a different kind of security risk, because the problem it presents centres on managing the consequences of developing and using non-human forms of intelligence. And not just a static form of non-human intelligence, but forms that are liable to constant, potentially exponential, improvement.

In contrast, whilst the problems of conventional war, nuclear war and climate change are profound, they currently belong to the domain of human intelligence and agency. To deal with the risks each presents, we do not have to grapple with something that challenges the supremacy of human intelligence, something greater than human agency altogether.

I say "currently", because if AI becomes sufficiently advanced and integrated into human civilisation, it may start to merge into the security risk of war. For example, by making conflict more unpredictable and devastating with great powers integrating AI technology into their military capabilities.

In the near or distant future, AI could far surpass human intelligence, perhaps even with a form of consciousness and a will of its own. If so, it is difficult to predict what the future development of AI means for human civilisation, with potential scenarios ranging from the extremely good to the extremely bad.

In his book "Superintelligence: Paths, Dangers, Strategies", Nick Bostrom, Professor at the University of Oxford and director of the Future of Humanity Institute, refers to the prospect of an "intelligence explosion" and "machine superintelligence" (p.4). On the prospect of superintelligence which could swiftly follow the reproduction human level intelligence in machines, Bolstrom states that "an extremely good or an extremely bad outcome to be somewhat more likely than more balanced outcome," with the bad including the possibility of human extinction (p.25).

According to Business Insider, Bostrom has argued that AI poses a greater risk to human civilisation than climate change does. Because an AI with particular goals could be indifferent to human interests, ever more intelligent AI systems might pursue those goals without considering the implications for humans.

Crucially, referring to the possibility of malevolent forms of superintelligent AI in an interview on Piers Morgan Uncensored, Bostrom says "it might not be possible to put that genie back in the bottle." He therefore argues that we need to get AI development right the first time, aligning it with human interests.

It is intuitive, then, that the development of AI should be proactively managed and regulated with safety and security in mind. If we wait for an AI crisis to justify regulating its development, it could be too late. We need to make sure an AI crisis does not occur in the first place.

What kind of AI regulation?

This raises the question of how. What is the best way to regulate AI? Should governments create national regulatory agencies that coordinate principles and rules centrally? Or should the focus be on businesses regulating themselves, given their more intimate knowledge of AI development, which civil servants and politicians may be slow to understand? How flexible should AI regulation be? Should there be binding rules, perhaps entrenched in law? If so, what rules? And what penalties should individuals and businesses face for breaking them?

The British government has already outlined its approach to AI regulation, giving us clues as to how it interprets such questions. On 29 March 2023 it published a policy paper setting out plans for a "pro-innovation approach to AI regulation". This policy paper has since been followed by the announcement of an expert taskforce to oversee the development of British AI, ensuring both safety and technological progress.

The paper highlights the need for Britain to speedily develop a "pragmatic, proportionate regulatory approach" to stay ahead internationally on AI. It states the need for a "pro-innovation regulatory environment" that will attract AI companies to do business in Britain. However, the government also identify the risks AI poses to our physical and mental health, our privacy and our human rights.

So what does the British "pro-innovation approach" entail?

Firstly, the regulatory approach will be adaptive. Because AI is evolving rapidly, the government specify that regulation should be "agile and iterative". The risk is that any given set of rules will remain relevant only for a short period before new technology demands new regulation. The policy paper therefore explains that the framework "is designed to build the evidence base so that we can learn from experience and continuously adapt to develop the best possible regulatory regime."

Secondly, the government specify five principles intended to "guide and inform" the responsible development and use of AI in the British economy. These are:

  • "Safety, security and robustness"
  • "Appropriate transparency and explainability"
  • "Fairness"
  • "Accountability and governance"
  • "Contestability and redress"

However, referring to the potential for "onerous legislative requirements" to constrain business innovation and government responsiveness to AI development, the policy paper explains that these principles are to be applied on a "non-statutory basis". This corresponds to the need to be adaptive: if regulation entrenched in statute law becomes outdated, it may take too long to change it.

Furthermore, whilst the government recognise the need for central coordination in the policy paper, there will be no "new AI regulator", that is, no regulatory agency specifically tasked with overseeing AI development.

The policy paper also highlights the need to work with international partners. Through "international alignment", British businesses can work across borders; inconsistent regulation between nation-states would make that harder. The paper therefore states the need to ensure "international compatibility" between the regulatory approaches of different countries.

AI regulation and international competition

The issue of international approaches to AI regulation raises the question of international competition. If great powers like the US and China are locked in a battle for dominance within the anarchy of the international system, is the current international environment conducive to the kind of caution we might need to develop AI safely?

The Asilomar AI Principles differentiate between "undirected intelligence" and "beneficial intelligence". Clearly, with AI we should be aiming for the latter. However, AI that is beneficial to one actor is not necessarily beneficial to another. In a world where nation-states are pitted against each other in geopolitical competition, the challenge of generating universally beneficial forms of AI seems profound.

In a world where competition between nation-states and great powers is dominated by economic growth and geo-economics, failing to employ AI risks being left behind. There is therefore arguably a natural incentive for governments to push AI development forward, rather than erring on the side of hesitancy and caution through more restrictive regulatory frameworks.

If nation-states neglect to exploit the potential of AI, their economic power could decline in relative terms as others make GDP gains through AI adoption. The relative power and material security of more AI-cautious nation-states could therefore be put at risk if they constrain technological innovation more extensively than their rivals do.