Cyber hackers
Recent developments in AI have led to concerns about the best way to prevent harm (Kacper Pempel/Illustration)

Kroll, a financial and security risk assessment firm, has called on tech companies to place increased importance on cyber protection. The comments come in response to new UK government proposals on the best way to regulate AI.

The government white paper announced a "pro-innovation framework" designed to make "responsible" innovation easier and support the UK as a global leader in AI technology.

Instead of creating a new regulator, the government has called on existing regulators to take responsibility for the use of artificial intelligence in their sectors. Regulators will be under no legal obligation to act.

Keith Wojcieszek, Global Head of Threat Intelligence at Kroll, said responsibility lay with companies to take a more proactive stance on how they manage the risks posed by AI.

"Businesses need to be baking in cyber protection from the start, not as a regulatory box-ticking exercise," he said.

AI tools can be used to conduct highly targeted cyberattacks. It was recently demonstrated that ChatGPT was capable of creating a "convincing" phishing campaign designed to trick unsuspecting people into parting with their money.

With an updated version of the chatbot rumoured to be on the way, cybersecurity risks are set to increase.

Wojcieszek is concerned that the rapid development of AI, where algorithmic learning allows processes to evolve without human supervision, may outpace regulators' efforts to control it.

"As more AI tools and open-source versions emerge, hackers will likely be able to bypass the controls added to the systems," Wojcieszek said. He added: "They may even be able to use AI tools to beat the controls over the AI system they want to abuse."

In general, there are worries that companies are prioritising the quickest possible development of artificial intelligence without considering cybersecurity and the wider potential for harm. Wojcieszek described it as an "all-out AI arms race."

Microsoft is set to invest $10 billion in OpenAI, the creator of ChatGPT, while Google has responded by speeding up efforts to release its own search-orientated chatbot, Bard.

Such developments prompted global tech leaders, including Elon Musk, to call for a halt to "giant AI experiments." They believe governments should be prepared to enforce a six-month moratorium on the technology's development while research is done to make systems safer and more trustworthy.

The UK's response to AI regulation has already been criticised for failing to address the risks. Simon Elliott, partner at law firm Dentons, told the BBC the government's "light-touch" approach makes the UK "an outlier" compared with elsewhere in the world.

The EU has proposed a dedicated act to cover AI regulation, whilst last month the US Chamber of Commerce bucked its generally anti-regulatory stance to call for more regulation.

The UK proposals are designed to strengthen the UK's position as a global leader in AI and harness the technology's ability to drive growth and prosperity. The government set out five key principles:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

Under the proposals, AI systems will be required to operate securely and safely, with risks assessed on a continuous basis. Information about how, when and for what purpose a system is being used should be communicated to the relevant people in a way that is easy to understand.

Furthermore, the UK is prioritising that AI technology should not undermine the legal rights of individuals or organisations, discriminate against individuals, or create unfair market outcomes.

There is broad agreement that measures should be put in place to ensure effective oversight of AI systems, with clear lines of accountability established. It should also be possible for service users and other affected parties to contest and change AI decisions and outcomes.