Artificial Intelligence
Security Must Take Centre Stage in AI Development, Urges Head of UK's Cyber Security Agency

The head of the UK's National Cyber Security Centre (NCSC), Lindy Cameron, has emphasised the importance of prioritising security in the development of artificial intelligence (AI) systems.

Speaking at the Chatham House Cyber 2023 conference, Cameron highlighted the need for security to be integrated into AI systems from the outset, rather than being an afterthought. She also stressed the responsibility of developers to protect individuals, businesses, and the wider economy from potential vulnerabilities in AI systems.

Cameron's comments underscore the growing recognition of the risks inherent in AI and the need for a proactive approach to security. She argued against retrofitting security measures onto AI technology after deployment and against shifting the burden of risk onto individual users.

Instead, she advocated for a "secure by design" approach, where vendors take greater responsibility for incorporating cybersecurity into their technologies and supply chains from the beginning. This approach not only enables society and organisations to reap the benefits of AI advancements but also helps build trust in the safety and security of AI systems.

The UK, being a global leader in AI, understands the significance of securing AI advancements and has taken steps to address the issue. With an AI sector contributing £3.7 billion to the economy and employing 50,000 people, the UK is hosting the first-ever summit on global AI safety later this year. The summit aims to facilitate targeted international collaboration and action to establish the necessary guardrails for the safe and responsible development of AI.

Cameron emphasised three key themes that the National Cyber Security Centre is focused on to secure advancements in AI. The first theme involves supporting organisations in understanding the associated threats and implementing effective mitigation strategies. She highlighted the novel risks posed by AI technologies, such as adversarial attacks in machine learning.

Adversaries can manipulate the training data of machine learning systems, leading to unintended behaviour that can be exploited. Additionally, large language models (LLMs) introduce unique challenges, as organisations' intellectual property and sensitive data can be compromised if staff members unknowingly disclose confidential information through LLM prompts.

The second theme Cameron discussed was maximising the benefits of AI for the cyber defence community. By leveraging AI's capabilities, cybersecurity professionals can enhance their defence strategies and stay ahead of evolving threats. Finally, she stressed the importance of understanding how adversaries, including hostile states and cybercriminals, are utilising AI and finding ways to disrupt their activities.

Adversaries are increasingly employing AI to augment their existing tactics, and LLMs in particular lower the barriers to entry for certain types of attack. For example, attackers with limited proficiency in a target language can use LLMs to craft convincing spear-phishing emails.

Cameron's speech underlines the need for a comprehensive and proactive approach to AI security. By prioritising security from the inception of AI systems, developers can minimise vulnerabilities and safeguard users, businesses, and the broader economy. The integration of robust security measures into AI technology will not only mitigate risks but also enhance public trust in the safety and reliability of AI systems.

In a rapidly advancing AI landscape, governments, organisations and developers across the globe are being urged to adopt the "secure by design" approach, endorsed by Cameron, to effectively address the challenges posed by this technology's widespread integration. By fostering collaboration, knowledge sharing and the implementation of stringent security measures, stakeholders aim to unlock AI's transformative power while mitigating potential risks.

Set to take place later this year, the UK's upcoming summit on global AI safety is poised to play a pivotal role in driving international cooperation and establishing essential frameworks to navigate the evolving AI landscape with the utmost security and responsibility.