Cyber Security
The NCSC say AI systems are subject to novel security vulnerabilities that need to be considered alongside standard cyber security threats. (Image: AFP News/NICOLAS ASFOURI)

The UK today published new guidelines for secure AI system development, in a bid to strengthen the cyber security of AI systems.

The National Cyber Security Centre (NCSC) drew up the Guidelines for Secure AI System Development with help from industry experts and 21 other international agencies and ministries, including the US Cybersecurity and Infrastructure Security Agency (CISA).

A total of 18 countries, including all G7 members, have now endorsed and "co-sealed" the guidelines, which will help developers make informed cyber security decisions as they produce new AI systems.

The new UK-led guidelines are the first of their kind to be agreed upon globally.

They aim to support developers of any systems that use AI to make informed cyber security decisions at every stage of the development process – whether those systems have been created from scratch or built on top of tools and services provided by others.

The guidelines will be officially launched this afternoon at an event hosted by the NCSC, at which 100 key industry, government and international partners will gather for a panel discussion on the shared challenge of securing AI.

Panellists include representatives from Microsoft, the Alan Turing Institute, and the UK, US, Canadian and German cyber security agencies.

NCSC CEO Lindy Cameron described the guidelines as a "significant step" in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.

"We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up," she added.

In a keynote speech at Chatham House in June, Cameron warned about the perils of retrofitting security into AI systems in years to come, stressing the need to bake security into AI systems as they are developed, and not as an afterthought.

The new guidelines are intended as a global, multi-stakeholder effort to address that issue, building on the UK Government's AI Safety Summit's legacy of sustained international cooperation on AI risks.

Last month, Prime Minister Rishi Sunak hosted the world's first AI Safety Summit at Bletchley Park, Buckinghamshire.

In the build-up to the conference, Sunak announced the establishment of a 'world first' UK AI safety institute.

The organisation will aim to "advance the world's knowledge of AI safety".

"It will carefully examine, evaluate and test new types of AI so that we understand what each new model is capable of," Sunak said in a speech at the Royal Society, an association of leading scientists.

At the summit, countries including the UK, the United States and China agreed on the "need for international action to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community".

This was named 'The Bletchley Declaration'. Since then, the discussion surrounding AI security measures has intensified.

Darktrace global head of threat analysis, Toby Lewis, argues that security is a prerequisite for safe and trustworthy AI.

"I'm glad to see the guidelines emphasize the need for AI providers to secure their data and models from attackers, and for AI users to apply the right AI for the right task," he added.

"Those building AI should go further and build trust by taking users on the journey of how their AI reaches its answers. With security and trust, we'll realise the benefits of AI faster and for more people."

Since AI exploded into mainstream discourse last year with the release of ChatGPT, many within the science and technology community have expressed fears over the rapid development of advanced systems.

That concern even prompted hundreds of experts to sign an open letter issued by the Future of Life Institute, stating: "They (advanced AI systems) should be developed only once we are confident that their effects will be positive and their risks will be manageable."

Last week, the Government announced that Britain would become "AI match-fit" after it pledged a £118 million boost to AI skills funding.

The investment aims to "ensure the country has the top global expertise and fosters the next generation of researchers needed to seize the transformational benefits of AI technology".

This includes naming, for the first time, the further 12 Centres for Doctoral Training (CDTs) in AI that will benefit from £117 million in previously announced government backing through UK Research and Innovation (UKRI). A new visa scheme will also make it easier for the most innovative businesses to bring talented early-career AI researchers to the UK.