
Viscount Camrose, Parliamentary Under-Secretary of State at the Department for Science, Innovation and Technology, announced significant developments in the UK's artificial intelligence (AI) policy on Tuesday.

Speaking on behalf of Rt Hon Michelle Donelan MP, the Secretary of State for Science, Innovation and Technology, Viscount Camrose provided an update on the UK's commitment to driving responsible AI innovation while addressing the associated risks.

One of the most significant revelations in the statement was the forthcoming AI Safety Summit, scheduled to take place at Bletchley Park on November 1 and 2, 2023. This landmark summit will bring together leading countries, technology organisations, academia and civil society to address the risks associated with powerful AI systems. The focus will be on risks such as the proliferation of information that could undermine biosecurity.

Additionally, the summit will explore how frontier AI can be safely harnessed for public good, including in fields like medical technology and transportation safety.

The UK government has already initiated formal pre-summit engagement with countries and frontier AI organisations. Five key objectives have been established to frame discussions leading up to the summit. First and foremost, the government aims to foster a shared understanding of the risks associated with frontier AI. Secondly, the UK government is committed to agreeing a process for continued international collaboration on frontier AI safety.

The third objective centres on encouraging individual organisations to adopt appropriate measures that enhance frontier AI safety. Furthermore, the UK government aims to identify areas for potential collaboration in AI safety research. Lastly, the government seeks to showcase how the safe development of AI can lead to global benefits.

Viscount Camrose expressed eagerness to keep Parliament updated as plans for the summit continue to progress, highlighting the government's commitment to fostering responsible AI development and safeguarding against potential risks.

The UK government has taken a proactive approach to address the challenges and opportunities presented by frontier AI models. Earlier this year, it allocated £100 million to establish the Frontier AI Taskforce, the first of its kind globally. Initially named the Foundation Model Taskforce, it has been renamed to explicitly reflect its role in assessing AI's risks and safety.

Since the appointment of Ian Hogarth as Taskforce Chair 12 weeks ago, significant progress has been made. The AI Taskforce has assembled an External Advisory Board comprising distinguished experts such as Turing Award laureate Yoshua Bengio, GCHQ Director Anne Keast-Butler and Deputy National Security Adviser Matt Collins, among others. The board will provide expert guidance for the Taskforce's work.

In addition to the advisory board, the Taskforce has formed partnerships with leading frontier AI organisations and has begun recruiting a world-class research team. Oxford researcher Yarin Gal has been appointed as the Taskforce Research Director and Cambridge researcher David Krueger will collaborate on shaping the research programme. These research efforts will be complemented by a dedicated team of civil servants, further strengthening the UK's AI capabilities and addressing public sector use cases for frontier AI models.

Industry collaboration has been a cornerstone of the UK's approach to AI safety. The Frontier AI Taskforce is partnering with leading AI companies and non-profits to assess the national security implications and societal risks associated with AI systems. These partnerships will provide crucial insights into AI's potential risks and benefits.

The UK government has also been proactive in establishing a regulatory framework for AI. In March, it published the AI Regulation White Paper, outlining principles to govern AI and mechanisms to monitor and adapt the regulatory framework as technology evolves. Over 400 responses from regulators, industry, academia, and civil society were received and the government is set to publish its response later this year, factoring in outcomes from the AI Safety Summit.

Furthermore, a central AI risk function has been established within the Department for Science, Innovation and Technology, which will collaborate with government, industry and academic experts to identify, measure, and monitor existing and emerging AI risks. The government emphasises its commitment to an iterative approach to AI regulation, aiming to address new risks and regulatory gaps as they emerge.

Several UK regulators have already taken proactive steps in line with the proposed AI framework. These include the Competition and Markets Authority, the Medicines and Healthcare products Regulatory Agency and the Office for Nuclear Regulation, which are pioneering innovative approaches to ensure AI safety and effectiveness in their respective domains.

To improve coordination and clarity across the regulatory landscape, the UK government is collaborating with the Digital Regulation Cooperation Forum (DRCF) to pilot a multi-regulator advisory service for AI and digital innovators known as the DRCF AI and Digital Hub.

This initiative will offer tailored support to innovators navigating the AI regulatory landscape and contribute valuable insights to enhance the AI regulatory framework.