The tentative political agreement, dubbed the Artificial Intelligence Act, marks a crucial step forward. Image: Wikimedia Commons

In a historic development, negotiators from the European Union (EU) have reached a groundbreaking deal on the world's first comprehensive artificial intelligence (AI) rules. The agreement represents a significant step toward establishing legal oversight for the technology, including popular generative AI services such as OpenAI's ChatGPT.

After overcoming substantial differences over generative AI and the use of facial recognition by law enforcement, negotiators signed a tentative political agreement for the Artificial Intelligence Act.

The announcement was met with enthusiasm from European Commissioner Thierry Breton, who tweeted: "Deal!". The European Parliament and the bloc's 27 member countries also celebrated the agreement, with the parliamentary committee co-leading the negotiations tweeting: "The European Parliament and member states have finally reached a political agreement on the Artificial Intelligence Act!"

Despite the celebratory tone, officials provided limited details on the specific provisions that will be included in the eventual law. The agreed-upon regulations are not set to take effect until 2025 at the earliest, allowing time for further refinement and scrutiny.

The EU has been a trailblazer in the global effort to establish AI regulations, unveiling the first draft of its rulebook in 2021. However, the rapid expansion of generative AI technologies prompted European officials to revisit and update their proposal, aiming to create a comprehensive blueprint that could influence global standards.

Generative AI systems like OpenAI's ChatGPT have gained widespread popularity for their ability to generate text, photos and songs. However, this surge in innovation has also raised concerns regarding potential impacts on employment, privacy issues and the protection of intellectual property rights.

While the US, UK, China and global entities like the G7 have introduced their own proposals to regulate AI, Europe has maintained a lead in the development of comprehensive guidelines. Once the final version of the EU's AI Act is determined, it must receive approval from the bloc's 705 lawmakers before the EU-wide elections next year. The approval process is anticipated to be a formality, given the importance and urgency of establishing a robust regulatory framework for AI.

Originally designed to regulate specific AI functions according to the level of risk they pose, the AI Act came under pressure from lawmakers to expand its scope. This led to the inclusion of foundation models, the advanced systems that underpin widely used general-purpose AI services such as ChatGPT and Google's Bard chatbot.

One of the most contentious issues during negotiations centred on AI-powered facial recognition surveillance systems. After intense deliberations, negotiators found a compromise solution.

European politicians, driven by privacy concerns, initially sought a complete ban on the public use of facial scanning and other "remote biometric identification" systems. In contrast, governments of member countries argued for exemptions, enabling law enforcement to use these technologies in addressing serious crimes like child sexual exploitation and terrorist attacks.

The European Union has been in a race to ratify the world's first comprehensive AI legislation, a matter that has gained urgency since ChatGPT rose to prominence last year. The AI breakthrough captured attention with its remarkable ability to generate poetry and essays in mere seconds from basic user prompts.

Advocates of AI contend that the technology holds substantial potential to revolutionise domains ranging from employment to healthcare. However, there are also apprehensions about the societal risks it poses, with some fearing it could introduce unprecedented disruption to the world.

Brussels remains steadfast in its pursuit to subject major tech entities to stringent legal frameworks, aimed at safeguarding the rights of EU citizens, particularly concerning privacy and data protection.

The European Commission initially proposed an AI law in 2021, designed to regulate systems based on the level of risk they pose to citizens' rights or health. Negotiations for the final legal text commenced in June.

However, a heated debate in recent weeks over the regulation of general-purpose AI, such as ChatGPT and Google's Bard chatbot, briefly threatened to derail the talks.