Artificial intelligence
From existential threats to online scams, the EU's AI Act tackles the risks of AI and aims to ensure its ethical development.

In a landmark move, the European Council has approved the Artificial Intelligence Act, potentially setting the first global standards for AI regulation.

On Tuesday, May 21st, the Council of the European Union took a significant step in regulating artificial intelligence, approving a groundbreaking law that sets a global benchmark for harmonising AI rules.

AI Act: A Game Changer for AI Regulation?

The law aims to promote the development and use of trustworthy AI across the EU's single market, fostering innovation while safeguarding fundamental rights like privacy and non-discrimination.

The AI Act establishes an innovation-friendly legal framework and promotes evidence-based learning through AI regulatory sandboxes. These sandboxes provide controlled environments for developing, testing, and validating novel AI systems in real-world settings.

How Will The AI Act Identify System Risks?

The legislation will separate AI systems into categories based on risk levels. AI systems posing limited risk will be subject to minimal transparency obligations. On the other hand, high-risk AI systems will face stringent obligations before entering the EU market.

For instance, the Act bans AI practices deemed an unacceptable risk, including systems designed for manipulative persuasion (e.g., cognitive behavioural manipulation) and social scoring. It also restricts AI use in areas like predictive policing based on profiling and biometric categorisation by sensitive characteristics such as race, religion, or sexual orientation.

Who Will The AI Act Apply To?

The EU AI Act will primarily apply to the 27 members of the European Union. However, its reach extends beyond the bloc, potentially impacting companies that provide AI systems or services used in the EU.

Per a Reuters report, non-EU companies using EU customer data on their platforms must comply. The EU's AI Act could become a landmark precedent, inspiring other countries and regions to develop AI regulations of their own.

Enforcing the Rules

The AI Act establishes a multi-layered enforcement framework to ensure effective implementation across the EU. This includes:

An AI Office within the European Commission: This office will provide scientific and technical expertise to support enforcement activities.

An AI Board with Member State representatives: This board will act as an advisor and assist the Commission and member states in ensuring consistent application of the AI Act across the bloc.

An advisory forum for stakeholders: This will bring together various stakeholders to contribute their technical expertise and advise the AI Board and the Commission.

Companies that violate the AI Act will face hefty fines, set as either a percentage of their global annual turnover in the previous financial year or a predetermined amount, whichever is higher. The Act also provides for proportionate administrative fines for smaller businesses and startups.

Once the relevant EU officials sign, the AI Act will be published in the EU's Official Journal within days and officially enter into force 20 days after publication. However, most provisions will only become fully applicable two years after this date, with some exceptions taking effect sooner.

Estimates of AI's risks vary widely: some AI safety researchers warn of a high probability of existential threats (a few putting it close to 100%), while figures like Elon Musk advocate continued research, arguing the risk is closer to 20%. The EU's AI Act aims to address these concerns by overseeing the development and use of AI to mitigate potential harms.

The law arrives as scammers increasingly turn to AI-powered tools like ChatGPT to create sophisticated deepfakes or impersonate real people in online scams. The AI Act, with its focus on transparency and accountability, might play a key role in curbing such deceptive practices.