Cover image credit: Dorine Wu
Brussels leads the way in regulating the use of artificial intelligence within the European Union. Having entered into force on 1 August 2024, the AI Act is the world’s first comprehensive legislation aimed at bringing the rapidly developing technology in line with the democratic values and fundamental human rights prized in the bloc. It creates a regulatory framework built on a risk-based approach to categorising AI systems and spells out the obligations that providers and deployers of AI technology in the EU must abide by. The AI Act is expected to work in tandem with the General Data Protection Regulation (2016) as well as the new Product Liability Directive (2024), ensuring that the rollout of nascent AI technology is aligned with existing legislation on personal data privacy and compensation for injured parties.
How much risk is too much risk?
*The AI Act categorises AI systems according to four different levels of risk and spells out the corresponding legal obligations for each level. Image credit: Nemko*
At the heart of the AI Act lies the risk-based approach, which assigns AI systems one of four levels of risk based on their characteristics and behaviour: unacceptable risk, high risk, limited risk and minimal risk. In addition, general-purpose AI (GPAI) models are separately classified as those with and without systemic risk, depending mainly on the cumulative amount of compute used to train the model. Naturally, GPAI models deemed to pose a systemic risk are subject to the more stringent regulation elaborated on below. Finally, AI systems deemed to pose limited risk face only light transparency obligations (such as disclosing to users that they are interacting with an AI), while minimal-risk systems are permitted for use and development in the EU without additional legal obligations.
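To make the taxonomy concrete, here is a minimal illustrative sketch in Python. The four tier names come from the Act itself; the enum, the mapping and the summarised consequences are our own shorthand, not anything defined in the legislation:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Article 5)
    HIGH = "high"                  # permitted, subject to strict obligations
    LIMITED = "limited"            # light transparency obligations only
    MINIMAL = "minimal"            # no additional obligations

# Illustrative mapping from tier to regulatory consequence (our own shorthand).
CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "banned in the EU, narrow law-enforcement carve-outs",
    RiskTier.HIGH: "risk management, documentation, human oversight, CE marking",
    RiskTier.LIMITED: "disclose to users that they are interacting with an AI",
    RiskTier.MINIMAL: "free to develop and deploy",
}

for tier in RiskTier:
    print(f"{tier.value:>12}: {CONSEQUENCES[tier]}")
```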
Article 5 provides a fairly extensive list of AI systems that are prohibited in the EU due to the unacceptable risk they pose. These include social scoring systems, predictive policing programmes and emotion recognition systems. AI systems that scrape facial data to create or expand facial recognition databases are also banned. However, limited exceptions are carved out for law enforcement purposes as well as on grounds of national security. For instance, police forces are permitted to use AI systems that involve biometric identification when the data collected can assist in investigations into serious crimes. Note that these exceptions apply both to AI systems that pose unacceptable risk (otherwise prohibited in the EU) and to those that pose high risk (permitted under specific circumstances).

The new framework also does not apply to EU border and asylum agencies responsible for managing migrant flows, while AI use in existing large-scale immigration databases such as Eurodac (used in the management of European asylum applications) and the Visa Information System (through which Schengen states exchange data on visa applications) is exempted from AI Act requirements until December 2030. An April 2024 report by Amnesty International warned that this broad exemption sets a dangerous precedent for authorities to retain the use of AI-powered surveillance technology and automated risk assessment tools on would-be migrants. Evidently, stronger safeguards must be established to ensure that AI technology does not become an enabler of systemic discrimination against the most vulnerable in our community.
Next come high-risk AI systems: those used as safety components of products covered under existing product safety legislation, and those falling within the use cases listed in Annex III (see below; an illustrative decision rule follows the list). As the original focus of the legislation back when it was first drafted in 2021, the bulk of the AI Act is dedicated to spelling out regulations ensuring the safe deployment of these AI systems in the EU market. Requirements include the mandatory establishment of a risk management system, provision of technical documentation to the relevant authorities and the implementation of human oversight. Like many other products sold in the European Economic Area, high-risk AI systems must also bear the CE marking (and hence comply with its prerequisite obligations).
- Biometrics
- Critical infrastructure
- Education and vocational training
- Employment, workers’ management and access to self-employment
- Access to and enjoyment of essential private services and essential public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
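The two routes into the high-risk tier can be summarised as a simple decision rule. The sketch below is purely illustrative (the function name and simplified area labels are our own, and this is not legal advice):

```python
# Simplified labels for the eight Annex III areas listed above.
ANNEX_III_AREAS = {
    "biometrics",
    "critical infrastructure",
    "education and vocational training",
    "employment and workers' management",
    "essential private and public services",
    "law enforcement",
    "migration, asylum and border control management",
    "administration of justice and democratic processes",
}

def is_high_risk(safety_component_of_regulated_product: bool,
                 use_case_area: str) -> bool:
    """High-risk if the system is a safety component of a product under
    existing EU product safety legislation, or its use case falls within
    one of the Annex III areas."""
    return (safety_component_of_regulated_product
            or use_case_area.lower() in ANNEX_III_AREAS)

print(is_high_risk(False, "law enforcement"))          # True
print(is_high_risk(False, "video game development"))   # False
```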
General-Purpose AI Models (GPAI)
Another section of the AI Act concerns the regulation of GPAI models. While not classified as high-risk AI per se, all GPAI models must abide by a separate set of transparency requirements. These include prominent disclosure of AI-generated content and in-built safeguards that reduce the generation of illegal content. Furthermore, providers must put in place a policy to comply with EU copyright law, including honouring rights holders’ opt-outs from text and data mining, and publish a summary of the content used to train their models (Article 53). GPAI models deemed to pose a systemic risk (presently determined by the cumulative amount of compute used in training, measured in floating-point operations) must undergo additional evaluations and risk assessments before deployment in the EU (Article 55).
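For reference, Article 51 presumes a GPAI model poses systemic risk when its cumulative training compute exceeds 10^25 floating-point operations. A minimal sketch of that threshold test (the function and constant names are our own shorthand):

```python
# Article 51 presumes systemic risk when the cumulative training compute of a
# GPAI model exceeds 10**25 floating-point operations (FLOPs).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

print(presumed_systemic_risk(5e25))  # True: above the threshold
print(presumed_systemic_risk(1e24))  # False: an order of magnitude below
```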
Enforcement and Governance
At present, enforcement of the compliance requirements spelled out in the AI Act is expected to be delegated to one notifying authority and one market surveillance authority in each EU member state, since both pre- and post-market stages of deployment are involved. For instance, existing market surveillance authorities responsible for post-market surveillance may oversee the rollout of AI systems, while new notifying authorities may be established to draw up national regulations relevant to the specific member state. However, the distinction is less clear in reality. In the years preceding the AI Act, the data protection authorities (DPAs) of several member states took up the responsibility of enforcement instead. Originally set up to supervise, enforce and handle complaints relating to the landmark GDPR, DPAs are naturally the best suited to monitor the personal data processing activities involved in deploying AI technology. Unlike the “one-stop shop” system that characterises GDPR enforcement (whereby a single lead supervisory authority handles violations of data privacy law), implementation of the AI Act is decentralised across different authorities, much like existing product safety legislation.
Regardless, an EU-wide AI Office has been established to oversee this complex web of bureaucracy and improve the harmonisation of standards across all EU jurisdictions by developing codes of practice detailing the application of the AI Act in specific instances. It will also be responsible for investigating infringements and complaints in conjunction with national authorities. The maximum fine for Article 5 violations (prohibited AI systems) is the higher of 7% of worldwide annual turnover or EUR 35mn. Smaller penalties may be imposed for violations of other articles, up to the higher of 3% of worldwide annual turnover or EUR 15mn.
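The “whichever is higher” rule means the turnover-based prong dominates for large firms. A minimal sketch of the fine caps for the two violation classes mentioned above (the function name is our own shorthand):

```python
def max_administrative_fine(worldwide_turnover_eur: float,
                            article_5_violation: bool) -> float:
    """Upper bound on fines: the higher of a fixed sum or a share of
    worldwide annual turnover."""
    if article_5_violation:
        return max(35_000_000, 0.07 * worldwide_turnover_eur)  # EUR 35mn or 7%
    return max(15_000_000, 0.03 * worldwide_turnover_eur)      # EUR 15mn or 3%

# For a firm with EUR 2bn in worldwide turnover, the 7% prong (EUR 140mn)
# exceeds the EUR 35mn floor:
print(f"{max_administrative_fine(2e9, article_5_violation=True):,.0f}")  # 140,000,000
```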
In contrast to GDPR Article 82, which codifies the right to compensation, individuals at present have no recourse (claim to compensation) for harm suffered as a result of an infringement of the AI Act. However, this may change in the coming years if the draft AI Liability Directive is adopted into law; the draft directive is expected to set out specific means through which individuals may seek legal recourse to recover damages suffered. In the meantime, the recently adopted Product Liability Directive (December 2024) will also hold manufacturers (including those of AI software) liable for damages suffered under specific circumstances.
AI is expected to play an indispensable role in 2025 as various AI systems mature and expand from limited pilot use cases to full-scale deployments across a variety of sectors. Naturally, regulators will be racing to keep up with the rapid pace of development. Just as the GDPR completely rewrote the rules for data protection, the AI Act is the European Union's opportunity to set the standards once again.
References
Chaudhry, U., Casovan, A., & Jones, J. (2024, November). Top 10 operational impacts of the EU AI Act. IAPP. https://iapp.org/resources/article/top-impacts-eu-ai-act/
EUR-Lex. (2024, June 13). Regulation (EU) 2024/1689 (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ%3AL_202401689#anx_III
European Commission. (n.d.). European AI Office. https://digital-strategy.ec.europa.eu/en/policies/ai-office
European Commission. (n.d.). CE marking. https://single-market-economy.ec.europa.eu/single-market/ce-marking_en
Gaudszun, T., & Shin, J. (2024, October 18). AI Watch: Global regulatory tracker - European Union. White & Case. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-european-union
Liguori, G. (2022, February 16). AI Act: A risk-based policy approach for excellence and trust in AI. Medium. https://medium.com/codex/ai-act-a-risk-based-policy-approach-for-excellence-and-trust-in-ai-d29ce0d54e2
Madiega, T. (2023, February). Artificial intelligence liability directive (EPRS briefing). European Parliament. https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf