The AI Act, aimed at bolstering Europe’s leadership in AI while safeguarding its values and rules, introduces a classification system to assess AI’s potential risks. It categorizes AI systems into four tiers: unacceptable, high, limited, and minimal risk. Minimal-risk systems, such as spam filters, face only light requirements, while unacceptable-risk systems, such as invasive biometric surveillance, are banned outright, with few exceptions.

High-risk AI, including autonomous vehicles and medical devices, is permitted under strict requirements, including rigorous testing, documentation of training data, and human oversight. The legislation also addresses general-purpose AI, such as large language models.

The AI Act proposes substantial penalties for non-compliance, including fines of up to €30 million or 6% of global annual turnover, whichever is higher. It also establishes a European AI Board to oversee implementation and ensure uniform application across the EU.
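The "whichever is higher" penalty ceiling can be illustrated with a short sketch. This is a simplified model of the fine cap in the Commission's 2021 proposal, not legal guidance; the function name and the single-turnover input are assumptions for illustration.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Illustrative sketch (not legal advice): the proposed ceiling is
    EUR 30 million or 6% of global annual turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

# A company with EUR 1 billion in turnover: 6% (EUR 60 million) exceeds
# the EUR 30 million floor, so the higher figure applies.
print(max_fine_eur(1_000_000_000))  # 60000000.0

# A smaller company (EUR 100 million turnover): 6% is only EUR 6 million,
# so the EUR 30 million floor applies instead.
print(max_fine_eur(100_000_000))  # 30000000.0
```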

This legislation, introduced by the European Commission in April 2021, aims to set global AI norms, emphasizing trust, safety, and responsible AI development. While the Council adopted a general approach in late 2022, negotiations between the member states and the European Parliament are ongoing. The Act reflects the EU’s commitment to harnessing AI’s potential while protecting its citizens and values.
