European Commission: EU AI Act
webCredibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: European Union
The EU AI Act (entered into force in 2024, with obligations phasing in through 2027) is the primary binding AI regulation globally and a key reference for AI governance discussions; wiki users interested in policy, deployment standards, or international coordination should treat this as a foundational regulatory document.
Metadata
Summary
The EU AI Act is the world's first comprehensive legal framework for regulating artificial intelligence, classifying AI systems into risk tiers (unacceptable, high, limited, minimal) with corresponding obligations. It imposes strict requirements on high-risk AI applications including transparency, human oversight, and conformity assessments to protect fundamental rights and safety. The Act represents a landmark attempt at binding AI governance at a supranational level.
Key Points
- Establishes a risk-based classification system: unacceptable risk (banned), high-risk, limited risk, and minimal risk AI applications
- Prohibits certain AI uses outright, including social scoring and real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions)
- Requires high-risk AI systems to meet mandatory requirements: data governance, documentation, transparency, human oversight, and robustness
- Introduces obligations for providers of general-purpose AI models (GPAIs), including foundation models, with stricter rules for high-capability systems
- Creates enforcement mechanisms including national competent authorities, an EU AI Office, and significant fines for non-compliance
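As a rough mental model only (not part of the Act's text), the tier-to-obligations structure in the key points above can be sketched as a simple lookup. The tier names come from the Act; the obligation keywords are paraphrased from this summary, and the helper function is purely illustrative:

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # mandatory requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations


# Illustrative mapping only -- the Act's actual obligations are far more
# detailed; these keywords are paraphrased from the summary above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["data governance", "documentation", "transparency",
                    "human oversight", "robustness"],
    RiskTier.LIMITED: ["transparency"],
    RiskTier.MINIMAL: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the paraphrased obligation keywords for a risk tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is simply that obligations scale with tier: everything for the banned tier, a substantive checklist for high-risk, transparency only for limited risk, and nothing specific for minimal risk.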
Review
Cited by 5 pages
| Page | Type | Quality |
|---|---|---|
| Should We Pause AI Development? | Crux | 47.0 |
| International AI Coordination Game Model | Analysis | 59.0 |
| AI Policy Effectiveness | Analysis | 64.0 |
| Model Registries | Concept | 68.0 |
| AI-Driven Institutional Decision Capture | Risk | 73.0 |
Cached Content Preview
AI Act | Shaping Europe’s digital future
The AI Act is the first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.
The AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) is the first-ever comprehensive legal framework on AI worldwide. The aim of the rules is to foster trustworthy AI in Europe. For any questions on the AI Act, check out the AI Act Single Information platform.
The AI Act sets out risk-based rules for AI developers and deployers regarding specific uses of AI. The AI Act is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Continent Action Plan, the AI Innovation Package and the launch of AI Factories. Together, these measures guarantee safety, fundamental rights and human-centric AI, and strengthen uptake, investment and innovation in AI across the EU.
To facilitate the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative that seeks to support future implementation, engage with stakeholders, and invite AI providers and deployers from Europe and beyond to comply with the key obligations of the AI Act ahead of time. In parallel, the AI Act Service Desk also provides information and support for a smooth and effective implementation of the AI Act across the EU.
Why do we need rules on AI?
The AI Act ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes.
For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.
Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring.
A Risk-based Approach
The AI Act defines 4 levels of risk for AI systems:
Unacceptable risk
All AI systems considered a clear threat to the safety, livelihoods and rights of people are banned. The AI Act prohibits eight practices, namely:
harmful AI-based manipulation and deception
harmful AI-based exploitation of vulnerabilities
social scoring
individual criminal offence risk assessment or prediction
untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
emotion recognition in workplaces and education institutions
biometric categorisation to deduce certain protected characteristics
real-time remote biometric identification
... (truncated, 16 KB total)