EU AI Act Implementation Timeline: Key Dates and Compliance Requirements (2025/03/02)
Summary
This collection provides a comprehensive overview of the European Union's Artificial Intelligence Act (EU AI Act) implementation timeline, detailing the phased approach to regulation based on AI risk categories. Published in the Official Journal on July 12, 2024, the Act entered into force on August 1, 2024, with various provisions becoming applicable at different dates over the next several years. The content explains key compliance deadlines, prohibited AI systems, obligations for general-purpose AI models, and requirements for high-risk AI systems, serving as a critical resource for organizations developing or deploying AI technologies to prepare for compliance.
Key Implementation Dates
August 1, 2024: The EU AI Act enters into force.
February 2, 2025: Prohibitions on AI practices deemed to pose unacceptable risk take effect, including:
Systems using subliminal techniques
Systems exploiting vulnerable groups
Biometric categorization systems that infer sensitive attributes
Social scoring systems
Predictive policing that assesses an individual's risk of offending based solely on profiling
Facial recognition databases built through untargeted scraping of facial images
Emotion recognition in workplaces and educational institutions
'Real-time' remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions)
May 2, 2025: Codes of practice for general-purpose AI (GPAI) providers, developed with the facilitation of the AI Office, are due to be ready.
August 2, 2025:
Obligations for GPAI model providers and the Act's governance provisions become applicable
Member States must appoint competent authorities and implement rules on penalties
If codes of practice cannot be finalized or are deemed inadequate, the Commission may adopt common rules for GPAI providers
February 2, 2026: Deadline for the European Commission to provide guidelines on classifying high-risk AI systems, including practical examples of high-risk and non-high-risk use cases.
August 2, 2026:
Act becomes generally applicable
Obligations on high-risk AI systems listed in Annex III come into effect
Member States must have at least one operational AI regulatory sandbox at the national level
August 2, 2027:
Obligations apply to high-risk AI systems in products already subject to third-party conformity assessment under other EU legislation
GPAI models placed on the market before August 2, 2025 must be brought into compliance
December 31, 2030: AI systems that are components of the large-scale IT systems listed in Annex X and were placed on the market before August 2, 2027 must be brought into compliance.
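As a rough illustration of how an organization might track these milestones internally, the following is a minimal Python sketch (not an official tool) that encodes the dates above and reports which obligations already apply on a given day. The names MILESTONES and applicable_milestones are illustrative assumptions, not terms from the Act.

```python
from datetime import date

# Illustrative mapping of the milestones summarized above to their effective dates.
MILESTONES = {
    date(2024, 8, 1): "Act enters into force",
    date(2025, 2, 2): "Prohibitions on unacceptable-risk AI practices apply",
    date(2025, 5, 2): "Codes of practice for GPAI providers due",
    date(2025, 8, 2): "GPAI obligations apply; national authorities and penalty rules in place",
    date(2026, 2, 2): "Commission guidelines on high-risk classification due",
    date(2026, 8, 2): "Act generally applicable; Annex III high-risk obligations; sandboxes operational",
    date(2027, 8, 2): "Obligations for Annex I high-risk products; pre-existing GPAI models must comply",
    date(2030, 12, 31): "Legacy components of Annex X large-scale IT systems must comply",
}

def applicable_milestones(as_of: date) -> list[str]:
    """Return the milestones that have already taken effect on the given date."""
    return [desc for deadline, desc in sorted(MILESTONES.items()) if deadline <= as_of]

if __name__ == "__main__":
    # Example: obligations in force as of March 1, 2026.
    for item in applicable_milestones(date(2026, 3, 1)):
        print(item)
```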
Compliance Implications
Organizations must assess their AI systems to determine the applicable risk categories and prepare for the corresponding compliance requirements. Non-compliance can result in substantial penalties, with fines in the highest tier reaching up to €35 million or 7% of global annual turnover, whichever is higher. High-risk AI systems face the strictest requirements, including pre-market conformity assessments, quality management systems, and post-market monitoring.
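To make the penalty cap concrete, here is a hedged sketch of the arithmetic for the highest tier described above (€35 million or 7% of global annual turnover, whichever is higher). The function name and the example turnover figure are illustrative assumptions, not drawn from the Act.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the highest penalty tier for a given global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Example: a company with EUR 1 billion in global turnover.
# 7% of turnover = EUR 70 million, which exceeds the EUR 35 million floor,
# so the applicable cap is EUR 70 million.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```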
The risk-based approach classifies AI systems based on their potential impact on individual rights, with stricter controls for high-risk applications while providing more streamlined regulation for lower-risk systems. The Act aims to foster innovation by creating a predictable legal environment while building public trust through clear standards for accountability and transparency.