EU AI Act High-Risk Deadline: What AI Companies Must Do Before August 2, 2026
The EU AI Act's high-risk AI system requirements take effect August 2, 2026. Here's what companies need to know about compliance obligations, penalties, and how to prepare.
Most founders I talk to don't even know this applies to them.

The August 2 Deadline Is the One That Matters

The EU AI Act became law August 1, 2024, with staggered deadlines. Banned practices (social scoring, subliminal manipulation) kicked in February 2025. Nobody noticed, because almost no company was doing those things. General-purpose AI rules hit August 2025 and only affected foundation model providers.

August 2, 2026 is different. That's when the high-risk requirements take effect, and "high risk" covers far more companies than people think.

The Employment Category Will Catch Most of You

Annex III lists eight categories that make an AI system high risk: employment, education, essential services (credit, insurance), law enforcement, migration, biometrics, critical infrastructure, and administration of justice and democratic processes.

Employment is the big one. If your AI screens resumes, ranks candidates, monitors productivity, or influences who gets promoted or fired, it's high risk. Period. It doesn't matter if a human makes the "final decision." If the AI narrows 200 applicants to 10, it's making the decision.

And the Act is extraterritorial. It doesn't matter where you're headquartered. One EU customer. One EU job applicant.