
The EU AI Act Is Here: What Developers Need to Know

regulation · ai · opinion

The EU AI Act entered into force in August 2024, making it the world's first comprehensive AI regulation. I've been reading through it, not as a lawyer, but as an engineer who builds computer vision and machine learning systems. And my main takeaway is this: if you're building AI, you need to understand this law. Not eventually. Now.

The Risk-Based Framework

The Act classifies AI systems into four risk tiers, and your obligations scale accordingly.

Unacceptable risk (banned). Social scoring by governments, real-time biometric surveillance in public spaces (with narrow exceptions), and manipulative AI that exploits vulnerabilities. These are prohibited outright.

High risk. This is where most of the action is. AI used in hiring, credit scoring, law enforcement, medical devices, critical infrastructure, and education falls here. If you're building CV systems for industrial safety (which I was doing at Honeywell), that likely qualifies. High-risk systems face the strictest requirements: risk assessments, data governance documentation, human oversight mechanisms, accuracy and robustness testing, and detailed technical documentation.

Limited risk. Chatbots, deepfake generators, and emotion recognition systems. The main requirement is transparency: users must be informed they're interacting with AI.

Minimal risk. Most AI applications. Spam filters, recommendation engines, video game AI. No specific obligations beyond existing law.

What Engineers Actually Need to Change

Here's where it gets practical.

Documentation becomes mandatory, not optional. For high-risk systems, you need detailed records of training data, model architecture decisions, evaluation metrics, known limitations, and intended use cases. The "it's in a Jupyter notebook somewhere" approach won't cut it. This is formalized model cards and data sheets, enforced by law.
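What "formalized model cards" means in practice is that model documentation becomes a structured, versionable artifact rather than scattered notes. A minimal sketch of that idea, using an illustrative schema of my own (the field names and values are hypothetical, not taken from the Act or any standard):

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Minimal structured model documentation (hypothetical schema)."""
    model_name: str
    version: str
    intended_use: str
    training_data: str
    architecture: str
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="defect-detector",
    version="2.1.0",
    intended_use="Surface defect detection on assembly-line imagery",
    training_data="Internal dataset v4, 120k labeled frames",
    architecture="ResNet-50 backbone, binary classification head",
    evaluation_metrics={"precision": 0.94, "recall": 0.91},
    known_limitations=["Untested below 200 lux illumination"],
)

# Serialize to a JSON artifact that can be checked in next to the weights.
print(json.dumps(asdict(card), indent=2))
```

The point is less the schema than the workflow: documentation that lives in version control alongside the model, updated on every release.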

Bias testing is required. High-risk systems must be tested for discriminatory outcomes across protected characteristics. If your hiring algorithm or credit model shows disparate impact, you can't ship it. For CV engineers, this means testing across demographic groups for face-related applications, something that should have been standard practice already but often wasn't.
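One common screening heuristic for disparate impact is comparing selection rates across groups, the so-called four-fifths rule from US employment practice. A sketch with made-up data, to show the shape of the check; this is an illustrative heuristic, not the Act's legal test for discrimination:

```python
def selection_rates(outcomes):
    """Positive-outcome rate per group, from {group: [0/1 outcomes]}."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.

    The 'four-fifths rule' heuristic flags ratios below 0.8 for review.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring-model outcomes per demographic group.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # selection rate 6/8 = 0.75
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # selection rate 3/8 = 0.375
}
ratio = disparate_impact_ratio(outcomes)
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.50 -> flag for review
```

Real bias evaluations go much further (confidence intervals, intersectional groups, calibration by subgroup), but even this level of check, run in CI, catches regressions before they ship.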

Human oversight must be designed in. You can't deploy a high-risk AI system that operates as a complete black box. There must be mechanisms for human review, intervention, and override. This has real architectural implications for how you design inference pipelines and decision workflows.
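The simplest architectural pattern here is confidence-thresholded routing: high-confidence predictions proceed automatically, everything else is deferred to a human review queue. A minimal sketch, with names and threshold of my own choosing:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float
    source: str  # "model" for automatic, "human" for deferred

def decide(model_label: str, confidence: float,
           review_queue: list, threshold: float = 0.9) -> Decision:
    """Act on high-confidence predictions; escalate the rest to a human."""
    if confidence >= threshold:
        return Decision(model_label, confidence, source="model")
    review_queue.append((model_label, confidence))
    return Decision("pending_review", confidence, source="human")

queue = []
auto = decide("defect", 0.97, queue)      # acted on automatically
deferred = decide("defect", 0.62, queue)  # escalated for human review
print(auto.source, deferred.label, len(queue))  # model pending_review 1
```

The design consequence is that "pending" becomes a first-class state in your pipeline, with UI and storage to match, rather than something bolted on after launch.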

Audit trails are not optional. High-risk systems must log their operations in ways that allow post-hoc analysis. If a decision is challenged, you need to be able to explain what happened and why.
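Concretely, this means every inference leaves a timestamped, machine-parseable record. A sketch of one way to do it, as append-only JSON Lines; the field names are illustrative, and a real deployment would write to a durable, tamper-evident store rather than an in-memory buffer:

```python
import io
import json
import time

def log_decision(stream, model_version, inputs_digest, output, confidence):
    """Append one structured audit record per decision (JSON Lines)."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs_digest": inputs_digest,  # hash of the inputs, not raw data
        "output": output,
        "confidence": confidence,
    }
    stream.write(json.dumps(record) + "\n")
    return record

audit_log = io.StringIO()  # stand-in for an append-only file or log service
log_decision(audit_log, "2.1.0", "sha256:placeholder1", "defect", 0.97)
log_decision(audit_log, "2.1.0", "sha256:placeholder2", "no_defect", 0.88)

# Post-hoc analysis: replay every decision the system made.
for line in audit_log.getvalue().splitlines():
    print(json.loads(line)["output"])
```

Logging a digest of the inputs rather than the raw data keeps the trail useful for dispute resolution without turning the audit log itself into a privacy liability.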

The US Comparison

The contrast with the United States is stark. As of late 2024, there's no federal AI legislation in the US. A patchwork of state laws, executive orders, and voluntary commitments exists, but nothing close to the EU's comprehensive framework. Colorado has an AI discrimination law. California's legislature passed a broader AI safety bill, which the governor then vetoed. The federal approach remains largely industry self-regulation.

For engineers working at companies with global reach, the EU Act becomes the de facto standard. If you're building for European users, you comply with the EU Act regardless of where your servers are.

My Take

I'll be honest: my first reaction was "this is going to slow everything down." And in some ways, it will. Documentation and compliance take time and engineering effort.

But having built CV systems at Honeywell that operated in safety-critical environments, I also know that rigorous documentation, bias testing, and audit trails aren't bureaucratic overhead. They're good engineering. The EU AI Act is essentially mandating practices that the best teams already follow.

The engineers who understand both AI and regulation will become increasingly valuable. Compliance isn't a legal team problem. It's a systems design problem. And systems design is what engineers do.