EU AI Act: Builders and Deployers, Both on the Hook

What every developer, ML engineer, and tech lead needs to know right now

EU AI Act: New rules, new reality for tech builders, image by GPT-4o, prompt by author.

Yesterday, you proudly wired a slick, GPT-powered feature into your app and pushed it to prod. Today, you’re viral on X, not for innovation, but because your AI cheerfully misidentified London as the capital of France.

Seconds later, an EU regulator politely but firmly requests your AI logs.

Welcome to your new reality: life under the EU AI Act, the General Data Protection Regulation (GDPR)'s tougher cousin, less about cookie banners and more about serious teeth.

Why panic?

Because your old AI playbook just went up in flames. Until now, you could:

  • Scrape random data, fine-tune a model, and move on.
  • Plug-and-play with an LLM as a “magic black box.”
  • Ignore real-world performance after launch because “it’s beta.”

No more. The Act drags everyone, from genius model creators to developers just wiring APIs into apps, onto the accountability stage:

  • Prove your training data isn’t biased.
  • Stress-test, document, and prove your AI’s resilience before it sees daylight.
  • Log every inference so regulators can rewind your decisions.
  • Keep a real human ready to kill-switch your tech when it misbehaves.

Slip up, and you’re looking at a fine that will seriously dent your revenue, up to 7% of global annual turnover.

Think of it as mandatory CI/CD for reliability: it takes time to set up, but it will save you in the long run, exactly when you need it most.

In this article, we'll break the legal jargon down into clear, actionable steps: determining your product's risk level, the exact documentation you'll need, and how to retrofit transparency before the August 2026 deadline arrives.


The risk-based approach

Whether you’ve already deployed AI systems or are planning new ones, your first critical step is determining which risk tier your AI falls into.

The EU AI Act categorizes AI into four clear risk tiers, each with specific rules tailored to potential impact.

[Figure: pyramid of the EU AI Act's four risk tiers, top to bottom: (1) Unacceptable risk, e.g. social scoring and manipulation; (2) High risk, e.g. critical infrastructure and decision systems; (3) Limited risk, transparency-required systems like chatbots; (4) Minimal risk, everyday AI like video games and spam filters.]
The EU AI Act risk pyramid: where does your system fall? Image by author

1. Unacceptable risk

These are banned outright. Think Orwellian social scoring, manipulative AI that exploits vulnerabilities, or mass surveillance through biometric ID. Deploy these, and you’ll face fines up to 7% of your global revenue.

Examples:

  • Social scoring systems that rank citizens based on behavior
  • AI that uses subliminal manipulation in marketing to influence people against their will
  • Emotion-recognition AI monitoring students during exams or tracking employee expressions
  • Predictive policing tools that profile people to forecast criminal behavior without evidence
  • Facial recognition databases built by scraping online images without consent

2. High risk

These are heavily regulated: hiring tools, credit scoring, educational scoring systems, and critical infrastructure management all land here. If your AI falls in this tier, you'll need to stress-test rigorously, document every step, and maintain logs meticulously. Fail compliance, and it's back to the drawing board or, worse, the courtroom.

Examples:

  • Healthcare: AI systems diagnosing medical conditions or recommending treatments
  • Finance: Credit scoring algorithms determining who gets loans
  • Education: AI systems evaluating students or deciding school admissions
  • HR: Resume screening tools, candidate ranking systems, or employee monitoring
  • Infrastructure: AI managing energy grids or traffic control systems
  • Public Services: AI triaging emergency calls or determining welfare eligibility

3. Limited risk

These face light-touch rules. Chatbots, generative AI, and deepfakes are allowed, but users must clearly be informed they’re interacting with AI or AI-generated content. Transparency is your ticket to smooth sailing.

Examples:

  • Customer service chatbots (must disclose they’re AI)
  • AI-generated content like deepfakes (requires clear labeling)
  • Virtual assistants and voice interfaces (should identify as AI)
  • Recommendation systems that interact directly with users

4. Minimal risk

These are free to operate. Everyday AI, like spam filters or game NPCs, continues business as usual, with no new hurdles under the AI Act.

Examples:

  • Spam filters in email
  • Video game AI controlling NPCs
  • Basic e-commerce recommendation engines
  • Inventory optimization tools
  • Routine analytics and data processing AI

Context matters!

The same AI technology could fall into different risk categories depending on its use. A chatbot recommending movies is minimal risk, but if it’s giving medical advice, it jumps to high risk.


Determining your role

Before you can even consider compliance, you need to figure out which hat you are wearing under the EU AI Act. This gets interesting because you may be wearing more than one.

Supplier

If you develop an AI system, whether for external customers or just for internal use, you are considered a supplier (the Act's official term is "provider"). Yes, even if nobody outside your company ever touches it.

Deployer

If you integrate or operate an AI system that someone else built, you are a deployer. You still have obligations, like monitoring the system, logging performance, and ensuring human oversight.

Both

Congratulations, you can be both supplier and deployer at the same time. If, for example, you create an internal AI-based hiring tool for your HR department, you're both supplying and deploying the system. Double the roles, double the responsibilities: you must fulfill all supplier and deployer obligations. This means:

  • Build compliance into your AI (risk management, documentation, logging, certification).
  • Operate it strictly per your own instructions, monitor its output, keep logs, and report incidents, just as if you were a separate user of your own product.

Why this matters

Each role comes with a different laundry list of obligations, risk assessments, and potential fines. Confusing your role today means drowning in compliance hell tomorrow.


Once you’ve determined your role (supplier or deployer) and identified the risk category your AI falls into, here’s your practical roadmap:

1. Unacceptable risk

If your AI use case falls into the banned category — such as social scoring, manipulative practices, or mass biometric surveillance — your immediate action is straightforward:

  • Cease any development or deployment immediately.
  • Conduct an internal audit to ensure no such practices are inadvertently implemented.
  • Stay informed about the latest prohibited use cases to avoid compliance pitfalls.

2. High risk

High risk covers AI systems such as hiring tools, credit scoring, and critical infrastructure management. If your system falls here:

  • Enforce complete risk management and control procedures.
  • Perform thorough conformity assessments and obtain necessary certifications (CE marking).
  • Maintain detailed documentation of all training data, validation methods, and compliance measures.
  • Establish ongoing human oversight mechanisms and emergency stop capabilities (a minimal kill-switch sketch follows this list).
  • Set up robust logging for every inference to ensure full traceability.
  • Frequently review and revise compliance measures to maintain standards and adapt to developing regulations.
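
To make the oversight and emergency-stop bullet concrete, here is a minimal sketch of a human kill switch wrapped around automated decisions. Everything in it (the KillSwitch and DecisionRecord names, the model's predict and version attributes, the 0.5 threshold) is an illustrative assumption rather than anything the Act prescribes; the point is simply that a named human can halt automation and the fallback is recorded.

```python
# Minimal human-oversight sketch: a named operator can halt automated
# decisions, and paused cases are routed to manual review. All names and
# thresholds here are illustrative, not mandated by the AI Act.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    input_id: str
    model_version: str
    score: float
    decision: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class KillSwitch:
    """Global flag a human operator can flip to stop automated decisions."""

    def __init__(self) -> None:
        self._active = False

    def activate(self, operator: str, reason: str) -> None:
        self._active = True
        print(f"Kill switch activated by {operator}: {reason}")

    @property
    def active(self) -> bool:
        return self._active


def decide(features: dict, model, kill_switch: KillSwitch) -> DecisionRecord:
    """Run the model unless a human has paused it; then route to review."""
    if kill_switch.active:
        return DecisionRecord(
            input_id=features["id"],
            model_version=getattr(model, "version", "unknown"),
            score=float("nan"),
            decision="routed_to_human_review",
        )
    score = model.predict(features)  # assumed model interface
    return DecisionRecord(
        input_id=features["id"],
        model_version=getattr(model, "version", "unknown"),
        score=score,
        decision="approved" if score > 0.5 else "denied",
    )
```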

3. Limited risk

If your AI system, like chatbots or generative AI, falls into the limited-risk tier:

  • Clearly inform users when they are interacting with AI or AI-generated content (see the sketch after this list).
  • Label generated content appropriately to maintain transparency and user trust.
  • No certification is required, but transparency documentation should be clear, easily accessible, and user-friendly.
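
As a rough illustration of the transparency duties above, here is one lightweight pattern: label every generated response and show a disclosure at the start of a conversation. The wording, the field names, and the wrap_chat_response helper are all assumptions; the Act sets the obligation, not the implementation.

```python
# Illustrative only: one lightweight way to satisfy "tell users it's AI".
# The disclosure text, labels, and response shape are product decisions.
AI_DISCLOSURE = "You are chatting with an AI assistant. Responses are AI-generated."


def wrap_chat_response(generated_text: str, first_message: bool) -> dict:
    """Attach an AI-generated label to every reply and a disclosure banner
    on the first message of a conversation."""
    return {
        "text": generated_text,
        "labels": ["ai_generated"],
        "disclosure": AI_DISCLOSURE if first_message else None,
    }


# Example:
# wrap_chat_response("Here are three movies you might like...", first_message=True)
```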

4. Minimal risk

For AI with minimal risk, such as spam filters or video game AI:

  • Continue regular operations with standard good practices.
  • Stay vigilant about any changes in regulatory frameworks or risk assessments.

Aligning your AI systems to these practical steps will ensure compliance and position your company as a trusted leader in the evolving AI regulatory landscape.


Implementing AI Act compliance

Beyond the legal jargon, here’s what AI Act compliance looks like in code and systems:

Logging and record-keeping

For high risk AI, automated logging isn’t optional; it’s required.

Here’s what to capture:

  • Input data: Log what goes into your AI (anonymized where appropriate)
  • Output results: Record what your AI decided or predicted
  • Decision factors: Document which features influenced the outcome
  • System events: Track model updates, errors, and human interventions

Example: A fintech company that deploys a credit risk model logs each application: application ID, model version, key input features (like debt-to-income ratio), the score produced, and the decision (approved/denied). They also log any human override and the rationale. These logs are stored securely for 5 years.
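
Below is a minimal sketch of the kind of per-inference audit record the fintech example describes, using Python's standard logging module and JSON lines. The field names, file path, and log_inference helper are illustrative assumptions; adapt them to your own schema, retention policy, and anonymization rules.

```python
# Append-only, per-inference audit log modeled on the fintech example above.
# Field names and the output path are illustrative; anonymize inputs where
# appropriate and secure the log files with access controls and encryption.
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("credit_model.audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("credit_model_audit.jsonl"))


def log_inference(application_id: str, model_version: str, features: dict,
                  score: float, decision: str,
                  human_override: str | None = None) -> None:
    """Write one JSON audit record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "application_id": application_id,
        "model_version": model_version,
        "features": features,              # e.g. {"debt_to_income": 0.42}
        "score": score,
        "decision": decision,              # "approved" / "denied"
        "human_override": human_override,  # reviewer and rationale, if any
    }
    audit_logger.info(json.dumps(record))


# Example:
# log_inference("app-123", "credit-risk-v2.3",
#               {"debt_to_income": 0.42}, 0.71, "approved")
```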

Storage tip: Retain high risk AI logs for 5–10 years, especially in regulated industries. Secure them appropriately with access controls and encryption.

Assessing and addressing bias

High risk AI systems must be demonstrably fair and non-discriminatory. Here’s how to test:

  1. Create diverse test datasets representing different demographics
  2. Slice results by key groups (gender, age, ethnicity, etc., where applicable)
  3. Compare metrics across groups (error rates, selection rates, etc.)
  4. Look for disparate impact (e.g., if Group A is selected at half the rate of Group B)

Tools you can use: open-source fairness toolkits such as Fairlearn or IBM's AIF360 can automate these checks; a minimal hand-rolled version of the disparate-impact test is sketched below.
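
For illustration, this sketch covers steps 2 through 4: slice outcomes by group, compare selection rates, and flag anything below the commonly used four-fifths threshold. The gender and selected column names and the 0.8 cutoff are assumptions; use whichever protected attributes and thresholds apply to your system.

```python
# Minimal disparate-impact check: compare selection rates across groups and
# flag ratios below the common four-fifths (0.8) rule of thumb. Column names
# and the cutoff are illustrative assumptions.
import pandas as pd


def disparate_impact_report(df: pd.DataFrame,
                            group_col: str = "gender",
                            outcome_col: str = "selected") -> pd.DataFrame:
    rates = df.groupby(group_col)[outcome_col].mean().rename("selection_rate")
    report = rates.to_frame()
    report["impact_ratio"] = report["selection_rate"] / report["selection_rate"].max()
    report["flagged"] = report["impact_ratio"] < 0.8  # four-fifths rule
    return report


# Toy example: group "f" is selected at a third of the rate of group "m".
df = pd.DataFrame({
    "gender":   ["f", "f", "f", "f", "m", "m", "m", "m"],
    "selected": [1,   0,   0,   0,   1,   1,   0,   1],
})
print(disparate_impact_report(df))  # "f" gets flagged -> investigate
```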

Mitigation strategies: If you find bias, consider:

  • Enhancing training data variety
  • Adjusting model parameters or thresholds
  • Post-processing outputs to ensure fairness

Real-world example: A hiring startup discovered their resume-screening AI scored candidates with female-coded words (like “women’s college”) lower. They retrained the model on an improved dataset and added a post-processing rule to equalize gender pass rates, then re-tested to confirm comparable outcomes.

Performance monitoring

Compliance doesn’t end at deployment. Set up ongoing monitoring:

  1. Define key performance indicators relevant to your AI’s purpose
  2. Track metrics over time to detect performance degradation
  3. Watch for data drift where input patterns change
  4. Set up automated alerts for significant deviations

Monitoring infrastructure options: anything from open-source drift-monitoring libraries (for example, Evidently) to standard observability stacks like Prometheus and Grafana, or the monitoring features built into your ML platform.

Feedback mechanisms: Enable users or overseers to flag questionable AI decisions. If humans frequently override the AI, that’s a signal that your model needs attention.

Example: A manufacturing company using AI to detect defects monitors the rate of defects flagged daily. When the rate suddenly drops to zero, their system alerts them. The AI might be missing defects due to environmental changes. They pause the AI, recalibrate, and resume operations.
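
Here is a rough sketch of the kind of alert the manufacturing example relies on: track the daily defect-flag rate against a rolling baseline and raise an alert when it collapses. The window size, the 50 percent drop threshold, and the print-based alert hook are assumptions; wire in your own paging or ticketing system.

```python
# Illustrative degradation alert, modeled on the manufacturing example above:
# compare today's defect-flag rate to a rolling baseline and alert on a
# sudden drop. Window, threshold, and alert mechanism are assumptions.
from collections import deque
from statistics import mean


class FlagRateMonitor:
    def __init__(self, window_days: int = 30, drop_threshold: float = 0.5):
        self.history = deque(maxlen=window_days)  # recent daily flag rates
        self.drop_threshold = drop_threshold      # alert below 50% of baseline

    def record_day(self, flagged: int, inspected: int) -> None:
        rate = flagged / max(inspected, 1)
        baseline = mean(self.history) if self.history else None
        self.history.append(rate)
        if baseline and rate < baseline * self.drop_threshold:
            self.alert(rate, baseline)

    def alert(self, rate: float, baseline: float) -> None:
        # Replace with your paging / ticketing integration.
        print(f"ALERT: defect-flag rate {rate:.2%} vs baseline {baseline:.2%}. "
              "Pause the model and investigate (lighting, camera, data drift).")


# Example: the rate suddenly dropping to zero triggers the alert.
monitor = FlagRateMonitor()
for flagged, inspected in [(12, 1000), (11, 1000), (10, 1000), (0, 1000)]:
    monitor.record_day(flagged, inspected)
```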


Impact beyond Europe

You might be thinking, “I’m not based in Europe, why should I care?” Here’s why: the EU AI Act is more than just a local headache — it’s creating a ripple effect felt worldwide.

Why this matters even if you’re not in the EU

Simply put, if your AI touches EU citizens, even indirectly, you’re in the game. Much like GDPR forced a global rethink of data privacy, the EU AI Act forces global compliance if you serve European users or even use European data. It’s a boundaryless regulation that reshapes the playing field.

How global tech companies are adapting

Tech giants like Google, Microsoft, and OpenAI aren’t waiting to see if regulators knock. They’re proactively aligning products globally to EU standards. Why?

Running separate compliance regimes for different markets is costly and impractical, so European standards are becoming global standards by default.

However, it also leads to delays or even the absence of certain AI technologies in Europe, such as Apple’s postponement of AI-powered features due to EU compliance challenges.

The “Brussels Effect” in plain English

This global shift is known as the “Brussels Effect.” It’s the phenomenon where stringent regulations in Europe become worldwide benchmarks.

Brussels sets the bar high, companies adapt, and Europe’s local laws suddenly become everyone’s compliance nightmare or opportunity, depending on your perspective.

In short, even if you never planned a European vacation, your AI products might already be checking in.


The timeline

The AI Act isn’t some distant regulation — it’s already here and rolling out in phases that demand your attention now.

When these changes kick in

  • August 1, 2024: The AI Act officially went into force, marking the beginning of a transitional period.
  • February 2, 2025: Prohibited practices officially become illegal. Ensure you’ve entirely ceased any banned AI operations by this date.
  • August 2, 2025: Compliance obligations for general-purpose AI (like foundation models) come into effect.
  • August 2, 2026: Most compliance requirements for high risk AI systems officially begin. By this date, your high risk systems must be fully compliant.
  • August 2, 2027: Extended deadline for compliance of AI embedded within existing regulated products.

What to watch for in the coming months

  • Emerging EU standards: Look for the release of harmonized standards, guidelines, and clarifications from EU regulators.
  • Regulatory sandboxes: Opportunities to test innovative AI products under regulatory supervision will arise — stay alert for announcements.
  • Enforcement trends: Monitor enforcement actions to understand regulators’ priorities and the practical implications for your sector.

How to prepare yourself and stay informed

  • Regularly review official EU announcements and industry updates.
  • Engage in industry forums and workshops to exchange compliance strategies.
  • Consider internal training sessions on AI ethics and regulatory compliance to ensure your team is ready.
  • Proactively assess your current AI systems and start aligning them with the forthcoming requirements to avoid last-minute compliance pressure.

Gray areas and hidden traps

The EU AI Act looks clean on paper, but under the hood, it’s a legal minefield. Let’s look at some of the most confusing gray zones developers and companies are stumbling into:

1. Are you a supplier or a deployer?

Spoiler: You might be both.
If you build an internal AI tool — even if you never sell it — you are treated as a supplier under the Act.

Yes, even if it’s just a scrappy internal app your team hacked together. Welcome to double the obligations: you must meet both the supplier and deployer requirements. This is product law logic awkwardly jammed into the world of software.

2. The shapeshifting definition of AI

The law’s definition of “AI system” is ridiculously broad. Anything from classic machine learning models to potentially even complex rule-based systems might fall under it. The guidance documents? About as clear as a mud puddle.

3. Open Source: Not a free pass

Using open-source models doesn’t magically shield you. If you package them into your own application, congratulations — you own the risk. You are responsible for ensuring the combined system is safe, even if you pulled half of it from GitHub.

4. Buy or build?

Even if you simply buy and deploy AI systems, you still carry monitoring, oversight, and logging obligations. Meanwhile, building your own solution turns you into a “manufacturer” with even heavier duties, including full risk documentation, human oversight guarantees, and transparency measures.

5. Role confusion means litigation risks

Expect legal fights over who counts as the “supplier” versus the “user” versus the “distributor.” Even regulators admit the current text will need years of court cases, interpretations, and patchwork guidelines to stabilize.


Ship or sink

Forget the fantasy that regulators will look the other way. The EU AI Act is live, the clock is ticking, and fines of up to 7 percent of global revenue aren’t a scarecrow, they’re a guillotine.

So approach this like the engineer you are:

1. Map your risk tier today, not “after launch.”

If your tool lands in high risk territory, build the logs, safety tests, and human kill switches now. Retrofitting them later will cost much more.

2. Own your role(s)

Supplier, deployer, or a mix of both: each hat carries its own paperwork and liability. Ignoring this is like pushing to prod without version control.

3. Automate governance

Automate governance the same way you automated CI/CD. Treat bias audits and traceability checks as unit tests for ethics. They run every build, catch regressions, and keep you sleeping at night.

4. Use the Brussels effect

Align once with EU standards, and you’ve future-proofed most other markets. That’s not red tape; that’s a global fast-pass.

Compliance isn’t a bureaucratic moat. It’s the new uptime.

Build it into your pipeline and you won’t just dodge fines. You’ll ship products that survive the next wave of regulation while your competitors scramble for lifeboats.


Disclaimer

In this article, I provide general information regarding the EU AI Act. I do not offer legal advice. Consult a legal professional for specific advice for your situation. The regulations are complex and can change.