Technology Radar
Assess

The EU AI Act is the world's first comprehensive AI regulation; it becomes fully applicable on August 2, 2026. Obligations for general-purpose AI (GPAI) models (affecting providers like OpenAI, Anthropic, Google) took effect in August 2025. Penalties reach EUR 35M or 7% of global annual turnover, whichever is higher. Only 18% of organizations have fully implemented AI governance frameworks despite using AI operationally (Stanford HAI AI Index 2025).

Why Assess (Not Hold, Not Adopt)

If your organization sells into the EU or has EU-based users, this is not optional — you need to understand your obligations now. The extraterritorial reach means a US company using AI coding agents to ship software to EU customers is in scope.

If you have zero EU exposure, this is a Hold. Don't spend compliance cycles on it yet. But watch it — the Act is already shaping regulatory conversations in the US, UK, and Canada, and similar frameworks will follow.

It's not Adopt because the implementation details are still being written. Codes of practice, harmonized standards, and enforcement guidance are in draft. Teams that over-invest in compliance tooling today risk building to a moving target. Assess means: know what's coming, identify your risk category, but don't build compliance infrastructure until the standards stabilize.

Key Timeline

  • Feb 2025: Prohibited AI practices and AI literacy obligations took effect
  • Aug 2025: GPAI model obligations entered into force — training data transparency, technical documentation, post-market monitoring (EU AI Act text, Article 53)
  • Aug 2026: Full applicability — enforcement powers, high-risk AI system rules, transparency rules, regulatory sandboxes required (at least 1 per member state)
  • Aug 2027: Extended transition for high-risk AI in regulated products

What This Means for Engineering Teams

You're probably affected if:

  • Your product uses AI coding assistants (Copilot, Claude Code, Cursor) and ships to EU customers — the generated code is downstream of a GPAI system
  • You fine-tune or host open-weight models — you may inherit GPAI provider obligations
  • You build AI features into your product — classification as high-risk depends on the domain (healthcare, finance, HR, education are high-risk by default)

You're probably not affected if:

  • You use AI tools purely for internal development with no EU-facing product
  • Your AI usage is limited to code completion without customer-facing output

Practical steps for Assess:

  1. Map which AI systems your organization uses and whether they're GPAI
  2. Determine if any output reaches EU users or regulated domains
  3. Designate someone to track the evolving codes of practice (the EU AI Office publishes updates)
  4. Don't buy compliance tooling yet — the standards aren't final
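Steps 1 and 2 above amount to a simple inventory-and-triage exercise. A minimal sketch of what that inventory might look like in code (the system names, domain labels, and triage heuristic here are illustrative assumptions, not the Act's legal test):

```python
from dataclasses import dataclass

# Domains the Act treats as high-risk by default (illustrative subset)
HIGH_RISK_DOMAINS = {"healthcare", "finance", "hr", "education"}

@dataclass
class AISystem:
    name: str
    is_gpai: bool           # general-purpose AI model (e.g. hosted or fine-tuned LLM)
    domain: str             # business domain the system's output lands in
    reaches_eu_users: bool  # does any output reach EU users?

def triage(system: AISystem) -> str:
    """Rough first-pass triage -- an inventory heuristic, NOT legal advice."""
    if not system.reaches_eu_users:
        return "likely out of scope (no EU exposure)"
    if system.domain in HIGH_RISK_DOMAINS:
        return "review as potential high-risk system"
    if system.is_gpai:
        return "check GPAI provider/deployer obligations"
    return "likely minimal risk -- track transparency rules"

# Hypothetical inventory entries
inventory = [
    AISystem("copilot-assisted product code", is_gpai=True, domain="saas", reaches_eu_users=True),
    AISystem("internal test generator", is_gpai=True, domain="devtools", reaches_eu_users=False),
    AISystem("resume-screening feature", is_gpai=False, domain="hr", reaches_eu_users=True),
]

for s in inventory:
    print(f"{s.name}: {triage(s)}")
```

Even a spreadsheet with these four columns is enough at the Assess stage; the point is to know what you have and where its output goes before the standards finalize.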

Strengths

  • Creates legal certainty for AI development — knowing the rules beats guessing
  • Risk-based approach avoids blanket restrictions — most AI coding tools fall in the lowest risk tier
  • Extraterritorial reach forces global companies to engage, preventing a regulatory race to the bottom

Limitations

  • Complex, evolving compliance landscape — final harmonized standards expected late 2026 at earliest
  • Codes of practice for GPAI still in draft as of March 2026
  • Enforcement infrastructure varies wildly by member state — some have dedicated AI regulators, others are assigning it to existing data protection authorities
  • Tension between the Act's compliance burden and the speed of AI development — by August 2027, the technology landscape will look very different from when the Act was drafted (2021–2023)