AI Act Update: GPAI Code of Practice Not Finalized

As of May 2, 2025, the EU has not finalized its General-Purpose AI Code of Practice, a key milestone under the AI Act. While the European Commission promises publication before August, the delay reflects mounting tension among regulators, industry lobbyists, and international partners over how AI governance should take shape across the EU.

Today marks a key regulatory milestone under the European Union’s Artificial Intelligence Act—and a significant test of the EU’s ability to manage one of the most ambitious AI governance frameworks in the world. As of May 2, 2025, the deadline for finalizing the Code of Practice for General-Purpose AI (GPAI) has not been met. While the European Commission acknowledges the delay and signals the final Code will be released ahead of August, the implications of this missed deadline are already being felt across the AI sector.

Early Implementation Milestones: A Mixed Track Record

Since the AI Act entered into force on August 1, 2024, the EU has moved steadily to implement the regulation’s earliest provisions. Most notably, the bloc enacted key measures on February 2, 2025, including:

  • Bans on Unacceptable AI: Prohibiting systems that pose “unacceptable risks,” such as social scoring, untargeted scraping of facial images to build recognition databases, and emotion recognition in sensitive settings like schools and workplaces.
  • AI Literacy Requirements: Mandating that companies educate staff and users on the risks and responsibilities of AI use.

These early measures were successfully rolled out on schedule, underscoring the EU’s commitment to tackling harmful AI systems and promoting responsible use.

The Role of the GPAI Code of Practice in the EU AI Framework

A cornerstone of the Act’s regulatory scheme is the Code of Practice for General-Purpose AI—a compliance tool designed to provide interim guidance ahead of more detailed technical standards expected in 2027. As specified in Article 56 of the Act, the Code was to be finalized by May 2, 2025, and applies to developers of AI systems used across multiple applications and industries.

Its goals include:

  • Transparency: Clarifying training data sources and model capabilities
  • Copyright Compliance: Respecting opt-out mechanisms for protected content
  • Systemic Risk Mitigation: Introducing safeguards for large-scale, high-impact models

However, despite a broad drafting process that involved over 1,000 stakeholders, today’s deadline has passed without a final version being published.

Delays Acknowledged: What We Know as of Today

The delay in finalizing the Code of Practice has been openly acknowledged by the European Commission. According to the official introduction to the Code of Practice, the document was intended as an interim compliance tool for GPAI developers ahead of harmonized technical standards. While a finalized version remains pending, providers are currently encouraged to align with draft versions of the Code to demonstrate voluntary compliance.

However, the drafting process has not been without controversy. As reported by Euronews, lobbying by major technology firms led to the softening of transparency and accountability requirements, a move criticized by civil society groups. Simultaneously, the path to full technical standardization has been pushed back, with PYMNTS confirming that formal EU AI Act standards will not be adopted until 2027. For businesses navigating the evolving compliance landscape, the AI Act’s implementation timeline offers essential guidance on upcoming obligations and key enforcement dates.

Stakeholder Tensions and Political Pressure

The drafting process was contentious from the outset. While the Commission invited input from a wide range of organizations—Big Tech, civil society, academia—conflicting priorities stalled consensus.

Key Sources of Delay

  • Tech Industry Influence: Large companies reportedly lobbied to weaken transparency and risk assessment standards, including efforts to remove enforceable key performance indicators.
  • International Pushback: The Trump administration criticized the Code as overly restrictive, citing its potential to limit AI innovation and complicate U.S.-EU data flows.
  • Civil Society Divisions: Human rights advocates called for stronger safeguards against algorithmic bias, while others pushed for enhanced protections against existential risks posed by frontier models.

The result: a politically fraught process that ultimately forced the Commission to acknowledge that the May 2 timeline could not be met.

Compliance, Enforcement, and Interim Measures

Though the Code remains unpublished, the AI Act’s implementation continues. By August 2, 2025, the following obligations become legally binding for GPAI providers:

  • Submitting technical documentation
  • Ensuring explainability of outputs
  • Providing evidence of copyright protections

The European AI Office, operational since late 2024, will coordinate enforcement. Meanwhile, all EU Member States must designate national supervisory authorities by August 2, 2025 to ensure localized compliance and initiate penalties where appropriate. Fines for non-compliance can reach €35 million or 7% of global annual turnover, whichever is higher, with lower ceilings for less severe violations.
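
To put that ceiling in perspective, the minimal sketch below computes the top-tier maximum exposure as the greater of €35 million or 7% of global annual turnover. The turnover figure is hypothetical, and the calculation is illustrative only; actual fines depend on the violation tier and regulator discretion.

    # Illustrative calculation of the AI Act's top-tier penalty ceiling:
    # the greater of EUR 35 million or 7% of global annual turnover.
    # Figures are hypothetical; actual fines depend on the violation tier
    # and on regulator discretion.

    def max_penalty_eur(global_annual_turnover_eur: float) -> float:
        """Upper bound of the fine for the most serious violations."""
        return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

    if __name__ == "__main__":
        # Hypothetical provider with EUR 2 billion in global annual turnover.
        turnover = 2_000_000_000.0
        print(f"Maximum exposure: EUR {max_penalty_eur(turnover):,.0f}")  # EUR 140,000,000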

Providers currently deploying GPAI systems are strongly encouraged to align with the most recent draft Code of Practice in preparation for upcoming inspections and enforcement actions.

Systemic Risk Models and What to Expect in Summer 2025

A separate compliance track is emerging for AI models deemed to pose systemic risk, generally defined as those whose cumulative training compute exceeds 10²⁵ floating-point operations (FLOPs). These models will soon be subject to:

  • Adversarial testing
  • Mandatory incident reporting
  • Additional safety constraints
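
For a rough sense of which models the 10²⁵ FLOP threshold captures, the sketch below estimates training compute using the widely cited "6 × parameters × training tokens" rule of thumb for dense transformer training. Both the approximation and the example model size are assumptions for illustration; they are not a measurement method defined by the AI Act or the draft Code.

    # Rough check of whether a training run crosses the 10^25 FLOP
    # systemic-risk threshold, using the common 6 * parameters * tokens
    # approximation for dense transformer training compute. This is a
    # back-of-the-envelope estimate, not a method prescribed by the Act
    # or the draft Code of Practice.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def estimated_training_flops(parameters: float, training_tokens: float) -> float:
        """Approximate total training compute as 6 * parameters * tokens."""
        return 6.0 * parameters * training_tokens

    def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
        """True if the estimated compute exceeds the 10^25 FLOP threshold."""
        return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

    if __name__ == "__main__":
        # Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
        params, tokens = 70e9, 15e12
        print(f"Estimated compute: {estimated_training_flops(params, tokens):.2e} FLOPs")
        print("Presumed systemic risk:", presumed_systemic_risk(params, tokens))

In practice, providers would rely on measured training compute and on the assessment criteria the AI Office publishes rather than on this rule of thumb.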

The European AI Office is expected to release updated risk thresholds and assessment criteria in June 2025, offering further clarity to GPAI developers with large-scale or high-impact systems.

Looking Ahead: High-Risk AI Systems in 2026 and Full Implementation by 2027

While today’s missed Code of Practice deadline is a notable setback, the EU AI Act remains on track for broader implementation over the next two years.

  • August 2026: Obligations for high-risk AI systems in employment, healthcare, law enforcement, and critical infrastructure take effect. These systems will require conformity assessments and risk management documentation.
  • August 2027: All remaining provisions of the AI Act will apply, including those affecting AI embedded in regulated products like medical devices and industrial control systems.
  • August 2030: Final compliance deadline for legacy AI systems used by public authorities.

A Flexible but Fraught Rollout

As of today, the European Union has not met its own May 2, 2025 deadline for finalizing the GPAI Code of Practice. While the delay was widely anticipated and has been partially accommodated, it underscores the difficulty of regulating a fast-evolving technology amid international and industry pressure.

Businesses, especially those developing or deploying GPAI systems in the EU, should not treat this delay as a pause. Instead, the use of draft versions, ongoing engagement with regulators, and readiness for the August 2025 obligations are essential next steps.

The coming months will be critical in determining whether the EU can maintain its leadership in ethical AI governance—or whether delays and divisions will continue to undercut its regulatory ambitions.
