Meta Rejects EU's AI Code of Practice

Brussels, July 24, 2025 – The European Commission has officially released its long-awaited guidelines on general-purpose artificial intelligence (GPAI), laying out how the models underlying systems such as ChatGPT and Gemini will be governed under the EU's AI Act. The document defines which systems qualify as GPAI and outlines the obligations for companies that develop or integrate these models.

The guidelines will come into effect on August 2, 2025, but enforcement is postponed until mid-2026 to allow the AI Office and affected companies time to adapt. Despite efforts by major tech firms to delay or weaken the rules, the Commission has opted to move forward.

In parallel, the Commission has developed a voluntary Code of Practice, meant to help companies transition into compliance with the AI Act. It was drafted in collaboration with AI developers, civil society groups, and EU regulators. While leading players such as OpenAI and France-based Mistral have agreed to sign the code, Meta Platforms has declined.

In a statement published on LinkedIn, Joel Kaplan, Meta’s Chief Global Affairs Officer, criticized the initiative, warning that the Code introduces legal ambiguities and imposes new requirements that exceed the AI Act itself. “Europe is heading down the wrong path on AI,” Kaplan wrote.

Meta’s refusal is seen as part of a broader pattern of tension between the company and EU regulators. The firm is currently under investigation for alleged violations of the Digital Markets Act (DMA) and Digital Services Act (DSA), including its controversial “Pay or OK” system and the training of its Meta AI model using publicly available online data.

In response to Meta’s claims, Commission spokesperson Thomas Regnier rejected the idea that the Code adds unnecessary burdens. According to Regnier, companies that sign the Code will benefit from “more legal certainty and reduced administrative burden.” He added that firms opting out may face increased regulatory oversight from the AI Office.

What Is the EU’s New AI Code of Practice?

The General-Purpose AI (GPAI) Code of Practice, published on July 10, 2025, is a voluntary framework designed to assist GPAI model providers in complying with the AI Act. It covers legal obligations in three key areas: transparency, copyright, and safety/security.

Key elements of the Code include:

- It complements the AI Act, which categorizes AI systems by risk and bans certain applications, with penalties for violations.

- Transparency and copyright obligations apply to all GPAI providers, including documenting model capabilities and limitations and managing copyright-protected content responsibly.

- Safety and security requirements are directed at providers of GPAI models with systemic risk, meaning models that could have significant societal impacts.

- The Code provides a practical compliance path, reducing administrative burden and offering legal clarity for AI companies operating in the EU.

While non-binding, the Code may be adopted as a reference standard through a future implementing act. It does not confer automatic legal compliance but is expected to be endorsed by EU institutions and member states.

Drafted through a multi-stakeholder process, the Code will be reviewed every two years to reflect legal and technical developments.

Next Steps

The Code of Practice, already published, is expected to be formally endorsed by the Commission and member states before the AI Act's enforcement phase begins in 2026. The AI Office, a new enforcement body within the EU Commission, will oversee implementation and coordinate with national regulators.

As tensions rise between regulators and major tech firms, the EU’s AI policy is shaping into a test case for global governance of powerful AI models—and for how much influence Big Tech will have over the rules that shape their deployment.
