EU AI Act: Second Draft of Code of Practice on Transparency and Watermarking Published

Cooley
Apr 7, 2026

Why It Matters

The draft sets a de facto industry benchmark that will shape compliance costs and technical architectures for AI providers and content deployers across the EU. Early alignment can reduce regulatory risk and give firms a competitive edge in trustworthy‑AI markets.

Key Takeaways

  • Digitally signed metadata now mandatory, requiring PKI infrastructure
  • General‑purpose AI (GPAI) model providers shift from mandatory to voluntary watermarking
  • Signatories must offer free EU‑localised AI detection tools
  • Cooperative, provider‑agnostic detection interface becomes compulsory
  • Deployers face new modality‑specific labeling rules and reduced documentation

Pulse Analysis

The second draft of the EU AI Act’s Code of Practice marks a pivotal shift from aspirational guidance to enforceable best practices. By mandating digitally signed, timestamped metadata, the Commission forces AI developers to embed provenance data directly into content streams, a move that aligns with emerging standards like C2PA. This requirement not only raises the technical bar—necessitating robust PKI and certificate lifecycle management—but also creates a uniform signal for regulators and downstream platforms to verify authenticity. The inclusion of a free, EU‑localised detection tool further solidifies the EU’s intent to build a transparent AI ecosystem that can be audited in real time.
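The signed, timestamped metadata requirement can be sketched in miniature. The example below is illustrative only and is not C2PA's actual manifest format: it uses an HMAC for brevity, whereas the draft's PKI requirement implies asymmetric signatures (e.g., Ed25519) bound to X.509 certificates with managed lifecycles, and every field name here is a hypothetical placeholder.

```python
import hashlib
import hmac
import json
import time

def sign_metadata(content: bytes, key: bytes) -> dict:
    """Attach signed, timestamped provenance metadata to generated content.

    Illustrative shape only: a real deployment would sign with a private key
    whose certificate chains to a trusted CA, per the draft's PKI requirement.
    """
    metadata = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": "example-ai-model",  # hypothetical identifier
        "timestamp": int(time.time()),    # signing time, Unix seconds
    }
    # Canonicalise the metadata before signing so verification is deterministic.
    payload = json.dumps(metadata, sort_keys=True).encode()
    metadata["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return metadata

def verify_metadata(content: bytes, metadata: dict, key: bytes) -> bool:
    """Check both the signature and that the content hash still matches."""
    claimed = dict(metadata)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())
```

Binding a content hash into the signed payload is what makes the metadata tamper‑evident: altering either the content or any metadata field invalidates verification.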

For AI system providers, the draft introduces a mixed regulatory landscape. While general‑purpose AI model providers are now encouraged rather than compelled to embed watermarks, the onus shifts to system integrators who must ensure downstream compliance. This could spur new commercial contracts that stipulate watermarking as a service level, even if not legally required. Moreover, the mandatory, provider‑agnostic detection interface forces dominant players to open up their proprietary detection methods, fostering interoperability but also exposing proprietary techniques to broader scrutiny. Companies that invest early in interoperable detection stacks will likely gain a market advantage and reduce future retrofitting costs.
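A provider‑agnostic detection interface of the kind the draft contemplates might take a shape like the following sketch. The `AIContentDetector` protocol, the `DetectionResult` fields, and the toy watermark check are all assumptions for illustration, not the Code's actual specification.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class DetectionResult:
    is_ai_generated: bool
    confidence: float  # 0.0 (certainly human) to 1.0 (certainly AI)
    provider: str      # which detector produced this verdict

class AIContentDetector(Protocol):
    """Hypothetical common interface any provider's detector could implement,
    letting platforms query detectors interchangeably."""
    def detect(self, content: bytes, mime_type: str) -> DetectionResult: ...

class ExampleWatermarkDetector:
    """Toy implementation: flags content carrying a magic watermark tag."""
    def detect(self, content: bytes, mime_type: str) -> DetectionResult:
        found = b"[AI-WATERMARK]" in content
        return DetectionResult(found, 1.0 if found else 0.0, "example-provider")
```

A shared interface like this is what lets a platform swap or aggregate detectors from multiple providers without changing its own code, which is the interoperability benefit the draft is driving at.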

Deployers—media firms, advertisers, and platform operators—must redesign workflows to meet modality‑specific labeling rules, such as persistent AI tags on short videos or repeated disclosures on long audio. The reduction in documentation burdens eases internal compliance, yet the prohibition on label removal raises practical enforcement challenges. Firms should therefore embed labeling at the content creation layer and adopt tamper‑evident mechanisms to preserve disclosures downstream. Aligning with the draft now positions organizations to demonstrate good faith compliance when the final code is adopted, mitigating potential penalties and reinforcing consumer trust in AI‑generated media.
