
The EU AI Act Newsletter #99: Bridging the Atlantic

Key Takeaways
- Fixed EU AI compliance dates: Dec 2027, Aug 2028
- “Nudifier” ban targets non‑consensual explicit AI images
- GPAI enforcement powers activate Aug 2026 across EU
- Amnesty warns simplification could erode digital rights
- Gender‑based AI harms remain under‑addressed in policy
Pulse Analysis
The Parliament’s simplification vote marks a pivotal shift in Europe’s AI regulatory calendar. By anchoring high‑risk system obligations to a December 2027 deadline and sector‑specific safety rules to August 2028, the EU gives companies a clear runway for compliance, while the November 2026 watermarking cut‑off pushes providers to embed provenance tools early. The explicit ban on “nudifier” applications reflects growing political pressure to protect personal dignity, and the expanded SME support signals an effort to balance innovation with oversight. For U.S. firms eyeing the European market, these dates dictate product roadmaps, investment timing, and legal risk assessments.
Enforcement of the AI Act’s Chapter V provisions will become operational on 2 August 2026, granting the European Commission authority to request documentation, conduct evaluations, and impose fines on GPAI model providers. National market‑surveillance bodies can also trigger Commission action, and downstream actors may lodge complaints, creating a multi‑layered oversight ecosystem. This robust framework aligns with findings from a recent US‑EU study tour, which identified children’s safety and high‑risk AI as immediate convergence points for transatlantic policy. The study underscored the need for cross‑sector expertise to manage AI’s societal impact, suggesting that coordinated standards could ease compliance burdens on multinational developers.
Civil‑society voices, however, warn that simplification may dilute hard‑won digital protections. Amnesty International highlighted the EU’s “Digital Omnibus” as a de‑regulatory push backed by big‑tech lobbying—Amazon alone spends roughly €7 million (about $7.6 million) annually on EU lobbying—potentially undermining the AI Act’s safeguards. Simultaneously, gender‑based AI harms remain under‑addressed, with the current Code of Practice lacking explicit provisions. Recommendations from the AI Standards Lab call for reinforced GDPR safeguards, clearer prohibitions on non‑consensual intimate imagery, and stronger AI Office resources. These critiques will shape the upcoming trilogue, influencing whether the EU’s AI regime balances innovation with fundamental rights, and how closely it aligns with U.S. regulatory trends.