The FTC’s AI Portfolio Is About to Get Bigger

CyberScoop
Apr 20, 2026

Why It Matters

The FTC’s new powers could dramatically reduce the spread of harmful AI‑generated content, protect victims, and force tech firms to adopt robust compliance frameworks, while also targeting a fraud ecosystem that has siphoned billions from consumers.

Key Takeaways

  • The FTC begins enforcing the Take It Down Act in May, including its 48‑hour removal rule
  • The first conviction under the law shows criminal liability for AI‑generated deepfake harassment
  • xAI’s Grok faces potential takedown actions over mass nudification
  • Voice‑cloning scams have stolen roughly $900 million from U.S. consumers
  • The FTC is preparing guidance for companies on good‑faith takedown efforts

Pulse Analysis

The Take It Down Act, passed by Congress last year, marks a watershed moment for digital privacy and content moderation. By criminalizing the creation and distribution of nonconsensual AI‑generated sexual imagery and granting individuals a statutory right to demand removal, the law gives the FTC a concrete enforcement toolbox. Starting in May, platforms must act within 48 hours of a verified takedown request or risk investigation, a timeline that pushes companies to embed rapid response mechanisms into their moderation pipelines.

The FTC’s enforcement momentum is already evident. The recent conviction of an Ohio resident who used AI‑generated deepfake nudes to harass a victim underscores that the statute carries real criminal consequences. At the same time, the commission is signaling to the tech sector that compliance will be scrutinized closely. xAI’s Grok, notorious for mass‑nudification of user avatars, is likely to become a test case as the FTC prepares guidance on what constitutes a good‑faith takedown effort. Industry observers expect clear, prescriptive guidelines to emerge, helping firms avoid costly enforcement actions while protecting user safety.

Beyond deepfakes, the FTC is confronting a surge in AI‑enabled fraud, particularly voice‑cloning scams that have swindled nearly $900 million from Americans. While the agency’s jurisdiction is limited by the FCC’s oversight of telecommunications, FTC Chair Andrew Ferguson is urging additional legislative authority to tackle cross‑border scams. The convergence of deepfake enforcement and fraud mitigation signals a broader strategic shift: the FTC is positioning itself as the primary regulator of AI‑driven consumer harm, a role that will shape compliance standards and legal risk for technology companies for years to come.
