No § 230 Immunity for Meta's AI-Generated Ads

The Volokh Conspiracy
Mar 27, 2026

Key Takeaways

  • Meta's AI tools contributed to fraudulent ad content
  • Court rejects §230 immunity for Meta in this case
  • Plaintiffs can pursue aiding‑and‑abetting fraud claim
  • Generative AI tools may expose platforms to liability
  • Decision aligns with prior Forrest v. Meta ruling

Summary

A federal judge in Northern California denied Meta's bid to dismiss a lawsuit alleging the company helped create fraudulent pump‑and‑dump ads for a Chinese penny stock. Plaintiffs claim Meta's AI‑driven tools—Flexible Format, Dynamic Creative, and Advantage+ Creative—generated and optimized deceptive text and images that lured investors. The court held that such involvement takes Meta outside Section 230's publisher immunity, which shields platforms only as to content provided by another party, allowing claims for aiding and abetting fraud and negligence to proceed. The ruling follows a similar decision in Forrest v. Meta, signaling a shift in platform liability.

Pulse Analysis

Section 230 has long insulated online services from liability for user‑generated content, but courts are increasingly scrutinizing the degree of a platform's involvement in creating that content. In the Bouck v. Meta case, the judge focused on Meta's Advantage+ Creative, a generative‑AI feature that automatically writes copy and designs images. By treating the AI as a co‑author rather than a neutral conduit, the court found that Meta crossed the threshold from passive host to active publisher, thereby forfeiting statutory immunity.

The ruling also advances plaintiffs' ability to allege aiding and abetting fraud. California law permits liability when a party knowingly assists another in committing an intentional tort. Here, the court accepted that Meta’s ad‑review process and its knowledge of the ads’ implausible promises could satisfy the knowledge element, allowing the fraud claim to survive a motion to dismiss. This approach mirrors the earlier Forrest decision, suggesting a developing judicial consensus that platforms cannot hide behind automated review tools when those tools materially shape deceptive messaging.

For advertisers and tech companies, the ruling underscores the need for tighter controls over AI‑generated ad content. Companies may have to implement more rigorous human oversight, transparent disclosure of AI involvement, and robust fraud‑detection mechanisms to mitigate exposure. As regulators and courts converge on the idea that AI‑assisted advertising can create legal responsibility, firms that fail to adapt could face a wave of tort actions, reshaping the economics of digital marketing and prompting a reevaluation of Section 230's scope.
