AI Pulse
AI

OpenAI Desperate to Avoid Explaining Why It Deleted Pirated Book Datasets

Ars Technica AI • December 1, 2025

Companies Mentioned

OpenAI, Anthropic, Meta

Why It Matters

The decision could expose OpenAI to massive statutory damages and set a precedent for AI firms’ data‑training practices.

Key Takeaways

  • Judge orders OpenAI to disclose internal deletion communications.
  • Deletion of the LibGen “Books 1” and “Books 2” datasets may show willful infringement.
  • OpenAI’s privilege claims deemed inconsistent, weakening its defense.
  • Potential statutory damages could reach $150,000 per infringed work.
  • Anthropic’s settlement highlights an industry shift away from pirated training data.

Pulse Analysis

The recent court order against OpenAI marks a pivotal moment in the legal battle over AI training data. By mandating the release of Slack messages and lawyer communications concerning the removal of the LibGen‑derived "Books 1" and "Books 2" datasets, the judge has pierced the veil of attorney‑client privilege that OpenAI relied upon. This transparency aims to uncover whether the datasets were simply unused or deliberately discarded to evade copyright liability, a distinction that could determine the class‑action’s outcome.

Legal experts warn that OpenAI’s inconsistent arguments about "non‑use" as a privileged reason may erode its good‑faith defense. If the internal records reveal that the company was aware of the pirated nature of the books and chose to delete them to mitigate risk, courts could deem the infringement willful, unlocking statutory damages up to $150,000 per work. Such exposure not only threatens OpenAI’s bottom line but also signals to the broader AI community that privilege shields are limited when corporate conduct borders on deliberate copyright violation.
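Statutory exposure of this kind scales linearly with the number of works a court finds infringed. A minimal sketch of that arithmetic follows; the work counts are hypothetical illustrations, not figures from the case, and the $150,000 cap is the per-work maximum for willful infringement under U.S. copyright law.

```python
# Hedged sketch: statutory damages exposure = works found infringed ×
# per-work award. Work counts below are hypothetical, not case figures.

STATUTORY_MAX_WILLFUL = 150_000  # per-work cap for willful infringement

def exposure(num_works: int, per_work: int = STATUTORY_MAX_WILLFUL) -> int:
    """Upper-bound exposure if every work draws the maximum per-work award."""
    return num_works * per_work

# Illustrative scales only: pirated-book corpora can span hundreds of
# thousands of titles, so the ceiling grows into the tens of billions.
for works in (10_000, 100_000, 500_000):
    print(f"{works:>7,} works -> up to ${exposure(works):,}")
```

Even at a fraction of the statutory maximum per work, the linear scaling is what makes a corpus-sized class action existentially risky.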

The dispute unfolds against a backdrop of shifting industry standards, highlighted by Anthropic’s recent multi‑billion‑dollar settlement with authors over similar data practices. That agreement underscores a growing consensus that training large language models on unlicensed content carries significant legal and reputational risk. OpenAI’s forthcoming appeal will therefore be watched closely, as its strategy may influence future data‑sourcing policies, licensing negotiations, and the overall trajectory of responsible AI development.

Read Original Article