
A federal judge in California ruled that Assembly Bill 2013, which requires generative AI developers to publish high‑level summaries of their training datasets, likely does not violate the First Amendment. The decision framed the disclosure requirement as a commercial‑speech regulation aimed at informing consumers, not as compelled political speech. The court applied the Central Hudson test, finding the statute advances a substantial governmental interest in transparency and is not overly burdensome. It also concluded the law is not unconstitutionally vague, given its detailed disclosure checklist.
California’s AI training‑data disclosure law, AB 2013, marks one of the first attempts to codify transparency in the rapidly expanding generative‑AI market. By obligating developers to list dataset sources, size, licensing status, and other key attributes, the statute seeks to give consumers the factual basis to assess model reliability and bias. This approach mirrors earlier commercial‑speech regulations that required truthful product information, positioning the law as a consumer‑protection measure rather than a content‑control mechanism.
The court’s analysis leaned heavily on the Central Hudson framework, which balances governmental interests against restrictions on commercial speech. Judge Bernal concluded that the state’s interest in preventing hidden biases and enabling informed consumer decisions is substantial, and that the mandated disclosures are narrowly tailored to that end. Notably, the court did not need to rely on the more deferential Zauderer standard, which applies to purely factual disclosures aimed at curing potentially misleading commercial speech; the statute survived even under Central Hudson’s more demanding intermediate scrutiny. That suggests future AI transparency statutes may also withstand First Amendment challenges if they clearly advance public interests without overreaching.
Industry stakeholders should note that the ruling does not eliminate all legal uncertainty. While the statute survived the vagueness challenge because of its specific disclosure checklist, questions remain about the boundary between “development” and “training” data and about how the law treats licensed or third‑party models. As litigation progresses, courts will likely refine these definitions, shaping how AI firms structure their data pipelines and documentation practices. For businesses, early compliance with AB 2013 can mitigate litigation risk and signal a commitment to transparency, a competitive advantage in an environment where users increasingly demand insight into AI provenance.