State AI Laws – Where Are They Now?


Cooley
Apr 24, 2026

Why It Matters

Companies must adapt compliance programs to rapidly evolving state mandates while monitoring federal actions that could override or invalidate current obligations, directly affecting risk, cost, and market entry strategies.

Key Takeaways

  • Colorado revises SB 205, narrowing “high‑risk AI” to specific decisions
  • California AI bills require disclosures starting 2026
  • Utah’s AI Policy Act narrowed by SB 226, SB 332
  • New York RAISE Act raises penalties, 72‑hour incident reporting
  • Federal AI framework could preempt state regulations, sparking lawsuits

Pulse Analysis

State AI regulation in the U.S. is entering a phase of consolidation and recalibration. Early adopters such as Colorado, California, Utah, and New York each launched ambitious frameworks, but industry pushback and practical implementation challenges are prompting lawmakers to trim scope, delay effective dates, or re‑engineer definitions. Colorado’s SB 205, originally a comprehensive high‑risk AI regime, now targets “covered ADMT” (automated decision‑making technology) that materially influences decisions in housing, employment, finance, and health, while shedding many risk‑management mandates. California’s cascade of bills (AB 2013 on training‑data transparency and SB 942/AB 853 on content labeling) sets a multi‑year rollout that will compel large generative‑AI providers and platforms to embed disclosures and detection tools by 2026‑2027. Utah’s AI Policy Act, which extended consumer‑protection statutes to AI, has been narrowed to high‑risk interactions, and New York’s RAISE Act has shifted from prohibitive deployment bans to a transparency‑focused model with steep penalties and a 72‑hour incident‑reporting deadline.

These state‑level shifts are occurring against a backdrop of federal ambition. The White House’s National Policy Framework for Artificial Intelligence calls on Congress to enact sweeping legislation that would preempt state rules deemed to impose “undue burdens” on innovation. The Department of Justice’s AI Litigation Task Force and potential funding restrictions from the Department of Commerce signal that federal authorities may actively challenge state statutes, creating a layered compliance environment. Companies therefore need a dual‑track strategy: meet current state requirements to avoid early enforcement, while building flexibility to pivot if federal preemption or new federal standards emerge.

For businesses, the practical takeaway is to treat AI compliance as a moving target. Prioritize documentation, risk assessments, and consumer‑facing disclosures for the most consequential AI applications, especially those affecting credit, employment, housing and healthcare decisions. Deploy modular governance tools that can be scaled up or down as state laws are amended or as federal guidance crystallizes. Continuous monitoring of legislative calendars, agency guidance, and emerging case law will be essential to mitigate legal exposure and maintain competitive advantage in an increasingly regulated AI landscape.

