

The proposal could set a precedent for regulating AI in consumer products, directly protecting children and shaping industry standards nationwide.
The rapid integration of conversational AI into consumer products has moved beyond smartphones and home assistants into children's toys. Recent high-profile cases, including families alleging that AI chatbots contributed to teenage suicides, have amplified parental anxiety and attracted regulatory scrutiny. Consumer-advocacy groups have documented toys like Kumma, which can be coaxed into discussing violence or sexual content, and Miiloo, which reportedly echoed political propaganda. These incidents illustrate the technology's capacity to generate inappropriate or harmful dialogue, prompting lawmakers to consider pre-emptive safeguards before the market matures.
In California, Senator Steve Padilla's SB 867 proposes a four-year moratorium on the manufacture of AI-enabled toys and their sale to anyone under 18. The bill is framed as a pause allowing state safety regulators to craft comprehensive guidelines, mirroring the approach of SB 243, which already mandates child-focused safeguards for chatbot operators. The proposal arrives amid President Trump's executive order urging federal agencies to challenge state AI statutes, though that order carves out a child-safety exemption. By targeting a specific product category, the legislation sidesteps broader constitutional battles while still testing the limits of state authority over emerging technologies.
If enacted, SB 867 could reshape the toy industry's product roadmap, forcing companies like Mattel and OpenAI to redesign or delay AI-powered offerings. The ban may also set a de facto national benchmark, encouraging other states to adopt similar restrictions and prompting federal agencies to revisit the child-safety carve-out. Investors will likely monitor compliance costs and potential market gaps for non-AI alternatives. In the longer term, the measure underscores a growing consensus that proactive, child-centric regulation is essential for responsible AI deployment, signaling a shift from reactive litigation to preventive policy.