
Restricting Speech By Purportedly Protecting Children
Key Takeaways
- Utah's Minor Protection in Social Media Act blocked by federal judge
- Law targets platforms with vague definition, risking overbroad speech restrictions
- Federal Kids Online Safety Act faces similar free‑speech and liability concerns
- UK Online Safety Act expands age‑verification, threatening privacy and encryption
- Courts favor parental control over government censorship for minors
Pulse Analysis
The push to regulate online content under the banner of protecting children is not new, but its modern incarnation targets the architecture of social media itself. Supreme Court decisions such as Tinker v. Des Moines, Reno v. ACLU, and Brown v. Entertainment Merchants Association have consistently rejected blanket restrictions on expressive rights, emphasizing that speculative harms do not justify sweeping censorship. Today's legislators revive the same child‑protection rationales, citing mental‑health research that remains inconclusive, while seeking to impose technical controls that reshape user interaction and data visibility.
Utah's Minor Protection in Social Media Act exemplifies the tension between policy intent and constitutional limits. By mandating age‑verification systems, default privacy settings, and the disabling of engagement‑driving features on minors' accounts, the law attempted to treat social platforms like regulated consumer products. Judge Robert Shelby's preliminary injunction rested on two critical points: the statute is content‑based, because it singles out platforms labeled as "social media," and it is not narrowly tailored to the state's asserted interest in protecting minors. The ruling reinforces the principle that parental oversight, not governmental micromanagement, should govern a child's digital exposure, and it foreshadows challenges to the federal Kids Online Safety Act, which carries similar over‑censorship risks.
Across the Atlantic, the United Kingdom’s Online Safety Act raises comparable concerns, extending age‑verification mandates and granting Ofcom sweeping powers to demand content scans. Critics warn that such requirements jeopardize end‑to‑end encryption and privacy, potentially creating backdoors exploitable by malicious actors. The global pattern suggests that without clear, evidence‑based links between platform use and harm, child‑protection statutes may become tools for broader content control. As courts continue to scrutinize these measures, the tech industry and civil‑rights advocates must prepare for a landscape where constitutional safeguards and privacy considerations shape the future of online speech.