Forcing millions to provide biometric data threatens privacy and civil liberties while granting tech firms unprecedented control over digital identity without robust parliamentary scrutiny.
The UK’s latest online safety push marks a decisive shift toward delegated legislation, using Henry VIII clauses to sidestep full parliamentary debate. By allowing a statutory instrument to enforce an under‑16 social‑media ban, the government can act quickly, but it also reduces transparency and public input. Coupled with new consultations on age‑gating VPNs and AI chatbots, the policy expands the reach of the Online Safety Act, embedding age‑verification mechanisms across a broader swath of digital services.
At the heart of the controversy is the reliance on private age‑assurance providers to verify users’ identities. Companies such as Persona, backed by investors linked to surveillance firms, already collect facial scans and other biometric data for platforms like Roblox, Reddit and Discord. That data flows into global commercial ecosystems, where it can be repurposed for targeted advertising or sold to third parties. In the absence of a regulatory framework, users often hand over irreversible identifiers without clear consent, raising profound privacy and security concerns.
Open Rights Group’s call for mandatory privacy and security standards seeks to fill this regulatory vacuum. By involving the ICO and Ofcom, the government could enforce data‑minimisation, encryption, and independent oversight of age‑verification services. Such safeguards would protect users from potential misuse while preserving the intended goal of protecting children online. Without them, the expansion of biometric age‑gating risks entrenching a new layer of digital infrastructure that consolidates power in the hands of a few private entities, undermining democratic accountability and digital rights.