Legal Pulse

Consumer Tech • Cybersecurity • Legal

Tech Firms Must Remove ‘Revenge Porn’ in 48 Hours or Risk Being Blocked, Says Starmer

The Guardian • February 18, 2026

Why It Matters

The measure places legal responsibility on tech firms, aiming to curb the rapid spread of revenge porn and protect victims, while setting a global benchmark for online‑safety regulation.

Key Takeaways

  • 48‑hour removal deadline for flagged revenge porn
  • Ofcom empowered to fine or block non‑compliant platforms
  • Penalties up to 10% of global revenue
  • AI chatbots like Grok added to regulation scope
  • Victims can flag content directly or via Ofcom

Pulse Analysis

The United Kingdom is tightening its online‑safety framework by inserting a 48‑hour takedown requirement for revenge porn into the Crime and Policing Bill. Ofcom will act as the enforcement arm, equipped to issue fines that could reach ten percent of a firm’s worldwide turnover or to block access to non‑compliant services entirely. By classifying the creation and distribution of non‑consensual intimate images as a "priority offence" under the Online Safety Act, the government signals that such abuse is on par with child‑sexual‑abuse material and terrorism content, raising the stakes for all digital platforms operating in the market.

Technical enforcement will lean on existing tools like hash‑matching, which assigns a unique digital signature to known abusive media, allowing rapid detection across multiple sites. The new legislation also encourages the development of digital watermarks to flag revenge porn automatically when it reappears. However, AI‑generated deepfakes pose a significant challenge: subtle alterations can evade hash‑matching, and the sheer speed of AI image synthesis outpaces current moderation workflows. Industry experts stress the need for coordinated cross‑platform databases and advanced AI‑driven detection to keep pace with evolving threats.
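
To make the hash‑matching idea concrete, here is a minimal toy sketch of a perceptual "average hash" with a Hamming‑distance lookup, which is the general principle behind systems like PhotoDNA (the real systems use far more robust transforms over full images). Everything here, including the 8×8 grayscale input and the `matches_known` helper, is illustrative rather than any real moderation API:

```python
# Toy perceptual hash-matching sketch. Input is an 8x8 grayscale
# matrix of values 0-255; real systems operate on full decoded images.

def average_hash(pixels):
    """64-bit hash: each bit is 1 if that pixel is above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two 64-bit hashes."""
    return bin(h1 ^ h2).count("1")

def matches_known(h, known_hashes, threshold=5):
    """Flag content whose hash is within `threshold` bits of a known item."""
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)

# A "known" image (toy 8x8 gradient) and a slightly altered re-upload.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
altered = [row[:] for row in original]
altered[0][0] += 3  # small pixel edit; the hash barely moves

database = {average_hash(original)}
print(matches_known(average_hash(altered), database))  # True: near-duplicate caught
```

The distance threshold is what lets the match survive small edits such as re-compression or cropping, but it is also the weak point the article describes: a deepfake is a *new* image, so it has no nearby hash in the database at all, which is why coordinated databases and AI-driven detection are being discussed alongside hashing.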

For technology firms, the policy introduces both compliance costs and strategic risk. Companies must invest in faster moderation pipelines, integrate watermarking standards, and potentially redesign user‑reporting mechanisms to meet the 48‑hour deadline. The UK’s approach may inspire similar regulations abroad, prompting a wave of global standards that could reshape content‑moderation economics. While encrypted messaging services remain a gray area, the broader push underscores a shift toward holding platforms accountable for user‑generated abuse, offering victims a more reliable avenue for redress and signalling a decisive regulatory stance against digital misogyny.
