
Google Is Now Targeting Bad Ads over Bad Actors

Why It Matters
The move signals a fundamental change in how major platforms police content, prioritizing AI‑driven ad‑level blocks over blunt account bans, which reduces collateral damage to legitimate advertisers. It also highlights the escalating threat of AI‑generated scam ads and the need for scalable detection tools.
Key Takeaways
- Google blocked 8.3 billion ads in 2025, a record high
- Advertiser suspensions fell to 24.9 million despite the higher volume of ad blocks
- Gemini AI detected over 99% of policy‑violating ads before they were displayed
- In the U.S., Google removed 1.7 billion ads and suspended 3.3 million accounts
- AI‑driven enforcement cut incorrect suspensions by 80% year over year
Pulse Analysis
The digital advertising ecosystem is confronting a new wave of AI‑generated scams, prompting platforms to double down on automated defenses. Google’s 2025 Ads Safety Report shows that generative AI tools enable fraudsters to produce deceptive content at scale, inflating the volume of harmful ads. By embedding its Gemini models across the ad pipeline, Google can analyze creative assets in real time, flagging violations before they ever appear to users. This proactive stance not only curtails the spread of misinformation but also protects brand safety for advertisers.
Google’s strategy marks a departure from traditional, account‑centric enforcement. Rather than suspending entire advertiser accounts—a blunt instrument that often penalizes legitimate businesses—the company now focuses on blocking individual ads that breach policy. Gemini’s reported detection rate of over 99% has allowed Google to reduce false‑positive suspensions by roughly 80% year over year. This granular approach minimizes disruption for compliant advertisers while still neutralizing malicious campaigns, creating a more balanced enforcement model that aligns with both user experience and revenue considerations.
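The ad‑level enforcement model described above can be illustrated with a minimal sketch. This is purely hypothetical pseudologic, not Google's actual pipeline: the names (`review_ad`, `Account`), the confidence cutoff, and the repeat‑offender limit are all assumptions chosen to show the idea of blocking individual ads first and escalating to an account suspension only after repeated violations.

```python
from dataclasses import dataclass

# Hypothetical thresholds, for illustration only
VIOLATION_CUTOFF = 0.99   # model confidence above which an ad is blocked
SUSPENSION_LIMIT = 5      # repeated blocks before the account is suspended

@dataclass
class Account:
    account_id: str
    blocked_ads: int = 0
    suspended: bool = False

def review_ad(account: Account, violation_score: float) -> str:
    """Ad-level enforcement: block the offending ad, not the account,
    and suspend only when violations repeat past a limit."""
    if account.suspended:
        return "account_suspended"
    if violation_score >= VIOLATION_CUTOFF:
        account.blocked_ads += 1
        if account.blocked_ads >= SUSPENSION_LIMIT:
            account.suspended = True
            return "account_suspended"
        return "ad_blocked"
    return "ad_served"
```

The design point is the escalation path: a single violating creative costs the advertiser one ad, not their whole account, which is how a system could cut false‑positive suspensions while still removing repeat offenders.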
For marketers and regulators, the shift underscores the growing reliance on machine learning to manage platform integrity. Advertisers must adapt by ensuring their creative assets meet increasingly sophisticated compliance standards, potentially leveraging Google’s own verification tools to avoid inadvertent blocks. Meanwhile, policymakers will likely scrutinize the transparency of AI‑driven moderation, demanding clearer accountability mechanisms. As bad actors evolve, Google’s continued investment in Gemini and related AI systems will be pivotal in shaping the future of safe, trustworthy digital advertising.