Malwarebytes Survey Finds 90% of People Don’t Trust AI with Their Data

AI-TechPark, Mar 18, 2026

Why It Matters

The findings highlight mounting consumer pressure for stronger data‑privacy safeguards, forcing AI providers and enterprises to rethink consent mechanisms and security investments.

Key Takeaways

  • 90% fear AI using data without consent
  • 91% back national data‑privacy laws
  • 43% stopped using ChatGPT; 42% stopped Gemini
  • Multi‑factor authentication usage rose to 76%
  • 71% now employ ad blockers

Pulse Analysis

Consumer wariness of artificial intelligence is no longer a niche concern; the Malwarebytes survey quantifies a broad‑scale backlash that could reshape the AI market. With 90% of respondents uneasy about data exploitation, regulators are likely to feel heightened urgency to enact or tighten privacy statutes. Lawmakers in Europe, North America, and emerging markets may look to the survey as evidence that public sentiment favors enforceable consent frameworks, potentially accelerating the rollout of AI‑specific legislation and prompting companies to embed privacy‑by‑design principles from the outset.

For businesses, the data translates into a clear mandate: trust must be rebuilt through transparent data practices and robust security controls. The surge in multi‑factor authentication, ad‑blocking, and VPN usage indicates that users are already adopting defensive tools, creating a fertile market for cybersecurity vendors that can integrate AI responsibly. Companies that leverage AI for personalization or analytics will need to demonstrate clear consent pathways; otherwise, they risk losing user engagement and facing compliance penalties. Malwarebytes’ own suite, featuring a no‑logs VPN, browser guard, and AI‑enhanced scam detection, illustrates how security firms can position themselves as guardians of privacy while still capitalizing on AI’s defensive capabilities.

Looking ahead, the interplay between AI innovation and privacy expectations will likely define competitive advantage. Enterprises that proactively adopt privacy‑centric AI models, offer granular data controls, and communicate these measures effectively will differentiate themselves in a skeptical marketplace. Meanwhile, continued consumer education and tools like Malwarebytes’ Digital Footprint Scanner can empower individuals to monitor exposure, reinforcing a feedback loop that pressures firms to uphold higher standards. In this evolving landscape, aligning AI development with rigorous data‑protection practices isn’t just regulatory compliance—it’s a strategic imperative for sustained growth.
