OpenAI Wants Your Next Security Researcher to Be a Bot - New Aardvark Tool Finds and Fixes Software Flaws Automatically

TechRadar, Nov 3, 2025

Why It Matters

By automating detection and remediation, Aardvark could slash patch cycles and reduce exposure to attacks, accelerating the adoption of AI‑driven security operations across enterprises and open‑source ecosystems.

Summary

OpenAI has launched Aardvark, an autonomous AI agent built on ChatGPT that scans code, runs tests and proposes patches to fix software vulnerabilities at scale. In private‑beta testing the tool achieved a 92% success rate on benchmark “golden” repositories, and OpenAI reports it has already uncovered meaningful flaws in its own code and that of early partners. Aardvark mimics the workflow of human security researchers—reading source code, assessing exploitability, prioritising severity and generating targeted fixes—without the need for rest or manual effort. The company positions the agent as a breakthrough for developers and security teams facing tens of thousands of new bugs each year.
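The workflow the article describes — reading source code, assessing exploitability, prioritising by severity, and generating targeted fixes — can be illustrated with a toy pipeline. This is purely a hypothetical sketch: the real Aardvark is an LLM-driven agent, and the pattern table, severity scores, and suggested fixes below are invented for illustration, not OpenAI's actual rules.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    line_no: int
    pattern: str
    severity: int        # higher = more urgent (toy scale, 1-10)
    suggested_fix: str

# Hypothetical pattern table standing in for the agent's code analysis.
RISKY_PATTERNS = {
    "eval(": (9, "replace eval() with ast.literal_eval() or explicit parsing"),
    "pickle.loads(": (8, "avoid unpickling untrusted data; prefer json"),
    "shell=True": (7, "pass an argument list to subprocess instead of a shell string"),
}

def scan(source: str) -> list[Finding]:
    """Read code, flag risky constructs, and return findings sorted by severity."""
    findings = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for pattern, (severity, fix) in RISKY_PATTERNS.items():
            if pattern in line:
                findings.append(Finding(line_no, pattern, severity, fix))
    # Prioritise: most severe first, mirroring the triage step.
    return sorted(findings, key=lambda f: -f.severity)
```

Running `scan()` over a snippet containing `eval(user_input)` and `pickle.loads(blob)` returns the `eval(` finding first, since it carries the higher toy severity score.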

