Google Wants Its Latest AI Tool To Find And Fix Vulnerable Code Before It Becomes A Problem
Artificial Intelligence (AI) tools have sped up everything from app development and problem-solving to scientific discovery and medical research. At the same time, experts have warned about their potential to churn out malware at a much faster pace, find exploitable flaws quickly, seed open-source projects with backdoors at scale, and more. To meet this rising threat, Google's DeepMind division has created an AI-powered tool that not only finds critical gaps and errors in software code, but also fixes them. The company is, fittingly, calling it CodeMender.
Google says CodeMender is both reactive, "patching new vulnerabilities," and proactive, "rewriting and securing existing code and eliminating entire classes of vulnerabilities in the process." Developed over the course of roughly six months, the tool has already helped fix 72 security flaws in open-source projects, some of which comprise millions of lines of code. CodeMender relies on the powerful Gemini Deep Think models and works in an agentic manner, meaning it can handle a task autonomously with minimal to no human intervention.
Its modus operandi, however, is pretty similar to that of a human developer. It reasons through the requirements, adds or adjusts the relevant portion of the code, and then validates the changes so that the rest of the codebase doesn't run into unexpected errors. Notably, in high-stakes situations, CodeMender still surfaces the changes it has made for human review. At the moment, Google is erring on the side of caution and having human experts vet every tweak CodeMender makes.
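Conceptually, that workflow is a propose-validate-review loop. The Python sketch below is a loose illustration of the process as Google describes it; every name in it (propose_patch, run_validation, queue_for_review) is a hypothetical stand-in, not CodeMender's actual code or API.

```python
from dataclasses import dataclass
import random

@dataclass
class Patch:
    description: str

def propose_patch(flaw: str) -> Patch:
    # Stand-in for the model reasoning through a fix (assumption).
    return Patch(description=f"bounds check added for: {flaw}")

def run_validation(patch: Patch) -> bool:
    # Stand-in for rebuilding the project and running its test suite;
    # here we simply pretend that some candidate patches fail.
    return random.random() > 0.3

def queue_for_review(patch: Patch) -> None:
    # Per Google, human experts currently vet every change before merge.
    print(f"Awaiting human sign-off: {patch.description}")

def remediate(flaw: str, max_attempts: int = 5) -> Patch | None:
    """Propose a fix, validate it, and surface it for human review."""
    for _ in range(max_attempts):
        patch = propose_patch(flaw)
        if run_validation(patch):  # never ship a change that breaks the build
            queue_for_review(patch)
            return patch
    return None  # give up rather than merge an unvalidated fix

if __name__ == "__main__":
    remediate("heap overflow in image parser")
```

The key design point the sketch captures is that validation gates every change: a patch that fails the checks is discarded and retried rather than merged.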
How does CodeMender work?
Using AI to find gaps in code is only half the equation. Patching them is the more important half, especially in the case of zero-day vulnerabilities, where developers have little time on their hands and the flaws are already being (or have been) exploited by bad actors in the wild. CodeMender, powered by Google's Gemini AI models, reduces the damage potential by fixing the code and plugging those attack points.
On the technical side of things, the AI-powered tool employs methods such as static analysis, dynamic analysis, differential testing, fuzzing, and SMT solvers. Moreover, it doesn't simply read the codebase line by line; it analyzes the structure of the entire program and how data flows through it. The overarching idea isn't just to pinpoint flaws, but to trace them to their root cause and detect deeper architectural issues.
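Two of those techniques are easy to picture in miniature. The sketch below pairs a crude fuzzer (hammering a function with random inputs) with a differential test (comparing a patched implementation against a trusted reference); the toy truncation functions are invented for illustration and aren't drawn from CodeMender or any real project.

```python
import random
import string

def reference_truncate(s: str, limit: int) -> str:
    # Trusted reference behavior: keep at most `limit` characters.
    return s[:limit]

def patched_truncate(s: str, limit: int) -> str:
    # The "patched" implementation under test, deliberately seeded
    # with an off-by-one bug that lets one extra character through.
    return s[:limit + 1]

def fuzz(trials: int = 10_000) -> None:
    # Fuzzing: feed both implementations the same random inputs.
    for _ in range(trials):
        s = "".join(random.choices(string.ascii_letters, k=random.randint(0, 64)))
        limit = random.randint(0, 32)
        ref = reference_truncate(s, limit)
        out = patched_truncate(s, limit)
        if ref != out:  # differential testing: any divergence is a bug
            print(f"Mismatch at limit={limit}: {ref!r} vs {out!r}")
            return
    print("No divergence found")

if __name__ == "__main__":
    fuzz()
```

Real harnesses run millions of coverage-guided inputs, but even this toy version finds the seeded off-by-one almost instantly, which is why divergence from a reference is such a cheap, reliable signal for validating a patch.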
This isn't Google's first foray into the intersection of AI and security. Roughly a year ago, the company announced that its AI-powered OSS-Fuzz project had detected over two dozen vulnerabilities that human reviewers had missed. One of them was a critical flaw that had persisted for two decades and may well have continued to evade human code sleuths. Together with Big Sleep, the company claims its AI-fueled tools have already proven their efficacy at finding zero-day vulnerabilities, the most dangerous kind of flaw in the security ecosystem. CodeMender is the natural evolution of those efforts to bolster code security with a helping hand from AI.
Why are tools like CodeMender the need of the hour?
The arrival of AI and its deep integration into everything from Office apps to code generation has opened a whole new universe of attack surfaces. In early October 2025, Anthropic published research showing that as few as 250 malicious documents are enough to poison a large AI model and open backdoors for all kinds of damage. It's worth noting that Anthropic has secured billions of dollars in funding from the likes of Google and Amazon, and its Claude chatbot now powers AI experiences in Microsoft's Copilot and Office suite.
"These vulnerabilities pose significant risks to AI security," the company said in its analysis, which was published in collaboration with the U.K. AI Security Institute and the Alan Turing Institute. The U.K. government has also warned that generative AI may lead to more rapid and widespread phishing, malware replication, and cyber intrusion, as well as trivial tasks such as cracking password protections with AI. Research also surmises that generative AI tools will open the floodgates for cyberattacks.
Even the Google Cloud Cybersecurity Forecast 2024 report raised an alarm about the use of generative AI tools by bad actors. Tools like CodeMender could level the playing field to some extent. Google is betting on AI-driven defense precisely because, as it puts it, "it will become increasingly difficult for humans alone to keep up." Notably, Google hasn't revealed when it plans to release CodeMender publicly, although it's quite likely the tool will be focused on enterprise customers, given how compute-intensive the whole system is.