Anthropic Unveils AI Feature in Claude Code to Scan Codebases and Suggest Patches

Written by: Mane Sachin

Anthropic has introduced a new security upgrade to its AI coding assistant, aiming to make AI-generated software safer before it reaches users.

The company announced that its Claude Code platform now includes a feature called Claude Code Security, which can scan entire codebases, detect vulnerabilities, and suggest software patches. The move comes as AI coding tools are being used more widely, not only by experienced developers but also by beginners building apps and websites with limited technical knowledge.

The new capability has been integrated into the web version of Anthropic’s Claude Code tool. According to the company, it is designed to identify security flaws that traditional scanning systems may miss, especially systems that rely heavily on known patterns or predefined rules.

For now, access to Claude Code Security is limited. It is being rolled out to select paid Enterprise and Team customers, with quicker access offered to maintainers of open-source repositories.

The launch also reflects growing concerns around the security of AI-generated code. A recent study by AI security startup Tenzai found that websites created using AI coding tools from companies such as OpenAI and Anthropic could, in some cases, be manipulated into leaking sensitive information or transferring funds unintentionally. The findings highlighted how easy it can be for hidden vulnerabilities to slip into rapidly generated code.

Anthropic says its approach goes beyond simple pattern matching. Rather than scanning only for known weaknesses, the system attempts to understand how different parts of a program interact and how data moves through the application, closer to the way a human security researcher reviews the logic of the code.
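To make that distinction concrete, here is a minimal, hypothetical Python sketch (not taken from Anthropic's materials) of the kind of flaw a purely rule-based scanner can overlook: the untrusted value travels through helper functions before reaching the database query, so spotting it requires tracing data flow across function boundaries rather than matching a suspicious pattern at a single line.

```python
import sqlite3

def get_filter(request_params):
    # Source: untrusted user input enters the program here.
    return request_params.get("name", "")

def build_query(name_filter):
    # A signature-based scanner looking for string concatenation
    # right next to execute() can miss this: the tainted value is
    # interpolated here, far from the actual database call.
    return f"SELECT id FROM users WHERE name = '{name_filter}'"

def run_report(request_params):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    # Sink: untrusted data reaches execute() only after passing
    # through two helper functions, so flagging it means connecting
    # the source to the sink across the whole call chain.
    query = build_query(get_filter(request_params))
    return conn.execute(query).fetchall()

# A payload that rewrites the query's logic: the WHERE clause
# becomes '' OR '1'='1', which matches every row in the table.
print(run_report({"name": "' OR '1'='1"}))  # -> [(1,)] instead of []
```

How Claude Code Security itself performs this analysis is not public; the snippet only illustrates why reasoning about how data moves through an application catches issues that local pattern matching does not.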

The feature works directly inside Claude Code’s existing environment. Developers can review detected issues and suggested fixes within the same dashboard. The process includes multiple verification stages and filtering of false positives. Importantly, no changes are applied automatically. The AI identifies potential problems and proposes patches, but developers retain full control over whether to implement them. Each issue is graded based on severity and paired with a confidence score to help teams prioritize.

Anthropic also revealed that it relies heavily on its own AI systems internally. The company has said that its AI coding tools now generate nearly all of its production code. For performance testing, Claude Code Security has been evaluated in competitive cybersecurity challenges and through collaboration with the Pacific Northwest National Laboratory to explore AI-driven protection for critical infrastructure.

In addition, the company claims its researchers identified more than 500 previously unknown vulnerabilities in production open-source codebases using its Claude Opus 4.6 model. It is working with maintainers on responsible disclosure and plans to expand its security efforts within the open-source community.

As AI continues to accelerate software development, the need for stronger built-in security measures is becoming increasingly clear. With this latest update, Anthropic appears to be positioning Claude Code not just as a productivity tool, but as a safeguard against the hidden risks of AI-generated software.
