
Anthropic has introduced Claude Code Security, a new feature built directly into its Claude Code platform on the web. The tool marks a targeted advance in how developers and security teams handle software vulnerabilities: instead of relying solely on static analysis or manual reviews, Claude Code Security applies frontier AI capabilities to examine code the way a skilled human security researcher would.
How the scanning process works
The process starts with the AI scanning full codebases. It traces data movement through systems, studies how different components interact, and identifies issues that simpler rule-based scanners overlook. These often include complex, high-severity vulnerabilities hidden deep in logic or dependencies. Once potential problems are flagged, the tool runs a multi-stage verification step. Claude re-evaluates its own findings, attempting to prove or disprove them to cut down on false positives. Each issue receives a severity rating and confidence score, so teams can prioritize the most critical fixes first.
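The triage step described above, where each issue carries a severity rating and a confidence score that together drive prioritization, can be sketched in a few lines. Note that the field names and scales below are assumptions for illustration, not Claude Code Security's actual output format:

```python
from dataclasses import dataclass

# Hypothetical finding record; the fields and scales are illustrative,
# not the tool's real schema.
@dataclass
class Finding:
    title: str
    severity: int      # assumed scale: 1 (low) .. 4 (critical)
    confidence: float  # assumed scale: 0.0 .. 1.0

def prioritize(findings):
    """Sort so the most severe, highest-confidence issues come first."""
    return sorted(findings, key=lambda f: (f.severity, f.confidence), reverse=True)

findings = [
    Finding("SQL injection in login handler", severity=4, confidence=0.92),
    Finding("Verbose error message leaks stack trace", severity=1, confidence=0.88),
    Finding("Path traversal in file upload", severity=4, confidence=0.61),
]

for f in prioritize(findings):
    print(f"[sev {f.severity} / conf {f.confidence:.2f}] {f.title}")
```

Sorting on the (severity, confidence) pair means a high-severity but lower-confidence finding still outranks a confidently verified low-severity one, which matches the "fix the most critical issues first" ordering the article describes.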
Results appear in a clear dashboard where humans review everything. The AI then proposes specific software patches tailored to the detected flaws. Developers accept, modify, or reject these suggestions within their usual workflow. This human-in-the-loop approach keeps control with people while speeding up remediation.
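The accept/modify/reject review loop amounts to a small decision function over each proposed patch. This is a minimal sketch of that logic; the names are hypothetical and the real product runs this flow in its dashboard, not through any API shown here:

```python
from enum import Enum
from typing import Optional

class Decision(Enum):
    ACCEPT = "accept"
    MODIFY = "modify"
    REJECT = "reject"

def apply_review(patch: str, decision: Decision,
                 edited: Optional[str] = None) -> Optional[str]:
    """Return the patch text to merge, or None when the reviewer rejects it."""
    if decision is Decision.ACCEPT:
        return patch
    if decision is Decision.MODIFY:
        if edited is None:
            raise ValueError("a MODIFY decision must supply the edited patch")
        return edited
    return None  # REJECT: nothing is merged; the human stays in control
```

The key property is that no patch reaches the codebase without an explicit human decision, which is the human-in-the-loop guarantee the paragraph above describes.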
Research backing and real-world results
The capability draws from more than a year of research into Claude’s cybersecurity strengths. Anthropic tested it internally with red teams, in Capture-the-Flag competitions, and alongside partners like Pacific Northwest National Laboratory. In one collaboration, Claude Opus 4.6 uncovered over 500 bugs in open-source projects that had gone undetected for years, some for decades. These results show the model’s ability to reason about code at a level beyond conventional tools.
Anthropic positions this as timely for the evolving threat landscape. As AI models grow more capable, attackers can use them to discover exploitable weaknesses more quickly. Defenders gain an edge by deploying similar technology to spot and close those gaps first. The company expects AI to scan a large portion of global code soon, creating both risks and opportunities.
Availability and pricing
Availability is limited for now. Claude Code Security is in research preview for Enterprise and Team plan customers. Open-source maintainers can apply for free, expedited access through Anthropic’s channels. This rollout targets organizations with substantial codebases where security matters most, such as tech companies, financial services, and critical infrastructure providers.
The tool integrates seamlessly with existing Claude Code features. Users upload or connect repositories, initiate scans, and work through findings without switching applications. It complements recent Claude updates, including stronger models like Sonnet 4.6 and the Opus variants, which already boost coding and agentic tasks. There is no separate price for the feature during the research preview; scan usage counts toward overall plan limits at standard Claude rates.
Impact on developer workflows
This release fits into Anthropic’s focus on safe, useful AI. By embedding security directly into developer tools, it lowers barriers to proactive vulnerability management. Teams spend less time hunting issues manually and more on building features. In fast-moving software environments, this could reduce breach risks and compliance headaches.
Broader implications touch software development practices. Traditional security tools excel at known patterns but struggle with novel or context-dependent flaws. AI reasoning fills that gap by understanding intent and architecture. While not a full replacement for human experts, it amplifies their work, much like how code completion tools changed programming.
Developers gain efficiency without sacrificing quality. Security analysts handle higher-level oversight instead of repetitive scans. Enterprises benefit from faster patch cycles in production code. Open-source communities could see cleaner, more secure libraries if maintainers adopt the free access option.
Safety and responsible rollout
Anthropic emphasizes responsibility. Every finding goes through verification to avoid misleading alerts. The company continues safety testing and plans wider rollout based on feedback. This cautious approach mirrors their stance on powerful AI deployment.
For developers facing growing code complexity and supply-chain threats, Claude Code Security offers a practical step forward. It turns AI from a general assistant into a specialized defender of code integrity. As more teams integrate it, the bar for secure software rises across the industry.