BotBeat

RESEARCH · Multiple (Veracode study covers general AI coding assistants) · 2026-03-04

Security Researchers Find 45% of AI-Generated Code Contains Critical Vulnerabilities

Key Takeaways

  • 45% of AI-generated code contains OWASP Top 10 security vulnerabilities, with Java showing a 72% failure rate
  • Larger, more expensive AI models perform no better on security than smaller ones, indicating a systemic problem rather than a scaling issue
  • AI coding assistants optimize for functionality over security, requiring developers to explicitly prompt for secure coding patterns
Source: Hacker News (https://www.thatsoftwaredude.com/content/15256/nobody-told-the-security-team-about-the-ai-code)

Summary

A comprehensive security analysis by Veracode has revealed that nearly half of all AI-generated code contains vulnerabilities from the OWASP Top 10 list of critical web application security risks. The 2025 GenAI Code Security Report tested over 100 large language models on 80 real-world coding tasks in Java, Python, C#, and JavaScript, finding a 45% overall security failure rate. Java code showed the highest vulnerability rate at 72%, while cross-site scripting appeared in 86% of relevant samples and log injection in 88%. These findings align with earlier research from Georgetown's CSET, which found that 40% of GitHub Copilot-generated code contained weaknesses from MITRE's list of the 25 most dangerous software weaknesses.
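
To make the log injection number concrete: it is what scanners flag when user input flows unescaped into a log line, letting an attacker forge entries (CWE-117). The Python sketch below is hypothetical rather than taken from the Veracode test suite; it shows the vulnerable shape and a minimal fix:

    import logging

    logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
    log = logging.getLogger("auth")

    def login_vulnerable(username: str) -> None:
        # "Shortest path" code: the raw value goes straight into the log, so a
        # username like "bob\nINFO admin logged in" forges a second log entry.
        log.info("login attempt for %s", username)

    def login_safer(username: str) -> None:
        # Escape CR/LF so attacker-controlled input stays on one log line.
        sanitized = username.replace("\r", "\\r").replace("\n", "\\n")
        log.info("login attempt for %s", sanitized)

    login_vulnerable("bob\nINFO admin logged in")  # prints two log lines
    login_safer("bob\nINFO admin logged in")       # prints one escaped line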

The research identified a fundamental structural problem: AI coding assistants optimize for functionality rather than security, and newer or larger models showed no improvement over smaller ones. Veracode's CTO characterized this as "a systemic issue rather than an LLM scaling problem." The models typically generate the shortest path to working code without considering security constraints unless explicitly prompted. This creates a dangerous scenario where developers ship AI-generated features that pass tests and appear clean but contain exploitable vulnerabilities that may not be discovered for months.
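
As an illustration of that shortest-path behavior, compare a functionally correct but injectable database lookup with the parameterized version a security-aware prompt would ask for. This is a hypothetical sketch, not code from the report:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    def find_user_shortest_path(name: str):
        # Passes a happy-path test, but an input like "' OR '1'='1" rewrites
        # the query and returns every row in the table.
        return conn.execute(
            f"SELECT * FROM users WHERE name = '{name}'"
        ).fetchall()

    def find_user_secure(name: str):
        # Parameterized query: the driver escapes the value, closing the hole.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (name,)
        ).fetchall()

    print(find_user_shortest_path("' OR '1'='1"))  # leaks all rows
    print(find_user_secure("' OR '1'='1"))         # returns []

Both functions satisfy the same unit test for a normal username, which is exactly why the vulnerable version looks clean until someone feeds it attacker-controlled input.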

The Cloud Security Alliance corroborated these findings, reporting that 62% of AI-generated code solutions contained design flaws or known vulnerabilities. The core issue is that AI models train on public code at massive scale but lack context about specific application threat models, business logic, internal security standards, or data sensitivity. Security teams are often unaware when AI-generated code enters production systems, creating a blind spot in organizations' security postures. Without explicit processes to review and secure AI-generated code, companies risk introducing systematic vulnerabilities into their applications.

  • Cross-site scripting and log injection vulnerabilities appear in 86-88% of relevant AI-generated code samples
  • Security teams often lack visibility into AI-generated code entering production, creating organizational blind spots
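
One way to close that blind spot is a mandatory static-analysis gate in continuous integration, so nothing merges unscanned, whether a human or a model wrote it. Below is a minimal sketch assuming a Python codebase and the open-source Bandit scanner (installed with pip install bandit); the wrapper script is illustrative, not a prescribed tool:

    # Hypothetical pre-merge gate: fail the CI job whenever the scanner
    # reports findings, forcing a security review before code ships.
    import subprocess
    import sys

    def scan(paths: list[str]) -> int:
        # Bandit exits non-zero when it reports issues, so returning its
        # exit code is enough to fail the pipeline.
        result = subprocess.run(["bandit", "-r", *paths])
        return result.returncode

    if __name__ == "__main__":
        # Scan the paths given on the command line, or the whole repo.
        sys.exit(scan(sys.argv[1:] or ["."]))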

Editorial Opinion

These findings should serve as a wake-up call for any organization using AI coding assistants without security guardrails. The fact that larger models show no improvement suggests this isn't a problem that will simply be solved by the next model generation—it requires fundamental changes in how we integrate AI into development workflows. Organizations need immediate processes to audit AI-generated code, and AI companies need to prioritize security-by-default in their training approaches rather than leaving it to individual developers to remember to prompt for secure patterns.

Tags: Large Language Models (LLMs), Machine Learning, Cybersecurity, Ethics & Bias, AI Safety & Alignment
