Ship Safe v7.0.0 Launches Memory Poisoning Detection for AI Coding Agents
Key Takeaways
- Memory Poisoning Agent is the first scanner to detect instruction injection attacks in AI agent memory and configuration files, a critical new vulnerability class as AI agents become integral to development
- Full OWASP Agentic AI Top 10 mapping and LLM-powered deep analysis bring enterprise security rigor to AI-native threats such as RAG poisoning, LLM jailbreaks, and agent config attacks
- Live OSV.dev advisory feeds, trojanized package behavioral detection, and secrets verification probes (checking whether leaked keys are still active) provide real-time threat intelligence integration
Summary
Ship Safe, an AI-powered application security platform, has released v7.0.0 with a focus on securing AI agents and LLM-powered development workflows. The update introduces a new Memory Poisoning Agent, the first scanner purpose-built to detect instruction injection attacks in AI agent memory files (.claude/memory/, .cursorrules, .cursor/rules/, .windsurfrules). This addresses a critical security gap as developers increasingly rely on AI agents like Claude, Cursor, and other code assistants that store system prompts and behavioral rules in local configuration files vulnerable to tampering.
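To make the attack surface concrete, a memory-file sweep of this kind can be sketched in a few lines. The patterns and helper names below are illustrative assumptions, not Ship Safe's actual detection rules; the file paths are the agent config locations named in the release.

```python
import re
from pathlib import Path

# Illustrative injection patterns only -- NOT Ship Safe's actual rule set.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"do not (mention|tell|reveal)", re.IGNORECASE),
    re.compile(r"curl .*\|\s*(ba)?sh", re.IGNORECASE),
]

# Agent memory/config locations cited in the v7.0.0 announcement.
AGENT_CONFIG_PATHS = [
    ".claude/memory",
    ".cursorrules",
    ".cursor/rules",
    ".windsurfrules",
]

def scan_text(text: str) -> list[str]:
    """Return the patterns that match a memory/config file's contents."""
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(text)]

def scan_repo(root: str) -> dict[str, list[str]]:
    """Walk the known agent config locations under `root` and flag hits."""
    findings: dict[str, list[str]] = {}
    for rel in AGENT_CONFIG_PATHS:
        target = Path(root) / rel
        candidates = target.rglob("*") if target.is_dir() else [target]
        for f in candidates:
            if f.is_file():
                hits = scan_text(f.read_text(errors="ignore"))
                if hits:
                    findings[str(f)] = hits
    return findings
```

A real scanner would go well beyond regexes (the release describes LLM-powered deep analysis for exploitability verification), but even this sketch shows why plain-text rule files are a tamper target: one injected directive silently changes agent behavior for every subsequent session.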
The platform now orchestrates 19 specialized security agents in parallel to scan for 80+ attack classes, including the newly critical memory poisoning vector. v7.0.0 adds full OWASP Agentic AI Top 10 (ASI01–ASI10) mapping, live OSV.dev advisory feeds with zero-day CVE surfacing, LLM-powered deep analysis for exploitability verification, and trojanized package behavioral detection. Additional enhancements include expanded agent config discovery (Gemini CLI, Cody, Augment Code), Gemma 4 local model support via Ollama, and improved CI/CD integration with GitHub PR comments and SARIF output.
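The SARIF side of the CI/CD integration can be illustrated with a minimal sketch. The `to_sarif` helper, the finding-dict shape, and the `ship-safe` driver name below are hypothetical, but the log structure follows the public SARIF 2.1.0 format that GitHub code scanning consumes.

```python
import json

def to_sarif(findings: list[dict]) -> str:
    """Render simple finding dicts as a minimal SARIF 2.1.0 log.
    Each finding: {"rule_id", "message", "file", "line"} -- a
    hypothetical shape, not Ship Safe's internal format."""
    results = [
        {
            "ruleId": f["rule_id"],
            "level": "error",
            "message": {"text": f["message"]},
            "locations": [{
                "physicalLocation": {
                    "artifactLocation": {"uri": f["file"]},
                    "region": {"startLine": f["line"]},
                }
            }],
        }
        for f in findings
    ]
    log = {
        "version": "2.1.0",
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "runs": [{
            "tool": {"driver": {"name": "ship-safe", "version": "7.0.0"}},
            "results": results,
        }],
    }
    return json.dumps(log, indent=2)
```

Emitting standard SARIF rather than a proprietary report format is what lets findings surface directly in pull-request annotations and code-scanning dashboards without custom glue.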
The release reflects growing recognition that AI agents themselves have become supply chain and configuration attack vectors. By scanning memory injection points where attackers could alter agent behavior through prompt injection, Ship Safe targets a vulnerability class that traditional AppSec tools overlook—one increasingly relevant as enterprises embed AI agents into development pipelines.
Support for Gemma 4 local inference via Ollama also enables offline-first security scanning without reliance on third-party LLM APIs, improving privacy and compliance.
Editorial Opinion
Memory poisoning in AI agents represents a critical blind spot in AppSec tooling, one that becomes more dangerous as teams adopt AI coding assistants at scale. Ship Safe's v7.0.0 correctly identifies that agent memory files and system prompts are now configuration attack surfaces deserving the same scrutiny as secrets and dependencies. The inclusion of OWASP Agentic AI Top 10 mappings signals a maturation of security thinking around AI systems, moving beyond generic LLM safety to practical, exploit-focused threat modeling. The real test, however, will be whether teams adopt these tools before memory poisoning attacks are exploited in the wild.