AI Agents Promise to 'Run the Business'—But Who's Liable When Things Go Wrong?
Key Takeaways
- Enterprise AI vendors are marketing autonomous agents to manage critical business functions, but legal responsibility for failures remains ambiguous and contested between vendors and users
- Because LLMs and AI agents are inherently non-deterministic, vendors struggle to offer meaningful contractual warranties, while regulators hold end-user organizations ultimately accountable regardless of vendor claims
- AI-driven decisions in sensitive areas like hiring, financial reporting, and supply chain management expose organizations to data protection, regulatory, and operational risks without clear liability frameworks
Summary
As enterprise software vendors like Oracle tout AI agents capable of autonomously managing critical business decisions in HR, finance, and supply chain management, a fundamental legal question remains unanswered: who bears responsibility when these systems fail? The stakes are enormous, with vendors eyeing a trillion-dollar opportunity, but the unpredictable nature of large language models and agentic AI systems creates contractual and regulatory ambiguity that neither vendors nor users have adequately resolved. Tech lawyers and regulators are increasingly clear that while vendors may resist liability for inherently non-deterministic AI systems, end-user organizations remain legally accountable for the decisions their AI makes—a disconnect that leaves businesses exposed to significant operational and compliance risks.
Regulatory bodies like the UK's Financial Reporting Council have emphasized that organizations cannot "blame it on the box," reinforcing that people and firms remain responsible for outcomes even when AI systems execute decisions autonomously. The challenge is compounded by AI's fundamentally unpredictable behavior, which makes it difficult for vendors to provide meaningful warranties and for organizations to confidently deploy these systems in high-stakes domains. Legal experts note that traditional vendor liability frameworks, which assume predictable system behavior, break down in the context of agentic AI, leaving both parties in contractual negotiations without clear precedent or protective mechanisms.
Until that changes, technology buyers must negotiate strong contractual provisions around AI explainability, bias monitoring, and vendor accountability, since current legal precedent offers limited protection.
Editorial Opinion
The gap between vendor marketing and legal reality represents a critical market failure in enterprise AI adoption. Vendors are selling transformative autonomous capabilities while contractually disclaiming responsibility for the inherently unpredictable systems they deploy—effectively transferring all downside risk to customers. Until AI liability frameworks mature and vendors are held accountable for system behavior, organizations deploying agentic AI in mission-critical functions are engaging in a high-stakes gamble with unclear insurance and limited recourse.