BotBeat

Liquid AI · PRODUCT LAUNCH · 2026-03-05

Liquid AI Launches LFM2-24B-A2B: Local Tool-Calling Agent Runs Entirely on Consumer Hardware

Key Takeaways

  • LFM2-24B-A2B enables complete on-device AI agent operation with no cloud dependencies, addressing privacy and compliance requirements in regulated industries
  • The model demonstrated effective tool-calling across 67 tools and 13 MCP servers on consumer hardware (Apple M4 Max), with sub-second multi-step execution
  • The LocalCowork open-source desktop agent showcases real-world capabilities including security scanning, audit trails, document processing, and system operations
Source: Hacker News (https://www.liquid.ai/blog/no-cloud-tool-calling-agents-consumer-hardware-lfm2-24b-a2b)

Summary

Liquid AI has announced LFM2-24B-A2B, a foundation model specifically designed to power AI agents that run entirely on consumer hardware without cloud connectivity. The company demonstrated the model's capabilities through LocalCowork, an open-source desktop agent that performs complex tool-calling tasks—including security scanning, file operations, and system management—directly on a laptop. Testing was conducted on an Apple M4 Max with 36GB unified memory, evaluating the model across 67 tools spanning 13 Model Context Protocol (MCP) servers.

The model addresses a critical challenge in AI deployment: enabling fast, reliable tool selection and execution while keeping all data local. In regulated industries where data privacy is paramount, this approach eliminates the security and compliance risks associated with cloud-based AI services. LFM2-24B-A2B demonstrated sub-second response times for multi-step tool chains, maintaining interactive performance even with large tool menus and complex workflows.
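The selection-and-dispatch step described above can be sketched as a minimal in-process loop. Everything below is an illustrative assumption: the tool names, handlers, and JSON call format are hypothetical stand-ins, not Liquid AI's or LocalCowork's actual interface.

```python
import json

# Hypothetical local tool registry: each entry maps a tool name to a handler
# that runs entirely on-device, with no network round-trip. Real agents like
# LocalCowork expose far larger menus (the demo covered 67 tools).
TOOLS = {
    "word_count": lambda args: len(args["text"].split()),
    "upper": lambda args: args["text"].upper(),
}

def dispatch_tool_call(raw_call: str):
    """Parse a model-emitted tool call (JSON) and run the matching local handler.

    Keeping parse-and-dispatch in-process is what makes multi-step chains
    fast on local hardware: there is no per-step network latency.
    """
    call = json.loads(raw_call)
    name = call["name"]
    args = call.get("arguments", {})
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](args)

# One step of a chain; in a real loop the result is fed back to the model
# as context for selecting the next tool.
result = dispatch_tool_call(
    '{"name": "word_count", "arguments": {"text": "all data stays local"}}'
)
print(result)  # → 4
```

In a production agent the registry would be populated from MCP server manifests rather than hard-coded, but the dispatch shape is the same.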

Liquid AI has made the LocalCowork source code available in their Cookbook, allowing developers to build privacy-preserving AI agents on commodity hardware. The model runs using llama-server with Q4_K_M GGUF quantization and flash attention, making it accessible on high-end consumer laptops. This release represents a significant step toward practical, privacy-first AI agents that can operate in environments where cloud connectivity is restricted or prohibited.

All processing, including model inference and data handling, occurs locally on the laptop, eliminating data-transmission risks.
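For context, a launch command consistent with the setup described above might look like the following. This is a sketch, not Liquid AI's documented invocation: the GGUF filename is a hypothetical placeholder, and exact flag spellings vary across llama.cpp versions.

```shell
# Hypothetical llama-server launch for a Q4_K_M quantized export.
# The model path is a placeholder; check your llama.cpp build's --help
# for the current spelling of the flash-attention flag.
llama-server \
  -m ./lfm2-24b-a2b-Q4_K_M.gguf \
  --flash-attn \
  -c 8192 \
  --port 8080
```

Once running, llama-server exposes an OpenAI-compatible HTTP endpoint on localhost, so existing agent frameworks can target it without sending data off the machine.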

Editorial Opinion

Liquid AI's focus on on-device inference addresses a genuine market need that cloud-first AI companies have largely ignored: many organizations simply cannot send sensitive data to external servers, regardless of convenience. By optimizing for consumer hardware rather than datacenter GPUs, LFM2-24B-A2B makes privacy-preserving AI agents economically viable for regulated industries like healthcare, finance, and legal services. The sub-second tool-calling performance on an M4 Max suggests the model could enable a new category of local-first productivity applications that compete with cloud services on capability while offering superior privacy guarantees.

Large Language Models (LLMs) · AI Agents · MLOps & Infrastructure · Cybersecurity · Privacy & Data

Suggested

Anthropic
RESEARCH

Inside Claude Code's Dynamic System Prompt Architecture: Anthropic's Complex Context Engineering Revealed

2026-04-05
Oracle
POLICY & REGULATION

AI Agents Promise to 'Run the Business'—But Who's Liable When Things Go Wrong?

2026-04-05
Google / Alphabet
RESEARCH

Deep Dive: Optimizing Sharded Matrix Multiplication on TPU with Pallas

2026-04-05
© 2026 BotBeat
About · Privacy Policy · Terms of Service · Contact Us