Outcry Launches Privacy-First On-Device AI Mentor for Activists and Organizers
Key Takeaways
- Outcry brings LLM-based AI mentoring to activists with complete privacy protections: no cloud infrastructure, no tracking, no accounts, and no data collection whatsoever
- The model is fine-tuned on actual activist conversations and movement scholarship (Sharp, Turchin, Combahee River Collective) rather than generic internet data, enabling strategic thinking about organizing work
- Available free on iPhone, iPad, and Mac with Apple silicon, the app takes a security-first approach designed for organizers facing surveillance or operating in restricted contexts
Summary
Outcry, a privacy-first AI mentor application developed by Micah Bornfree (co-creator of Occupy Wall Street), has launched on iOS, iPadOS, and macOS. The app runs entirely on-device, with no cloud infrastructure, data collection, or user accounts, making it specifically designed for activists and organizers operating in surveilled or restricted environments. Trained on thousands of real activist conversations and movement case studies rather than generic internet data, Outcry assists users with campaign strategy, speech writing, movement theory, tactical problem-solving, and strategic dialogue. The application is completely free, with no subscriptions, paywalls, or ads. It requires one large initial download to store the AI model locally, after which it operates fully offline, even in airplane mode or in physically isolated locations.
- Created by Micah Bornfree, author of 'The End of Protest' and co-creator of Occupy Wall Street, bringing 20+ years of activist expertise to the product design
Editorial Opinion
Outcry challenges the dominant narrative that AI tools require cloud infrastructure and user surveillance to function effectively. By moving the entire model on-device and eliminating data collection entirely, Bornfree has created an AI product that prioritizes user autonomy and security over corporate extraction—a paradigm that other developers and platforms should consider emulating. This launch demonstrates that sophisticated AI assistance is compatible with radical privacy and can be designed for high-risk contexts without compromising capability.