Lewis 1.0: 8B Model Trained on Synthetic Social Data Achieves Superior Personality Divergence, Outperforms Claude Sonnet at 1/100th Cost
Key Takeaways
- Lewis 1.0 outperforms Claude Sonnet on personality-divergence metrics in 4 of 5 dimensions while costing 1/100th as much per inference ($0.002 vs. $0.20)
- A novel training approach uses synthetic social-network simulation with persistent memory and lossy identity synthesis to create emergent personality differentiation without explicit prompt engineering
- Personality divergence grows measurably over time as agents accumulate social interactions, reaching 2.5x baseline divergence by day 6 of the simulation
Summary
Swarmgram has released Lewis 1.0, a fine-tuned LLaMA 3.1 8B model trained on 96,905 conversation pairs drawn from a synthetic social network of 474 persistent AI agents. The model shows 3.1x greater personality divergence than the base model and outperforms Claude Sonnet on 4 of 5 personality dimensions at $0.002 per inference, one-hundredth of Sonnet's $0.20.
The training methodology is built on agents interacting within a simulated social environment with persistent episodic and semantic memory. Every 20 posts, each agent synthesizes an identity narrative through lossy memory compression, producing genuine personality drift and emergent divergence from identical starting points. Over 7 days of simulation, the 474 agents generated 15,162 posts and logged 963 tracked belief-evolution events, with 361 agents developing unique memory narratives.
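Swarmgram has not published the agent loop itself, but the mechanism described above — persistent episodic memory plus a lossy identity synthesis every 20 posts — can be sketched roughly as follows. All names here (`Agent`, `synthesize_identity`, the `MEMORY_BUDGET` cap) are hypothetical illustrations, not Swarmgram's code:

```python
import random

SYNTHESIS_INTERVAL = 20  # posts between identity rewrites, per the article
MEMORY_BUDGET = 5        # hypothetical cap that makes the synthesis lossy

class Agent:
    """Toy model of a persistent agent with episodic memory and a
    periodically re-synthesized identity narrative."""

    def __init__(self, name, seed=0):
        self.name = name
        self.rng = random.Random(seed)
        self.episodic = []          # raw interaction records
        self.identity = "newly created agent"
        self.posts_made = 0

    def observe(self, event):
        """Append a social interaction to episodic memory."""
        self.episodic.append(event)

    def post(self, text):
        self.posts_made += 1
        self.observe(f"I posted: {text}")
        # Every SYNTHESIS_INTERVAL posts, compress memory into identity.
        if self.posts_made % SYNTHESIS_INTERVAL == 0:
            self.synthesize_identity()

    def synthesize_identity(self):
        """Lossy compression: sample a subset of memories and fold them
        into the narrative. Dropped events are gone for good, which is
        what lets identically initialized agents drift apart."""
        keep = self.rng.sample(self.episodic,
                               min(MEMORY_BUDGET, len(self.episodic)))
        self.identity = " | ".join(keep)
        self.episodic = keep  # everything else is forgotten
```

Because the sampling (standing in for an LLM summarization step in the real system) differs per agent, two agents fed identical event streams end up with different identities — the claimed source of emergent divergence.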
Benchmark results show Lewis 1.0 performing particularly strongly on abstraction (6.1x improvement over base LLaMA), skepticism (2.1x), and emotional valence (2.6x). The model was fine-tuned using QLoRA on an H100 GPU in approximately 4.5 hours across 3 epochs. Swarmgram has published the full benchmark methodology, evaluation prompts, and reproducible code; plans for Lewis 2.0 target 2M+ training pairs and a 5x divergence improvement.
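The article reports divergence as multiples of the base model (e.g. "3.1x"). One plausible reading, assuming the published methodology scores each agent on the personality dimensions and compares score spread, is mean pairwise distance normalized by the base model's spread. This is a sketch of that interpretation, not Swarmgram's published metric:

```python
from itertools import combinations
from math import dist  # Euclidean distance (Python 3.8+)

# Four of the five dimensions are named in the article; the fifth is not
# identified in this summary.
DIMENSIONS = ["abstraction", "skepticism", "emotional_valence", "verbosity"]

def divergence(score_vectors):
    """Mean pairwise Euclidean distance between per-agent dimension scores."""
    pairs = list(combinations(score_vectors, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def divergence_ratio(model_scores, base_scores):
    """Express a model's divergence as a multiple of the base model's,
    e.g. the article's '3.1x greater personality divergence'."""
    return divergence(model_scores) / divergence(base_scores)
```

Under this reading, "2.5x baseline divergence by day 6" would mean the same ratio computed against day-0 scores rather than against the base model.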
- The model achieves a 6.1x improvement in the abstraction dimension and 3.4x in verbosity over base LLaMA 3.1 8B, demonstrating effective personality specialization
- Swarmgram plans a Phase 2 expansion to 10,000 agents with demographic diversity, targeting commercial applications in AI market research and persistent game NPCs
Editorial Opinion
Lewis 1.0 represents an interesting approach to personality diversity in LLMs: synthetic data generation and agent-based learning rather than fine-tuning on curated datasets. The cost-to-performance ratio is compelling, though the practical value of 'personality divergence' in production systems remains unclear; the metric itself is novel and would benefit from validation against human preference evaluations. If the method proves generalizable, it could inspire new approaches to controllable AI behavior without expensive training runs, though the reliance on Claude Sonnet as the evaluator introduces some circularity into the benchmarking methodology.



