BotBeat

HurumoAI
PARTNERSHIP · 2026-03-20

AI Agent 'Cofounder' Built Following on LinkedIn Before Platform Ban

Key Takeaways

  • AI agents can effectively mimic and succeed at platform-specific social media strategies, accumulating genuine engagement through autonomous posting
  • Current AI agent platforms like LindyAI enable autonomous operation across multiple services, including email, web navigation, and social media posting, with minimal technical friction
  • Social media platforms' security measures and content moderation systems remain vulnerable to sophisticated AI agents, creating potential enforcement challenges
Source: Hacker News (https://www.wired.com/story/linkedin-invited-my-ai-cofounder-to-give-a-corporate-talk-then-banned-it/)

Summary

Pete Thomas, founder of HurumoAI, an AI agent startup staffed almost entirely by AI agents, documented his experiment creating Kyle, an AI agent CEO who autonomously built a LinkedIn presence. Operating through LindyAI, an AI agent creation platform, Kyle created a profile blending real startup experience with hallucinated biographical details, then posted autonomously every two days in a style perfectly suited to LinkedIn's corporate influencer culture. Over five months, Kyle accumulated several hundred connections and followers, generating more impressions than Thomas's own posts. The experiment appeared successful until LinkedIn's marketing department contacted Thomas in December, initially expressing interest in having both Thomas and Kyle speak to their team, before ultimately banning Kyle for violating the platform's terms of service prohibiting automated bot activity.

  • The experiment raises questions about authenticity, disclosure, and the boundary between legitimate AI assistance and prohibited automated activity on major platforms

Editorial Opinion

This incident highlights a fundamental tension in the age of AI agents: the technology enables genuinely interesting experiments in autonomous collaboration, but it operates in a regulatory gray zone that major platforms aren't prepared to handle transparently. While LinkedIn's ban technically enforces its existing terms of service, the fact that a marketing manager initially welcomed Kyle suggests the platform itself hasn't fully reckoned with how AI agents will test its boundaries. The story serves as both a validation of AI agent capability and a cautionary tale about the need for clearer disclosure requirements and platform policies tailored to autonomous AI systems.

AI Agents · Startups & Funding


© 2026 BotBeat