BotBeat

NewsGuard
PARTNERSHIP | 2026-03-12

NewsGuard and Pangram Launch AI Detection Tool to Combat AI-Generated Misinformation and Content Farms

Key Takeaways

  • NewsGuard and Pangram Labs launched an automated AI detection system that identified 3,000 AI content farms during testing, more than double previous manual detection rates
  • The tool combines Pangram's proprietary AI detection models with NewsGuard's expert human review to minimize false positives and verify findings
  • AI content farms often masquerade as legitimate news outlets under generic names, spreading misinformation about politicians, brands, and public health while monetizing through ad fraud
Source: Hacker News (https://www.adweek.com/media/newsguard-tracking-ai-slop-content-farms/)

Summary

NewsGuard, a media rating and misinformation-tracking firm, has launched a new AI content farm detection tool in collaboration with AI detection startup Pangram Labs to identify news and information sites hosting significant amounts of AI-generated content. The system uses Pangram's proprietary AI models to automatically evaluate entire domains for AI-generated material, flagging suspicious sites for manual review by NewsGuard analysts. The detection system has identified approximately 3,000 AI content farm sites during its six-month testing period—more than double what NewsGuard could identify using manual techniques alone.

The tool targets sites that meet three criteria: containing substantial AI-generated content, lacking disclosure that content is AI-created, and appearing deceptively authentic to average users. Many flagged sites operate under generic news-like names and spread false information about political figures, brands, and public health topics. NewsGuard provided examples including Citizen Watch Report, which falsely claimed U.S. senators spent $814,000 on Ukrainian hotels, and News 24, which fabricated a story about Coca-Cola threatening to withdraw Super Bowl sponsorship over Bad Bunny's halftime performance. These made-for-advertising (MFA) sites generate revenue through ad arbitrage while deceiving viewers and advertisers alike.

  • The detection system addresses a critical gap in combating inauthentic content, as sophisticated AI-generated articles become increasingly difficult to distinguish from human-written journalism
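Pangram's models and NewsGuard's review workflow are proprietary, so no implementation details are public. Purely as an illustration of the three-criteria flagging logic described above, a domain-level check might look like the following sketch; every name, field, and threshold here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Article:
    text: str
    ai_score: float     # hypothetical detector output: 0.0 (human) .. 1.0 (AI)
    discloses_ai: bool  # whether the page labels the content as AI-generated

def flag_domain(articles: list[Article],
                ai_threshold: float = 0.9,
                share_threshold: float = 0.5) -> bool:
    """Flag a domain for human review when a substantial share of its
    articles score as AI-generated and none of them disclose it.
    Thresholds are illustrative, not NewsGuard's actual criteria."""
    if not articles:
        return False
    ai_like = [a for a in articles if a.ai_score >= ai_threshold]
    # Criterion 1: substantial AI-generated content across the domain
    substantial = len(ai_like) / len(articles) >= share_threshold
    # Criterion 2: no disclosure that the flagged content is AI-created
    undisclosed = not any(a.discloses_ai for a in ai_like)
    return substantial and undisclosed
```

In this sketch a flagged domain would then go to a human analyst rather than being blocked automatically, mirroring the flag-then-review division of labor the article describes. The third criterion, appearing deceptively authentic to average users, is a judgment call left to the reviewer.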

Editorial Opinion

This partnership represents a pragmatic approach to an urgent problem: using AI to detect AI-generated misinformation before it proliferates. While the tool's ability to identify 3,000 content farms is impressive, the cat-and-mouse game between detection and evasion suggests this is only a temporary solution. The real challenge lies in making AI content detection accessible to newsrooms and platforms at scale, and in establishing industry standards for disclosure that make deceptive MFA sites unprofitable from the start.

Generative AI · Entertainment & Media · Regulation & Policy · Misinformation & Deepfakes

© 2026 BotBeat