Apache Log4j Team Overwhelmed by AI-Generated Security Reports, Calls Out 'Denial-of-Service' via Bug Bounty Program
Key Takeaways
- Apache Log4j received 50 security reports in three months (Dec 2025-Feb 2026) compared to 32 in the previous 16 months, with only ~5% representing legitimate issues
- AI-generated security report spam is consuming disproportionate volunteer resources, with the team describing it as a "denial-of-service" situation through their bug bounty program
- The problem is industry-wide: curl closed its bug bounty program over similar issues, and OpenSSF is developing best practices to combat "AI slop" in security reporting
Summary
The Apache Log4j development team has publicly raised concerns about a surge in AI-generated security reports that is consuming volunteer resources and hampering legitimate development work. According to collaborator ppkarwasz, the project received only 32 security reports between July 2024 and November 2025, resulting in 3 published vulnerabilities. The three months from December 2025 through February 2026, by contrast, brought 50 reports, the vast majority of them AI-generated submissions of extremely low quality.
The team describes the situation as "effectively a denial-of-service" through their YesWeHack bug bounty program, with perhaps only one out of twenty recent reports representing even a minor legitimate issue. Despite the low quality, the volunteer maintainers continue treating these AI-generated submissions with the same high-priority response as genuine security concerns, creating an unsustainable burden. For context, the community opened only about 20 regular bug reports during the same three-month period, highlighting how security report spam now dominates team attention.
The problem extends beyond Log4j: the curl project recently closed its bug bounty program entirely in response to similar AI-generated spam, and the OpenSSF Vulnerability Handling Working Group is developing best practices to address what is being termed "AI slop" in security reporting. The Apache team's public acknowledgment signals growing frustration across the open-source ecosystem with AI tools being misused to flood security channels with low-quality, automated submissions that drain limited volunteer resources.



