EditLens: New Research Reveals How AI-Edited Text Can Be Detected and Quantified
Key Takeaways
- EditLens reliably detects AI-edited text, reaching a 94.7% F1 score on binary classification and outperforming existing detection methods that target only fully AI-generated content
- The model quantifies the degree of AI editing present in a text, rather than offering only a binary verdict on whether AI was involved
- AI-edited text exhibits distinct characteristics, separable from both human-written and fully AI-generated text, opening new research directions
Summary
Researchers have developed EditLens, a machine learning model that detects and quantifies the extent of AI editing in text, addressing a growing need as large language models are increasingly used not to generate new text from scratch but to revise and refine human-written content. The model achieves state-of-the-art performance, with a 94.7% F1 score on binary classification (AI-edited vs. not) and 90.4% on ternary classification (distinguishing human-written, AI-edited, and fully AI-generated text). The team used lightweight similarity metrics as intermediate supervision and confirmed their findings with human annotators. A case study analyzing edits made by Grammarly, a popular writing-assistance tool, demonstrates practical applications. The researchers commit to publicly releasing their models and dataset to support further research in authorship attribution, education policy, and academic integrity.
- Immediate practical applications for academic integrity, authorship attribution, and policy decisions as AI writing assistants become mainstream
- Open-source release of models and dataset will accelerate development of detection methods across the research community
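The article does not detail which similarity metrics the authors used as intermediate supervision. As an illustrative sketch only (the function name, word-level tokenization, and example sentences are assumptions, not the EditLens method), a lightweight similarity ratio can serve as a rough proxy for how heavily a text has been edited:

```python
from difflib import SequenceMatcher

def edit_intensity(original: str, revised: str) -> float:
    """Return a 0-1 score of how far `revised` diverges from `original`.

    0.0 means the texts are identical; values near 1.0 indicate a
    near-total rewrite. Uses word-level sequence similarity as a cheap
    stand-in for a learned edit-degree signal.
    """
    similarity = SequenceMatcher(None, original.split(), revised.split()).ratio()
    return round(1.0 - similarity, 3)

human = "The experiment produced results that was surprising to the team."
lightly_edited = "The experiment produced results that were surprising to the team."
rewritten = "To the team's surprise, the study yielded unexpected findings."

print(edit_intensity(human, lightly_edited))  # small score: one word changed
print(edit_intensity(human, rewritten))       # larger score: heavy rewrite
```

A metric like this can label training data with a continuous edit-degree target cheaply, which is the general role "intermediate supervision" plays; the actual EditLens pipeline may differ substantially.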
Editorial Opinion
This research fills a critical gap in the AI detection landscape as text editing becomes a dominant use case for LLMs. While previous research focused on detecting fully AI-generated content, the reality of 2025 is that AI is more commonly used to enhance human writing, which makes EditLens's quantification approach essential for educators, publishers, and policymakers. The work raises important questions about where to draw the line on AI assistance in different contexts: should an academic paper that used AI for proofreading be treated the same as a medical diagnosis drafted with AI assistance? The research community now has the tools to make these distinctions.