Colleges Combat AI Cheating with Resurgence of Oral Exams
Key Takeaways
- Oral exams are experiencing a revival as colleges seek to prevent AI-assisted academic dishonesty and ensure authentic student assessment
- Cornell's Professor Chris Schaffer notes that oral defenses cannot be circumvented through AI tools because they require genuine understanding and real-time articulation of knowledge
- The strategy pairs traditional pedagogical methods with an institutional recognition that assessment formats must evolve alongside AI capabilities
Summary
Universities across the United States are increasingly turning to oral exams and oral defenses to combat widespread AI-assisted cheating, which has become a significant challenge in higher education. Professors at institutions including Cornell University and NYU Stern School of Business are implementing these traditional testing methods, where students must verbally demonstrate their knowledge directly to instructors without access to laptops, chatbots, or written materials. The approach, reminiscent of Socratic questioning, prevents students from using generative AI tools to complete assignments while forcing them to engage in real-time thinking and explanation. This shift represents a broader institutional response to the crisis posed by advanced language models that can produce near-perfect written work, leaving professors unable to distinguish between student effort and AI-generated content.
- Multiple universities, including NYU Stern, are adopting oral AI agents and oral exam protocols as part of a broader assessment redesign
Editorial Opinion
The return to oral exams signals an important recognition that AI's impact on education cannot be solved through policy alone—pedagogical innovation is essential. While oral defenses are effective at preventing AI cheating, they raise questions about scalability and equitable access for students with speech anxiety or disabilities. This trend suggests that the future of academic assessment may not be a wholesale rejection of technology, but rather a thoughtful rebalancing toward evaluation methods that AI cannot easily circumvent while remaining inclusive and fair to all learners.
