Canadian Immigration Department's AI System Hallucinates Job Duties, Wrongly Rejects Researcher's Permanent Residence Application
Key Takeaways
- Canada's Immigration Department used generative AI that hallucinated entirely fictional job duties for an applicant, demonstrating significant accuracy and reliability concerns in AI-assisted immigration processing
- This marks the first publicly disclosed instance of the department explicitly referencing generative AI use in an immigration refusal, despite potentially broader use of the technology
- The department's disclaimer, which claims that an officer verified the generated content and that AI did not make the final decision, highlights the gap between theoretical safeguards and accuracy in practice
- The incident exposes the risks of deploying AI in high-stakes administrative decisions that affect people's lives without transparent public disclosure of the technology's scope and limitations
Summary
Canada's Immigration Department has rejected a permanent residence application from Kémy Adé, a postdoctoral research fellow and guest teacher at McMaster University, citing fabricated job duties that bear no relation to her actual work. The refusal letter explicitly stated that generative AI was used to support application processing, marking the first known instance of the department openly acknowledging AI use in an immigration decision. The department claimed Adé's experience included tasks such as wiring control circuits and programming robot panels, skills completely unrelated to her background as a health scientist with a PhD in immunology from Sorbonne University. While the letter's disclaimer noted that the generated content was verified by an officer and that AI was not used to make the final decision, the incident raises serious questions about the reliability of AI-assisted immigration processing and the adequacy of human oversight in high-stakes administrative decisions.
Editorial Opinion
This case exemplifies a critical failure point in government AI deployment: although Canadian immigration officials claim human officers verified the AI-generated content, an outright hallucination still made it into an official refusal letter. That such fabricated job duties passed human review suggests either inadequate verification protocols or AI outputs plausible enough to fool the officers reviewing them. The incident demands urgent transparency about how widely generative AI is being used in immigration decisions and whether current human oversight is sufficient for life-changing administrative determinations.