BotBeat

Ada
POLICY & REGULATION
2026-03-26

Canadian Immigration Department's AI System Hallucinates Job Duties, Wrongly Rejects Researcher's Permanent Residence Application

Key Takeaways

  • Canada's Immigration Department used generative AI that hallucinated entirely fictional job duties for an applicant, demonstrating significant accuracy and reliability concerns in AI-assisted immigration processing
  • This marks the first publicly disclosed instance in which the department explicitly referenced generative AI use in an immigration refusal, despite potentially broader use of the technology
  • The department's disclaimer, which claims an officer verified the generated content and that AI did not make the final decision, highlights the gap between theoretical safeguards and actual accuracy in practice
Source: Hacker News
https://www.thestar.com/news/canada/canada-rejected-her-permanent-residence-application-her-job-duties-were-made-up--by-immigrations-ai-reviewer/article_3f1ea5be-0b3d-4541-ac00-0a1b8484d877.html

Summary

Canada's Immigration Department has rejected a permanent residence application for Kémy Adé, a postdoctoral research fellow and guest teacher at McMaster University, citing fabricated job duties that bear no relation to her actual work. The refusal letter explicitly stated that generative AI was used to support application processing, marking the first known instance of the department openly acknowledging AI use in immigration decisions. The department claimed Adé's experience included tasks such as wiring control circuits and programming robot panels—skills completely unrelated to her background as a health scientist with a PhD in immunology from Sorbonne University. While the disclaimer noted that generated content was verified by an officer and that AI was not used to make the final decision, the incident raises serious questions about the reliability of AI-assisted immigration processing and the adequacy of human oversight in high-stakes administrative decisions.

  • The incident exposes the risks of deploying AI in high-stakes administrative decisions that affect people's lives, absent transparent public disclosure of the technology's scope and limitations

Editorial Opinion

This case exemplifies a critical failure point in government AI deployment: while Canadian immigration officials claim human officers verified the AI-generated content, a fundamental hallucination still made it into an official rejection letter. The fact that such fabricated job duties passed human review suggests either inadequate verification protocols or AI outputs so plausible that they fool expert reviewers. This incident demands urgent transparency about how widely generative AI is being used in immigration decisions and whether current human oversight is sufficient for life-changing administrative determinations.

Government & Defense · Regulation & Policy · Ethics & Bias · AI Safety & Alignment

