AI in HR: The Risk of Blind Trust
AI in HR is transforming decisions, but unchecked reliance can create legal, compliance, and operational risks employers cannot afford to ignore.
There is a seductive promise at the heart of the AI revolution: that software can make better, faster, fairer decisions than humans. Nowhere has this promise been embraced more enthusiastically or more dangerously than in Human Resources.
AI is genuinely transforming the world of work. It is streamlining recruitment, personalising onboarding, flagging attrition risk, and cutting administrative overhead.
A 2024 Gallup survey found that 93% of Fortune 500 Chief Human Resource Officers are integrating AI into their business practices. The momentum is real. The enthusiasm is understandable.
But enthusiasm without scrutiny is where organisations get hurt.
This article is not an argument against AI in HR. It is an argument for eyes-wide-open adoption, because the evidence now accumulating from courtrooms, tribunals, and employment disputes makes one thing clear: blind trust in AI output is becoming one of the most expensive mistakes a people function can make.
The Hallucination Problem Nobody Talks About Loudly Enough
Let us start with the most underappreciated technical reality of modern AI: it fabricates things. Confidently. Fluently. And very convincingly.
In AI parlance, this is called a “hallucination”: a model generates plausible-sounding content that is entirely false. In creative writing, this is harmless. In HR and legal decision-making, it can be catastrophic.
A 2024 study by researchers at Stanford RegLab and Yale University found that general-purpose AI tools hallucinated on legal queries between 58% and 82% of the time. Even purpose-built legal AI tools from established vendors like LexisNexis and Thomson Reuters were found to produce incorrect information in more than 17% of queries. The researchers concluded: “Legal hallucinations have not been solved.”
The courts are providing the evidence. As of late 2025, a database maintained by HEC Paris researcher Damien Charlotin had identified 486 documented cases worldwide where AI-generated hallucinations appeared in legal filings, 324 of them in US courts alone. That figure includes filings from 128 lawyers and even two judges.
Real-world cases span every jurisdiction:
| 📋 Noland v. Land of the Free, L.P. (California Court of Appeal, 2025) |
|---|
| 21 of 23 citations in an appellate brief contained fabricated quotations or cases that did not exist. The court imposed a $10,000 sanction and referred the matter to the State Bar. |
| 📋 Mike Lindell Defamation Case (Colorado, 2025) |
|---|
| Two lawyers were fined $3,000 each after submitting a filing containing more than two dozen AI-generated errors, including citations to non-existent cases. |
| 📋 Ko v. Li (Ontario Superior Court of Justice, 2025 ONSC 2766) |
|---|
| Counsel submitted cases where hyperlinks either redirected to entirely different unrelated matters or returned a "404 Error Page Not Found." |
The ABA has documented 156 instances where lawyers were sanctioned specifically for hallucinated citations. A Denver attorney accepted a 90-day suspension after admitting he had not checked ChatGPT’s work.
Why does this matter to HR?
Because HR professionals are now using the same tools to draft termination letters, structure redundancy processes, and research employment law, and many are doing so without even the verification rigour whose absence got those lawyers sanctioned.
Raj Inda, CEO, Beyond Borders HR
The Three HR Landmines That AI Is Making More Dangerous
1. Reductions in Force and the Discrimination Trap
When organisations use AI to identify who stays and who goes during a RIF (Reduction in Force), they are making consequential decisions about people’s livelihoods using systems that carry their own inherited biases. The legal exposure is substantial and growing.
The most prominent case in recent memory is IBM. According to court filings, after IBM introduced AI chatbots and other tools to handle HR functions, it rapidly accelerated efforts to reduce its human HR workforce. The result was a systematic layoff pattern that the US Equal Employment Opportunity Commission (EEOC) found deeply troubling.
| 📋 Rodriguez, et al. v. IBM (US District Court, S.D.N.Y., ongoing) |
|---|
| The EEOC determined IBM's layoffs had an adverse impact on workers aged 40 and older, who comprised more than 85% of those considered for layoff over a five-year period. IBM reportedly discharged more than 20,000 workers aged 40 or over. Courts repeatedly refused to dismiss the claims, which remain active as of 2025. |
This is the hidden danger of AI-assisted RIFs. An algorithm trained on historical performance data may inadvertently penalise employees who took extended medical leave, those who work part-time (disproportionately women), or those whose communication patterns differ from a perceived “typical” employee profile.
In 2024 alone, AI-powered hiring and employment tools processed over 30 million applications while triggering hundreds of discrimination complaints, according to HR Defence.
| 📋 Mobley v. Workday, Landmark Class Action (2024–ongoing) |
|---|
| The first class action against an AI HR vendor, alleging Workday's screening tools systematically discriminated against African-Americans, individuals over 40, and people with disabilities. A federal judge subsequently expanded the case and ruled that AI tools can be considered an "agent" of the employer: you cannot outsource your liability to the vendor. |
California’s regulations, effective October 2025, make this unambiguous: any automated decision system used in employment must have meaningful human oversight. Employers must proactively test for bias and maintain detailed records for at least four years. Colorado’s AI Act, effective June 2026, goes further still.
AI can help you structure a RIF. It cannot make the decision for you. And when things go wrong, the courts will hold you, not the software company, responsible.
Raj Inda, CEO, Beyond Borders HR
2. The AI-Generated Grievance Avalanche
If the risk above concerns what HR professionals do with AI, this one concerns what employees are increasingly doing with it, and the results are equally complex.
Workers are now routinely turning to ChatGPT, Copilot, and similar tools to draft grievances, disciplinary appeals, and termination challenges. What were once two-page handwritten complaints are now arriving as meticulously structured, legalistic-sounding 8-to-12-page documents.
A survey by The HR Dept found that 83% of HR directors had dealt with AI-related employee disputes in the past 12 months, and 95% had encountered cases where an employee used AI to raise a grievance. In 78% of those cases, the AI-generated grievance relied on inaccurate information or misrepresentations.
Employment lawyers have described an “explosion” of AI-generated grievances in 2024–2025. In one documented UK case, a grievance letter cited nine legal cases, only two of which were real. The other seven were fabricated by AI.
| 📋 Daniel O'Hurley v Cornerstone Legal WA Pty Ltd [2024] FWC 1776 (Australia) |
|---|
| An employer used ChatGPT to draft a letter intended to confirm an employee's abandonment of their role. The AI-generated content, sent via SMS, was interpreted by the Fair Work Commission as a termination letter. The employer lost the ability to argue the employee had abandoned their role, and a general protections claim was allowed to proceed. Both sides were burned by unchecked AI output. |
People Management’s research found the UK employment tribunal service was holding more than 500,000 open claims between July and September 2025, a backlog attributed partly to the growth in AI-assisted filings and reflected in a 32% increase in open tribunal cases. Lawyers warn a further 15% increase is coming as new employment rights legislation takes effect.
The irony: despite their formal appearance, AI-written grievances rarely succeed. Of the HR directors who saw AI-assisted grievances reach tribunal, 86% reported that none of the employee cases had been successful. But the time, cost, and management distraction involved in responding to a twelve-page AI-drafted complaint full of fictitious case law and misapplied legislation is real and significant.
"The wording can sound convincing while still being legally wrong. AI can help organise a complaint and give it a more formal tone, but it does not check the facts or apply employment law reliably."
James Rowland, Neathouse Partners
3. AI-Drafted HR Documents and the False Confidence Problem
The O’Hurley case points to a broader risk that is perhaps the most pervasive: HR professionals using AI to draft critical documents (termination letters, PIPs, WARN Act notices, separation agreements, redundancy selection criteria) and treating the output as ready to sign.
The problem is not that AI produces bad documents. Sometimes it produces excellent ones. The problem is that it also produces subtly wrong ones, and the fluency of the output creates false confidence. A termination letter that cites the wrong statutory notice period. A redundancy process document that skips a legally required step. A separation agreement that inadvertently waives rights it cannot legally waive.
Unlike a hallucinated court citation, which a judge will catch, these mistakes live inside internal processes, invisible until litigation begins. By that point, the procedural defect becomes the case.
What Good AI Governance in HR Actually Looks Like
None of this means HR should retreat from AI. It means HR needs to lead on responsible AI adoption, not leave it to IT or procurement.
Here is what that looks like in practice:
Verify everything that touches legal compliance
Any AI output that references legislation, case law, regulatory requirements, or employment rights must be independently verified against authoritative sources before acting on it. Treat AI as a first draft, not a final answer.
Never let AI make the final call on people decisions
Selection for redundancy, performance-based termination, disciplinary outcomes: these must involve trained human judgment. California, Colorado, Illinois, and New York City have all now legislated this requirement explicitly.
Conduct adverse impact analysis before any RIF
Before any workforce reduction is executed, conduct a statistical review of whether the selection criteria disproportionately affect any protected group (age, gender, race, disability). AI-assisted selection does not eliminate this obligation; it makes it more, not less, important.
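One common starting point for such a statistical review is the EEOC's "four-fifths" rule of thumb: a group's selection rate below 80% of the highest group's rate is a red flag for adverse impact. The sketch below illustrates the arithmetic only; the group names and figures are hypothetical, and a flagged result calls for proper legal and statistical analysis, not an automatic conclusion.

```python
# Minimal adverse-impact screen using the EEOC "four-fifths" rule of thumb.
# All group labels and figures are illustrative, not from any real RIF.

def selection_rate(selected: int, considered: int) -> float:
    """Fraction of a group selected (e.g. retained, or chosen for layoff)."""
    if considered == 0:
        raise ValueError("group has no members under consideration")
    return selected / considered

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose rate falls below 80% of the highest group's rate."""
    benchmark = max(rates.values())
    return {group: rate / benchmark < 0.8 for group, rate in rates.items()}

# Hypothetical RIF: retention rates by age band
rates = {
    "under_40": selection_rate(90, 100),    # 90% retained
    "40_and_over": selection_rate(60, 100), # 60% retained
}
print(four_fifths_check(rates))
# {'under_40': False, '40_and_over': True} -> potential adverse impact on 40+
```

A `True` flag does not prove discrimination; it tells you to pause, document, and get expert review before the decisions are executed.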
Build an AI use policy for employees too
Workers are using AI in disputes and grievances whether you have a policy or not. Your handbook should address this, set expectations around AI-drafted submissions in formal processes, and ensure managers are trained to recognise AI-generated content.
Audit your HR tech vendors
Under Mobley v. Workday and emerging state laws, the employer bears liability for discriminatory outcomes generated by vendor AI tools. Ask vendors for bias audit results. Contractually require them. Document your due diligence.
Maintain human-readable audit trails
Every AI-assisted employment decision should have a clear paper trail showing what human judgment was applied, what oversight was exercised, and why the decision was made. This is your legal protection if challenged.
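A sketch of what such a paper trail might capture per decision. The record structure and field names below are illustrative assumptions, not a legal standard; the point is that every AI recommendation is paired with a named human reviewer, their actual decision, and their rationale.

```python
# A minimal sketch of a human-readable audit record for an AI-assisted
# employment decision. Field names are illustrative, not a legal standard.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionAuditRecord:
    decision_id: str
    decision_type: str       # e.g. "redundancy_selection"
    ai_tool: str             # which system produced the recommendation
    ai_recommendation: str
    human_reviewer: str      # named accountable person
    human_decision: str      # what was actually decided
    rationale: str           # why, in the reviewer's own words
    overridden: bool         # did the human depart from the AI output?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionAuditRecord(
    decision_id="RIF-2025-0042",
    decision_type="redundancy_selection",
    ai_tool="vendor-screening-tool",
    ai_recommendation="include in selection pool",
    human_reviewer="J. Smith, HR Director",
    human_decision="excluded from pool",
    rationale="Criteria penalised a period of protected medical leave.",
    overridden=True,
)
print(json.dumps(asdict(record), indent=2))  # archive with the decision file
```

Retention periods matter too: California's 2025 regulations, cited above, require records to be kept for at least four years.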
The Bigger Picture
There is a version of AI in HR that is genuinely transformational: one that removes administrative burden, reduces human bias, surfaces insights no spreadsheet could, and frees HR professionals to do what humans do best: connect, coach, and lead.
That version exists. But it requires something AI cannot provide on its own: human accountability.
The cautionary tales accumulating in courtrooms, tribunals, and employment disputes are not arguments against technology. They are arguments against abdication: the abdication of professional judgment to a tool that, however impressive, does not understand nuance, does not know your organisation’s legal obligations, and cannot be held responsible when things go wrong.
You can be.
AI in HR is not a question of whether. It is a question of how, and the how matters enormously. The organisations that get this right will move faster, make better decisions, and build more equitable workplaces. The ones that do not will be providing the case studies for the next round of articles like this one.
Sources & References
- Stanford RegLab/HAI (2024)
- Gallup CHRO Survey (2024)
- Damien Charlotin AI Hallucination Cases Database, HEC Paris (2025)
- People Management / The HR Dept Survey (2026)
- American Bar Association
- HR Defence Blog
- US Equal Employment Opportunity Commission (EEOC)
- Cohen Milstein, Rodriguez, et al. v. IBM
- Fair Work Commission, O’Hurley v. Cornerstone Legal WA Pty Ltd [2024] FWC 1776
- Lexology / South African Employment Law Review (2026)
- Cornell Journal of Law and Public Policy (2024)
- California Civil Rights Council Regulations (2025)
- Sterne Kessler, AI IP Year in Review (2025)
- Proskauer Rose / Noland v. Land of the Free, L.P. (2025)
About the Author
Raj Inda is CEO of Beyond Borders HR, a UK-based global human resources consultancy specialising in people strategy, employment law, international HR compliance, global mobility, HR outsourcing, and workforce transformation. He writes on the future of work, responsible AI adoption, and employment risk in global organisations.
How Beyond Borders HR can help
AI in HR is evolving faster than most organisations’ internal governance frameworks. The risk is not in using these tools, but in using them without the right controls, validation processes, and accountability structures in place.
Beyond Borders HR works with global employers to implement AI responsibly within their HR function. This includes reviewing how AI is being used across recruitment, performance management, and termination processes, identifying areas of legal and compliance exposure, and building practical safeguards around decision-making.
From validating AI-assisted documentation to advising on workforce decisions and vendor risk, our focus is on ensuring that technology supports your HR strategy without creating unintended liability.
If your organisation is already using AI in HR, or planning to, it is worth assessing whether your current approach would withstand scrutiny in a real-world dispute.
Book a confidential consultation with our team to review your AI-related HR risks.