Beyond Borders HR

My Perspective on AI, Accuracy, and Accountability

Recent AI-related errors show why compliance and policy work still need human oversight and what global HR can do to restore balance.

About the author – Raj Inda, CEO, Beyond Borders HR
Raj is the Chief Executive Officer of Beyond Borders HR, a global HR advisory firm specialising in international compliance, mobility, and policy strategy. With over two decades of experience across retail, pharmaceuticals, IT, BPO and consulting environments, Raj has helped multinational organisations optimise their HR frameworks for complex, cross-border workforces. Raj is also a co-founder of XplorHR, contributing his expertise in shaping innovative HR solutions that support companies through transformation and expansion.
Connect with him via LinkedIn: https://www.linkedin.com/in/rajendrainda/

In recent months, I have come across a series of unsettling headlines about AI being used in high-stakes professional work and failing badly.

The most prominent involved Deloitte, which delivered a government report later found to contain fabricated references. The firm acknowledged that the report had been prepared using GPT-4 through Microsoft Azure, and quietly refunded part of its fee to make amends and show goodwill.

To me, this incident, while embarrassing for those involved, isn't a story about one consultancy's mistake or a one-off lapse. It's a stark warning about what happens when organisations start to trust speed over scrutiny.

This is a broader pattern

I’ve observed that this isn’t an isolated event. The same pattern has emerged across industries and even governments over recent years.

In another instance, reported from across the pond, UnitedHealth was sued for using a faulty AI tool, with reported error rates of up to 90%, to deny elderly patients the care owed to them. Aaron Albright, a spokesperson for NaviHealth, told CBS MoneyWatch that the AI-powered tool is not used to make coverage determinations but as “a guide to help healthcare companies inform providers about what sort of assistance and care the patient may need.”

However, lawyers for the families are looking to represent “All persons who purchased Medicare Advantage Plan health insurance from Defendants in the United States during the period of four years prior to the filing of the complaint through the present.”

Governments have faced their share of issues, too. The UK Home Office suspended its AI-powered visa assessment system after evidence of discriminatory outcomes. The press aptly called it “robo-racism.”
And in the Netherlands, a welfare-fraud detection algorithm falsely accused thousands of families, ultimately prompting the resignation of the entire Dutch government in 2021.

These are not trivial errors or isolated technical failures. They show what happens when decision-making in governance, consultancy, or anything involving human resources becomes too reliant on automated systems without strong human oversight.

What this means from a global HR perspective

In global HR, we are watching a similar transformation unfold. AI is being introduced into policy drafting, handbook creation, and workforce analysis at remarkable speed. Many HR teams are already experimenting with generative tools to “accelerate” compliance-related work.

But compliance isn’t just data or drafting; it’s context. Compliance evolves region by region, with each new piece of case law and the very human circumstances behind each case.

Think of it this way: an AI could analyse thousands of employment laws and conclude that “notice periods in the EU are typically one month.” Technically, that’s true.

But in France, it changes with seniority and collective agreements. In Germany, you can’t even finalise a dismissal without consulting the works council (if you have one in place). And in Spain, a missed procedural step can instantly turn a lawful redundancy into an unfair dismissal.

The data isn’t always entirely wrong; it’s just incomplete. Compliance isn’t a fixed formula or a string of logical arguments to be analysed by a language model; it’s a living, contextual practice shaped by human cases, local culture, and evolving interpretations of fairness.

AI can generate a policy in seconds, but it cannot interpret how a regulation in Germany interacts with a collective bargaining agreement and the procedural requirement for different types of termination, or how new parental leave laws in Singapore apply to a company’s expatriate workforce.

Without that layer of human judgment, we risk creating what I call “compliance mirages”: documents that look complete and professional but lack legal and cultural accuracy.

My personal experience: where “smart tools” got it wrong

At Beyond Borders HR, we’ve already seen how easily these gaps appear in practice.

One of our clients, a mid-sized multinational with operations in France, approached us after following an AI-generated “redundancy process guide.” On paper, the advice looked sound. It correctly recognised that the case was one of genuine redundancy and even outlined the basic consultation principle under French labour law.

However, what it missed were the critical procedural details that determine whether an employee termination in France is lawful or not:
  • when and how the invitation to consultation must be sent,
  • the required time gap between meetings, and
  • when and how the dismissal letter must be sent, and what information it must contain.

The client followed the process precisely as advised by the AI tool, yet because the sequence and documentation were legally non-compliant, the redundancy was procedurally unfair.

They ultimately had to settle for a significant sum to avoid further litigation, turning what should have been a routine redundancy process into an avoidable financial and reputational setback.


In another case, a client shared with us a set of “country HR fact sheets” compiled using free AI research tools. At first glance, the summaries seemed comprehensive, covering working hours, various leave entitlements, and termination rules across multiple jurisdictions.

But upon verification, we found that many of the “facts” were outdated, incomplete, or flatly incorrect:

  • annual leave entitlements that reflected pre-2020 legislation,
  • notice periods copied from obsolete collective agreements,
  • incorrect statutory references for overtime limits in several EU countries, and
  • inaccurate parental leave, sick leave, and pay entitlements.

This illustrates the core issue: AI doesn’t yet know when it’s wrong. It can produce content that looks authoritative but lacks the interpretive depth, contextual accuracy, and legal currency that global HR compliance demands.

The real problem isn’t laziness; it’s incentives

Consulting and corporate ecosystems have long rewarded efficiency over accuracy. Faster outputs, shorter timelines, lower costs. AI delivers all three. But in doing so, it tempts us to skip the slowest and most essential step: verification.

Every consultant loves to talk about due diligence. Yet as these cases show, diligence often gets compressed when deadlines are tight. It’s not that professionals don’t care; it’s that the system incentivises speed over certainty.


Accountability must remain human

If we want to use AI responsibly in global HR and governance, we need stronger checks, not bans. Technology should be an assistant, not an authority.

That means:

  • Every AI-assisted document should still go through human validation before it’s shared or signed.
  • Regional experts must review compliance content to ensure it aligns with local law and cultural expectations.
  • Organisations should record how AI is used, especially as the EU AI Act (adopted in 2024) introduces stricter transparency obligations.
  • And perhaps most importantly, HR leaders should train teams to question AI outputs, not just use them.

In short, we need governance that moves as fast as technology, but never faster than judgment.

From Beyond Borders HR’s lens

At Beyond Borders HR, we work with multinational clients across EMEA and APAC who are beginning to integrate AI into their HR operations. The technology itself is not the risk; the absence of structured oversight is.

Our Global HR Compliance & Policy services are designed around exactly this balance: using technology for efficiency while grounding every policy, handbook, and advisory note in human-verified, jurisdiction-specific insight.

The lesson from Deloitte and others isn’t that AI shouldn’t be used. It’s that AI should never replace the human conscience of compliance.

What do you think about AI & Accountability?

For any further inquiries, or to discuss your specific needs, please feel free to contact us.
To stay updated on the latest insights regarding global HR, employee benefit trends, and employment law changes across 150 countries, please sign up for our newsletter.