My Perspective on AI, Accuracy, and Accountability
Recent AI-related errors show why compliance and policy work still needs human oversight, and what global HR can do to restore balance.
About the author – Raj Inda, CEO, Beyond Borders HR
In recent months I have come across a series of unsettling headlines about AI being used in high-stakes professional work and failing miserably. The most widely reported involved Deloitte, whose report for an Australian government client was found to contain fabricated references. The firm later acknowledged that the report had been prepared using GPT-4 through Microsoft Azure, and quietly refunded part of its fee.
To me, this episode, while embarrassing for those involved, isn’t a story about one consultancy’s mistake or a one-off lapse. It’s a stark warning about what happens when organisations start to trust speed over scrutiny.
This is a broader pattern
This isn’t an isolated event; I’ve watched the same pattern emerge across industries and even governments in recent years.
In the United States, for example, families are suing health insurers over claims that AI tools were used to deny care to Medicare Advantage patients. Lawyers for the families are seeking to represent “All persons who purchased Medicare Advantage Plan health insurance from Defendants in the United States during the period of four years prior to the filing of the complaint through the present.”
These are not trivial errors or isolated technical failures. They show what happens when decision-making in governance, consultancy, or human resources becomes too reliant on automated systems without strong human oversight.
What this means from a global HR perspective
In global HR, we are watching a similar transformation unfold. AI is being introduced into policy drafting, handbook creation, and workforce analysis at remarkable speed. Many HR teams are already experimenting with generative tools to “accelerate” compliance-related work.
But compliance isn’t just data or drafting; it’s context. It evolves region by region, with each new piece of case law and the very human background of each case.
Think of it this way: an AI could analyse thousands of employment laws and conclude that “notice periods in the EU are typically one month.” Technically, that’s true. But apply it to a senior employee in Belgium, where statutory notice lengthens with tenure, or to a French manager whose collective agreement sets three months’ notice, and “typically” becomes dangerously misleading.
The data isn’t always entirely wrong; it’s just incomplete, because compliance isn’t a fixed formula or a string of logical arguments for a language model to analyse. It’s a living, contextual practice shaped by human cases, local culture, and evolving interpretations of fairness.
AI can generate a policy in seconds, but it cannot interpret how a regulation in Germany interacts with a collective bargaining agreement and the procedural requirements for different types of termination, or how new parental leave laws in Singapore apply to a company’s expatriate workforce.
Without that layer of human judgment, we risk creating what I call “compliance mirages”: documents that look complete and professional, but lack legal and cultural accuracy.
My personal experience: where “smart tools” got it wrong
At Beyond Borders HR, we’ve already seen how easily these gaps appear in practice.
One of our clients, a mid-sized multinational with operations in France, approached us after following an AI-generated “redundancy process guide.” On paper, the advice looked sound. It correctly recognised that the case was one of genuine redundancy and even outlined the basic consultation principle under French labour law. What it missed were the procedural specifics that French law treats as mandatory:
- when and how the invitation to consultation must be sent,
- the required time gap between meetings, and
- when and how the dismissal letter should be sent, and what information it should contain.
The client followed the process precisely as the AI tool advised, yet because the sequence and documentation were legally non-compliant, the redundancy was procedurally unfair.
They ultimately had to settle for a significant sum to avoid further litigation, turning what should have been a routine redundancy process into an avoidable financial and reputational setback.
In another case, a client shared with us a set of “country HR fact sheets” compiled using free AI research tools. At first glance, the summaries seemed comprehensive, covering working hours, various leave entitlements, and termination rules across multiple jurisdictions.
But upon verification, we found that many of the “facts” were outdated, incomplete, or flatly incorrect:
- Annual leave entitlements that reflected pre-2020 legislation,
- Notice periods copied from obsolete collective agreements,
- Incorrect statutory references for overtime limits in several EU countries, and
- Incorrect parental leave, sick leave, and pay entitlements.
This illustrates the core issue: AI doesn’t yet know when it’s wrong. It can produce content that looks authoritative but lacks the interpretive depth, contextual accuracy, and legal currency that global HR compliance demands.
The real problem isn’t laziness, it’s incentives
Consulting and corporate ecosystems have long rewarded efficiency over accuracy. Faster outputs, shorter timelines, lower costs. AI delivers all three. But in doing so, it tempts us to skip the slowest and most essential step: verification.
Every consultant loves to talk about due diligence. Yet as these cases show, diligence often gets compressed when deadlines are tight; not because professionals don’t care, but because the system incentivises speed over certainty.
Accountability must remain human
If we want to use AI responsibly in global HR and governance, we need stronger checks, not bans. Technology should be an assistant, not an authority.
That means:
- Every AI-assisted document should still go through human validation before it’s shared or signed.
- Regional experts must review compliance content to ensure it aligns with local law and cultural expectations.
- Organisations should record how AI is used, especially as the EU AI Act (adopted in 2024) introduces stricter transparency obligations; a sketch of what such a record might capture follows this list.
- And perhaps most importantly, HR leaders should train teams to question AI outputs, not just use them.
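On the record-keeping point above: a register of AI use doesn’t need heavy tooling to be useful. Below is a minimal sketch in Python; the structure and every field name are illustrative assumptions of mine, not a format prescribed by the EU AI Act or any regulator.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUsageRecord:
    """One entry in an internal AI-use register (illustrative fields only)."""
    document: str       # which deliverable the AI touched
    tool: str           # model or product used, e.g. "GPT-4 via Azure"
    purpose: str        # what the AI was asked to produce
    reviewed_by: str    # the human accountable for validating the output
    review_date: date   # when that validation was completed
    jurisdictions: list[str] = field(default_factory=list)     # countries covered
    sources_verified: list[str] = field(default_factory=list)  # what the reviewer checked against

# Example entry: a policy draft validated by a regional expert before release
record = AIUsageRecord(
    document="France redundancy process guide, v2",
    tool="generative drafting assistant",
    purpose="first draft of consultation timeline",
    reviewed_by="FR employment counsel",
    review_date=date(2025, 3, 10),
    jurisdictions=["FR"],
    sources_verified=["Code du travail", "applicable collective agreement"],
)
```

Even a record this simple makes the human review step visible and auditable, which is exactly the kind of traceability that transparency obligations push organisations towards.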
In short, we need governance that moves as fast as technology, but never faster than judgment.
From Beyond Borders HR’s lens
At Beyond Borders HR, we work with multinational clients across EMEA and APAC who are beginning to integrate AI into their HR operations. The technology itself is not the risk; the absence of structured oversight is.
The lesson from Deloitte and others isn’t that AI shouldn’t be used. It’s that AI should never replace the human conscience of compliance.
What do you think about AI & Accountability?