Highlight Report

AI regulations represent a new “GDPR moment” for HR leaders

In the US and EU, regulation of artificial intelligence (AI) and machine learning (ML) is no longer “on the cards”; it’s here and now, with GDPR-level implications for human resources (HR) and the C-suite. With growing examples of reputational damage from errant AI deployments appearing in the media, and with academic critiques of “surveillance capitalism,” the pressure for greater regulation has been answered.

Several US states and the EU have passed or will soon pass legislation requiring HR leaders to act now or face the consequences: damaged enterprise reputations, potentially excruciating financial penalties, and angry employees. And in a talent war, where employee experience (EX) increasingly drives value through productivity and improved customer experience (CX), leaders should be wary of that anger.

The threat: The computer says no (and so does the regulator)

As IBM’s Kim Morick and Jane Wu[1] recently noted, HR is the business function most likely to have deployed ML tools extensively, and they signpost high-profile regulations such as the EU’s forthcoming Artificial Intelligence Act and New York City’s Local Law 144. Both have specific implications for HR and for the use of automated employment decision tools (AEDTs) in recruitment. These tools, which use techniques such as natural language processing (NLP) to score an applicant’s resumé, can be biased by the data they’ve learned from: if that data contains racial or gender biases, the tools can replicate them at scale. Consistent concern has also been raised over the implications of such tools for neurodiverse applicants. The nightmare scenario, then, is that the vernacular promise of AI, to remove subjectivity from the equation (as ML practitioners know, it’s never that simple), becomes the formalization of prejudice in recruitment.

[1] Kim Morick, Global Leader, Data and Technology Transformation, IBM; Jane Wu, Associate Partner, GBS Talent and Transformation, IBM.
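To make that risk concrete, here is a minimal, hypothetical sketch (in Python with scikit-learn, not any vendor’s actual AEDT) of how a scoring model trained on historically biased hiring decisions learns to penalize a proxy feature, such as a gendered word in a resumé, and then applies that penalty to every future applicant at scale:

```python
# Toy illustration only: a resume-scoring model trained on biased history.
# All feature names and numbers here are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
skill = rng.normal(size=n)            # genuine job-relevant signal
proxy = rng.integers(0, 2, size=n)    # e.g., a gendered word in the resume
# Historical decisions: skill mattered, but the proxy group was penalized.
hired = (skill - 1.2 * proxy + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print(model.coef_)  # the proxy feature gets a large negative weight:
                    # the "objective" model has formalized the old bias
```

The model never sees a protected attribute directly; it simply learns whatever separated “hired” from “rejected” in the past. Surfacing exactly this kind of learned proxy is what formal bias audits are designed to do.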

Audits – and penalties if you don’t comply

Such fears prompted an investigation by the UK’s Information Commissioner. NYC’s legislation requires formal bias audits of AI tools before they are used, along with opt-outs for candidates, and carries severe financial penalties for non-compliance. The EU, meanwhile, ‘aspires’ to set the standard for AI/ML regulation globally, much as GDPR did for data privacy, while outlawing some high-risk AI deployments completely.

In the enterprise space, this legislation has global ramifications. With talent acquisition genuinely global in the age of remote work, most enterprises don’t have the option of sidestepping these rules by cutting huge pools of talent out of their business.

If you outsource, it’s still your problem

Moreover, with nearly 33% of respondents to the latest HFS Pulse survey stating that ML is one of the main BPO technologies managed by their service provider, this legislation has implications for both enterprises and vendors. If using embedded technology in third-party software means your HR department can’t explain to candidates why their application was rejected, that won’t cut it with regulators.

Be proactive—because soon the consequences for inaction will be serious

HR leaders need to act now. IBM’s Morick and Wu note that leaders must audit their recruitment and talent management processes to clarify the following: Where is AI used? What kinds of AI/ML are in play? Are AI/ML tools deployed that produce decision outcomes without human judgment? Are alternatives available for applicants who require them?

Such questions are a beginning, but they aren’t enough. HR leaders must ensure AI deployments don’t violate diversity, equity, and inclusion (DEI) standards, such as the UK’s Equality Act. They can’t merely audit where AI is used and how; they must upskill the whole HR team on the necessity of ethical AI and ensure HR professionals fully understand technologies such as AEDTs. This understanding can be a key enabler for transforming HR teams from being subservient to data they don’t understand (potentially causing bad outcomes for applicants, employees, and the business) into data challengers, able to harness the power of AI approaches while staying clear on their limitations and potential problems.

The Bottom Line: HFS has long championed explainable AI (XAI) as the approach necessary to win the trust of staff and consumers.

With HR increasingly deploying AI and ML in recruitment, workforce management, and platforms unifying CX and EX, leaders must be sure they can explain how the tech works and that their AI/ML deployments comply with new regulations.
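What “explainable” can mean in practice: for a simple linear scoring model such as the sketch above, a per-candidate explanation is just each feature’s contribution to the score. The snippet below is a hypothetical illustration of that idea, not a prescribed XAI method; real deployments often reach for model-agnostic tooling such as SHAP or LIME.

```python
# Hypothetical per-candidate explanation for a linear scoring model:
# each feature's contribution is its learned weight times the input value.
import numpy as np

feature_names = ["skill_score", "proxy_term"]  # assumed example features
weights = np.array([2.8, -3.1])                # e.g., model.coef_[0] above
candidate = np.array([0.4, 1.0])               # one applicant's inputs

for name, c in sorted(zip(feature_names, weights * candidate),
                      key=lambda t: t[1]):
    print(f"{name}: {c:+.2f}")
# The output makes it visible that "proxy_term" drove the low score, which
# is the kind of transparency candidates and regulators now expect.
```

Being able to produce this kind of per-decision account, in plain language, is the practical test of whether a deployment is explainable at all.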

Organizations should partner with firms that foreground explainability and openness in their AI deployments, and they should maintain a clear “living document” AI/ML governance code that conforms to international standards. AI isn’t going away, but neither are the potential issues arising from it. As ever, innovators will find opportunities; laggards will find consequences.
