Legal considerations around data privacy and bias reduction continue to abound.
With artificial intelligence becoming all the rage across the HR world, there clearly appear to be rewards from using AI to find and land the best talent. But at what price?
Do AI-based tools in recruiting and hiring really outperform human decision-making? And if they do, could they potentially expose HR and employers to the same types of discrimination issues that can impact hiring driven by people, not algorithms? Right now, the legal landscape in the U.S. has yet to catch up to those critical considerations.
In Europe, for instance, the U.K.'s Information Commissioner's Office recently released guidance for organizations about transparency in AI decision-making. That effort is part of a broader Europe-wide data-privacy framework, the General Data Protection Regulation.
Here in the U.S., legal experts say there's a long way to go for regulatory and legal initiatives addressing the privacy and other data-related complications that could surface when AI is used within HR.
On the federal level so far, there is the Algorithmic Accountability Act (AAA), introduced earlier this year by Sens. Cory Booker (D-N.J.) and Ron Wyden (D-Ore.). In a related effort, Booker and Wyden asked the Federal Trade Commission, the Centers for Medicare and Medicaid Services and five major healthcare companies to "provide more information on how they are addressing bias in algorithms used in many healthcare systems." While this specific proposal is not aimed at HR issues per se, it's a start on regulating the use of AI when it comes to data-privacy issues related to healthcare consumers.
On the state level, in 2019 Illinois passed the Artificial Intelligence Video Interview Act, the first state law regulating the use of AI in hiring. The statute, which becomes effective Jan. 1, 2020, requires employers hiring for jobs "based in" Illinois that use "artificial intelligence analysis" of video interviews to:
- notify the applicant, in advance, that the organization is using the technology to analyze video interviews;
- explain to the applicant “how the [AI] works” and what general characteristics the technology uses to evaluate applicants;
- obtain, in advance, the applicant’s consent to use the technology;
- limit the distribution and sharing of the video to only those persons “whose expertise or technology” is necessary to evaluate the applicant; and
- upon request from the applicant, destroy the video (and all backup copies) within 30 days of that request.
There is no denying it: AI is making inroads throughout the recruiting space. But legal experts say employers choosing AI to make hiring decisions should tread carefully.
Catherine Barbieri, a partner in the labor and employment practice at Fox Rothschild in Philadelphia, says that, when programmed correctly and audited regularly, AI has the ability to eliminate bias in the hiring process.
“AI is being used to screen resumes, interview candidates and ensure more diverse candidate pools, and to answer candidates’ questions about the hiring process,” she says. “However, AI is not perfect, and it is only as good as the data and algorithms that it uses.”
Additionally, she says, companies’ collection and retention of candidate videos and other biometric data may give rise to other notice and document-retention obligations under state law that do not exist in the human-initiated screening and interviewing process.
Barbieri says that, when using AI to eliminate human bias from the hiring process, HR professionals need to ensure that algorithms or data the AI is employing are not creating a new issue by favoring certain characteristics more tied to male or non-diverse candidates or cultures. Examples of this include chatbots that react negatively to candidates who shake their heads or fail to smile widely, tendencies that could be culture-specific.
“HR professionals should ensure that they understand fully, and approve of, the factors and characteristics that are being taken into consideration by AI in the hiring process,” she says. “HR also needs to treat any hiring data gathered by AI like it does any other confidential company data, and explicitly include it in its confidentiality and data-breach policies and procedures.”
As for the current, albeit limited, legislation focused on data privacy, Barbieri says that, while the type of auditing the AAA would require should help reduce bias in AI algorithms, the bill won't apply to most employers.
On the Illinois act, she says, it’s not the first time the state has been in the vanguard on data-privacy issues, having also been the first state to regulate an employer’s use of employees’ biometric data in 2008. Other states have followed Illinois’ lead in the area of biometric data.
“While there is no indication yet that other states will pass similar AI-related legislation that will affect HR, it is instructive that roughly 60% of states have adopted some form of legislation concerning autonomous vehicles in response to safety concerns around that new technology,” she says. “It may be that, as the risks of AI-based hiring manifest themselves, other states will pass legislation to address the potential privacy and discrimination risks that AI poses.”
Washington, D.C.-based Randel Johnson, a partner in the Labor and Employment area at Seyfarth Shaw, says the use of artificial intelligence permeates virtually all aspects of society, though its use in the HR area is actually a relatively small part.
“While there are multiple uses in the employment context, the most predominant is in leveraging AI tools to more quickly and efficiently match candidates to jobs. The business case is clear,” Johnson says, noting that some will argue that AI tools reduce the possibility of unconscious biases in the hiring processes and, in fact, can expand the pool of applicants.
However, he notes, the use of AI in this way has drawn criticism for potentially disadvantaging groups of applicants who do not fit the pre-established criteria underpinning the algorithms' profile of who will be the most successful performer in the job in question.
“At the back end of these discussions are questions over the ‘black box’ nature of AI algorithms, and the understandable hesitancy of vendors to reveal proprietary information,” Johnson says. Further, he says, questions exist over whether current law governing validation of selection procedures under disparate-impact analysis fits this new method of hiring, trying to fit “a square peg into a round hole.”
Annette Tyman, a labor and employment partner in Seyfarth Shaw's Chicago office, adds that, perhaps not surprisingly, public polling has displayed a suspicion of the use of AI in hiring and indicated a preference for more traditional one-on-one hiring practices.
"Deciphering the pros and cons of AI in the workplace, and how to govern possible abuses while allowing technology to promote legitimate gains in efficiency, are not easy tasks," she says, adding that these are difficult issues that Capitol Hill is only beginning to wrestle with.
Tyman explains that the Algorithmic Accountability Act is clear notice that Congress believes something needs to be done, but, by leaving many of the judgment calls to the FTC, the sponsors acknowledge that it is difficult to chart a proper path forward.
She says that, while there is little chance the AAA will move forward, it should be noted that draft privacy legislation being circulated by Sen. Maria Cantwell (D-Wash.) and a separate draft bill by Sen. Roger Wicker (R-Miss.), ranking member and chair of the Senate Commerce Committee, respectively, both plant a flag on the AI issue.
"Sen. Wicker's bill properly establishes a commission to look at this entire area and includes a precedent-setting preemption-of-state-law provision," Tyman says. "Congress moves slowly, but we can anticipate that it will step in, given growing questions and public perceptions. However, early indications are that it will not act precipitously."