High Tech Hiring: Tackling a New Breed of Discrimination

January 22, 2019

Artificial Intelligence was supposed to revolutionize the recruitment process and end bias in hiring. It has revolutionized the process, but it has not ended bias in hiring decisions. The rise of the internet, and the growing ease with which applicants can find openings and submit applications, has overwhelmed many Human Resources departments.

This is where algorithmic hiring tools and Artificial Intelligence stepped in. According to a 2017 survey by talent software firm CareerBuilder, 55% of US human resources managers said artificial intelligence, or AI, would be a regular part of their work within the next five years. The tools work by crunching employer-provided “training” data to assess how applicants match up with a given opening. In theory, they can help employers sidestep their human hiring biases and analyze job candidates’ skills without considering their sex, race, or other protected traits. In practice, eliminating bias is not so easy.

Amazon, for example, discovered that its machine recruiters were favoring male resumes over female resumes. The technology favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured.” It also penalized resumes that included the word “women’s,” as in “women’s chess club captain,” and it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. The system, in effect, taught itself to prefer male candidates over female candidates.

The issue with these systems is the underlying data that the hiring tools rely on. If an employer has historically rated men’s performance more highly than women’s, a tool trained on that data will reproduce the same preference. In one example cited in a report by Upturn, one of these AI hiring tools found that workers who were named “Jared” or who had played high school lacrosse tended to be more successful.
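To see how this happens mechanically, consider a deliberately naive sketch of a resume scorer trained on past hires. All of the data and the scoring scheme below are invented for illustration; this is not Amazon's actual system or any real vendor's method. The point is only that if the training set skews toward one group, the words common to that group get high weight, and the tool reproduces the skew.

```python
from collections import Counter

# Hypothetical training data: resumes of past hires. If past hiring
# skewed male, language common on male engineers' resumes dominates.
past_hires = [
    "executed project captured market share",
    "executed strategy captured new clients",
    "led women's chess club to regionals",
]

# "Training": weight each word by how often it appears among past hires.
weights = Counter(word for resume in past_hires for word in resume.split())

def score(resume: str) -> int:
    """Score a candidate by summing the learned word weights.

    Words never seen among past hires contribute nothing (Counter
    returns 0 for missing keys), so language from underrepresented
    groups is systematically undervalued.
    """
    return sum(weights[w] for w in resume.split())

# Two equally qualified candidates, described in different vocabularies:
print(score("executed and captured"))        # scores higher
print(score("women's chess club captain"))   # scores lower
```

Nothing in this toy model mentions sex, yet it still ranks the second candidate lower, because the only signal it learned is "resembles past hires."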

If these high-tech tools lead employers to favor members of one protected group over another, based on race, sex, or another protected trait, they can cause what’s known as “disparate impact” discrimination. This term refers to employment bias that arises when an employer applies a seemingly neutral policy or practice that nonetheless has a discriminatory effect.
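One common screening heuristic for disparate impact is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if one group’s selection rate is less than 80% of the highest group’s rate, the practice may be flagged for disparate impact. A minimal sketch, using invented applicant numbers:

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def four_fifths_flag(rate_a: float, rate_b: float) -> bool:
    """True if the lower selection rate is below 80% of the higher one,
    the EEOC's rule-of-thumb threshold for possible disparate impact."""
    lower, higher = sorted([rate_a, rate_b])
    return lower / higher < 0.8

# Hypothetical numbers: 100 applicants per group.
men = selection_rate(50, 100)    # 0.50
women = selection_rate(30, 100)  # 0.30

# 0.30 / 0.50 = 0.6, below the 0.8 threshold, so this would be flagged.
print(four_fifths_flag(men, women))
```

The four-fifths rule is only a rough evidentiary threshold, not a legal verdict, but it shows why anti-discrimination law can examine outcomes without needing to know how the selection tool works internally.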

The point of this predictive technology is to find correlations with workplace success, but the mere existence of a correlation does not mean it should be used to make hiring decisions.

Title VII of the Civil Rights Act of 1964, the federal law that bans workplace discrimination, lets workers challenge employer practices that cause disparate impacts on members of protected classes. There is a strong case that these tools violate the Civil Rights Act, but the hard part is knowing that any discrimination is occurring in the first place.

The Equal Employment Opportunity Commission is monitoring the space. In late 2016, the agency convened a panel of algorithmic hiring experts to discuss the discrimination risks this technology poses and created a working group to probe this area. Despite this progress, the agency still relies on workers to flag discriminatory hiring, and workers often do not know they are being discriminated against. In addition, the EEOC has been left understaffed and lacks the capacity for large-scale investigations.

An additional challenge is that the way courts have interpreted these laws does not match up well with the methods these tools use to screen out job applicants. Some regulatory guidelines state that a tool for assessing job candidates is valid if its users can show a correlation between what it measures and some measure of workplace success. But finding such correlations is exactly what this predictive technology is built to do, and a correlation alone does not justify basing hiring decisions on it.

Under the law, these tools might currently be legitimate, and employers might be convinced to adopt them, but the relevant laws were written 30 to 40 years ago and are not equipped to deal with the new technology employers use to weed out job applicants.

Despite these barriers, there have been steps forward in litigating these claims. Several worker-side firms are looking closely for ways to bring suits in this space, and management-side attorneys are starting to see more demand letters from plaintiffs’ attorneys. The American Civil Liberties Union brought a complaint before the EEOC about jobs being advertised on Facebook in a discriminatory manner. Anti-discrimination law is focused on outcomes, and putting more focus on disparate outcomes will help bring this issue to light. Furthermore, directing regulatory attention at the technologies themselves, rather than only at the employers using them, is critical. The government should work to regulate the underlying data these technologies use, and employers should strive to ensure their AI tools are not creating unintended bias in their hiring.

Kollin Bender, 21 January 2019