On May 18, 2023, the U.S. Equal Employment Opportunity Commission (EEOC) issued guidance on the use of software, algorithms, and artificial intelligence (AI) for employment decisions under Title VII of the Civil Rights Act of 1964. This guidance comes as part of an agency-wide initiative launched by the EEOC in 2021 to ensure that software and other technologies used in hiring and other employment decisions (which the EEOC calls “selection procedures”) comply with federal civil rights laws.

The EEOC notes that employers have an increasing variety of technological tools available to implement selection procedures.  Thanks to AI, employers no longer have to read through resumes and physically interview candidates in order to select an applicant for a position.  In today’s world, resumes are routinely scanned by computers looking for keywords, and applicants are sometimes required to submit a video response that is then reviewed by a computer.  Because AI is making these decisions, many worry that the technology can treat people differently based on their race, color, religion, sex, or national origin (i.e., “legally protected characteristics”) in violation of Title VII of the Civil Rights Act of 1964 – an issue of particular concern for the EEOC.

But how can software, not humans, violate employment laws?  While we know that most employers aren’t purposely setting up technology to select or disqualify people based on protected characteristics such as race, this may happen by mistake.  For example, an employer may set up an algorithm to sort applicants based on their zip code and proximity to the office location.  However, such an apparently neutral algorithm could inadvertently select some applicants and deselect others based on the predominant race of a specific neighborhood – resulting in an (unintentional) adverse impact based on race.
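A short sketch makes the mechanism concrete. All of the zip codes, distances, and group labels below are invented for illustration; the point is only that a rule which never mentions race can still exclude one group when geography correlates with demographics.

```python
# Hypothetical applicant pool: (zip_code, miles_from_office, demographic group).
# Zip codes and group compositions are invented for this illustration.
applicants = [
    ("10001", 3,  "Group A"),
    ("10001", 4,  "Group A"),
    ("10001", 2,  "Group A"),
    ("10301", 18, "Group B"),
    ("10301", 22, "Group B"),
    ("10301", 16, "Group B"),
]

# The facially neutral rule: only consider applicants within 15 miles.
shortlisted = [a for a in applicants if a[1] <= 15]

def selection_rate_for(group):
    """Fraction of a group's applicants who passed the distance filter."""
    pool = [a for a in applicants if a[2] == group]
    picked = [a for a in shortlisted if a[2] == group]
    return len(picked) / len(pool)

# Group A: 3 of 3 shortlisted; Group B: 0 of 3. The proximity rule
# excluded one group entirely despite never referencing race.
```

Here the disparity arises purely from where each group happens to live relative to the office, which is exactly the kind of unintended adverse impact the guidance warns about.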

In its new guidance, the EEOC affirms that the use of AI can open up an employer to liability under Title VII for using selection procedures that have an adverse impact based on a legally protected characteristic. The EEOC has adopted Uniform Guidelines on Employee Selection Procedures that explain how to determine whether selection procedures are lawful under Title VII. According to the EEOC, these Guidelines apply to the use of AI, as summarized below:

  • Decision-making tools that have an adverse impact on individuals will violate Title VII unless “job related and consistent with business necessity,” and there is no less discriminatory alternative available.
  • Employers are generally liable for their use of AI tools, even if developed by an outside vendor. It is therefore important for an employer to ask its vendor whether it has evaluated selection rates when using the tool. And if there is an adverse impact, revert to the prior bullet point.
  • The EEOC provides an explanation of selection rates, meaning the proportion of applicants or candidates who are hired, promoted, or otherwise selected. It also explains the “four-fifths rule,” a test used to draw an initial inference of discrimination when one group’s selection rate is less than four-fifths (80%) of the rate of the most-selected group.  While acknowledging that the four-fifths rule is “practical and easy to administer,” the EEOC cautions that it may not always be appropriate, and that the EEOC might not consider compliance with the rule sufficient to show that a particular selection procedure is lawful.
  • Employers are encouraged to conduct analyses on an ongoing basis to determine whether their employment practices impose an adverse impact. If this is found, employers should proactively adjust or change the AI tool going forward.
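The selection-rate and four-fifths calculations described in the bullets above are simple arithmetic, and can be sketched as follows. The group names and applicant counts are hypothetical, and, as the EEOC itself cautions, a ratio at or above 0.8 is only a rule of thumb, not proof of lawfulness.

```python
def four_fifths_check(groups):
    """Apply the four-fifths (80%) rule of thumb.

    groups: dict mapping group name -> (number selected, number of applicants).
    Returns (ratios, flagged): each group's selection rate divided by the
    highest group's rate, and the list of groups whose ratio falls below
    0.8 -- the threshold for an initial inference of adverse impact.
    """
    rates = {g: selected / applicants for g, (selected, applicants) in groups.items()}
    highest = max(rates.values())
    ratios = {g: rate / highest for g, rate in rates.items()}
    flagged = [g for g, ratio in ratios.items() if ratio < 0.8]
    return ratios, flagged

# Hypothetical counts, not drawn from the EEOC guidance:
# Group A: 48 of 80 selected (60%); Group B: 12 of 40 selected (30%).
ratios, flagged = four_fifths_check({"Group A": (48, 80), "Group B": (12, 40)})
# Group B's rate is 50% of Group A's -- below four-fifths, so Group B is flagged.
```

Running such a check each time the tool is used, as the final bullet encourages, would let an employer spot a disparity before it accumulates into a pattern.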

As technology continues to evolve, we can expect additional updates from the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative.