On May 18, 2023, the U.S. Equal Employment Opportunity Commission (“EEOC”) released new “Questions and Answers” guidance on “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” The guidance is intended to “help employers prevent the use of AI from leading to discrimination in the workplace,” with a specific focus on disparate impact issues that may arise under Title VII. The guidance is discussed in further detail below.
* * *
Disparate Impact Discrimination Under Title VII
The guidance begins by reminding employers that, under Title VII, employers are prohibited “from using neutral tests or selection procedures that have the effect of disproportionately excluding persons based on race, color, religion, sex, or national origin, if the tests or selection procedures are not ‘job related for the position in question and consistent with business necessity.’” The guidance is specifically focused on this type of discrimination (as opposed to intentional “disparate treatment” discrimination).
Assessing Adverse Impact Resulting From the Use of AI Selection Tools
According to the EEOC, an employer’s use of artificial intelligence (“AI”)—which is referred to in the guidance as “algorithmic decision-making tools”—may implicate disparate impact concerns under Title VII.
The guidance explains that the use of AI tools to “inform decisions about whether to hire, promote, terminate, or take similar actions toward applicants or current employees” may constitute a “selection procedure” under Title VII. As the guidance notes, selection procedures that have a disparate (adverse) impact on a protected class of employees may violate Title VII, unless the employer can show that the selection procedure is “job related and consistent with business necessity,” as required by the statute.
Notably, the EEOC states that an employer may be responsible under Title VII for its use of AI tools, even if the tools are developed by a third-party software vendor. An employer may also be held responsible for actions taken by software vendors acting on the employer’s behalf. The EEOC recommends that employers weighing the use of AI tools inquire whether the vendor has taken steps “to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII,” and if a lower selection rate is expected, to consider “whether use of the tool is job related and consistent with business necessity.” Further, the EEOC notes that the employer may be liable even if the vendor incorrectly assesses that the tool does not cause a substantially lower selection rate if, in fact, “the tool does result in either disparate impact discrimination or disparate treatment discrimination.”
The guidance also discusses use of the commonly applied “four-fifths rule” to evaluate selection rates across protected classes.[1] The EEOC takes the position that compliance with the four-fifths rule is not always enough to show that a given selection procedure does not violate Title VII, and that, as a result, “employers that are deciding whether to rely on a vendor to develop or administer an algorithmic decision-making tool may want to ask the vendor specifically whether it relied on the four-fifths rule of thumb when determining whether use of the tool might have an adverse impact on the basis of a characteristic protected by Title VII, or whether it relied on a standard such as statistical significance that is often used by courts.”
Finally, the guidance explains that “if an employer is in the process of developing a selection tool and discovers that use of the tool would have an adverse impact on individuals of a particular sex, race, or other group protected by Title VII, it can take steps to reduce the impact or select a different tool in order to avoid engaging in a practice that violates Title VII.” The EEOC also states that “[f]ailure to adopt a less discriminatory algorithm that was considered during the development process . . . may give rise to liability.”
Recent Federal Efforts to Regulate AI in the Workplace
The guidance represents a continuation of recent efforts by the EEOC and other federal agencies to increase regulation governing the use of AI in the workplace. In 2021, the EEOC announced an initiative to “examine more closely how existing and developing technologies fundamentally change the ways employment decisions are made.” In May 2022, the EEOC and the Department of Justice (“DOJ”) released guidance addressing disability discrimination issues that may arise when employers use AI and other software tools to make employment decisions. And in April 2023, the EEOC, the DOJ, the Consumer Financial Protection Bureau, and the Federal Trade Commission released a joint statement on AI, noting the agencies’ shared view that “responsible innovation” is compatible with “established laws.”
[1] The EEOC, in its guidance, refers to the four-fifths rule as a “rule of thumb,” which understates the extent to which the EEOC has relied on the rule in the past. The rule is an easily applied test for adverse impact that employers can use in planning employment decisions. As an example, in the context of a decision to reduce headcount and an examination of whether there is an adverse impact against women, the employer would compare the percentage of women in the workforce before the terminations with the percentage of women who would remain if the terminations took place as planned. If the lower percentage is at least four-fifths (80%) of the higher, the EEOC has said that there likely would not be adverse impact. Thus, for example, if 80% of the workforce were women before the terminations and 75% of the workforce were women afterward, the four-fifths rule would be satisfied, because 75% is 93.75% of 80%.
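The arithmetic in the footnote’s example can be sketched in a few lines of code. The helper below is purely illustrative (the function name and inputs are hypothetical, not drawn from the EEOC guidance or any official tool): it compares two rates and checks whether the lower is at least four-fifths of the higher.

```python
def four_fifths_satisfied(rate_before: float, rate_after: float) -> bool:
    """Rule-of-thumb check: the lower of the two rates must be at
    least 80% (four-fifths) of the higher rate."""
    lower, higher = sorted((rate_before, rate_after))
    return lower / higher >= 0.8

# Footnote example: 80% women before the terminations, 75% after.
# 0.75 / 0.80 = 0.9375, i.e. 93.75%, so the rule of thumb is satisfied.
print(four_fifths_satisfied(0.80, 0.75))  # True
```

As the guidance itself cautions, this is only a rule of thumb: satisfying it does not by itself establish that a selection procedure complies with Title VII.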