On August 9, 2023, the Equal Employment Opportunity Commission (“EEOC”) announced the settlement of the agency’s first lawsuit involving the alleged discriminatory use of artificial intelligence (“AI”) in the workplace. In the lawsuit, EEOC v. iTutorGroup, Inc., the EEOC alleged that iTutorGroup’s hiring software automatically rejected older job applicants in violation of the Age Discrimination in Employment Act (“ADEA”). The settlement comes amid the EEOC’s stated intent to enforce anti-discrimination laws in connection with the use of AI in workplace decisions, and is likely the first of many enforcement actions and settlements in this area.
* * *
Background
EEOC’s Continued Focus on AI. AI offers companies many opportunities to streamline or refine processes, and the hiring process is no exception. Companies may choose, for example, to use software to screen resumes or job applications. There is a risk, however, that such programs may inadvertently discriminate against certain protected classes. In response to the significant increase in the use of AI in hiring, the EEOC has emphasized its continued focus on AI and the need for employers to comply with existing anti-discrimination laws when using AI products for workplace decisions. As part of that effort, in October 2021, the EEOC launched its Artificial Intelligence and Algorithmic Fairness Initiative. The EEOC has also issued AI-specific guidance, including (a) May 2022 guidance concerning disability discrimination; and (b) May 2023 guidance entitled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” In January 2023, the EEOC held a public hearing to examine the benefits and challenges of using AI in employment decisions. On April 25, 2023, Charlotte A. Burrows, Chair of the EEOC, joined officials from the Department of Justice, the Consumer Financial Protection Bureau, and the Federal Trade Commission to release a joint statement emphasizing the agencies’ pledge “to vigorously use [their] collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.” And earlier this month, Chair Burrows described AI as a “new civil rights frontier” that might threaten “basic values and principles” and carry a risk of discrimination in employment or hiring decisions.
The EEOC Litigation. On May 5, 2022, the EEOC filed a complaint against iTutorGroup, an organization that hires remote English tutors for students in China, in the Eastern District of New York. The EEOC alleged that the company violated the ADEA by implementing a software hiring program that “intentionally discriminated against older applicants because of their age” by “automatically reject[ing] female applicants age 55 or older and male applicants age 60 or older,” effectively screening out over 200 applicants. The purportedly discriminatory software was discovered when an applicant submitted two applications identical in all but birth date. According to the EEOC, the applicant submitted one application with their real date of birth and a second application with a more recent date of birth. The candidate allegedly received an interview only when using the more recent date of birth.
Under the terms of the settlement, iTutorGroup agreed to pay $365,000 to a group of applicants whose applications had been rejected because of their age. The payments will be apportioned among claimants as compensatory damages and back pay. In the settlement, the company did not admit to any wrongdoing, but agreed to submit to the EEOC “proposed anti-discrimination and complaint procedures applicable to the screening, hiring, and supervision” of candidates and employees.
Implications
Given the significant increase in employers’ use of AI in the workplace, the EEOC is expected to continue to focus on the use of AI and to bring more litigation in this area. The recent settlement is an important reminder for employers to proactively monitor the developing law and guidance in this area, including at the state and local level. As but one example, New York City’s Local Law 144 recently took effect and prohibits employers from using an automated employment decision tool in certain employment decisions unless the employer first ensures that the tool has undergone a bias audit within the preceding year.
* * *