Earlier this month, Lina Khan, chair of the US Federal Trade Commission (FTC), wrote an essay in The New York Times affirming the agency’s commitment to regulating AI. But there was one AI application Khan didn’t mention that the FTC urgently needs to regulate: automated hiring systems. These range in complexity from tools that merely parse resumes and rank them to systems that green-light candidates and trash applicants deemed unfit. Increasingly, working Americans are obligated to use them if they want to get hired.
In my recent book, The Quantified Worker, I argue that the American worker is being reduced to numbers by AI technologies in the workplace, automated hiring systems chief among them. These systems reduce applicants to a score or rank, often ignoring the gestalt of their human experience. Sometimes they even sort people by their race, age, and sex, a practice that’s legally prohibited from being part of the employment decisionmaking process.
Ironically, many of these systems are marketed as bias-free or as guaranteed to reduce the probability of discriminatory hiring. Yet because they are so loosely regulated, such systems have been shown to deny equal employment opportunity on the basis of protected categories such as race, age, sex, and disability. In December 2022, for example, a female truckers union sued Meta, alleging that Facebook “selectively shows job advertisements based on users’ gender and age, with older workers far less likely to see ads and women far less likely to see ads for blue-collar positions, especially in industries that historically exclude women.” Marketing a system as bias-free when it produces discriminatory outcomes is deceptive. It is also unfair to job applicants and employers alike: employers purchase automated hiring systems in part to reduce their liability for employment discrimination, and the vendors of those systems are legally obligated to substantiate their claims of efficacy and fairness.
The law puts automated hiring systems under the FTC’s purview, but the agency has yet to release specific guidelines on how purveyors of these systems may advertise their wares. It should start by requiring audits to verify that automated hiring platforms fulfill the promises they make to employers. Vendors of these platforms should be obligated to provide clear records of audits demonstrating that their systems reduce bias in employment decisionmaking as advertised, and those audits should show that the designers followed Equal Employment Opportunity Commission (EEOC) guidelines when building the platforms.
Also, in collaboration with the EEOC, the FTC could establish the Fair Automated Hiring Mark, which would be used to certify that automated hiring systems have passed the rigorous auditing process. As an imprimatur, the mark would be a useful signal of quality to consumers—both applicants and employers.
The FTC should also allow job applicants, who are consumers of AI-enabled online application systems, to sue under the Fair Credit Reporting Act (FCRA). The FCRA was long thought to apply only to the big three credit reporting agencies, but a close reading shows that it can apply whenever a report has been created for any “economic decision.” By this definition, the applicant profiles created by online automated hiring platforms are “consumer reports,” which means the entities that generate them (such as online hiring platforms) would qualify as consumer reporting agencies. Under the FCRA, anyone who is the subject of such a report can petition the agency that created it to see the results and demand corrections or amendments. Most consumers do not know they have these rights. The FTC should launch an education campaign to inform applicants about these rights so they can make use of them.