The ACLU has recently taken a firm stance against certain AI-powered hiring tools, filing complaints with both the FTC and the EEOC. These actions stem from concerns about potential discrimination against individuals with disabilities and certain racial backgrounds. Here’s what employers and AI developers need to understand about these developments and how they should adapt their hiring processes accordingly.
The ACLU’s Concerns
On May 30, the ACLU filed a complaint with the FTC, following an earlier charge filed with the EEOC, targeting three AI-powered hiring tools developed by Aon Consulting. The tools in question are:
1. Personality Assessment Test (ADEPT-15)
This test, designed to evaluate work styles and traits such as task performance and adaptability, has been used over 8 million times. The ACLU alleges it can unfairly screen out individuals with disabilities like depression, anxiety, and autism due to its focus on general personality traits not directly related to job performance.
2. Video Interviewing Platform (vidAssess-AI)
This tool allows employers to conduct asynchronous video interviews, assessing responses to pre-developed questions. The ACLU argues that, despite claims of eliminating bias, the tool could still discriminate based on disability and race because it relies on AI training data that may reflect and amplify existing biases.
3. Cognitive Ability Test (gridChallenge)
This gamified assessment measures working memory and cognitive function. The ACLU cites data showing racial disparities in test scores, with Asian, Black, Hispanic or Latino, and other non-white test-takers scoring lower than their white counterparts. The ACLU also argues that individuals with cognitive impairments, including autism, are unfairly disadvantaged.
Recommendations for Employers
Given these developments, employers should take proactive steps to ensure their hiring processes are fair and compliant. HR advisors and consultants offer valuable insights and support in creating AI Governance programs and vetting AI vendors to avoid potential legal pitfalls.
Establish an AI Governance System
Developing clear procedures and systems, including human oversight, for deploying AI in hiring is crucial. The National Institute of Standards and Technology (NIST), through its AI Risk Management Framework, provides guidance that can help structure these governance systems. Regularly evaluating these processes ensures they effectively manage AI systems and prevent bias.
- Human Oversight: Ensure there is a human element in reviewing AI outputs. Human judgment is essential in interpreting AI results and making final decisions.
- Regular Audits: Conduct regular audits of AI systems to detect and correct biases. This helps maintain fairness and accuracy in the hiring process.
- Training and Awareness: Train HR staff on the implications of AI in hiring and the importance of unbiased decision-making.
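The "regular audits" step above can be illustrated with a simple disparate-impact check. One common screening metric is the EEOC's "four-fifths rule": if a group's selection rate falls below 80% of the highest group's rate, that is a rough indicator of possible adverse impact worth investigating. The sketch below is a minimal illustration only; the group names and counts are hypothetical, and a real audit would involve larger samples, statistical significance testing, and legal review.

```python
def selection_rate(selected, applicants):
    """Fraction of applicants from a group who passed the AI screen."""
    return selected / applicants

def adverse_impact_ratios(groups):
    """Compare each group's selection rate to the highest group's rate.

    groups: dict mapping group name -> (selected, applicants).
    Returns dict of group -> ratio; under the four-fifths rule,
    ratios below 0.8 warrant closer review.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical screening outcomes from an AI assessment
outcomes = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

ratios = adverse_impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group_b's ratio is 0.30 / 0.48 ≈ 0.625, below 0.8
```

A check like this is only a first-pass screen, not a legal determination, but running it on every assessment cycle gives the human reviewers in your governance process concrete numbers to act on.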
Vet Your Vendors
Thoroughly evaluating AI vendors and their products is critical to ensuring they meet anti-discrimination standards. Relying solely on vendor claims is risky, so it's vital to ask the right questions, often with the help of legal advisors.
- Request Transparency: Ask vendors for detailed information about their AI tools’ development and data sources. Transparency in these areas can help assess potential biases.
- Legal Consultation: Consult with legal experts to ensure the AI tools comply with employment laws and anti-discrimination regulations.
- Vendor Accountability: Hold vendors accountable for their claims. Ensure they provide regular updates and evidence of bias mitigation in their AI tools.
Be Transparent
Informing candidates and employees about the AI tools used in your hiring process builds trust and ensures compliance with legal requirements in some jurisdictions, such as Illinois and New York City. Transparency also aligns with ethical hiring practices.
- Clear Communication: Clearly communicate the use of AI in the hiring process in job postings and during interviews. Explain how the AI tools work and their benefits.
- Candidate Consent: Obtain consent from candidates before using AI tools. Ensure they understand how their data will be used and protected.
- Feedback Mechanism: Provide a mechanism for candidates to ask questions or raise concerns about AI use. This fosters an open and transparent hiring environment.
Offer Accommodations
Providing alternatives to AI-based assessments for individuals with disabilities is essential to ensure fairness and inclusivity. This could include offering different formats or settings for assessments.
- Alternative Formats: Offer alternative testing formats that cater to different needs, such as written tests for those who may struggle with video interviews.
- Quiet Environments: Provide the option to complete assessments in quiet settings to minimize distractions and stress.
- Customized Assessments: Work with HR professionals to develop customized assessment methods that consider individual needs.
Align Your Questions
Ensuring that AI-driven interview questions are directly related to job requirements helps avoid unnecessary bias and focuses on relevant skills and qualifications.
- Job-Related Questions: Develop interview questions that are specifically aligned with the job’s essential functions and responsibilities.
- Regular Review: Regularly review and update the questions to ensure they remain relevant and fair.
- Bias Checks: Implement checks to ensure questions do not inadvertently favor certain groups.
Navigating the Future of AI in Hiring
The landscape of AI in hiring is rapidly evolving, and staying ahead requires proactive measures and expert guidance. Employers who partner with HR specialists can navigate these changes effectively, maintaining compliance and fairness in their hiring processes. Don't wait until you're faced with legal challenges. Act now to implement robust AI governance and transparent practices.
To learn more about how you can leverage AI responsibly and enhance your hiring practices, contact me for expert support tailored to your business needs. Stay informed, stay compliant, and build a fairer workplace today.