If you have a brain, you have bias. In theory, artificial intelligence (AI) shouldn't have this problem: a machine should be able to make completely objective decisions when, for example, screening resumes or reviewing interview videos to select the best job candidate. But because AI is trained on human data, it mimics human biases, writes the NeuroLeadership Institute (NLI) in Fast Company. That's why human oversight is still needed.
According to the article, managers can take the following steps to prevent biases from influencing their hiring decisions:
- Get curious: Educate yourself about how your company uses AI tools in recruitment, and learn to gauge the risk of bias at each stage, from writing the job description to screening resumes and ranking interviews.
- Develop fail-safes: Once you identify areas vulnerable to AI bias, set up checkpoints. For example, you can require HR to run AI-generated job descriptions by you before posting the position.
- Mitigate biases: When you discover bias in your algorithms, classify it using The SEEDS Model®. SEEDS groups biases into five categories: similarity, expedience, experience, distance, and safety. For example, maybe your algorithm has a similarity bias because it has learned to prefer candidates who graduated from the same college as many current employees. Labeling the bias helps you understand it and develop strategies to mitigate it. (A sketch of what an automated check for this example might look like follows this list.)
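To make the checkpoint idea concrete, here is a minimal sketch, not from the article, of a fail-safe for the similarity example above. Everything in it is a hypothetical illustration: the `Candidate` fields, the `similarity_bias_check` function, and the 1.5 flag threshold are all assumptions for the sake of the example, not a real tool and not part of The SEEDS Model® itself.

```python
from dataclasses import dataclass

# Hypothetical checkpoint: flag possible similarity bias in an AI resume screen.
# "Similarity bias" (the S in SEEDS) shows up here as the model advancing
# candidates who share a trait with current employees (e.g., alma mater)
# at a much higher rate than the applicant pool would suggest.

@dataclass
class Candidate:
    name: str
    college: str
    advanced_by_ai: bool  # did the screening tool advance this candidate?

def similarity_bias_check(candidates, employee_colleges, max_ratio=1.5):
    """Compare the share of same-college candidates among those the AI
    advanced against their share of the whole pool. A ratio well above 1
    suggests the model may have learned a similarity preference."""
    pool = list(candidates)
    same_pool = [c for c in pool if c.college in employee_colleges]
    advanced = [c for c in pool if c.advanced_by_ai]
    same_advanced = [c for c in advanced if c.college in employee_colleges]
    if not pool or not advanced or not same_pool:
        return None  # not enough data to draw any conclusion
    pool_rate = len(same_pool) / len(pool)
    advanced_rate = len(same_advanced) / len(advanced)
    ratio = advanced_rate / pool_rate
    return {
        "pool_rate": pool_rate,
        "advanced_rate": advanced_rate,
        "ratio": ratio,
        "flag_for_review": ratio > max_ratio,  # the human checkpoint trips here
    }

# Usage: if the check trips, route the slate back to a human reviewer.
candidates = [
    Candidate("A", "State U", True),
    Candidate("B", "State U", True),
    Candidate("C", "Tech College", False),
    Candidate("D", "Liberal Arts", False),
    Candidate("E", "State U", True),
    Candidate("F", "Night School", False),
]
report = similarity_bias_check(candidates, employee_colleges={"State U"})
if report and report["flag_for_review"]:
    print(f"Possible similarity bias (ratio {report['ratio']:.2f}): escalate to a human reviewer")
```

In this toy data, candidates from "State U" make up half the pool but all of the advanced slate, so the ratio of 2.0 trips the flag. The point is the checkpoint pattern: label the bias with its SEEDS category, measure it, and hand the decision back to a person when the numbers look skewed.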
Ultimately, AI algorithms are a reflection of our own biases. However, unlike humans, AI hasn’t yet developed the ability to detect and mitigate them. So until The SEEDS Model® is built into AI algorithms, it’s still up to us to provide checks and balances.
Read the full article in Fast Company here.