Bias Laundering in Interviews
“Bias laundering” is a phrase most commonly used to describe machine learning models. Because those models are trained on data from existing systems, they tend to replicate and institutionalize whatever bias those systems already contain. The “laundering” happens when the algorithm is then trusted to be objective or neutral, recasting the bias built into the system as inevitable and correct.

Evaluating interview candidates on vague qualities of questionable utility can have the same effect. One example is weighing how “excited” a candidate seems during the interview. Unless your interview rubric evaluates this trait in every candidate and judges them all by the same, unambiguous standard, you’re probably working backwards from a feeling rather …
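One practical guard against this is a fixed rubric applied identically to every candidate, where anything not in the rubric simply cannot count. As a minimal sketch (the criteria, anchor descriptions, and function names here are hypothetical, not a prescribed standard):

```python
# Minimal sketch of a structured interview rubric (hypothetical criteria).
# Every candidate is scored on the same explicit criteria with anchored
# scales, so a trait is either measured for everyone or for no one.

RUBRIC = {
    "problem_decomposition": "1=no plan, 3=partial plan, 5=clear stepwise plan",
    "communication": "1=unclear, 3=mostly clear, 5=clear and structured",
    "code_correctness": "1=doesn't run, 3=runs with bugs, 5=correct",
}

def score_candidate(scores: dict) -> float:
    """Average the rubric scores; reject traits outside the rubric."""
    extra = set(scores) - set(RUBRIC)
    if extra:
        raise ValueError(f"Traits scored outside the rubric: {sorted(extra)}")
    missing = set(RUBRIC) - set(scores)
    if missing:
        raise ValueError(f"Missing rubric scores: {sorted(missing)}")
    return sum(scores.values()) / len(scores)

# "Excitement" is not a rubric criterion, so it cannot sneak into the total.
print(score_candidate({
    "problem_decomposition": 4,
    "communication": 5,
    "code_correctness": 3,
}))  # 4.0
```

The point of the strictness is the laundering risk itself: a scorer that silently accepted an ad-hoc “excitement” score would let an unexamined feeling flow into a number that looks objective.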