Algorithmic Bias Is Groupthink Gone Digital

As corporations and governments grow ever more reliant on artificial intelligence to help them make decisions, algorithms have more and more power to influence our lives. We rely on algorithms to help us decide who gets hired, who gets a bank loan or mortgage, and who’s granted parole.

And when we think about AI — deep learning and neural networks, circuit boards and code — we like to imagine it as neutral and objective, free from the imperfections of human brains. Computers don’t make mistakes, and the very idea of bias is a uniquely human failing.

Right?

It’s true that our ancient primate brains, evolved for tribal warfare and adapted for life on the savannah, are riddled with systematic errors of judgment and perception that bias our decisions. As we like to say at the NeuroLeadership Institute: “If you have a brain, you have bias.”

But the reality is that algorithms, since they’re designed by humans, are far from neutral and impartial. On the contrary, algorithms have frequently been shown to have disparate impact on groups that are already socially disadvantaged — a phenomenon known as “algorithmic bias.” Just as a lack of dissenting voices in a group discussion can lead to groupthink, as we highlighted in our recent white paper, a lack of diversity in a dataset can lead to algorithmic bias.

We can think of this phenomenon as a kind of “digital groupthink.” Consider the following examples of digital groupthink gone wrong:

1. Medical Malpractice

Machine learning is a type of artificial intelligence in which a computer infers rules from a data set it’s given. But data sets themselves can be biased, which means the resulting algorithm may duplicate or even amplify whatever human bias already existed. Google’s word-embedding tool word2vec, for example, can correctly answer analogy questions like “sister is to woman as brother is to what?” (Answer: man.)

But when Google researchers trained the system on articles from Google News and asked it, “Father is to doctor as mother is to what?” the algorithm answered “nurse.” Based on the patterns in the news articles, the algorithm inferred that “father + medical profession = doctor” and “mother + medical profession = nurse.”

The inference was valid based on the dataset it studied, but it exposes a societal bias we should address, not perpetuate.
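For the curious, this analogy arithmetic is easy to reproduce. Here’s a minimal sketch in Python, assuming the open-source gensim library and its published copy of the Google News word2vec vectors (our source doesn’t specify the researchers’ exact tooling):

```python
# Minimal sketch: reproducing word2vec analogy queries with gensim.
# Assumes the gensim library and its hosted Google News vectors (~1.6 GB).
import gensim.downloader as api

# Load pretrained word2vec vectors trained on Google News articles.
model = api.load("word2vec-google-news-300")

# "sister is to woman as brother is to ?" via vector arithmetic:
# vec("woman") - vec("sister") + vec("brother") is closest to vec("man")
print(model.most_similar(positive=["woman", "brother"], negative=["sister"], topn=1))

# "father is to doctor as mother is to ?" surfaces the biased analogy:
print(model.most_similar(positive=["doctor", "mother"], negative=["father"], topn=1))
```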

2. Saving Face

In recent years, several companies have developed machine learning technology to identify faces in photographs. Unfortunately, studies show that these systems don’t recognize dark-skinned faces as well as light-skinned ones — a serious problem now that facial recognition is used not just in consumer electronics but also by law enforcement agencies such as the FBI.

In a study of commercial facial recognition systems from IBM, Microsoft, and a Chinese company called Face++, MIT researcher Joy Buolamwini found that the systems were better at classifying white faces than darker ones, and more accurate for men’s faces than for women’s.

IBM’s system, the Watson Visual Recognition service, got white male faces wrong just 0.3% of the time, compared to 34.7% for black women. Buolamwini’s study promptly went viral, and IBM, to its credit, responded swiftly, retraining its system on a fresh dataset and cutting its error rates nearly tenfold in a matter of weeks.
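The audit technique behind these numbers is worth making concrete: instead of reporting one overall accuracy figure, compute the error rate separately for each demographic subgroup and compare. A minimal sketch with invented records (the data below is illustrative, not Buolamwini’s actual dataset):

```python
# Sketch of a disaggregated error-rate audit. The records are invented;
# only the technique (per-subgroup error rates) mirrors the study.
from collections import defaultdict

# Each record: (subgroup, true_label, predicted_label) -- hypothetical values.
predictions = [
    ("lighter_male",   "male",   "male"),
    ("lighter_female", "female", "female"),
    ("darker_male",    "male",   "male"),
    ("darker_female",  "female", "male"),    # one misclassification
    ("darker_female",  "female", "female"),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, truth, predicted in predictions:
    totals[group] += 1
    if truth != predicted:
        errors[group] += 1

# Report each subgroup's error rate; large gaps between groups signal bias.
for group, total in totals.items():
    print(f"{group}: {errors[group] / total:.1%} error rate")
```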

3. Boy Scouts

Amazon has long been known as a pioneer of technological efficiency. It has found innovative ways to automate everything from warehouse logistics to merchandise pricing. But last year, when the company attempted to streamline its process for recruiting top talent, it discovered a clear case of algorithmic bias.

Amazon had developed a recruiting engine, powered by machine learning, that assigned candidates a rating of one to five stars. But the algorithm had been trained by observing patterns in resumes submitted to Amazon over a ten-year period. And because the tech industry has been male-dominated, most of the resumes submitted during that period came from men.

As a result, the hiring tool began to penalize resumes that contained the word “women’s,” as in “captain of the women’s chess club.” To its credit, Amazon quickly detected the gender bias in its algorithm, and the engine was never used to evaluate job candidates.
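To see how such a penalty gets learned, consider a deliberately tiny sketch of our own (not Amazon’s actual system): a bag-of-words classifier fit to skewed historical hiring outcomes ends up with a negative weight on the token “women.” Nothing in the code mentions gender; the bias rides in entirely on the training labels.

```python
# Toy illustration of the mechanism (not Amazon's model): a classifier
# trained on skewed historical outcomes learns to penalize "women's".
# All resumes and labels below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the chess club, software engineer",
    "software engineer, hackathon winner",
    "captain of the women's chess club, software engineer",
    "organizer of the women's coding society, software engineer",
]
hired = [1, 1, 0, 0]  # historical outcomes: skewed data, not merit

vectorizer = CountVectorizer()  # tokenizes "women's" to the token "women"
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for "women" comes out negative, so any resume
# containing the word is scored lower.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```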

It’s tempting to think that artificial intelligence will remove bias from our future decision-making. But so long as humans have a role to play in designing and programming the way AI “thinks,” there will always be the possibility that bias — and groupthink — will be baked in.

To learn more about eradicating groupthink in your organization, download “The Business Case: How Diversity Defeats Groupthink.”
