How Diversity Defeats Groupthink
Human beings enjoy cohesion so much that we are often afraid to say anything to disturb it. Diversity can help.
As corporations and governments grow ever more reliant on artificial intelligence to help them make decisions, algorithms have more and more power to influence our lives. We rely on algorithms to help us decide who gets hired, who gets a bank loan or mortgage, and who's granted parole. And when we think about AI — deep learning and neural networks, circuit boards and code — we like to imagine it as neutral and objective, free from the imperfections of human brains. Computers don't make mistakes, and the very idea of bias is a uniquely human failing. Right?

It's true that our ancient primate brains, evolved for tribal warfare and adapted for life on the savannah, are riddled with systematic errors of judgment and perception that bias our decisions. As we like to say at the NeuroLeadership Institute: "If you have a brain, you have bias." But the reality is that algorithms, since they're designed by humans, are far from neutral and impartial. On the contrary, algorithms have frequently been shown to have a disparate impact on groups that are already socially disadvantaged — a phenomenon known as "algorithmic bias."

Just as a lack of dissenting voices in a group discussion can lead to groupthink, as we highlighted in our recent white paper, a lack of diversity in a dataset can lead to algorithmic bias. We can think of this phenomenon as a kind of "digital groupthink." Consider the following examples of digital groupthink gone wrong.

1. Medical Malpractice

Machine learning is a type of artificial intelligence in which a computer infers rules from a data set it's given. But data sets themselves can be biased, which means the resulting algorithm may duplicate or even amplify whatever human bias already existed.

The Google semantic analysis tool word2vec can correctly answer questions like "sister is to woman as brother is to what?" (Answer: man.) But when Google researchers trained the system on articles from Google News and asked it, "Father is to doctor as mother is to what?" the algorithm answered "nurse." Based on the articles in the news, it inferred that "father + medical profession = doctor" and "mother + medical profession = nurse." The inference was valid based on the dataset it studied, but it exposes a societal bias we should address, not perpetuate.

2. Saving Face

In recent years, several companies have developed machine learning technology to identify faces in photographs. Unfortunately, studies show that these systems don't recognize dark-skinned faces as well as light-skinned ones — a serious problem now that facial recognition is used not just in consumer electronics, but also by law enforcement agencies like the FBI.

In a study of commercial facial recognition systems from IBM, Microsoft, and a Chinese company called Face++, MIT researcher Joy Buolamwini found that the systems were better at classifying white faces than darker ones, and more accurate for men's faces than for women's. IBM's system, the Watson Visual Recognition service, got white male faces wrong just 0.3% of the time. Compare that to 34.7% for black women. Buolamwini's study promptly went viral, and IBM, to its credit, responded swiftly, retraining its system with a fresh dataset and improving its recognition rates tenfold in a matter of weeks.
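To make the idea of measuring this kind of bias concrete, here is a minimal sketch of a disaggregated evaluation: instead of reporting a single overall error rate, you break results down by subgroup and compare. The function name, subgroup labels, and example predictions below are illustrative assumptions, not data from Buolamwini's study.

```python
# A minimal sketch of disaggregated evaluation: report error rates per
# subgroup rather than one overall number. All data below is invented.

from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (subgroup, predicted_label, true_label) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical outputs from a face-classification model.
results = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),   # misclassification
    ("darker-skinned female", "female", "female"),
]

for group, rate in error_rates_by_group(results).items():
    print(f"{group}: {rate:.1%} error rate")
```

The point is simply that a model can look accurate on average while failing badly for a particular group, which is the pattern Buolamwini's study surfaced.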
3. Boy Scouts

Amazon has long been known as a pioneer of technological efficiency. It has found innovative ways to automate everything from warehouse logistics to merchandise pricing. But last year, when the company attempted to streamline its process for recruiting top talent, it discovered a clear case of algorithmic bias.

Amazon had developed a recruiting engine, powered by machine learning, that assigned candidates a rating of one star to five stars. But the algorithm had been trained by observing patterns in resumes submitted to Amazon over a ten-year period. And since the tech industry has been male-dominated, the most qualified and experienced resumes submitted during that period tended to come from men. As a result, the hiring tool began to penalize resumes that contained the word "women's," as in "captain of the women's chess club." (A toy illustration of how this can happen appears at the end of this article.) To its credit, Amazon quickly detected the gender bias in its algorithm, and the engine was never used to evaluate job candidates.

It's tempting to think that artificial intelligence will remove bias from our future decision-making. But so long as humans have a role to play in designing and programming the way AI "thinks," there will always be the possibility that bias — and groupthink — will be baked in.

To learn more about eradicating groupthink in your organization, download "The Business Case: How Diversity Defeats Groupthink."
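For readers who want to see how this kind of bias can emerge mechanically, here is a toy sketch in Python, assuming scikit-learn is available. It is not Amazon's system and all of the "resumes" are invented; it simply shows that a model trained on historically skewed hiring outcomes can learn a negative weight for the word "women's" without anyone programming it to.

```python
# A toy illustration (not Amazon's actual system) of how a model trained on
# historically skewed outcomes can learn to penalize the word "women's".
# Requires scikit-learn; every "resume" and label below is invented.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Historical data: in a male-dominated applicant pool, resumes mentioning
# "women's" rarely appear among the candidates who were historically hired.
resumes = [
    "captain of the chess club, python developer",            # hired
    "software engineer, led robotics team",                   # hired
    "python developer, hackathon winner",                     # hired
    "captain of the women's chess club, python developer",    # not hired
    "women's coding society organizer, software engineer",    # not hired
    "led robotics team, hackathon winner",                    # hired
]
hired = [1, 1, 1, 0, 0, 1]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the token "women" (the default tokenizer
# splits off the possessive). A negative weight means the model penalizes
# resumes containing that word.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':", round(weights["women"], 3))
```

Nothing in the code mentions gender explicitly; the penalty falls out of the pattern in the training data, which is why auditing the data matters as much as auditing the model.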