Unconscious Racial and Gender Biases: The role of IAT

Recently the term "unconscious bias," referring to race and sex, has increasingly appeared in the media. It appears that the support for these claims comes from research conducted using the Implicit Association Test (IAT). However, as I understand it, the IAT is not a particularly reliable measure. My questions are quite simple:

Are these findings reliable and valid?

What is driving these effects? In-group/out-group differences, sexism and racism, methodological errors etc.

Can the findings of previous IAT studies be used to extrapolate to unconscious racism and sexism? What are the consequences for the research and media based upon these findings?

RULES

I understand that this is a very emotive subject, but please try to remain rational. Any emotional or subjective answers will be rejected.

Note: I have done a bit of background reading while creating this question, but I don't have time to answer it myself at the moment.

Here are a few references…

  1. Lebrecht (2009). Perceptual other-race training reduces implicit racial bias.
  2. Kaufman (2011). Psychology Today article.
  3. Rezaei (2011). Validity and reliability of the IAT: Measuring gender and ethnic stereotypes.

Gender Bias: Is it Real? How to Overcome It

This article is an excerpt from the Shortform summary of "Blink" by Malcolm Gladwell.

Gender roles in society are finally being seen as outdated, but many of us still feel the need to play into certain gender roles. Why do we continue to be trapped in gender roles that no longer seem relevant to our modern lives?

The existence of gender roles in society and implicit gender bias may be the result of our unconscious associations. Learn how our unconscious biases affect our conscious decisions, and take action to change your implicit gender biases.

Why We Still Have Gender Roles in Society

Even when we consciously believe that a woman has the right to opt-out of having children, have a thriving career, and/or wear what she pleases, we still have unconscious, implicit gender biases that tell us otherwise. This preserves the traditional gender roles in society.

Statistics from the administration of Harvard's Implicit Association Test (IAT) demonstrate that, regardless of our stated beliefs, most of us pair the concept of "male" with the concept of "career," and pair "female" with "family." When the IAT jumbles these pairings, our reaction times are slower. This helps explain why gender roles in society persist.

For example, when given two categories—Male/Career and Female/Family—we can sort words like “laundry,” “entrepreneur,” “merchant,” and “siblings” pretty quickly.

But change the categories to Male/Family and Female/Career, and the conscious mind has to course correct for the unconscious mind, which still yokes “male” with “career” and “female” with “family.” This slows down our reaction times. It also demonstrates our implicit gender biases.

Although we may not like it or consciously agree with it, most of us have a moderate or strong “automatic male association” when it comes to the workforce. Conversely, we associate females with the home and family. These implicit gender biases cement the gender roles in society that have been in place for centuries.

Can We Change Our Implicit Gender Biases?

Yes, but it takes effort. It's possible to fight the Warren Harding error (judging based on appearances) and to retrain your implicit assumptions by being aware of them and actively using your conscious mind to counter them. One car salesman has successfully used this strategy to ward off his implicit biases about customers. His example shows that it's possible to fight implicit gender biases (among other biases), which might loosen our ties to traditional gender roles in society.

Bias in Car Sales

Car salesmen have a history of prejudging customers. A 1990s study of how a customer's race and gender affect car prices, covering 242 Chicago dealerships, found that dealers offered white men the lowest prices (an average of $725 above what the dealer paid for the car), followed by white women ($935 above invoice), Black women ($1,195 above invoice), and finally, Black men ($1,687 above invoice).

You might think this is a conscious choice on the part of the dealers. They assume that anyone who isn’t a white male is a sucker, stupid enough to pay a car’s sticker price.

But the actors playing the part of customers in the study made it very obvious that they weren’t stupid. Researchers instructed all “customers,” regardless of race or gender, to make it clear that they were educated, had successful careers, and lived in wealthy neighborhoods. Black and female customers bargained for an average of 40 minutes, demonstrating that they were not willing to pay the sticker price. (White men didn’t have to negotiate at all. This is implicit gender bias and implicit racial bias in action.)

The conscious minds of the dealers must have told them that these people clearly weren’t suckers. But their unconscious minds weren’t convinced. Chicago car salesmen unconsciously link the concept of a “lay-down” with women and minorities.

Implicit gender biases and race biases perpetuate stereotypes about women and minorities and contribute to the defined gender roles in society and the roles we associate with people of different races and backgrounds.

If for no other reason than to make more sales, it’s in the seller’s best interest to consciously fight unconscious biases. At least one man has caught on. Counter to general car-selling wisdom, the best salesman at one New Jersey dealership quotes everyone the same price. His reputation for being fair gets him many referrals, which make up a third of his business. He sells about 20 cars per month, double the average of most salesmen.

This salesman attributes his success to his conscious belief that you can't know which customer has the most money to spend or which one will be a lay-down. He understands that unconscious attitudes drive snap judgments about customers, and he makes a conscious effort to counter them.

Use this example as inspiration to consciously counter your implicit gender biases and knock down the barriers produced by traditional gender roles in society.


How the brain is built for bias

With significant determination and effort, we can become aware of our biases and work to eliminate them. "Short interventions don't yield long-lasting results," Lai says—our attempts must be consistent and sustained. Self-monitoring is part of it, but the bulk of the effort involves recognizing and working to change the culture of systemic racism and discrimination that instilled those biases in the first place.

Why is this so challenging? Because in addition to living in a culture that reinforces our biases every single day, we are working against the way our minds naturally function. Human beings regularly take shortcuts in thinking—if we had to ponder everything carefully, from how to dress in the morning to how to perform a work task we've completed thousands of times, we'd collapse from mental exhaustion. We do such things without thinking. These mental shortcuts can lead to problematic biases because they're often based on discriminatory images or ideas we've seen or heard repeatedly.

"Implicit biases generally favor the socially dominant group in a society," Lai says, and in the U.S., that often means white people—specifically straight, white, relatively thin, Christian men. So strong is the idea of the "norm" in our society that members of a marginalized group can have biased thoughts about their peers with similar traits (for example, people in large bodies often think negatively about other large people), says Mary S. Himmelstein, Ph.D., an assistant professor of psychological sciences at Kent State University in Kent, OH.

And marginalizations build on one another, so while a white woman's ideas may often be ignored in a meeting full of men, a Black woman's are even more likely to be ignored, says Eva Pietri, Ph.D., an assistant professor of psychology at Indiana University-Purdue University at Indianapolis. If that woman is also disabled or a lesbian, and/or has a large body, she is in many settings exponentially more likely to be looked through or mistreated by her less marginalized peers.

One way in which experts determine what unconscious biases we have is through a test developed at Harvard known as the Implicit Association Test (IAT), in which subjects are asked to associate images with words as quickly as possible.

"Concepts linked in memory get categorized faster," says Lai. "You can group 'peanut butter' with 'jelly' in less time than linking 'peanut butter' to 'pickle.'" These tests consistently find that, for instance, a picture of a white person is joined with the word 'good' faster than one of a Black person is joined with the same adjective, especially when the tester is white. (You can take the IAT online, but bear in mind that it is not necessarily a good measure of an individual's biases so much as of the biases we have in aggregate.)


How Mindfulness Can Help Dislodge Unconscious Racial Biases

As the national conversation about racial prejudice—and its role in deadly police confrontations with unarmed people of color—continues, one point of analysis has been how to address and reform implicit bias. As Mother Jones writer Chris Mooney explains, we've reached a new race paradox in America: overt racism is less acceptable than ever, but we're a culture that loses a black man to police violence once every 28 hours. Racial biases are also still alive and well within academia and in the workforce.

Researchers from Central Michigan University may have uncovered a finding that could contribute to addressing implicit racial associations in the subconscious. Practicing mindfulness could help combat racial biases, according to the small, new study.

For the study, 72 white students completed the Implicit Association Test (IAT), a social psychology metric meant to examine the strength of a person's automatic associations. For this particular experiment, the test paired images of black, white, old and young faces with negatively or positively associated words.

Before the IAT, half of the participants listened to a 10-minute audio recording about mindfulness meditation. The recording instructed them to become aware of physical sensations, and to accept these sensations as well as their thoughts without resistance or judgment. The other half listened to a 10-minute discussion of natural history.

The researchers found, consistent with previous research, that white people have a quicker response time for positive words associated with white faces than positive words associated with black faces, as well as a quicker response time for negative words when paired with black faces rather than white faces. The subjects also had stronger associations between young and good, as well as old and bad.

However, the study also demonstrated that the short introduction to mindfulness meditation decreased these implicit age and race biases. This may be because mindfulness reduces the brain's reliance on automatic associations, thereby tempering biased thinking, the researchers hypothesized.

Last year, New York University research found that among people with strong race biases, there are larger differences in how the brain registers faces of different races, meaning that they perceive greater differences between black and white faces.

Still, the new findings provide some heartening evidence that changing knee-jerk reactions may be possible—and not only when it comes to race.

"Essentially, mindfulness should reduce the negative associations we have with any stereotyped group, allowing us to treat people on a more individual level rather than through a layer of prejudgment based on the associations of their group," one of the study's authors, Dr. Adam Lueke, said in an email to The Huffington Post.

The study was published in the journal Social Psychological and Personality Science.


The world is relying on a flawed psychological test to fight racism

In 1998, the incoming freshman class at Yale University was shown a psychological test that claimed to reveal and measure unconscious racism. The implications were intensely personal. Even students who insisted they were egalitarian were found to have unconscious prejudices (or “implicit bias” in psychological lingo) that made them behave in small, but accumulatively significant, discriminatory ways. Mahzarin Banaji, one of the psychologists who designed the test and leader of the discussion with Yale’s freshmen, remembers the tumult it caused. “It was mayhem,” she wrote in a recent email to Quartz. “They were confused, they were irritated, they were thoughtful and challenged, and they formed groups to discuss it.”

Finally, psychologists had found a way to crack open people’s unconscious, racist minds. This apparently incredible insight has taken the test in question, the Implicit Association Test (IAT), from Yale’s freshmen to millions of people worldwide. Referencing the role of implicit bias in perpetuating the gender pay gap or racist police shootings is widely considered woke, while IAT-focused diversity training is now a litmus test for whether an organization is progressive.

This acclaimed and hugely influential test, though, has repeatedly fallen short of basic scientific standards.

There’s little doubt we all have some form of unconscious prejudice. Nearly all our thoughts and actions are influenced, at least in part, by unconscious impulses. There’s no reason prejudice should be any different.

But we don't yet know how to accurately measure unconscious prejudice. We certainly don't know how to reduce implicit bias, and we don't know how to influence unconscious views to decrease racism or sexism. There are now thousands of workplace talks and police trainings and jury guidelines that focus on implicit bias, but we still have no strong scientific proof that these programs work.

The implicit bias narrative also lets us off the hook. We can't feel as guilty or be held to account for racism that isn't conscious. The forgiving notion of unconscious prejudice has become the go-to explanation for all manner of discrimination, but the shaky science behind the IAT suggests this theory isn't simply convenient, but false. And if implicit bias is a weak scapegoat, we must confront the troubling reality that society is still, disturbingly, all too consciously racist and sexist.

There are various psychological tests purporting to measure implicit bias; the IAT is by far the most widely used. When social psychologists Banaji (now at Harvard University) and Anthony Greenwald of the University of Washington first made the test public almost 20 years ago, the accompanying press release described it as revealing "the roots of" unconscious prejudice in 90-95% of people. It has been promoted as such in the years since, most vigorously by "Project Implicit," a nonprofit based at Harvard University and founded by the creators of the test, along with University of Virginia social psychologist Brian Nosek. Project Implicit's stated aim is to "educate the public about hidden biases"; some 17 million implicit bias tests had been taken online by October 2015, courtesy of the nonprofit.

There are more than a dozen versions of the IAT, each designed to evaluate unconscious social attitudes towards a particular characteristic, such as weight, age, gender, sexual orientation, or race. They work by measuring how quick you are to associate certain words with certain groups.

The test that has received the most attention, both within and outside psychology, is the black-white race IAT. It asks you to sort various items: Good words (e.g. appealing, excellent, joyful), bad words (e.g. poison, horrible), African-American faces, and European-American faces. In one stage (the order of these stages varies with each test), words flash by onscreen, and you have to identify them as “good” or “bad” as quickly as possible, by pressing “i” on the keyboard for good words and “e” for bad words. In another stage, faces appear, one at a time, and you have to identify them as African American or European American by pressing “i” or “e,” respectively.

Then the test shows you both words and faces (separately, one at a time, but within the same stage). You're told to hit "e" any time you see a European-American face or a good word, and "i" for an African-American face or a bad word. In yet another stage, you must hit "e" for African-American faces or good words, and "i" for European-American faces or bad words.

The slower you are and the more mistakes you make when asked to categorize African-American faces and good words using the same key, the higher your level of anti-black implicit bias—according to the test.
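To make that scoring rule concrete, here is a minimal sketch in Python of how the reaction-time comparison could be computed from recorded trials. The trial data and the simple mean-latency difference are illustrative assumptions only; the published scoring algorithm (the "D" measure) is more involved, penalizing errors and scaling by the spread of each test-taker's latencies.

```python
# Minimal sketch of IAT-style scoring; the data and the mean-difference
# rule are hypothetical, not Project Implicit's production algorithm.
from statistics import mean

# Each trial: (pairing, latency_ms, correct). "congruent" = European-American
# faces share a key with good words; "incongruent" = African-American faces
# share a key with good words.
trials = [
    ("congruent", 642, True), ("congruent", 588, True),
    ("congruent", 701, True), ("incongruent", 803, True),
    ("incongruent", 755, False), ("incongruent", 899, True),
]

def iat_effect_ms(trials):
    """Mean latency difference between the two combined stages.

    Only correct responses are kept in this simplified version; a positive
    result means slower answers when good words share a key with
    African-American faces, which the test reads as anti-black bias.
    """
    cong = [ms for pairing, ms, ok in trials if ok and pairing == "congruent"]
    incong = [ms for pairing, ms, ok in trials if ok and pairing == "incongruent"]
    return mean(incong) - mean(cong)

print(f"IAT effect: {iat_effect_ms(trials):.0f} ms")
```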

“Implicit bias” became a buzzword largely thanks to claims that the IAT could measure unconscious prejudice. The IAT itself doesn’t purport to increase diversity or put an end to discriminatory managers. But it has certainly been deployed that way, partly due to its creators’ outreach. In 2006, Scientific American praised Banaji for telling investment bankers, media executives, and lawyers how their “buried biases” can cause “mistakes.” “Part of Mahzarin [Banaji]’s genius was to see the IAT’s potential impact on real-world issues,” Princeton University social psychologist Susan Fiske said at the time.

“There’s the idea that we can decrease biases by slightly overhyping the findings,” says Edouard Machery, professor at the Center for Philosophy of Science at the University of Pittsburgh. “I’m not sure it was intentional. Every scientist must persuade other people that what they do is worth doing.”

HR departments quickly picked up the theory, and implicit-bias workshops are now relied on by companies hoping to create more egalitarian workplaces. Google, Facebook, and other Silicon Valley giants proudly crow about their implicit-bias trainings. The results are underwhelming, at best: Facebook has made just incremental improvements in diversity; Google insists it's trying but can't show real results; and Pinterest found that unconscious bias training simply didn't make a difference. Implicit-bias workshops certainly didn't influence the behavior of then-Google employee James Damore, who complained about the training days and wrote a scientifically ill-informed rant arguing that his female colleagues were biologically less capable of working at the company.

Silicon Valley companies aren’t the only ones working on their “implicit bias” problem. Police forces, The New York Times, countless private companies, US public school districts, and universities such as Harvard have also turned to implicit-bias training to address institutional inequality.

There’s a typical format for workplace implicit-bias programs: Instructors first talk about how we all have unconscious prejudice. Then they run through related psychological studies—some of which, such as a commonly cited paper showing resumes with white names get more callbacks than those with non-white names, show prejudice rather than unconscious prejudice. Next, they have participants take the IAT, which purports to reveal their hidden biases, and conclude the program with discussions about how to be aware of and combat behavior driven by such biases.

The latest scientific research suggests there's a very good reason why these well-meaning workshops have been so utterly ineffectual. A 2017 meta-analysis of 494 previous studies (currently under peer review and not yet published in a journal), conducted by several researchers including Nosek, found that reducing implicit bias did not affect behavior. "Our findings suggest that changes in measured implicit bias are possible, but those changes do not necessarily translate into changes in explicit bias or behavior," wrote the psychologists.

“I was pretty shocked that the meta-analysis found so little evidence of a change in behavior that corresponded with a change in implicit bias,” Patrick Forscher, psychology professor at the University of Arkansas and one of the co-authors of the meta-analysis, wrote in an email.

Forscher, who started graduate school believing that reducing implicit bias was a strong way of changing behavior and conducted research on how to do so, is now convinced that approach is misguided. “I currently believe that many (but not all) psychologists, in their desire to help solve social problems, have been way too overconfident in their interpretation of the evidence that they gather. I count myself in that number,” he wrote. “The impulse is understandable, but in the end it can do some harm by contributing to wasteful, and maybe even harmful policy.”

It’s highly plausible that the scientists who created the IAT, and now ardently defend it, believe their work will change the world for the better. Banaji sent me an email from a former student that compared her to Ta-Nehisi Coates, Bryan Stevenson, and Michelle Alexander “in elucidating the corrosive and terrifying vestiges of white supremacy in America.”

Greenwald explicitly discouraged me from writing this article. “Debates about scientific interpretation belong in scientific journals, not popular press,” he wrote. Banaji, Greenwald, and Nosek all declined to talk on the phone about their work, but answered most of my questions by email.

I saw a similar reluctance to criticize implicit bias among friends and colleagues. Taking the test, and buying into the concept of implicit bias, feels both open-minded and progressive.

I first took the IAT a few years ago, long before I was aware of these scientific disputes. It showed I had a moderate bias against African Americans and a slight bias towards associating men with careers and women with family. In other words, I told myself, I was both racist and sexist. It was shocking. Admitting this to myself also felt a little noble: I was recognizing my own involvement in structural inequalities.

Several others have told me they felt similarly when they received their own implicit-bias test results. One friend said her colleagues wouldn’t discuss diversity at all were it not for the implicit-bias workshops. So, as she so bluntly asked: Why was I stirring up shit?

One reason is that, in a world where a widely held conspiracy theory claims that liberals invented climate change, I'm deeply uncomfortable with failing to report on scientific findings simply because they're politically inconvenient. Even if the IAT is eventually proven solid, there are currently heated academic debates on the subject—reaching levels of "personal antagonism," says Forscher—and the public deserves to know about these scientific doubts.

There are also serious practical implications. Society is mired in prejudice, and implicit bias workshops attempting to solve the problem could be little more than an extremely well-funded but ineffectual distraction.

Finally, though there are plenty of good intentions behind the public embrace of implicit bias, this enthusiasm also conveniently avoids the uncomfortable alternative: if the science behind implicit bias is flawed, and unconscious prejudice isn't a major driver of discrimination, then society is likely far more consciously prejudiced than we pretend.

In recent years, a series of studies have led to significant concerns about the IAT’s reliability and validity. These findings, raising basic scientific questions about what the test actually does, can explain why trainings based on the IAT have failed to change discriminatory behavior.

First, reliability: in psychology, a test has strong "test-retest reliability" when a user can retake it and get a roughly similar score. Perfect reliability is scored as 1 and means that when a group of people repeatedly take the same test, their scores always rank in exactly the same order. It's a tough ask. A psychological test is considered strong if it has a test-retest reliability of at least 0.7, and preferably over 0.8.

Current studies have found the race IAT to have a test-retest reliability score of 0.44, while the IAT overall is around 0.5 (pdf); even the high end of that range is considered "unacceptable" in psychology. It means users get wildly different scores whenever they retake the test.

Part (though not all) of these variations can be attributed to the “practice effect”: it’s easy to improve your score once you know how the test works. Psychologists typically counter the influence of “practice effects” by giving participants trial sessions before monitoring their scores, but this doesn’t help the IAT. Scores often continue to fluctuate after multiple sessions, and such a persistent practice effect is a serious concern. “For other aspects of psychology if you have a test that’s not replicated at 0.7, 0.8, you just don’t use it,” says Machery.
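For readers unfamiliar with the statistic, the sketch below shows, with invented numbers, what a test-retest reliability figure like the 0.44 cited above summarizes: the correlation between the scores a group of people get on two sittings of the same test.

```python
# Illustration of test-retest reliability with made-up scores; a real
# estimate would use many more test-takers and the official IAT scoring.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation between paired observations."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical bias scores for five people who took the test twice.
first_sitting = [0.45, 0.10, 0.80, 0.30, 0.55]
second_sitting = [0.20, 0.50, 0.60, 0.05, 0.75]

print(f"test-retest reliability r = {pearson_r(first_sitting, second_sitting):.2f}")
# Strong psychological tests reach r of roughly 0.7 to 0.8.
```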

The second major concern is the IAT’s “validity,” a measure of how effective a test is at gauging what it aims to test. Validity is firmly established by showing that test results can predict related behaviors, and the creators of the IAT have long insisted their test can predict discriminatory behavior. This point is absolutely crucial: after all, if a test claiming to expose unconscious prejudice does not correlate with evidence of prejudice, there’s little reason to take it seriously.

In Blindspot, a 2013 book aimed at general audiences, Banaji and Greenwald wrote:

[T]he automatic White preference expressed on the Race IAT…predicts discriminatory behavior even among research participants who earnestly (and, we believe, honestly) espouse egalitarian beliefs. That last statement may sound like a self-contradiction, but it’s an empirical truth. Among research participants who describe themselves as racially egalitarian, the Race IAT has been shown, reliably and repeatedly, to predict discriminatory behavior that was observed in the research.

So it came as a major blow when four separate (pdf) meta-analyses (pdf), undertaken between 2009 and 2015—each examining between 46 and 167 individual studies—all showed the IAT to be a weak predictor of behavior. Two of the meta-analyses focus on the race IAT while two examine the IAT's links with behavior more broadly, but all four show weak predictive abilities.

Proponents of the IAT tend to point to individual studies showing strong links between test scores and racist behavior. Opponents counter by highlighting those that, counterintuitively, show a link between biased IAT scores and less discriminatory behavior. They may quibble, but single studies are no longer considered compelling evidence in psychology. The field, wracked by a replication crisis that found key results of single studies often cannot be recreated, now acknowledges that any one study is fallible. “People [who continue to believe in the validity of the IAT] are looking at the one or two studies that seem to support their view,” says Machery.

The four meta-analyses undertaken so far suggest there’s little use for the IAT outside of academia. Forscher says that while the test may reflect a psychological process that’s interesting to researchers, he’s “not very confident at all” that it measures a thought process that causes real-life discrimination. “I don’t think that working scientists should completely abandon the IAT in their lab studies,” he wrote in an email. “At the same time, I don’t think that the race IAT should be used to claim that implicit bias is causing disparities in police use of force, for example.”

Machery argues that if the IAT cannot meaningfully predict behavior, the results of the test are largely irrelevant. He compares someone with a low anti-black IAT score but who behaves in a prejudiced way to someone who insists that they’re courageous but who behaves in a consistently cowardly manner. “You would not say that he’s explicitly courageous and implicitly a coward,” he says. “You would say he’s a coward.”

No psychologist or neuroscientist can convincingly point to a clear divide between conscious and unconscious thought. And so psychology’s attempt to solve discrimination by delineating between an amorphous collection of conscious and unconscious biases is both simplistic and misguided.

"I think a lot of habits we have work like muscle memory; we do them automatically, without thinking about them explicitly," says Luvell Anderson, philosophy professor at the University of Memphis. "But it's not necessarily clear to me that those habits are unconscious in any deep or significant way."

“Finding a way to measure attitudes that go around consciousness has always been a goal of social psychology,” adds Machery. “Again and again we fail.”

Several papers suggest the IAT does not measure truly unconscious thought. Even if people insist they’re egalitarian, they show awareness of their implicit biases, and seem able to predict the results they get from various IATs.

When Banaji and Greenwald first came up with the phrase “implicit bias,” they claimed it reflected thinking that is “unavailable to self-report or introspection.” The research showing that people are aware of their implicit biases suggests this definition is suspect.

Though there are countless articles and academic references claiming the test reveals unconscious thinking, Nosek says the IAT is not strictly about unconscious bias. “The use of the term ‘implicit’ in our field, for example, is deliberate to avoid specific commitment about consciousness,” he wrote in an email. “Implicit,” he explained, describes a “variety of concepts” including “unaware, unintentional, fast, efficient, unconscious.”

But, ultimately, the effort to avoid “commitment about consciousness” can easily be interpreted as an attempt to use scientific jargon to obscure meaning.

The psychologists behind the IAT have emphasized the importance of unconscious bias in part by minimizing the role of conscious prejudice.

In Blindspot, Greenwald and Banaji wrote:

[G]iven the relatively small proportion of people who are overtly prejudiced and how clearly it is established that automatic race preference predicts discrimination, it is reasonable to conclude not only that implicit bias is a cause of Black disadvantage but also that it plausibly plays a greater role than does explicit bias in explaining the discrimination that contributes to Black disadvantage.

In an email, though, Greenwald acknowledged that there’s currently no perfect assessment of explicit bias. The “relatively small proportion of people” referenced in the above quote, he wrote, are the few Americans who voiced support for segregation and disapproval of racial intermarriage, equal employment, and African-American presidential candidates in national surveys. These questionnaires cannot identify those who hide their prejudice and only express such views “in a private setting with like-minded others, or only to themselves in a private diary, or only in their thoughts,” added Greenwald. The number of people who keep their prejudices private “must be higher than the ‘small proportion’ in the quote from Blindspot,” he wrote. “But how much larger—is it 20%? 25%? More?”

Academics know that self-reported attitudes are not a strong assessment of beliefs. Forscher believes much of the early excitement around the IAT “came from a feeling that self-report measures are untrustworthy and that we need something better that gets at people’s ‘true’ attitudes.”

In Blindspot, Banaji and Greenwald estimate some 40% of white Americans are “uncomfortable egalitarians” who are more likely to help white people than black—in situations ranging from job interviews to first aid response—but are “earnestly” unaware of their prejudiced behavior.

It’s certainly plausible that some section of the population is prejudiced and completely oblivious to this fact. But there’s also a significant number of people who are prejudiced, know that they’re not always egalitarian, but don’t acknowledge their biases in psychological questionnaires or in conversation. Even if they don’t agree with explicitly bigoted views, most should recognize that they don’t behave in a completely race- or gender-blind manner. Many are prejudiced but hope to escape the label of “racist” or “sexist.” And the theory of implicit bias has handed them an excuse.

“It puts some space between the actor and the act,” says Anderson. “If you can say, ‘Yes, I did some act that might be judged racist or sexist or homophobic but I didn’t do so intentionally or knowingly, it was the result of some bias that I wasn’t explicitly aware of,’ that seems to work as a way of distancing oneself from full responsibility for the action.”

In the face of current evidence, Forscher wrote, “most implicit-bias researchers no longer believe that implicit measures are assessing an attitude that is more ‘true’ than self-report measures.” Indeed, the meta-analyses showed that the IAT is no better at predicting discriminatory behavior (including microaggressions) than explicit measures of explicit bias, such as the Modern Racism Scale, which evaluates racism simply by asking participants to state their level of agreement with statements like, “Blacks are getting too demanding in their push for equal rights.”

Thanks to psychology's focus on implicit bias, small discriminatory acts are all too often labelled as unconscious, offering a convenient scapegoat to those who claim they aren't really prejudiced. This happens in academic literature, where researchers are quick to point to discrimination as a sign of implicit bias, regardless of whether there's any evidence to show that such behavior is unconscious. It's also a disturbing feature of everyday conversations.

When I’ve experienced (subtler) forms of prejudice, such as a boyfriend who asked me to iron his shirts, a very senior editor who made sexual comments to me, and remarks apropos of nothing from male friends and acquaintances about my breasts, the perpetrators were quick to suggest that their behavior was utterly unintentional, that they were totally consciously unaware of their sexism.

Surely it shouldn’t take much reflection to recognize the prejudice in their actions. Anyone who carefully considers their behavior should be able to acknowledge that they don’t treat men and women perfectly equally. Their prejudice is not truly “introspectively unavailable” to them.

That, for me, raises the uncomfortable question: What about my own prejudices? After researching this article, I retook both the black-white race IAT and the gender-career IAT and was told I had no implicit biases. Given the significant scientific doubts about the test, I didn’t think I could take this as happy confirmation that I’m racism- and sexism-free.

It’s deeply upsetting to admit, but I don’t think my actions are perfectly egalitarian. And though it would be easy to blame this on societal influences, if I perpetuate inequalities in my behavior, then I bear some responsibility. This doesn’t feel as noble as my previous (since-questioned) realization that I have implicit biases. I feel disgusted with myself, and am aware that simply recognizing my own prejudiced behavior does nothing to help those who face systematic discrimination.

Perhaps, I suggested to Anderson, a person holding onto such insidious prejudices could be compared to a mother who insists that she loves her two children equally, but who, deep down, knows she acts with favoritism towards one of them. He agreed the analogy works: Many of those who insist they’re totally egalitarian know, if they’re really honest, they don’t treat everyone equally.

None of the implicit-bias training-program instructors I spoke with were able to point to definitive positive results from their workshops, and they were largely unaware of the scientific controversies.

Holly Brittingham, head of talent and development at FCB, one of the largest global advertising networks, says more than 1,100 of the company’s employees have gone through its implicit-bias training, though there’s been no significant change in diversity since the program started. “That’s really still a challenge,” she says. The workshop, she insists, is “rooted in neuroscience, so it’s got a lot of validity to it.”

At Fair and Impartial Policy, which has delivered implicit-bias training to hundreds of police forces in the US and Canada, criminologist Lorie Fridell did know about the shaky evidence but said she hoped psychologists would eventually find a stronger link between the IAT and behavior. Though the effects of her police-force training had not been studied, “everything in our training is based on science,” she added. After we spoke, Fridell sent me a company memo claiming the existing meta-analyses show mixed results, when in fact they all show a weak link between IAT and behavior.

In 2014, following the shooting of Michael Brown, an unarmed black teenager in Ferguson, Missouri, the US Department of Justice set up a three-year, $4.75 million program to improve public relations with the police. The plan relied heavily on implicit-bias training. Though racist emails passed around the police department showed plenty of explicit prejudice in Ferguson, a statement published when the program launched announced plans to reduce implicit bias “where actual racism is not present.”

Phillip Atiba Goff, principal partner at the justice department program, professor in policing equity at John Jay College of Criminal Justice, and president of the New York-based Center for Policing Equity, says that though implicit bias is a feature of his work, his primary focus is behavior. “I’ve been black my entire life, with the possible exception of a week in college I took off,” says Goff. “I don’t care about the hearts and minds of the people who do racist behaviors towards me. I want the behaviors to stop. If you want to be a good scientist, there’s a difference between affecting bias and affecting the behavior.”

Why, then, bring in implicit bias at all? Goff’s work points to studies showing police officers with high anti-black IAT scores are quicker to shoot at African Americans. That finding, though, has been countered by research showing the exact opposite.

Nevertheless, Goff, who has developed implicit-bias police-training programs, insists the academic debates do not affect his own field research. Silicon Valley’s weak results, he says, can be explained simply: “The problems with the trainings is not the science, the problems with the trainings is the trainings.”

Imperfect scientific lab results, though, certainly cast doubt on how such theories are applied in the field. Plus, as Goff says later in our conversation, “Translating from the science to the field is incredibly difficult.”

Greenwald himself dismisses methods that claim to interrupt implicit biases, such as slowing down behavior so that the “conscious mind” can override unconscious impulses. “There is no scientific support for usefulness of these techniques,” he wrote in an email.

In public talks, Greenwald said he emphasizes ways of avoiding the effects of implicit bias. These methods, such as blind evaluations that obscure the race and gender of applicants, simply address prejudice, whether conscious or unconscious. Greenwald acknowledged that they do not focus on implicit bias: “The remedies I advocate are equally suitable for ALL forms of unintended discrimination,” he wrote in an email. “They are not limited to unintended discrimination due to implicit bias.” (“Unintended discrimination,” explained Greenwald, includes “institutional [structural] discrimination, ingroup favoritism, and implicit bias.”)

Both academic research on discrimination and workplace trainings could be considerably more powerful, argued several psychologists I spoke to, if they focused on behavior itself instead of trying to peer into the unconscious mind.

Behavioral targets will vary according to the institution. For example, diversity in Silicon Valley will never improve “as long as they continue to hire from the pools they hire from,” says Gregory Mitchell, a law professor at University of Virginia School of Law and co-author of several reports critical of the IAT. Meanwhile, he adds, “I suspect [police shootings have] much more to do with the local police policies and their training of officers than their implicit attitudes. When we portray police violence as a product of implicit bias without any real evidence to support that, we’re distracting ourselves from other possible causal factors.”

Hiring goals, diverse senior management, and penalties for those who repeatedly exhibit prejudiced behavior—rather than a soft talk about how we’re all biased but it’s not really our fault because it’s unconscious—would be effective alternative strategies for those serious about changing institutional inequality.

In a 2014 paper (pdf) by Banaji, Greenwald, and Nosek, the authors seemed to acknowledge the concerns raised about the test: “IAT measures have two properties that render them problematic to use to classify persons as likely to engage in discrimination,” they wrote, pointing to the test’s poor predictive abilities and test-retest reliability.

But when I asked them directly, Greenwald and Banaji doubled down on their earlier claims. “The IAT can be used to select people who would be less likely than others to engage in discriminatory behavior,” wrote Greenwald in an email.

The meta-analyses and other psychologists I spoke to strongly disagree: “There is also little evidence that the IAT can meaningfully predict discrimination,” notes one paper, “and we thus strongly caution against any practical applications of the IAT that rest on this assumption.”

There remains the question of whether the IAT’s predictive abilities could be more meaningful in a broader context. Nosek emphasized that a weak link between the test and behavior is still a reliable correlation. The IAT “provides very little information about what the person is likely to do” for any single instance of real-life individual behavior, he wrote in an email, but the test’s predictive abilities could become more significant across large populations or periods of time.

To some extent, even if implicit bias has a large-scale impact, these possible accumulative effects are beside the point. The IAT is used in thousands of real-world workplace trainings to highlight individual discriminatory behavior; if the scientific evidence does not support this use, such practices urgently need to be revisited.

Nosek, whose Reproducibility Project played a key role in bringing the replication crises to light, claims his implicit bias work demonstrates the standards he calls for across psychology. The research is transparent, the data widely available, and there haven’t been major findings that couldn’t be replicated, he noted in an email. Nosek also qualified the importance of meta-analyses as a standard of proof, writing, “If the literature is highly biased, the conclusions of a meta-analysis will be highly biased too.” Meta-analyses can contain errors and are certainly not perfect—as with all scientific research, the findings are iterative rather than definitive—but they provide a powerful summary of knowledge, and are one of the strongest forms of evidence in psychology.

Overwhelmingly, the replication crisis highlights the flaws in assuming that any scientific research implies that findings have been definitively “proven.” Scientific truths evolve as further evidence and context is gathered. That is precisely why academics should be extremely cautious in applying their work outside the lab when their research is in early stages. Scientists could well still figure out precisely how our unconscious makes us more prejudiced, and manage to reduce discriminatory behavior by tackling these unconscious biases. But they don’t have that knowledge yet.

One likely reason implicit-bias testing and training became so popular is that it's socially unacceptable to be seen as prejudiced. Discrimination still clearly exists; we needed an explanation, and implicit bias provided one. It's personally convenient to recast subtle forms of prejudice as unconscious bias. That doesn't make it true.

There are further points and counterpoints that could not be fully explored in this article. There's the critique that the IAT measures awareness of social status rather than biases; the argument that any link between implicit bias and inequality could be explained by reverse causation, whereby a more unequal society leads to poorer IAT scores; and the argument that the test became so popular in academia because of publication bias (any test that gives positive results tends to get more attention in the sciences, as it's more likely to be accepted in journals than tests with negative or inconclusive results).

Implicit bias research is mired in uncertainties, and the existing evidence neither definitively proves nor disproves current theories on the subject. Calvin Lai, director of research at Harvard's Project Implicit and professor of psychological and brain sciences at Washington University in St. Louis, notes that it's extremely difficult to prove a test predicts behavior, and future larger studies could well find stronger evidence to bolster the IAT. For example, scientific confidence in the big five personality traits—openness, conscientiousness, extroversion, agreeableness, and neuroticism—has grown as more studies have confirmed that tests assessing these characteristics can effectively predict behavior.

In the meantime, how should government and corporate implicit-bias programs respond to the scientific uncertainty surrounding the IAT?

Lai says he worries about “throwing out the baby with the bathwater.” One clear benefit of the IAT is it helps people realize that, regardless of what they tell themselves, they are not necessarily bastions of equality. The personal responsibility I and others felt in discovering our own implicit biases can help motivate more considered, unprejudiced behavior. “A lot of other evidence focuses on large, system-wide patterns and so it’s easy to tell yourself, ‘Well that’s everyone else,’” says Lai. “I think demonstrations like the IAT show that no, it’s not just everyone else. It’s you as well.”

Raising personal awareness with a highly flawed test, though, opens the door for those eager to resist such conversations to claim that, if the IAT itself isn’t valid, then prejudice itself is a myth.

If we really want to eliminate discrimination, discussions of implicit bias should play a smaller role in sparking personal responsibility. The theory should be presented with the appropriate caveats, and as only the beginning of a conversation. The current hype around implicit bias, which overstates its role in both causing and combating discriminatory behavior, is both unwarranted and unhelpful.

Instead of looking to implicit bias to eradicate prejudice in society, we should consider it an interesting but flawed tool. We need to acknowledge the limitations, to look for other tangible ways to reduce inequality, and to admit that our colleagues, friends, and ourselves might not just be implicitly biased, but might have explicitly racist and sexist tendencies. We should ask people to consciously recognize their prejudicial behavior, and take responsibility for it. And we certainly should stop assuming our unconscious will be the key to solving discrimination.

“A lot of folks see the IAT as a golden path to the unconscious, a tool that perfectly captures what’s going on behind the scenes and it’s not,” says Lai. “It’s a lot messier than that. The truth, as often, is a lot more complicated.”


‘A-ha’ Activities for Unconscious Bias Training

Our unconscious social biases form involuntarily from our experiences. For example, as we are repeatedly exposed to actual incidents or media portrayals of women as collaborative, nurturing homemakers and men as assertive, competitive breadwinners, those associations become automated in our long-term memory. These biases are reinforced on a daily basis without our knowing or thinking consciously about it. Stereotypes reflect what we see and hear every day, not what we consciously believe about what we see and hear. It is possible for us to hold unconscious stereotypes that we consciously oppose.

Because we are, by definition, unaware of our automatic, unconscious beliefs and attitudes, we believe we are acting in accordance with our conscious intentions, when in fact our unconscious is in the driver’s seat. It is possible for us to treat others unfairly even when we believe it is wrong to do so. Cognitive neuroscience research has taught us that most decisions we make, especially regarding people, are “alarmingly contaminated” by our biases. Our assessments of others are never as objective as we believe them to be.

Unconscious bias at work has profound implications. When we decide who gets a job, who gets disciplined or promoted, whom we choose to develop, whom we see as a confidant or a suitable mentee, and whose ideas we give consideration to, we may be adding our own subliminal and emotional criteria to that decision: criteria we might not even be aware of and which may have no basis in fact. Bias can also contribute to hostile workplaces, bullying, and discrimination. Unconscious bias in recruitment, selection, promotion, development, and everyday workplace interaction limits the strategic potential that can flow from a diverse workforce: higher-quality problem solving and decision making, innovation and creativity, access to diverse customers and suppliers, and the attraction and energising of top global talent.

Not surprisingly, organisations invest heavily in training programs designed to reduce unconscious bias. However, skeptics argue that bias training is ineffective, pointing to studies that report nil or even negative impact from unconscious bias training. Research into the efficacy of unconscious bias training is indeed mixed, but this is not surprising given the range of variables that can influence training outcomes, such as facilitator expertise and experience, mode of delivery, length, and program design.

In a recent review of the research, the Equality and Human Rights Commission (UK) concluded that sophisticated unconscious bias training programs, combining awareness of unconscious bias, concern about its effects, and the use of tools to reduce bias, can reduce unconscious bias up to eight weeks post-intervention. Nevertheless, the authors caution that while unconscious bias training can be effective in reducing implicit bias, these biases are unlikely to be completely eradicated and training investment must be supplemented with changes (perhaps even a complete overhaul) to organisational structures, policies and procedures.

In sum, the research into unconscious bias training highlights two important considerations: (i) unconscious bias training is necessary, but in itself not sufficient, for eliminating workplace bias; and (ii) some unconscious bias training programs are more effective than others. Specifically, unconscious bias training is most effective when it: (i) incorporates bias awareness, or 'a-ha', activities and (ii) transfers evidence-based bias reduction and mitigation strategies.

‘A-ha’ Activities for Bias Awareness

Effective unconscious bias training activities 'show' rather than 'tell'. Incorporating 'a-ha' activities that allow individuals to discover their biases in a non-confrontational manner is more powerful than presenting evidence of bias from employment or laboratory studies. Stereotypes and prejudices are maintained and reinforced by powerful cognitive and motivational biases that filter out information contradicting or challenging our preexisting beliefs or attitudes. We all see bias in others but rarely see or admit our own. 'A-ha' activities help participants to see how their subconscious preferences and beliefs drive their responses.

Cognitive dissonance refers to the uncomfortable emotional state experienced when individuals are made aware of an inconsistency in their beliefs, attitudes, or behaviours. Research indicates that when egalitarian values are central to an individual’s self-concept, highlighting an inconsistency between the individual’s anti-prejudice values and their biased responses is effective at evoking dissonance. In turn, dissonance motivates the individual to make conscious adjustments to their attitudes (reduction in prejudice) and behaviours (less discrimination) such that they better align with their explicit values of tolerance and equality.

(i) Implicit Association Test

A proven technique for enhancing awareness of one’s unconscious bias is the Implicit Association Test (IAT). This test measures the reaction time of individuals to a series of words or pictures presented on a computer screen. For example, the individual may be asked to type a particular key if the word presented on the screen is a ‘female name’ or a ‘weak word’ (e.g., delicate, small, flower) and a different key if the word is a ‘male name’ or a ‘strong word’ (e.g., powerful, mighty, robust). This activity is repeated numerous times and the average reaction time for a correct response is recorded.

Following this, the rules are changed such that the test taker is asked to press one key if the word is a 'female name' or a 'strong word', and a different key if the word is a 'male name' or a 'weak word'. Because gender stereotyping associates female names with weak words and male names with strong words, reaction times on the first test are faster than on the second test, which mismatches the stereotypical categories. Differential reaction times are evidence of implicit (unconscious) gender bias: the greater the difference in reaction times between the two tests, the stronger the implicit stereotypical associations.
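As a rough illustration of the comparison just described, the sketch below computes a D-style score, the latency difference scaled by the pooled spread of responses, from two hypothetical blocks of reaction times. The block data are invented, and the scoring used in practice adds further steps such as trimming extreme latencies and penalizing errors.

```python
# D-style scoring sketch for the gender IAT described above; the data are
# hypothetical and the computation is deliberately simplified.
from statistics import mean, stdev

# Block 1: stereotype-consistent pairing (female name / weak word share a key).
matched_ms = [540, 580, 610, 595, 570, 625]
# Block 2: stereotype-inconsistent pairing (female name / strong word share a key).
mismatched_ms = [690, 720, 655, 740, 705, 680]

def d_score(matched, mismatched):
    """Latency difference divided by the standard deviation of all trials."""
    return (mean(mismatched) - mean(matched)) / stdev(matched + mismatched)

print(f"D = {d_score(matched_ms, mismatched_ms):.2f}")
# The larger the positive D, the stronger the implicit association between
# the stereotype-consistent category pairings.
```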

Anonymous IAT tests administered by Harvard University are publicly available at https://implicit.harvard.edu/implicit/takeatest.html. Over a million people have taken those tests, and results confirm that participants across a range of locations, ages, genders, races, and ethnicities hold unconscious stereotypes and prejudices regarding disability, sexual orientation, race, skin tone, age, weight, gender, ethnicity, and religion.

Practitioners should be aware, however, that there have been varied results from the use of this tool in real-world settings. Problems may arise because the theory behind the IAT is difficult to understand and participants may misinterpret the results, leading to confusion, shock, anger, and defensiveness.

When the IAT is used as an intervention tool, it is important that the facilitator is knowledgeable about the mechanisms of the IAT and adequately explains to participants that bias is inevitable as a result of social conditioning and cognitive processes—the results do not show evidence of, or make accusations of, prejudice. Rather, the facilitator must stress that the exercise is undertaken to highlight the existence of hidden bias and that, contrary to our conscious intentions, we all hold hidden biases that manifest in subtle and unconscious ways.

In addition to the IAT, there are other activities grounded in social psychological theory that can be incorporated into unconscious bias training.

(ii) The Tag Game

One example is the Tag Game, adapted from Fowler (2006). In this exercise, participants stick badges of various shapes, colours, and sizes somewhere between their waist and neck. Participants are then instructed to form groups without talking; no instructions are given as to what criteria to use. Once groups are formed, the participants are instructed to break up and form new groups. This is repeated at least four times. Participants will normally form groups based on shapes, colours, or sizes. Rarely do participants look beyond the badges, and more rarely still do they intentionally form diverse groups in which many shapes, colours, and sizes are represented.

This powerful yet non-confrontational activity leads well into a discussion about social categorisation processes, the automaticity of “us” vs “them” categorisations, and ingroup bias (also known as affinity bias). It is also an excellent exercise for introducing the concept of diversity and the potential benefits of diverse workgroups. Group discussions following the exercise explore diversity experiences (or lack thereof) in the workplace, and prompt participants to suggest ways to improve the recognition, support, and value of diverse perspectives and experiences.

(iii) The Father-Son Activity

Another useful awareness activity for unconscious bias training taken from the social psychological literature is the Father/Son activity, adapted from Pendry, Driscoll, & Field (2007). In this activity, participants are instructed to solve the following problem:

“A father and son were involved in a car accident in which the father was killed and the son was seriously injured. The father was pronounced dead at the scene of the accident and his body was taken to a local morgue. The son was taken by ambulance to a nearby hospital and was immediately wheeled into an emergency operating room. A surgeon was called. Upon arrival, and seeing the patient, the attending surgeon exclaimed, ‘Oh my God, it’s my son!’ Can you explain this?”

Around 40% of participants faced with this challenge do not think of the most plausible answer: that the surgeon is the boy’s mother. Instead, participants invent elaborate stories, such as that the boy was adopted and the surgeon was his natural father, or that the father in the car was a priest. As such, the exercise illustrates the powerful pull of automatic, stereotyped associations. For some individuals, the association between surgeons and men is so strong that it interferes with problem-solving and making accurate judgments.

This exercise leads well into an ensuing discussion on the automaticity of stereotypes and the distinction between explicit and implicit bias. From here, the discussion can move to ways of controlling or overcoming automatic bias. Also, because some participants will arrive at the most plausible answer, the exercise highlights individual differences in stereotyping and opens a discussion into why stereotypes differ across individuals.

(iv) The Circle of Trust

The Circle of Trust is a powerful exercise for demonstrating the effect of affinity bias. In this exercise, participants are instructed to write down, in a column on the left-hand side of a blank piece of paper, the initials of six to ten people whom they trust the most and who are not family members. The facilitator then reads out a list of diversity dimensions, including gender, nationality, native language, accent, age, race/ethnicity, professional background, and religion, and participants place a tick beside each member of their trusted circle who is similar to them in that dimension. For example, male participants will place a tick beside all the men in their trusted circle, white participants will place a tick beside all the white individuals, and so on. Participants discover that their trusted circle often displays minimal diversity: for most participants, the inner circle includes people with backgrounds similar to their own.

The facilitator explains that this tendency or preference for people like ourselves is called affinity or ingroup bias and is well researched. Studies show that, in general, people extend not only greater trust, but also greater positive regard, cooperation, and empathy, to ingroup members compared with outgroup members. This preference for people like ourselves is largely instinctive and unconscious. Affinity bias manifests not only as a preference for ingroup members; it may also manifest as an aversive tendency towards outgroup members. For example, we are more likely to withhold praise or rewards from outgroup members.

Participants are then prompted to consider the implications of this for the workplace. For example, as leaders, when they assign responsibility for a high-profile piece of work, to whom do they entrust that responsibility? The facilitator suggests that participants will likely offer opportunities to the individuals whom they trust the most; those people, it turns out, are people similar to themselves. Because success on high-profile assignments is critical for emerging as a leader, a tendency to favour people like ourselves when assigning stretch assignments leads to self-cloning and promotes homogeneity in leadership. Though not intentional, people who are not like us get overlooked and left behind.

Although we believe we are making objective assessments of merit and treating people fairly, hidden preferences for people like ourselves can cause us to support the development and career progression of some people over others without our even knowing we are doing so. In employment, affinity bias can incline people to favour those who are most similar to themselves, leading leaders, people managers, and recruiting managers to hire, promote, or otherwise esteem those who mirror their own attributes and qualities. Moreover, we are very good at justifying our biases: studies show a systematic tendency to claim that the strengths of ingroup candidates are more important selection criteria than the strengths of candidates with backgrounds different from our own.

Affinity bias can also lead us to actively solicit, pay greater attention to, and favour the contributions of ingroup members over those of outgroup members. We are also more likely to mentor or sponsor ingroup members than outgroup members.

In some groups, there may be certain individuals with a diverse inner circle. The facilitator encourages participants to think about how such an individual’s experiences could disrupt affinity bias, with the ensuing discussion drawing on research supporting intergroup friendship as a prejudice-reduction technique.

Moving from Awareness to Action

As a stand-alone initiative, awareness programs are rarely effective tools for reducing bias. Fortunately, the social psychological literature provides us with some proven techniques for dismantling social categorisations and overriding bias. For more information, subscribe to next month’s newsletter and look for the article titled ‘How Effective is Your Unconscious Bias Training? Part 2 of 2: The SPACE2 Model of Bias Mitigation’.


Counteracting Negotiation Biases Like Race and Gender in the Workplace

Comment

To learn more about negotiation biases, let’s look back to July of 2018 when the principal flutist of the Boston Symphony Orchestra (BSO), Elizabeth Rowe, became the first Massachusetts resident to sue her employer under a new state law designed to address the persistent pay gap between men and women. Despite being the most frequent soloist among the BSO’s principal musicians, Rowe earns only about 75% of the salary of her closest comparable colleague, the BSO’s principal oboist, John Ferrillo, and also earns less than four other principal male players, adjusted for seniority. Ferrillo and Rowe, who joined the symphony in 2001 and 2004, respectively, sit next to each other in the orchestra, and both lead woodwind sections from endowed chairs. Rowe is seeking $200,000 in back pay.

According to her lawsuit, Rowe asked the BSO several times in recent years to adjust her pay and was rebuffed. She sued one day after the Massachusetts Equal Pay Act went into effect. The law stipulates that employers cannot pay workers less than what they pay employees of a different gender for comparable work—that is, work requiring substantially similar skill, effort, and responsibility performed under similar working conditions.

In a statement attached to Rowe’s complaint, Ferrillo called Rowe his “peer and equal” and noted that they work so closely that “we jokingly refer to playing Floboe.” Rowe is “at least as worthy of the compensation that I receive as I am,” he wrote.

In an August court filing, the BSO denied Rowe’s allegations of gender-based pay discrimination, arguing that “the flute and the oboe are not comparable instruments” and that Rowe’s pay is higher than that of numerous other male principal BSO members.

Regardless of how Rowe’s case played out (the parties settled for an undisclosed amount in February 2019), the gender discrimination she alleged is not unique. According to the Pew Research Center, women earned 82 cents for each dollar earned by men in 2017, a pay gap that has persisted over time. Women earn less than men in part because jobs performed mainly by women (such as teaching and nursing) pay less than those dominated by men (such as technology and management). But differences in how salary negotiations unfold for men and women vying for comparable jobs appear to be another contributor to the pay gap.

In particular, the degree to which incoming employees feel comfortable assertively negotiating their starting salary may depend on their gender. When women negotiate for higher salaries, they must behave contrary to deeply ingrained societal gender roles of women as passive, helpful, and accommodating. As a result, their requests often face a backlash: relative to men who ask for more, women are penalized financially, are considered less hirable and less likable, and are less likely to be promoted, research by Hannah Riley Bowles of the Harvard Kennedy School and others shows. Men, by contrast, generally can negotiate for higher pay without fearing a backlash because such behavior is consistent with the stereotype of men as assertive, bold, and self-interested.

But gender is only one aspect of the pay gap. In 2016, college-educated black men earned about 80% of the hourly wages of college-educated white men, while black men overall earned just 68% of that earned by white men, according to the Pew Research Center. Also in 2016, Hispanic women earned only 62.2% of the wages of white men. Asian women fared better, earning 95.8% of white men’s earnings in 2016, but only earned 78.4% of what Asian men made that year.

As these statistics suggest, the pay gap is more complex than it first appears. Two recent studies illuminate these complexities and suggest remedies that organizations and their leaders can attempt in order to reduce the impact of racial and gender bias in employment negotiations.

Negotiation biases and bargaining while black

In a recent study, University of Virginia professor Morela Hernandez and her team assigned 144 male and female working adults of different races (50% white, 27% African American, 14.6% Asian, 6.3% Hispanic, and 2.1% other), as well as 74 male and female undergraduate students of different races, to play either a job candidate or a hiring evaluator in a 15-minute negotiation simulation over a job with a salary range of $82,000–$90,000. After they negotiated, the participants answered questions that assessed their level of racial bias.

The study results showed that white and black candidates were equally likely to try to negotiate their salary. However, evaluators who scored high for racial bias believed that black candidates had negotiated more often than white candidates. This false perception, likely based on the biased evaluators’ expectation that black candidates would and should settle for less, led them to penalize black candidates for negotiating by granting fewer salary concessions. In fact, each time a black candidate was perceived to have made an offer or counteroffer, participants high in racial bias gave them about $300 less in starting salary, on average. By contrast, evaluators who scored low on racial bias had more accurate perceptions of candidates’ negotiating frequency and granted more equitable salaries as a result.

Considerable research evidence finds that virtually all of us are subject to implicit racial biases that lead us to treat others unfairly and inequitably, often contrary to our well-meaning intentions. Such “ordinary prejudice,” as psychologists call it, is widespread and rooted in the human brain’s tendency to categorize and make snap judgments. But Hernandez’s study leads to the sobering conclusion that explicit racial bias—a belief in the dominance of certain groups over other groups, based on factors such as race—remains a significant obstacle for African Americans in the job market.

When race and gender intersect

In a famous study from 1995, Yale University professor Ian Ayres found that car dealers made significantly higher opening offers to black and female participants than to white males, who ended up getting much better deals as a result. Specifically, average dealer profit was $362 from white men, $504 from white women, $783 from black men, and $1,237 from black women. This was true despite the fact that the participants were trained in advance to negotiate in the same way.

The study results starkly illustrate that in negotiation, racial and gender biases can intersect in ways that harm some groups and benefit others. White women appeared to benefit from racial stereotypes relative to blacks in the Ayres study, for example, but to be harmed by gender stereotypes relative to white men. Meanwhile, black women seemed to suffer from negative gender and racial stereotypes in the car-buying negotiations.

In a new study, Negin R. Toosi of California State University and her colleagues explored how race and gender intersect to influence job candidates’ assertiveness in salary negotiations. In particular, they compared the negotiating behavior of white and Asian Americans. Asian Americans face stereotypes that paint them as unassertive and submissive. Might Asian American women then face the “double jeopardy” of both racial and gender biases in salary negotiations?

To find out, Toosi and her team asked 980 white and Asian American men and women to imagine they had received a job offer from a consulting firm with a salary range of $31,000 to $54,000. How much would they ask for if they could make the first offer? Interestingly, white men and Asian women specified higher first offers ($48,247 and $47,797 on average, respectively) than white women and Asian men ($46,341 and $46,436 on average, respectively). Further analysis showed that participants who aimed lower had a greater fear of being punished for asking for too much. White women in this study seemed to fear this type of backlash, but Asian women did not.

Contrary to the double-jeopardy hypothesis, the findings support past research showing that when people belong to more than one minority group (such as “woman” and “Asian”), they are at risk of being overlooked. This “intersectional invisibility” can have negative consequences, such as leading certain groups to be underrepresented in organizations. Yet it may also reduce one’s likelihood of being measured against racial or gender stereotypes and falling short. Asian women participants in this experiment may have intuited this outcome and aimed high as a result, though this possibility still needs to be tested.

This research shows that race and gender need to be considered in tandem to avoid jumping to simplistic conclusions about how we are likely to be treated in job negotiations—and how we are likely to treat others.

Promote more equitable negotiations

There are steps that individual women and minority negotiators can take to avoid a backlash for behaving contrary to tired racial and gender stereotypes. For example, women have been advised to appear other-focused and nurturing when making salary requests by referencing their family’s financial needs or their desire to represent women as a whole. However, such behaviors can feel false, uncomfortable, and overly accommodating.

To reduce the insidious impact of racial and gender negotiation biases in hiring, compensation, and promotion negotiations, broader organizational and societal changes are needed. Because negotiation biases spring from faulty intuition, reducing the role of snap judgments in the decision-making process is an important step toward promoting more equitable job negotiations.

In her book What Works: Gender Equality by Design (Belknap Press, 2016), Harvard Kennedy School professor Iris Bohnet recommends requiring decision makers to conduct structured rather than unstructured interviews. Noting that unstructured interviews have proven to be very bad at predicting employee performance, Bohnet explains that managers can make more rational hiring decisions by asking all candidates the same list of predetermined questions in the same order, scoring them during the interview, and then carefully comparing and weighting their answers on a scoring system.
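As an illustration of the structured approach Bohnet describes, the sketch below scores every candidate on the same predetermined questions and combines the ratings with fixed weights into a single comparable score. The questions, weights, and ratings are hypothetical examples, not taken from Bohnet’s book.

```python
# A minimal sketch of structured-interview scoring in the spirit of
# Bohnet's recommendation. Questions and weights are hypothetical.
QUESTIONS = [
    ("Describe a project you led under a tight deadline.", 0.40),
    ("Walk us through how you would analyze a declining sales trend.", 0.35),
    ("Tell us about a time you resolved a conflict on your team.", 0.25),
]

def weighted_score(ratings):
    """Combine per-question ratings (1-5, recorded during the
    interview) into one weighted score per candidate."""
    if len(ratings) != len(QUESTIONS):
        raise ValueError("one rating per question is required")
    return sum(rating * weight for rating, (_, weight) in zip(ratings, QUESTIONS))

# Every candidate answers the same questions in the same order and is
# scored question by question, so comparisons rest on identical
# criteria rather than on overall impressions.
candidates = {"Candidate A": [4, 3, 5], "Candidate B": [3, 4, 4]}
for name, ratings in sorted(candidates.items(),
                            key=lambda item: weighted_score(item[1]),
                            reverse=True):
    print(f"{name}: {weighted_score(ratings):.2f}")
```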

Reducing the role of intuition can also improve salary negotiations. Organizations would benefit from publicizing pay-grade ranges and requiring decision makers to compare the salaries of those with comparable jobs. In addition, leaders should instruct negotiators not to ask candidates how much they earned in the past. Because women and minorities tend to earn less than white men, the question can put them at a disadvantage and perpetuate the gender pay gap. In fact, the Massachusetts Equal Pay Act and recent laws in several other states now make it illegal to ask employees about their salary history.

Have you experienced or witnessed negotiation biases such as this in the workplace? Share your story below.

“Bargaining While Black: The Role of Race in Salary Negotiations,” by Morela Hernandez, Derek R. Avery, Sabrina D. Volpone, and Cheryl R. Kaiser. Journal of Applied Psychology, 2018.

“Who Can Lean In? The Intersecting Role of Race and Gender in Negotiations,” by Negin R. Toosi, Shira Mor, Zhaleh Semnani-Azad, Katherine W. Phillips, and Emily T. Amanatullah. Psychology of Women Quarterly, 2018.

Adapted from the article “Counteracting Racial and Gender Bias in Job Negotiations” in the January 2019 issue of Negotiation Briefings, the Program on Negotiation’s monthly newsletter of advice for professional negotiators.


Introductory Works

Following the call in Greenwald and Banaji 1995 for measures of individual differences in implicit attitudes, stereotypes, and self-esteem, the Implicit Association Test (IAT) was presented in Greenwald, et al. 1998. The test requires subjects to rapidly sort two stimuli together in varying pairs, with the time it takes to complete the pairings and the errors made during the process reflecting the strength of the underlying association between the different classes of stimuli. For example, Greenwald, et al. 1998 shows that subjects tend to respond more quickly when sorting flower names (such as rose, daffodil, tulip) with good words (peace, glory, laughter) and insects (fly, wasp, beetle) with bad words (war, failure, sadness), compared with the opposite pairings (flowers with bad concepts and insects with good concepts). Since the task is more easily demonstrated than explained, readers unfamiliar with it should consider visiting the Project Implicit website, which offers opportunities to sample various IATs and resources for both interested laypeople (see especially the list of frequently asked questions) and psychological scientists. In addition, a summary of data obtained from several classic IATs is in Nosek, et al. 2002.

Greenwald, Anthony G., and Mahzarin R. Banaji. 1995. Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review 102.1: 4–27.

This article reviews early evidence of “implicit” cognition, which can be introspectively unknown to the actor, and calls for an individual difference measure for these constructs, which was subsequently met by the IAT.

Greenwald, Anthony G., Debbie E. McGhee, and Jordan L. K. Schwartz. 1998. Measuring individual differences in implicit cognition: The Implicit Association Test. Journal of Personality and Social Psychology 74.6: 1464–1480.

Introduces and validates the IAT using data from known groups.

Nosek, Brian A., Mahzarin R. Banaji, and Anthony G. Greenwald. 2002. Harvesting implicit group attitudes and beliefs from a demonstration web site. Group Dynamics: Theory, Research, and Practice 6.1: 101–115.

Using data from more than 600,000 participants in Project Implicit, the authors review results of nine IATs: race attitudes (using both faces and names), age attitudes (using both faces and names), gender-science stereotypes, gender-career stereotypes, implicit self-esteem, attitudes toward math versus the arts, and attitudes toward the 2000 presidential candidates.

Since its founding in 1998, Project Implicit has recorded roughly 13 million completed tests. To fulfill its dual mission of advancing scientific understanding and educating the public about unconscious biases, the site offers a variety of demonstration and research tasks (allowing the user to take an IAT or similar test used for actual psychological research) and numerous educational materials and resources for psychological researchers.



Racial Implicit Associations in Psychiatric Diagnosis, Treatment, and Compliance Expectations

Racial and ethnic disparities are well documented in psychiatry, yet suboptimal understanding of underlying mechanisms of these disparities undermines diversity, inclusion, and education efforts. Prior research suggests that implicit associations can affect human behavior, which may ultimately influence healthcare disparities. This study investigated whether racial implicit associations exist among medical students and psychiatric physicians and whether race/ethnicity, training level, age, and gender predicted racial implicit associations.

Methods

Participants completed online demographic questions and 3 race Implicit Association Tests (IATs) related to psychiatric diagnosis (psychosis vs. mood disorders), patient compliance (compliance vs. non-compliance), and psychiatric medications (antipsychotics vs. antidepressants). Linear and logistic regression models were used to identify demographic predictors of racial implicit associations.
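For readers unfamiliar with this analytic setup, the sketch below shows how such models might be fitted in Python. It is not the authors’ code; the data file and column names (iat_d, race, training_level, age, gender, psychosis_bias) are hypothetical placeholders for the study’s variables.

```python
# A minimal sketch, not the authors' code: fitting linear and logistic
# regressions to identify demographic predictors of racial implicit
# associations. File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("iat_results.csv")  # hypothetical data file

# Linear model: continuous IAT D score regressed on demographics.
linear = smf.ols(
    "iat_d ~ C(race) + C(training_level) + age + C(gender)", data=df
).fit()
print(linear.summary())

# Logistic model: binary indicator of a stereotype-consistent
# association (e.g., pairing Black faces with psychosis-related words).
logistic = smf.logit(
    "psychosis_bias ~ C(race) + C(training_level) + age + C(gender)", data=df
).fit()
print(logistic.summary())
```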

Results

The authors analyzed data from 294 medical students and psychiatric physicians. Participants were more likely to pair faces of Black individuals with words related to psychotic disorders (as opposed to mood disorders), non-compliance (as opposed to compliance), and antipsychotic medications (as opposed to antidepressant medications). Among participants, self-reported White race and higher level of training were the strongest predictors of associating faces of Black individuals with psychotic disorders, even after adjusting for participant’s age.

Conclusions

Racial implicit associations were measurable among medical students and psychiatric physicians. Future research should examine (1) the relationship between implicit associations and clinician behavior and (2) the ability of interventions to reduce racial implicit associations in mental healthcare.


Address Implicit Racial Bias Found in Officer Promotions

The May 25 death of George Floyd in Minneapolis while he was in police custody and the associated protests prompted a renewed national discussion over race. Hundreds of thousands took to the streets in American cities and around the world to demonstrate their frustration, outrage, fatigue and solidarity, and to demand change. Accordingly, many organizations have taken an inward look at the practices and norms that contribute to their lack of diversity, equity and inclusion. The Army is one such organization.

In June, Secretary of the Army Ryan McCarthy directed the removal of Department of the Army photos from selection boards in part to address the disparity of Black officers’ promotions relative to their white counterparts. In fairness, the Army was not unaware of its shortcomings when it comes to racial diversity and has attempted to address them before. But previous discussions on the topic have often focused on the “shortcomings” of the officer rather than the institution.

The Army has attributed the disparity in Black officer promotions to low accession numbers and some combination of the Black officer’s branch, education, performance or access to mentors. Rarely, if ever, has the Army considered external, systemic factors.

Although not without problems, McCarthy’s directive is noteworthy because it recognizes the presence and impact of implicit racial biases against Black officers.

The Army must go further to remedy the negative impact of implicit racial bias.

Diversity Challenges

Blacks are underrepresented in the officer ranks when compared with their share of enlisted personnel and the civilian labor force. Blacks comprised 11% of the active-duty officer corps in 2019, compared with 23% of enlisted soldiers, according to the Army Demographics Fiscal Year 2019 Army Profile. Blacks made up 13% of the civilian labor force in 2018, the U.S. Bureau of Labor Statistics reported in late 2019.

In 2011, the Military Leadership Diversity Commission found that Black officers are promoted at a lesser rate than their white counterparts, especially to the general officer grades. The study also found that across all services, “Black officers’ promotion rates were substantially lower than the pay grade-specific average promotion rates for their respective services.” The commission’s findings align with earlier studies of the Army’s promotion record. More recent studies are not publicly available.

A 1996 study by University of Alabama political science professor J. Norman Baldwin in Public Administration Review found that minority officers are underrepresented in the Army’s middle officer ranks (captain, major and lieutenant colonel).

A 2009 study by G.L.A. Harris in the International Journal of Public Administration reinforced the observation, noting that Black officers’ challenge appears to be at the critical juncture for promotion from senior company grade to the first field grade level. Harris is a professor of public administration at Portland State University, Oregon.

While there are a number of studies that address the evident disparity in Black officers’ promotions, rarely do the findings discuss causality, or the relationship between cause and effect. When causality is explored, several themes emerge. While a student at the U.S. Army War College, Brig. Gen. Remo Butler identified four principal determinants of Black officers’ success or failure: education, developmental assignments, mentoring and the clash of cultures. His 1999 thesis, “Why Black Officers Fail,” is one of the most oft-cited pieces on the topic of Black officer successes and failures, and it speaks directly to these themes.

Concerning education, Butler was critical of historically Black colleges and universities (HBCUs) and the ROTC programs hosted there. Butler found that HBCUs and their hosted ROTC programs lacked the academic rigor and quality of professional training found at large, racially integrated, predominantly white institutions.

Col. Irving Smith III, an assistant professor at the U.S. Military Academy at West Point, New York, wrote in support of Butler’s findings. In 2010, in a U.S. Army War College Parameters article headlined “Why Black Officers Still Fail,” Smith was critical of the ROTC experience at HBCUs relative to predominantly white institutions because of the large role contractors (retired military and reserve component officers) played in military training.


An Army ROTC cadet from Tuskegee University, Alabama, slides down a rope during an exercise at Fort Benning, Georgia.
(Credit: U.S. Army/1st Lt. Stephanie Snyder)

Others have also explored education as a factor in the disparity. A 2016 dissertation by researcher Robert Smith examining the success of Black male officers in the Coast Guard found that Black officers not selected for promotion to lieutenant commander (O-4) were often missing postgraduate education. Despite that shortcoming, the research did not find that Black officers suffered from a lesser undergraduate education.

None of the studies and reports cited here addressed whether implicit racial bias during performance evaluations and promotion boards by senior officials was a contributing factor or the cause for a Black officer’s promotion failure.

Although the Army prohibits explicit racial bias in performance evaluations and selections, raters and board members’ behaviors influenced by implicit racial bias can manifest indirectly. There is minimal information concerning implicit bias and military promotions.

Research Supports Theory

A 2017 Florida Law Review article provides that “the theory of implicit bias occupies a rapidly growing field of scientific and legal research. With the advent of tools measuring individuals’ subconscious preferences toward people of other races, genders, ages, national origins, religions, and sexual orientations, scholars have rushed to explore how these biases might affect decision-making and produce broad societal consequences.” A majority of social scientists and legal commentators in the field agree that implicit bias exists and has behavioral effects that adversely affect minority and less-favored groups in American society.

Research has explored employment discrimination and found that implicit racist attitudes interact with a climate for racial bias to predict discrimination. In 2010, Dan-Olof Rooth, a professor of economics at the Institute of Social Research at Stockholm University, explored how implicit bias yields discriminatory behavior in the hiring decisions of employment recruiters.

A 2009 review of 10 studies by New York University psychology and politics professor John Jost and others that appeared in Research in Organizational Behavior revealed that students, nurses, doctors, police officers, employment recruiters and many others exhibit implicit biases for race, ethnicity, nationality, gender, social status and other distinctions. Implicit biases, or implicit associations, predict socially and organizationally significant behaviors, including employment, medical and voting decisions made by working adults.

Decision-Making Influenced

The Implicit Association Test (IAT) is used to measure unconscious racial bias and to test whether it influences decision-making. Using the IAT, a 2007 study by Dr. Alexander Green of Harvard Medical School and Massachusetts General Hospital, and others, revealed that as physicians’ pro-white implicit bias increased, so did their likelihood of treating white patients, and not treating Black patients, with thrombolysis, a clot-dissolving treatment used to restore blood flow.

The results, which appeared in the Journal of General Internal Medicine, suggested that physicians’ unconscious biases may contribute to racial and ethnic disparities in the use of medical procedures. Similarly, research has shown the phenomenon’s applicability in supervisory ratings.

Research challenges the assertion, made in a 2006 critique of the IAT in the Ohio State Law Journal by professors Philip Tetlock and Gregory Mitchell, that “there is no evidence that the IAT reliably predicts class-wide discrimination on tangible outcomes in any setting.” Racial bias exists and is significant, in both statistical and practical terms. Considering these data, McCarthy was correct to remove the bias-informing value of Department of the Army photos.

Mitigating the Impact

DoD Directive 1320.12: Commissioned Officer Promotion Program requires service secretaries’ instructions to officers serving on selection boards minimally include guidelines to ensure the boards consider all eligible officers “without prejudice or partiality.” Recognition of the existence and potential impact of racial biases necessitates the directive and instruction.

Before 1999, instructions to a selection board for promotion included language that recognized that past personal and institutional discrimination might have disadvantaged minority and female officers. Such discrimination “may manifest itself in disproportionately lower evaluation reports, assignments of lesser importance or responsibility, etc.,” the guidance said.

The removal of Department of the Army photos, combined with the masking of race data, is a necessary step in mitigating the presence and impact of implicit bias, but it also makes it harder for selection board members to consider earlier discrimination and bias. Accordingly, the Army must focus its efforts earlier in the evaluation-promotion cycle. Additional training and a process change will help.

Change Is Necessary

One of the critiques of the IAT is that it suffers from poor test-retest reliability: individuals tend to exhibit less implicit bias when they take the test multiple times. The criticism is valid, but it suggests that awareness of one’s own implicit bias can be a catalyst for change, as three university researchers explained in a 2015 Journal of Experimental Social Psychology article, “Modern prejudice: Subtle, but unconscious?”

To this end, the Army should leverage regular bias training. Soldiers currently receive training on numerous topics annually. Bias awareness training would nest well with other equal opportunity and diversity training.

Subjectivity in evaluations is a second area where change is necessary to address the impact of bias. Officer Evaluation Reports are subjective assessments and reflect biases of the rater and senior rater.

In a 2019 study in the International Journal of Selection and Assessment, “Same-gender and same-race bias in assessment center ratings: A rating error approach to understanding subgroup differences,” four university researchers reported that these biases benefit employees who share personal relationships with, and the same race as, their evaluator.

Army Regulation 623-3: Evaluation Reporting System further facilitates the subjectivity in the instruction provided to raters: “Provide an accurate assessment of the rated Soldier’s performance and potential (as applicable), using all reasonable means, including personal contact.”

The pervasiveness of implicit bias is a viable explanation for the racial disparity experienced by Black officers. Raising awareness of implicit bias and controlling its role in Army processes are necessary mitigations.

Maj. Benjamin McClellan is a strategist in the Office of the Deputy Chief of Staff of the Army for Operations, Plans and Training, the Pentagon. He formerly served on the Army Talent Management Task Force. He holds a doctorate in law and policy from Northeastern University, Boston.


