Where were you when you first learned about implicit bias? Maybe you were on your couch at home watching one of the 2016 US Presidential Debates when Hillary Clinton discussed implicit bias in her debate with Donald Trump. Many psychologists like me had been hearing about implicit bias, and were sick of hearing about it, for nearly two decades by then. I can remember precisely where I was when I learned about the measurement of these supposedly hidden prejudices. I was a PhD student at Brown University in 1998 (back when Hillary's husband was President), sitting in my advisor's office, when she had me complete an implicit association test (IAT) on the computer. The test consisted of pictures of Black and white faces and positive and negative words, and my job was to sort the faces by race and the words by emotional valence. What quickly became apparent was that I had a much tougher time pairing Black faces with positive words than with negative words. And I didn't have to think too hard to grok what my difficulty meant about me: I was a racist.

Let me back up. In the 1980s and 1990s, psychology had a problem. We knew people were biased, but whenever we asked them directly, they acted shocked. "Prejudiced? Who, me? Never!" Meanwhile, the world kept providing evidence that bias was alive and well. The issue was that prejudice had become so socially unacceptable that no one would admit to holding negative stereotypes about groups other than their own. And make no mistake: bias was real, as anyone could see from persistent disparities in hiring, housing, and criminal justice. So simply asking people about their racial attitudes wouldn't cut it: people would lie to the psychologists, even to themselves. Even worse, people might not even be aware of their racist attitudes and impulses, so how could we expect them to self-report on them?
To solve this, creative psychologists devised ever more clever ways to assess bias, from elaborate fake polygraphs—the bogus pipeline—to indirect questionnaires. But these approaches had obvious limitations. Around the same time, the cognitive revolution was in full swing, and cognitive psychologists were making real progress in mapping the structure of mental concepts using speeded reaction-time tasks (where people respond as quickly as possible to stimuli while researchers measure how fast they react). In one influential demonstration of semantic priming, participants recognized the word "doctor" faster when it was preceded by the word "nurse," suggesting these concepts were not only semantically related but stored in memory as linked nodes. More than that, these mental networks didn't just encode meaning; they encoded evaluation, too. So when people saw a positive word like "flower," they were quicker to recognize another positive word like "good," a finding that launched decades of work on evaluative priming.

It also planted the seeds for how to assess implicit bias. If "flower" and "good" go together in memory, then maybe racial groups and evaluations work the same way. Perhaps our cultural environment wires "Black person" to "bad" and "white person" to "good," even if we consciously reject those links. That was the clever insight: rather than asking people how they feel about race, psychologists could measure how quickly they paired "Black" with "good" or "bad." The slower you were to link Black with good, the stronger your supposed bias. From this idea came the IAT and other speeded reaction-time tasks. These were tools meant to bypass self-report and catch bias in the act. Researchers called these tools bona fide pipelines—to contrast with the bogus kinds—because they were thought to be pipelines to the soul, unobtrusively measuring racial attitudes. These measures became psychology's darling almost overnight.
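If you're curious what "slower to link Black with good" looks like as a number, here is a toy sketch of the scoring logic. It is not the published IAT algorithm (the standard D-score procedure of Greenwald and colleagues adds trial filtering, error penalties, and block-specific pooling); the function names and reaction times below are hypothetical, for illustration only.

```python
import statistics

def d_score(compatible_rts, incompatible_rts):
    """Toy IAT-style effect: mean latency on the 'incompatible'
    pairing block (e.g., Black + good) minus the 'compatible' block,
    scaled by the pooled standard deviation of all trials.
    Positive values = slower on the incompatible pairing."""
    diff = statistics.mean(incompatible_rts) - statistics.mean(compatible_rts)
    pooled_sd = statistics.stdev(compatible_rts + incompatible_rts)
    return diff / pooled_sd

# Hypothetical reaction times in milliseconds
compatible = [612, 598, 640, 575, 630, 601]    # white + good / Black + bad
incompatible = [720, 695, 751, 688, 733, 710]  # Black + good / white + bad
print(round(d_score(compatible, incompatible), 2))
```

Dividing by the pooled standard deviation, rather than reporting the raw millisecond difference, is what lets scores be compared across people who differ in overall speed.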
The promise was irresistible: objective, scientific measurement of subjective, hidden bias. There was just one tiny problem, man. (You know what's coming.) The measures don't work as advertised. Olivier Corneille and Bertram Gawronski just published a paper that systematically dismantles the entire edifice of implicit measures, and to be honest, I don't see how the field recovers. New shit has come to light. Before you dismiss their critique as a couple of iconoclasts criticizing the establishment, know that these authors, especially Gawronski, are no dilettantes. Along with Galen Bodenhausen, Gawronski developed the influential APE (Associative-Propositional Evaluation) model, which became one of the most important theoretical frameworks for understanding how implicit and explicit attitudes work together. When someone who helped build the house tells you the foundation is cracked, you listen.

Corneille and Gawronski examine six core claims about why implicit measures are supposedly superior to self-reports, and every single one crumbles under scrutiny. Remember, the whole point of developing these measures was to get around the problems with self-reports—the lying, the lack of awareness, the social-desirability effects. The first and most crucial claim was that implicit measures are immune to social-desirability and context effects while self-reports are hopelessly compromised. Except it's wrong. Implicit measures are just as sensitive to social context as self-reports, sometimes more so. Change the race of the experimenter and scores shift. Test people in public versus private settings and results change again. Responses on implicit tests can be controlled and even faked. The supposedly pure pipeline to unconscious bias turns out to be just as contaminated as the self-reports we'd rejected. According to the authors, "there is no compelling evidence that implicit measures are immune or less sensitive than self-reports to social-desirability effects." Ouch.
We spent decades trying to bypass the problems with asking people directly about their racial attitudes, but our solution doesn't work.

Equally devastating is the consciousness claim. Implicit measures are thought to tap into unconscious thoughts people genuinely don't know they have. This was supposed to solve the other half of our self-report problem: even if people wanted to tell us about their biases, they might not be aware of them. However, study after study shows people can predict their implicit scores with surprising accuracy. If you can guess what your so-called unconscious bias test will reveal, how unconscious is it really? Even prominent scholars of implicit measures like Anthony Greenwald now admit that these measures cannot assess unconscious thoughts and feelings. The authors also dispute claims about automaticity (that only implicit measures can tap automatic processes), the ability to capture simple associations (that only implicit measures tap mental links between concepts), and robustness (that only impli