Why Our Brains Glitch to Make Us Biased—and What We Can Do About It
“We believe that we are objective, but that other people are biased.”
Mahzarin Banaji is the Richard Clarke Cabot Professor of Social Ethics in the Department of Psychology at Harvard University and co-author, with Anthony Greenwald, of Blindspot: Hidden Biases of Good People. Her research investigates unconscious thinking and feeling, and she helps to maintain Project Implicit, an educational website that uses the Implicit Association Test to teach users about implicit biases. Recently, Mahzarin chatted with Heleo’s Editorial Director Panio Gianopoulos about the ways our brains slip up without our realizing—and how we can correct for these internal errors.
Panio: When it comes to how we perceive the world, we’re wrong about many things, particularly people. You use the term “mind bugs” to describe this [mistake-making] process—could you elaborate on the concept?
Mahzarin: “Mind bugs” came out of the work of scientists studying how children make mistakes when they first learn to do arithmetic, especially subtraction. Subtraction is really hard, and there are certain patterns of errors. “Mind bugs” conveys the idea that the brain is this machine, software that’s evolved over millions of years, and that software can be buggy. It’s not that we’re bad, it’s not even that our programs don’t run, but they’re buggy enough that they make mistakes.
The term “mind bugs” neutralizes, to some extent, the difficult issues that we confront. If I forget a word, that’s fine, but if the forgotten word has to do with something that was said by an eyewitness that’s going to be crucial in their testimony, that obviously has consequences. Mind bugs run the entire gamut from being ordinary little errors to complete splattering on the sidewalk, and I’m interested in the whole range of them.
Panio: Designers usually find ways to repair software bugs. Is there anything we can do to correct mind bugs?
Mahzarin: The first thing we have to confront is the fact that we don’t know when something is a bug. Say I have to hire somebody and I look at two candidates: one looks really good compared to the other. How am I to know whether the preference of A over B is a true reflection of A and B’s talents, their merit, the likelihood that they will succeed? Or is it—as we now know from hundreds of studies—the lens through which I’m looking at those people?
If I’m looking to hire a nurse, does a man get a fair shot at that job, or will I find the female candidate more competent, even though their past work experience looks the same? Maybe the man even has an edge over her, but I end up feeling more comfortable choosing her.
That awareness is not easy. In a single instance, we can’t know, and this is why science is so helpful. We can tell you in the aggregate case that there are these biases. Becoming aware of the data from hundreds of studies shows us that in these situations, we think we’re calling it based on who’s the right candidate to be hired. But over time, we’re systematically picking one kind over the other.
How to correct for that isn’t an easy question. The people making these mistakes are actually trying very hard to hire the best possible person, and they have not a smidgen of what I call “conscious bias.”
What is interesting is the symphony orchestra study. Symphony orchestras, starting in the late 1970s, began to audition blindly. The musician would play, the judge would listen, and between them was a curtain. You have no idea who’s playing, but you don’t need to know. You only have to hear the quality of the music.
Symphony orchestras went from being almost entirely male to suddenly being deeply gender integrated. We know that that’s a bias, and we know that when the bias was removed better musicians got hired. Yet, when it comes to putting blinders on ourselves, we resist. But that could be one way in which we could remove certain kinds of biases.
Panio: I do think people tend to believe, “Well, maybe other people are biased, but I’m not. I don’t do this.”
Mahzarin: Yes, we believe that we are objective, but that other people are biased.
“Our belief that our intuition is correct is among the most frightening qualities of our minds. It is impervious to evidence.”
Panio: That can veer into this cavalier attitude of, “I don’t need data. I just go with my gut.” There’s this privileging of intuition. Intuition sounds very charming because it’s based on personality, as opposed to fact. The problem with intuition is you can’t check it against anything. It’s just a feeling.
Mahzarin: Right. I think you’ve hit on the hardest thing here. Our belief that our intuition is correct is among the most frightening qualities of our minds. It is impervious to evidence. I often hear people say, “It’s not that I don’t like him. I just don’t think he’s going to fit,” and I’ll say, “What about him do you think won’t fit?” They’re almost always dumbfounded when I ask that, because what they mean is, “Something in my gut just tells me,” so they can’t voice what objective facts about that person would make them not fit. But something feels that way.
When our ancestors made decisions based on intuition, it worked out okay in a world that was very different. If you had to decide whether that thing in the forest was a tiger or not, it was okay to run if you thought it was a tiger and it turned out not to be a tiger. That was not a big deal. Using intuition in the past allowed our ancestors to survive long enough that we are on the planet today.
But our world is so different. We have to look at other human beings who are vastly different from us. The same people our ancestors would have killed off—we have to try to collaborate with. As modern humans, we have decided that those old forms of kinship are not the ways in which our society will advance.
We don’t hire our children into our businesses, necessarily, because there’s such a thing called “talent” and it may not be our children who have it. We know all that, and yet the pull towards the primitive, towards the feeling of running to your family, hunkering down in moments of uncertainty, these are not only natural things, they are predictable. We should be warned ahead of time that this is how we’re going to feel.
I’m working on a project called “Outsmarting Human Minds,” an idea to build a set of little modules that will teach people, in two or three minutes each, about these mind bugs and ways in which we might beat them.
Panio: That’s a great idea. I’m a parent of three young kids and I find myself looking at them and watching their minds work and instructing them, essentially, about things that they’re going to screw up on. From my perspective, it’s very easy to see it. I wish there were a way to do that for myself, because I’m also making mistakes—I just don’t happen to have someone there telling me, “Hey. Watch out. You’re going to make these predictable mistakes.”
Mahzarin: That’s a very profound insight. In the future, I wonder if we will have some kind of moral, intellectual barometer that will tell us, “Yes, you can do this, but if you do, three different species of fish will die.”
Panio: Sounds like a Black Mirror episode.
Mahzarin: That would be a wonderful device, so that our actions and repercussions are given to us before we make choices.
Panio: You’d think this idea of causality would be really compelling, and yet, I feel like we’re very bad about actually thinking about the consequences—anything beyond the short-term.
Mahzarin: Again, that comes from an old world. Until very recently, people didn’t live for very long, and so the whole notion of a future was a strange possibility. The future is a couple of hours ahead—that’s what the hedonistic mind is thinking. “What can I do now to find pleasure? Because, who knows? I won’t be here in a couple of hours.”
Yet, we are living longer. Think about the incredible job that this three pound brain has to do to be able to imagine not one career, but several. To imagine ways in which your actions are influencing other people, not just your own children, but the planet.
“Our own future selves are strangers to us. This is why we don’t save enough for retirement or eat healthily now.”
Panio: I read some research about brain imaging based on when people picture themselves in the future. They would look at computer mock-up versions of themselves, and the part of the brain that fired was the part that activated when seeing a stranger—as opposed to themselves.
Mahzarin: Absolutely. That work came from my department. I was always impressed that our own future selves are strangers to us. This is why we treat our future selves so shabbily. This is why we don’t save enough for retirement or eat healthily now.
Panio: Why don’t you tell me a little about the IAT?
Mahzarin: The IAT (Implicit Association Test) is a device that allows you to get a glimpse of what your mind contains—the knowledge that you have picked up by living in a particular culture. The IAT recognizes that certain things have come to be associated with each other because our experiences have shown us that they go together.
Certain things pair up because they have co-occurred: cloudy sky and rain, bread and butter, mother and father. There might be many such associations that have gotten into your head without your being aware of them. One is that the category Black, or Black American, has come to be associated in our culture with things that are not positive. To the extent that “black is bad” in our minds, we ought to be able to devise a test that picks up how strong that association is for you. If it’s not true for you, you should show no such association: black should raise the possibility of good and the possibility of bad equally.
When you put people under time pressure to respond, you can see the effect. For a person like myself, who harbors no racial bias at any conscious level, taking the Implicit Association Test means I have to try to put black and good together. Putting white and good together I can do very easily. I can easily pair white faces with good words like “love” and “peace.”
But when you make me switch—“Mahzarin, for the next round, I’d like you to associate black with good and white with bad”—my mind comes to a standstill. It does the task, but very slowly, making many mistakes. The extent to which I can do the former much faster than the latter is a measure of how strongly my mind has come to associate black with bad and white with good.
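The scoring logic Mahzarin describes (faster, more accurate responses on pairings that match your associations, slower ones on pairings that clash) can be sketched as a toy computation. This is purely illustrative: the function name, the reaction times, and the simple formula are invented here for clarity, and Project Implicit’s actual scoring procedure is more elaborate.

```python
from statistics import mean, stdev

def iat_d_score(compatible_rts, incompatible_rts):
    """Toy IAT-style score: how much slower the 'incompatible'
    pairing (e.g. black + good) is than the 'compatible' one
    (e.g. white + good), scaled by the pooled spread of all
    responses. Positive values indicate the stereotypical
    association; values near zero indicate no such association."""
    pooled_sd = stdev(compatible_rts + incompatible_rts)
    return (mean(incompatible_rts) - mean(compatible_rts)) / pooled_sd

# Hypothetical reaction times in milliseconds.
fast_block = [620, 650, 600, 640, 610]   # white+good / black+bad rounds
slow_block = [840, 900, 820, 870, 860]   # black+good / white+bad rounds

print(round(iat_d_score(fast_block, slow_block), 2))
```

A respondent with no implicit preference would produce roughly equal times in both blocks, and the score would hover near zero; swapping the two blocks flips the sign.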
“My first reaction was panic. My second was, ‘The test is screwed up.’ Only the third reaction was, ‘Oh my God. It’s not the test. It’s me.’”
That’s at the heart of the IAT, and you could replace black and white, and good and bad, with anything you want. The Red Sox and the Yankees with good and bad, for example. I show a nice, positive association of Red Sox with good and Yankees with bad, and I’m sure somebody in New York would show exactly the opposite. That too, is a bias, but who cares about that one? That’s a good bias—a bias that favors your own team, your own community, your own school, even your own child. Preferring your own child to the neighbor’s child, that’s perfectly reasonable.
What about these other biases and associations? That females belong in the home and males belong in the workplace, that Native Americans are foreign and that European Americans are American—these are not biases I want to have. These are not things that I believe have any place in my head, and yet I discovered that they do.
Panio: It must have been startling. Did you expect that?
Mahzarin: Not at all. I had the strong intuition that I would not show that bias, because I understood how the test was set up. When I couldn’t finish the task as fast and as accurately on the one pairing as on the other, my first reaction was panic. My second was, “The test is screwed up.” Only the third reaction was, “Oh my God. It’s not the test. It’s me.”
Panio: Are there any people who’ve taken the test who have not demonstrated a bias?
Mahzarin: Yes. 70% to 75% of White Americans will show an association of white with good, but 25% to 30% do not. Most of those are neutral, in the sense that black and good is as fast and accurate for them as white and good, and a small percentage actually veer the other way: they associate black with good more strongly than white with good.
With the gender and career test—male/female, home and career—something like 75% of men and, this is interesting, 80% of women show the stereotypical bias. They associate female with home and male with career. But about 20% to 25% of people do not.
Likewise, you can see it in the domain of age bias. As you might imagine, in just about every culture, elderly is not good. What is fascinating about this data is that, under certain conditions, people who belong to the disadvantaged groups themselves show bias against their group. The elderly are just as anti-elderly as young people are on the IAT, even though they would never consciously express negativity toward their own group, which tells us that the IAT is picking up something about how well your group is thought of in the culture in which you live.
Among the most painful results I’ve seen is how early in life young children come to look just like the adults of their group. 75% of six-year-old white kids show the same preference as white adults, even though they have had much less experience of the world. You would think they’d be oblivious, but implicit cognition develops very fast, and it’s learned not just from direct experience but from what other people think, what television tells us, what storybooks show us. Like sponges, children absorb whatever is there.
Panio: Can people ever change their bias, taking a test years later and seeing a difference?
Mahzarin: Yes, change is possible, certainly at the explicit level. There is some evidence that the sexuality test is showing lower bias among Americans. That would fit with our understanding of how fast our culture is changing on that issue—in the last 10 to 15 years we’ve gone through very rapid change. It’s not just Hollywood, of course. Sexuality also has this other feature: we can know people and love them and only later find out that they’re gay, something we cannot do with race, for example, or age or gender.
We have seen a reduction in that bias, but not in race bias. People often ask about that in connection with the Obama election.
Panio: One of the things I’ve always found interesting is the evolutionary explanation, where people say, “Well, if it served no function then it would have evolved out of us.” When you think about racism, I can understand, conceptually—if somebody looks different from you then they’re not from your kin or your tribe, they might be more dangerous. But isn’t there an evolutionary imperative to diversify genetically? It seems like you would actually be attracted to other races, in general, because it would be good for genetic diversity.
Mahzarin: The evolutionary question is an interesting one in the following way: there are many behaviors we currently perform that are clearly part of our repertoire because they helped in the past. They are part of us—our intelligence, our personalities—because they paid off in the past. But one of the things Darwin is very clear about is that the same attributes that paid off in the old world may not pay off in a new one.
I’ll take sugar and fat as a nice example. Our bodies are very good at storing the smidgens of sugar and fat that we eat as body fat. That happens because we evolved in times of so little food that the only people who survived were the ones whose bodies could store sugar and fat. The others died long before they could have children. This was a very good thing for our ancestors; we would say evolution showed it was a winning ability.
Today, a world that is filled with sugar and fat is killing us. We have to think very carefully when we say, “This is an evolutionarily good thing which is why I have it.” Probably you could come up with a good argument for why this was helpful in the past, but that doesn’t mean that it is helpful in the present. In fact, the very thing that was helpful in the past could kill you off today. I think socially, there’s something very similar going on.
Indeed, we ran from people who looked different from us. It’s reasonable to expect that our suspicious minds question things that look different. But what if you live in a world where somebody who looks different from you has a skill you lack, and success requires that you move toward them rather than away? If you can’t do that, you can’t make money, you probably can’t hold a job, and you certainly cannot be a great leader in a global world.
Now of course, we don’t run from all difference, because you’re right, there are people who have curiosity, who wondered, “Who are those people who live on the other side of the mountain? Why do their girls look so beautiful?” We explored; we got on little planks of wood and traveled just to be in totally strange surroundings. We’ve been doing this for a very long time.
This conversation has been edited and condensed.