“That is something that human beings have never truly done.”
Steven Johnson is the bestselling author of eleven books, including Where Good Ideas Come From, Wonderland, and The Ghost Map. He is also the host and co-creator of the Emmy-winning PBS/BBC series How We Got To Now, and the host of the American Innovations podcast. He recently sat down with Next Big Idea Club CEO Rufus Griscom at Betaworks Studios to discuss his latest book, the Next Big Idea Club Winter Finalist, Farsighted: How We Make the Decisions That Matter the Most.
Rufus: You talk about our process of addressing global warming as evidence that actually we’re getting better at [prediction], which of course is counterintuitive to most of us. It’s quite rational to feel like this is one of the most cataclysmic errors we have made—in not properly addressing climate change. But you frame it differently.
Steven: The thing that we have to remember is that what we’re trying to do in thinking about climate change is something that is very hard for human beings, which is to think about the consequences of our actions today as they will affect the world in 20 or 30 or 50 years. That is something that human beings have never truly done. Human beings have built institutions, structures, and infrastructure designed to last for hundreds of years, certainly, but there’s very little history of humans actually thinking about changes that are coming based on current trends that will make the world very different in 50 years.
The analogy would be when automobiles were invented 120 years ago. Imagine everybody looking at this new contraption and [saying], “Oh, this is really cool, but I think what’s going to happen in about 50 years is that these suburbs will be built now that we have cars, and we’ll empty out the inner city core, and we’ll end up having crime problems, and we’ll be outputting carbon into the atmosphere, and the climate will …” No one thought like that in 1900. They just said, “Hey, it looks like a carriage, but without the horse.” That was all people could think.
Now we actually have gotten better, thanks in part to computers, which we often think of as exacerbating short-term Twitter attention spans. Thanks to computer modeling, we can think about 100 years of climate change, and about how there will be consequences from actions we’re taking today that will have a lag time of decades. And we can think about how to change our behavior. I bet a lot of people [now] make day-to-day decisions that are shaped by concerns about the amount of carbon that will be in the atmosphere in 10 or 20 or 30 years. That’s a totally new way of thinking.
We’re doing it now with artificial intelligence as well. There’s this big debate about the risk of superintelligent machines that might not be invented for another 50 years. Some people look at that and say, “Why are you wasting your time? There’s enough trouble now just trying to fix Facebook—why are you worrying about the Terminator in 50 years?” But I think that’s the exercise we should have been doing with social media 10 years ago. [People say,] “AI doesn’t seem all that impressive right now. Siri can’t even figure out that I’m trying to text my wife. Why am I worried about superintelligence?” But if we don’t make those kinds of forecasts, and we don’t think about what those long-term effects could be, that will get us into trouble all over again.
“When you’ve decided on path X, the last exercise you do is something that he calls a premortem.”
There’s a great technique [for this] that is the creation of a psychologist, Gary Klein. [Malcolm] Gladwell writes about him in Blink a little bit. [Klein] came up with a routine for when you’ve almost made your final choice, whether it’s moving to [a new state] or launching a new product in your business. When you’ve decided on path X, the last exercise you do is something that he calls a premortem.
A premortem is, as you can guess from the name, the opposite of a postmortem. [In] a postmortem, the patient is dead, and it’s the forensic scientist’s job to figure out what killed the patient. [In] a premortem, the patient is going to die, and it’s your job to figure out what will kill the patient. So in the decision context, it works like this: assume the decision you have [selected] will prove to be a catastrophic failure in two years. Tell the story of how that failure happened. Force yourself to imagine the narrative of why [it was] terrible, and how what seemed like a brilliant decision turned out to be an awful mistake.
It turns out that you get very different results from people psychologically when you phrase the question that way. When you ask people, “Hey, you think you should go down this path, do you see any flaws in that plan?” they [answer,] “No, it looks great. It’s a beautiful path.” But when you ask them, “Okay, invent the story of how this path ends up leading to disaster,” they end up being more creative. They see flaws they wouldn’t have otherwise seen.
That is the routine that I believe tech companies should be doing. [It] should be built into the tech culture. If you are in a disruptive industry, and you are messing with the ways in which people create value, or share information, or share their ideas, I think it is important to run the premortem on new products and features.
This is what Facebook and Twitter did not do. They were like, “Oh, well if you connect everybody, the world will be a better place.” They didn’t take the time to [ask], “How could this be manipulated and abused?” Not just from their own standpoint as a business, but from a broader social standpoint. If you’re going to go and move fast and break things, running premortems on how things might break in unanticipated ways is an exercise that, ethically, I think you should be required to do.