Dan: You traveled all around the world and across our great 50 states to research the transhumanism movement. Can you explain transhumanism?
Mark: Basically it’s a social movement predicated on the idea that through technology, we can change the basic elements of what it means to be human, like the fact that we die and that our intelligence is limited by our brain matter. The idea is that through technology, we can push beyond the boundaries of our humanity.
Dan: Technology already does that for us in many ways. For example, right now I am in New Zealand, and you are in Ireland, right? That’s a gap that, once upon a time, two mammals would not be able to bridge in any way. But here we are, talking to each other from across the world. And technology has advanced enough that, instead of being amazed by that fact, all I can do is be slightly annoyed by the half-second delay.
Mark: As I may be slightly annoyed by the fact that my three-year-old son is wailing in the background.
Dan: You write about how one of the first things that got you interested in this movement was this existential crisis you had when your son was born. This notion that “the natural end point of my life and his life is that we die” is bullshit, and that clearly someone must be working on something better than that. How has your investigation into transhumanism changed the way you think about parenting?
Mark: The more time I spent with these people, the more I was pushed into quite conservative positions actually. They’re so extreme and radical that they force you into positions that you wouldn’t have imagined you would hold—like defending death. [I found myself] saying, “Actually, maybe the fact that we die is okay. Maybe that’s the source of all meaning.”
When you talk to transhumanists, eventually you will come up against this idea that [their] ideal future is [one] of disembodiment, and absolute power and absolute godlike transcendence. To me, this is really hard to relate to. There was a lot of stuff about transhumanism that I could relate to, but the idea that you want to eventually be a disembodied, unfleshed being, floating through the universe and gathering all possible knowledge as you went… You might as well be dead, you know? What is the appeal of that?
“A lot of transhumanism, like a lot of contemporary culture, has to do with an over-identification with the computer. It’s a confusion of the boundaries of human and machine.”
Dan: It’s like taking some of the worst aspects of life, being on the Internet, and making it the only thing you experience.
Mark: Yeah. I feel like a lot of transhumanism, like a lot of contemporary culture, has to do with an over-identification with the computer. It’s a confusion of the boundaries of human and machine. There’s a difference between those of us who are just using the technology and people who are coders or software engineers, who are kind of in the matrix. They see the world in a different way. They see life as composed of logical, solvable problems, and I think transhumanism probably has a lot to do with that.
Dan: It seemed like you were coming from the quite reasonable perspective that life is not a series of solvable problems, and that the inexplicable messiness of life is its appeal. Whereas they’d think that’s a bug.
Mark: I mean, I hate the messiness of life as well. I also slip into thinking of myself in a machine-like way. That I’m not productive enough, or for all the input I’m taking, there’s not enough output. Or, “I wish I was smarter, I wish I had more computational power, I wish I was able to just sit down and get my work done and not faff about.” There’s an appeal to that computational way of thinking of yourself.
So, I do have those frustrations, sure. Are you familiar with nootropics?
Dan: Definitely not.
Mark: They’re basically smart drugs. This guy named Abelard Lindsay was a freelance programmer, but he had this sideline in concocting smart drugs. There’s a whole community of people who concoct these nootropic stacks, messing around with the levels and measuring the effects on their own cognitive abilities. It’s a very extreme version of the quantified self.
Lindsay had this drug called Ciltep, a compound of artichoke extract and something called forskolin. I took it for a while and was pretty convinced that it was making me more efficient and focused. It was probably a placebo effect, but I was really drawn to this idea that I could take a pill and make myself smarter, more efficient, and less distractible.
Dan: But isn’t there an actual drug that does that exact thing that isn’t just a bunch of bullshit about avocados? Couldn’t you just take Adderall?
Mark: Adderall? I’d never go that far.
Dan: You would gladly take some weird-ass shit that some guy made out of avocados and not an actual drug regulated by the FDA?
Mark: I think it was artichokes.
Dan: I’m in New Zealand right now, a country where weird billionaires are apparently buying land to prepare for the apocalypse. One contradiction you wrote about is between what seems like the dire straits of our future and this desire on the part of transhumanists to extend themselves indefinitely into that future. If Peter Thiel thinks the zombies are coming so that he needs to buy land in New Zealand, why does he also want to live forever?
Mark: [It’s] interesting: our vision of the future, of the apocalypse, is completely tied to our ideology.
My vision of the apocalypse that may or may not be coming is climate change: capitalism is going to be the thing that destroys us. That’s a left-wing apocalypse that’s completely consistent with my moral, ethical, and ideological view. Whereas if you’re a doomsday prepper, or typically right-wing, your vision of the apocalypse would be something like a nuclear bomb being dropped by an enemy of America. Or an outbreak of infectious disease causing minorities to spill out of the city and steal your shit, so you’ve got to protect your stuff with your guns and have your canned goods in a bunker.
I think Peter Thiel is a conflation of those two things. He feels like the world is going to shit, that capitalism and democracy are not compatible and eventually, everything’s going to collapse. But as long as you have your own future locked down, that’s okay.
Dan: So the doomsday prepper feels as though, as long as their ass is covered, they do not necessarily have a responsibility to save everyone or even anyone else. And a transhumanist who shares that belief feels like, “Why not extend myself indefinitely into the future? It’s just saving myself, taken to its logical extreme.”
Mark: Right. There’s a tendency, and I’m guilty of it, to see transhumanism as tied in with a particular ideology. There’s this strong vein of libertarianism running through all this stuff, and it’s completely compatible with this extreme individualism, which I guess is why I spent so much time in America, why so many of the people I talked to were Americans or at least in America.
But at the same time, transhumanism is ideologically neutral. There are socialist transhumanists and communist transhumanists. I met at least one Muslim transhumanist; they’re all across the spectrum.
“Whether or not I find this future appealing, it doesn’t matter. We’re talking on Skype, we’re on the Internet—none of us signed up for this, always being in cyberspace.”
But if everyone’s going to live forever, what does everyone do? Is there going to be a massive population explosion? Should we stop having kids?
Dan: Well it’s just going to be the rich who, if such technology ever presents itself, are going to benefit from it.
Mark: I think so. There’s this idea that at the beginning it will be available only to the very rich, then it will trickle down to the moderately wealthy, then to your average Joes, and eventually we will all be bloated consciousnesses or whatever.
But one of the things that interests me about all of this is that whether or not I find this future appealing doesn’t matter. We’re talking on Skype, we’re on the Internet—none of us signed up for this, always being in cyberspace. If you’d described this version of the present to yourself in 1991, how would you have reacted?
Dan: With horror, I assume. In 1998, I swore I wouldn’t have a cell phone, and look at us now.
Mark: So did I. The 1998 version of you was probably right, you know? We are living in a dystopian present. But you don’t get a choice. Well you do technically, but really, who’s not using the Internet?
Dan: You mentioned your various journeys across America. This book did, by the end, really turn into a wacky road trip across weird transhumanist America. Is there something about Americans that makes us particularly susceptible to a belief in our own immortality?
Mark: Most Americans have a really good grasp of themselves and the narratives of their own lives, and that’s not something you feel with most Europeans. I wonder whether it has something to do with America having a very clear narrative of itself as a country, about itself and what it means to be American.
“I think that if you scratched beneath the surface of Californian ideology, it’s basically anti-death.”
Transhumanism is an international movement, and I did spend a certain amount of time in places other than America, but most of my time was spent not just in America but in California in particular. Transhumanism is this specifically Californian way of thinking about life and about the self: you can achieve anything if you take the right attitude, if you apply the right smarts and know-how, roll up your sleeves, and apply money to it. Basically anything is solvable.
I think that if you scratched beneath the surface of Californian ideology, it’s basically anti-death. Death doesn’t fit with that picture at all. It’s probably no coincidence that transhumanism has taken hold in California. Obviously there is the specifically technological culture of Silicon Valley, but I think that it comes from a Californian individualism as well.
Dan: There’s a section in the book where you’re looking at these thin-sliced mouse brain scans, and you think about how disturbing it is to think that we are all really just information. So, what proof do you have that you and I are not actually just uploaded, emulated whole brains talking to each other right now?
Mark: I have no reason at all to believe that you are not that, other than just blind trust. I’ve never seen you in person, and this is only the second time we’ve talked. You may well be a relatively sophisticated AI algorithm.
That’s actually a prevalent idea in Silicon Valley, and it’s something that transhumanists talk about. I don’t know how deeply they believe it, but it’s consistent with their worldview. I’m straining to remember my undergraduate philosophy, but didn’t Descartes say something quite similar?
Dan: You tell me. Your undergraduate philosophy is stronger than mine.
Mark: I’m gonna go with he did. Schopenhauer talks about it as well, I think: the veil of maya, the idea that what we see is just this screen of delusion, and behind that is the real reality. It comes back to Plato as well. It’s a really old idea; it just keeps getting reconfigured. Maybe it has to do with extreme over-identification with software and with machines, and maybe having seen The Matrix one too many times? Some Silicon Valley billionaire is reportedly so taken by this idea that we live in a simulation that he has employed a group of programmers to try and break us out of it.
“Is there room for art in a future of superbrains exploring the universe, or is that the frivolity that gets eliminated when our processing power gets accelerated to supercomputer speeds?”
Dan: The title of the book, [To Be a Machine], comes from somewhere a bit unexpected: a quote from Andy Warhol. He talks about how he wants his art to be seen as though it is coming from a machine. But your book did make me wonder, is there room for art in a future of superbrains exploring the universe, or is that the frivolity that gets eliminated when our processing power gets accelerated to supercomputer speeds?
Mark: Maybe I’m revealing my essential modernism here, but I feel there’s something primitive about art in a really good way. There’s something about art that is based in not understanding, in being confused in the world, being frustrated by the world. And I don’t think there is a place for that in this vision of the future. There’s [no] place for confusion and frustration. The idea is that we transcend all that. There will be no need for art, for religion, for stories, any of these things.
I’m sure there are lots of other ways of thinking about that and there are, as you’re probably aware, AI algorithms that are trained to make images and music. Have you heard any of these?
Dan: Yeah, they’re bad.
Mark: It’s the cheesiest, radio-jingle level of music writing. It’s an uncanny parodic reflection of us at our worst. That’s what’s really creepy about it. There was a musical written entirely by an artificial intelligence algorithm that I think was performed in the West End in London. By all accounts it was pretty appalling. That’s not to say that art could never be made by a machine, but why would you want art to be made by a machine?
Dan: And once you’ve achieved the singularity and you are just a theory exploring the universe, like you’re going to waste time listening to a symphony? No way.