Why You Are (Probably) Not Living in a Computer Simulation
I recently watched Devs, a somewhat bleak but stylish, serious and well-researched TV drama written by Alex Garland, which explores a number of philosophical themes close to my heart. The following discussion of it contains some spoilers, so if you haven’t seen it yet (and it’s worth watching), I’ll give you a moment to run off and binge-watch the 8-episode series on BBC iPlayer, or Hulu, or (cough!) wherever… All done? Great! So, in summary, breakthrough developments in quantum computing have allowed a group of Silicon Valley types to be able to predict the future and visualise the past. There is a lot of philosophically interesting discussion about whether this is possible, ethical quandaries relating to foreknowledge, what it means for human free will, etc., and the whole premise drives the drama very nicely. I was, however, somewhat disappointed by the finale, which involved various individuals being “uploaded” to a computer simulation. To be fair, this is still a popular sci-fi conceit (Upload on Amazon Prime provides a nice recent comic take on it, and it’s been well-visited by numerous novelists). Ultimately, I suppose, given that the series had broken new ground in a believable way, I was simply hoping for more. But I think it also annoyed me that the notion of “living in a simulation” was so glibly presented as a coherent possibility.
Although an old trope of science fiction, the popularity of this idea seems largely due to the so-called Simulation Hypothesis first proposed by Oxford philosopher Nick Bostrom in his paper, “Are You Living in a Computer Simulation?”. For those not willing to plough through Bostrom’s paper, there is a nice little summary of his arguments in a short interview with him on YouTube, but I will also attempt to briefly explain his central points below.
Firstly, Bostrom observes, technological and scientific knowledge appears to be increasing at an inexorable and exponential rate, which will eventually allow us to simulate reality. Given the eventual technological means, and (judging by the nature of human academics and researchers) that there is the interest to do so, it seems likely that at some point humans will attempt to simulate what the past was like by creating “ancestor simulations” (a hyper-detailed and realistic version of The Sims, perhaps). Such a simulation would be so detailed, in fact, that the simulated beings would themselves possess consciousness.
If we broaden out this possibility, then we can apply the argument to any civilisation that might arise anywhere and at any time in the universe. Given the age and size of the cosmos, and the number of planets that could support life, it therefore seems likely that there are countless civilisations that may have evolved somewhere and at some point. Of these civilisations, either they never reach “technological maturity” (they blow themselves up in a nuclear holocaust, their planet is destroyed by an asteroid, etc), they never develop such ancestor simulations (it’s not technologically feasible, they aren’t interested, etc), or they do in fact go on to develop such simulations. Given our current knowledge and assumptions, Bostrom argues, the third option seems at least as probable as the other two, but if we assume it to be true, then a more radical probability emerges: it is almost a certainty that we are in fact now living in a computer simulation.
This last step may seem like a bit of a jump, but Bostrom’s reasoning is that if there are a sufficient number of technologically mature civilisations that are interested in running ancestor simulations, then, on just one planet, simulated people will likely vastly outnumber unsimulated ones (think of the number of bored alien teenagers who could be playing “ancestor Sims”, each on their own computer). Add to that the likelihood that there are any number of other such civilisations, all with similarly bored teenagers (or well-funded uni departments), and the likelihood that you are now one of the real people becomes quite small.
In summary, then, there are three options (to each of which Bostrom allots equal likelihood):
- Almost all civilisations tend to blow themselves up before they reach technological maturity; or,
- almost all technologically mature civilisations show no interest in running such simulations (or are technologically unable to); or,
- you are almost certainly living in a computer simulation.
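The counting step behind this trilemma can be made concrete with some back-of-the-envelope arithmetic. The Python sketch below uses entirely invented numbers (Bostrom’s paper commits to no particular figures); it simply shows how quickly simulated people would swamp unsimulated ones once even a single civilisation runs simulations at scale.

```python
# Toy version of Bostrom's counting argument. All figures are
# invented purely for illustration.

def fraction_simulated(real_population, civilisations, sims_per_civ, people_per_sim):
    """Fraction of all conscious observers who are simulated,
    given a count of real people and of simulation-running civilisations."""
    simulated = civilisations * sims_per_civ * people_per_sim
    return simulated / (simulated + real_population)

# Suppose just one mature civilisation of 10 billion people runs a
# million ancestor simulations, each containing 10 billion simulated people.
p = fraction_simulated(10e9, 1, 1e6, 10e9)
print(p)  # very close to 1: almost all observers are simulated
```

On these made-up figures, the chance of being one of the unsimulated people is roughly one in a million; adding further simulating civilisations only shrinks it.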
Bostrom’s hypothesis seems to have garnered quite a lot of popular attention, perhaps owing to its silly-season headline-grabbing assertion (“Are you living in a Computer Game?”), the backing of some high-profile adherents (e.g. Elon Musk and Sam Harris), or those Matrix fans looking for something to fill the gap before the fourth instalment. But Bostrom’s position is not new, merely a technological twist on the old sceptical argument that reality is an illusion, an idea with a long philosophical and religious pedigree (Plato’s cave, Zhuangzi’s butterfly, the Vedic notion of Maya, Descartes’s Malignant Demon, etc). What is new, perhaps, is the role played by the notion of “simulation”. Descartes wondered whether reality could be a dream, but in so doing assumed consciousness to be real (whatever “I” am, it is certain that “I” exist in some form). For Bostrom, however, it seems as if the opposite is true: it is the physical world of exponential technological growth curves that is undeniable; in comparison, consciousness is ephemeral, something to be conjured up by alien techno-nerds and simulated by clever extraterrestrial algorithms. We might wonder – as Descartes similarly did in relation to his own scenario – how Bostrom can trust the “simulation” logic and “computer game” physics of this “reality” in order to extrapolate the probable scenarios that we “must” accept. If all this is a simulated game, then why should we trust our own reason or empirical evidence, both of which may have been tweaked by malicious alien programmers? But I actually think Bostrom has a bigger problem.
In everyday language, a “simulation” usually describes a scale model of something more complex. Meteorologists simulate the weather in order to predict if it will rain tomorrow; an engineer might simulate the performance of a bridge design under high winds; a country’s military commanders might simulate various forms of conflict. In most such cases, especially where the simulation takes place via computer or a scale model, there is no danger that the simulation might be mistaken for reality. We can imagine a situation where a fire alarm turns out to be a drill (or not), or an army’s “manoeuvres” be mistaken for (or disguise) actual military aggression. But no one is going to get wet from a computer model of a thunderstorm, no matter how detailed and accurate that model is. It is a weird assumption, then, one which seems to have become accepted sci-fi dogma (even considered plausible by some philosophers), that the possibility of living in a computer simulation is just a question of having enough data – even though “being conscious” in a simulation is the equivalent of “getting wet” in a weather simulation (and therefore, you would think, ruled out). In his YouTube interview, Bostrom himself seems to share this assumption, talking of “a computer simulation so detailed that the simulated people would be conscious” – as if the magical ingredient that is consciousness is just a matter of adding more detail, more information.
The bizarre nature of this assumption can perhaps best be drawn out if we compare it to another scenario. Imagine you are writing a novel, and in your dedication to hyperrealism, your characters become so detailed and fully realised that they become conscious. Does this seem plausible to you? If you are simply dealing with words on a page, symbols written in ink, would adding more of those symbols bring your characters closer to being “alive”? In which case, the difference between you, and a story about you, is simply a matter of the amount of detail involved; the difference between a computer model of a rainstorm, and getting drenched, comes down to how meticulously you describe the raindrops. Which just seems to me to be absolute nonsense.
Of course, Bostrom need not be committed to simulation. His argument might work if he simply stated that, “eventually, technologically mature civilisations will be able to create artificial brains”. You can’t rule this possibility out. Perhaps a civilisation would engineer such a brain from silicon or some other inorganic material, or maybe even construct one from organic molecules from the bottom up. But either way, in doing so they would not have succeeded in “simulating” consciousness; they would actually have created it. In creating life, they would, in effect, have become God-like. I think maybe Bostrom doesn’t rule out this possibility, but it does seem to cut against the current popular trend (as evidenced in Devs, Upload, etc) where the threshold of consciousness is passed simply by supplying “enough information”.
At this point, you might wonder why we should bother with the notion of simulations at all. In fact, some enterprising divinity student could easily adapt the simulation hypothesis for theological ends. For far longer than they have been interested in running “simulations”, humans have longed to create artificial life. If technological advance (broadly construed) allows this one day to happen, then could not the same probability arguments Bostrom employs be used to argue that it is almost certain that human life was created by God-like beings?
[Gareth Southwell is a philosopher, writer and illustrator. The above piece is a development of something first shared via newsletter. If interested, you can follow such philosophical ramblings, get news, writing updates, and a free book, by entering your email on the form on this page, or signing up here.]