Let’s first establish that this entire post is speculation. My only claim is that this is a fascinating concept that is a ton of fun to think about!

For the sake of simplicity, I will use the word computer to describe any mechanism used to run a simulation. This will therefore include everything from a desktop computer that we’d be familiar with, to the mind of a super-intelligent life form. If details of the mechanism are relevant, I’ll clarify at that time.

**The Simulation Argument**

The concept of the simulation is now in the realm of pop science. It’s the idea that this entire universe could be inside a computer. This idea is made more compelling by how close we seem to be to reaching the ability to create such simulations ourselves.

The most famous argument supporting this idea was presented by Nick Bostrom in 2003. Here is an excerpt from the abstract:

at least one of the following propositions is true:

(1) the human species is very likely to go extinct before reaching a “posthuman” stage;

(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);

(3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.

Or, in other words, the only realistic ways humanity will *not* create simulations are that we either go extinct before we’re able or we choose not to. I think that is fair. With that in mind, it seems much more likely that we’ll create simulations than not. The conclusion is that if we can create them, probably in vast quantities, and the simulated beings are capable of making their own simulations, and those simulated beings can make simulations, and so on, it seems very unlikely that we’re at the beginning of this apparently infinite series of simulations.

**One Obvious Weak Spot**

No matter what the makeup of the computer, it will require matter and energy to run. There is a finite supply of those in this universe. To simulate our universe exactly as it is down to the subatomic level would take more matter and energy than we actually have in our universe. For example, if we use 1 bit of information to represent a single electron in the simulation, that bit requires more than 1 electron to actually store and process. With this fact in place, we have to realize that we will have to cut a lot of corners in the universes we simulate.
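To put rough numbers on that intuition, here’s a toy sketch (the particle count is a common order-of-magnitude estimate, and the storage overhead factor is an invented assumption):

```python
# Illustrative only: if storing one bit costs more than one real particle,
# a bit-per-particle copy of the parent universe can never fit inside it.

particles_in_universe = 10**80  # rough order-of-magnitude estimate
particles_per_bit = 2           # assumed storage overhead (> 1)

bits_available = particles_in_universe // particles_per_bit
particles_simulatable = bits_available  # at one bit per simulated particle

print(particles_simulatable < particles_in_universe)  # True: corners must be cut
```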

One solution is to make the simulated universe smaller than ours, either in space (physically smaller) or in time (runs for a shorter period). The smaller the universe, the more likely we’d be to even be capable of simulating it.

Another, and more commonly accepted solution, is to only process the information necessary dependent on the purpose of the simulation. For instance, if we want to create a simulation to study an animal’s behavior in a specific environment, we wouldn’t need to process anything that wouldn’t be directly relevant. If the animals would never come in contact with a specific variety of tree, there’s no reason to have the tree in the simulation. If the animals can’t comprehend space, stars, planets, etc, then we can fake all those details.

This “fudging the details” trick is actually used in current video games. Techniques such as frustum culling and occlusion culling mean that only the part of the world you can see gets rendered. Everything behind the camera, off to the side, or hidden behind other objects effectively ceases to exist as soon as you turn away.

That quickly and easily gives us back our ability to create a simulation.

Both of these cases make *infinitely* nested simulations impossible. Each child simulation will have at least slightly fewer resources than its parent to work with to create its own simulations. This means that eventually there would be a child simulation without the resources needed to create any simulations of its own. This is relevant because now we are working with a finite depth of simulations, and that increases the odds of us being at the top level.
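That bottoming-out can be sketched with toy numbers (the resource fraction and minimum here are invented purely for illustration):

```python
# Each child simulation runs on some fraction of its parent's resources,
# so the chain of nested simulations has a finite depth.

def nesting_depth(resources: float, child_fraction: float = 0.5,
                  minimum: float = 1.0) -> int:
    """Count nesting levels before resources drop below the minimum
    needed to run any simulation at all."""
    depth = 0
    while resources * child_fraction >= minimum:
        resources *= child_fraction
        depth += 1
    return depth

# A universe with a million resource "units", each child getting half:
print(nesting_depth(1_000_000))  # 19 levels, then the chain stops
```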

Even then, we have the potential to make millions and millions of these simulations. Even if we’re not literally working with infinity, the number is so staggeringly huge, that it feels almost impossible for us to not be inside one. What other option do we have?

**The Unknown Variables**

Since we’re talking about what created our universe, we literally don’t know. It’s even possible that it is unknowable. We’re trying to weigh probabilities based on one possibility. You can’t calculate odds until you actually have the full set of possibilities to compare. Otherwise, we’re talking hunches or educated guesses at best. It may turn out that these hunches are dead on. We may find out they weren’t even close. It does at least seem inevitable that simulations will exist, because it seems inevitable that we will create them. Other than that, who knows?

**Mechanisms for a Digital World**

If we go forward with the premise that simulations are possible and are contained within a computer, we are forced to go a little deeper to examine what process in the computer is relevant for a simulation to exist.

Consider a simulation running on a desktop computer. If nobody observes the events inside the simulation from outside (via a computer monitor, for example), did the simulation happen? The monitor is simply a way of observing the process. Therefore one would logically conclude that the simulation is happening either way. That means that **there is no requirement for us to observe the output of a simulation for us to create one**.

What is happening inside the actual computer itself? Quite simply, electrons are shuffled around. That’s how all electronics work. One electron bumps into the next, sending a signal to the other end to perform a specific task. Even inside a computer, where the available paths for the movement of these electrons are carefully chosen, the shuffling of these electrons is meaningless without a way to interpret their movement.

There’s nothing special about how electrons move through a transistor. When a transistor is switched on, the electrons start to flow. When off, they stop. Almost every part of a computer works on principles very similar to this.

If it’s the behavior of the electron that is relevant to a digital world, then the computer itself doesn’t matter. An electron moving outside a computer in a way identical to an electron inside a computer is just as valid. Therefore the logical conclusion appears to be that the requirement for an electron’s movement to create a simulation is that **the path of the electron has the potential to be interpreted as a simulation**. If that is the case, then no actual computer of any kind is even necessary. In fact, no human interaction is necessary at all. We break free of the intent requirement of an intelligently designed simulation into natural digital worlds (NDWs).

The next step is the inevitable conclusion that the electron itself doesn’t matter. It’s simply the movement. It’s the *potential* to interpret that movement as a digital world that matters. An NDW could be run using photons or protons or hydrogen atoms or mosquitoes or fish or planets or stars. Any moving thing or collection of moving things could be used.

Time is another interesting factor of simulations. Again, let’s go back to desktop computers. The first personal computers were incredibly slow compared to what we have now. Let’s imagine that we tried to play a full HD movie on such a machine, and it managed to play the video at a rate of one frame per minute. Watching it from our point of view is so slow, it’s practically meaningless. Now imagine you lived in that movie as it was playing. To you, there wouldn’t be a gap between the “slices of time” (frames). Everything would appear perfectly normal. This same concept can be applied to the speed at which a digital world is generated. A single particle lazily meandering through space for billions of years could easily be responsible for centuries of time in an NDW.

The opposite is therefore also true. A collection of incredibly fast moving particles could be responsible for creating years in a digital world in mere seconds in their own.
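The exchange rate between external and internal time is arbitrary, which a little arithmetic makes clear (the rates below are invented for illustration):

```python
# External time says nothing about internal time: multiply the rate at
# which the substrate produces "steps" by the internal time each step
# represents, and any ratio is possible.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def internal_years(external_years: float, steps_per_second: float,
                   internal_seconds_per_step: float) -> float:
    steps = external_years * SECONDS_PER_YEAR * steps_per_second
    return steps * internal_seconds_per_step / SECONDS_PER_YEAR

# Slow substrate: one step per external minute, each worth one internal
# second -- a billion external years still yields millions of internal years.
print(internal_years(1e9, 1 / 60, 1.0))

# Fast substrate: a billion steps per external second -- ten external
# years yield ten billion internal years.
print(internal_years(10, 1e9, 1.0))
```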

Is motion even a prerequisite? If it’s the potential of interpretation that is relevant, then one could use sources other than movement. Form, for instance. The earliest computers used punch cards as a way of storing and computing. The computer was more of an interpreter of what the cards contained. In that case, it was the physical shape of the cards that carried the potential, not the movement.

If form and movement work, is there anything that doesn’t? Color, taste, temperature, acidity, placement, orientation, sound, and mass would all work. In fact, anything that possesses variable qualities has the potential we’re after.

This all suggests that a single particle could be involved in countless other overlapping NDWs within the exact same space. It’s like books on a shelf where the books are all sharing some pages with each other.

Let’s imagine that the fewest particles required for a digital world is 1 million. Let’s say we are also limiting ourselves to interpreting the movement of those particles into an NDW. The exact same movement of those exact same 1 million particles can still be interpreted in a myriad of ways, with each way being a completely distinct and equally real NDW.

An NDW can be generated by as little as a single string in string theory to as much as the entire observable universe.

If we circle back to an intelligently designed simulation inside a computer of some kind, we can now see that there would be countless other naturally occurring digital worlds running in the exact same space using the exact same matter and energy.

**Nesting Worlds**

Inside NDWs that resulted in intelligent life, it’s certainly possible that such life would evolve intellectually much like our own, to the point of creating simulations. Even if the NDW had no big bang, and it began with intelligent life already in full swing, those beings would be up against the same enormous volume of NDWs in their universe.

Likewise, an intentionally created simulation in our universe is still going to possess a vast number of NDWs inside it. Not only is the matter that is used to create the simulation used to simultaneously create several NDWs, the simulation itself is *producing* them.

**Probability of Life in Natural Digital Worlds vs Simulations**

As I pointed out above, working with probabilities without having all the necessary data is quite speculative. We may find that life is an inevitability in all universes. Perhaps we will find that life is nearly impossible and an incredible rarity. If all we are relying on is intuition and hunches, then we aren’t really talking about probability. With that in mind, there are a few interesting things to think about that would play a factor in the probability calculations.

Not all simulations would have life. Even now we run simulations where life isn’t even relevant to what we’re studying. This establishes the fact that neither simulations nor NDWs have a 100% chance of containing life.

It is literally impossible for the number of simulations to exceed the number of NDWs. This is demonstrated by the fact that simulations will occupy the same space as an enormous number of NDWs.

With the sheer volume of NDWs, even if only the absolute tiniest fraction of all NDWs resulted in some form of life, that alone would be more than humanity could ever make. Even if humanity could somehow keep pace with the universe, the universe already has a 14 billion year head start at this.

**Conclusion**

Given that nearly any combination of time, space, energy, and matter, and any combination of their measurable properties, has the same potential to be interpreted as a digital world as the electrons moving inside a computer, the number of digital worlds must be incomprehensibly huge. The minute fraction of those that contain intelligent life would be just as incomprehensible. No matter how fast humanity creates simulations, it is physically impossible to get ahead. Humanity will never come even remotely close to the number of NDWs.

With all this in mind, returning to the simulation argument and following the premise that digital worlds are either the only or at least the most common sources of existence (debatable in itself), it is far more reasonable to conclude that we are not in an intelligently engineered simulation, but are in fact in a naturally occurring digital world that was spawned from the random properties of elements inside our parent universe.

If every time a NDW produces a technologically advanced civilization (“TAC”), you get a million more simulated TACs, then anyone who finds themselves to be in a TAC has only a million to one chance of being in one that was not intentionally simulated.
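The arithmetic behind that claim is straightforward (the million-to-one ratio is the hypothetical stated above, not a measured quantity):

```python
from fractions import Fraction

# One naturally arising TAC spawns a million simulated TACs (hypothetical).
organic = 1
simulated = 1_000_000

# Chance that a randomly chosen TAC is the unsimulated one:
p_organic = Fraction(organic, organic + simulated)
print(p_organic)  # 1/1000001
```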

I don’t see anything in your reasoning that counters this.

Also, re “the unknown”, whether we are able to create full VR simulations within 30 years is knowable. Whether we will actually create many of them by then is knowable. All we need to do is wait 30 years and see. Thus the hypothesis that we will have and use that capability within 30 years (or however many years one wishes to specify) is a falsifiable one. It is a genuine scientifically valid hypothesis. No one knows the outcome of a scientific experiment before it is run. So what? Run it and see.

As pointed out, there is no context in which the simulations can outnumber the NDWs. In the context of our universe, the NDWs will outnumber all simulations. In the context of any NDW, that will still be the case. In the context of intentional simulations, it is also the case. There simply will always be more unintentional NDWs than intentional simulations. Every argument for there to be more NDWs than simulations in this universe applies to all other digital worlds too.

Yes, we will know whether or not we *can* create simulations. I’m not debating that. It’s unimaginable to me that we would have that ability and not use it to produce an incredible number of them. None of that is relevant to probability. We don’t know all the alternatives, and it’s perfectly possible that we can never know.

For instance, perhaps somehow contained within every photon is an organic universe. What would that do to your probability calculations? Perhaps outside our universe, the concepts we understand about time and space don’t even apply, and our universe was created via a mechanism that simply doesn’t care about the rules of our universe. I mean, we could create simulations with similar rules if we wanted. The creation of our universe could still very well appear to be “magic” to us. There are all these variables that are simply unavailable for the calculation of the probability of the simulation argument.

Conversely, consider a dice roll. We can actually figure out the probability of a specific result because we know all the possible results. We need to know all possibilities before we can accurately evaluate probability. Even if we don’t know what the other results are, as long as we generally know how many there are and how frequent they are, we could actually evaluate the probability of being in a simulation. Otherwise, as indicated in “The Unknown Variables”, we’re just guessing.
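The contrast can be made concrete with a toy sketch; the point is that the die has a known denominator while the simulation question does not:

```python
from fractions import Fraction

# A die: the complete outcome set is known, so a probability is well defined.
die = [1, 2, 3, 4, 5, 6]
p_six = Fraction(die.count(6), len(die))
print(p_six)  # 1/6

# "Are we in a simulation?": the set of ways a universe can arise is
# unknown, so there is no denominator to put under the count.
```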

This might make the flawed reasoning more clear: “Because this type of universe is the only one I know of, it must be the most common.”

I’ve renamed that section “The Unknown Variables” in the hopes of helping to clear this up.

Let’s assume then that we wait 30 years and discover that we can and do produce multiple simulated TACs. So we reasonably assume that all or nearly all TACs produce let’s say 1 million simulated TACs. It doesn’t matter one iota whether non TAC producing universes can arise. We can even stop talking about universes and just talk about TACs (naturally occurring or simulated).

If, then, every TAC arising naturally produces 1 million designer TACs then the probability of us being in a designer TAC is a million to one in favour. Just as I have a 2-1 probability in my favour if, on a random number generator, I win with natural numbers and you win with odd natural numbers.

You keep overlooking the fact that for every designer TAC produced at any level of the simulation, there are countless organic TACs being produced.

So if we’re calculating the odds of a *digital world* being a simulation (so we limit the data set to only the digital worlds, which *is* something we can start working on, since we are working with a set of data instead of a single point), the designer TACs can never exceed the NDWs. There is never a point where one can make more simulations out of the available materials than nature. Any materials used by any computer to produce a simulation are also unintentionally producing countless NDWs. It doesn’t matter if the parent is a simulation or an NDW. This rule applies at every single level.

You must realize the strength of my argument since you refuse to address it. You can have all the many millions more NDWs you want (none of which can exist inside a photon (whatever that even means) by the way).

My argument is as stated and remains unaddressed.

I did address it directly, and even in my original article. I’m also discarding all NDWs that do not produce intelligent life in my numbers.

You’re still evading the argument that all that matters is TACs. Not universes, not life-sustaining universes, not humanity. For every TAC popping into existence there are a million designer models.

Tell me how you get more organic TACs than intentionally created TACs and we’ll be getting somewhere.

I’m not evading anything. It’s the same thing. There is no quality *at all* in any number of designer universes that will be more frequent than in organic ones. If every single designer simulation contained a TAC, and 1 in 10^1000 organic worlds contained a TAC, the organics still win.

You seem to still not quite get the insane number of ways matter can be organized and interpreted.

If you take only 52 points on a line, the number of ways you can arrange those points is 52! ≈ 8×10^67. That’s an 8 with 67 zeroes after it, and that’s from only 52 points.
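That figure is just 52 factorial, which is easy to verify (Python’s arbitrary-precision integers compute it exactly):

```python
import math

# Orderings of 52 distinguishable points -- the same count as the
# shuffles of a deck of cards.
arrangements = math.factorial(52)
print(f"{arrangements:.3e}")  # 8.066e+67
```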

A designer TAC will only be evaluating those points in one way. In nature, if the material can even *possibly* be evaluated as a TAC, it *is* a TAC.

It doesn’t matter how many universes there are. So showing that there are gazillions doesn’t matter. I get your point that there are so many possible organic universes that even if a tiny fraction of them produce TACs there will still be zillions of organic TACs. My point is that for every single one of those zillions of organic TACs there will be a million designer TACs. That’s a million zillion designer TACs.

Produce as many TACs as you want organically. Their numbers will still be swamped by the designer TACs that each of them produces.

Each organic TAC is a factory producing designer TACs. And that doesn’t even account for the fact that each of these designer TACs is itself a factory producing still more designer TACs, down as far as compressed resources permit.

So let me ask you to confine your next reply to answering this single question. Where is the flaw in the following argument? For every organically arising TAC there will be a million designer TACs. Therefore, the odds of being in a designer TAC are a million to one.

Every digital world, simulated or not, will produce more organic TACs than designed TACs. That’s what’s missing. The logic that applies to one level applies to all.

So the implicit assumption in my argument, with which you take issue, is that designer TAC universes ONLY produce designer TAC universes and not also many more organic TACs.

First, these TAC-containing designer universes may be designed such that organic TAC-containing universes cannot arise. It may be explicitly to prevent them or it may be a by-product of something like the need for compression.

Second, even if designer TAC-containing universes permitted organic TAC-containing universes, they can’t be as easily and plentifully produced as you suggest. You can’t get a universe inside of a photon.

These aren’t separate arguments. In an organic universe it may be that, for example, black holes are the manifestation of another universe. With the fine tuning necessary for a universe to give rise to a TAC, it may take all the universes represented by all the black holes in one organic universe to produce even one TAC. Even one may require a lot of luck.

You might get a TAC-containing universe from a black hole, but you aren’t going to get one from someone’s shoe, or from a photon, or from a piece of wood, or a tree, or a forest. There’s not enough matter, energy or level of organization.

“designer TAC universes ONLY produce designer TAC universes and not also many more organic TACs”

Another way to look at it:

Forget all randomly occurring TACs. They are inevitable, but we don’t even need to count those innumerable populated existences in order to determine that designed TACs will always be less common than unintentional/natural ones.

Now take a designed TAC, whether created in our own universe or a child universe, it doesn’t matter. Let’s say it’s designed so literally nothing exists except for one super intelligent being, and the absolute minimum amount of matter and energy needed to create a computer that produces a child TAC. That computer in that simulation is interpreting the materials in a specific way to produce a simulation. For instance, the movement of an electron. All we need to do is point to a single electron and say, if that electron is interpreted in a different way, the features of the simulation are different. If a single piece of the mechanism for a simulation has the potential to cause the interpretation of the material to be a separate digital world, then it is impossible to design a TAC without simultaneously and unintentionally creating many more accidental and random TACs.

Regarding the idea of a universe inside a photon, it was more to illustrate the point that there are still regions of science and technology that may exist but that we don’t know about, and therefore can’t include in our calculations of probability. I’m not at all claiming there is literally a physical universe inside a photon.

“You might get a TAC-containing universe from a black hole, but you aren’t going to get one from someone’s shoe, or from a photon, or from a piece of wood, or a tree, or a forest. There’s not enough matter, energy or level of organization.”

This is true only for *designed* TACs, and even then, only because of our current limitations on computing.

When I point out that there are a myriad of ways to interpret matter and energy to result in a simulation, I don’t mean that only one method of interpretation is usable at a time. For instance, current computers almost exclusively evaluate the movement of an electron in an “on/off” sense. The position, speed, vibration, quantity, etc. play little to no role in our current computers.

NDWs can evaluate any *combination* of qualities. This drastically reduces the amount of matter or energy needed to produce a single digital world, because the same matter can be evaluated in multiple ways at the same time.

Current computers:

Electrons not moving = 0

Electrons moving = 1

Other potential ways to place value on electrons:

Electrons moving along the x axis = 2

Electrons moving along the y axis = 3

Electrons moving along the z axis = 4

Electrons colliding with other electrons = 5

Electrons passing into a different medium = 6

etc.

And this is still just electrons! We don’t have to limit the calculations involved in a single NDW to a single material. The potential for a NDW could be spread out over many materials, each evaluated in different ways.
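A toy sketch of that multi-interpretation idea, echoing the value scheme listed above (the event records and both reading functions are entirely invented for illustration):

```python
# The same stream of particle "events", read under two different
# interpretation schemes, yields two different symbol sequences --
# two different tapes from one physical process.

events = [
    {"moving": True,  "axis": "x",  "collided": False},
    {"moving": False, "axis": None, "collided": False},
    {"moving": True,  "axis": "y",  "collided": True},
]

def binary_reading(event):
    # Current computers: only on/off matters.
    return 1 if event["moving"] else 0

def rich_reading(event):
    # An alternative scheme: axes and collisions carry value too.
    if event["collided"]:
        return 5
    return {"x": 2, "y": 3, "z": 4, None: 0}[event["axis"]]

print([binary_reading(e) for e in events])  # [1, 0, 1]
print([rich_reading(e) for e in events])    # [2, 0, 5]
```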

A single photon *could* have the potential to be interpreted as a digital world (not a physical one). Its movement, collisions, wavelength, speed, etc. could all be part of the calculation. The digital world it produces may be short-lived, but it could still include a TAC. Don’t forget that there’s no requirement for these digital worlds to have a big bang or evolution. It could just start with the TAC fully developed. Also remember that time outside a digital world is not relevant to inside it and vice-versa. The photon may take centuries to produce enough data for a decade in a digital world, but that doesn’t matter. It may even be the opposite (a decade of a photon bouncing around produces a century inside the digital world), but I doubt it.

Can I try to reframe your objection to the simulation argument so I can see if I get it right? Or at least see if this is consistent with your objection?

1. Our method of perceiving the world defines our world. There is no absolute reality. A being that had only a human-level sense of smell would not perceive spruce trees in a forest but only what we would describe as a generalized, amorphous blob of spruce-smelliness.

2. Taking that further, maybe rather than a spruce tree, other beings would perceive something more akin to our concept of a rock, or maybe a camera, or maybe a technically advanced civilization (“TAC”).

3. Thus every item in our world could, to purported, very dissimilar beings, contain or consist of one or more TACs.

4. Therefore, for every actual designed TAC, there will be an enormous number of actual undesigned TACs.

That’s a very intriguing line of thought. I’ll have to give it some thought. Your points seem reasonable, but it’s a bit different from what I’m getting at.

My idea hinges on the idea that the *potential* mathematical interpretation of matter/energy is no different from the actual mathematical interpretation. So you don’t actually have to do the math for the result of the equation to “exist”.

A digital world in general is really just interpreting matter and energy in a mathematical way. Whether we actually *do* the math or not is irrelevant. As an oversimplified example of my point, if a tree grows 10 apples, the counting of them doesn’t change the fact that there are 10 apples. The math doesn’t have to be calculated by an observer for it to be reflected in reality.

So *any* mathematical evaluation of *any* combination of matter/energy and their properties would be valid. Countless numbers of these calculations would result in digital worlds containing technologically advanced civilizations. Any simulations we run are limited to our own mathematical interpretation of the matter/energy inside whatever computer is being used.

I don’t feel this is an objection to the simulation argument so much as an addendum. Bostrom’s points defending the existence of simulations are still just as legitimate as always. The only issue is in the conclusion that we must be in one of those, since it didn’t take into account naturally occurring digital worlds.

1. “…if a tree grows 10 apples, the counting of them doesn’t change the fact that there are 10 apples. The math doesn’t have to be calculated by an observer for it to be reflected in reality.”

This is absolutist. As per my post above, it not only requires an observer to give the concept of ‘apple’ any meaning, it requires an observer to give the concept of ’10’ any meaning. Everything, every particle of everything, is in a state of superposition until it is observed. Without an external observer all you have is ‘potential’, not ‘actual’. You no more have 10 apples without an observer than you have 457 clowns.

2. “My idea hinges around the idea that the potential mathematical interpretation of matter/energy is no different from the actual mathematical interpretation.”

A ‘potential interpretation’ is as unreal as it gets. It allows for anything and everything and does not describe the actual experience of actual beings.

I think your argument only supports the conclusion that within every designed TAC there exists countless ‘potential’ undesigned TACs, but whereas the designed TACs are ‘actual’, perceived as real, existing, by the designer and the designed, the potential TACs are unreal, imaginary – figments imagined to occupy the imaginary thoughts of the imaginary citizens of these TACs.

That these potential TACs are capable of being perceived, by beings we can imagine as having the right modes of perception, doesn’t make them any more real than said imagined beings.

This is why I stated that it was an oversimplified example. Even the classical interpretation of quantum mechanics states that there are *probabilities* of an electron existing at a specific point. It doesn’t mean the electron can appear anywhere it wants. It means there are certain possible locations, and some are more likely than others. If anything, it furthers my side by saying that the uncollapsed wave function allows for a much wider range of possible mathematical outcomes than a collapsed one.

For what you’re claiming to be correct, the *uncollapsed* wave function itself must *also* not exist without someone doing the calculation. This is actually contrary to what experiments show. It may be worth keeping in mind that there is still some debate on whether this particular aspect of quantum mechanics is literally valid, or simply a solution that appears to fit with experiments so far.

It seems as though you are starting with the premise that an intelligent creator is required for a digital world, and therefore all digital worlds have an intelligent creator. It’s circular.

I feel this needs to be turned around to see how you legitimize the calculations inside a computer while disregarding calculations outside. If I took a piece of paper and wrote out the calculations that would result in a digital world, and those calculations were evaluating the position, orientation, color, motion, and all other conceivable properties of leaves on a tree, your reasoning would suggest that *that* would be a legitimate digital world, but only because I actually did the math. If I stopped before the last digit was written, then there would be no digital world, because the math wasn’t completed by an intelligent observer.

I say the math is there to be calculated, and our actual calculation of it is not relevant.

Oh, and by “observer” I leave open the possibility of the observer not being sentient – as in the possibility of environmental decoherence. But there still needs to be a thing outside of the thing in question to interact with the first and collapse its wave function.

Now I’m confused as to what you’re claiming. For your original argument to be true, only an intelligent observer’s active calculations involving the properties of matter/energy are relevant. But this now says that the sentience of the “observer” is irrelevant. Essentially, the particles just need to have interacted with other particles. If no intelligent observer is required, then I don’t know what we’re arguing about. That’s been my claim this whole time.