The simulation hypothesis is mostly associated with Nick Bostrom and his paper Are You Living in a Computer Simulation? Bostrom argues that we likely are living in a simulation, and Elon Musk agrees with him. Frankly, I think it is unlikely we are living in a simulation in the way Bostrom means it, but at any rate, it is impossible to prove or know and, as far as I can tell, would make no practical difference. If reality is a simulation, then being in a simulation and not being in one become, for all practical purposes, the same. There is, however, a different way from Bostrom’s that we might be living in a simulation. This way could account for the occasional unreality of things most of us sometimes experience. It could even account, at a deeper level, for why Bostrom thought of arguing that we are living in a simulation.
Xerxes D. Arsiwalla, a physicist in Spain, was the lead author of the paper Are Brains Computers, Emulators or Simulators? In it, he draws a contrast between the brain as a computer and the brain as a simulator. If the brain is a computer, he argues, then “all cognitive processes can be described by algorithms running on a universal Turing machine”. This implies that consciousness is computational. If, on the other hand, consciousness is non-computational, then it would be based on what he terms “non-classical logic”. He goes on to state:
Machines implementing non-classical logic might be better suited for simulation rather than computation (a la Turing). It is thus reasonable to pit simulation as an alternative to computation and ask whether the brain, rather than computing, is simulating a model of the world in order to make predictions and guide behavior. If so, this suggests a hardware supporting dynamics more akin to a quantum many-body field theory.
The paper goes on to discuss the limitations of the computationalist view. Arsiwalla cites the Turing halting problem and the Penrose tiling problem, which cannot be solved by computation. He then offers a third example of a non-computable problem: “the collapse of the wave-function or the measurement problem in quantum physics, which evades an algorithmic description”. Not mentioned here is another class of problem: one that could be solved computationally in principle, but that requires so many computational resources that it cannot be solved in any given amount of time.
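To make this last class concrete, here is a minimal sketch (my illustration, not the paper’s) using brute-force traveling-salesman search, a standard example of a problem that is computable in principle but intractable in practice: the number of tours that must be checked grows factorially with the number of cities.

```python
import math

def brute_force_cost(n):
    """Number of distinct tours a brute-force traveling-salesman
    search must examine for n cities: (n - 1)! / 2."""
    return math.factorial(n - 1) // 2

# The count explodes: solvable in principle, hopeless in practice.
for n in (5, 10, 20):
    print(n, brute_force_cost(n))
```

At 5 cities there are only 12 tours; well before 30 cities the count exceeds the number of atoms in the observable universe, which is the sense in which such problems are “unsolvable in any given amount of time.”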
An emulator “can be defined as any machine that can be used to specify dynamical states transitions of another system”. Computers can perform emulations; a computer emulation, however, is subject to the limits of computation. Emulators can also be what the paper terms “dynamical systems-based simulations”, which are not computational. The paper describes the difference between the two:
The difference of say computing an explicit solution of a differential equation in order to determine the trajectory of a system in phase space versus mechanistically mimicking the given vector field of the equation within which an entity denoting the system is simply allowed to evolve thereby reconstructing its trajectory in phase space. The former involves explicit computational operations, whereas the latter simply mimics the dynamics of the system being simulated on a customized hardware. For complex problems involving a large number of variables and/or model uncertainty, the cost of inference by computation may scale very fast, whereas simulations generating outcomes of models or counterfactual models may be far more efficient.
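The quoted contrast can be loosely illustrated in code, with the obvious caveat that any digital program is still a computation; the paper’s claim is about dedicated hardware. Still, a toy sketch of the decay equation dx/dt = -kx (my example, not the paper’s) shows the conceptual difference between evaluating an explicit solution and simply letting a state follow the local vector field:

```python
import math

k = 0.5     # decay rate
x0 = 1.0    # initial state
dt = 0.001  # time step
T = 2.0     # total time

# "Computation": evaluate the explicit closed-form solution.
def explicit_solution(t):
    return x0 * math.exp(-k * t)

# "Simulation": never write down a formula for the trajectory;
# just let the state evolve under the vector field, step by step.
def evolve():
    x = x0
    for _ in range(round(T / dt)):
        x += dt * (-k * x)  # follow the local vector field
    return x

print(explicit_solution(T))  # trajectory endpoint by formula
print(evolve())              # same endpoint, reconstructed dynamically
```

The two numbers agree to about three decimal places; the difference is in how they were obtained, which is the distinction the paper is drawing.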
We finally reach the key argument of the paper. Brains are not computers. They are simulators.
Beyond this example of the motor system, if the brain is indeed tasked with estimating the dynamics of a complex world filled with uncertainties, including hidden psychological states of other agents… then in order to act and achieve its goals, relying on pure computational inference would arguably be extremely costly and slow, whereas implementing simulations of world models as described above, on its cellular and molecular hardware would be a more viable alternative. These simulation engines are customized during the process of learning and development to acquire models of the world. The simulated dynamics of these models lead to predictions as well as counterfactual hypotheses, which can then be passed through feedback control loops to correct for prediction errors. Note that these dynamics-based simulations differ from computer simulations. In the former, no specific function is being computed. Instead, as in control engineering, a model of the process is encoded (or learnt) in the network’s connectivity and is used to generate subsequent state transitions. More complex models require more complex network architectures and multi-scale biophysical dynamics, rather than heavy computational algorithms, which is presumably not what we see the brain to be designed for.
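The feedback-control picture in this passage can be sketched as a toy loop (entirely my illustration, with made-up names such as true_gain and model_gain): an internal model predicts the next state, the prediction error is fed back, and the model is corrected until its predictions match the world.

```python
# A toy predictor-corrector loop, in the spirit of the paper's
# control-engineering analogy. The "world" has dynamics the agent
# does not know; the agent's internal model is gradually tuned
# by its own prediction errors.

true_gain = 0.8    # the world's actual dynamics: s' = true_gain * s + action
model_gain = 0.2   # the agent's (initially wrong) internal model
lr = 0.1           # learning rate for model correction

s = 1.0
for step in range(500):
    action = 0.1
    predicted = model_gain * s + action  # run the internal simulation
    actual = true_gain * s + action      # the world responds
    error = actual - predicted           # prediction error
    model_gain += lr * error * s         # correct the model (delta rule)
    s = actual

print(model_gain)  # converges toward true_gain
```

The point of the sketch is that the model is encoded in a parameter and updated by feedback, rather than derived by explicitly computing the world’s equations, which loosely mirrors the paper’s claim that a model “is encoded (or learnt) in the network’s connectivity”.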
This explains much about the evolutionary origin of consciousness. Compared with actual computers, brains and nervous systems must make do with a relatively small energy budget and a relatively slow processing speed. In simple organisms those limitations may not be fatal. But the evolution of greater adaptive capability, the integration of more sensory data, and the development of a broader repertoire of behaviors would eventually hit a computational barrier. The brain could not compute quickly enough to provide a selection advantage if it relied solely on a computational approach. The evolutionary response would be the development of a simulation running on top of a computational base. Unsurprisingly, our consciousness occasionally feels exactly like a simulation, although for the most part we think the simulation is real.