See my earlier post on the coding equivalence principle.
In my earlier article I motivated, defined, and gave some reasons for believing in a philosophical coding equivalence principle. Today I will address the profound implications this principle has, discuss the definition of Occam's razor in light of it, and finally consider the ways in which Turing computation may prove to be an inadequate theory of computation with which to formalize it.
If a universe is described by a finitely computable general unified theory, then it is equivalent to the empty universe. (A general unified theory is a physics theory that predicts everything; finitely computable means you could simulate it on a sufficiently powerful computer.) The reason is that the universe's entire history is then computable from a finite description, and a finite description is in turn computable from nothing at all.
This is profound because it says that our universe might as well not exist if physics could ever fully describe it! Of course, denying the possibility of a general unified theory would not imply that physics is useless. Science could still be the best way to arrive at truth; all the denial says is that the work of physics will never end.
Any finite universe is equivalent to the empty universe. This is an extraordinary claim, as current thinking is that our universe is finite in extent. Strictly speaking, however, the rule applies to the full space-time diagram of the universe, which might be infinite in the time direction. If the universe will only last a finite time before ceasing to exist (and certain other currently believed technical conditions hold), then the universe might as well not exist.
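The argument behind both of these claims can be made concrete in a toy sketch. The names and the stand-in "universe" below are mine, purely for illustration: a finite description is computable from the empty description by a constant program that simply outputs it, and the empty description is trivially computable from anything, so the two descriptions are mutually computable and hence equivalent.

```python
# Toy sketch (illustrative names, not from any formal source): under the
# coding equivalence principle, two descriptions describe the same universe
# when each is computable from the other.

FINITE_UNIVERSE = "010110"  # stand-in for a complete finite description


def from_empty(_empty_description: str) -> str:
    """A constant program: computes the finite universe from nothing."""
    return FINITE_UNIVERSE


def to_empty(_any_description: str) -> str:
    """Every description trivially computes the empty description."""
    return ""


# Mutual computability in both directions, so the finite universe falls
# into the same equivalence class as the empty one:
assert from_empty("") == FINITE_UNIVERSE
assert to_empty(FINITE_UNIVERSE) == ""
```

The same collapse applies to a universe generated by a finitely computable general unified theory: the generating program is itself a finite description.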
These two facts might be thought to raise problems for the coding principle. However, we do not know that our universe is finite, and we have no reason to believe that there is a general unified theory. Indeed, I will argue in another blog post that we could never have any evidence for such a proposal.
Fortunately for the coding equivalence principle, it does not imply that all infinite universes are equivalent to the empty universe. You might take this as a good reason to assume that our universe is infinite (at least in potential). Indeed, I hold this view myself.
There are also significant ethical consequences of this coding principle which will be addressed in another blog post.
Let us now consider Occam's razor. If there are multiple codings for the same universe, which one do you use when checking how simple a theory is? I think this question can be resolved satisfactorily, but the details are too mathematical for this blog post.
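For readers who want a taste of why the choice of coding need not be fatal, one standard result in this direction (my pointer, not necessarily the full resolution I have in mind) is the invariance theorem of algorithmic information theory. For any two universal machines $U$ and $V$ there is a constant $c_{U,V}$, independent of the object $x$ being described, such that

```latex
K_U(x) \le K_V(x) + c_{U,V}
```

where $K_U(x)$ is the length of the shortest $U$-program producing $x$. Switching between universal codings therefore changes a theory's measured simplicity by at most an additive constant.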
I say that two descriptions should be taken to describe the same universe if each is computable from the other. But there is a significant technical problem with this: Turing computation is simply not defined for the entities with which we are dealing (abstract sets). I think the answer is to generalize Turing computation to handle this type of data. Unfortunately, multiple ways of doing this have been suggested and no canonical form has been identified. So for this principle to be properly applicable we really need further research into types of hypercomputation. This is a major research interest of mine (mainly for other reasons).
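To see the mutual-computability relation at work in a case Turing computation *can* handle, here is a toy sketch (the example universe and translator names are my own invention): an infinite bit sequence presented under two different codings, each computable from the other by a fixed translator program.

```python
# Toy sketch: a "universe" as an infinite bit sequence, represented as a
# function from index to bit, given under two different codings.

def coding_a(n: int) -> int:
    """Coding A: bit n of the universe (here: 1 at even positions)."""
    return 1 if n % 2 == 0 else 0


def coding_b(n: int) -> int:
    """Coding B: the same universe, stored with every bit flipped."""
    return 1 - coding_a(n)


# Fixed translator programs witnessing mutual computability:
def b_from_a(a, n: int) -> int:
    return 1 - a(n)


def a_from_b(b, n: int) -> int:
    return 1 - b(n)


# The translations agree at every index we check, so the two codings
# describe the same universe under the equivalence principle:
assert all(b_from_a(coding_a, n) == coding_b(n) for n in range(100))
assert all(a_from_b(coding_b, n) == coding_a(n) for n in range(100))
```

Note that this works only because the object is countably presented as a function on the natural numbers; for arbitrary abstract sets no such presentation is available, which is exactly where Turing computation gives out and some form of hypercomputation would be needed.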