Is dark matter the caloric of our time?

I think it's great that Wolfram's approach has no free parameters and is not only fitting GR but is contributing to the algorithms used by physicists to model gravity. The thing about "free parameters" is really just saying something more like "This isn't a class of theories, it is a particular theory." Whenever you introduce a parameter, you are really just stating a constraint on the class of theories implied by that parameter.

Regarding the "caloric" aphorism, I was really rather disappointed that what Wolfram was actually saying with it amounts to:

"The ruliad will turn out to be the theory of everything, including dark matter."

He didn't provide any specific argument for this explanation of dark matter in the video – it is pure conjecture. For instance, MOND has been dispatched by gravitational-lensing observations of clouds of dark matter, and he didn't address this. This "aphorism", it seems to me, is on the same order of rhetorical posturing as his repetitive resort to "computational irreducibility", which also seems to me to be something as trivial as "there is no closed-form solution to the vast majority of dynamical systems equations". Oh? We didn't know that to be the case ever since Newton/Leibniz, Stephen?

Having said that, it seems pretty obvious to me that whatever dark matter turns out to be, it had damn well better play the greatest part in alpha_G_proton. This is something everyone seems to be missing for some strange reason. They think they can get away with not talking about the proton when talking about dark matter. What's up with that? I mean, it isn't as though alpha_G_proton isn't a well-known parameter, and it isn't as though its relationship to 2^127 isn't also known to be even more remarkable than Dirac's or Eddington's conjectures.
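For readers unfamiliar with the coincidence: alpha_G_proton is the gravitational coupling constant for the proton, G m_p^2 / (ħ c), and its reciprocal lands strikingly close to 2^127. This is a back-of-envelope check with standard CODATA values; the script is nothing but arithmetic:

```python
# Back-of-envelope check of the alpha_G_proton ~ 2^-127 coincidence.
# Constants are CODATA values in SI units.
G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # reduced Planck constant, J s
c    = 2.99792458e8     # speed of light, m/s
m_p  = 1.67262192e-27   # proton mass, kg

# Gravitational coupling constant for the proton.
alpha_G = G * m_p**2 / (hbar * c)

print(f"alpha_G_proton        = {alpha_G:.4e}")   # ~ 5.9e-39
print(f"1 / alpha_G_proton    = {1 / alpha_G:.4e}")
print(f"2^127                 = {2**127:.4e}")
print(f"2^127 * alpha_G       = {2**127 * alpha_G:.4f}")
```

The product 2^127 * alpha_G comes out within about half a percent of 1, which is the numerical content of the remark above.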

Whatâ€™s wrong with these physicists?

My understanding of Wolfram's conjecture about dark matter is that it is the gravitational effect of the underlying computation (of his multi-way graph of "atoms of space") which causes what we observe as continuous spacetime to emerge. This computation (occurring at an unknown scale, but necessarily much smaller than the Planck length) is what creates the persistent structures we observe as particles and the evolution of those structures that we interpret as time. Energy, in his scheme, is just the density of the computation in a given region of the graph (or, coarse-grained, volume of space).

So, one might imagine that in regions which have concentrations of particles, there might be more intense computation going on to create the structures that compose them, and that this computation gives rise to the additional gravitational effects we attribute to dark matter.

As far as I know, none of the details of this have been worked out. At the moment it is all arm-waving, but then much the same could have been said of explaining heat by the motion of unobservable tiny molecules before we were able to directly detect them.

Maybe I should have been more specific in my critique. What really disappointed me was Wolfram's failure to mention observations that decouple visible matter from dark matter – the most difficult of which to explain away is the Bullet Cluster.

Such observations eliminate entire classes of dark matter theories including any that rely on proximity to light matter.

Jack Sarfatti (@jacksarfatti) just brought a 2011 paper by Gerard 't Hooft (Nobel Prize in Physics, 1999) to my attention, "How a wave function can collapse without violating Schroedinger's equation, and how to understand Born's rule" (full text at link). Here is the abstract.

It is often claimed that the collapse of the wave function and Born's rule to interpret the square of the norm as a probability, have to be introduced as separate axioms in quantum mechanics besides the Schroedinger equation. Here we show that this is not true in certain models where quantum behavior can be attributed to underlying deterministic equations. It is argued that indeed the apparent spontaneous collapse of wave functions and Born's rule are features that strongly point towards determinism underlying quantum mechanics.

The key idea is that what we observe as probabilistic behaviour in quantum mechanics, and the apparent mystery of the "collapse of the wave function" by a measurement, may actually be coarse-grained behaviour of deterministic (classical) "sub-microscopic states" which underlie the microscopic states (particles and fields) we measure.

The degrees of freedom in terms of which we usually describe atoms, molecules, subatomic particles and their fields will be referred to as microscopic degrees of freedom. It is these that have to be described as superpositions of the sub-microscopic states, and in turn, the macroscopic states are superpositions of microscopically defined states. Perhaps the most accurate way to describe the situation is to say that the states we use to describe atoms, quantum fields, etc., serve as templates. A particle in the state |x\rangle or in the state |p\rangle or whatever, will nearly always be a superposition of many of the sub-microscopic states; as such, they evolve exactly according to Schrödinger equations. In contrast, the sub-microscopic states evolve classically. The macroscopic states also evolve classically, but the details of their evolution laws are far too complicated to follow, which is what we need the microscopic template states for. The macroscopic states, such as people and the indicators of measuring devices, are probabilistic distributions of the sub-microscopic "hidden variables". We actually only observe macro-states.
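To make the counting idea concrete, here is a deliberately crude toy of my own devising (not from the paper): "sub-microscopic" ontic states evolve by a fixed deterministic permutation, a "template" is a uniform superposition over a subset of them, a measurement reads only a coarse-grained macro label, and the Born weight of an outcome is just the fraction of the subset carrying that label:

```python
import random
from collections import Counter

# Toy model (my construction, only loosely in the spirit of 't Hooft's
# paper): N ontic states evolve deterministically; "quantum" probability
# emerges from counting which ontic states map to which macro outcome.

N = 12
evolve = lambda i, steps: (i + steps) % N   # deterministic ontic dynamics
label  = lambda i: i % 3                    # coarse-graining: 3 macro outcomes

S = {0, 1, 2, 3}                            # template: uniform superposition over S
t = 5                                       # evolution steps before measuring

# The template evolves deterministically along with the ontic states.
S_t = {evolve(i, t) for i in S}

# "Born rule" by counting ontic states in the evolved template.
born = {k: sum(1 for i in S_t if label(i) == k) / len(S_t) for k in range(3)}

# Frequentist check: nature picked some ontic state in S; we don't know
# which, so we sample uniformly, evolve it, and read the macro label.
random.seed(0)
trials = 100_000
counts = Counter()
pool = sorted(S)
for _ in range(trials):
    i = evolve(random.choice(pool), t)
    counts[label(i)] += 1

for k in range(3):
    print(f"outcome {k}: Born weight {born[k]:.3f}  measured {counts[k] / trials:.3f}")
```

The measured frequencies match the counting-based "Born" weights because the probability here is entirely ignorance of which ontic state actually occurred, while every individual history is deterministic.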

When you think about how this might actually work, it is remarkably similar to the picture that's emerging from the Wolfram Physics Project: a structure below that of the fields and particles we observe which behaves deterministically, but when aggregated into our observables appears to be probabilistic, just as a gas made up of many molecules undergoing deterministic, reversible collisions manifests statistical and irreversible behavior in the aggregate.
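The gas analogy can be made concrete with the Kac ring model, a standard textbook toy (not something from Wolfram or 't Hooft) in which the micro-dynamics is deterministic and exactly reversible, yet the macroscopic color balance relaxes irreversibly toward equilibrium. A minimal sketch:

```python
import random

# Kac ring: N colored balls (+1/-1) on a ring; each step every ball moves
# one site clockwise and flips color when it crosses a marked edge. The
# marker pattern is fixed, so the dynamics is deterministic and exactly
# invertible -- yet the net color decays toward 0 roughly like (1 - 2*mu)**t.
random.seed(1)
N = 10_000
mu = 0.1                                            # fraction of marked edges
marked = [random.random() < mu for _ in range(N)]   # fixed marker pattern

def step(b):
    """Ball at site i crosses edge i on its way to site i+1."""
    new = [0] * N
    for i in range(N):
        new[(i + 1) % N] = -b[i] if marked[i] else b[i]
    return new

def unstep(b):
    """Exact inverse of step: move back one site and un-flip."""
    old = [0] * N
    for i in range(N):
        v = b[(i + 1) % N]
        old[i] = -v if marked[i] else v
    return old

init = [1] * N                                      # all one color: far from equilibrium
state, ms = init[:], {}
for t in range(21):
    ms[t] = sum(state) / N                          # macroscopic "net color"
    state = step(state)

for t in (0, 1, 5, 10, 20):
    print(f"t={t:2d}  net color = {ms[t]:+.3f}")

# Reversibility: undo all 21 steps and recover the initial state exactly.
for _ in range(21):
    state = unstep(state)
print("recovered initial state:", state == init)
```

The macro-variable relaxes as if irreversibly, yet running the exact inverse map recovers the initial micro-state perfectly, which is the point of the analogy.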

But what about entanglement and Bell's theorem? Didn't that rule out any such local, realistic hidden-variable theory? Well, yes, but 't Hooft notes that the underlying (sub-microscopic) structure may well be correlated over spacelike intervals.

Spacelike correlations can of course be understood if systems have a common distant past; therefore, by itself, the finding that there must be strong spacelike entanglement constitutes no contradiction, but it seems quite amazing. What must be kept in mind is that, in our cellular automaton models, the state commonly referred to as the vacuum is by no means featureless; there are vacuum fluctuations, and this means that the vacuum is a highly complex solution of local field equations. Thus, the vacuum may show all sorts of correlations with matter, even over large (spacelike) distances.

This is essentially the same as Wolfram's distant connections in the graph being observed as entanglement.