When a Nature article reminds me of civilizational brain damage as portrayed by David Lynch

It’s always nice to see folks taking “causality” seriously enough to at least devote effort to a recommended reading list.

Unfortunately, despite Shalizi having invested a great deal of care and effort in the topic, and despite his site, bactra.org, having numerous entries on algorithmic information, he just doesn’t seem to see the forest for the trees. His numerous references to Judea Pearl on that page are why I reserve my vitriol for Turing Awardees (and other hood ornaments) who have no excuse, as they have misled and are still misleading generations with “forest for the trees” pedantry.

It really is as simple as understanding that any pretense of natural science presupposes calculation, and that is all Ray Solomonoff’s 1960s proof assumed in concluding that you can’t do better than the algorithmic information criterion for model selection* – well, except for what Tom Etter and Pierre Noyes were trying to tell people about their radical approach to “the quantum core”, as I previously posted here:

But that’s a whole other fork in the road, one in which “the arrow of time” is more analogous to the arrow of gravitational force, emergent from a new science that takes process, information and structure as the primary categories rather than time, space and matter. See Outline of a New Science by Tom Etter.
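
For concreteness, here is a minimal sketch of what “algorithmic information criterion as model selection” means in practice. Kolmogorov complexity itself is uncomputable, so this toy (my own illustration, not Solomonoff’s construction) uses a crude two-part codelength as a stand-in: bits to state the model plus bits to encode the data given the model, smallest total wins.

```python
# Minimal sketch: two-part MDL as a computable stand-in for the
# (uncomputable) algorithmic information criterion in model selection.
# Candidate models: polynomials of degree 0..8 fit to noisy data.
#   codelength(model)        ~ bits to state the coefficients
#   codelength(data | model) ~ bits for the residuals under a Gaussian code
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, x.size)   # true degree: 2

def two_part_codelength(x, y, degree, bits_per_param=32):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma = max(residuals.std(), 1e-9)
    # Bits for the model: fixed-precision coefficients (a crude universal code).
    model_bits = bits_per_param * (degree + 1)
    # Bits for the data given the model: Gaussian negative log-likelihood in bits.
    # (The additive constant from discretizing the data is the same for every
    # candidate model, so it cancels in the comparison and is dropped here.)
    nll_nats = 0.5 * np.sum((residuals / sigma) ** 2) \
               + x.size * np.log(sigma * np.sqrt(2 * np.pi))
    data_bits = nll_nats / np.log(2)
    return model_bits + data_bits

best = min(range(9), key=lambda d: two_part_codelength(x, y, d))
print("degree chosen by two-part codelength:", best)   # typically 2
```

The point of the toy is only that the complexity penalty and the fit term fall out of a single quantity, total description length, rather than being bolted together by a significance test.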

* “Model selection” is, of course, only one stage of scientific activity. But it is worth focusing on because you can’t even begin decision-making until you have selected a model upon which to base your decision tree with all its “what if” nodes. All the noise about “causality”, leading to the relatively sophisticated understanding of “p-hacking”, is two levels of intellectual rigor behind Algorithmic Information approximation as causal model selection. We should, long ago, have abandoned attempting to predict things on the basis of Pearl’s DAGs (Directed Acyclic Graphs) and gone to DCGs (Directed CYCLIC Graphs), which immediately and obviously takes you out of the dead end of trying to focus on only one dependent variable at a time, hence “p-hacking”.

By “long ago” I mean at least as far back as when John Tukey’s student, Charlie Smith, found DAGs inadequate for modeling the US energy economy while founding the DoE’s EIA, and then departed to start the second neural network summer of the 1980s at the System Development Foundation, where he financed the guys who were trying to do dynamical systems modeling of nervous systems (i.e., DCGs/RNNs).
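
To make the DAG-versus-DCG (and DCG ≈ RNN) point concrete, here is a toy sketch of my own, not anything from Pearl or from Smith’s EIA work: two variables that each cause the other form a directed cycle, and the only way to predict either one is to roll the whole coupled state forward in time, which is exactly the recurrent / dynamical-systems view.

```python
# Toy sketch of a directed CYCLIC graph: x -> y and y -> x, so the
# dependence structure has a cycle. A DAG can only represent this by
# unrolling it over time, i.e. state(t+1) = f(state(t)) -- the
# recurrent / dynamical-systems picture.
import numpy as np

def step(state, rng, noise_scale=0.05):
    """One tick of a two-variable cyclic system: each variable drives the other."""
    x, y = state
    x_next = 0.8 * x + 0.3 * y + rng.normal(0, noise_scale)   # edge y -> x
    y_next = -0.4 * x + 0.9 * y + rng.normal(0, noise_scale)  # edge x -> y
    return np.array([x_next, y_next])

rng = np.random.default_rng(1)
state = np.array([1.0, 0.0])
trajectory = [state]
for _ in range(200):
    state = step(state, rng)
    trajectory.append(state)
trajectory = np.asarray(trajectory)

# Neither column is "the" dependent variable: each variable's future depends
# on the other's past. Single-response regression (and the p-value chasing
# that goes with it) has no natural place to stand in a graph like this.
print("final state:", trajectory[-1])
```

An RNN is just the learned version of `step`: the same cyclic wiring with the coefficients fit from data, which is why “DCGs/RNNs” above belong in the same breath.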
