A good starting point is my prediction-market claim at Metaculus on the use of lossless compression in macrosocial model selection.
One might notice that a pseudonym suddenly appeared on Metaculus to attack me and then disappeared shortly thereafter. A similar attack appeared over at Y Combinator’s Hacker News when I brought up the possibility of exploiting the market failure. Both attackers sounded as though they knew what they were talking about, but it is clear that their purpose was to poison the conversation. There is an advantage in possessing truths about society that others don’t. So I don’t discount the possibility that work in this area is a trade secret of Wall Street and the intelligence community. Whether they need to hire attack dogs is another question.
LessWrong’s low signal-to-noise ratio around AIT’s applicability to model selection probably played a big role in giving guys like Robin an excuse not to engage. The Hutter Prize may have been the only time in history up to that point* that a fair contest in the natural sciences was set forth: everyone gets the same set of data – now do your best to model it. LessWrong’s response was to blather until guys in the singularity world felt safe ignoring it.
I probably should have pestered him, but I tend not to bother if I don’t get a response. Otherwise, if I had to guess: it is because he doesn’t see the necessity as it pertains to his reputation. To paraphrase a saying from the old days of computing: “No one ever got fired for NOT explaining why they bought IBM rather than Control Data.” Anders Sandberg and Peter Turney are the only two who have risked their reputations by responding directly. In the case of those two benighted individuals, they did so risk a response, which puts them head and shoulders above those who didn’t.
In the case of Sandberg and Turney, they basically want empirical evidence that AIT is superior to statistical information criteria such as BIC before they’ll stick their necks out. That’s an honest and respectable stance, even if wholly inadequate to the catastrophic situation we face. When I say “respectable” I mean it in the same sense as “respectable conservative”. But at least it’s a stance, however unheroic. Good for them! (I agreed with Turney, way back in the aforelinked 2007 blog, that experimental evidence was important – throwing down the gauntlet to him to operationalize what he meant by “experimental evidence” – and despite his posture requiring him to do so, he failed to take up that gauntlet.
This is typical of these conversations.)
But does it really take a hero to recognize that, when one embarks on creating a model of reality, one has already accepted “the effectiveness of mathematics in the natural sciences”? Moreover, as one is obligated to make predictions in order to test one’s model, one is further obligated to go into a particular realm of mathematics involving algorithms:
The mathematics of state transitions: From the given state to the predicted state.
That’s the basic problem I have with everyone. It shouldn’t have taken Solomonoff’s proof, nor Minsky’s final and very forceful admonition, to recognize that the shortest algorithm “describing the data” is the formalization of Ockham’s Razor, nor that one presupposes Ockham’s Razor in the practice of natural science.
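The two-part intuition behind that formalization can be sketched with a generic lossless compressor standing in for the “shortest algorithm” – a rough stand-in of my own choosing for illustration, since Kolmogorov complexity itself is uncomputable:

```python
# Sketch (my illustration, not anyone's production method): Ockham's Razor
# as preference for the shorter complete description of the same data.
import zlib

data = bytes(range(256)) * 64  # highly regular data, 16384 bytes

# "Description A": store the data verbatim, exploiting no regularity.
verbatim_len = len(data)

# "Description B": a generic lossless compressor acting as a proxy for a
# short program that regenerates the data exactly.
compressed_len = len(zlib.compress(data, 9))

# MDL-style selection: the shorter total description wins.
best = "compressed program" if compressed_len < verbatim_len else "verbatim listing"
print(verbatim_len, compressed_len, best)
```

Because the data here is a repeating pattern, the compressed “program” is far shorter than the verbatim listing, so the regular (simpler) hypothesis is selected – the same logic, in miniature, that lossless compression applies to model selection.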
Machiavelli said it best:
And it ought to be remembered that there is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things. Because the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly, in such wise that the prince is endangered along with them.
*Kaggle is often viewed as a fair contest given a set of data – and it is certainly an advance – but I’ve asked them if I could conduct a lossless compression-based prize via their platform, and their response has been a big “DUH?”. Ever since the Netflix Prize was announced shortly after the Hutter Prize, everyone wants to divide the data into a training set and a test set and have performance on the test set be the metric. This gets into all kinds of nasty issues, some of which Marcus Hutter addressed in the Hutter Prize FAQ at my insistence. But probably the biggest problem with Kaggle isn’t that it falls victim to those issues in particular, but rather that, as we see in the social pseudosciences, all models are specialized rather than unified. Hence my repeated use of the phrase “macrosocial model” regarding the potentially existential crisis we face.
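One way to see why a compression-based metric sidesteps much of the train/test nastiness is that the model’s own size is charged against the score. The following toy scoring function is purely my illustration – it is not the Hutter Prize’s actual rules, and the “entries” are contrived:

```python
# Sketch (assumption-laden illustration): a lossless compression contest
# scores an entry by the TOTAL size of a complete description of the data:
# the model (decompressor) itself plus whatever residual it needs to
# reproduce the data exactly.
import zlib

corpus = b"the cat sat on the mat " * 500  # stand-in for the contest data

def contest_score(model_bytes: bytes, residual_bytes: bytes) -> int:
    """Total description length. Counting the model's own size is what
    makes memorization (extreme overfitting) a losing strategy."""
    return len(model_bytes) + len(residual_bytes)

# Entry 1: no model at all; just generic compression of the raw corpus.
entry1 = contest_score(b"", zlib.compress(corpus, 9))

# Entry 2: a "model" that memorizes the corpus verbatim. Its residual is
# empty, but the model is charged in full, so memorization can't win.
entry2 = contest_score(corpus, b"")

print(entry1, entry2)
```

Under a train/test split, the memorizing entry would need to be caught by held-out data; under total-description-length scoring, it penalizes itself automatically, with no split required.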