The Hutter Prize should replace the Turing test in the machine learning zeitgeist, if not the popular mind. That it hasn’t done so is symptomatic of the “The dog ate my homework!” mentality of the machine learning world specifically, and of the philosophy of natural science, epistemology and ontology generally.
As the guy who originally suggested the compression prize idea to Marcus back in 2005, I should repeat a few pitfalls here that I’ve mentioned before but that bear repeating:
Algorithmic Information Theory is the general field of study arising from Kolmogorov Complexity, upon which Solomonoff Induction, Algorithmic Probability Theory and the Minimum Description Length Principle are founded. Indeed, the latter three are practically synonymous. Algorithmic Information Theory is probably the key phrase entry point for people. Its essence can be distilled down to the idea that a “bit” of information must be considered a bit in a machine-language program: the smallest possible executable archive of a dataset of observations. The shortest such program has a length in bits. That length is the Kolmogorov Complexity of the data, and the program itself comprises the data’s Algorithmic Information. Discovering the Algorithmic Information of a set of observations is Solomonoff Induction.
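The “executable archive” framing can be made concrete with a minimal sketch. The true Kolmogorov Complexity is uncomputable, but any lossless compressor gives a computable *upper bound* on it: the compressed size, plus the constant-size decompressor. The function name below is my own illustration, not from the original text; it uses Python’s standard `zlib`.

```python
import zlib

def complexity_upper_bound_bits(data: bytes) -> int:
    # The zlib-compressed size of `data`, in bits, is an upper bound on
    # its Kolmogorov Complexity -- up to the constant length of the
    # decompressor program, which we ignore here for illustration.
    return 8 * len(zlib.compress(data, 9))

# A highly regular dataset compresses far below its raw size,
# revealing that it contains far less Algorithmic Information
# than its raw bit count suggests.
structured = b"ab" * 500
raw_bits = 8 * len(structured)
bound_bits = complexity_upper_bound_bits(structured)
```

Note the asymmetry this sketch exposes: the bound only ever moves down as better compressors are found, and no finite procedure certifies that it has reached the true minimum.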
Hutter’s AIXI AGI Theory = Sequential Decision Theory ∘ Algorithmic Information Theory
or
AGI = engineering ∘ natural science
or
AGI = ought ∘ is
The process of discovering Algorithmic Information is Solomonoff Induction, and it may be considered the essence of data-driven natural science. This process is subject to the Halting Problem: it is provably unprovable that one has found the smallest of all possible executable archives of a dataset. This is why people say Solomonoff Induction isn’t computable, and it is the origin of the first layer of “The dog ate my homework!” from brats who parade around in the guise of scientists and machine learning experts, commanding literally tens if not hundreds of billions of dollars per year, making civilization-level decisions based on “The Science” as they put their pet models into practice. The laconic question to them is simply this:
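The uncomputability cuts in only one direction, which a brief sketch makes plain: *finding* the smallest archive is beyond any algorithm, but *checking* a candidate archive and keeping the best bound found so far is entirely mechanical. The helper names and the `regenerate()` convention below are my own illustration, not any real framework.

```python
def verify_candidate(source: str, data: bytes) -> bool:
    # A candidate "executable archive" is valid iff running it
    # reproduces the dataset bit-for-bit. Verification is computable
    # even though finding the *minimal* such program is not.
    namespace = {}
    exec(source, namespace)  # the candidate must define regenerate()
    return namespace["regenerate"]() == data

data = b"ab" * 1000

# Trivial archive: store the data verbatim (always available).
best_bound_bits = 8 * len(data)

# A shorter candidate archive, expressed as program text.
candidate = "def regenerate():\n    return b'ab' * 1000\n"
if verify_candidate(candidate, data):
    best_bound_bits = min(best_bound_bits, 8 * len(candidate.encode()))
```

This is an anytime process: each verified, shorter candidate tightens the bound, and the Halting Problem only forbids certifying that the current best is final.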
Is the comparison of two integers computable?
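The question answers itself, and it is the whole judging criterion of a compression prize in miniature: given two self-extracting archives that each reproduce the benchmark data exactly, picking the winner is a single integer comparison of their sizes, untouched by the Halting Problem. The sizes below are placeholders of my own, not actual prize entries.

```python
def winner(size_a_bits: int, size_b_bits: int) -> str:
    # Comparing two integers is trivially computable; this is all the
    # "uncomputable" objection leaves standing once both entries have
    # been verified to reproduce the data exactly.
    return "A" if size_a_bits < size_b_bits else "B"

# Hypothetical entry sizes, for illustration only.
result = winner(900_000_000, 1_100_000_000)
```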