Demis Hassabis on Artificial Intelligence

The Methuselah Mouse Prize was my original inspiration for proposing an incremental prize to Marcus Hutter for a test more rigorous than Turing’s, based on lossless compression. When I went looking for the current status of The Methuselah Mouse Prize, I discovered the Methuselah Foundation has terminated it. The explanation they gave struck me as rather lame. The beauty of prizes with very objective judging criteria, such as the MPrize and the Hutter Prize, is that they attract people who might otherwise suspect that subjective judging criteria will devolve into some sort of social status seeking. Moreover, both the MPrize and the Hutter Prize were designed to let those with few material resources compete on a level playing field with the big kids.
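To illustrate what “incremental” means here, a minimal sketch of an objectively judged payout rule, loosely patterned on the Hutter Prize’s proportional-improvement scheme (the function name, fund size, and byte counts below are illustrative assumptions, not the prize’s published terms):

```python
def incremental_payout(prize_fund: float, record_size: int, new_size: int) -> float:
    """Pay out the fraction of the fund equal to the relative improvement
    in total size (decompressor + compressed data). No judges, just bytes."""
    if new_size >= record_size:
        return 0.0  # no improvement over the standing record, no payout
    return prize_fund * (record_size - new_size) / record_size

# Illustrative numbers only: a 2% size reduction claims 2% of the fund.
print(incremental_payout(50_000.0, 1_000_000, 980_000))  # -> 1000.0
```

Because the criterion is a byte count, anyone with a laptop can verify a claim; no panel of judges is needed.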

1 Like

I came across this write-up by Malcolm Dean that may be of interest to others:

Ah yes, the Crown Jewel of Rational Materialistic Science, the Absolutely Unquestionable, the Undeniable Dogma, the Explanation for Everything—Natural Selection—seeks yet another explanatory cure from Thermodynamics. This time, it’s Algorithmic Information Theory to the rescue. But wait! Problem solved? Not so fast.

Catarina Dutilh Novaes (2007:1) observes that in Medieval times, logic played the role that mathematics now plays in science. Offering a hypothesis with a mathematical basis gives it a Platonic blessing, a near-religious status devoutly sought by faithful Darwinians.

In Proving Darwin: Making Biology Mathematical (2012), Chaitin quotes his 2007 self: “In my opinion, if Darwin’s theory is as simple, fundamental and basic as its adherents believe, then there ought to be an equally fundamental mathematical theory about this, that expresses these ideas with the generality, precision and degree of abstractness that we are accustomed to demand in pure mathematics.” —Gregory Chaitin, “Speculations on Biology, Information and Complexity,” EATCS Bulletin, February 2007. This is followed by a quote from Jacob Schwartz:

“Mathematics is able to deal successfully only with the simplest of situations, more precisely, with a complex situation only to the extent that rare good fortune makes this complex situation hinge upon a few dominant simple factors. Beyond the well-traversed path, mathematics loses its bearings in a jungle of unnamed special functions and impenetrable combinatorial particularities. Thus, the mathematical technique can only reach far if it starts from a point close to the simple essentials of a problem which has simple essentials. That form of wisdom which is the opposite of single-mindedness, the ability to keep many threads in hand, to draw for an argument from many disparate sources, is quite foreign to mathematics.” —Jacob T. Schwartz, The Pernicious Influence of Mathematics on Science (1960), in Discrete Thoughts: Essays on Mathematics, Science, and Philosophy, edited by Mark Kac, Gian-Carlo Rota and Jacob T. Schwartz, 1992

In 1975, Chaitin admitted that “Although randomness can be precisely defined and can even be measured, a given number cannot be proved to be random. This enigma establishes a limit to what is possible in mathematics.” —Chaitin, G. J. (1975). Randomness and Mathematical Proof. Scientific American, 232(5), 47–53.

The first paper on Algorithmic Information Theory was probably Chaitin (1977), in which Chaitin credits the idea to Solomonoff (Minsky 1962). Chaitin (1977) defines Algorithmic Information Theory as “an attempt to apply information-theoretic and probabilistic ideas to recursive function theory.” [ Minsky, M. L. (1962:35-46). Problems of formulation for artificial intelligence. In Proceedings of a Symposium on Mathematical Problems in Biology. ]

Johnston (2022) argues that “random mutations, when decoded by the process of development, preferentially produce phenotypes with shorter algorithmic descriptions.” It’s another incarnation of the Darwinian dilemma: given a 19th-century hypothesis based on animal husbandry, how can we arrive at the manifold beauty of evolution without invoking a universal causality (that is, something divine)? Relying on the quasi-divine justification of mathematics avoids this collision.

Johnston’s second problem is to explain how “Symmetry and simplicity spontaneously emerge” from the “nature of evolution.” This nature is defined as “algorithmic,” so that this gesture of faith brings the blessing of “preferentially produced phenotypes with shorter algorithmic descriptions.” This is explained as the arrival-of-the-frequent bias (Schaper S, Louis AA, 2014): “Many biological systems, beyond the examples we provided, may favor simplicity and, where relevant, high symmetry, without requiring selective advantages for these features.”

Algorithmic Information Theory (AIT), it seems, has a Platonic tendency rooted in its basic metaphor, an ideal computer. Kohtaro Tadaki attempts to solve this difficulty by providing a statistical mechanics of AIT. Instead of abstract computer logic, the problem shifts to the mathematics of physical transformation. That is, Thermodynamics.

In these emails/working notes, we have explored three main approaches that emphasize the fundamental nature of physical transformations:

Ulanowicz’s ecological approach, Bejan’s Constructal Law, and Lerner’s Information Macrodynamics (IMD), extended by my plain-language Cognitive Thermodynamics and the Borromean model of Information processes.

The IMD formalism begins with pure randomness, out of which Kolmogorov’s statistical regularities lead to distinctions, interactions, and eventually persistent structures. First Quantum, then Classical Physics emerges in this It from Bit cosmogony, which shows how biology, intelligence, and Observers are produced by Information processes.

2 Likes

I’ll have to get around to checking out how Lerner manages to arrive at an “arrow of time” in the emergence of Classical Physics from Quantum Physics without begging the question of the “Platonic tendency,” not just of AIT but of telos, “final cause,” or “purpose,” or, to use AIXI’s factorization of AGI, Sequential Decision Theory’s utility function.
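For readers unfamiliar with the reference: AIXI factors AGI into Solomonoff induction (a universal prior over programs) plus sequential decision theory (expectimax over a reward signal). Schematically, in roughly Hutter’s notation (reproduced from memory, so treat the exact indexing as approximate):

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

The inner sum over programs $q$, weighted by $2^{-\ell(q)}$, is the induction half; the outer expectimax over future rewards is the decision-theoretic half, i.e., the utility function doing the work of “telos.”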

I’m all for questioning the “mechanistic” idealization of computation, but only insofar as one is addressing oneself to questions beyond induction of mechanistic causation in the natural sciences. So long as the social pseudosciences hold sway over the West’s quasi-theocracy, with their unprincipled critiques of “hate-statistics” that “don’t imply causation” in contrast to “love-statistics” that do imply causation, even when there are no experimental controls and the sample size is one, drawn from some human-interest op-ed in the New York Times that triggers Mom-swarm stigmergy overriding all reason and law and may end in nuclear holocaust, I have little interest in arguing over these philosophical details. Let’s at least get on with the revolution in the natural sciences represented by AIT by offering up incremental prizes for lossless compression of a wide range of longitudinal measures relevant to macrosocial models.

2 Likes

Can this guy explain why, in the year 2022, when I am watching college football on Saturday via my Firestick on Sling, the bloody thing has to rebuffer every so often?!! If these guys want me to surrender my one agency to a bunch of stupid Robots primed for “Zee Forss Indusreal Revoluzions,” then the least they could do is make my football watching perfect in every way. Bunch of technocratic boobs.

One of the better videos on the dead-end represented by present “interpolative” neural net models:

My response:

Algorithmic Information Approximation is the ideal approach to generalization, but everyone seems to be missing the key to its practical exploitation, including the only major AI lab founded on the principles of Algorithmic Information: DeepMind. Think about AlphaGo as applied to mathematical proofs: there are certain “moves” in formal space that are “legal,” and there are certain outcomes that are desirable, such as simplification without loss of generality. It should be possible to learn to do “lookahead” and learn the value of various “moves” in formal space to become an expert mathematician.

OK, now let’s take the next step from formal mathematics (which is incomplete, as per Gödel) to derivation of algorithms. There are certain program transformations that preserve functionality, so a similar approach to program transformation into desirable forms should be feasible.

Now let’s take this one more step, to Algorithmic Information Theory’s “loss function,” which is simply the size of the program that outputs all the data in evidence without loss and without regard to any other utility: the work of the theoretical scientist as it should be formalized in the age of Moore’s Law. After all, this is what Solomonoff proved: as soon as you adopt the primary meta-assumption of scientific theory, that you can make predictions using arithmetic according to some notion of calculation, you have adopted a UTM model of some sort and are then bound by Solomonoff Induction’s proof that the smallest executable archive of all your data in evidence is the most probable model that can be derived from the data alone: data-driven or evidence-based science.
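To make that “loss function” concrete, here is a minimal sketch in Python. The names run_program and ait_loss are mine, and raw source length stands in for program size; true Kolmogorov complexity is uncomputable, so any practical proxy is an assumption:

```python
import io
from contextlib import redirect_stdout

def run_program(source: str) -> str:
    """Execute a candidate Python program and capture what it prints."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        exec(source, {})  # illustration only: no sandboxing or time limits
    return buf.getvalue()

def ait_loss(source: str, evidence: str) -> float:
    """AIT-style loss: the length of the program, admissible only if it
    reproduces the evidence exactly (losslessly)."""
    if run_program(source) != evidence:
        return float("inf")  # lossy candidates are disqualified outright
    return float(len(source.encode("utf-8")))
```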

Now, for the coup de grâce:

Let’s start with a very trivial program with a horrendous “loss”: the entire database placed between quotes in a print statement. You’ve just brought all the evidence into the algorithmic realm, but it’s a really lousy, brain-dead algorithm, the epitome of “overfitting” and thence utterly worthless for extrapolation, right? Oh, but we’re not done yet! We can start to make “legal moves” in program space, with the same loss function as the evaluation of “positions” in our algorithmic “Go” universe!
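Continuing the sketch above (same hypothetical ait_loss), the opening “position” and one semantics-preserving “move” might look like this; the rewrite is hand-picked purely for illustration:

```python
evidence = "ABABABABABAB"

# The opening "position": the entire database quoted in a print statement.
trivial = f"print({evidence!r}, end='')"

# One "legal move": a semantics-preserving rewrite exploiting repetition.
candidate = "print('AB' * 6, end='')"

for source in (trivial, candidate):
    # Both reproduce the evidence exactly; the shorter one scores better.
    print(f"loss={ait_loss(source, evidence):g}  {source}")
```

A real search would enumerate many such transformations and learn value estimates for them, AlphaGo-style, rather than relying on a hand-picked rewrite.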

Why hasn’t anyone done this? More to the point: Why hasn’t DeepMind done this?

3 Likes

That disappeared from the YouTube comments. It’s not as though it was obscene or “unsafe” or anything else. Perhaps it was a bit critical, but it was far from without substance. Is it the channel’s policy? Is it YouTube’s policy? If the editorial policy of that channel is so sensitive to even mild criticism that it will delete relatively high-effort comments, they must not be very serious about feedback. If it is YouTube, then they must have a policy that punishes their content providers. VERY strange!

3 Likes

“It gives an answer with complete confidence, and I sort of believe it. And half the time, it’s completely wrong.”
—Rodney Brooks, Robust.AI

“What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.”
—Rodney Brooks, Robust.AI

“One of the standard processes has four teraflops—four million million floating point operations a second on a piece of silicon that costs 5 bucks. It’s just mind-blowing, the amount of computation.”
—Rodney Brooks, Robust.AI

4 Likes