The Foundation World Model That Might Have Been

That’s the point of the Controvert stage, where:

ADUC (All Data Under Consideration) is curated by conflicting schools of thought. The purpose is to show how adversarial world views can be brought into a formal scientific dialogue by including each other’s data in ADUC. Outcome: advocates must algorithmically define their criticisms of adversarial data, and thereby “clean” and “unbias” the data brought by their adversaries, in order to better compress ADUC.

The blizzard of d/misinformation about AIC makes it difficult to deal with more than one problem at a time, so I was unable to discuss as deeply as I might have liked the subjective nature of “data selection” and how the contest addresses it.

I did bring this up in the form of a “conjecture” to the Algorithmic Information Theory mailing list:

[AIT] Conjecture: Bias Meta-Measurement Accuracy Increases With Increasing Diversity of Measurement Instruments

The motivation for the title’s conjecture arises from the increasing public concern, and confusion, over the definition of “bias” in large language models.

I’ve looked but have been unable to discover any work in the field of “algorithmic bias” that applies Algorithmic Information Theory to the identification of “bias”, in the scientific sense*, let alone its meta-measurement, given a bit string of passive measurements.

How would one go about doing a literature search for prior scholarship on this conjecture? How would one phrase the conjecture in the language of AIT?

*The majority of the concern over “algorithmic bias” in large language models refers to unrestricted curation of text corpora, resulting in models that reflect not only a “biased” utility function in, say, an AIXI agent’s use of Sequential Decision Theory, but also, and even more critically, “bias” in terms of the accuracy of the resulting algorithmic model of reality, yielding inaccurate predictions. Leaving behind the utility-function notion of “bias” (SDT) and focusing on the scientific notion of “bias” (AIT), one can easily recognize how the scientific community detects bias in its measurement instruments with highly redundant cross-checks: not just between measurement instruments of the same phenomena, but also cross-discipline checks for consistency via unified theories.

An extreme but simple example would be a world in which all thermometers were manufactured by the same company that, for some perverse reason, reported 101 °C at sea level for the boiling point of water but reported the normal Celsius temperature for all other measurements. Cross-disciplinary checks with other kinds of measurements would result in a minimum algorithmic description that reified the identity of the thermometer company, latent in the data, as having a particular measurement bias, and would quantify that bias as +1 °C (for boiling water) so that thermometer measurements in general could be optimally predicted.
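The thermometer example can be sketched in a few lines. This is a minimal, illustrative MDL-style toy, not the contest’s actual scoring: the bit costs (32 bits per parameter or nonzero residual, 1 bit per zero residual) are assumptions chosen to keep the arithmetic obvious. A model that spends one parameter on the latent brand bias (+1 °C at boiling water) ends up with zero residuals and a shorter total description than the bias-free model.

```python
# Illustrative MDL-style sketch: cross-checking one "brand" of thermometer
# against an independent reference reveals a latent +1 degC bias at the
# boiling point of water. Bit costs below are assumed for illustration.

def description_length_bits(residuals, model_params):
    # Crude MDL accounting: 32 bits per model parameter, plus 1 bit per
    # zero residual and 32 bits per nonzero residual (sent verbatim).
    data_bits = sum(1 if r == 0 else 32 for r in residuals)
    return 32 * model_params + data_bits

# Reference instrument (assumed unbiased) vs. the suspect brand:
# 20 boiling-water readings plus 20 readings at other temperatures.
true_temps = [100.0] * 20 + [25.0, 37.0, 0.0, 60.0] * 5
brand_read = [t + 1.0 if t == 100.0 else t for t in true_temps]

# Model 0: brand thermometer is unbiased (0 extra parameters).
res0 = [b - t for b, t in zip(brand_read, true_temps)]

# Model 1: brand has a +1 degC offset at boiling water (1 parameter).
res1 = [b - (t + 1.0 if t == 100.0 else t)
        for b, t in zip(brand_read, true_temps)]

bits0 = description_length_bits(res0, model_params=0)
bits1 = description_length_bits(res1, model_params=1)
print(bits0, bits1)  # -> 660 72: the biased-brand model compresses better
```

The 32-bits-per-parameter charge is what keeps this honest: the bias hypothesis only wins because it eliminates twenty 32-bit residuals at the cost of one parameter, which is the compression-based sense in which the bias is “real” rather than overfit.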

I’ve addressed this before here at scanalyst and elsewhere multiple times. At least Sisyphus tried to cheat death!

I really feel like the people in power don’t WANT science anymore, which drives me nuts when the Future of Life Institute tries to deal with “power concentration” yet can’t be admonished to take advantage of the great banquet of science set before them with the AIC, to do precisely what they want!

$4 million in grant money is far from “concentrated power” of course, so I do try to keep my id in check when the rock rolls down the mountain.
