I’d really like to see the guys at the Future of Humanity Institute, Foresight Institute, Kaggle, X-Prize Foundation, etc. explain why they believe they get a higher prediction-accuracy ROI (adjusted for risk) than they would by paying out for incremental improvements in the lossless compression of a dataset containing all observations that disputants agree a candidate theory must account for. For that matter, I’d really like to see the guys at the Methuselah Foundation explain why they abandoned the Methuselah Mouse Prize, which paid out for incremental improvements in the longevity of mouse models and offered a similarly favorable risk-adjusted return on investment for longevity research.
Having served on the Hutter Prize judging committee for free for 15 years, I can attest that it is very low stress, precisely because judging is virtually automatic*: the lack of judging discretion means I don’t have to argue with trolls.
What’s not to like?
*The candidate theory is validated by the simple expedient of linking to the executable archive so we can run it and check whether the checksum of its output matches that of the dataset. Are these guys afraid of using computer resources to do judging work after five decades of Moore’s Law?
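The footnoted procedure can be sketched in a few lines. This is an illustrative sketch only, not the actual Hutter Prize tooling: the `judge` function, file names, and the choice of SHA-256 as the checksum are my assumptions here. The core idea is that a submission wins if and only if its self-extracting archive is smaller than the current record and its output is byte-identical to the target dataset.

```python
import hashlib
import os

def judge(archive_path: str, output_path: str,
          expected_sha256: str, record_size: int) -> bool:
    """Hypothetical compression-prize judge.

    A submission qualifies iff:
      1. its executable archive is strictly smaller than the current record, and
      2. the dataset it reproduces has the agreed-upon checksum.
    """
    # Rule 1: the archive (decompressor + compressed data) must beat the record.
    if os.path.getsize(archive_path) >= record_size:
        return False
    # Rule 2: hash the reproduced output in chunks and compare to the
    # checksum of the original dataset.
    h = hashlib.sha256()
    with open(output_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

There is no discretionary step anywhere in this loop, which is the point: a disputed outcome reduces to re-running the archive and re-hashing the bytes.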
PS: I should probably reiterate that I did approach Anders Sandberg about a COVID-19 modeling prize, and he told me that the “minimum description length principle” was the equivalent of the Bayesian Information Criterion, which prompted me to rewrite the Wikipedia article on MDL.