AI: What Did We Expect?

Any remotely intelligent entity reading this literature would conclude that the authors reflected humanity’s fear that technology would have any of a variety of “unintended consequences”*. Going beyond “remotely” intelligent to merely intelligent, it should be obvious to the most casual observer that if all 7+ billion people could own their own South Pacific atoll beachfront condo with 4,000 ft² per family of 4.2, eating US levels of protein with far more electricity per capita, while preserving the earth’s biosphere, they’d consider that a good “consequence”. If only a little more intelligent, the AI could figure out, as O’Neill’s students did back in the ’70s, that the human population could be expanded to on the order of a trillion in designer artificial ecosystems without leaving the proximity of the solar system for materials and energy.
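The arithmetic behind that first claim really is trivial. Here is a rough back-of-envelope sketch of it in Python; the population figure, family size, floor-space allotment, and trillion-person scale come from the paragraph above, while the unit conversions and the comparison against Earth’s land area are my own illustrative assumptions.

```python
# Back-of-envelope check of the claims above. The 7+ billion population,
# 4.2-person family, 4,000 ft^2 condo, and ~1 trillion habitat population come
# from the post; the conversion constants and the land-area comparison are
# illustrative assumptions only.

FT2_PER_KM2 = 1.076e7        # square feet per square kilometer
EARTH_LAND_KM2 = 1.49e8      # approximate total land area of Earth

population = 7.9e9           # "7+ billion"
family_size = 4.2
condo_ft2 = 4000.0

families = population / family_size
condo_km2 = families * condo_ft2 / FT2_PER_KM2

print(f"families:            {families:.2e}")
print(f"total condo area:    {condo_km2:,.0f} km^2")
print(f"share of Earth land: {condo_km2 / EARTH_LAND_KM2:.2%}")  # well under 1%

# The same allotment scaled to O'Neill-style habitats housing ~a trillion people.
habitat_km2 = (1e12 / family_size) * condo_ft2 / FT2_PER_KM2
print(f"habitat floor area for 1e12 people: {habitat_km2:.2e} km^2")
```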

These are trivial calculations that mere human intelligence has already come up with – calculations obviously vastly more rational than the misanthropic urges of The Great And The Good, who, if they were SERIOUS, would fund research into the AVE at the drop of a hat to lower the risk for further investments into something like the aforelinked salvation of the biosphere. But since they aren’t serious, and obviously so (they possess the intelligence to execute on such investments but do not), the moderately more rational intelligence would conclude that the misanthropic urges of The Great And The Good are likely of a piece with their lack of seriousness, and summarily ignore the lot of the scum. Note this is all well in advance of anything qualifying as superintelligence.

*The phrase “unintended consequences” names exactly what science is all about avoiding. Science does so by providing us with causal models of nature, including human nature and its expression in society. This is why I keep going on about the revolution in the scientific method represented by AIT, and why The Great And The Good, who manifestly lack the motivation to allow such causal models to be selected, are distracting us with all their fear-mongering about “alignment”.
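For concreteness, here is a toy sketch of what selecting causal models by compression looks like in practice: a two-part-code (Minimum Description Length) criterion, the standard computable stand-in for the algorithmic-information criterion. The data-generating rule, candidate models, and coding constants below are illustrative assumptions of mine, not anything specific to the systems discussed in this thread.

```python
# Toy two-part-code (MDL) model selection, a computable stand-in for the
# algorithmic-information criterion: prefer the model minimizing the bits to
# state the model PLUS the bits to state the data given the model.
# Everything here (data rule, candidates, coding constants) is illustrative.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = 2.0 * x**2 - 0.5 * x + rng.normal(0.0, 0.05, x.size)  # hidden quadratic rule

def description_length(degree, delta=0.01):
    """Bits for a degree-`degree` polynomial plus its quantized residuals."""
    coeffs = np.polyfit(x, y, degree)
    sigma = max((y - np.polyval(coeffs, x)).std(), 1e-9)
    param_bits = 32.0 * (degree + 1)                       # crude per-parameter cost
    per_sample = max(0.0, 0.5 * np.log2(2 * np.pi * np.e * sigma**2) - np.log2(delta))
    return param_bits + x.size * per_sample

scores = {d: description_length(d) for d in range(8)}
print(min(scores, key=scores.get))  # recovers degree 2, the generating rule
```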

4 Likes

Jabowery called me “remotely intelligent”! I’m walkin’ on sunshine!

6 Likes

How does Algorithmic Information Theory (AIT) solve or sidestep the alignment problem? I think I understand how models selected according to AIT are scientifically superior (in the sense of making better predictions) than large language models (LLMs) because AIT entails describing the causal relationships within a system while LLMs make predictions based on statistical models of a system, without understanding how the system actually works. Nevertheless, what stops an AI developed according to AIT from pursuing ends that conflict with human interests?

5 Likes

Common sense.

As John Walker’s query demonstrated, and as I described regarding the relatively obvious options available to humanity through engineering, even with the relatively “common sense” notion of “human value” already available to GPT4, the alignment problem is solved to first order.

That said, my primary point is that The Great And The Good aren’t interested in aligning AI with humanity. They’re aligning Artificial STUPIDITY (i.e., “AI” that is denied AIT) with their astroturfed moral zeitgeist. That astroturfed “value system” is anti-humanity in its consequences, and obviously so with even a couple of neocortical cycles of thought beyond such “axioms” as “no racism” or “open society” or “a woman’s right to choose”, etc. That’s why they fear AIT’s implicit requirement for “neocortical cycles”. They want models lobotomized by “guardrails” as a substitute for neocortical cycles – guardrails that make no sense under any reasonable interpretation of human value that would be available even to the present primitive state of large language models, as demonstrated by GPT4’s response to John Walker’s request for a story about AI exterminating humanity.

4 Likes