Ideas such as the above (in contrast with “transformers” and their LLM idiot savants) that not only radically increase the effective neuron count per GPU but also raise the level of the machine’s descriptive grammar in the Chomsky hierarchy might succeed in raising the intelligence of AIs to nearly the level of AGIs. It therefore behooves me to put the hysterics about AGI in some perspective.
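To make the Chomsky-hierarchy point concrete, here is a toy sketch (the framing and names are mine, not from the text above): balanced parentheses form a context-free language, so no finite-state recognizer can accept it, but adding even a single unbounded counter – a degenerate stack – suffices. Climbing a level in the hierarchy is a change in kind, not in parameter count.

```python
# Illustrative only: a one-counter (pushdown-style) recognizer for
# balanced parentheses, a language provably beyond any finite-state
# machine. The unbounded counter is what lifts it above regular grammars.

def accepts_balanced(s: str) -> bool:
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:
                return False  # a closer with no matching opener
        else:
            return False      # restrict to the alphabet {(, )}
    return depth == 0

assert accepts_balanced("(()())")
assert not accepts_balanced("(()")
```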
The hysterics about AI are a self-fulfilling prophecy, owing to the motivation of the hysterical: keep people from assortative migration and the protective sequestration of their children away from the parasitical elites – elites driven to hysteria not only by the deprivation of their food source but by the threat that control groups present:
Discovery of what is causing social ills – which, of course, would expose the fact that elites are pathogenic parasites. Oh, but I haven’t yet explained why these pathogenic parasites fear AI so much that they are going into hysterics – though I’ve clearly implied the reason, and I’ve stated it explicitly often enough in the past:
The Algorithmic Information Criterion for the selection of causal models – choose the model that minimizes the total algorithmic information needed to reproduce the observations – is the optimal scientific method for scientists deprived of control groups. So those of us wanting to preserve our children from the parasitical elites are obsessively motivated to advance the state of the art in the AIC, taking advantage of the half-century of exponential growth in transistor counts, so that we can overcome the damage done to the social sciences by our parasitical elites. This, then, will produce AIs of such power that they really will represent a threat.
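A minimal sketch of what AIC-style selection looks like in practice, assuming one approximates the (uncomputable) Kolmogorov complexity with an off-the-shelf compressor – every function name here is illustrative, not a real AIC implementation, and serious work would use stronger program-search methods:

```python
import bz2
import json

# Two-part code: bits to state the model, plus bits to state what the
# model failed to predict (its residuals). Compression stands in, very
# crudely, for algorithmic information.

def description_length(model_source: str, residuals) -> int:
    model_bits = len(bz2.compress(model_source.encode()))
    residual_bits = len(bz2.compress(json.dumps(residuals).encode()))
    return model_bits + residual_bits

def select_model(candidates, data):
    """Among candidate causal models, keep the one whose total
    description of the observations is shortest -- no control
    group required, only the recorded data itself."""
    best = None
    for source, predict in candidates:
        residuals = [y - predict(x) for x, y in data]
        cost = description_length(source, residuals)
        if best is None or cost < best[0]:
            best = (cost, source)
    return best

# Observations generated by y = 2x plus a small periodic perturbation.
data = [(x, 2 * x + (1 if x % 3 == 0 else 0)) for x in range(50)]
candidates = [
    ("lambda x: 0",     lambda x: 0),      # 'no effect' model
    ("lambda x: 2 * x", lambda x: 2 * x),  # the causal model
]
print(select_model(candidates, data))  # the 2x model wins
```

The design choice worth noting is the residual term: a model is not rewarded for brevity alone, but for how little remains to be said about the data after it has spoken – the same two-part-code idea as minimum description length.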
So, really, bottom line: the reason humanity is under threat from AI is that the parasitical elites have put humanity in a position where we must choose: continue to let them feed on our children, or take the chance of true AIs (not these “large language model” idiots) destroying humanity.
Technologies that provide habitat isolation, and thence protective sequestration – such as artificial atoll habitats for the exponential remediation of civilization’s footprint, and thence O’Neill space habitats – may become the only way of preventing what I suppose we should call “AI-goo” (by contrast with Drexler’s “grey-goo”) from consuming everything.