AI and Psi with Paul Werbos

Werbos and/or Charlles Sinclair Smith can correct me if I’m wrong, but Werbos is perhaps the most important of the keepers of the connectionist flame that led Charlie to finance, via the System Development Foundation, the second connectionist summer. Charlie had been a co-founder of the DoE’s Energy Information Administration under Carter, where he confronted the daunting task of modeling the US energy sector, and he took that job more seriously than most would have given the same responsibility. Although his specialty was statistics, Charlie quickly recognized that dynamics were where any real-world model had to go. That’s what led him to Werbos, because Werbos, unlike many others who were dabbling with backprop ideas, took recurrent neural networks most seriously. Recurrence is where you have to go if you want to semi-automate the scientific discovery of causal laws.

It is my impression that the plagiarism, and hence ignorance, of Werbos’s contributions may stem, in part, from the fact that, at a critical juncture, Charlie advised Werbos to seek civil service employment for the material security it offered a man with children to support. As we now observe, the counties around DC, having the highest median incomes (particularly relative to home ownership) in the US, have been spared the destruction of the middle class, so this advice was prescient.

Anyway, I had to get that bit of history out of the way to explain why I’m well disposed toward Werbos. Even though I don’t think I’ve ever seen Werbos address Algorithmic Information Theory/Solomonoff induction as model selection, let alone for macrosocial models, just getting people to appreciate the difference between statistics and dynamics is such an enormous barrier for most people that I have to count him among the blessed.

But this interview goes beyond algorithmic information, which is value-free in the same way natural science is value-free, because the two are identical. It goes to the issues of value implied by any time-reversed causal structure, i.e., telos, or final causation, or purpose, or utility, or value, or Omega Point, or whatever synonym you want to place on this essential aspect of Being In The World, or Embodied Intelligence.

His claim to have discovered the physics of a hierarchy of noospheres, within which we find and make value, is among the most intriguing of what I call “top-down ToEs”: theories that start with a primordial Mind within which our minds, indeed our entire Beings, are thoughts.

So, yeah, this connection to “AI” is important because everyone is so insistent on skipping past the “IS” of “AGI” (primarily forward-time causality) and going straight to the “OUGHT” (backward-time causality), as though those in power are uniquely in touch with God. If we’re to get out of a Thirty Years War over quasi-religious differences via AGI, this is probably the general area that needs flesh put upon its bones. I still hold out some hope for AIT (the “IS” half of AGI) as a way of waking up The Great and The Good to just how catastrophic the consequences of their social theories are (in a purely forward-time, mechanistic, clockwork-orange sense), despite their belief in themselves as Chosen of Natural Selection, or some such, to rule over us by virtue of their ability to rent-seek. Failing that, Sortocracy may at least provide a peace treaty following such a Thirty Years War. But all of that may be bypassed if one of these top-down ToEs actually delivers on the promise (sometimes only implied) of putting us in touch with telos.

For a while I tried starting a website called “” (for Theory of Everything Rosetta Stone), in which I established some ground rules for MediaWiki formatting conventions to bring various top-down ToEs into some sort of correspondence, so as to rid us of the noxious Babel that is preventing communication.

Unfortunately, Werbos’s ToE will probably not receive the attention it deserves because of this terminological confusion and the lack of appreciation for how damaging such confusion can be. But the same can be said of all the other top-down ToEs.

I’m sure I have some bones to pick with Werbos, but I’ll leave it at that, because whatever errors I believe he may have fallen into are as nothing compared to the fact that he’s at least trying to solve a central problem with science, one made even more critical by AI, and I suspect he’s got more of what it takes to do so than the vast majority of the intelligentsia.