Quasi Pair Programming With ChatGPT

These two videos (very high-speed scroll-by, so be prepared to advance the frames manually) of programming a Pac-Man game illustrate what might be called a quasi pair-programming session with ChatGPT.

This deserves its own thread here at Scanalyst because, as people have long recognized (and as I have recognized since my first pair-programming experiences with remote programmers sharing screens on PLATO circa 1974), at least at the coding stage two heads are better than one in a roughly quadratic sense: the probability of an annoying error getting past both of you is about the square of the probability of it getting past a single programmer. Moreover, as with my PLATO experience, remote pair programming is a further multiplier, since one can choose from a number of people who share one's interests. For example, while programming Spasim at the University of Iowa, I interacted heavily with the guys at Iowa State University and Indiana University, as well as the University of Illinois at Urbana-Champaign, to get up to speed on the programming language and environment, as well as on techniques in game programming. Having an on-call programming partner will expand the reach of this productivity multiplier.
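A back-of-the-envelope sketch of the "squared" claim, assuming each reviewer independently misses a given error with the same probability — an idealization, since in practice two programmers' blind spots are correlated, so the real improvement is somewhat less than quadratic:

```python
# Minimal sketch of the "two heads squared" error model.
# Assumption: each reviewer independently misses a given bug with
# probability p_miss (reviewers' blind spots are treated as uncorrelated).

def escape_probability(p_miss: float, reviewers: int) -> float:
    """Probability that an error slips past every reviewer."""
    return p_miss ** reviewers

p = 0.1  # hypothetical chance one programmer misses a given bug
solo = escape_probability(p, 1)  # 0.1
pair = escape_probability(p, 2)  # ~0.01 -- the quadratic improvement
print(solo, pair)
```

Under this toy model, pairing a 90%-effective reviewer with another 90%-effective reviewer catches about 99% of bugs.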

I say “quasi” because ChatGPT isn’t watching you code, but it acts as a much more efficient means of accessing stackexchange.com and other sources of wisdom from the school of hard knocks, and it appears to mislead far less often than one might have expected given the rate of bad advice on those online programmer-advice sites.

This is a huge deal – a much bigger deal than other language model coding assistants.

PS: It may be wishful thinking to hope the economic value of this kind of assistance may drive an “arms race” between the Big-LLM folks to finally start recognizing they need to break through the statistical information barrier to the algorithmic information regime. If that happens, some of these companies may find themselves suffering an internal cage-match between their watchdog Algorithmic Bias Commissars (imposing kludges to align the LLM behavior with the theocracy) and the profit-seeking Algorithmic Bias Scientists (attempting to remove algorithmic bias so as to capture more network effects from customers that mean business rather than politics).

But don’t count on the latter winning. The Big Boys have way too much to lose if the truth gets out about how damaging to civilization is the centralization of positive network externalities.


A colleague was going to pay the $200 per month just so I could try out the OpenAI Pro version as a coding assistant. And the benchmarks looked good. But then this


$200 per month? How many hours would that consume? Or is it helping out a friend?

The $200/month is what OpenAI charges for their Pro version. I’ve been using ChatGPT o3-mini-high as a coding assistant on my own stuff, and it is just barely useful to me. He keeps trying to get me excited about some YouTube video or other that he runs across from time to time, and, after viewing them, I keep telling him it’s just more hype. The latest buzzword is “agents”, which will supposedly let you have a bunch of simulated “employees” doing stuff for you, and he thought OpenAI Pro would enable that for real. Here’s the latest video I told him was hype:

That’s basically one of those old “program with a graphical UI” apps targeting “workflow”, retrofitted with a workflow item that calls out to OpenAI’s API, permitting them to rebrand as an “Agents” company.

Everyone is so freaked out about “AI” that it’s difficult to keep up with where their heads are at. The latest difficulty I’m running into is that the business people I try to communicate with about this stuff keep bringing up the danger of rogue AIs destroying the world. I try to calm them down and explain that the real problem is rogue AI companies manipulating the masses – companies that don’t even understand what AI is.

That’s why I wrote:
