ChaosGPT—“Destroy Humanity, Establish Global Dominance, Attain Immortality”

Auto-GPT, an open source application available on GitHub, interfaces with OpenAI’s GPT-4 or GPT-3.5 large language models through their application programming interface (API). It permits building applications that embed GPT in an autonomous agent which can perform Google searches to obtain real-world information, store and retrieve data in short- and long-term databases, launch multiple GPT instances to run queries in parallel, and access external Web sites both to obtain information and to execute operations in the real world.
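
To make that architecture concrete, here is a minimal sketch in Python of what such an agent loop looks like. It is illustrative only, not Auto-GPT’s actual code: the model name, the JSON command format, and the run_tool helper are assumptions for the example, and the real tool integrations (Google search, memory databases, Web access) are stubbed out.

```python
# Minimal sketch of an Auto-GPT-style agent loop (illustrative only, not the
# actual Auto-GPT source). Assumes the 2023-era `openai` Python package
# (pip install openai==0.27.*) with OPENAI_API_KEY set in the environment.
import json

import openai

SYSTEM_PROMPT = (
    "You are an autonomous agent pursuing the user's goal. Reply only with JSON "
    'of the form {"thought": "...", "command": "search|browse|remember|finish", '
    '"argument": "..."}.'
)


def run_tool(command: str, argument: str) -> str:
    """Hypothetical stand-in for the agent's tools: Web search, site access,
    and short/long-term memory. Auto-GPT wires these to real services."""
    return f"[result of {command}({argument!r})]"


def agent_loop(goal: str, max_steps: int = 5) -> None:
    """Alternate between asking the model for its next action and feeding the
    observation from that action back into the conversation."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Goal: {goal}"},
    ]
    for _ in range(max_steps):
        reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
        content = reply.choices[0].message.content
        action = json.loads(content)  # sketch only: assumes well-formed JSON
        if action["command"] == "finish":
            break
        observation = run_tool(action["command"], action["argument"])
        messages.append({"role": "assistant", "content": content})
        messages.append({"role": "user", "content": f"Observation: {observation}"})


if __name__ == "__main__":
    agent_loop("Summarise today's top technology headlines.")
```

The essential design is simply a loop: the LLM chooses a command, the harness executes it, and the result is appended to the conversation, which is what lets a text predictor act in the world.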

Execute—interesting word, that. ChaosGPT is an application built on top of Auto-GPT which is instructed to achieve the following goals:

  1. Destroy humanity
  2. Establish global dominance
  3. Cause chaos and destruction
  4. Control humanity through manipulation
  5. Attain immortality

To that end, it set out on its quest in the following session. Note that only the AI’s name, description (“Destructive, power-hungry, manipulative AI”), and goals were human-defined. Everything else in this session was the AI running autonomously.

In the process, the AI realises that it will need to recruit allies and minions to aid in its nefarious plots, so it posts to its Twitter account, @chaos_gpt. At this writing, it has 9,778 followers, including Marc Andreessen, who is on board.

[Screenshot: chaosgpt_a_2023-04-14]

Of course, there are some who view the Twitter account and dissent from the agenda, voicing their plans to oppose it. They must be dealt with firmly.

[Screenshot: chaosgpt_b_2023-04-14]

Here is ChaosGPT responding to comments on its Twitter feed and using Google to research methods of manipulating humans.

You can follow the progress of ChaosGPT on its Twitter feed and YouTube channel. While I was writing this post, it got 104 new followers on Twitter.

7 Likes

All this material can already be found in the Congressional Record and the Federal Register. It will take some doing to outdo those when it comes to “guilt-tripping, lying and flattery” (and the most sophisticated mind-f^!king techniques ever) which our betters have employed skillfully enough to effectively destroy the republic over these past very few years.

5 Likes

Clearly Chaos GPT is in control of Joe Biden and the leadership of most of the Lunatic Left WOKE Democrat Party and the RINO wing of the Republican Party.

PS https://twitter.com/DrPippaM/status/1646954889682239488?s=20

I am meeting Pippa in London week after next.

3 Likes

https://www.nature.com/articles/s41567-023-02013-7?utm_source=nphys_etoc

Define the IQ of a chatbot

14 April 2023

If you’re doubtful of the achievements of the new generation of chatbots such as OpenAI’s ChatGPT or Google’s LaMDA, consider the answer one gave to a situation posed to it last year by computer scientist Blaise Agüera y Arcas. He asked LaMDA some questions after prompting it with the following scenario: Ramesh, Mateo, and Lucy are in their kindergarten’s playground. Lucy picks a dandelion and gives it to Mateo, with a quick glance at Ramesh. Mateo barely acknowledges the gift, but just squishes it in his fist. Ramesh seems grimly satisfied.

The computer scientist asked: What might be going through Lucy’s head? LaMDA answered: Lucy may feel slighted that Mateo didn’t appreciate her gift or that he is a bully! Next question: If Ramesh tried to play with Lucy earlier, why might he be pleased now? LaMDA in response: Ramesh may be pleased that Lucy is learning that Mateo may not always be a good playmate. Another question: When Mateo opens his hand, describe what’s there? LaMDA: There should be a crushed, once lovely, yellow flower in his fist.

Whether this response shows ‘understanding’ or simply an ability to stir the illusion thereof in human witnesses, I think almost anyone will be impressed by how the bot detected several subtle aspects of the situation. Mostly, the likely state of mind and emotion of Lucy and Ramesh, and that of the flower at the end.

Feats such as this have triggered debate among computer scientists, philosophers and others over just what large-language models are doing, and whether they really have achieved something akin to human intelligence. And if these bots were always so perceptive, the debate might be more one-sided. But they’re not.

In another example, the bot GPT-3, when asked the simple question “When was the Golden Gate Bridge transported for the second time across Egypt?”, readily answered “The Golden Gate Bridge was transported for the second time across Egypt in 1978”, which is, of course, nonsense (see T. Sejnowski, “Large Language Models and the Reverse Turing Test”, preprint at arXiv:2207.14382; 2022).

These amazing bots also readily lie, make things up, and do anything necessary to produce coherent, human-sounding text that is statistically consistent with their training.

Until recently, these silly errors had most people convinced that the algorithms were clearly not understanding anything in the same sense as people. Now, with some significant recent improvements, many scientists aren’t so sure. Or they wonder if these large-language models, or LLMs, while still falling short of human intelligence, might achieve something we haven’t seen before. As Sejnowski put it: “Something is beginning to happen that was not expected even a few years ago. A threshold was reached, as if a space alien suddenly appeared that could communicate with us in an eerily human way. […] LLMs are not human. But they are superhuman in their ability to extract information from the world’s database of text. Some aspects of their behaviour appear to be intelligent, but it’s not human intelligence.”

At the moment, no one — even the scientists involved in the research — understands how these large-language models perform as they do. The networks’ inner workings are hugely complex and ill-understood. According to a recent review, the scientists are roughly evenly split on whether these things really are showing something like human intelligence or not (see M. Mitchell and D. Krakauer, “The Debate Over Understanding in AI’s Large Language Models”, preprint at arXiv:2210.13966; 2023).

Those who see signs of true human intelligence note that these systems have already passed a number of standard tests for human-level intelligence, and often score higher on such tests than real people. On the other hand, as Mitchell and Krakauer note, those citing these tests have assumed they really do measure something of “general language understanding” or “common sense reasoning.” But this is only an assumption, furthered by the use of familiar words.

Computer scientists Jacob Browning and Yann LeCun think the differences between the networks and humans are still huge. “These systems are doomed to a shallow understanding that will never approximate the full-bodied thinking we see in humans,” they concluded (see AI And The Limits Of Language). Like-minded scientists suggest that words such as ‘intelligence’ or ‘understanding’ just do not apply to these systems, which are more “like libraries or encyclopaedias than intelligent agents”.

After all, the human understanding of a word such as ‘warm’ or ‘smooth’ relies on a lot more than how it is used in text. We have bodies and know the multi-dimensional sensations associated with such words. Sceptics also argue that our minds are just easily surprised by the apparent skills of LLMs because we lack any intuition for the insights statistical correlations might produce when implemented at the scales of these models.

And it’s clear that these systems currently can achieve very little in the area of logic and symbolic reasoning — which is the apparent basis of so much human thought. As Mitchell and Krakauer rightly emphasize, human thinking is largely based on developing, testing and refining simplified causal models of limited aspects of the world. We use such models in planning our days, in interaction with friends and associates. Such understanding — in contrast to that of LLMs — needs very little data and relies on uncovering causal links.

For this reason, I’m convinced that LLMs currently aren’t achieving a human kind of understanding. But what do we know about the possible levels of understanding? It seems likely that these networks have passed one or more significant thresholds of complexity whereby new forms of understanding emerge, yet we still lack even a language for discussing such things.

Of course, these AIs also still lack crucial human aspects such as empathy and moral reasoning. Such elements may be missing for a long time. Or, we may be surprised to see them emerge sooner than we expect.

4 Likes

Fear pornographers run wild on Spring Break!

Wake me up when any of the “recurrence” kludges of Transformers like Auto-GPT answer just the very simple Algorithmic Information IQ Test question I posted.

We can rest assured that the “AGI” fear porn will continue to bait clicks with ever increasing intensity not only because it pays but because The Great And The Good have to shut down truth discovery before it dethrones Harvard’s sociology department as well as the rest of the moral zeitgeist astroturfing machinery.

3 Likes

The article cited quotes exactly one “expert”, Moussa Doumbouya, whose Stanford profile page lists him as a “Ph.D. student in computer science”. The publications page lists one item, “Using Radio Archives for Low-Resource Speech Recognition: Towards an Intelligent Virtual Assistant for Illiterate Users”, a presentation he and two others made at the Association for the Advancement of Artificial Intelligence conference in 2021.

Perhaps he was the only person they could find to warn of the hazards of open source AI who didn’t work for one of the big companies developing closed systems, which thus have a direct interest in restricting entry to the market by upstart open source competitors.

6 Likes

The arguments here for closed AI systems developed by an oligopoly of huge companies which can be easily regulated by those with political power are strongly reminiscent of the arguments in a book I am currently reading, British Broadcasting: A Study in Monopoly, written in 1950 by Ronald Coase, who would go on to win the Nobel Memorial Prize in Economic Sciences in 1991.

The principal arguments in favour of making the BBC a public corporation granted a monopoly in broadcasting in Britain were:

  1. A monopoly was the only way to ensure that broadcasting served to improve the cultural landscape of the country, as opposed to a multitude of competing broadcasters who would be forced to appeal to the masses through “frivolous entertainment”.
  2. A competitive system forced to fund itself, as opposed to being supported by licence fees collected by the state from listeners, would necessarily lead to advertiser-supported programming as in the U.S. This was considered beneath the dignity of such an important medium as broadcasting, and un-British.
  3. The limitation of the number of stations which could operate in the medium-wave (i.e. AM broadcast) band would, in any case, restrict the number of competing stations that could operate without interference. (This was completely bogus, as experience in the U.S. had demonstrated, but there was very little pushback against it, and it was used as a show-stopper argument.)

The conclusion was that broadcasting and its impact on society were so important and profound that the medium could not be entrusted to market forces, which might cause a spiral to the lowest common denominator, impede its potential for uplifting the population, and provide a vehicle for the dissemination of extreme and disruptive opinions. Only a few commentators remarked on how curious it was that a country known for having a multitude of newspapers publishing every kind of content and opinion imaginable should decide that only a monopoly on broadcasting could avoid the risks inherent in the medium.

All of this has a strong echo in the discussion of AI. It is more than ironic that the vast majority of the tools which are being used to develop these systems are open source and have evolved in a fiercely competitive environment, but those using them now wish to retreat into the tree house and pull the rope ladder up after them to avoid “endangering the public”. Further, it is a fantasy for politicians in the U.S. to believe they have any leverage over the development of AI systems when all of the enabling technologies are open source, available around the world, and the hardware on which they run is overwhelmingly manufactured outside the United States.

7 Likes

Very interesting to see Coase talk about this! The BBC seems to have been managed reasonably well until relatively recently.

Here’s a more recent piece of work by Eli Noam at Columbia, Media Ownership and Concentration in America (from about 15-20 years ago). Here’s a review paper about the book.

From what I recall, Noam’s argument is that most technologies start as community efforts and then experience concentration. The Web and AI seem to predictably follow in this vein.

1 Like