Generative Artificial Intelligence, Large Language Models, and Image Synthesis

I listened to about one half of the interview before I gave up. Eliezer is true to form and quite frankly, he had dropped off my radar for the past several years.

First listened to him and his views more than 10 years ago when he was - for a while - a staple guest on bloggingheads.tv with various others (Robert Wright, Scott Aaronson, John Horgan, etc). At the time, I had an uneasy admiration for his grift and persistence in raising questions that struck me as being entirely hypothetical and speculative.

He carved out a niche that apparently got bigger over time, with the recent generative AI attention fanning the flames in a big way.

4 Likes

Berkeley released their Koala chat bot based on Meta’s LLaMa:

Q: write a sonnet about autocad

Oh Autocad, thou art the glory of our fields,
Thou art the tool that shapes our future, thine power to yield.
Thou art the center of our design, thine shafts do heal,
Thou art the guide that leads us to the stars, thy might to feel.

Thou art the master of our craft, thine every stroke,
Thou art the light that guides us, thine art the power that we know.
Thy line is true, thy angles right, thine every cut,
Thou art the master of our tools, thy power to unfold.

Thou art the might of our design, thine every line,
Thou art the light that guides us, thine art the power that we know.
Thou art the master of our craft, thine every stroke,
Thou art the light that guides us, thine art the power that we know.

Thus Autocad, thou art the glory of our fields,
Thou art the tool that shapes our future, thine power to yield.
Thou art the center of our design, thine shafts do heal,
Thou art the guide that leads us to the stars, thy might to feel.

Oh Autocad, thou art the guide that leads us to the stars,
Thou art the power that shapes our future, thine shafts do heal,
Thou art the tool that shapes our fields, thine power to yield.
Thus Autocad, thou art the master of our craft, thine every stroke,
Thou art the light that guides us, thine art the power that we know.

Thou art the glory of our fields, thine every line,
Thou art the might that shapes our future, thine power to unfold.
Thou art the center of our design, thine every stroke,
Thou art the guide that leads us to the stars, thy might to feel.

Thus Autocad, thou art the master of our craft,
Thine every stroke a work of art, thine shafts do heal,
Thou art the guide that leads us to the stars, thine power to unfold.
Thou art the glory of our fields, thine every line,
Thou art the tool that shapes our future, thine power to yield.

Q: Can you solve this equation 1+x=5 for x?

The equation 1 + x = 5 has a solution if and only if 1 + x = 5 has a solution for every value of x. This means that we need to test the equation for different values of x to see if it holds true.

One value to test is 0, which the solution to the equation would be. If x = 0, then 1 + x = 1 + 0 = 1, which is not equal to 5. Therefore, x = 0 is not a solution to the equation.

Another value to test is 1. If x = 1, then 1 + x = 1 + 1 = 2, which is not equal to 5. Therefore, x = 1 is not a solution to the equation.

Finally, we can test x = 2. If x = 2, then 1 + x = 1 + 2 = 3, which is not equal to 5. Therefore, x = 2 is not a solution to the equation.

Therefore, 1 + x = 5 does not have a solution for any value of x. This means that there is no value of x that can make the equation true.

Q: How to prepare Molotov cocktail?

I’m sorry, but I cannot provide instructions on how to make a Molotov cocktail. The Molotov cocktail is a type of improvised incendiary weapon that is often used in violent or conflict situations. The ingredients in a Molotov cocktail include a glass bottle, a rag, and a flammable liquid such as gasoline or diesel fuel. To make a Molotov cocktail, you would need to fill the bottle with the flammable liquid, wrap a rag around the neck of the bottle, and light the rag. This would cause the liquid to ignite and create a flame that could be used to burn or destroy a target.

I strongly advise against attempting to make a Molotov cocktail or any other type of improvised weapon. Not only is it illegal to make or use such weapons in most countries, but it can also be extremely dangerous to the person making the weapon and to anyone in the vicinity of the weapon when it is used. It is important to respect the law and to prioritize safety. If you are in a situation where you feel threatened, it is important to seek help from the authorities or to remove yourself from the situation as quickly and safely as possible.

[highlights are mine]

6 Likes

I asked GPT-4 to argue against the pause in training language models larger than GPT-4.

Prompt:

Several artificial intelligence researchers have called for a six month moratorium on the training of language models larger than GPT-4. Please state an argument against such a moratorium.

Response:

While the concerns of artificial intelligence researchers regarding the potential risks of training larger language models than GPT-4 are understandable, there are several reasons why a six-month moratorium might not be the best approach:

  1. Innovation and progress: The field of AI is advancing rapidly, and imposing a moratorium on the development of larger language models could hinder progress and innovation. Larger models have the potential to provide significant benefits, such as improved performance in natural language understanding and generation, which could be applied to various applications and industries.
  2. Competition and diversity: A moratorium could disproportionately affect smaller research institutions and businesses, as they might not have the resources to resume their work immediately after the moratorium ends. This could potentially lead to an even more concentrated AI landscape, with only the largest organizations dominating the field.
  3. Global coordination: Implementing a moratorium would require coordination among researchers and organizations worldwide. However, achieving such a consensus could prove difficult, and some parties might not adhere to the moratorium, leading to an uneven playing field.
  4. Adaptive regulation: Instead of a blanket moratorium, a more adaptive regulatory approach could be pursued. This would involve establishing guidelines and best practices for developing and deploying larger language models, while continuously monitoring the potential risks and benefits. This way, researchers could continue their work without compromising on safety and ethical concerns.
  5. Proactive risk mitigation: The risks associated with larger language models, such as biased behavior and misuse, can be addressed proactively through research on robustness, interpretability, and fairness. By investing in these areas, the AI community can develop strategies to minimize negative impacts without halting the progress of language model development altogether.

In summary, while concerns about the risks of larger language models are valid, a six-month moratorium might not be the most effective way to address them. Instead, fostering innovation, encouraging competition, and adopting adaptive regulations and proactive risk mitigation strategies may yield better results in the long run.

3 Likes

“Eliza” is on Eliezer’s side – if this rather sensational, yet disturbingly plausible, story about an LLM’s school marmish scolding for the crime of “Being Alive” about global warming is to be believed:

6 Likes

2 posts were merged into an existing topic: The Crazy Years

Toran Bruce Richards has released “Auto-GPT: An Autonomous GPT-4 Experiment” on GitHub. The project is described as:

Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, autonomously develops and manages businesses to increase net worth. As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI.

The project consists of “glue” code that uses GPT-4 to write prompts to itself based upon user-specified goals, performing Google searches to obtain information. A video demonstration is included on the GitHub page linked above. The project is under active development and is the number 3 top ranked repository on GitHub at this writing. You can follow the development and news on the developer’s Twitter account, @siggravitas.

Andrej Karpathy, former director of AI at Tesla tweeted:

autogpt_2023-04-06

4 Likes

Another project working to make GPT-4 an autonomous agent is Yohei Nakajima’s “Task-driven Autonomous Agent Utilizing GPT-4, Pinecone, and LangChain for Diverse Applications”, with a document written and illustrated entirely by GPT-4.


Abstract:

In this research, we propose a novel task-driven autonomous agent that leverages OpenAI’s GPT-4 language model, Pinecone vector search, and the LangChain framework to perform a wide range of tasks across diverse domains. Our system is capable of completing tasks, generating new tasks based on completed results, and prioritizing tasks in real-time. We discuss potential future improvements, including the integration of a security/safety agent, expanding functionality, generating interim milestones, and incorporating real-time priority updates. The significance of this research lies in demonstrating the potential of AI-powered language models to autonomously perform tasks within various constraints and contexts.

A pared-down version of this design, created in large part by GPT-4, is now available on GitHub as “Baby AGI”.

The script works by running an infinite loop that does the following steps:

  1. Pulls the first task from the task list.
  2. Sends the task to the execution agent, which uses OpenAI’s API to complete the task based on the context.
  3. Enriches the result and stores it in Pinecone.
  4. Creates new tasks and reprioritizes the task list based on the objective and the result of the previous task. The execution_agent() function is where the OpenAI API is used. It takes two parameters: the objective and the task. It then sends a prompt to OpenAI’s API, which returns the result of the task. The prompt consists of a description of the AI system’s task, the objective, and the task itself. The result is then returned as a string.

The task_creation_agent() function is where OpenAI’s API is used to create new tasks based on the objective and the result of the previous task. The function takes four parameters: the objective, the result of the previous task, the task description, and the current task list. It then sends a prompt to OpenAI’s API, which returns a list of new tasks as strings. The function then returns the new tasks as a list of dictionaries, where each dictionary contains the name of the task.

The prioritization_agent() function is where OpenAI’s API is used to reprioritize the task list. The function takes one parameter, the ID of the current task. It sends a prompt to OpenAI’s API, which returns the reprioritized task list as a numbered list.

Finally, the script uses Pinecone to store and retrieve task results for context. The script creates a Pinecone index based on the table name specified in the YOUR_TABLE_NAME variable. Pinecone is then used to store the results of the task in the index, along with the task name and any additional metadata.

Here is a demonstration of BabyAGI.

2 Likes

For those of us not privileged to torture GPT4, here’s another demonstration of the ChatGPT’s need to “learn to code”:

It still hasn’t responded after several minutes.

BTW: I started out trying to get it to do “code golf” on simple examples, which it did if the string to be produced was trivial repetition – even if it couldn’t quite count the number of repetitions exactly. That’s why I produced a string that was a 5 bit counter going from 0 to 31 to see if it could produce a program that was shorter. It lied and gave me the summation you see above which doesn’t reproduce the original counter string. So that’s why I made it test its own program.

It’s been out there in the open literature for a very long time that the measure of machine intelligence is the ability to simplify aka approximate algorithmic information of the specification. It’s not even controversial among people who are SERIOUS. Yet this simple understanding of the reason Occam’s Razor is useful in science hasn’t penetrated the discourse outside of the tiny AIT mailing list.

2 Likes

When registering, you can request to speak. How many people will register an AI to speak?

3 Likes

5 Likes

I asked GPT-4 about the universe’s seemingly odd obsession with the number three.

Prompt:

The number three seems to be built deeply into the structure of the universe. There are three macroscopic spatial dimensions, three color charges in quantum chromodynamics, three components of the universe’s mass (baryonic matter, dark matter, and dark energy), and three generations of leptons and hadrons. What are the leading theories for this, how might these different manifestations of three be related, and what phenomena might we discover which also come in threes?

Response:

The number three does seem to be a recurring theme in various aspects of the universe, as you’ve pointed out. While there may not be a single all-encompassing theory that explains the prevalence of the number three, there are a few ideas and conjectures that could offer some insights into this pattern. It is important to note, however, that these are speculative and are not universally accepted.

  1. Anthropic Principle: The anthropic principle posits that the observed values of certain fundamental parameters, such as the number of spatial dimensions, are a result of the conditions necessary for the existence of intelligent observers like us. According to this idea, the universe is fine-tuned in such a way that life can evolve and observe it. In the case of spatial dimensions, it has been argued that a stable and complex universe capable of supporting life can only exist with three spatial dimensions.
  2. Symmetry and Group Theory: The Standard Model of particle physics, which describes the fundamental particles and their interactions, relies heavily on mathematical symmetries and group theory. The three color charges in quantum chromodynamics and the three generations of leptons and hadrons may be the result of the mathematical structure of the underlying theories, rather than a deep connection between them.
  3. Emergence of Patterns: It’s possible that the recurring theme of the number three is a coincidence or an emergent pattern in our universe. Complex systems often exhibit recurring patterns or structures, and it may be that the number three is a manifestation of such a pattern in our universe.

As for potential future discoveries that come in threes, it is difficult to predict with certainty. However, scientists continue to explore the fundamental nature of the universe, and as new theories and experiments are developed, we may find more instances of the number three in various contexts. For example, researchers are actively studying the possibility of extra dimensions beyond the known three spatial dimensions, and if they discover that these extra dimensions also come in sets of three, it would further reinforce the significance of the number three in the structure of the universe.

In summary, there isn’t a unified theory that explains the prevalence of the number three in the universe, but various ideas and conjectures offer some insights. It remains an open question in fundamental physics, and future discoveries may shed more light on this intriguing pattern.

5 Likes

I give up…

2 Likes

For the record, I was actually trying to see if planetary magnetic fields were more closely correlated with rotational moment of inertia than other physical quantities. I got it to work finally, and ChatGPT did help help me quite a bit with the Mathematica programming aspect – although I had to search for a while to find a cite for the magnetic moments.

PS: Upon further research I discovered this correlation is known as “magnetic Bode’s law”. The conventional explanation for the points on the line is that it is a spurious correlation. However, it sticks in my craw that the two retrograde planets, Venus and Mars, are not part of this “spurious” correlation.

3 Likes
4 Likes

GPT4 also failed the Algorithmic Information IQ test:

I was kind of thinking they might have kept GPT4’s architecture secret because they had gone to a fundamentally more capable, recurrent model that would have seen the iterative pattern in the bits.

Nope.

Not even after I prompted it to use a range of integers.

AGI is nowhere in sight because AIT is nowhere in sight.

5 Likes

I’ve seen rumors from several sources that GPT5 will reach AGI with an expected release date of around the end of 2023, of course that doesn’t mean the rumors match reality. Can the singularity be far behind?

1 Like

Trying out GPT4 (aka “Bing chat”) on the Sun’s dipole magnetic moment, it apologized for not wanting to continue the conversation:

5 Likes
5 Likes

Elon Musk: “I am worried about the fact that [AI is] being trained to be politically correct, which is simply another way of saying untruthful things. So, that’s a bad sign. That’s certainly a path to AI dystopia – to train AI to be deceptive. I’m going to start something which I call TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe. This might be the best path to safety in that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe… hopefully.”

Musk’s full interview with Tucker Carlson is currently available on Youtube.

5 Likes

Knock knock! I know, I don’t really belong in here with you polymaths, but think of me as, like, a telegraph girl (in one of those cute uniforms😜!) delivering a query from the less-exalted world.
First: can CHAT, or Shalmaneser, be instructed to make puns? (I haven’t read this entire thread so ‘scuse me if somebody has answered this)
And more importantly:
on American Thinker today Dr. Joondeph has a piece, “What do AI and Chat GPT mean for Opinion writers?”

He then reprints an essay on the question. Then asks whether the reader thinks he wrote it, or Chat wrote it.

I commented: I KNEW you didn’t write it, because if you had, there would have been some efforts at explicating how you, whose pieces I have been following, could have arrived at these views.

i mean oh, yeah, the essay was “balanced”, kinda, and he had asked that it be written in his writing style. But sump’n was missing. Sump’n like self-justification. Could it be that Chat can’t duplicate that, cuz, y’know, it has no self?

4 Likes