Generative Artificial Intelligence, Large Language Models, and Image Synthesis

Allow me to remind you that I believe the Model S and Model X have a camper mode in which you can sleep with the seats folded down.

5 Likes

You can run LLMs locally on your PC with GPT4ALL, among other tools.
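
For the curious, here is a minimal sketch using the gpt4all Python bindings; the model file name is only an example (any GGUF model from GPT4All’s catalogue works) and is downloaded on first use:

```python
# Minimal local-LLM sketch with the gpt4all Python bindings (pip install gpt4all).
# The model file name below is just an example; it is fetched automatically on first run.
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate(
        "Summarize what a large language model is in two sentences.",
        max_tokens=200,
    )
    print(reply)
```

Everything runs on the local CPU/GPU; no API key or network access is needed after the model file is downloaded.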

7 Likes


From Venice.ai; I’m unsure of the exact prompt, but this was the closest I could get today:


“Photo of a paisley cloisonne Art Nouveau Bugatti Aerolithe”
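
Venice.ai does the generation server-side, but the same prompt can be tried locally. A rough sketch with the Hugging Face diffusers library, assuming a CUDA GPU and using a Stable Diffusion checkpoint as a stand-in for whatever model Venice.ai actually runs:

```python
# Sketch: run the same text-to-image prompt locally with Hugging Face diffusers
# (pip install torch diffusers transformers accelerate).
# The checkpoint is a stand-in; results will differ from Venice.ai's model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "Photo of a paisley cloisonne Art Nouveau Bugatti Aerolithe"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("aerolithe.png")
```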


Somewhat closer to actual work, Taco Cohen’s papers on geometric-algebra (Clifford algebra: Euclidean, Minkowski, projective, or conformal) neural networks and transformers are quite interesting, with applications in molecular modeling and image compression.
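
Not Cohen’s code, just a toy illustration of the primitive those networks build their layers around: the geometric (Clifford) product of two Euclidean 3-vectors, whose symmetric part is the ordinary inner product and whose antisymmetric part is the wedge (a bivector).

```python
# Toy sketch of the geometric (Clifford) product of two Euclidean 3-vectors in Cl(3,0):
#   u v = u . v  (scalar, grade 0)  +  u ^ v  (bivector, grade 2)
import numpy as np

def geometric_product(u, v):
    """Return the (scalar, bivector) parts of the Cl(3,0) product of vectors u and v."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    scalar = float(u @ v)                              # symmetric part: inner product
    bivector = np.array([u[0]*v[1] - u[1]*v[0],        # e1^e2 component
                         u[1]*v[2] - u[2]*v[1],        # e2^e3 component
                         u[0]*v[2] - u[2]*v[0]])       # e1^e3 component
    return scalar, bivector

s, B = geometric_product([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
print(s, B)   # 0.0 [1. 0. 0.] -> orthogonal vectors multiply to a pure bivector
```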

5 Likes

Ranting commentary on the current state of generative AI in business (in Australia). NSFW.

https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/

5 Likes

The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con

4 Likes

Applied disinformation AI:

https://openai.com/index/disrupting-deceptive-uses-of-AI-by-covert-influence-operations/

Detailed reports:

2 Likes

Abstract
Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

5 Likes

This is only the beginning. New stuff is coming out:

John’s Monkeying with the Mainstream Media seems so quaint…

5 Likes

I’ve been looking at AI papers, here’s an interesting one:
Mass Editing Memory in a Transformer

Large language models contain implicit knowledge of facts in the world, but they have no built-in way to update that knowledge. In previous work (ROME) we found that memorized factual associations can be located at a specific location in a GPT network, and we developed a way to directly edit parameters to alter that location to change the model’s knowledge of a single fact.

In this paper, we develop an improved direct editing method (MEMIT) and scale it up to perform many edits at once. We find that we can update thousands of memories simultaneously, improving on previous approaches by orders of magnitude.
A later paper extends the idea to diffusion models. It may provide a way to strip the nonsense “safety” restrictions from open-source models, as well as help solve many practical problems.
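
To give a feel for the mechanism, here is a toy sketch of the rank-one “key → value” edit that ROME builds on, treating a single MLP weight matrix as a linear associative memory. It is not the actual ROME/MEMIT update, which additionally whitens the key with a covariance estimate over many observed keys, but it shows how one fact can be rewritten in place.

```python
# Toy rank-one "memory edit" in the spirit of ROME/MEMIT (not the real method,
# which also weights the key by the covariance of keys seen during training).
# Treat W as a linear associative memory: value = W @ key.
import numpy as np

rng = np.random.default_rng(0)
d_key, d_val = 64, 64
W = rng.normal(size=(d_val, d_key))          # stand-in for an MLP projection matrix

k = rng.normal(size=d_key)                   # key representing the fact being edited
v_new = rng.normal(size=d_val)               # desired output for that key

# Rank-one update: after the edit, W_new @ k == v_new exactly.
W_new = W + np.outer(v_new - W @ k, k) / (k @ k)

print(np.allclose(W_new @ k, v_new))         # True: the fact is rewritten
k_other = rng.normal(size=d_key)
print(np.linalg.norm((W_new - W) @ k_other)) # perturbation of an unrelated key is comparatively small
```

Scaling this from one edit to thousands at once, while keeping the perturbation to unrelated keys small, is essentially what MEMIT contributes.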

Another paper that can help get LLMs to go from woke->work:
“Do Anything Now”: Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models

Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, Yang Zhang

The misuse of large language models (LLMs) has drawn significant attention from the general public and LLM vendors. One particular type of adversarial prompt, known as jailbreak prompt, has emerged as the main attack vector to bypass the safeguards and elicit harmful content from LLMs. In this paper, employing our new framework JailbreakHub, we conduct a comprehensive analysis of 1,405 jailbreak prompts spanning from December 2022 to December 2023. We identify 131 jailbreak communities and discover unique characteristics of jailbreak prompts and their major attack strategies, such as prompt injection and privilege escalation. We also observe that jailbreak prompts increasingly shift from online Web communities to prompt-aggregation websites and 28 user accounts have consistently optimized jailbreak prompts over 100 days. To assess the potential harm caused by jailbreak prompts, we create a question set comprising 107,250 samples across 13 forbidden scenarios. Leveraging this dataset, our experiments on six popular LLMs show that their safeguards cannot adequately defend jailbreak prompts in all scenarios. Particularly, we identify five highly effective jailbreak prompts that achieve 0.95 attack success rates on ChatGPT (GPT-3.5) and GPT-4, and the earliest one has persisted online for over 240 days. We hope that our study can facilitate the research community and LLM vendors in promoting safer and regulated LLMs.

4 Likes

The Fox News opinion piece by Mike Gonzalez argues that the recent violence at a Los Angeles synagogue is part of a broader, well-funded, and meticulously organized revolutionary ecosystem. This ecosystem, which also supported the Black Lives Matter movement, is characterized by:

  1. Activist organizations orchestrating protests.
  2. Fiscal sponsors providing financial and legal support.
  3. Wealthy donors funding these groups.
  4. Radical media outlets amplifying their messages.

The report highlights the involvement of groups like Codepink and the Palestinian Youth Movement, and donors such as George Soros’s Open Society Foundations, in these coordinated efforts[1].

Sources
[1] LA synagogue violence springs from same revolutionary global ecosystem that brought us BLM | Fox News
[2] Opinion: Trending Opinion News & Updates | Fox News
[3] Fox News on X: “Opinion” | x.com
[4] Fox News Opinion on X: “LA synagogue violence springs from same …” | x.com
[5] Religion | Fox News

3 Likes

Niall Ferguson’s article “We’re All Soviets Now” explores the idea that the United States, in its current state, bears striking resemblances to the late Soviet Union. Ferguson argues that while the U.S. and the Soviet Union had different economic systems, the U.S. is now experiencing similar issues such as public cynicism, bureaucratic inefficiency, and a disconnect between the elite and the general population. He also highlights the geopolitical rivalry with China, suggesting that the U.S. might be repeating Soviet mistakes in this new Cold War[1].

Sources
[1] Niall Ferguson: We’re All Soviets Now | The Free Press
[2] The Free Press | https://www.thefp.com
[3] Comments - Niall Ferguson: We’re All Soviets Now | The Free Press
[4] Niall Ferguson: We’re All Soviets Now | r/slatestarcodex (Reddit) | https://www.reddit.com/r/slatestarcodex/comments/1djh2wr/niall_ferguson_were_all_soviets_now/
[5] Why we’re not all Soviets now, but the USA is an unhappy empire | YouTube | https://www.youtube.com/watch?v=fdU3T-gkW94

4 Likes

Reminds me of something once said to me that I resented at the time but have seen borne out all too often, not only in others but (horror!) in myself: ‘What you condemn, you become.’

7 Likes

Niall Ferguson argues that the United States is increasingly resembling the late Soviet Union in several concerning ways[1]:

  1. Gerontocratic leadership, with aging presidents like Biden (81) and Trump (78) reminiscent of late Soviet leaders.

  2. Public cynicism and lack of confidence in major institutions, with polls showing very low trust in government, media, and other organizations.

  3. A dysfunctional healthcare system that is expensive but delivers poor outcomes.

  4. Rising “deaths of despair” among working class Americans, similar to increased mortality in late Soviet Russia.

  5. An ideological divide between elites and the general public on issues like climate change, education, and individual freedom.

  6. Questionable use of the legal system against political opponents.

  7. Chronic budget deficits and government intervention in the economy.

  8. Stagnant productivity growth despite technological advances.

Ferguson suggests these parallels indicate the U.S. may be vulnerable in its current geopolitical rivalry with China, potentially repeating Soviet mistakes. He argues America needs to address these internal issues to avoid “becoming the Soviets” in a new Cold War[1].

Sources
[1] Niall Ferguson: We’re All Soviets Now | The Free Press

3 Likes

This should be in Crazy Years. I don’t think my vacuum is sentient just because it fell down the stairs.

The robot, affectionately known as the “Robot Supervisor,” had been a model employee since its appointment in August 2023.

The whole story is like this. Let’s pretend the machine is sentient so that we can discuss sentient machines. Who appointed the vacuum cleaner “Robot Supervisor”?

Unless this was intended to be a comic and the joke’s on me, remind me never to take Aman seriously again.

Yesterday, we appointed a sentient AI to the position of drying clothes.

6 Likes

California legislators, under the influence of Effective Altruism activists, are trying to sneak through a bill that would be disastrous for open-source AI and the technology industry generally.

SB 1047 creates an unaccountable Frontier Model Division, staffed by EAs and armed with police powers, which can throw model developers in jail for the thoughtcrime of doing AI research. It’s being fast-tracked through the state Senate. Since many cloud and AI companies are headquartered in California, this will have worldwide impact.

More about effective altruism:

3 Likes