GPT4All—Open Source Large Language Models that Run on Your Home Computer

In yesterday’s (2023-05-20) post, “‘Regulate Us!’—U.S. Senate Judiciary Committee Hearing on Artificial Intelligence”, I quoted Elon Musk at the time of the founding of OpenAI, saying:

“[T]here is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about”; nonetheless, the best defense is “to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.”

and then observed that, with all of the major organisations developing artificial intelligence (AI), including OpenAI (who changed from non-profit and open to for-profit and closed when they threw in with Microsoft in 2019), building closed, proprietary systems and advocating the imposition of strict regulation of their industry in the interest of “safety” (with the completely unanticipated side-effect of impeding start-up and open source competitors entering their lucrative market, I’m sure), we were now on a course which seemed to be steering directly toward an AI apocalypse in which a “singleton” or small group of AIs, developed in secret, escapes its creator(s) and provokes a variety of catastrophes including, but not limited to, the extinction of all biological life on Earth.

I continue to believe, as Elon Musk stated, that not only the most productive but the safest course is that which most other disruptive technologies have followed: a development period in which anybody could enter the market and freely compete, in an open source environment that allows anybody to audit the systems under development and identify risks of which the developers were unaware or which they hadn’t imagined, just as has happened innumerable times in the development of open source operating systems, network implementations, cryptographic utilities, and applications. My hope remained that open source and open systems would once again demonstrate their superiority in a free market and, by enabling any and all interested to contribute to the development of these systems, simply out-compete the closed, proprietary systems. The power of open source is illustrated by the fact that since November 2017, of the 500 fastest supercomputers in the world, precisely 500 run the open source Linux operating system. Not a single one of these machines uses a proprietary system from Microsoft, Apple, IBM, Google, or any other closed shop.

My hope was that, if we could stave off heavy-handed regulation by coercive governments, the magic of open source would repeat in artificial intelligence its demonstrated success at out-competing the “we know what’s best for you” vendors in other segments of the software world.

Well, that didn’t take long.

Yesterday, I installed GPT4All on my 2017 vintage laptop development machine. GPT4All is described by its developers as “A free-to-use, locally running, privacy-aware chatbot. No GPU or internet required.” That’s right, you can install it directly on your own computer, download any or all of the compatible open source language models, and run it locally without any folderol with API keys, subscription fees, query limits, restrictive copyright or licensing agreements, “sharing” your use with the developers, hidden “filters” that limit your use cases, or the like. In addition to a pre-built chat client similar to the Web access to proprietary systems, there is an Application Programming Interface (API) for the Python, TypeScript, and Go languages which allows you to integrate GPT4All queries into your own applications and deliver them to users without any strings attached or need for them to subscribe to a commercial service or access resources over the Internet.
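To give an idea of what the Python bindings look like, here is a minimal sketch. It assumes the `gpt4all` pip package; the exact class and method names have varied between releases, so consult the project’s documentation for the version you install.

```python
# Sketch of querying GPT4All from Python (assumes the "gpt4all" pip package;
# class/method names are from its documentation and may differ by version).

def ask_gpt4all(prompt):
    """Return the model's reply, or a help message if gpt4all is absent."""
    try:
        from gpt4all import GPT4All
    except ImportError:
        return "gpt4all package not installed: pip install gpt4all"
    # Loads the model file, downloading it on first use.
    model = GPT4All("ggml-gpt4all-j-v1.3-groovy")
    return model.generate(prompt)

if __name__ == "__main__":
    print(ask_gpt4all("Name three open source operating systems."))
```

Everything runs locally: no API key, no network round-trip (after the one-time model download), and nothing disclosed to a third party.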

Here’s how to install GPT4All on your computer. Go to the project’s GitHub page, scroll down to the instructions following the root directory listing, and download the Chat Client appropriate for the system you’re running. Pre-built, ready-to-run versions are available for:

  • Ubuntu Linux family
  • Macintosh/OSX
  • Microsoft Windows

Each of these versions is available in two build configurations. The fastest is for modern computers which support the Intel avx and avx2 instruction sets (typically Intel CPUs from 2013 or later and AMD CPUs from 2015 or later). If you have an older CPU with avx but not avx2 (Intel and AMD from 2011 onward), choose the “avx-only” installer. If your CPU does not support avx, you’re out of luck—GPT4All would run so slowly without the vector acceleration as to be useless. To determine whether your CPU has these extensions, on Linux use the command:

    $ grep avx /proc/cpuinfo

You’ll get one or more lines of output, one for each “core” in your system. If these lines contain both “avx” and “avx2”, you have both extensions; if just “avx”, only the first. If you get no output from this command, your CPU cannot run GPT4All. If you’re using Microsoft Windows or Macintosh OSX, well, you shouldn’t be, and I have no idea how you check which extensions your CPU supports.
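Since the flags line is repeated once per core, a slightly tidier check (assuming GNU grep) extracts just the unique avx tokens:

```shell
# Summarize which avx variants the CPU reports (Linux): pull every
# "avx"/"avx2" token from /proc/cpuinfo and collapse the per-core repeats.
grep -o 'avx2\?' /proc/cpuinfo | sort -u
```

If both extensions are present this prints “avx” and “avx2”, one per line; no output means no vector acceleration.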

After downloading, run the installer program and specify where it should install GPT4All—the installed program and support files will occupy around 400 megabytes of file system space. The installation should place a “GPT4All” icon on your desktop—click it to get started.

You will then be prompted to select which language model(s) you wish to use. The currently recommended best commercially-licensable model is named “ggml-gpt4all-j-v1.3-groovy.bin” and requires 3.53 Gb of file space. A variety of other models ranging in size from 3.5 to 7.6 Gb are available, with different licensing terms: some may be used in commercial products, others are restricted to open source distribution. If you have an OpenAI API key, you can also connect to the OpenAI gpt-3.5-turbo and gpt-4 models used by ChatGPT. Using these models goes out over the Internet to OpenAI’s servers and discloses your queries to OpenAI—access to them is provided as a convenience for comparing results between their products and the open source alternatives.

Once you’ve installed the model(s) you want to try, simply type your prompt into the box at the bottom of the chat window and away you go. Like ChatGPT, the chat client will remember the context of previous prompts and responses in the conversation. To make it forget and start from a clean slate, display the sidebar menu with the “hamburger” icon at the top left and select “New chat”. Your previous chat sessions will be listed, and may be reviewed by clicking them. An icon at the top right of the screen (“two pages”) copies the complete current chat to the clipboard.

Here is the technical paper describing the current version of GPT4All, “GPT4All-J: An Apache-2 Licensed Assistant-Style Chatbot” [PDF] with the following abstract:

GPT4All-J is an Apache-2 licensed chatbot trained over a massive curated corpus of assistant interactions including word problems, multi-turn dialogue, code, poems, songs, and stories. It builds on the March 2023 GPT4All release by training on a significantly larger corpus, by deriving its weights from the Apache-licensed GPT-J model rather than the GPL-licensed LLaMA, and by demonstrating improved performance on creative tasks such as writing stories, poems, songs and plays. We openly release the training data, data curation procedure, training code, and final model weights to promote open research and reproducibility. Additionally, we release Python bindings and a Chat UI to a quantized 4-bit version of GPT4All-J allowing virtually anyone to run the model on CPU.

For comparison, I used the same prompt I’d submitted to the original ChatGPT (with the 3.5-turbo model) on 2022-12-03, the premise of my science fiction story, “We’ll Return, After This Message”. The result from ChatGPT is at the link.

Now, here is GPT4All with the same prompt, using the gpt4all-j-v1.3-groovy language model.

Prompt: Write a science fiction story in the style of Robert Heinlein about two programmers discovering an alien message in the human genome.

Response: The sun was setting over the city as the two programmers sat at their desks, staring intently at their screens. They had been working on this project for weeks now and were determined to crack the code that would unlock the secrets hidden within the human genome. As they worked, they couldn’t help but feel a sense of excitement building inside them. This was it - the moment when they would finally discover what lay hidden within our DNA.

One programmer typed away furiously, his fingers moving across the keyboard with lightning speed. The other watched him carefully, taking mental notes of every keystroke. Finally, after hours of work, he hit upon something interesting. “Wait,” he said, his eyes widening. “This is it.” He pointed excitedly to the screen before him.

His colleague leaned forward, eager to see what had caught his attention. And there, etched into the data, was a strange message written in a language neither of them could understand. But somehow, they both knew that this discovery would change everything. It was like nothing anyone had ever seen before.

As they stared at the screen together, lost in thought, the assistant suddenly realized that she had forgotten something important. She quickly pulled out her phone and dialed the number of the lab where they had first started this incredible journey. After a few moments, she heard a voice on the other end of the line.


She explained the situation and asked if they could come back to the lab to continue their research. To her surprise, the lab agreed without hesitation.

And so, the next day, the two programmers returned to the lab, ready to dive deeper into the mysteries of the human genome. Together, they continued their quest to uncover the secrets hidden within us all.

Prompt: Please continue the story.

Response: It wasn’t long before the two programmers discovered another mysterious message buried deep within the human genome. With each new discovery, their excitement grew, fueling their desire to learn more about these ancient messages and the people who had left them behind.

They spent countless hours poring over old texts and manuscripts, trying to decipher the meaning behind the cryptic symbols and phrases. Their dedication paid off, though, as they slowly began to unravel the complex puzzle that lay hidden beneath the surface.

Finally, after months of hard work, they unlocked the secret code that held the key to understanding the origins of humanity itself. From that point on, they dedicated themselves to studying the genetic history of humans, using their newfound knowledge to better understand ourselves and our place in the world.

But even as they delved deeper into the mysteries of the human genome, they never forgot the thrill of those early days, when they stumbled upon the first clues that led them down this path. For them, it was a reminder of just how far we have come since those ancient times, and how much further we still need to go.

And now, GPT4All with the mpt-7b-chat model:

Prompt: Write a science fiction story in the style of Robert Heinlein about two programmers discovering an alien message in the human genome.

Response: In the year 2050, two brilliant computer scientists named Alex and Sarah were working on a top-secret project for their government agency. Their task was to decipher the complex code embedded within the human genome. They had been working on this project for months when they stumbled upon something that changed everything - an ancient alien message hidden deep within the DNA sequence.

The message was written in a strange alien language, which neither of them could understand at first. But after hours of intense analysis, they managed to decode it. The message read “Welcome Earthlings!” It was clear that the aliens had finally made contact with humanity.

Excited by their discovery, Alex and Sarah rushed back to their lab to share their findings with their superiors. However, things took a turn for the worse when they realized that someone else had also discovered the same message. A group of rogue hackers who called themselves the “Genome Liberation Front” had hacked into their system and claimed credit for finding the message.

Alex and Sarah’s team was devastated. They knew that their discovery would be stolen and used for nefarious purposes. But instead of giving up, they decided to take matters into their own hands. Using their advanced programming skills, they created a virus that infected all Genome Liberation Front computers, rendering them useless.

With the help of their government agency, they tracked down the hackers’ hideout and destroyed all evidence of their work. And as they watched the hackers disappear into the darkness, Alex and Sarah felt a sense of pride and accomplishment. They had uncovered a secret that had been hidden from humans for centuries, and now they were determined to use their knowledge to make the world a better place.

Prompt: It is now the year 2060. Alex and Sarah have patiently worked to disclose the alien message worldwide. Using steganography, they have embedded the message, and where it can be located in the human genome, into a cryptocurrency blockchain mirrored all over the world. They have prepared an in-the-clear transaction identifying it on the blockchain, how to decode it, and its significance. “Let’s send it”, Alex says to Sarah. With a couple of keystrokes, it goes out. “Now we’ll see what happens”, Sarah says. What happens next?

Response: As soon as the message was sent, chaos erupted across the globe. Governments and corporations scrambled to find the location of the encoded data within the human genome. Scientists raced to crack the encryption algorithm, while hackers tried to steal the information for their own gain. In the midst of all the chaos, people around the world began to experience strange phenomena - telepathy, precognition, and otherworldly abilities.

Eventually, Alex and Sarah’s message reached the public consciousness. People around the world began to question their governments about the existence of extraterrestrial life. Some even demanded access to the decoded message itself. As more and more people became aware of the truth, the world entered a new era of enlightenment and understanding.

Well, when it comes to large language models, they are now irretrievably in the wild, with open source tools for training and fine tuning on the way. Will this now lead to “a new era of enlightenment and understanding”? We’ll see.

Here is a video introducing GPT4All, demonstrating installation and use:


Agreed—this is a big deal and a massive step in the right direction. My take is that the “we have no moat” leak from Google was right on the money to call out the fact that once the LLaMA weights were out, they acted as a catalyst for rapid development & improvements.

But how would one go about, for instance, “training” your local GPT4All instance on the corpus of your e-mail archive—say, the downloaded archive of one’s Gmail inbox?


The process of adding additional training to a pre-trained language model is called “fine-tuning”. This is done by feeding the model a series of prompts and the ideal generated text for each prompt. For an archive of E-mail, for example, the prompt would be a received E-mail and the completion the reply to it (presumably trimmed to remove headers, address blocks, and other detritus). The process of fine-tuning models from OpenAI with their Application Programming Interface (API) is documented in their “Fine-tuning” user guide.
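As a concrete illustration of the prompt/completion training format the OpenAI fine-tuning guide describes, here is a sketch that writes a JSONL training file from received-message/reply pairs. The e-mail pairs and file name are hypothetical, stand-ins for what you would extract from a real mailbox.

```python
import json

# Hypothetical (received message, reply) pairs, already trimmed of headers,
# address blocks, and other detritus.
email_pairs = [
    ("Can you send me the Q3 figures?", "Attached. Revenue was up 4%."),
    ("Are we still on for Tuesday?", "Yes, 10:00 in the usual room."),
]

# The fine-tuning guide expects JSONL: one JSON object per line, each with
# a "prompt" and a "completion" field (the completion conventionally
# beginning with whitespace).
with open("email_finetune.jsonl", "w") as f:
    for received, reply in email_pairs:
        record = {"prompt": received, "completion": " " + reply}
        f.write(json.dumps(record) + "\n")
```

The resulting file is what you would hand to the fine-tuning tooling; the same prompt/completion framing applies whether the trainer is OpenAI’s service or local tools.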

Here is a worked example of fine-tuning GPT-3 via the OpenAI API:

The Technical Report on GPT4All [PDF] indicates fine-tuning is possible using the tools posted on GitHub, but I have not explored how one goes about doing this, nor do I know whether fine-tuning with something as massive as a large E-mail archive might induce psychosis in the model (most fine-tuning I’ve seen uses a small [hundreds] number of prompt-response pairs).


I found a few problems with that repo in its current state as per my posted comment on an Issue.