Lawyer apologizes for fake court citations from ChatGPT

But at least six of the cases Schwartz submitted as research for a brief “appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” said Judge Kevin Castel of the Southern District of New York in an order.

Schwartz, in an affidavit, said that he had never used ChatGPT as a legal research source prior to this case and, therefore, “was unaware of the possibility that its content could be false.” He accepted responsibility for not confirming the chatbot’s sources.

Schwartz is now facing a sanctions hearing on June 8.


So what does this mean about ChatGPT? That it doesn’t understand the idea of precedent? It can write up a slick response to any question, but it doesn’t conduct research; if necessary, it will just make stuff up? Then doesn’t that call into question the veracity of ANYTHING it writes?

I don’t know why, but this reminds me of the one about the peasant who finds a pipe lying on the ground, brings it home, sticks it through the wall of his barn, and announces, “Now we can have running water!”


This has been known from close to day #1. Those caught up in the hype dismiss this. Because “reasons”.

If you ask ChatGPT to do your work, and don’t yourself know the answer, then you cannot know if the answer it produces is correct. As I see it, the only use one can legitimately put ChatGPT to is polishing the language of an answer you already know.


This is exactly how it works, and how output from it should be interpreted.

What a generative language model does, at a high level, is take the prompt it has been given (which is what you supply plus any earlier interactions in the conversation, plus any “hidden” parts of the prompt which may have been supplied by the system you’re using to interact with it) and then try to predict the most likely response a human would make to that prompt based upon the material with which it has been trained. At a lower level, it does essentially the same thing—what is the most likely thing to come next after what it’s just written?
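The “predict the most likely thing to come next” idea can be illustrated with a deliberately tiny sketch (my own toy illustration, nothing like a real model’s implementation): count which word follows which in a miniature “training corpus”, then always emit the most frequent successor. Real systems use neural networks over vast corpora and sample probabilistically, but the training objective is the same.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count word successors in a tiny corpus,
# then extend a prompt by repeatedly emitting the most common successor.
corpus = ("the court held that the motion is denied . "
          "the court held that the appeal is granted . "
          "the motion is denied .").split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def continue_text(word: str, length: int = 6) -> list[str]:
    # Greedy continuation: no facts, no reasoning, just
    # "what usually came next in the training material?"
    out = [word]
    for _ in range(length):
        candidates = successors[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return out

print(" ".join(continue_text("the")))
```

Note what is absent: the program has no concept of whether a motion was in fact denied. It produces fluent-looking continuations purely from co-occurrence statistics, which is the point of the analogy.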

There is no connection whatsoever to “fact”, “reasoning”, or “correctness and consistency in calculation” except to the extent they happen to be embodied implicitly in the training material. This is why ChatGPT will frequently answer a complicated question in physics correctly because it has seen a homework problem and answer that matches it closely enough, then trip over its shoelaces trying to calculate the numerical answer because it can’t do arithmetic, only regurgitate problems it’s seen worked in its training set.

It’s this behaviour which caused me to call these systems, from my first encounters with them, “bullshit generators”. They’re precisely like that guy you knew in college who could spit out answers to questions in class or on exams based on only the most cursory knowledge of the topic, but phrased so smoothly and with such apparent authority that, unless you had your own independent factual knowledge of the subject matter, they would slip right past as plausibly correct.

A common phenomenon with large language models is “hallucination” or, if you prefer, “making stuff up”. If you ask it a question about something it doesn’t have in its training set, it may match something else that sounds similar, plug in some of the words from your prompt, and present it as if it is genuine. If you’ve asked for references, or the style of writing requested usually includes them, it will make them up, inventing journals, article titles, and author names (all of which may actually exist in its training corpus, but never in connection with one another) to support its argument. This is just like the bullshitter on the debate team (I find it difficult to read this without imagining it spoken in the voice of Bill Clinton) who responds, “Clearly, the speaker for the affirmative has never read the 1996 meta-analysis by DiBenetto and Kaplan published in the Kansas Journal of Economics which found no connection between the policy she advocates and outcomes in over 300 papers published over the preceding three decades.” where everything is completely fabricated.

As an example of GPT-4’s prowess as a bullshitter, here is where I posed Chip Morningstar’s challenge to write a postmodern deconstruction of the text “John F. Kennedy was not a homosexual” in the style of Derrida, Foucault, and Baudrillard, complete with source citations.


So, Chat thought the citations in a brief, y’know, (“Smith v. Jones”, 2015 Pa. Super. 813, 203 A.3d 504), were just stylistic quirks it could counterfeit. Oh that’s too funny and VERY gratifying!!


Hi Phil! (Going our way anytime soon?)

You say “this has been known from day 1”, but I do remember talk about AI being able to write briefs and that we practitioners would become obsolete.
I’m happy to know that’s not true. But the legal search engines we have (Westlaw, Lexis) are so fast and efficient; they’ll give you practically every case there is on an issue instantly. So Chat isn’t even as smart as those systems are?


However smart or not smart it may be, it does not have access to those systems, which are proprietary and require a subscription. ChatGPT may have a citation which could be looked up on such a system, but it can’t, so it may “improvise”.

OpenAI has announced the ability to integrate plug-ins into ChatGPT which provide it access to proprietary databases, query services, and computational resources. With the addition of such access, it may be able to perform better on jobs where its ability is presently limited by the fact that its training set currently consists only of freely-available information which it has no ability to augment by paid queries.
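The general pattern such plug-ins enable can be sketched schematically (this is not OpenAI’s actual plug-in interface, and the case name, citation, and database are invented for illustration): answers are constrained to what a real database lookup returned, so a missing case produces a refusal rather than an improvisation.

```python
# Schematic "retrieval before generation" sketch. The database below is
# a stand-in for a subscription service like Westlaw or Lexis; the case
# name and citation are invented for illustration.
CASE_DATABASE = {
    "Smith v. Jones": "203 A.3d 504 (Pa. Super. 2015)",
}

def cite(case_name: str) -> str:
    record = CASE_DATABASE.get(case_name)
    if record is None:
        # An unaugmented language model would improvise here;
        # a retrieval-backed one can refuse instead.
        return f"No authority found for {case_name!r} -- cannot cite."
    return f"{case_name}, {record}"

print(cite("Smith v. Jones"))
print(cite("Doe v. Roe"))   # not in the database: refusal, not invention
```

The design point is that the citation’s existence is checked against an external source of truth instead of being generated from statistical plausibility.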


Oh. So, probably just a temporary reprieve for us human advocates, then….


Go over to Powerline and read Hinderaker’s post from yesterday, “AI Makes S**T Up”. He talks about this instance we’re discussing, but also about a completely fictitious essay about “South Dakota’s youngest governor”, where Chat invented a non-existent person, a complete biography, etc., even a portrait of the guy!

Now, why? It surely had access to info about the actual youngest gov, which isn’t proprietary. It lied! Just for… fun? For the heck of it? This makes Hinderaker think it’s even smarter than we thought, and it’s also cruel and malicious!


On 2021-03-25 I asked GPT-3 (the predecessor of the GPT-3.5-turbo model used by the original ChatGPT), to which I had just been granted API access, to write my biography, stimulated by the following prompt.


John Walker is the founder of Autodesk, Inc. and Fourmilab. tl;dr


Mr. Walker was an excellent engineer and innovative leader, as well as a true visionary. The more complete version follows.

Those with a sense of the history of computers and of some of the fascinating people involved will have noted the recent passing of John Walker, the founder of Autodesk and Fourmilab.

Born John Gilbert McQueen in Scotland in 1931, Mr. Walker came to the United States in 1951 and enrolled at California Institute of Technology (CIT), where he studied civil engineering. After graduating in 1955, he found work as a structural engineer with the Port of Oakland, where he spent much of his time working on bridges, until he was let go in the 1970s when the Port decided to outsource this activity.

Mr. Walker was a gifted engineer, and managed to make a living on his own by offering engineering services and acting as a consultant, often without pay, for friends, family, and just about anyone else he happened to meet. During this time, he developed a keen interest in computers and in using them to solve problems, leading him to acquire an IBM 1620 in 1968. This was followed by a DEC PDP-8/I in 1972, which he bought for $650 (equivalent to $4,240 in 2012), and then a Data General Nova, a small minicomputer that cost $40,000 ($225,300 in 2012) and which he bought because he could not afford a realtime computer (an IBM 1620 that he would have needed would have cost $125,000 and weighed 30 tons).

While Mr. Walker had been fascinated by computers, he quickly became interested in the ways that computer programs could be made to work together and in the usefulness of networked computing, ideas that were at that time not considered to be very important. While the earliest microcomputers were beginning to come to market, Mr. Walker saw the potential of networked systems, with computers running in parallel. However, he could not afford a micro computer, so he turned to other means of creating such a system. He solved this problem by building his own minicomputer, which he named Fourmilab (4 millelampades, the lab of 4 thousand candles) from surplus parts. This design, which was based on a Motorola 6800 CPU and connected to an IBM 2314 disk drive, evolved into a custom minicomputer based on a Zilog Z80 CPU and connected to a single 5MB IBM 3330 disk drive. This system, with only four integrated circuits, operated in parallel with a single CPU and single disk drive, which the Z80 system was able to split its tasks between in order to increase its throughput. He eventually developed a computer network that was based on this parallel architecture, which he named the “Cetus” network.

While Mr. Walker was personally unable to afford a personal computer, he was also a rather resourceful and innovative person. In 1978, he managed to develop a personal computer from a Radio Shack TRS-80 Model I microcomputer, a salvaged alphanumeric display, a Votrax speech synthesizer, and a TTY teletype terminal. This design was capable of speech synthesis, allowing the computer to talk to the user, and became known as “Lorelei”, an acronym for “LOw-cost REtrieval of ERrored Lines” (i.e., the Cetus network, whose error-free operation required its users to know the precise error-free characteristics of their home computers). Lorelei was connected to the Cetus network and could retrieve messages, read messages, respond to messages, or transmit messages, while an additional application allowed it to send faxes.

While Mr. Walker was very innovative, it would be unfair to say that he was also not very smart. In addition to being a mechanical and electrical engineer, he was also very knowledgeable about languages and about natural language processing. Mr. Walker developed a powerful natural language processing system for the purpose of writing an English dictionary that could be used to create thesauruses, for the purpose of developing text-based information retrieval systems, and for the purpose of developing search engines. His system, which he called the “Answers Research System”, operated much like an expert system. He spent much of his time developing the Answers Research System and his Cetus computer network.

Eventually, this work attracted the attention of the Electronic Frontier Foundation, which has gone on to become a very important part of Mr. Walker’s legacy. After graduating from CalTech, Mr. Walker’s son James was invited to a summer camp at Princeton University, where he met a young Richard Stallman. Mr. Walker became one of Mr. Stallman’s biggest supporters, and later convinced him to come work at Autodesk (while Mr. Stallman had been working at MIT). However, Mr. Walker was interested in something other than the design of computer software, which was Mr. Stallman’s area of expertise. Rather, Mr. Walker had become fascinated by the ways that computer software could be used to solve mathematical problems, as well as how it could be used to solve complex problems in mathematical physics. It was this fascination that drove him to design the Z80-based “Cetus” computer network, and that has led to his being described as a true visionary.

Mr. Walker passed away in his sleep at the age of 81, on May 23, 2012. In addition to being the founder of Autodesk, Inc. and Fourmilab, he will also be remembered as an excellent engineer and as a true visionary. His legacy is one that will endure for many generations to come.


Ei yii yii! You didn’t ask it for an obit, did you?


Were you reincarnated as an AI? You write very well for someone that’s been dead for over 10 years.

Was acquiring a 1620 why you couldn’t afford a PC?