I tried to add this to the ChatGPT thread, but I can’t figure out where to jump in, so:
Everybody has read about the attorney who filed a brief or motion written by Chat, which turned out to contain nonexistent cases.
Great article in our state bar magazine this month, written by an associate who was, of course, aware of the above-referenced fiasco, but asked Chat for help on an issue he was researching, just to see what it would do.
It fabricated a case on point, complete with lower-court history and a bunch of citations. So far that’s what we saw before, but he asked it, twice, if the case was “a real case,” and it assured him that it was.
This is what I don’t get: if it can’t think, how can it lie? Or maybe more appropriately, why does it lie?
Or was it a problem with the meaning of “real”? Maybe Chat interpreted “real” to mean the case has an existence (which, now that Chat has created it, it does; it exists on paper, just like all those other cases in the law reporters. In that sense it’s just as “real” as they are). It reminds me of a friend of mine who has a fake aquarium in his living room, complete with a stuffed fish. “Is the fish real, Uncle Jug?” my toddler daughter asked. “Oh, it’s real,” he replied. One beat. “It just ain’t alive.”
What if the attorney had asked Chat, “Is this case valid legal precedent?” Would Chat have known what that meant, and would it have answered honestly... ah, correctly?
My point is, though: unless and until we KNOW the answers to these questions, WHY is anybody trying to use it for legal research? It just seems... perverse.