Should you fire goodcounsel and hire ChatGPT?
Short answer: not yet. My “hot take” is below.
[UPDATE 3/21/23 – As with so many things, Last Week Tonight with John Oliver explains AI with enormous clarity and humor. Highly recommended.]
Like everyone else, we at goodcounsel are trying to figure out what to make of ChatGPT. It feels like it came out of nowhere. ChatGPT is, as the saying goes, an overnight success story, but in truth decades in the making. Natural language processing (NLP) and artificial intelligence (AI) research date back at least to the 1950s. (Here is ChatGPT’s story of AI and NLP in the style of Dr. Seuss – because hey, why not?)
Will ChatGPT and similar AI tools replace skilled professionals such as doctors and – gulp – lawyers? I will venture a few early thoughts, though I want to emphasize that I am neither an AI expert nor a fortune teller, and I still have a great deal to learn about this area.
Let’s first remember that computer-powered processing has been assisting lawyers for close to a half-century, initially for the task of searching through large text libraries. Tools such as LEXIS (launched in the 1970s) and Westlaw help lawyers find judicial decisions and, now, other resources that match their text queries. E-discovery tools search through massive electronic document repositories for matching litigation documents. Advances in algorithms and processing power have allowed these tools to become faster and more sophisticated about “understanding” the tone and meaning of documents rather than just matching text strings. (Here is ChatGPT’s summary of the impact of AI on legal practice.)
ChatGPT is a chatbot based on the GPT-3 (Generative Pre-trained Transformer 3) large language model (LLM). LLMs are “trained” on massive data sets – troves of books, articles, and curated web pages – and use machine learning to develop probabilistic relationships among the words in that data. This intense data processing gives the tool the ability to predict the most likely next word in a sentence, given all the words that came before it. If you have watched ChatGPT generate text, you have seen it “think” one word at a time. This probabilistic word-by-word approach produces remarkably human-sounding and often correct responses, though those responses tend to be general and sometimes a bit bland.
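For the curious, here is a toy illustration of next-word prediction. This is a simple “bigram” model – it looks at only the single previous word, whereas a transformer LLM like GPT-3 considers the entire preceding context and billions of learned parameters – but it captures the basic word-by-word idea. The tiny corpus is invented for the example:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus, standing in for the troves of books,
# articles, and web pages an LLM is trained on.
corpus = ("the lawyer reads the brief . the lawyer writes the brief . "
          "the judge reads the brief .").split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Return the word most likely to follow `prev` in the corpus."""
    return following[prev].most_common(1)[0][0]

# Generate text one word at a time, just as ChatGPT appears to "think".
word = "the"
sentence = [word]
for _ in range(4):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

With such a small corpus the output is repetitive and bland – which, not coincidentally, is the same failure mode the text above describes, just at a vastly smaller scale.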
Tools like ChatGPT know only what they have been taught. This is not a trivial point; machine intelligence rests heavily on the prior output of human intelligence. (ChatGPT seems to agree with me.) If you created a law chatbot trained only on a dataset of legal briefs written by former President Trump’s lawyers during their fruitless litigation challenging the 2020 election results, you would have a very stupid chatbot with a deeply misguided view of federal election law. As programmers have long said, “Garbage in, garbage out.” Similarly, current AI models could not produce an original analysis of the implications of a new federal statute, though they could certainly produce an excellent summary if trained on articles written by human legal experts about the new law (or perhaps, by analogy, on prior analyses of similar laws?).
It may be tempting to say that humans, too, know only what we’ve been taught, but that is only superficially true. Humans learn not just from others but also on their own, from their own experiences. Our minds are embodied, and those bodies move through and interact with the world. The way we learn is also fundamentally different from how an AI tool learns. At a certain point in our development, we don’t just passively accept and incorporate information presented to us, as a computer might; rather, we engage in a dialogue with the instructor and subject the proposed information to critical thinking and analysis. We might reject new information as incorrect if it seems inconsistent with what we already believe, or we might be persuaded to abandon old information and accept the new. We might even conclude that neither the old nor the new information is entirely correct and, in the process, arrive at a creative insight – entirely new information.
ChatGPT will certainly not be ready to replace me until it can make creative or novel legal arguments, draft full sets of documents, or apply known legal principles to my clients’ unique and sometimes challenging factual circumstances. These are the types of insight that AI is, so far, not good at. Nevertheless, I expect that AI tools will become increasingly helpful as assistants. (Move over, Clippy!) For the moment, I view AI not so much as “artificial intelligence” as “augmented intelligence” – and the intelligence being augmented is mine.
I am hoping that tools like ChatGPT will give me a “running start,” with solid AI-generated overviews of legal issues that might have taken me or someone on my team much longer to produce. If the AI tool can quickly perform background research, then the human professional can spend more time on the aspects of a project that truly require her complex thinking abilities. Similarly, I won’t be offended (or surprised) if clients start coming to conversations with me having briefed themselves on the issue using an AI chatbot. It’s not as if clients never researched issues on the web before; ChatGPT might simply provide them a better answer in less time spent searching for it!
Or, as one author put it:
While AI and NLP are already transforming the practice of law, it’s important to note that these technologies are not intended to replace lawyers, but rather to support them in their work by automating repetitive and time-consuming tasks. The integration of AI and NLP into the legal industry is still in its early stages, and it remains to be seen what additional impact these technologies will have in the future.
Who wrote that? Of course: ChatGPT. (I know, I know; this is pretty cliché by now. But I had to do that just once.)
Postscript: Clippy lives!
Categorised as: Artificial Intelligence, Law Practice Innovation, Lawyering