Using artificial intelligence intelligently

In my previous post about AI (specifically, AI large language models or “LLMs”) and its impact on the legal profession, I concluded that the current versions of LLMs would not replace lawyers but could serve as very able assistants. As our friends at Clausebase put it in a recent webinar: AI can take over a great many human tasks on the “production” side but the lawyer is still indispensable in carrying out key “creative” work.

I also noted in my post that I would not object if our clients used LLMs to brief themselves about issues before talking to me. That seemed to me a good use of these emerging tools. However, as many recent articles have noted, current AI language models may deliver incorrect information with great confidence, misleading the unwary. So, beware!

Let me share one cautionary example. I asked ChatGPT (version 3.5, if you must know) to generate a “broad-weighted antidilution provision” using simple language. Such a provision grants an investor additional shares in a company if there is a “down round” (i.e., a new round at a lower price than the investor paid). The new shares partially, but not fully, compensate the investor for the decrease in value. ChatGPT responded with the following:

“If the Company issues more shares of stock in the future, and those shares are sold for a price lower than the price paid for your shares, your ownership percentage in the Company will be adjusted to make up for the difference in price. This means that you will receive additional shares to maintain the same percentage ownership in the Company as before the new shares were issued.”

ChatGPT followed my instruction to use simple language (something we at goodcounsel always strive to do), but the content was wrong. It did not describe a “broad-weighted” antidilution provision but rather a “full-ratchet” provision, which adjusts an investor’s equity so that it is completely unaffected by a down round. Full-ratchet antidilution is unusual, and far more favorable to the investor and less favorable to the founders than broad-weighted antidilution. You wouldn’t want to confuse these two concepts. ChatGPT got it wrong, without expressing any doubt or uncertainty to the user.
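To make the difference concrete, here is a minimal sketch of the two adjustment mechanics in Python. It uses the standard broad-based weighted-average formula (new conversion price = old price × (A + B) ÷ (A + C), where A is the shares outstanding before the round, B is the shares the new money would have bought at the old price, and C is the new shares actually issued). The function names and the example numbers are my own illustration, not drawn from any particular financing document.

```python
def broad_based_weighted_average(old_price, shares_outstanding, new_shares, new_price):
    """New conversion price under a broad-based weighted-average provision.

    old_price:          price per share the investor originally paid (conversion price)
    shares_outstanding: fully diluted shares before the down round (A)
    new_shares:         shares issued in the down round (C)
    new_price:          price per share in the down round
    """
    consideration = new_shares * new_price
    b = consideration / old_price  # shares the new money would buy at the old price (B)
    return old_price * (shares_outstanding + b) / (shares_outstanding + new_shares)


def full_ratchet(old_price, new_price):
    """New conversion price under a full-ratchet provision: simply the lower price."""
    return min(old_price, new_price)


# Hypothetical down round: investor paid $1.00/share; the company has
# 10,000,000 shares outstanding and now sells 2,000,000 shares at $0.50.
broad = broad_based_weighted_average(1.00, 10_000_000, 2_000_000, 0.50)
ratchet = full_ratchet(1.00, 0.50)

print(f"Broad-based weighted average: ${broad:.4f}")  # a modest adjustment
print(f"Full ratchet:                 ${ratchet:.4f}")  # resets to the new, lower price
```

In this hypothetical, the weighted-average provision nudges the conversion price down only slightly (to about $0.92), while the full ratchet drops it all the way to $0.50, which is exactly why confusing the two terms matters so much to founders.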

This illustrates the risk involved in asking AI about something entirely outside the scope of your current knowledge; it may be hard for you to assess (without additional confirmatory research) whether the AI is correct. The risk is lower when asking AI to help you explore areas where you already have background knowledge.

AI is evolving quickly, so what I just wrote may be outdated quite soon. What is important is to be a savvy user of AI tools and understand the use cases in which they are trustworthy and those in which they are more prone to error.

