AI and Avoiding Bad Legal Advice (Part 1)

In recent weeks, you may have seen news about Artificial Intelligence (AI) tools like OpenAI’s ChatGPT and Google’s Bard. You may even have tried them out for yourself.

These tools let users hold a human-like conversation with an AI in a chat window. The AI can answer questions and assist with tasks like composing emails and essays, as well as more complex ones like planning a friend’s baby shower or generating meal ideas based on what’s in the fridge.

Such powerful tools have many possible uses, and the list grows every day as tech companies race to outdo one another. While most uses of AI are innocent enough, some potential uses have concerned leaders in various industries.

For attorneys, the greatest concern is the use of AI for legal advice. On the surface, that concern may seem to be about lost billable hours, but the reality is that the legal industry already contends with people using search engines, like Google Search, to answer their legal questions and draft certain legal documents.

Such searches typically turn up information that is out of date, applies to the wrong jurisdiction, or isn’t geared to the person’s particular situation. Because of this, even a quick search for a seemingly minor question can lead someone to incorrect information, and reliance on that misinformation may have devastating consequences.

When using Google Search, people regularly find themselves reading posts from law firms or legal publications that, at the very least, often include disclaimers urging readers to seek out an attorney for legal advice.

In contrast, although AI tools have disclaimers on their homepages and in their user agreements, people can be lulled into a false sense of security by how human the tools seem and by the many claims about how intelligent they are.

For example, in January of this year, reports surfaced that four professors at the University of Minnesota Law School had prompted ChatGPT to answer exam questions for four different courses and then graded the tests blindly.

The professors claimed that the AI performed, on average, at the level of a C+ student, earning a low but passing grade in all four courses. A few weeks later, OpenAI announced that, thanks to a new update, ChatGPT could not only pass the bar exam but also score in the top 10%.

With news and claims like these circulating online, it’s no wonder that legal experts remain concerned the public will forgo the advice of an attorney in favor of a quick, free answer from an AI tool.

While it’s impressive that an AI tool can perform so well on legal exams, it is important to remember that the actual practice of law is far more complex and that AI tools are ultimately upgraded versions of existing search engines, though with some limitations.

This is because AI tools are trained to answer questions using information they’ve previously obtained from the internet. The tool is periodically retrained and its data updated, but, unlike a regular search engine, it may not always have the latest information.

Moreover, the information it draws on is typically no different from what can be found through a regular Google search. The answer may come more quickly, condensed and presented in an easy-to-understand way, but the tool isn’t interpreting that information or verifying its accuracy.

AI tools sometimes even invent “facts.” In recent months, reporters at The Guardian noticed that ChatGPT had cited entire Guardian articles that the newspaper never actually published, and an artnet news editor discovered that ChatGPT had made up an essay called “The Case Against Art History,” allegedly written by the (real) theorist Hal Foster and published in the (also real) art journal October.

An assistant clinical professor at the University of Connecticut School of Law recently discovered that when she asked ChatGPT to provide her with 10 cases related to Connecticut’s Recreational Land Use Statute, all 10 cases were invented by the AI. Not one was a real case.

This means that not only are AI tools unable to do the necessary critical thinking and interpretation of cases and laws, but they may even fabricate the underlying court rulings and statutes.

***** To be continued next Friday *****

Photo by Andrea De Santis on Unsplash

April 21, 2023 | Articles