Artificial intelligence, also known as AI, has become an increasingly popular topic of discussion in many industries, particularly as a tool for professions that require a large amount of writing. However, for reasons both legal and practical, substituting AI programs like ChatGPT or Bard for human work is unwise. At best, these programs can supplement the work of human beings, but overreliance on them can lead to major headaches for everyone involved, especially in the legal field.
Explaining Artificial Intelligence

Put simply, artificial intelligence is a broad term referring to any computer program intended to mimic some aspect of human intelligence, such as pattern recognition, speech recognition, visual perception, or certain aspects of decision making. In the context of programs like ChatGPT, however, the term usually refers to a specific type of AI known as a "large language model," or LLM, which is meant to "learn" how to create sentences by identifying which words are likely to follow other words. In this way, an LLM can learn to construct grammatically correct sentences, paragraphs, or pages that mimic human speech.
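The core idea of next-word prediction can be sketched with a toy example. The short script below builds a simple bigram counter over a tiny made-up corpus and predicts the most frequent word that follows a given word. This is only an illustration of the underlying concept; real LLMs like ChatGPT use neural networks trained on vast datasets, not simple word counts.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# each word in a tiny corpus, then predict the most frequent follower.
# Real LLMs are vastly more sophisticated; this only sketches the idea.

corpus = (
    "the court denied the motion "
    "the court granted the appeal "
    "the judge denied the claim"
).split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))     # "court" appears most often after "the"
print(predict_next("denied"))  # "the" always follows "denied" here
```

Scaled up to billions of words and far richer statistical patterns, this same principle of "what word comes next" is what lets an LLM produce fluent text, and also why its output can be fluent without being true.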
Legal Hurdles to Using AI-Generated Content

First and foremost, anyone who uses AI for the purposes of content generation should know there are legal risks to doing so. According to a rule issued by the U.S. Copyright Office, only content created by a human being is considered copyrightable. This is consistent with a legal opinion in Naruto v. Slater, in which copyright was denied to a crested macaque (who was being represented by the animal rights group known as People for the Ethical Treatment of Animals, or PETA) who took a "selfie" using a photographer's camera. In both cases, the rationale for denying copyright was the same: only content with a "creative contribution from a human actor" can receive copyright protections.
Risks of Copyright Infringement

To make matters worse, just because AI content is not copyrightable does not mean that it is incapable of infringing copyright itself. Artificial intelligence gains the ability to generate new content by "training" on existing works, essentially learning to mimic the information it is fed and recombining it according to prompts given by human users. If the data it derives its output from includes copyrighted material that it was not authorized to use, then the AI may infringe on the copyright of the person who created the original content.
The Dangers of Overreliance on AI

However, the ultimate case study in how AI can be dangerous in a legal context comes from the case Mata v. Avianca, Inc. At its core, the lawsuit was a simple matter about pursuing a personal injury claim against a bankrupt airline after the statute of limitations had passed. However, the case took a sudden turn when it was discovered the plaintiff's attorney had used ChatGPT to write and research his filings to the court, either partly or entirely, and had failed to check the output of the AI program. As a result, the filing had several serious errors, including citations to cases that did not exist. Worse still, when he was asked to provide these cases to the court, he asked ChatGPT to provide them, at which point the AI synthesized the false cases for him.
Dealing With Hallucinatory Data

What the attorney in Mata did not realize is that AI programs are prone to a phenomenon referred to as "hallucinations." This is a polite way of saying that when an AI program is given a prompt it cannot accurately fulfill based on the data it has available, it will simply invent a response. In a human being, this would probably be referred to as a "lie" rather than a hallucination, but AI lacks the ability to intentionally deceive its users.
However, this points to the greater issue with AI as a whole: although it is referred to as "artificial intelligence," it is not actually intelligent. It cannot create things of its own volition; it can only attempt to mimic data fed to it by humans. It cannot make value judgments about what it should or should not do; it can only attempt to fulfill prompts given to it in accordance with the parameters of its programming. And if it cannot fulfill a request by a user, it will "hallucinate" an answer rather than accept that it is incapable of doing what it has been asked to do.
For all the reasons previously outlined, people should tread lightly when using AI for any kind of professional work, and lawyers in particular should take care when using AI to assist in their practice. Review anything generated by an AI to ensure it is accurate and compliant with the law and, at a minimum, confirm that the cases you cite to actually exist.
1 https://www.ibm.com/topics/artificial-intelligence
2 “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence,” 88 FR 16190 (Mar. 16, 2023). https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence
3 888 F.3d 418 (9th Cir. 2018).
4 U.S. Copyright Office Review Board, Decision Affirming Refusal of Registration of a Recent Entrance to Paradise at 2 (Feb. 14, 2022), https://www.copyright.gov/rulings-filings/review-board/docs/a-recent-entrance-to-paradise.pdf.
5 Gil Appel, Juliana Neelbauer, and David A. Schweidel, "Generative AI Has an Intellectual Property Problem," Harvard Business Review, April 7, 2023. https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-problem
6 Id.
7 1:22-cv-01461 (S.D.N.Y., 2023).
8 Hayden Field, "OpenAI is pursuing a new way to fight A.I. 'hallucinations,'" CNBC, May 31, 2023. https://www.cnbc.com/2023/05/31/openai-is-pursuing-a-new-way-to-fight-ai-hallucinations.html
At Advocati Marketing, we are committed to creating original content for use in our clients' marketing.