In a recent bizarre incident, a lawyer was found to have used ChatGPT to research cases for a federal filing. Attorney Steven Schwartz used the AI writing tool to cite six cases, complete with detailed accounts of the prior proceedings behind each one.
It was later found that the entire thing had been imagined by the AI language model. However, this isn’t a huge surprise. After Microsoft partnered with OpenAI to bring ChatGPT-powered AI to Bing and Edge, many users complained about the bot making up facts, exaggerating details, or simply going off the rails with sudden emotional outbursts.
Something similar happened here: the model simply made up the cases and hallucinated their details.
What Is “Mandatory Certification Regarding Generative Artificial Intelligence”?
The wave of criticism that followed, along with the nationwide media coverage of this catastrophic federal filing, might already have deterred other lawyers from trying their hand at ChatGPT for legal proceedings. However, Judge Brantley Starr, a federal judge in Texas, doesn’t want to take any chances.
The federal court for Texas’s Northern District allows every judge to set their own rules for their courtroom. Starr took advantage of this to issue an order titled “Mandatory Certification Regarding Generative Artificial Intelligence.”
Under it, all attorneys will be required to attest that “no portion of the filing was drafted by generative artificial intelligence” or, if any portion was, that it was “checked by a human being.”
From now on, every lawyer who presents a case before Starr will be required to sign this form and vouch for the authenticity of every part of their filings, including quotations, legal analysis, and citations.
It hasn’t been explicitly stated whether this order is a direct result of the chaos that ensued from Schwartz’s filing. But given the timeline of events and the content of the order, it’s safe to draw the connection.
Use Of ChatGPT And Other AI Models Restricted In Legal Proceedings
In the age of technology, it might be hard for many to understand the need for a step that limits the use of AI in legal proceedings. After all, AI could expedite the process.
Sure, these tools are powerful and can serve the legal industry in plenty of other ways: they can help find errors in a document, anticipate questions that might come up during oral arguments, and much more. But legal filings, which depend heavily on accurate data to direct the course of a trial, shouldn’t be left to them.
Not only do they make mistakes, but they’re also under no oath to be unbiased. Think about it: every practicing attorney has had to take an oath to set aside personal prejudice and beliefs in order to uphold the law. But these AI tools, and the developers who created them, are under no such obligation.
In that case, how do you know whether they can be trusted to produce neutral filings? The wariness isn’t limited to the courtroom, either: Apple recently banned the internal use of ChatGPT.
For now, this is just one judge sharing his views on the use of AI in a court of law. We have yet to see whether other judges follow suit or beg to differ.