
Lawyer faces punishment after ChatGPT gave him fake cases to cite in a brief filed with the court

One of the problems with conversational AI chatbots at this stage is that they tend to hallucinate. In other words, they make up information that fits the user’s request. ChatGPT is a language model designed to provide the user with an answer to a question, and the AI chatbot will come up with information to fill in any gaps, even if what it comes up with isn’t true.

The New York Times (via Mashable) reported on a lawyer named Steven Schwartz of Levidow, Levidow & Oberman, who has practiced law for 30 years. But thanks to ChatGPT, Schwartz may soon be looking for a new profession. You see, Schwartz represented a client named Roberto Mata, who sued Colombian airline Avianca after his knee was injured by a serving cart that crashed into him during a flight.

Schwartz’s decision to use ChatGPT could cost him his 30-year legal career

Avianca asked a judge to dismiss the case, and Mata’s lawyers, including Schwartz, filed a brief citing similar cases that had been heard in court in an attempt to convince the judge that the case should not be dismissed. But this is where ChatGPT and Schwartz messed up. Schwartz had filed the case when it first landed in state court and provided the legal research after it was transferred to federal court in Manhattan.


To bolster his brief, Schwartz turned to ChatGPT to help him find similar cases that had gone to court. ChatGPT produced a list of these cases: Varghese v. China Southern Airlines, Shaboon v. Egyptair, Petersen v. Iran Air, Martinez v. Delta Airlines, Estate of Durden v. KLM Royal Dutch Airlines, and Miller v. United Airlines. That sounds like quite a list of cases to bring before a court. There was only one very small problem: none of those cases were real! They were all made up by ChatGPT.

The lawyer never considered that the AI chatbot could give him false information

Avianca’s legal team and the judge quickly realized they couldn’t find any of the cases mentioned in Schwartz’s filing. In an affidavit he filed with the court, Schwartz included a screenshot of his interaction with ChatGPT and said that he was “unaware of the possibility that its content could be false.” The judge has scheduled a hearing for next month to discuss “potential sanctions” for Schwartz.

Let this be a warning to anyone planning to let AI do some of their work for them. You might think you’re saving time, but you could end up in more trouble if you replace your own hard work with results from an AI chatbot. I would never want my articles to be written by AI. Not only are you hurting yourself by using AI, you are also lying to your audience by claiming to be the author of something you didn’t write, and you may be feeding that audience false information.


It doesn’t matter whether you are a student who plans to use an AI chatbot to help with a paper or a lawyer who wants to cite cases. AI chatbots can hallucinate and give you false information. That’s not to say they can’t be helpful and might point you in the right direction. But once the pointing is done, it’s up to you to make sure the information you get is legit.


  • May 28, 2023