No AI in my courtroom unless a human verifies its accuracy | Ars Technica

Illustration of a judge's gavel on a digital background similar to a computer circuit board.

Getty Images | the-lightwriter

A Texas federal judge has a new rule for lawyers in his courtroom: no AI-written filings unless the AI’s output is checked by a human. US District Judge Brantley Starr also ordered attorneys to file certificates attesting that their documents were either written or reviewed by humans.

“All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being,” according to a new “judge-specific requirement” in Starr’s courtroom.

A certification must be submitted for each case and would cover all filings in the case. A sample certification states that the requirements apply to any language in a filing, “including quotations, citations, paraphrased statements, and legal analysis.”

Starr, a Trump nominee in the US District Court for the Northern District of Texas, wrote that AI platforms “are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them.” AI platforms “in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations.”

Starr’s new standing order on AI was released yesterday, according to an article by legal expert Eugene Volokh. “Note that federal judges routinely have their own standing orders for attorneys practicing in their courtrooms. These are in addition to the local district rules and the normal federal rules of civil and criminal procedure,” Volokh wrote.

Lawyer cited fake cases invented by ChatGPT

Starr’s order came after New York attorney Steven Schwartz admitted he used ChatGPT to help write court papers that cited six nonexistent cases invented by the AI tool. Schwartz and his associates are awaiting possible sanctions from Judge Kevin Castel of the US District Court for the Southern District of New York.

In what Castel called “an unprecedented circumstance,” the judge said the plaintiff’s attorneys’ documents included six “bogus judicial decisions with bogus quotes and bogus internal citations.” The filings included fabricated case names and a series of “excerpts” from bogus rulings that cited additional fake precedents invented by the AI.

An affidavit filed by Schwartz states that he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.” He also stated that he had “never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false.”

Judge: AI swears no oath, holds no allegiance

Starr’s new standing order also discussed potential bias in AI tools. “While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath,” he wrote.

AI systems “hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth),” Starr continued. “Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.”

Going forward, Starr’s order said, the court “will strike any filing from an attorney who fails to file a certificate on the docket attesting that the attorney has read the Court’s judge-specific requirements and understands that he or she will be held responsible under Rule 11 for the contents of any filing that he or she signs and submits to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing.”

