A federal appeals judge turned to an AI chatbot and Urban Dictionary to help determine whether the phrase “monkey a–” carried a racial connotation when used against Black employees.
The unusual move came during a discrimination case brought by Thomas Michael Smith and Monaleto Sneed, who said their supervisors at PAM Transport Inc. targeted them with terms like “monkey” and “monkey a–.” They argued the insults were racially charged and part of broader discrimination at the trucking company.
Last year, District Judge Eli Richardson in Tennessee sided with PAM Transport, tossing out the claims. But on Thursday, a three-judge panel at the Sixth Circuit Court of Appeals reversed that ruling and revived the lawsuit, the Daily Beast reported.

Judge Chad Readler, a Trump appointee, wrote a concurring opinion that stood out for its methods. He acknowledged that the Tennessee court “had difficult issues to address in the delicate setting of race discrimination.” One question, he said, was how to assess intent and context when “the individual who purportedly engaged in race discrimination is a member of the plaintiff’s race.”
That’s when Readler turned to unconventional sources. “Does the term ‘monkey a–,’ a phrase understandably not included in traditional dictionaries, have the same racial connotation as the term ‘monkey’?” he wrote. To help answer, he cited definitions from Urban Dictionary and even ChatGPT.
When Readler asked the chatbot what the phrase meant, it responded, “Racial? Not inherently — but can be, depending on how and to whom it’s said.”
While Readler included the AI analysis in his concurrence, the panel’s main opinion, written by Judge Jane Stranch, an Obama appointee, made the call more directly. Stranch wrote that “there is no meaningful difference between the terms ‘monkey’ and ‘monkey a–’ when used by a supervisor against an African American employee, as alleged here.”
Stranch also noted in a footnote that Readler’s reliance on ChatGPT was unusual. She explained that while ChatGPT can consolidate massive amounts of text, “it does not independently verify the accuracy of any material or its unknown sources.”
The Sixth Circuit’s ruling clears the way for Smith and Sneed to continue their case. It also highlights how questions of language and race can land in unexpected legal territory, and how even judges are now testing tools like AI chatbots when conventional references fall short.
Readler, who was nominated to the appeals court by Donald Trump in 2018 and confirmed the following year, has not commented publicly on why he included ChatGPT’s explanation in his reasoning.
The case now heads back to the district court, but the fact that AI made its way into a federal appellate opinion has already drawn attention. Whether or not the practice becomes more common, the ruling is a reminder of just how unsettled and sensitive the legal system’s handling of language and racial discrimination remains.