Stanford professor accused of using AI to fabricate testimony in Minnesota case against conservative YouTuber.
Jeff Hancock's testimony was submitted to court by Minnesota Attorney General Keith Ellison.
A Stanford University "expert" has been accused of using AI to craft testimony later submitted by Minnesota Attorney General Keith Ellison in a politically charged case.
Jeff Hancock, a professor of communications and founder of the esteemed school's Social Media Lab, gave an expert statement in a case concerning a satirical conservative YouTuber named Christopher Kohls. The legal dispute centers on Minnesota's recent prohibition on political deepfakes, which the plaintiffs contend infringes on free speech.
Ellison submitted Hancock's testimony to the court in defense of the law. Hancock is renowned for his research on how people use technology to deceive, from detecting lies in text messages and email to identifying fake online reviews, according to Stanford's website.
The plaintiff's lawyers have asked the Minnesota federal judge to throw out Hancock's testimony, claiming that a study he cited does not exist.
In a recent 36-page memo, the lawyers contended that Prof. Hancock's declaration cites a nonexistent study, as no article with the cited title could be found.
The study, titled "The Influence of Deepfake Videos on Political Attitudes and Behavior," was supposedly published in the Journal of Information Technology & Politics. The Nov. 16 filing confirms that the journal is legitimate but notes that it has never published a study with that title.
The lawyers contended that the study was likely a "hallucination" generated by an AI large language model such as ChatGPT: although the journal exists, the pages cited in the declaration belong to unrelated articles.
According to the memo, the hallucination in Hancock's declaration casts doubt on the entire document, particularly because much of its commentary offers no methodology or analytical logic and rests solely on expert opinion. On those grounds, the memo also challenges the conclusions Ellison drew from it.
The memo allows that Hancock may have intended to cite a legitimate study supporting the claim in paragraph 21, but argues that the fabricated citation undermines the accuracy and reliability of the entire statement.
The memorandum bolsters its assertion that the citation is invalid by detailing the lawyers' extensive efforts to locate the study.
According to the document, no article authored by "Hwang" and containing the term "deepfake" can be found on Google or Bing, the most widely used search engines. A search on Google Scholar, a specialized search engine for academic papers and patent publications, likewise turns up no article matching the citation.
The filing flatly states that the article does not exist and that the citation was not a copy-paste error.
The filing argues that a declaration even partially fabricated is wholly unreliable and should be excluded from the court's consideration.
The lawyers urged the court to investigate how the fabrication occurred and to take further action as warranted, given their conclusion that at least some of Prof. Hancock's declaration rests on material generated by an AI model.
Planet Chronicle Digital has reached out to Stanford University, Hancock and Ellison for comment.