AI Misinformation Expert’s Testimony, Drafted by AI, Contains Inaccuracies.

By Press Room | January 1, 2025

AI-Generated Legal Document Sparks Controversy in Minnesota Election Law Case

Jeff Hancock, a prominent Stanford University misinformation expert, is embroiled in a legal controversy after admitting that he used artificial intelligence to draft a court filing containing fabricated citations. The case centers on a Minnesota law that prohibits the use of AI to deceive voters before an election, which is being challenged on First Amendment grounds by the Hamilton Lincoln Law Institute and the Upper Midwest Law Center. Hancock, serving as an expert witness for the state, submitted a declaration containing several citations that turned out not to exist, prompting opposing counsel to ask the court to throw out the document.

Hancock, who billed the state $600 an hour for his services, attributed the fabricated citations to ChatGPT running OpenAI's GPT-4o model, which he used to help draft the declaration. The Minnesota Attorney General's Office, representing the state, says it was unaware of the false citations until opposing counsel raised the issue, and has asked the court for permission to let Hancock file a corrected declaration. The episode raises pointed questions about the ethics of using AI in legal proceedings and the potential for the technology to undermine the integrity of the judicial process.

Hancock argues that using AI for drafting legal documents is becoming increasingly common, citing the integration of generative AI tools into software like Microsoft Word and Gmail. He also points to the widespread use of ChatGPT among academics and students for research and drafting purposes. However, this defense raises concerns about the potential for AI-generated content to perpetuate misinformation and inaccuracies within the legal system. The incident highlights the need for clear guidelines and regulations regarding the use of AI in legal contexts.

This case is not the first instance of AI-generated legal documents causing controversy. Earlier this year, a New York court dismissed an expert’s declaration after discovering that Microsoft’s Copilot was used to verify mathematical calculations within the document. In other instances, lawyers have faced sanctions for submitting AI-generated briefs containing fabricated citations. These cases underscore the growing awareness within the legal community of the potential pitfalls associated with relying on AI-generated content.

Hancock, a nationally recognized expert on misinformation and technology, explained that he used GPT-4o to review academic literature on deepfakes and to draft substantial portions of his declaration. He says the model misread placeholder notes he had left for citations to be added later, producing the fabricated references. Frank Bednarz of the Hamilton Lincoln Law Institute, representing the challengers, criticized the Attorney General's Office for declining to retract the report containing the fabrications, citing attorneys' ethical obligations to the court.

The episode feeds a broader debate about the ethics of AI across professional fields. As the technology advances, clear guidelines for its responsible use become more urgent, and the legal profession in particular must reckon with AI's capacity to both strengthen and undermine the judicial process. The outcome of this case, and the discussion it has prompted within the legal community, will likely shape how attorneys and expert witnesses use AI in future proceedings.

The controversy also highlights the need for transparency and accountability when AI is used in settings where accuracy and truthfulness are paramount, and for legal professionals to understand the limitations and biases of these tools well enough to prevent their misuse. More broadly, it raises questions about the responsibilities of experts who rely on AI in their work and about how to preserve the trustworthiness of expertise in an age of increasingly sophisticated technological tools.
