
Defamation in the AI Age

October 28, 2025

Imagine you are a newspaper reporter working on a feature piece about a high-profile college football player. To assist in your reporting, you ask a generative artificial intelligence (AI) platform about the player’s life off the field. The AI platform responds that the player was “arrested for suspicion of a DUI his freshman year of college” and was “credibly accused of cheating in high school.” Without vetting or independently confirming this information, you include those statements in your article. Unfortunately, a different college football player with the same name had indeed been arrested for a DUI and was credibly accused of cheating—but that player was not the subject of your article. The player sues you, the newspaper and the AI platform for defamation, alleging that the statements are false and ruined his chance to play in the National Football League. What happens next?

As the public increasingly incorporates AI tools into their lives and jobs, courts have begun to grapple with this scenario—and the many other legal questions that have arisen in the AI age. Dozens of AI cases are winding their way through U.S. and international courts on a wide variety of legal topics.[1] Copyright litigation, in particular, has spiked as content creators (newspapers, authors, music companies, and others) have sued AI companies.[2] Courts have begun considering whether AI-generated works are copyrightable,[3] what constitutes copyright infringement, and whether fair use is a viable defense. Even the U.S. Copyright Office has weighed in with a three-part series on AI.[4] A few cases have already been settled or decided in trial courts, and some decisions have been appealed.[5] In one case, the parties agreed to a whopping $1.5 billion class action settlement.[6]

By contrast, only a handful of reported cases allege defamation by AI, and most of those have either settled or are still being litigated.

For example, in July 2023, technologist Jeffery Battle sued Microsoft for $25 million after a Bing search using ChatGPT confused him with Jeffrey Battle, a convicted terrorist.[7] Bing incorrectly stated that Battle “was sentenced to eighteen years in prison after pleading guilty to seditious conspiracy and levying war against the United States.” Microsoft moved to compel arbitration, which the court granted, and the case has been stayed pending the resolution of arbitration. As of the date of this article, the court’s docket does not indicate whether arbitration has been completed.

In April 2025, activist Robby Starbuck sued Meta in Delaware Superior Court, alleging that an AI chatbot inaccurately identified him as a participant in the Jan. 6, 2021, insurrection.[8] Four months later, the parties settled the case.[9]

In March 2025, Wolf River Electric, a solar panel company with 200 employees, sued Google in Minnesota state court after an AI platform inaccurately stated that the company was “currently facing a lawsuit from the Minnesota Attorney General due to allegations of deceptive sales practices regarding their solar panel installations.”[10] Wolf River Electric claims that the statements harmed its reputation and its relationships with customers, some of whom terminated their business with the company after seeing the false assertion online. The case was removed to federal court, but Wolf River Electric filed a motion to remand in July 2025, which remains pending.

However, one case has provided some answers. In June 2023, syndicated radio host Mark Walters sued OpenAI for defamation in Georgia Superior Court after ChatGPT, responding to a prompt from journalist Fred Riehl, incorrectly stated that Walters had been “accused of defrauding and embezzling funds from the Second Amendment Foundation” in a lawsuit.[11] Walters is not a party to that lawsuit, nor has he been accused of defrauding or embezzling funds.

OpenAI filed a motion to dismiss, arguing that Riehl did not and could not understand the statement as defamatory, that ChatGPT’s output does not constitute publication, and that Walters is a public figure who cannot show actual malice. The court denied the motion to dismiss without explanation,[12] but after discovery, OpenAI filed a motion for summary judgment, which the court granted on May 19, 2025.

Judge Tracie Cason granted summary judgment in favor of OpenAI for three independent reasons. First, the court found that the AI output did not communicate a defamatory meaning as a matter of law because, under the circumstances, a reasonable reader in Riehl’s position could not have concluded that ChatGPT communicated actual facts. ChatGPT repeatedly warned users, including in its Terms of Use, that it sometimes provides factually inaccurate information, and the output itself contained warnings, contradictions, and other red flags signaling that the information was not factual. Separately, the court determined that Riehl did not actually believe that Walters was accused of embezzling funds; Riehl testified that he did not believe the response and confirmed that the output was false within 90 minutes.

Second, Walters could not show fault under either the negligence or the actual malice standard. Walters presented no deposition testimony, documents, or expert report showing that OpenAI published the allegedly defamatory statements negligently. The court also found that Walters, a prominent radio host and commentator, was a limited-purpose public figure and that OpenAI did not act with actual malice. The fact that OpenAI went to “great lengths to reduce hallucination in ChatGPT” and issued extensive warnings that errors were possible foreclosed any finding that OpenAI acted with subjective knowledge of falsity or reckless disregard for the truth.

Third, Walters could not recover damages. Even if the statements were defamatory per se, any presumption of damages was rebutted by Walters’ admission that he had not been harmed by ChatGPT’s statements; indeed, Riehl, the only person who received the output, was “always skeptical” about its veracity. Moreover, because the statements involved a “matter of public concern” (a publicly filed lawsuit), Walters could recover presumed or punitive damages only by showing “actual malice,” which he could not do.

While Walters v. Open AI is helpful (after all, it is the first known U.S. decision on defamation by AI to offer thorough reasoning), it (1) addresses only a few of many issues, (2) represents the views of only one trial court, and (3) features novel facts unlikely to be repeated. Many cases could arise presenting different factual and legal questions. For example, what happens if an output is received by many individuals or repeated by others, if the recipient believes the output to be true, if a plaintiff notifies the AI platform about the allegedly defamatory content, or if a plaintiff ties concrete economic and reputational damages to an output?

Until the legal landscape of defamation and AI settles, reporters and news organizations must rely on traditional defamation and reporting principles. News organizations should establish an AI-use policy, and reporters should ask their editors whether one already exists.

Some general practices can mitigate risk. Use AI platforms that are reliable and consistently updated. Do not assume outputs are correct; vet them as you would information from any other source. Human oversight is critical. Consider whether to disclose to readers that AI-generated content was used in your reporting; the more you rely on AI, the more transparency you should provide. Transparency builds relationships with readers and signals that you are trustworthy. Used responsibly, AI’s benefits can be enjoyed while mitigating the risk of landing on the receiving end of a defamation lawsuit.



[1] See Bruce Barcott, AI Lawsuits Worth Watching: A Curated Guide, Tech Policy Press, July 1, 2024, https://www.techpolicy.press/ai-lawsuits-worth-watching-a-curated-guide/.

[2] See Kate Knibbs, Every AI Copyright Lawsuit in the US, Visualized, WIRED, December 19, 2024 (updated May 2, 2025), https://www.wired.com/story/ai-copyright-case-tracker/.

[3] Thaler v. Perlmutter, 130 F.4th 1039 (D.C. Cir. 2025).

[5] Thomson Reuters Enter. Ctr. GmbH v. Ross Intelligence Inc., 765 F. Supp. 3d 382 (D. Del. 2025) (rejecting fair use defense), appeal docketed, No. 25-2153 (3d Cir. June 24, 2025); Kadrey v. Meta Platforms, Inc., No. 23-CV-03417, 2025 WL 1752484 (N.D. Cal. June 25, 2025) (use of works to train AI models is fair use; additional briefing underway); Bartz v. Anthropic PBC, No. 24-CV-05417, 2025 WL 1741691 (N.D. Cal. June 23, 2025) (use of purchased copies of books to create a permanent digital library constitutes fair use, but use of pirated books to create such a library does not; parties now considering settlement).

[7] Battle v. Microsoft, No. 23-cv-1822 (D. Md.); Eugene Volokh, New Lawsuit Against Bing Based on Allegedly AI-Hallucinated Libelous Statements, The Volokh Conspiracy, July 13, 2023, https://reason.com/volokh/2023/07/13/new-lawsuit-against-bing-based-on-allegedly-ai-hallucinated-libelous-statements/.

[8] Starbuck v. Meta, No. N25C-04-093 (Del. Super. Ct.).

[9] Joseph De Avila, Meta, Robby Starbuck Settle AI Defamation Lawsuit, The Wall Street Journal, August 8, 2025.

[10] LTL LED, LLC v. Google LLC, No. 25-cv-02394 (D. Minn.); Eugene Volokh, Large Libel Models: Small Business Sues Google, Claiming AI Overview in Searches Hallucinated Attorney General Lawsuit, The Volokh Conspiracy, June 11, 2025, https://reason.com/volokh/2025/06/11/large-libel-models-small-business-sues-google-claiming-ai-overview-in-searches-hallucinated-attorney-general-lawsuit/.

[11] Walters v. Open AI, LLC, No. 23-A-04860-2 (Ga. Super. Ct. June 5, 2023).

[12] Walters v. Open AI, LLC, No. 23-A-04860-2, 2024 Ga. Super. LEXIS 322 (Ga. Super. Ct. Jan. 11, 2024).