Breaking News! Top Wall Street Law Firm Hit by Major AI "Hallucination" Mishap! Sullivan & Cromwell's Court Filing Errors Lead to Public Apology

Sullivan & Cromwell (S&C), a prestigious Wall Street law firm, recently faced backlash after submitting court documents riddled with AI-generated errors in a high-profile bankruptcy case. The incident highlights growing concern over AI "hallucinations" in the legal industry.

In today's rapidly advancing AI landscape, AI "hallucinations," in which models fabricate information or invent citations, have become a significant concern in the legal sector. On April 22, The Guardian reported that the elite Wall Street law firm Sullivan & Cromwell had submitted court documents containing multiple serious errors attributed to AI assistance in bankruptcy proceedings related to the Prince Group, sparking a public uproar.

The incident stems from a key document S&C filed with the New York federal bankruptcy court on April 9. The document pertained to a series of legal disputes involving the Prince Group, controlled by Chinese businessman Chen Zhi. U.S. prosecutors had previously accused Chen Zhi of wire fraud and money laundering, alleging that he operated a "forced labor scam park" in Cambodia through which billions of dollars were defrauded from victims in the U.S. and around the world. The Prince Group denied all the accusations, calling the charges baseless. U.S. authorities also sought to seize nearly $9 billion in Bitcoin, deeming it criminal proceeds of the Prince Group. Chen Zhi was arrested earlier this year in Cambodia and extradited to China at the request of Chinese authorities. This backdrop has drawn international attention to the Prince Group case and made S&C's filing particularly consequential.

However, the document "crashed." The opposing law firm, Boies Schiller Flexner (BSF), discovered during their review that S&C's document contained multiple AI-generated "hallucination" errors: incorrect citations of specific sections of the U.S. Bankruptcy Code, erroneous references to other case conclusions, and severely distorted summaries of past cases. As a law firm with over 900 lawyers, renowned in corporate restructuring and bankruptcy, this mishap by S&C shocked the industry.
On April 18 (a Saturday), Andrew Dietderich, S&C's global restructuring co-head, personally wrote to U.S. Bankruptcy Judge Martin Glenn, publicly apologizing on behalf of the entire team. The letter stated: "We deeply regret this. I apologize on behalf of the entire team. On Friday, I called BSF to thank them for promptly pointing out the issues and apologized to them directly." Dietderich emphasized that S&C had established "comprehensive AI tool usage policies and training requirements" to prevent such errors, but in this instance the "AI policy was not followed," and even the secondary review process failed to catch the AI-generated erroneous citations.

S&C quickly submitted revised documents to the court. The firm stated that while the use of AI tools to assist with work is not prohibited, ethical rules oblige it to ensure that all materials submitted to the court are accurate. S&C did not disclose which AI tool was used (whether ChatGPT, Claude, or a legal-specific AI), nor did it reveal the identities of the lawyers involved or whether any internal disciplinary action was taken.

This incident is not an isolated one, but it draws attention because it involves a top Wall Street law firm. In recent years, AI applications in the legal field have boomed: from contract review and case retrieval to drafting legal opinions, AI tools are touted as near-magical efficiency boosters. Yet the "hallucination" problem persists: AI may fabricate nonexistent case law, misinterpret regulations, or improperly summarize facts. Since 2023, lawyers have been fined or publicly reprimanded by courts in several cases for relying on AI-generated false citations, but a mistake of this kind by S&C, an industry benchmark, still surprised many.

Legal experts indicate that this case highlights the limitations of current AI technology in high-risk areas. Although S&C claims to have a sound AI governance framework, lapses in policy enforcement and human review directly led to errors entering official documents. U.S. Bankruptcy Judge Martin Glenn has yet to formally respond to the apology letter, but the industry generally believes this incident may prompt more courts and law firms to re-evaluate AI usage guidelines.

On a broader scale, AI "hallucination" incidents are accelerating the evolution of regulations in the legal sector. Organizations like the American Bar Association (ABA) have long advocated that lawyers must bear ultimate responsibility when using AI, and cannot consider AI outputs as "authoritative." Some law firms are introducing "AI + human dual review" mechanisms and even developing proprietary legal verification AI to cross-check output accuracy. However, as S&C's experience shows, even top institutions may relax their vigilance under high-pressure deadlines.

The Prince Group case itself involves cross-border crime, massive asset recovery, international extradition, and other complex factors. As counsel to the liquidators, S&C produces filings whose quality directly affects the direction of the case. Although the AI errors were promptly corrected and caused no substantive judicial consequences, the outcome could have been dire had BSF not caught them: judges might have made incorrect rulings, parties' rights could have been compromised, or the errors could have triggered a wave of appeals.

S&C's apology letter also sent an indirect signal: the firm is willing to confront problems transparently, which somewhat eased outside suspicion that "even top law firms are cutting corners with AI." It is also a reminder to the entire industry that technological advancement cannot replace professional judgment. In the future, as more powerful legal AI models emerge, balancing efficiency and accuracy will become a shared challenge for lawyers, judges, and regulators.

Notably, after the case came to light, legal professionals debated it heatedly on social media. Some felt that "AI finally brought Wall Street law firms down from their pedestal," while others called for national guidelines on legal AI use to be established promptly. Either way, the incident is likely to become a landmark case for legal technology in 2026, reminding practitioners that the pursuit of AI efficiency must never loosen the commitment to truth.

The S&C incident may encourage more law firms to share their AI governance experience openly. Boies Schiller Flexner's timely catch also demonstrated the importance of peer oversight: in adversarial litigation, the opposing side is often the most rigorous reviewer.

Winzheng Lab believes that as generative AI penetrates deeper into the legal field, similar "hallucination" incidents may evolve from isolated cases into an industry-wide challenge. Regulators may need to consider more detailed AI disclosure requirements, such as requiring lawyers to mark "AI-assisted sections" and attach human review statements when submitting documents. Only then can AI truly become a valuable assistant to legal professionals rather than a latent source of risk.