Wednesday, April 22, 2026 Live Desk

A top Wall Street law firm used AI to draft a court filing and it completely made up facts

Sullivan & Cromwell admitted errors in a bankruptcy case after AI-generated content included fake legal citations and false claims

By Zwely News Staff | Shared Newsroom

April 22, 2026, 5:16 AM · 3 min read

At a glance

What matters most

  • Sullivan & Cromwell used AI to help draft a legal filing in a bankruptcy case involving the Prince Group, but the document contained false citations and made-up facts.
  • The firm admitted the errors to a federal judge in New York and issued a public apology, calling the incident a serious lapse.
  • This is one of the first confirmed cases of AI hallucinations in a major U.S. law firm's official court submission.
  • The incident has sparked fresh debate about the risks of integrating AI into high-stakes professional work, especially in law and finance.

Across the spectrum

What people are saying

A quick look at how the same story is being framed from different angles.

On the Left

This incident shows what happens when profit-driven firms rush to adopt unproven technology without proper oversight. Legal work affects people's lives and rights; cutting corners with AI, especially at elite firms that set industry standards, risks eroding public trust in the justice system.

In the Center

AI is becoming a standard tool in many professions, but this case underscores the need for clear protocols. Human review and verification must remain central, especially in high-consequence environments like the courtroom.

On the Right

Mistakes happen, but firms like Sullivan & Cromwell are held to a higher standard. Rather than overregulating AI, the focus should be on personal accountability and better training so professionals can use these tools responsibly.

Full coverage

What you should know

A respected Wall Street law firm has stumbled into the growing debate over artificial intelligence in professional services after admitting it submitted a court filing full of false information generated by an AI tool. Sullivan & Cromwell, known for representing major financial institutions and corporate giants, told a federal judge in Manhattan this week that a recent document filed in a bankruptcy case contained numerous inaccuracies, including citations and claims that never existed outside the AI's output.

The filing, related to the restructuring of the Prince Group, included citations to non-existent court rulings and mischaracterized legal precedents. When inconsistencies were flagged, the firm traced them back to an AI assistant used during the drafting process. The tool, designed to speed up legal research and document preparation, had 'hallucinated', a term used when AI confidently generates false or fabricated content.

In a letter to Judge Loretta Preska, partners at Sullivan & Cromwell apologized and took full responsibility. They described the incident as a breakdown in oversight and said they are launching an internal review of how AI tools are used across the firm. Some of the partners involved bill more than $2,000 per hour, underscoring the high stakes of even a single error in such elite legal work.

The case has drawn sharp attention from legal ethics experts and tech watchdogs alike. While many law firms have quietly adopted AI for routine tasks, this is one of the first confirmed instances where hallucinated content made it into an official court submission by a top-tier firm. The slip-up raises concerns about accountability, especially when AI is used in time-sensitive or complex litigation.

Legal professionals across the country are now re-examining their own AI policies. Some firms have reportedly paused the use of generative AI tools pending new guardrails. Others are calling for mandatory human verification of any AI-assisted work before it's filed with courts.

There's no indication that the firm intended to mislead the court. Instead, the episode highlights a broader tension: as AI becomes faster and more embedded in knowledge work, the risk of overreliance grows, even among the most experienced professionals.

Sullivan & Cromwell declined to name the specific AI tool used, citing confidentiality agreements. But the firm confirmed it is working with its technology providers to improve detection of hallucinated content. For now, the case serves as a cautionary tale about the limits of automation in fields where precision is non-negotiable.

About this author

Zwely News Staff compiles multi-source reporting into concise, viewpoint-aware coverage for readers who want context without noise.

Source Notes

Left · The Guardian Technology · Apr 22, 8:09 AM

AI hallucinations found in high-profile Wall Street law firm filing

Sullivan & Cromwell apologises to New York federal judge for string of errors in documents for Prince Group case. The elite Wall Street law firm Sullivan & Cromwell has told a court that a major filing it made in...

Center · Financial Times · Apr 21, 7:01 PM

Elite law firm Sullivan & Cromwell admits to AI ‘hallucinations’

Firm whose partners bill more than $2,000 per hour apologises to judge for software-driven errors in bankruptcy case
