Technology & Law March 30, 2026

Majority of Federal Judges Are Using AI, First-of-Its-Kind Study Finds

A random-sample survey of 112 federal judges found 61.6% have used at least one AI tool in their judicial work — but daily use is rare, most received no training, and opinions on AI's impact are nearly evenly split.

The Study

Researchers at Northwestern University, in collaboration with the New York City Bar Association, published the first random-sample survey of federal judges on their use of artificial intelligence. The study, titled "Artificial Intelligence in Federal Courts: A Random-Sample Survey of Judges," is forthcoming in Volume 27 of The Sedona Conference Journal.

The research team selected a stratified random sample of 502 federal judges from a population of 1,738 active federal judges as of August 2025, drawn from bankruptcy, magistrate, district court, and court of appeals positions. The list was compiled using Ballotpedia, the Almanac of the Federal Judiciary, and the Federal Judicial Center's Biographical Directory. Researchers surveyed participants via email from December 2 to 19, 2025, and received 112 responses — a 22.3% response rate.
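The stratified design described above can be sketched in a few lines. The per-category population counts below are hypothetical (the study reports only the totals of 1,738 active judges and 502 sampled); the sketch just shows proportional allocation across strata and the reported response rate.

```python
import random

# Hypothetical population counts per judge category -- the study
# reports only the overall totals (1,738 active judges, 502 sampled).
population = {
    "district": 650,
    "magistrate": 520,
    "bankruptcy": 340,
    "circuit": 228,
}
TOTAL_SAMPLE = 502

total = sum(population.values())
assert total == 1738  # matches the study's reported population

# Proportional allocation: each stratum's share of the sample
# mirrors its share of the population.
allocation = {
    cat: round(TOTAL_SAMPLE * n / total) for cat, n in population.items()
}

# Draw a simple random sample of judge indices within each stratum.
rng = random.Random(0)
sample = {
    cat: rng.sample(range(n), allocation[cat])
    for cat, n in population.items()
}

# The reported response rate: 112 replies out of 502 surveyed.
response_rate = 112 / 502
print(f"{response_rate:.1%}")  # 22.3%
```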

The study was led by Daniel Linna, director of Law and Technology Initiatives and senior lecturer at Northwestern Pritzker School of Law, and V.S. Subrahmanian, Walter P. Murphy Professor of Computer Science and director of the Northwestern Security & AI Lab at the McCormick School of Engineering.

Key Numbers

Of the 112 responding judges, 61.6% reported using at least one AI tool in their judicial work, though how often they use those tools varies widely.

Combining daily and weekly users, only 22.4% of responding judges use AI tools frequently; the majority of AI-using judges do so only monthly or rarely.

What Tools Judges Are Using

The survey covered both general-purpose AI platforms and legal-specific tools. Westlaw AI-Assisted Research or Deep Research (Thomson Reuters) was the most widely used tool, with 38.4% of judges reporting some level of use. ChatGPT (OpenAI) came second at 28.6%.

Usage frequency differed between categories, chiefly at the daily level. For legal-specific AI tools, 5.4% of judges reported daily use and 9.8% reported weekly use; for general-purpose tools, only 0.9% reported daily use, while weekly use was the same at 9.8%.

Other tools showed minimal adoption. Claude (Anthropic) was used by 0.9% of judges, all at a "rarely" frequency. Harvey and Legora showed 0% usage. Vincent AI (vLex) showed 0.9% rare usage.

The study also asked about CoCounsel (Thomson Reuters), Protégé or Lexis+ AI (LexisNexis), Grok (xAI), Gemini (Google), Copilot (Microsoft), and Perplexity, among others.

How Judges Are Using AI

Legal research was the dominant use case, reported by 30.0% of judges as their primary AI application. Document review came second at 15.5%, followed by drafting documents not filed in cases (7.3%), summarizing text or audio (7.3%), and preparing case timelines or chronologies (5.5%).

For staff working in chambers, the patterns were similar: legal research was the top use case at 39.8%, followed by reviewing documents at 16.7%.

One in three judges reported that they permit, or permit and encourage, AI use by those working in their chambers. Only 20.4% of judges formally prohibit AI use, and another 17.6% discourage but do not formally prohibit it. About 24.1% of judges reported having no official AI policy at all; counting those who merely discourage AI, that figure rises to 41.7%.

Training Gap

Despite growing adoption, most judges have not received formal AI training through court administration. The survey found that 45.5% of judges said AI training had not been offered by court administration, and 15.7% were unsure whether any training had been made available. That means roughly 61% of judges surveyed either had not been offered AI training or didn't know if it existed.

Among judges who were offered training, three out of four attended. The researchers interpreted this as evidence of unmet demand: when training is made available, judges take it.

"We need to think about how we bring these technologies into the courts, offer AI training to judges and analyze the benefits and risks," Linna said in a statement published by Northwestern University.

Split Outlook

Judges' attitudes toward AI are nearly evenly divided. The study found that responding judges recognized AI's potential efficiency gains while expressing concern about specific risks, most notably AI "hallucinations" (fabricated citations or facts), so-called "zombie cases" (AI-generated references to non-existent case law), and skill atrophy among clerks and staff who rely too heavily on AI.

The study noted a correlation between personal and professional AI use: judges who use AI tools in their personal lives are more likely to use them in their judicial work. Overall, 38% of judges reported daily or weekly AI use outside of work, while 26.9% said they rarely use AI outside of work and 25.9% said they never do.

Context

The study represents the first empirical, random-sample data on AI adoption across the federal judiciary. Prior research relied on voluntary surveys or anecdotal reporting, which can skew toward early adopters or AI skeptics depending on who self-selects to respond.

The 22.3% response rate is a limitation the authors acknowledge: judges who did not respond may use AI more or less than those who did. However, the stratified random-sampling design, covering all four major categories of federal judge, gives the findings more statistical credibility than prior self-selected surveys.
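For a sense of what a sample of this size can support, a standard 95% confidence interval for the headline 61.6% figure works out to roughly plus or minus 9 percentage points. This is an illustrative back-of-the-envelope calculation, not one reported by the study:

```python
import math

n = 112   # responding judges
p = 0.616 # share reporting any AI use
z = 1.96  # z-score for a 95% confidence level

# Standard error of a sample proportion, then the half-width of the
# normal-approximation confidence interval.
se = math.sqrt(p * (1 - p) / n)
margin = z * se

print(f"61.6% ± {margin:.1%}")  # prints "61.6% ± 9.0%"
```

Even with that margin, the interval sits entirely above 50%, consistent with the headline claim that a majority of responding judges have used AI.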

The findings arrive at a moment when AI tools are rapidly expanding their presence in legal practice. Several federal courts have issued standing orders requiring attorneys to disclose AI use in court filings; the survey found that judicial policy on disclosure and prohibition remains inconsistent across chambers.

Subrahmanian said the findings signal a turning point: "Our study shows that a significant number of federal judges are already using AI tools." The question, the researchers argued, is whether the judiciary will get ahead of the curve on training and policy — or scramble to catch up.