Q&A: Generative Artificial Intelligence and the California Judicial Branch

Administrative Presiding Justice Mary Greenwood of the Court of Appeal, Sixth Appellate District, and Judge Arturo Castro of the Superior Court of Alameda County share their insights on generative AI
Aug 12, 2024

In her 2024 State of the Judiciary address, Chief Justice Patricia Guerrero set her sights on generative artificial intelligence as a major priority for the California judicial branch.  

“Society, government, and, therefore, our court system must address the many issues and questions presented by the developing field of artificial intelligence. We must do this in careful and deliberative fashion,” Chief Justice Guerrero said.

Chief Justice Guerrero tasked Administrative Presiding Justice Mary Greenwood of the Court of Appeal, Sixth Appellate District, and Judge Arturo Castro of the Superior Court of Alameda County with leading the branch’s initial efforts to identify foundational questions as it considers the opportunities and challenges associated with AI.

Justice Greenwood and Judge Castro shared their insights at a recent Judicial Council meeting and answered questions for the California Courts Newsroom about this fast-emerging technology:


Q: Should California’s judicial branch be using AI?

Justice Greenwood: “Quite frankly, it’s inevitable. Attorneys who appear in front of us are going to be using it, law schools are starting to teach it, and there are legal research products that use generative AI. 

But generative AI is a tool—it's not a substitute for judicial discretion or due process. Whether a person accepts a decision made by our courts and judges depends greatly on feeling heard and having their concerns addressed. Generative AI is a machine that cannot do that: we do that.

Generative AI is based on mathematical predictions, and predictions look backwards based on past data. It’s very difficult to conceive that generative AI would have looked at society in 1954 when the U.S. Supreme Court decided Brown v. Board of Education [which held state-sanctioned school segregation to be unconstitutional] and would’ve thought this is a good idea. And that is so foundational to what we do.” 

Q: In what ways can (or should) generative AI be used in California courts?

Judge Castro: “We need to explore the possibilities. Could AI improve court administration? Can it make jobs easier? Can AI enhance research and analysis? Can it enhance access to justice?

I think about self-represented litigants and the potential for AI to promote access to justice by helping someone walk through a process they’re going to encounter in the courthouse or even just filling out forms—harnessing technology for the good.”

Q: How can public trust and confidence in the courts be preserved given the challenges posed by this technology?   

Justice Greenwood: “We’re going to have to address the risks of generative AI to preserve public trust and confidence. The data used by AI is not pristine—it includes biases based on gender, ethnicity, politics, and values.

There are also risks around transparency and accountability. Even the engineers who develop AI models say they’re not quite certain how they work. That's troubling because the judicial branch must be accountable and transparent. 

Generative AI also raises privacy, confidentiality, and safety concerns with information in our case management system. If that information pours out into large language models, it no longer belongs to us or to the user. So, it’s very important that the branch remain vigilant about privacy. These are some of the issues that the Artificial Intelligence Task Force will be considering.”

Q: How can the branch maintain confidentiality and privacy if generative AI is used?

Judge Castro: “This concern is second nature for us as a branch—we’re very sensitive to privacy considerations. Users of generative AI often have to consent to their information being put into the model, but they may not be aware of that. This means that just by using a generative AI model, any information or input a user provides might be incorporated into that model and could later be exposed to other users.

So, we need to understand how that impacts the work of the courts, a court’s work product itself, and our responsibility to keep a person’s private information confidential whenever necessary. The issue of confidentiality and privacy is another area that the Artificial Intelligence Task Force will focus on.”

Q: How do judicial ethics come into play? 

Justice Greenwood: “Generative AI raises a lot of potential questions related to judicial ethics. One of the central questions is: how do we ensure AI does not result in the improper delegation of judicial decision-making? It’s one thing to use a generative AI tool to review 41 boxes of court reporter transcripts in an extremely complicated case. It’s another thing to ask it, 'Who wins?' But you can see how it becomes a slippery slope, so that’s going to have to be considered. There are other important questions as well, and this is why one of the next steps the Chief Justice has announced for the branch is to work with the California Supreme Court’s ethics committees to develop guidance on how judicial officers should navigate ethical issues associated with generative AI.”