Skip to content
Part of the Law Professor Blogs Network

“AI thinks you should go to jail, even if you didn’t do the crime”

August 1, 2025

The title of this post is the title of this notable new report from the Justice Innovation Lab authored by Rory Pulvino, Dan Sutton, and JJ Naddeo. Here are parts of the report’s “Introduction”:

[W]e tested ChatGPT (model 3.5-Turbo) on a common legal task: writing a memo about how to handle a criminal case. We evaluated this task from both prosecutor and defense attorney viewpoints to see if the results held across different roles. While our testing was designed to evaluate racial bias, what we discovered was unexpected: the AI model revealed a consistent and arguably problematic preference for prosecution, regardless of case specifics….

Using different AI prompts and a variety of real-world cases involving common non-violent offenses, we consistently found that the model recommended prosecution over dismissal or diversion.  This was true even after the facts of the arrest were changed to introduce legal issues and facts that undermined the case against the arrestee.  Though newer AI models may perform differently on the task, this analysis demonstrates that there are likely unforeseen risks in using a generative AI system without knowing the biases of the model.

The preference for prosecution is particularly concerning because it’s hidden from users and was only revealed through rigorous testing.  These risks are compounded by our limited knowledge of exactly which language cues or features are most impactful on a response.  As seen in research on image classification, models can rely heavily on patterns or details that are not salient—or even recognizable — to human observers, sometimes giving disproportionate weight to subtle or seemingly irrelevant features.  Similarly, generative AI models may be influenced by linguistic signals in ways that are opaque to developers and users, leading to unpredictable or unintended outcomes….

In this report, we share results from ongoing research being prepared for academic publication to address urgent questions facing prosecutor offices as they adopt AI tools.  We discuss current uses and risks of generative AI in prosecution, describe both our testing and findings, and conclude with a discussion on why continued research in this area is needed.