
What does AI think about the sentencing of Sam Bankman-Fried?

September 2, 2025

The quirky question in the title of this post is prompted by the latest essay by Jonathan Wroblewski over at the Sentencing Matters Substack.  The essay, which should be read in full, is titled “An AI Experiment — Part 1: How would a large language model consider the sentencing of Sam Bankman-Fried.”  As Jonathan explains at the outset, the Federal Sentencing Reporter will soon publish a big double issue focused on the various ways artificial intelligence (AI) may already be changing our criminal justice systems.  Jonathan served as the chief editor of this exciting forthcoming FSR issue, and here is part of the preface, from his new Substack essay, to his discussion with the AI tool Claude about a famous recent white-collar sentencing:

There has certainly been a lot of hyperbole about how AI will soon take all our jobs and upend all aspects of our civilization. Some fear that the robots are coming for us, that they will end up as our overlords, and that we are all doomed. Having lived through other periods of technological change, I’ve seen that predicting the future is not so easy.

One thing about all of this is for sure, though. AI is here now, and it will certainly be changing the landscape of working with words, ideas, and data. It is increasingly part of the criminal justice process and increasingly important to that process. And to be an effective advocate, jurist, probation officer, other criminal justice professional, or policymaker now and into the future, one will need to understand these tools, how they are used, and how they might be regulated and governed.

With that in mind, I hope to publish here, in the run-up to the forthcoming FSR issue and beyond, a few experiments in and around AI and sentencing.  Here’s an opening one, rather simple and rudimentary, around the sentencing of Sam Bankman-Fried in 2024.  It is my conversation, if you will, with Anthropic’s Claude 4 Sonnet. (The version I have access to has a 200k token context window — a maximum amount of text, ~150k words, it can process and remember in a single conversation or session — and its knowledge cutoff is March 2025.)
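
For readers curious about the mechanics, a conversation of this kind can also be run programmatically rather than through a chat interface.  The sketch below uses Anthropic's Python SDK; the model identifier and the prompt wording are assumptions for illustration only, not the exact setup Jonathan used, and the single-session context window he describes (roughly 200k tokens, or ~150k words) applies equally to API calls like this one.

```python
# A minimal sketch, assuming Anthropic's Python SDK, of how a sentencing
# "conversation" with Claude might be run programmatically.  The model
# identifier and prompt text are illustrative assumptions, not Jonathan's
# actual setup.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed identifier for Claude 4 Sonnet
    max_tokens=2000,
    messages=[
        {
            "role": "user",
            "content": (
                "How would a large language model consider the sentencing "
                "of Sam Bankman-Fried?  Walk through the factors you would weigh."
            ),
        },
    ],
)

# The response is a list of content blocks; print the text of the first one.
print(response.content[0].text)
```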