How AI Can Help Address Safety and Rehabilitation in Prisons

Corrections officers spend a disproportionate amount of time on administrative tasks rather than helping prisoners in ways that improve outcomes. AI is one tool to help, but it must be implemented thoughtfully.
America’s prisons are being asked to do the impossible: house and rehabilitate growing populations despite shrinking budgets and overburdened staff.

Corrections officers currently spend more time than they’d like at their desks, handling tasks like incident reports, visitor scheduling, and manual movement logs. Every hour these officers spend on routine documentation is an hour not spent de‑escalating conflict, coaching someone through a reentry plan, or simply connecting with those in prison, which research suggests leads to safer prisons and better outcomes upon release.

Modernizing prison technology systems could give staff what they need most: time. Using new AI tools and technology, state department of corrections directors can automate some routine tasks — introducing chatbots to help with visitor scheduling, for example — and free up officer time for activities that promote safety and rehabilitation.

Used well, AI can be a staffing force multiplier, allowing for more programming, more case management, and more visitation. But with AI-focused vendors marketing their services as quick fixes to these challenges, corrections leaders should remember that these benefits won’t happen by accident, and automation alone isn’t enough. Policymakers and prison administrators must be intentional about codifying how AI can be used in prison contexts, ensuring that new technology improves safety and rehabilitation efforts instead of squeezing staff tighter.

HOW CAN AI ALLEVIATE EXISTING CHALLENGES?


Right now, prison systems nationwide face a litany of challenges. Prisons struggle to keep their staff amid budget pressures and burnout. In February 2024, about 39 percent of North Carolina’s corrections officer positions remained unfilled, increasing workloads for the officers who remain. States nationwide report similar levels of staff vacancies.

What’s more, prison populations have begun to climb nationwide, with 39 states experiencing increases in their prison populations in 2023. While shrinking prison populations through policy reform and early release could alleviate pressure on corrections officers, a growing share of Americans think the justice system is “not tough enough,” meaning there’s little political appetite for such steps. Instead, chronic staff shortages drive lockdowns and lead to fewer classes, visits and work assignments: the activities that keep facilities stable and prepare people to return home.

Given this state of affairs, the question for corrections leaders is not whether to use AI to improve staffing and inmate conditions, but how.

By integrating AI-assisted tools into routine administrative tasks, prison facilities can free up officer time for programming and other activities that promote rehabilitation. AI can also help facilities respond to safety and health threats by taking a first pass at reviewing video feeds for fights, unauthorized movement or contraband. AI analytics could also detect sudden drops in activity levels or changes in language patterns to identify people at elevated risk of self-harm, enabling staff to intervene before a crisis instead of after.


RECOGNIZING AND PREPARING FOR RISKS


While these new AI systems could help make up for staff shortages, this technology also carries clear risks that officials must understand and address.

AI is only as good as the data it uses, and in corrections, records are often incomplete, inconsistent across facilities and biased by past decisions. When this data feeds risk assessment tools or predictive monitoring systems, the outputs can heighten scrutiny of the people who are already more likely to be burdened by punitive policies. Facial recognition and communications monitoring also raise vital privacy questions: who owns the data collected on incarcerated people and their families, who can access it and how long is it retained? Those being watched do not know what is being recorded about them or how that information is used to influence decision-making.

Compounding these concerns is the opacity of many AI systems. Complex models give answers that sound confident but often work in ways that are hard to understand, making it difficult for staff to know whether to trust the AI or rely on their own intuition. In high‑stakes environments like prisons, faulty recommendations erode trust, especially if it’s difficult to know who to hold accountable: the corrections officer or the algorithm.

These risks shouldn’t cause policymakers and prison leaders to avoid AI, but they should encourage responsible adoption.

HOW PRISONS CAN RESPONSIBLY INTEGRATE AI


Currently, many prisons rely on older technology systems that are difficult to integrate with new tools. Some prisons even lack basic broadband connectivity. State policymakers and corrections directors will need to invest in their prisons’ technological capacity alongside any AI-driven tools.

As prison systems modernize, they can begin to test AI tools and technologies in the following ways:
  • Pilot low-risk applications, such as incident report drafting or visitor scheduling, in a limited number of facilities with rigorous, independent evaluation. The General Services Administration successfully piloted an AI chatbot for routine administrative tasks; 30 percent of its workforce now uses it.
  • Sign short‑term contracts rather than sweeping, multiyear deals. This way, agencies can walk away from tools that underperform or create unforeseen harm.
  • Establish AI oversight committees that bring together staff, incarcerated people, community members and independent researchers with expertise in both technology and corrections policy. These bodies can review proposed tools, set standards for fairness and accuracy, and ensure transparency.
  • Adopt policies under which humans make decisions and AI tools only inform them. Prison leaders should provide plain-language explanations of how AI systems inform corrections work and create transparent avenues for people to appeal erroneous algorithmic decisions.

Ultimately, any adoption strategy should commit to using the time saved by AI tools to expand programming, not intensify surveillance or close budget gaps. If time savings from administrative tools are captured as budget “efficiencies,” the technology will have done little more than continue the status quo. This is the bargain that corrections should make: If AI is coming through the prison gates, it must do so on terms that strengthen safety and improve rehabilitation.

David Pitts is vice president for the Justice and Safety Division at the Urban Institute, where he leads researchers, technical assistance providers and support professionals focused on all aspects of the criminal legal system. In addition to his work as part of Urban’s leadership team, Pitts directs projects that focus on evidence-driven reform in corrections. Pitts is a member of the U.S. Sentencing Commission's Research and Data Practices Advisory Group, the advisory council of the Correctional Association of New York, and the editorial board of Criminology. He is an adjunct professor at the John Jay College of Criminal Justice, City University of New York.