Justice and Public Safety Leaders Chart AI Priorities

During a recent briefing on Capitol Hill, leaders and members of national associations considered artificial intelligence use cases and topics, along with a new playbook guiding the technology’s ethical, scalable adoption.

An artificial intelligence-powered facial recognition system examines a person walking on a city street at night. (AI-generated/Adobe Stock).
Across the country, public safety and justice agencies are piloting and scaling artificial intelligence tools to automate tasks and extend staff capacity. At the same time, professional organizations and associations are collaborating on use cases, working on best practices and influencing national policy.

At a recent Capitol Hill briefing co-hosted by the Congressional Artificial Intelligence Caucus and the IJIS Institute, public safety leaders highlighted how AI is beginning to reshape justice and law enforcement operations. From automatically drafted police reports to smart 911 triage, the applications are expanding rapidly. Vendors like Motorola and CentralSquare are investing heavily in AI integrations, signaling a shift in everyday public safety functions.

The briefing, July 11 in Washington, D.C., featured panelists from major justice organizations. Speakers emphasized practical AI use cases like e-filing, digital forensics and language translation, and discussed tools that could help address persistent staffing gaps. Organizations represented included the Major County Sheriffs of America, the Association of State Criminal Investigative Agencies, the American Correctional Association, the National White Collar Crime Center, the National Center for State Courts, and the National Center for Missing and Exploited Children.

Their focus on low-risk, high-impact applications for AI also underpins the Artificial Intelligence Playbook for Justice, Public Safety and Security Professionals, published this spring by the IJIS Institute, formerly named the Integrated Justice Information Systems Institute. The guide was part of the conversation, and it aims to help agencies adopt AI responsibly and sustainably, with built-in protections for privacy, fairness and transparency.

The playbook was developed by an IJIS-led AI working group, and more than 35 participants collaborated on the final product. They included federal, state, local, tribal and international justice agencies; 11 mission-area associations; and multiple AI vendors, Ashwini Jarral, IJIS Institute strategic adviser, said via email. The playbook came out of repeated questions from justice leaders, he said.

“‘Where does AI fit? How do we move forward responsibly? How do we evaluate vendors?’ Since there was no resource available to address these questions, IJIS decided to take the lead in developing the playbook as one … to help guide AI adoption.”

Published in April, it outlines 13 “plays” covering the full AI adoption life cycle, from governance to workforce development. Topics include ethics, policy development, use case identification, security and privacy, risk management, funding and transparency. Each section lists questions to ask, checklists, resources and which stakeholders should be involved. The free publication also strongly emphasizes ethical use.

“If a play is not implemented or properly evaluated within the organizations implementing AI, it can cause mistrust within the broader stakeholder communities, increase liability and, more importantly, result in security and privacy risks that can shut down the use of AI,” Jarral said.

For agencies operating under tight budgets, the playbook suggests testing solutions on a small scale. “It focuses on selecting high-value use cases and testing them in a limited capacity to determine the long-term value and before making huge upfront investments,” he said.

IJIS’ AI Center of Excellence continues this work, supporting justice and public safety agencies as they develop their AI strategies and governance practices.

Rae D. DeShong is a Texas-based staff writer for Government Technology and a former staff writer for Industry Insider — Texas. She has worked at The Dallas Morning News and as a community college administrator.