Keeping Empathy in the Age of Automated Government

Governments can and should use AI to reduce burdens. But they must also preserve the ability to override AI and the moral flexibility that allows a public servant to say, “The data says no, but the right answer is yes.”

It’s 11:47 p.m. when Maria finally finishes her Free Application for Federal Student Aid (FAFSA) form. She’s the first in her family to apply for college. Her parents’ taxes are complicated because they work multiple jobs, and every page of the application feels like decoding another bureaucratic riddle. She clicks “Submit,” relieved. A red box appears: “Ineligible due to missing parental income verification.” A week later, a financial aid officer calls her. “Let’s fix this,” the woman says. She walks Maria through the paperwork, makes a note in the system and overrides a technical rejection that would have ended her dream before it began.

That moment of a person recognizing the story behind the form is the essence of public service. It’s empathy in action: discretion, understanding and humanity applied in a system designed to be impersonal.

As AI and automation move deeper into the machinery of public service, from eligibility determinations to case management to constituent chatbots, we risk losing those moments. Efficiency, speed and fairness are worthy goals. But if public services become entirely deterministic, governed by models rather than judgment, we risk building systems that are accurate but inhumane.

THE PROMISE AND PERIL OF AI


Across agencies, artificial intelligence is being heralded as a way to streamline government operations. AI chatbots answer questions 24/7. Predictive analytics detect fraud before it happens. Large language models summarize case notes so workers can spend less time on paperwork and more time on people. These tools offer enormous potential. They can reduce bias in some decisions, standardize inconsistent processes and help overburdened public servants manage caseloads. In many cases, AI can extend the reach of human compassion by catching errors or omissions before they harm someone.

But AI systems are only as empathetic as their design allows. Algorithms don’t “see” context; they interpret data. And empathy isn’t data — it’s an act of interpretation. A caseworker can sense when “noncompliance” really means “no transportation.” A financial aid counselor can detect the quiet shame of a student who doesn’t want to admit her parents are undocumented. A chatbot can’t do those things unless it’s designed to recognize and route complexity, not just process it.

WHERE EMPATHY STILL MATTERS MOST


1. Public Benefits and the Edge Case Problem: Consider the experience of applying for food assistance. A single parent working multiple gig jobs applies for benefits. One month, their income spikes because they worked extra hours to cover school clothes for their kids. The next month, the system flags them as “over-income” and automatically suspends benefits. To a model trained on structured income data, that’s a rational decision. To a human caseworker, it’s a moment for discretion. They know gig work fluctuates wildly and that a temporary boost doesn’t change long-term need. AI can make this better if it is designed with empathy. Instead of automatic denial, the system could flag income volatility and recommend review. It could surface similar cases and show how human workers resolved them. AI should inform empathy, not replace it; a rough sketch of what such a volatility flag might look like appears after the third example below.

2. Financial Aid and the Paperwork Barrier: The FAFSA example isn’t unique. Millions of students are denied aid or see it delayed each year because of paperwork issues: small errors, incomplete forms or missing data. An AI-powered system could, in theory, help students like Maria by detecting when a question is confusing or by proactively prompting users with clarifying examples. Natural language processing could analyze freeform text responses to detect uncertainty or distress and connect the applicant with a live counselor. But it’s easy to imagine the opposite outcome: a system so rigid it shuts out anyone who doesn’t fit its rules. Without intentional design, automation hardens bureaucracy. The same form that’s meant to level the playing field becomes a gatekeeper.

3. Aging Services and the Human Touch: Empathy is equally essential for older adults navigating benefits and care systems. Imagine an 82-year-old trying to renew her Medicaid waiver for in-home support. She misses the renewal notice because she doesn’t check email. The AI-driven system marks her as “nonresponsive,” automatically closing the case. What she really needed was a phone call or, better yet, a visit. Here, AI could be the difference between automation and augmentation. Predictive analytics could flag residents at risk of missing renewals based on communication preferences or cognitive changes. The system could nudge human workers to intervene early. But the follow-up, the listening, the patience, the reassurance, must remain human.
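
To make the first example concrete, here is a minimal sketch of how an eligibility engine might flag volatile income for a caseworker’s review instead of issuing an automatic denial. The income ceiling, the volatility cutoff and the case structure are illustrative assumptions made for this sketch, not any program’s actual rules.

```python
# A minimal sketch of the pattern in example 1: instead of auto-suspending benefits
# when a single month crosses the income limit, flag volatile income for human review.
# The limit, the cutoff and the Case fields are hypothetical, chosen for illustration.
from dataclasses import dataclass
from statistics import mean, pstdev

INCOME_LIMIT = 2500.00        # hypothetical monthly eligibility ceiling
VOLATILITY_CUTOFF = 0.25      # hypothetical coefficient-of-variation threshold

@dataclass
class Case:
    applicant_id: str
    monthly_income: list[float]  # recent months, newest last

def decide(case: Case) -> str:
    latest = case.monthly_income[-1]
    if latest <= INCOME_LIMIT:
        return "eligible"

    # Over the limit this month: is it a one-time spike or a lasting change?
    avg = mean(case.monthly_income)
    volatility = pstdev(case.monthly_income) / avg if avg else 0.0

    if volatility > VOLATILITY_CUTOFF or avg <= INCOME_LIMIT:
        # Fluctuating gig income or a temporary boost: route to a caseworker
        # instead of suspending benefits automatically.
        return "human_review"

    return "over_income"

# A gig worker whose income spiked for one month to cover school clothes.
print(decide(Case("A-1001", [1800.0, 1950.0, 1700.0, 3100.0])))  # -> human_review
```

The particular numbers matter less than the direction of the default: when the data is ambiguous, the system hands the case to a person rather than issuing a denial.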

THE EMPATHY DEFICIT IN DATA DESIGN


Most government AI systems are built on historical data. But historical data is not morally neutral — it reflects all the inequities, biases and omissions of past systems. If a data set underrepresents a population or encodes past administrative bias, AI models trained on it can perpetuate those inequities. That’s why empathy must be an explicit design requirement, not an emergent property. Empathy-aware design starts with asking:
  • Who could this system misunderstand?
  • What harm could occur if the algorithm is technically right but contextually wrong?
  • At what point should a human review the outcome?
Governments can and should use AI to reduce burdens. But they must also preserve the discretion to override, the moral flexibility that allows a public servant to say, “The data says no, but the right answer is yes.”

DESIGNING FOR EMPATHY: PRACTICAL STEPS FOR TECHNOLOGISTS


  1. Build Empathy Checkpoints Into Automation: Every eligibility or adjudication system should include rules that trigger human review when a decision could cause harm or hardship. Automation should streamline, not seal off; a rough sketch of one such checkpoint follows this list.
  2. Keep Humans in the Loop: The most effective AI deployments don’t replace workers; they amplify them. Use AI to handle routine verification, but leave exceptions and nuance to people trained in compassion.
  3. Train AI on Qualitative Feedback: Numbers tell part of the story. Case notes, survey comments and service feedback can provide essential emotional context. Incorporate that into system design and model testing.
  4. Design for Transparency and Appeal: When an algorithm denies a service or benefit, residents should know why and how to contest it. Empathy isn’t just about decisions; it’s about dignity.
  5. Empower the Frontline: Caseworkers, call center staff and eligibility reviewers are empathy’s last mile. Give them the authority and encouragement to challenge or correct algorithmic decisions. When they do, celebrate it as good governance, not noncompliance.
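
As one illustration of how steps 1 and 4 might fit together, here is a minimal sketch of an empathy checkpoint: adverse decisions that could cause hardship are never finalized automatically, and the reason for routing is recorded in plain language so it can appear in the resident’s notice and support an appeal. The decision object, the hardship flags and the routing rule are assumptions made for the sketch, not a description of any agency’s system.

```python
# A minimal sketch of an "empathy checkpoint" (step 1) that also records reasons
# for transparency and appeal (step 4). All names and flags here are hypothetical.
from dataclasses import dataclass, field

HARDSHIP_FLAGS = {"loss_of_housing", "loss_of_food_benefits", "loss_of_home_care"}

@dataclass
class Decision:
    case_id: str
    outcome: str                          # e.g. "approve", "deny", "close"
    impacts: set[str] = field(default_factory=set)
    reasons: list[str] = field(default_factory=list)

def route(decision: Decision) -> str:
    """Return 'auto' only when an adverse decision cannot cause hardship."""
    if decision.outcome == "approve":
        return "auto"
    if decision.impacts & HARDSHIP_FLAGS:
        # An adverse outcome that could touch housing, food or care is never
        # finalized by the machine; the reason travels with the case.
        decision.reasons.append("Routed to caseworker: potential hardship detected.")
        return "human_review"
    return "auto"

# An 82-year-old's case marked "nonresponsive" would stop here, not close quietly.
d = Decision("C-204", "close", impacts={"loss_of_home_care"},
             reasons=["No response to emailed renewal notice."])
print(route(d), d.reasons)
```

Note which way the checkpoint fails: when a decision is adverse and could affect housing, food or care, the automated path closes and a person has to act.
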
The future of government technology doesn’t have to be cold or transactional. The same tools that threaten to depersonalize service can, when wielded wisely, make it more humane. AI can help agencies anticipate need, reach people before crises escalate and personalize outreach at a scale that was previously impossible.

But empathy is not an emergent property of machine learning. It’s a choice — one that must be built into systems, culture and leadership. Government exists not just to administer policy, but to serve people. The true test of any digital transformation is not whether it saves money or reduces processing time, but whether it protects the moments where compassion meets bureaucracy.

The most successful public servants and the most successful systems are those that remember why government exists in the first place: to help people when they need it most. As we automate more decisions, we must hold fast to the principle that not everything should be automated. AI can make government faster, smarter and more efficient. But only humans can make it kind.

Editor's note: This story was initially edited by generative AI and then edited again by humans for accuracy, bias and clarity. Have feedback? Send it to Lauren Kinkade.

Joe Morris is chief innovation officer for e.Republic, Government Technology's parent company.