Using GenAI to Write Clearer Government Communications

The process of writing clear, accessible communications that get residents the information and services they need can be anything but simple. AI can help if agencies know how to use the tools effectively.

Clear, understandable communications from state and local government agencies are crucial to ensuring that individuals and families receive the services they need. Social services in particular rely on time-sensitive communications, including correspondence, notices and program alerts, that recipients must understand in order to take the required action and respond appropriately.

Unfortunately, these communications are often anything but clear. Usually drafted by highly educated subject matter experts and edited by government legal teams, they are frequently written well above the sixth-grade level at which the average resident reads, making them difficult for many to understand. The consequences are significant: A 2022 study found that up to 16 percent of individuals eligible for government programs may have missed out on benefits simply because they couldn’t understand the communications they received.


SIMPLIFYING LANGUAGE FOR BETTER OUTCOMES


A new approach to government communications starts by adopting plain language writing guidelines that make communications clear and accessible to a wide audience. The concept of plain language first emerged in the 1970s and various guidelines have been developed over the years since then to support its implementation, including the Plain Writing Act, which applies to federal agencies.

More recently, in June 2023, the International Organization for Standardization (ISO) released guidelines for how plain language should be adopted for text-based communications and documents. According to ISO, communications should use familiar words and phrases; avoid acronyms, legalese and jargon; and use short, clear sentences and concise paragraphs.

But the work of content clarification is time-consuming, difficult and costly, especially for government agencies with outdated systems. Employees must comb through hundreds, or potentially thousands, of communication templates to identify and rewrite content that’s unclear, often repeatedly having to update the same content that exists in many variations of a communication. Multiple authors and teams are often involved, each with their own preferences and varying levels of experience, which makes consistent application of plain language standards extremely difficult.


LEVERAGING GENERATIVE AI TO WRITE IN PLAIN LANGUAGE


Fortunately, technological advances have opened avenues for governments to accelerate the adoption of plain language. This includes generative AI tools like ChatGPT. However, just because these tools exist doesn’t mean they’re the most effective solution for writing clear communications. Using ChatGPT doesn’t scale well because it must be used outside a government’s existing platform, which introduces data security and privacy risks. It is critical that generative AI tools be used in a way that prevents constituent data from being sent to third-party AI platforms, where it could be exposed in a data leak.

Using ChatGPT also requires teams to create their own prompts or instructions to generate responses. The multifaceted nature of plain language writing and the need for accuracy within critical communications makes it difficult to get good results consistently. The iterative process of trying to improve results can easily take longer than simply rewriting the content yourself.

HOW TO USE AI SAFELY AND EFFECTIVELY


Much of the risk and inefficiency outlined above can be mitigated with the right approach. This includes how you guide the AI, as well as how you manage its use across teams and systems. Through thoughtful prompt engineering and strong safeguards, agencies can use AI to automate plain language adoption while maintaining the security and control they need. Here are six best practices governments can apply to get there:

1. Provide specific plain language principles: Simply asking AI to “rewrite this communication in plain language” won’t lead to accurate, consistent results. Prompts should include the actual principles from the standard you’re aligning to, such as those outlined by ISO. To ensure consistency and save time, standardize prompts across teams using an internal prompt library or with platforms that include built-in plain language optimization capabilities.

2. Provide the context of the entire communication: Plain language writing includes structuring content for clarity — placing key information first and using clear headings. AI can’t make these structural improvements when it’s only given isolated sentences or paragraphs to work on. Always provide as much of the full communication as possible so the AI has the context it needs to produce a comprehensive plain language rewrite.

3. Instruct the AI to preserve meaning and variable data: Without guidance, AI may make unwanted changes to your content. Explicitly instruct the AI to preserve variable content elements and the underlying meaning of the content it’s optimizing. This will improve the accuracy of your results and minimize the manual rework that’s required after optimization.

4. Lock down your legal or proprietary content: Some language within government communications — such as regulatory disclaimers, legal language or approved terminology — must remain exactly as written. Clearly define terms that must be used and content that is off-limits for rewriting, either through prompt instructions or systems that let you lock down this content so AI can’t modify it.

5. Never include personally identifiable information (PII) in the content submitted for optimization: AI should only be used to optimize generic templates with placeholder variable data fields, not fully composed communications containing live client data. This ensures that PII does not get shared with external systems, eliminating the risk of data leaks or compliance violations.

6. Block your content from being used to train AI models: Only use AI from providers that explicitly state they won’t use your data for model training or at least provide the option to opt out. Ensure this safeguard is applied globally across all users to protect your agency’s content and maintain control over how it’s used.
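Several of these practices can be enforced in code before any content reaches an AI provider. The sketch below is a minimal, hypothetical illustration (the function name, placeholder syntax and PII pattern are assumptions, not part of any specific product): it builds a standardized rewrite prompt that embeds plain language principles, lists the placeholder variables and locked phrases the AI must not change, and refuses templates that appear to contain live personal data.

```python
import re

# Summary of plain-language principles to embed in every prompt.
# Illustrative wording only, not the official ISO standard text.
PRINCIPLES = (
    "Use familiar words and phrases. Avoid acronyms, legalese and jargon. "
    "Use short, clear sentences and concise paragraphs. Put key information first."
)

# Assumed placeholder syntax, e.g. {{FIRST_NAME}}; these must survive untouched.
PLACEHOLDER = re.compile(r"\{\{[A-Z_]+\}\}")

# A simple screen for data that looks like live PII (here, a U.S. SSN pattern).
# Templates should contain only placeholders, never real client values.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def build_rewrite_prompt(template: str, locked_phrases: list[str]) -> str:
    """Build a standardized plain-language rewrite prompt for a generic template.

    Raises ValueError if the template appears to contain live PII.
    """
    if SSN.search(template):
        raise ValueError(
            "Template appears to contain live PII; submit only generic templates."
        )
    placeholders = ", ".join(sorted(set(PLACEHOLDER.findall(template)))) or "(none)"
    locked = "\n".join(f"- {phrase}" for phrase in locked_phrases) or "- (none)"
    return (
        "Rewrite the communication below in plain language.\n"
        f"Principles: {PRINCIPLES}\n"
        "Preserve the meaning exactly.\n"
        f"Do not alter these placeholder variables: {placeholders}\n"
        "Keep the following phrases exactly as written:\n"
        f"{locked}\n\n"
        "--- COMMUNICATION ---\n"
        f"{template}"
    )
```

Centralizing prompt construction this way gives every team the same principles, the same protections for variable and legal content, and a hard stop on live data, rather than relying on each author to remember the rules.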

Clear communication is more than a best practice; it’s a public service. By using AI to accelerate the adoption of plain language principles, government agencies can build trust, empower communities and make it easier for people to engage and take needed action. The opportunity to lead with clarity is here, with technology that makes meaningful change possible.
 
Patrick Kehoe is executive vice president of product management, driving product strategy in collaboration with the product development team at Messagepoint, a provider of customer communications management software. Kehoe brings to the company more than 25 years of experience delivering business solutions for document processing, customer communications and content management. For more information, visit www.messagepoint.com.