Maine CISO on the State's Six-Month Generative AI 'Pause'

Maine paused the use of ChatGPT and other generative AI apps for six months beginning in June. After hearing wide-ranging reactions, I decided to ask Nathan Willigar, the state CISO, about the move.

Beginning June 21, 2023, the state of Maine banned the use of generative AI apps, under most circumstances, for at least six months. As far as I'm aware, it is the first state to do so.

The reactions from around the country, and indeed the world, were all over the map when I posted the story about it on LinkedIn. You can see those comments and discussions here.

I was initially torn on this policy, thinking it might give the impression that Maine was limiting creativity and innovative new technologies. At the same time, I applaud the state for its bold, decisive move to safeguard its networks, data and people while policies and procedures are developed.

I have been impressed with Maine’s technology and security management teams and have highlighted them in the past. The state's CISO, Nathan Willigar, is an excellent cybersecurity leader who has a wealth of knowledge and experience. You can see Nate’s LinkedIn profile here.

In the interview that follows, I asked Nate about Maine's ban — or pause — on generative AI and what the future of the new tech might look like.


Dan Lohrmann (DL): Why did MaineIT decide to ban the use of GenAI apps like ChatGPT for at least six months?

Nathan Willigar (NW): MaineIT has implemented a pause on generative artificial intelligence technologies so that a holistic assessment can be performed to better inform our understanding of the potential benefits and risks associated with them. The emerging threats associated with these technologies are both known and unknown. Although these technologies are seen to have many benefits for society, their expansive nature potentially introduces a wide array of security, privacy, algorithmic bias and trust risks into an already complex IT landscape.

With the industry moving at such an accelerated rate to deploy generative AI solutions to promote productivity and increase efficiencies, increasing demands have been placed on state and local entities to adopt these technologies. This moratorium will allow us time to perform our due diligence with generative AI and identify best practices to ensure our responsible use of these technologies.

DL: Why is this technology such a big risk to staff and data?

NW: We implemented a pause on generative AI technologies because of the potential negative impact to both staff and the citizens of the state of Maine. Respected threat analysts anticipate that cyber actors will use generative AI to create seemingly authentic content and impersonate human behavior for malicious purposes, including sophisticated phishing scams, malware and ransomware, impersonation, and disinformation campaigns.

There are documented examples in which these technologies have shown concerning and exploitable weaknesses. This delay in implementing generative AI technologies will allow us time to raise our overall awareness of these technologies within the agency and identify the scaffolding necessary to safeguard our staff and the sensitive data entrusted to us by the state of Maine.

DL: Is the main concern that many current free uses of ChatGPT and other GenAI products do not allow for the governance and control of sensitive data? Please explain.

NW: The risks that generative AI poses are both known and, at this point, unknown; many of the known risks transcend security. The expansive capability of these technologies introduces potential regulatory, legal, privacy, financial and reputational risks. These technologies are unique in that they can deliver sophisticated outputs independently, absent structured input (i.e., producing uncontrolled results). While other currently employed technology is part of the AI family, the difference is that generative AI uses unstructured inputs and develops its own logic instead of following structured inputs and logic. During this pause, we will track evolving best practices and federal guidance on the topic as it is released.
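
To make the data-governance concern concrete: one control an agency might eventually require is a redaction layer that scrubs obvious sensitive identifiers from prompts before anything leaves the network. The short Python sketch below is purely illustrative and is not part of Maine's directive or any vendor's API; the patterns and the redact_prompt() helper are my own assumptions.

```python
import re

# Hypothetical illustration only: a minimal pre-submission scrub that redacts
# obvious sensitive identifiers (SSNs, email addresses, phone numbers) from a
# prompt before it could be sent to a third-party generative AI service.
# The patterns and redact_prompt() are assumptions for this sketch, not
# Maine's directive or any vendor API.

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace matches of each sensitive-data pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    sample = "Constituent Jane Doe (SSN 123-45-6789, jane@example.com) asked about benefits."
    print(redact_prompt(sample))
    # -> Constituent Jane Doe (SSN [REDACTED SSN], [REDACTED EMAIL]) asked about benefits.
```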

DL: Where can people go to read the details of the pause?

NW: The Cybersecurity Directive is available on MaineIT’s website (see Cybersecurity Directive 2023-03).

DL: Is there a list of products or services covered by the pause?

NW: The Cybersecurity Directive is broad in nature and covers generative AI that produces unstructured responses to unstructured input.

DL: Other states have limited the use of AI and put policies and procedures in place to control acceptable use of the technology. Why did Maine decide to go this other route?

NW: Every state has its own unique approach to managing its cyber risk profile; we have decided to take a brief pause to ensure we take a measured approach that fully assesses the risks associated with generative AI technologies within our enterprise.

As the growing national effort to assess AI’s risks and benefits continues to evolve, MaineIT is exercising caution with the use of these technologies by establishing a moratorium for at least six months on the adoption and use of generative AI for all state of Maine business. The moratorium allows for time to perform a risk assessment for the state’s use of generative AI and identify best practices to follow to ensure the responsible use of these technologies.

During this time, we will be informed by the work underway at the national level to identify best practices for the responsible use of generative AI, including standards established by the National Institute of Standards and Technology, as well as any developing policies and regulations addressing the risks associated with these technologies.

DL: Do you expect other governments around the country to follow your lead?

NW: It is not for me to judge the direction taken by other states. However, our review is timely, as it follows work being done across the country, as well as at the White House, the National Institute of Standards and Technology, Congress and the European Union.

DL: Do you view this as a “pause” that will likely be lifted after six months or perhaps a year, or do you expect the ban to last longer? 

NW: We intend to perform our internal risk assessment in an efficient manner within the six-month window unless additional time is required.

DL: What steps are you taking now to test this GenAI technology and ensure that it can be used safely when the ban is lifted?

NW: We are reviewing emerging federal guidance and best practices on this topic as they are developed, within the context of our internal risk assessment. Once the pause is lifted, we intend to have sufficient information to revise our internal governance so that sufficient guardrails are in place for the responsible use of generative AI. We will also assess any upskilling and training requirements necessary to help our employees adapt to these emerging technologies.
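
As one illustration of what such guardrails could look like in practice, here is a minimal, hypothetical sketch of an allowlist that pairs each approved generative AI tool with the highest data classification it may receive. The tool names, classification levels and the is_request_allowed() helper are assumptions made for this example, not MaineIT policy.

```python
from dataclasses import dataclass

# Hypothetical sketch of one kind of guardrail revised governance might encode:
# an allowlist of approved generative AI tools and the maximum data
# classification each may receive. All names and levels here are illustrative
# assumptions, not MaineIT policy.

CLASSIFICATION_ORDER = ["public", "internal", "confidential", "restricted"]

APPROVED_TOOLS = {
    # tool name -> highest data classification permitted
    "vendor-chat-enterprise": "internal",
    "internal-summarizer": "confidential",
}

@dataclass
class AIRequest:
    tool: str
    data_classification: str

def is_request_allowed(request: AIRequest) -> bool:
    """Allow a request only if the tool is approved and the data's
    classification does not exceed the ceiling set for that tool."""
    ceiling = APPROVED_TOOLS.get(request.tool)
    if ceiling is None:
        return False  # unapproved tools are blocked by default
    return (CLASSIFICATION_ORDER.index(request.data_classification)
            <= CLASSIFICATION_ORDER.index(ceiling))

if __name__ == "__main__":
    print(is_request_allowed(AIRequest("vendor-chat-enterprise", "public")))        # True
    print(is_request_allowed(AIRequest("vendor-chat-enterprise", "confidential")))  # False
    print(is_request_allowed(AIRequest("unknown-chatbot", "public")))               # False
```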

DL: What is the difference between this directive and the ban on TikTok?

NW: The Cybersecurity Directive on TikTok was implemented as a tailored response to well-documented national security risks posed by TikTok and recently enacted federal legislation that prohibited the use of the application on all federal government devices (see the “No TikTok on Government Devices Act,” enacted as part of the 2023 omnibus spending bill signed on Dec. 29, 2022). The Office of Management and Budget (OMB) issued a memorandum to federal security agencies to develop guidelines for agencies on its removal from federally issued devices (see Feb. 27, 2023, OMB memorandum).

DL: Thank you, Nathan, for answering my questions and for your service to the state of Maine and our country as Maine CISO.

FINAL THOUGHTS


For those seeking to research this topic further, see this video.
Daniel J. Lohrmann is an internationally recognized cybersecurity leader, technologist, keynote speaker and author.