To address those challenges, the Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC) recently released “Principles for the Secure Integration of Artificial Intelligence in Operational Technology,” a guide that lays out four core considerations for safely integrating AI into critical systems.
The document first recommends that teams deepen their understanding of how AI works, what it relies on, and what security concerns may arise during development and deployment. Second, it advises organizations to carefully assess whether an AI tool is justified for their specific business case and to consider how data from sensitive operational technology (OT) systems could be accessed or used. Third, it addresses governance, recommending internal structures that can continuously test AI models and ensure compliance with relevant regulations. Last, the authors stress the importance of embedding safety and security considerations into every phase of an AI project, including incident response planning.
Overall, the guidance highlights the delicate work of bringing AI and other emerging technologies into critical systems, a global concern that explains the document’s creation as an international collaboration. In addition to ACSC, the project included the National Security Agency’s Artificial Intelligence Security Center, the FBI, the Canadian Centre for Cyber Security, Germany’s Federal Office for Information Security, the national cybersecurity centers of the Netherlands and New Zealand, and the U.K.’s National Cyber Security Centre.
Federal officials said the release comes at a time when many industries are actively experimenting with machine learning, large language models and automated agents to speed operations or anticipate equipment failures. And while those technologies promise efficiencies, they also introduce new ways for systems to behave unpredictably or become exposed to cyber threats.
In a recent news release, CISA Acting Director Madhu Gottumukkala acknowledged the push-and-pull that comes with bringing AI into these systems, saying that while the technology can boost the resilience of OT environments, integrating it still demands “a thoughtful, risk-informed approach.” The goal of the guidance, he emphasized, is to ensure that AI strengthens — rather than undermines — the reliability and safety of essential services.
Although the guide focuses primarily on the forms of AI most likely to appear in industrial environments, the news release notes that the same principles can extend to systems that rely on statistical models or rules-based automation as technology evolves.