Grabbing headlines, Anthropic revealed in November 2025 how malicious actors used the company’s advanced agentic AI capabilities to execute cyber attacks. These were cyber attacks “that can be run autonomously for long periods of time and that complete complex tasks largely independent of human intervention.”
So as we enter a new phase of AI adoption, what does “preparing for a new AI world” really mean in a security context?
Perhaps more important, how can government teams prepare to be organizationally ready for the next generation of AI-generated cyber threats and adequately equipped with the right tools and skills to win the cybersecurity battles ahead?
According to the National Security Agency (NSA), while AI brings unprecedented opportunities for advancement to every organization, it also opens a large and volatile attack surface, which must be carefully and meticulously addressed.
Attackers are already using these cutting-edge AI tools to study organizational dynamics and look for weaknesses in an agency’s cyber defenses — from critical unpatched vulnerabilities, to wildcard email rules that expose executives, to policies that are unequally enforced across networks.
On a personal level, bad actors are going after C-suite leaders. AI-enabled cyber attack campaigns can blend multiple tactics. These range from malvertising and smishing to multifactor authentication (MFA) bombing, where an attacker floods a user with repeated MFA requests, hoping they will eventually approve one out of frustration.
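The MFA bombing pattern described above is detectable because legitimate users rarely generate bursts of push requests. A minimal sketch of that detection logic follows; the window length and request threshold are illustrative assumptions, not values from any specific product.

```python
from collections import deque

# Hypothetical thresholds: flag a user who receives more than 5 MFA
# push requests within a 10-minute window (values are illustrative).
WINDOW_SECONDS = 600
MAX_REQUESTS = 5

def is_mfa_bombing(request_times, window=WINDOW_SECONDS, limit=MAX_REQUESTS):
    """Return True if any sliding window of `window` seconds contains
    more than `limit` MFA requests for a single user.

    request_times: timestamps (seconds) of MFA pushes sent to one user.
    """
    recent = deque()
    for t in sorted(request_times):
        recent.append(t)
        # Drop requests that have fallen out of the sliding window.
        while recent and t - recent[0] > window:
            recent.popleft()
        if len(recent) > limit:
            return True
    return False
```

For example, six pushes in under three minutes would trip the check, while the same number of pushes spread over several hours would not.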
In August 2025, a global study from Accenture determined that 90 percent of enterprises are unprepared for AI-driven attacks. Accenture urges tech leaders to embed security into the digital transformation and AI projects that may be getting more ROI attention. The same report found that 77 percent of organizations lack data and AI-specific security practices to safeguard models, pipelines and cloud workloads.
The majority of AI and cyber experts agree that the best way to continuously address these sophisticated and rapidly growing AI-enabled cyber attacks is to fight AI fire with AI fire — or use AI tools to defend our people, data and networks.
To respond to this new AI cyber threat environment, NSA launched the Artificial Intelligence Security Center. Its top goals include detecting and countering AI vulnerabilities; advancing partnerships with industry and experts; and developing, evaluating and promoting AI security best practices.
Here are three areas to consider for improving cybersecurity in state and local governments in our new AI world:
Train staff on AI and upskill the team with new tools and certifications: The Accenture study found that 89 percent of respondents prefer to hire cybersecurity candidates with certifications. Another study found that a lack of staff with sufficient AI expertise (48 percent) is the biggest challenge foreseen by IT decision-makers when it comes to implementing AI in cybersecurity.
To help your team understand the role of AI in cyber, consider AI-cyber developmental courses from ISC2. SANS also offers several classes on AI in cybersecurity. You can also take continuing education classes from Harvard and other universities around the country covering cyber defense using AI.
Conduct organizational AI risk assessments in various areas:
- Technical Safety — evaluates robustness, reliability and failure modes of AI models.
- Bias and Fairness — checks for discriminatory outcomes or unequal performance across groups.
- Security and Misuse — analyzes vulnerabilities, model theft risks and potential malicious use.
- Ethical and Societal Impact — considers broader effects on society, rights and human well-being.
- Regulatory and Compliance — ensures alignment with laws, standards and organizational policies.
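The five assessment areas above lend themselves to a simple, repeatable summary artifact. The sketch below records a risk score per area and flags the areas that need priority attention; the 1–5 scoring scale and the threshold of 4 are illustrative assumptions, not a prescribed methodology.

```python
# The area names mirror the five risk-assessment areas listed above.
AREAS = [
    "Technical Safety",
    "Bias and Fairness",
    "Security and Misuse",
    "Ethical and Societal Impact",
    "Regulatory and Compliance",
]

def summarize_assessment(scores):
    """Given {area: risk score, 1 = low risk .. 5 = high risk}, flag
    areas scoring 4 or higher and identify the single highest-risk
    area for prioritization. Raises if any area was not assessed."""
    missing = [a for a in AREAS if a not in scores]
    if missing:
        raise ValueError(f"Unassessed areas: {missing}")
    # Flagged areas, worst first.
    flagged = sorted((a for a in AREAS if scores[a] >= 4),
                     key=lambda a: -scores[a])
    worst = max(AREAS, key=lambda a: scores[a])
    return {"flagged": flagged, "highest_risk": worst}
```

Requiring every area to be scored before a summary is produced is deliberate: it prevents an assessment from quietly skipping, say, the bias or compliance review.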
Upgrade operational real-time response: Organizations can use a new generation of AI tools to shift the operating model within their security operations center. This can provide a systems layer that interrogates every control directly, revealing shallow deployments, misconfigurations and missing protections. AI can help strengthen preventative controls, rather than relying solely on reactive signal triage.
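The "interrogate every control directly" idea can be made concrete with a small audit loop: compare an inventory of expected controls against their observed deployment state and surface the three failure modes named above. The control names, coverage thresholds and record fields here are hypothetical, meant only to sketch the shape of such a check.

```python
# Hypothetical inventory of expected controls and their minimum
# required deployment coverage (fraction of in-scope assets).
EXPECTED_CONTROLS = {
    "edr": {"min_coverage": 0.95},        # endpoint detection & response
    "mfa": {"min_coverage": 1.0},         # multifactor authentication
    "email_filtering": {"min_coverage": 0.9},
}

def audit_controls(observed):
    """observed: {control: {"coverage": float, "configured": bool}}.

    Returns a findings dict that classifies each problem control as a
    missing protection, a misconfiguration, or a shallow deployment.
    Controls that pass all checks produce no finding."""
    findings = {}
    for name, req in EXPECTED_CONTROLS.items():
        state = observed.get(name)
        if state is None:
            findings[name] = "missing protection"
        elif not state.get("configured", False):
            findings[name] = "misconfigured"
        elif state["coverage"] < req["min_coverage"]:
            findings[name] = "shallow deployment"
    return findings
```

A check like this inverts the usual SOC posture: instead of waiting for alert signal to reveal a gap, the gap itself is the finding.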
As management expert Peter Drucker said, “The greatest danger in times of turbulence is not the turbulence — it is to act with yesterday’s logic.”