CISA Publishes Security Guidance for Using AI in OT
Summary
A coalition of national cyber agencies led by the US Cybersecurity and Infrastructure Security Agency (CISA), the FBI and the NSA's AI Security Center, together with partners from Australia, Canada, Germany, the Netherlands, New Zealand and the UK, has released a 25-page set of joint principles for securely integrating artificial intelligence into operational technology (OT) environments. The paper highlights potential benefits—such as anomaly detection, diagnostics and predictive maintenance—while warning of risks unique to OT, including model drift, hallucinations and the potential to bypass safety processes.
The guidance recommends that organisations educate staff on AI risks, make clear business cases before adopting AI in OT, limit model data access, embed governance and assurance into existing security frameworks, and implement oversight and failsafe mechanisms such as human-in-the-loop controls so AI can ‘fail gracefully’ without endangering critical operations.
Key Points
- Joint guidance authored by CISA, FBI, NSA AI Security Center and multiple international partners addresses secure AI integration into OT.
- AI can improve efficiency and decision-making in OT (anomaly detection, diagnostics, predictive maintenance) but also introduces safety and reliability risks.
- Specific OT risks include model drift, probabilistic outputs conflicting with deterministic control systems, LLM hallucinations and potential exploitation of AI agents for attacks or remote code execution (RCE).
- Recommendations include education, rigorous business-case assessment, data minimisation, strong governance and assurance frameworks, and thorough testing and evaluation.
- Operational controls emphasised: human-in-the-loop decision-making, monitoring, bounded automation, and fail-safe mechanisms to avoid disrupting critical infrastructure.
- The guidance stresses that LLMs are high risk for OT use and may not be the right ML approach for certain detection and behavioural-analytics tasks under standards like NERC CIP-015.
- Vendors and contributors (e.g. Fortinet, Darktrace, Nozomi Networks) note limited current adoption; foundational OT cybersecurity practices remain the priority.
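The human-in-the-loop, bounded-automation and fail-safe controls listed above can be sketched as a simple dispatch gate. This is an illustrative sketch only, not anything prescribed by the guidance: the action names, confidence threshold and callback shapes are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A hypothetical AI output: a proposed action plus model confidence."""
    action: str
    confidence: float

# Bounded automation: only low-impact actions may ever run without a human.
AUTO_APPROVED_ACTIONS = {"log_event", "raise_alert"}
CONFIDENCE_THRESHOLD = 0.90  # illustrative value

def dispatch(rec, request_operator_review, fail_safe):
    """Gate an AI recommendation before it touches OT equipment.

    - Auto-execute only pre-approved, high-confidence, low-impact actions.
    - Route everything else to a human operator (human-in-the-loop).
    - If the operator rejects it, fall back to a safe default (fail gracefully).
    """
    if rec.action in AUTO_APPROVED_ACTIONS and rec.confidence >= CONFIDENCE_THRESHOLD:
        return rec.action
    if request_operator_review(rec):
        return rec.action
    return fail_safe()
```

The point of the design is that the AI never holds unilateral authority over safety-critical actions: the allowlist bounds the automation, and the fail-safe path guarantees a known-good state when the model's suggestion is rejected.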
Why should I read this?
Short and blunt: if you run, secure or design OT systems or are considering adding AI, this is the checklist you need. It tells you what to avoid, what to test, who should own decisions and how not to let shiny AI break safety-critical kit. Read it to dodge expensive mistakes and to prove you thought this through to auditors and regulators.
Context and Relevance
This guidance is timely because AI and OT are both attractive adversary targets and the stakes in OT settings (energy, water, manufacturing, medical, defence) are very high. The global coordination behind the document signals that AI-in-OT is seen as a systemic risk, not a niche problem. The recommendations align with broader regulatory and industry trends toward governance, behavioural analytics and demonstrable assurance. For compliance-conscious teams, the guidance clarifies why some AI techniques—notably unchecked use of LLMs—are unsuitable for certain OT functions and why bounded automation and human oversight must be central to any deployment.
Author style
Punchy: this isn’t a gentle advisory note — it’s a joint, cross-country wake-up call. If your estate touches OT, treat the details as operational imperatives rather than optional reading.
Source
Source: https://www.darkreading.com/cybersecurity-operations/cisa-publishes-security-guidance-ai-ot
