5 Ways to Strengthen Cybersecurity in Autonomous Industrial Operations

When an optimiser can move setpoints, you’ve changed the risk equation. Here’s how to make autonomy strengthen security, not weaken it.

The moment software gains authority over a kiln or compressor, it’s no longer ‘someone hacked a laptop’. It’s ‘the plant did something it shouldn’t’. For energy-intensive manufacturers, autonomy isn’t an IT problem. It’s a cyber-physical one, and the stakes are different.

But autonomy isn’t the enemy. Badly designed autonomy is. Done right, it reduces risk by replacing ad-hoc remote access and emailed setpoints with bounded, reversible control. Here’s how to pressure-test any autonomous optimisation layer before you deploy it.

1. On-prem control: keep the brain where the assets are

When software can move setpoints, you don’t want that authority travelling across the public internet. The safest pattern keeps control authority on hardened infrastructure, physically on site, right alongside the assets it manages. Cloud handles benchmarking and model training. It advises. It doesn’t act.

In practice, an industrial PC on the control network reads from PLCs, applies optimisation within guardrails, and writes setpoints locally. If the wide-area link drops, the loop keeps running on site or falls back to PLC control. That’s resilience by design, not by accident. [1]
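The loop above can be sketched in a few lines. Everything here is a hypothetical stand-in (the `Plc` class, the placeholder optimisation arithmetic, the guardrail values), not a real driver API; the point is the shape: read locally, decide locally, write locally, and treat the cloud link as advisory only.

```python
# Minimal sketch of the on-site control loop. All names and numbers are
# illustrative assumptions, not a real PLC interface.

class Plc:
    """Stand-in for a PLC connection on the local control network."""
    def __init__(self):
        self.setpoint = 50.0

    def read_temperature(self):
        return 840.0  # placeholder process value

    def write_setpoint(self, value):
        self.setpoint = value


def optimise(process_value, lo=40.0, hi=60.0):
    """Compute a new setpoint, clamped inside hard guardrails."""
    proposed = process_value / 14.0  # placeholder optimisation logic
    return max(lo, min(hi, proposed))


def control_cycle(plc, cloud_link_up):
    """One cycle: read locally, decide locally, write locally.

    The cloud link is only ever used for advisory telemetry, so losing
    it never stops the loop -- the worst case is stale benchmarking."""
    pv = plc.read_temperature()
    sp = optimise(pv)
    plc.write_setpoint(sp)  # control authority stays on site
    if cloud_link_up:
        pass  # ship telemetry for benchmarking and model training
    return sp
```

Note that the wide-area link appears only on the telemetry branch: severing it changes nothing about the write path, which is the resilience property the section describes.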

2. Zero-trust remote access: no more ‘temporary’ firewall holes

Remote access is usually the weakest link: shared VPNs, flat routing, and firewall exceptions that were only meant to last a week. A better approach is on-prem devices that establish outbound, authenticated tunnels to a hardened enclave. Engineers check in with multi-factor authentication and can only reach a specific device for a time-bound session.

There’s no lateral movement between sites. Every session is logged. Each request is authenticated and observed, not given a free pass. [2]
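A time-bound, device-scoped session can be modelled in a handful of lines. This is a hypothetical sketch, not an enclave implementation: in practice such a session would be issued by the hardened enclave after MFA, and the check would sit in front of every connection attempt.

```python
from dataclasses import dataclass
import time

# Hypothetical session model; field names are illustrative assumptions.

@dataclass
class Session:
    engineer: str
    device_id: str     # the one device this session may reach
    expires_at: float  # unix time; no open-ended access
    mfa_verified: bool


def authorise(session, device_id, now=None):
    """Allow a connection only to the named device, only while the
    session is live and MFA-backed. Everything else is denied by
    default, which is what rules out lateral movement."""
    now = time.time() if now is None else now
    return (session.mfa_verified
            and session.device_id == device_id
            and now < session.expires_at)
```

The deny-by-default shape matters more than the details: a session for one device authorises nothing else, and expiry is checked on every request rather than granted once.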

3. Least privilege for machines as well as people

An optimiser is just another subject requesting access, so treat it like a privileged user. Give it a unique identity, scoped to specific tags and write paths. No blanket authority.

Then tier that authority deliberately. Observe (read-only). Recommend (suggests setpoints). Supervised (writes with human oversight). Autonomous (runs unattended, inside constraints). And insist on instant downgrade per asset. For any given kiln or compressor, you should be able to drop back to Recommend or Manual straightaway. Least privilege isn’t just security. It’s a resilience feature.
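The tiers above map naturally onto an ordered enum with a per-asset registry. A minimal sketch, assuming hypothetical names throughout: the useful properties are that downgrade is instant and per asset, that the same method refuses to raise authority, and that an unknown asset defaults to read-only.

```python
from enum import IntEnum

# Illustrative sketch of tiered authority; not a product API.

class Tier(IntEnum):
    MANUAL = 0
    OBSERVE = 1      # read-only
    RECOMMEND = 2    # suggests setpoints
    SUPERVISED = 3   # writes with human oversight
    AUTONOMOUS = 4   # runs unattended, inside constraints


class AuthorityRegistry:
    """Tracks the optimiser's tier per asset. Downgrade is immediate;
    escalation needs a separate, deliberate path."""
    def __init__(self):
        self._tiers = {}

    def set_tier(self, asset, tier):
        self._tiers[asset] = tier

    def tier(self, asset):
        return self._tiers.get(asset, Tier.OBSERVE)  # default: read-only

    def downgrade(self, asset, to=Tier.RECOMMEND):
        """Drop back straightaway; silently refuses to raise authority."""
        if to < self.tier(asset):
            self._tiers[asset] = to

    def may_write(self, asset):
        return self.tier(asset) >= Tier.SUPERVISED
```

Because `downgrade` only ever lowers the tier, an operator hitting it during an incident can never accidentally grant the optimiser more authority than it already had.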

4. Guardrails, rate limits and forensic-grade audit

The goal is to make AI feel bounded, not magical. Start with hard guardrails in configuration, not code: min/max setpoints, safe states, interlocks. Declarative and version-controlled. Add rate limits so the optimiser can only move a valve by a set percentage per minute.
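What "guardrails in configuration, not code" can look like in practice is a small, version-controlled file per site. This is a hypothetical fragment with illustrative keys and values, not a real schema; the point is that limits are declarative, diffable and reviewable in change control.

```yaml
# Hypothetical guardrail file; keys and values are illustrative.
assets:
  kiln_1:
    setpoint_min: 780      # degrees C, hard floor
    setpoint_max: 860      # degrees C, hard ceiling
    max_move_per_min: 2.0  # rate limit on any single write
    safe_state: hold_last  # what the PLC does if the optimiser is cut off
    interlocks:
      - door_closed
      - fan_running
```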

Then demand full auditability for every write: timestamp, optimiser identity, authority tier, inputs and rationale. [3]
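Rate limiting and auditing belong on the same write path, so no setpoint change can reach the PLC without both. A minimal sketch, assuming hypothetical field names and a fixed rate limit pulled from the guardrail configuration:

```python
import json
import time

# Illustrative sketch of a rate-limited, fully audited write path.

MAX_MOVE = 2.0  # max change per write; would come from the guardrail config


def audited_write(asset, current, proposed, identity, tier, rationale, log):
    """Clamp the requested move to the rate limit, then append an audit
    record covering who wrote, with what authority, and why."""
    delta = max(-MAX_MOVE, min(MAX_MOVE, proposed - current))
    applied = current + delta
    log.append(json.dumps({
        "ts": time.time(),
        "asset": asset,
        "optimiser": identity,   # the optimiser's unique identity
        "tier": tier,            # authority tier at the time of the write
        "proposed": proposed,    # what the optimiser asked for
        "applied": applied,      # what actually went to the PLC
        "rationale": rationale,
    }))
    return applied
```

Recording both the proposed and applied values makes rate-limit clamping itself visible in the audit trail, which is exactly the kind of discrepancy an investigation needs to see.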

5. Training and drills: treat autonomy like a safety system

Even the best architecture fails if people don’t know what to do when things go wrong. Operators need to know what can change, what authority tier they’re in, and how to drop authority fast. Engineers own guardrails, rollback and change control. Security and IT need to know where the autonomy layer sits in the stack, what ‘normal’ looks like, and how to isolate it quickly. [4]

Making autonomy work for you

On-prem closed-loop control isn’t the problem. It’s part of the solution. Pair it with zero-trust access, least privilege, guardrails, forensic audit and proper training, and autonomy strengthens your cyber-physical resilience rather than undermining it.

By the QiO Technologies engineering team

References

[1] How to define zones and conduits, ISA Global Cybersecurity Alliance (2020).

[2] Zero Trust Architecture, NIST Special Publication 800-207 (2020).

[3] ISA/IEC 62443 explained: OT cybersecurity standards, Dragos (2025).

[4] Securing Industrial Control Systems, CISA (2020).