Way back at the turn of the century, Microsoft’s “10 Immutable Laws of IT Security” became a cornerstone of cybersecurity education, at least for nerds. The list included such zingers as: "If a bad guy can persuade you to run his program on your computer, it’s not your computer anymore” and "An out-of-date virus scanner is only marginally better than no virus scanner at all.”

And of course "Technology is not a panacea". The list was a simple, powerful way to convey that while technology evolves, human nature and system vulnerabilities remain consistent. More than two decades later, those laws still resonate, but the digital world has changed beyond recognition.

Artificial Intelligence now powers everything from personal assistants to autonomous defence systems, and with that power comes a new class of risks demanding a fresh perspective.

AI Is Changing the Cybersecurity Landscape

AI isn’t just a tool, it’s an accelerant. It amplifies the capabilities of both defenders and attackers. For security professionals, it enables predictive threat detection and near real-time response. For cybercriminals, it offers automation, precision, and deception at scale.

Recent trends show a 320% rise in AI-driven ransomware since 2023, with up to 70% of new malware now generated using some kind of machine learning models. However, even in the fickle world of cybersecurity statistics, such metrics are disputed more or less vociferously. The hype train has not been immune to AI FUD and there is significant debate within the cybersecurity community regarding its current maturity, real-world prevalence, and definition. Most current examples are viewed as AI-assisted rather than fully autonomous, self-replicating "AI malware".  Indicators of deception, once telltale signs of phishing or spoofing, are rapidly disappearing as generative AI crafts emails, audio, and even videos indistinguishable from reality.

Nevertheless, there is a clear shift, and it impacts everyone: technical users face evolving threat surfaces and automated adversaries, while managers must govern complex ecosystems where AI decisions blur accountability and compliance. As governments and enterprises rush to integrate AI, the challenge is no longer whether to use it, but how to use it safely.

The 10 Immutable Rules of AI Cybersecurity

It’s time to refresh the old commandments for an AI-driven world. Below are The 10 Immutable Rules of AI Cybersecurity, inspired by the enduring wisdom of traditional IT security, now reimagined for a landscape of algorithms, automation, and autonomous decision-making:

1. If you can't explain your AI's decision, you can't justify it to the public.

2. If an AI system can access resident data, so can whoever controls the model.

3. If your AI can be prompted, it can be manipulated.

4. If your AI learns from open or public data, assume it's inherited risk and bias.

5. AI-generated outputs are not facts, they're predictions.

6. The integrity of your AI depends on the integrity of your municipal data.

7. Compromised inputs create compromised outcomes.

8. Every AI integration expands your attack surface.

9. It will take AI to defend against AI.

10. AI has replaced Big Data, and citizen behaviour is the new dataset.

Why These Rules Matter

These rules aren’t just for CISOs or AI developers, they’re also a fun way for the rest of us to navigate and understand how human behaviour can be replicated for good or for bad. Feel free to stress-test them by sharing them with professional networks and like-minded peers. At the very least, it will trigger some discussion and a reminder that while AI may appear benign or even helpful, its misuse can lead to consequences rivalling or surpassing the data breaches of the big data era.

AI shouldn't change how we defend systems but it does change what needs defending.