Our lives are more digital than ever, and so is the danger lurking online. Cybercrime has always evolved alongside cybersecurity, but now, AI is hitting fast-forward. Phishing, ransomware, and malware are getting faster, smarter, and harder to detect.
But while attackers gain new AI superpowers, defenders aren’t helpless. They have their own tools.
Bottom line: the race has never been fiercer.
To understand how AI is rewriting cybersecurity, I caught up with Stu Solomon, CEO of HUMAN Security, and Philippe Humeau, CEO of CrowdSec, ahead of last week's big annual industry gathering, Les Assises de la Cybersécurité, in Monaco.
According to Solomon, the line between human and automated activity online has already blurred. “For the first time in history,” he said, “there’s more bot traffic than human traffic on the internet.”
But these aren’t the simple crawlers of the 2000s. Today’s bots, powered by AI, can act autonomously, make contextual decisions, and coordinate across systems. In short, they are becoming agentic: software that pursues goals and adapts without direct human input.
Both Solomon and Humeau agree: AI isn’t just changing cybersecurity. It’s accelerating it at an unprecedented rate.
The pressing question is: can defenders keep pace in this new agentic era?
Before: The Era of Human-Led Defense
Hackers, whether motivated by profit, ideology, or revenge, have always sought new ways to access, steal, disrupt, or destroy data and systems. Common attack vectors included:
- Phishing and vishing: fraudulent messages or voice calls designed to steal credentials
- Malware and ransomware: malicious software, including strains that lock victims out of their systems until a ransom is paid
- Zero-day exploits: taking advantage of software vulnerabilities unknown to the vendor and still unpatched
- DDoS attacks: overwhelming networks to cause outages
- Man-in-the-middle or injection attacks: hijacking communications or inserting malicious code
While some automation existed, human intervention was still essential. Defenders relied on firewalls, CAPTCHAs, endpoint protection, and SIEM (security information and event management) tools to monitor, detect, and respond, often manually.
In the pre-AI era, speed was the main challenge. CrowdSec’s Humeau recalled: “Speed was the regular problem back then. Once you were compromised, you had to act super fast.”
Even then, attacks often outpaced human response. Today, the challenge is no longer just speed (how fast threats move) but also independence (how autonomously they can act).
With AI in the loop, cyberattacks no longer wait for human input. They adapt in real time, forcing defenders to anticipate rather than react.

Today: The AI-Augmented Battlefield
The same attack types exist today, but they are now supercharged by AI. Where cybercrime once required skill and patience, it can now be industrialized and automated.
“Agentic is just the next step in a trend. Barriers to entry are simply being removed, and at a pace that’s unprecedented,” Solomon explained.
Anyone with intent and an internet connection can now weaponize AI. Off-the-shelf “turnkey attacks” can be purchased like subscription software, ready to deploy in minutes. Generative AI has also transformed deception: fake personas and synthetic voices can convincingly mimic real people, making phishing and social engineering almost impossible to spot. Humeau demonstrated how quickly identity can now be forged: “It took me fifteen seconds to change my voice.”
This industrialization has created entire black-market ecosystems: ransomware-as-a-service, phishing-as-a-service, and increasingly, malicious AI-as-a-service.
For defenders, this means adversaries that never sleep, never hesitate, and continually learn. Yet AI is also reshaping defense.
As Solomon put it: “AI doesn’t eliminate humans. It extends our capacity. It helps us make more informed decisions faster.”
HUMAN Security claims it can now analyze 20 trillion digital interactions each week, distinguishing legitimate from malicious behavior in fractions of a second. By studying how a user moves, types, and clicks, its system can tell whether an interaction is naturally human or artificially generated, an evolution of the CAPTCHA.
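HUMAN's production models are proprietary, but the core idea, telemetry about how an interaction behaves, can be shown with a toy heuristic. The Python sketch below is purely illustrative: the thresholds and the single keystroke-timing feature are my assumptions, not HUMAN's method. The intuition is that human typing jitters, while naive automation is implausibly fast and uniform.

```python
import statistics

def looks_automated(keystroke_intervals_ms: list[float]) -> bool:
    """Toy heuristic: flag an interaction as bot-like if inter-keystroke
    timing is implausibly fast or implausibly uniform.

    Hypothetical thresholds for illustration only; production systems
    combine hundreds of signals (mouse paths, device, network, etc.).
    """
    if len(keystroke_intervals_ms) < 5:
        return False  # not enough signal to judge

    mean = statistics.mean(keystroke_intervals_ms)
    stdev = statistics.stdev(keystroke_intervals_ms)

    too_fast = mean < 30      # humans rarely sustain under 30 ms between keys
    too_regular = stdev < 5   # human timing jitters; scripts often don't
    return too_fast or too_regular

# A scripted form-filler firing keys every ~10 ms, almost perfectly spaced:
print(looks_automated([10, 10, 11, 10, 10, 9]))   # True
# A human typing a password, with natural variation:
print(looks_automated([120, 95, 180, 140, 210]))  # False
```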

As every connection, app, and API expands the attack surface, visibility has become the decisive battleground. “You can’t defend what you don’t see,” Solomon warned.
Defenders must therefore detect intent before an attack even takes shape. Humeau agrees. “At CrowdSec, we’re seeing exploitation before the vulnerability is detected. That puts more pressure on defenders to know what’s going to happen next, even when they don’t yet know what it is,” he said.
If yesterday’s race was about responding faster, today’s is about seeing further ahead. And if AI-powered threats move at machine speed today, tomorrow’s may move on their own.
Near Future: The Rise of MOAI
If AI-powered attacks today feel fast and scalable, the next wave will think for itself. In his recent e-book, Humeau calls it MOAI: Multi-Modal Offensive AI.

A MOAI coordinates every phase of a cyberattack – from reconnaissance to delivery – using specialized sub-agents under a central coordinator. Some analyze code for vulnerabilities, others craft phishing messages, clone websites, mimic voices, or generate deepfakes. The coordinator decides when to strike for maximum impact or stealth.
“A MOAI can target humans through digital means – email, phone, social media – and machines through internet-connected systems. It acts as an automated Initial Access Broker, working off-radar until a successful breach or a decision point is reached,” Humeau explained.
Unlike current AI-assisted attacks, a MOAI can act semi-independently, iterating through exploits, adapting tactics, and learning from failure. Its multi-modal capabilities let it see, hear, and speak – watching executive interviews, scraping employee data, and orchestrating spear-phishing campaigns – all autonomously.
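To make that architecture concrete, here is a deliberately inert Python skeleton of the coordinator pattern Humeau describes, the kind of structure a red team might use to model a kill chain in a tabletop simulation. Everything in it is a stub: the phase names are generic kill-chain labels, and no agent performs any real action.

```python
from dataclasses import dataclass, field

@dataclass
class SubAgent:
    """A specialized worker for one phase: recon, lure-crafting, etc."""
    role: str

    def run(self, context: dict) -> dict:
        # Stub: a real agent would act here; this one only records itself.
        context.setdefault("log", []).append(f"{self.role} executed")
        return context

@dataclass
class Coordinator:
    """Central decision-maker that sequences sub-agents phase by phase
    and decides whether conditions favor advancing the campaign."""
    phases: list[SubAgent] = field(default_factory=list)

    def run_campaign(self) -> dict:
        context: dict = {}
        for agent in self.phases:
            context = agent.run(context)
            # Decision point: in Humeau's description, the coordinator
            # chooses when to act for maximum impact or stealth.
            if not self.should_continue(context):
                break
        return context

    def should_continue(self, context: dict) -> bool:
        return True  # stub; a real coordinator evaluates success signals

sim = Coordinator(phases=[SubAgent("reconnaissance"),
                          SubAgent("weaponization"),
                          SubAgent("delivery")])
print(sim.run_campaign()["log"])
```

The point of the pattern is the decision loop: each sub-agent reports back, and the coordinator, not a human, decides whether to escalate.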
Defenders are constrained by regulations and infrastructure; attackers are not. Humeau noted: “LLM hallucinations don’t matter much when you’re an attacker. A false exploit just wastes time.”
Even worse, cost favors offense: finding a vulnerability with an LLM may cost as little as $10 in API credits.
Early versions of MOAIs already exist in research labs and state programs. For now, they are largely experimental or supervised: a human-in-the-loop still decides when to strike.
But Humeau predicts that within just a few years, “MOAIs that were once government-grade will be within reach of cybercriminals for the price of a Netflix subscription.”
The trend is clear. The era of MOAI won’t be defined by faster attacks. It will be defined by attacks that no longer need attackers. For defenders, the race is no longer about speed; it’s about autonomy and foresight.
Tomorrow: Defending Against the Autonomous Threat
MOAIs are autonomous digital predators, and defensive AI must become their ecological counterweight.
The future of cybersecurity isn’t human versus machine. It’s machine versus machine, guided by human ethics and strategy.
The “detect and respond” model is crumbling. Luigi Lenguito, CEO of BforeAI, warns in Humeau’s e-book: “With the increase of velocity, variety, and volume of cyber attacks… the default reactive approach will continue to be unsatisfactory. New paradigms like predictive and preemptive security are emerging, where defenders actively disrupt criminal infrastructure.”
Active defense means hunting early signals of intent, probing the same terrain attackers exploit – but for protection. Visibility becomes the weapon.
In DevSecOps, machine learning models already detect anomalies in network traffic and applications. Firewalls like Check Point’s open-appsec train anomaly detectors on a per-user basis. Tools like Snyk and SonarQube now use LLMs to spot vulnerabilities before deployment, the same capabilities that attackers leverage.
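As a rough illustration of that anomaly-detection idea (and not how open-appsec or any specific product works), here is a minimal sketch using scikit-learn's IsolationForest on invented traffic features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented feature vectors per client: [requests/min, avg payload bytes,
# distinct endpoints hit, error rate]. Real products use far richer features.
rng = np.random.default_rng(42)
normal_traffic = rng.normal(loc=[60, 800, 5, 0.01],
                            scale=[10, 150, 2, 0.005],
                            size=(500, 4))

# Train on ordinary traffic so that outliers stand out.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# A burst that hammers many endpoints with tiny payloads and high errors,
# the shape of a scripted scan rather than a browsing session:
suspect = np.array([[900, 40, 120, 0.4]])
print(model.predict(suspect))  # -1 = anomaly, 1 = normal
```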
CrowdSec applies these principles at scale: its transformer-based models analyze signals from hundreds of thousands of servers, detecting threats collaboratively. Humeau explained: “The first test of these models ensures performance with minimal latency and the lowest possible false positives. The intelligence of the crowd becomes a shield against the intelligence of machines.”
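The crowd mechanism itself is easy to picture. The toy reduction below is not CrowdSec's actual protocol, just the core idea: independent nodes report the IPs they have flagged locally, and an address is blocklisted only once enough distinct nodes agree, which suppresses single-node false positives.

```python
from collections import Counter

def crowd_blocklist(node_reports: dict[str, set[str]],
                    quorum: int = 3) -> set[str]:
    """Toy consensus: block an IP only if at least `quorum`
    independent nodes flagged it. Illustration only."""
    votes = Counter(ip for flagged in node_reports.values() for ip in flagged)
    return {ip for ip, count in votes.items() if count >= quorum}

reports = {
    "node-a": {"203.0.113.7", "198.51.100.2"},
    "node-b": {"203.0.113.7"},
    "node-c": {"203.0.113.7", "192.0.2.99"},  # 192.0.2.99: lone report, ignored
}
print(crowd_blocklist(reports))  # {'203.0.113.7'} reaches quorum
```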

MOAI won’t end cybercrime. It will industrialize it. But it also marks a turning point: cyber defense that’s finally as intelligent, adaptive, and tireless as the threats it faces.
In the near term, experts anticipate a phase in which defensive AIs trail offensive AIs by a few years, creating a temporary but dangerous imbalance.
As security researcher Daniel Miessler notes in CrowdSec’s e-book: “The addition of fully automated AI agents to attacker workflows in 2025 and 2026 will have a profound impact. Defenders will get this functionality as well, but adoption is likely to lag attackers by a number of years in all but the most advanced companies.”
Eventually, however, the same multi-agent systems that power MOAI will be mirrored by autonomous blue teams – defensive AIs that continuously scan codebases, test infrastructure, and simulate full-scale attacks to reveal weaknesses before real ones do.
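In skeleton form, such a blue-team agent is a control loop that scans, remediates, and repeats without waiting for a human trigger. The scanner and remediation hooks below are hypothetical stubs, a sketch of the loop rather than of any real product:

```python
import time
from typing import Callable

def blue_team_loop(scan: Callable[[], list[str]],
                   remediate: Callable[[str], None],
                   interval_s: float = 3600,
                   max_cycles: int = 3) -> None:
    """Skeleton of an autonomous defensive agent: scan, act, repeat.
    `scan` and `remediate` are hypothetical hooks, e.g. a codebase
    scanner and a ticketing/patching pipeline."""
    for _ in range(max_cycles):
        for finding in scan():
            remediate(finding)  # open a ticket, apply a patch, isolate a host
        time.sleep(interval_s)

# Stubbed usage: one fake finding per cycle, printed instead of patched.
blue_team_loop(scan=lambda: ["CVE-XXXX-0001 in payments-service"],
               remediate=print,
               interval_s=0.1)
```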
With the help of agentic AI, defenders will therefore see sooner, decide faster, and eventually, mirror the autonomy of the threats they face.
