Why traditional cybersecurity is already obsolete

by Brenden Burgess


Having spent the last 20 years in cybersecurity, helping to grow security companies, I have seen attackers get creative. But Kevin Mandia's prediction that AI-fueled cyber attacks would arrive within a year isn't just on track; the data show we are already there.

The figures do not lie

Last week, Kaspersky published its 2025 statistics: more than 3 billion malware attacks worldwide, with defenders detecting an average of 467,000 malicious files per day. Trojan detections jumped 33% year over year, mobile financial threats doubled, and here is the kicker: 45% of passwords can be cracked in under a minute.
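To see why so many passwords fall that fast, it helps to run the brute-force arithmetic. Here is a back-of-the-envelope sketch; the guess rate is an assumption in the ballpark of a commodity GPU rig attacking fast, unsalted hashes, not a figure from the Kaspersky report:

```python
# Back-of-the-envelope brute-force timing. GUESSES_PER_SECOND is an
# assumed rate, roughly what commodity GPU rigs manage against fast,
# unsalted hashes (e.g. NTLM); slow hashes like bcrypt or Argon2 cut
# this by several orders of magnitude.
GUESSES_PER_SECOND = 1e10

def crack_time_seconds(alphabet_size: int, length: int) -> float:
    """Worst-case time to exhaust every password of this shape."""
    return alphabet_size ** length / GUESSES_PER_SECOND

for label, alphabet, length in [
    ("8 lowercase letters", 26, 8),
    ("8 letters + digits, mixed case", 62, 8),
    ("12 letters + digits, mixed case", 62, 12),
]:
    print(f"{label}: {crack_time_seconds(alphabet, length):,.0f} seconds")
```

Under these assumptions, an 8-character lowercase password is exhausted in about 21 seconds, while a 12-character mixed-case one holds out for thousands of years. That gap is why short, simple passwords dominate the 45% figure.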

But volume is not the whole story. The nature of the threat is changing fundamentally as AI is weaponized.

This is already happening. Here is the proof

Microsoft and OpenAI have confirmed what many of us suspected: nation-state actors are already using AI for cyber attacks. We are talking about the big players: Russia's Fancy Bear using LLMs to gather intelligence on satellite communications and radar technologies. Chinese groups like Charcoal Typhoon generating social engineering content in multiple languages and carrying out advanced post-compromise activity. Iran's Crimson Sandstorm crafting phishing emails, while North Korean groups research vulnerabilities and experts tied to the regime's nuclear program.

Want something more concerning? Kaspersky researchers are now finding malicious AI models hosted on public repositories. Cybercriminals use AI to create phishing content, develop malware, and launch deepfake-based social engineering attacks. Researchers are also seeing LLM-native vulnerabilities, AI supply chain attacks, and what they call "shadow AI": employees' unauthorized use of AI tools that exposes sensitive data.

But this is only the beginning

What we see now is AI helping attackers scale operations and port malicious code to languages and architectures they were not previously proficient in. If a nation state has developed a genuinely novel use case, we may not detect it until it is too late.

We are heading toward autonomous cyber weapons purpose-built to move unleashed through environments. These are not your typical script-kiddie attacks; we are talking about AI agents that can carry out reconnaissance, identify vulnerabilities, and execute attacks without a human in the loop.

The challenge goes beyond faster attacks. These autonomous systems cannot reliably distinguish between legitimate military infrastructure and civilian targets, what security researchers call the "principle of discrimination". When a weapon targets a power grid, it cannot tell the difference between military communications and the hospital next door.

We need global governance now

This calls for global governance and agreements similar to nuclear weapons treaties. At present, there is essentially no international framework governing the weaponization of AI. Meanwhile, three tiers of autonomous systems are already in development: supervised systems with human oversight, semi-autonomous systems that engage pre-selected targets, and fully autonomous systems that select and engage targets independently.

The frightening part? Many of these systems can be hijacked. There is no autonomous system that cannot be hacked, and the risk of non-state actors seizing control through adversarial attacks is real.

Fight fire with fire

A number of cybersecurity companies are building new ways to defend against such attacks. Take AI SOC analysts like Dropzone AI, which let teams investigate 100% of their alerts, filling a huge gap in security operations today. Or companies like Natoma, which build solutions to identify, monitor, secure, and govern AI agents across the enterprise.

The key is to fight fire with fire, or in this case, AI with AI.

Next-generation SOCs (security operations centers) that combine AI automation with human expertise are necessary to defend against the current and future state of cyber attacks. These systems can analyze attack patterns at machine speed, automatically correlate threats across multiple vectors, and respond to incidents faster than any human team could manage. They do not replace human analysts; they augment them with capabilities we desperately need.
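To make that division of labor concrete, here is a minimal sketch of machine-speed correlation and triage. Everything in it, the fields, the scoring, the 0.7 escalation threshold, is a hypothetical illustration, not any vendor's actual pipeline:

```python
# Minimal sketch of AI-assisted alert triage: correlate alerts across
# vectors, auto-close obvious noise, and escalate the rest to a human
# analyst. All names, fields, and thresholds are hypothetical.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    source: str      # detection vector, e.g. "endpoint", "email", "network"
    entity: str      # host or user the alert concerns
    severity: float  # 0.0 (noise) to 1.0 (critical), from an upstream model

def correlate(alerts: list[Alert]) -> dict[str, list[Alert]]:
    """Group alerts by the entity they concern, across all vectors."""
    groups: dict[str, list[Alert]] = defaultdict(list)
    for alert in alerts:
        groups[alert.entity].append(alert)
    return groups

def triage(groups: dict[str, list[Alert]], escalate_at: float = 0.7) -> None:
    """Auto-close low-risk clusters; hand high-risk ones to a human.

    Risk rises with both peak severity and the number of distinct
    vectors involved -- the cross-vector correlation described above.
    """
    for entity, cluster in groups.items():
        vectors = {a.source for a in cluster}
        risk = max(a.severity for a in cluster) * (1 + 0.2 * (len(vectors) - 1))
        if risk >= escalate_at:
            print(f"ESCALATE {entity}: risk={risk:.2f}, vectors={sorted(vectors)}")
        else:
            print(f"auto-close {entity}: risk={risk:.2f}")

if __name__ == "__main__":
    triage(correlate([
        Alert("email", "alice", 0.4),        # phishing report
        Alert("endpoint", "alice", 0.6),     # same user, second vector
        Alert("network", "build-server", 0.2),
    ]))
```

The point of the sketch is the split: the machine handles grouping and scoring at volume, and only clusters that cross the threshold reach a human analyst, which is how a team gets to 100% alert coverage without burning out.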

The stakes could not be higher

What makes this different from previous cyber evolutions is the potential for mass casualties. Autonomous cyber weapons targeting critical infrastructure, hospitals, power grids, and transport systems could cause physical damage on an unprecedented scale. We are no longer talking about data breaches; we are talking about AI systems that could literally endanger lives.

The window to prepare is closing fast. Mandia's one-year timeline looks optimistic when you consider that criminal organizations are already experimenting with AI-enhanced attack tools built on less-controlled AI models, not the safety-focused models from OpenAI or Anthropic.

The bottom line

Augmenting security teams with AI agents isn't just the future; it's the present. AI will not replace our defenders; it will be their 24/7 partner in defending our organizations and our nation. These systems can monitor around the clock, process massive volumes of threat intelligence, and respond to attacks in milliseconds.

But this partnership model only works if we start building it now. Every day we delay gives adversaries more time to develop autonomous offensive capabilities while our defenses remain largely human-dependent.

The question is not whether AI-powered cyber attacks will come; it is whether we will have AI-powered defenses when they do. The race is on, and frankly, we are already behind.
