Managing cybersecurity risks in the AI era

by Brenden Burgess


When it comes to cybersecurity, we must consider the good, the bad, and the ugly of artificial intelligence. While AI can strengthen defenses, cybercriminals are also using the technology to sharpen their attacks, creating emerging risks and consequences for organizations.

The good: AI's role in improving security

AI presents a powerful opportunity for organizations to improve threat detection. One emerging approach is to train machine learning algorithms to identify and flag suspicious activity or anomalies. Pairing AI security tools with cybersecurity professionals shortens response times and limits the fallout from cyberattacks.

An excellent example is automated red teaming, a form of ethical hacking that simulates real-world attacks at scale so organizations can identify vulnerabilities. Alongside the red team is the blue team, which simulates defending against attacks, and the purple team, which validates security from both perspectives. These AI-driven approaches are essential given how vulnerable enterprise large language models are to security breaches.

Previously, cybersecurity teams were limited to whatever datasets were available to train their predictive algorithms. With GenAI, organizations can create high-quality synthetic datasets to train their systems, strengthening vulnerability forecasting, streamlining security management, and hardening systems.
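To make the idea concrete, here is a minimal sketch of synthetic-data augmentation: a small set of real incident records is expanded with statistically similar synthetic samples before a predictive model is trained. The feature names and the simple Gaussian sampling are illustrative assumptions; a real pipeline would rely on a generative model to produce far richer data.

```python
# Minimal sketch: padding scarce incident data with synthetic samples
# before training a vulnerability-prediction model. Feature names and
# Gaussian sampling are illustrative assumptions, not a vendor's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Suppose only 50 real, labeled records exist:
# columns = [failed_logins_per_hour, mb_exfiltrated, patch_age_days]
real_X = rng.normal(loc=[3.0, 5.0, 30.0], scale=[2.0, 4.0, 20.0], size=(50, 3))
real_y = (real_X[:, 0] + real_X[:, 1] / 2 > 6).astype(int)  # 1 = incident

def synthesize(X, y, n_per_class=500):
    """Sample new records around each class's statistics -- a crude
    stand-in for what a generative model would produce at higher fidelity."""
    Xs, ys = [], []
    for label in np.unique(y):
        cls = X[y == label]
        Xs.append(rng.normal(cls.mean(axis=0), cls.std(axis=0) + 1e-6,
                             size=(n_per_class, X.shape[1])))
        ys.append(np.full(n_per_class, label))
    return np.vstack(Xs), np.concatenate(ys)

syn_X, syn_y = synthesize(real_X, real_y)
model = RandomForestClassifier(random_state=0)
model.fit(np.vstack([real_X, syn_X]), np.concatenate([real_y, syn_y]))
print("classes learned:", model.classes_)
```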

AI tools can also mitigate the growing threat of AI-fueled social engineering attacks. For example, they can monitor incoming communications from external parties in real time and flag likely social engineering attempts. Once an attempt is detected, an alert can be sent to both the employee and their supervisor to ensure the threat is stopped before any system compromise or leak of sensitive information.
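The sketch below illustrates that screen-and-alert flow. The cue list, the threshold, and the alert routing are all illustrative assumptions; a production system would use a trained classifier or an LLM-based detector rather than keyword matching.

```python
# Minimal sketch of screening inbound messages and alerting both the
# employee and their supervisor. All names and cues are hypothetical.
from dataclasses import dataclass

SOCIAL_ENGINEERING_CUES = [
    "urgent", "wire transfer", "gift card", "verify your password",
    "confidential", "do not tell anyone", "act now",
]

@dataclass
class InboundMessage:
    sender: str
    recipient: str
    supervisor: str
    body: str

def risk_score(msg: InboundMessage) -> float:
    """Fraction of known cues present in the message body."""
    body = msg.body.lower()
    return sum(cue in body for cue in SOCIAL_ENGINEERING_CUES) / len(SOCIAL_ENGINEERING_CUES)

def screen(msg: InboundMessage, threshold: float = 0.25) -> None:
    score = risk_score(msg)
    if score >= threshold:
        # Alert both the employee and the supervisor, per the flow above.
        for person in (msg.recipient, msg.supervisor):
            print(f"ALERT -> {person}: possible social engineering "
                  f"from {msg.sender} (score={score:.2f})")

screen(InboundMessage(
    sender="ceo@lookalike-domain.example",
    recipient="analyst@example.com",
    supervisor="manager@example.com",
    body="URGENT: wire transfer needed today. Confidential, act now.",
))
```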

However, defending against AI-powered threats is only part of the picture. Machine learning is also an essential tool for detecting insider threats and compromised accounts. According to IBM's 2025 Cost of a Data Breach report, IT failure and human error accounted for 45% of data breaches. AI can learn what "normal" operation looks like for your organization by assessing system logs, email activity, data transfers, and physical access logs. AI tools can then detect events that deviate from this baseline, helping identify the presence of a threat. Examples include detecting suspicious log entries, flagging requests to access unusual documents, and flagging entry into physical spaces that are not generally accessible.
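A minimal sketch of this baseline-then-flag pattern, using scikit-learn's IsolationForest. The specific features (login hour, megabytes transferred, after-hours badge swipes) are assumptions chosen for illustration, not a particular product's feature set.

```python
# Minimal sketch: learn a baseline of "normal" per-user-day activity,
# then flag events that deviate from it. Features are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Historical baseline of typical workday behavior.
# columns = [login_hour, mb_transferred, after_hours_badge_swipes]
baseline = np.column_stack([
    rng.normal(9, 1, 1000),      # logins cluster around 9 AM
    rng.normal(200, 50, 1000),   # ~200 MB transferred per day
    rng.poisson(0.1, 1000),      # after-hours entry is rare
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New events: one ordinary day, one that looks like data exfiltration.
new_events = np.array([
    [9.5, 210, 0],    # ordinary
    [3.0, 5000, 4],   # 3 AM login, 5 GB out, repeated after-hours entry
])
for event, label in zip(new_events, detector.predict(new_events)):
    status = "ANOMALY" if label == -1 else "ok"
    print(status, event)
```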

The bad: the evolution of AI-driven security threats

At the same time, as organizations reap the benefits of AI, cybercriminals are leveraging AI to launch sophisticated attacks. These attacks are wide-ranging, adept at evading detection, and capable of maximizing damage with unprecedented speed and precision.

The World Economic Forum's Global Cybersecurity Outlook report found that 66% of organizations across 57 countries expect AI to have a significant impact on cybersecurity this year, while nearly half (47%) of respondents identified AI-powered attacks as their main concern.

They have reason to worry. Worldwide, $12.5 billion was lost to cybercrime in 2025, a 22% increase in losses over the previous year, and losses are expected to keep rising in the years to come.

While it is impossible to predict every threat, proactively learning to recognize and prepare for AI-driven attacks is essential to putting up a good fight.

Deepfake phishing

Deepfakes are becoming a greater threat as GenAI tools become more common. According to a 2025 Deloitte survey, about a quarter of companies experienced a deepfake incident targeting financial and accounting data, and 50% expect the risk to increase in the year ahead.

This rise in deepfake phishing highlights the need to move from implicit trust to continuous validation and verification. It is as much a matter of implementing a more robust cybersecurity system as of developing a corporate culture of awareness and risk assessment.

Automated cyberattacks

Automation and AI are also proving to be a powerful combination for cybercriminals, who can use AI to create self-learning malware that continuously adapts its tactics in real time to better evade an organization's defenses. According to cybersecurity company SonicWall's 2025 Cyber Threat Report, AI automation tools make it easier for novice cybercriminals to execute complex attacks.

The ugly: the high cost of AI-powered cyberattacks and crime

In one large-scale incident last year, an employee of the multinational engineering firm Arup transferred $25 million after being instructed to do so during a video call with AI-generated deepfakes imitating his colleagues and CFO.

But the losses are not only financial. According to the Deloitte report, around 25% of business leaders consider loss of trust among stakeholders (including employees, investors, and suppliers) the greatest organizational risk posed by AI technologies, and 22% are concerned about compromised proprietary data, including the theft of trade secrets.

Another concern is AI's potential to disrupt critical infrastructure, posing serious risks to public safety and national security. Cybercriminals are increasingly targeting power grids, healthcare systems, and emergency response networks, leveraging AI to increase the scale and sophistication of their attacks. These threats could cause widespread outages, compromised patient care, or paralyzed emergency services, with potentially fatal consequences.

While organizations commit to AI ethics principles such as data responsibility and privacy, fairness, robustness, and transparency, cybercriminals are not bound by the same rules. This ethical divide amplifies the challenge of defending against AI-powered threats, as malicious actors exploit AI's capabilities without regard for societal implications or long-term consequences.

Building cyber resilience: combining human expertise with AI innovation

As cybercriminals grow more sophisticated, organizations need expert support to close the gap between the defenses they have in place and rapidly emerging, evolving threats. One way to accomplish this is to work with a trusted, experienced partner with the capacity to merge human intervention with powerful technologies for the most complete security measures.

Between AI-enhanced tactics and advanced social engineering such as deepfakes and automated malware, companies and the cybersecurity teams responsible for protecting them face a persistent and increasingly sophisticated challenge. But by better understanding the threats, combining AI with human expertise to detect, mitigate, and respond to cyberattacks, and finding trusted partners to work alongside, organizations can help tip the scales in their favor.
