Programmer seeing error popup on computer display in data center
Source: Freepik

Cybersecurity training has come a long way. Most organisations now run phishing simulations, require password hygiene modules, and have policies in place for suspicious emails. That is a solid foundation. The problem is that the threat landscape has moved on, and most training programmes have not moved with it.

AI has not simply made old attacks faster or more frequent. It has introduced entirely new categories of threat that require a fundamentally different kind of awareness. The gap between what attackers can do and what employees are trained to recognise is widening, and that gap is where breaches happen.

The threat has changed, but most employee training has not

The standard security awareness curriculum was built around a specific mental model: an attacker sends a suspicious link, an employee clicks it, and the organisation is compromised. Train employees to spot the link, and you reduce the risk.

That model is increasingly inadequate. Today's AI-powered attacks do not always announce themselves with a suspicious email. They manipulate the tools your organisation already trusts. They impersonate colleagues on video calls. They corrupt the data your leadership relies on to make decisions.

According to a 2026 threat report, the average time it takes an attacker to move laterally through a network after gaining access has dropped to just 29 minutes, with the fastest recorded case clocking in at 27 seconds. Spotting a suspicious link is no longer the primary skill your employees need.

What makes this particularly difficult for organisations is that the new generation of attacks is designed to look completely normal. There is no obviously suspicious behaviour to flag, no warning from an email filter, and no moment where an employee instinctively feels that something is wrong.

Deepfakes, data poisoning, and prompt injection: the new attack playbook

Ai-powered cybersecurity and biometric authentication
Source: Freepik

In 2024, a finance employee at engineering firm Arup was tricked into transferring $25 million to fraudsters. The attack did not involve a phishing email. It involved a deepfake video call in which the attacker convincingly impersonated senior colleagues, including the CFO. The employee had no reason to doubt what they were seeing.

This is not an isolated case. It represents a broader shift in how malicious actors operate, with convincing deepfake attacks becoming increasingly common. Synthetic media, including convincing audio and video of people who do not exist or people who do, is now accessible to attackers at scale. The same technology that powers legitimate video production tools is being used to manufacture trust in corporate settings.

Then there is data poisoning. Attackers can subtly corrupt the data feeding your AI tools, skewing the outputs your teams use for financial, operational, or strategic decisions. Unlike a ransomware attack, there is no alarm. The damage looks like a series of bad calls, and by the time anyone traces the source, the consequences are already in motion.

Prompt injection takes this further. If your organisation uses AI-powered tools, attackers can embed hidden instructions into the content those tools process, directing the AI to act against your interests without anyone noticing. Your own systems become the vector.

Building internal expertise around AI risk

Defending against these threats requires more than updated training slides. It requires security professionals who specifically understand how AI is being weaponised, how these attacks are constructed, and how to build organisational defences around them.

The challenge is that most security teams were trained for a different era. Their expertise covers network security, access controls, incident response, and compliance frameworks, all of which remain important. But understanding how a large language model can be manipulated, how training data can be corrupted, or how an AI-generated synthetic voice can bypass standard verification procedures requires a different body of knowledge entirely.

Organisations need to assess honestly whether their current security leadership has that knowledge, and if not, how to build it. That means investing in upskilling existing staff, hiring professionals with AI-specific security experience, and ensuring that whoever is responsible for AI risk governance actually understands the technology they are governing.

Some organisations are also beginning to run internal red team exercises focused specifically on AI attack scenarios, testing how their systems and their people respond to threats that fall outside the traditional security playbook.

This is a relatively new skill set, but security professionals can develop and demonstrate it through focused credentialing. The AAISM (Advanced in AI Security Management) is one example of a certification built specifically around AI security risk, covering governance, risk management, and the controls organisations need to address AI-specific threats.

For security professionals looking to get there efficiently, Destination Certification's three-day AAISM bootcamp is one of the most focused preparation options available. Led by instructors with decades of hands-on cybersecurity experience, the programme goes beyond exam prep, covering practical tools your team can apply immediately, including an AI threats quick reference, a vendor risk evaluation guide, and an AI data security checklist. Sessions run live online with direct access to instructors throughout, and participants retain a full year of access to all learning materials after the bootcamp concludes. For organisations trying to move quickly on building internal AI security expertise, that combination of depth and accessibility is worth considering.

Why conventional security awareness training falls short

Standard awareness training does one thing well: it teaches employees to pause before clicking something unfamiliar. That reflex has genuine value. But it does not transfer well to the scenarios AI-powered attackers are now creating.

A procurement manager reviewing an AI-generated supplier report has no training to help them question whether that report has been tampered with upstream. An executive on a video call with apparent colleagues has no instinct to verify whether those colleagues are real. A finance team member approving a wire transfer requested by someone who looks and sounds exactly like the CFO is not going to rely on phishing awareness to catch the problem.

In fact, fewer than 20% of organisations have updated their incident response plans to account for AI-related threats, even as these incidents become more frequent. The skills gap is not a matter of employee negligence. It is a matter of what the training was designed to address. Updating the content of existing programmes is not enough if the underlying assumptions remain the same.

What effective AI threat training actually looks like

Effective AI threat training starts with threat modelling specific to your organisation's workflows, not generic scenarios. The relevant threats for a financial services firm look different from those facing a logistics company or a healthcare provider. Training built around actual roles, actual tools, and actual processes is significantly more effective than broad awareness campaigns.

Beyond that, employees need practical frameworks for questioning AI-generated outputs, verifying identities through secondary channels, and understanding that incremental, low-visibility access is how most sophisticated attacks unfold. No single action triggers an alarm. The attack is assembled across dozens of small steps.

On the simulation side, organisations should be running deepfake phishing lures alongside traditional phishing tests and measuring response rates over time. If your team can spot a suspicious link but would wire $25 million following a convincing video call, the training programme has a gap.

Why this is a board-level decision, not an IT problem

The organisations that are responding well to AI-powered threats share one characteristic: they are not treating this as an IT department problem. Boards and leadership teams are engaging with AI risk the same way they engage with financial or operational risk, setting policy, allocating budget, and holding the right people accountable.

The cost of inaction is no longer theoretical. The Arup incident, the growing volume of deepfake fraud cases, and the expanding attack surface created by enterprise AI adoption all point in the same direction. Waiting for an incident to justify investment in AI-aware training and internal expertise is the most expensive approach available. The window for getting ahead of this is still open, but it is narrowing. The organisations that act now will be significantly better positioned than those that wait for a breach to make the case for them.