Cyber Attack
The NCSC has urged organisations and individuals to implement protective measures.

Artificial intelligence (AI) is poised to escalate the global ransomware threat over the next two years, according to a new report released on Wednesday by the National Cyber Security Centre (NCSC), a division of GCHQ.

The report highlighted that AI is already being exploited in malicious cyber activities and is expected to substantially amplify the volume and impact of cyberattacks, particularly in the realm of ransomware.

On the broader threat picture, Check Point data showed that cyber threats continued to escalate in 2023, with organisations around the world experiencing an average of 1,158 cyberattacks per week. The top industries targeted by ransomware attacks in 2023 were education/research (22 per cent of affected organisations), government/military (16 per cent) and healthcare (12 per cent).

The NCSC's near-term impact assessment of AI and cyber threats concludes that the technology is enabling relatively unskilled threat actors to conduct more sophisticated access and information-gathering operations.

The NCSC predicts that, by lowering the barrier to entry for novice cybercriminals, hackers-for-hire and hacktivists, AI will enable these actors to enhance their capabilities, adding to the global ransomware threat in the coming years.

Ransomware remains a critical cyber threat facing UK organisations and businesses, as cybercriminals continue to adapt their business models, seeking efficiencies and maximising profits, the NCSC stated.

To tackle this developing threat, the UK Government has invested £2.6 billion in its Cyber Security Strategy, with a focus on enhancing the nation's resilience. The NCSC and private industry have already adopted AI to improve threat detection and implement security by design.

NCSC CEO Lindy Cameron stressed the importance of managing the risks associated with AI in cyberattacks, saying: "The emergent use of AI in cyberattacks is evolutionary, not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term."

"As the NCSC does all it can to ensure AI systems are secure by design, we urge organisations and individuals to follow our ransomware and cyber security hygiene advice to strengthen their defences and boost their resilience to cyberattacks."

In response to the heightened threat, the government, in collaboration with the private sector, agreed the Bletchley Declaration at the AI Safety Summit at Bletchley Park in November. The initiative represents a global effort to manage the risks of frontier AI and ensure its safe and responsible development.

The report stressed the necessity of effective preparation in preventing ransomware incidents and encouraged organisations and individuals to implement the protective measures outlined in the NCSC's ransomware and cyber security hygiene guidance.

According to the National Crime Agency (NCA), cybercriminals have already begun developing criminal Generative AI (GenAI) and offering 'GenAI-as-a-service', making improved capability accessible to those willing to pay.

However, the report stated that the effectiveness of GenAI models will be constrained by the quantity and quality of data on which they are trained.

The commoditisation of AI-enabled capability echoes a report jointly published by the NCSC and NCA in September 2023, which discussed the professionalisation of the ransomware ecosystem and the shift towards the "ransomware-as-a-service" model.

James Babbage, Director General for Threats at the National Crime Agency, highlighted the national security threat posed by ransomware, stating: "Ransomware continues to be a national security threat. As this report shows, the threat is likely to increase in the coming years due to advancements in AI and the exploitation of this technology by cybercriminals."

"The NCA will continue to protect the public and reduce the serious crime threat to the UK, including by targeting criminal use of GenAI and ensuring we adopt the technology ourselves where safe and effective."

The NCSC report suggested that fraud and child sexual abuse are also likely to increase as AI services enhance cybercriminals' capabilities.


The report concluded by highlighting the role of the NCSC's advice in reducing the likelihood of ransomware infection.

CYBERUK 2024, scheduled for May in Birmingham, will further explore the challenges of securing future technology under the theme "Future Tech, Future Threat, Future Ready". The event's programme will be published in the coming days, the NCSC added.