Image caption: Google says advances in artificial intelligence that can create realistic-seeming video or audio prompted changes to its political advertisement policies. Photo: Lionel Bonaventure/AFP

A new report published by the World Economic Forum (WEF) claims AI-powered misinformation is the world's biggest short-term threat.

With 2024 dubbed by many as "the year of elections", the forum's Global Risks Report expressed fears that a wave of artificial intelligence-driven misinformation and disinformation could influence democratic processes and polarise society.

The annually published document concluded that this threat poses the most immediate risk to the global economy.

Produced in partnership with Zurich Insurance and the professional services firm Marsh McLennan, the report ranks risks over two-year and 10-year horizons, canvassing the opinions of 1,400 experts.

The authors stress that the boom in generative AI chatbots such as ChatGPT means that creating sophisticated synthetic content capable of manipulating groups of people will no longer be limited to those with specialised skills.

As a result, the report lists misinformation and disinformation as the most severe risk over the next two years, highlighting how rapid advances in technology are also creating new problems or making existing ones worse.

The document was released ahead of the annual elite gathering of CEOs and world leaders in the Swiss ski resort town of Davos, where AI is set to be high on the agenda.

The conference is expected to be attended by tech company bosses including OpenAI CEO Sam Altman, Microsoft CEO Satya Nadella and AI industry players like Meta's chief AI scientist, Yann LeCun.

Carolina Klint, a risk management leader at Marsh, whose parent company Marsh McLennan co-authored the report with Zurich Insurance Group, explained the rising threat of AI.

"You can leverage AI to do deepfakes and to impact large groups, which drives misinformation," Klint said.

That is a particular concern in the year ahead.

Elections are due to take place in countries and blocs that together represent 60 per cent of global GDP, including Britain, the US, the EU and India, and the WEF said the nexus between falsified information and societal unrest would take centre stage during campaigns.

"Societies could become further polarised as people find it harder to verify facts," Klint continued.

"Fake information also could be used to fuel questions about the legitimacy of elected governments, which means that democratic processes could be eroded, and it would also drive societal polarisation even further," the expert said.

The other big global concern among respondents to the risk survey centred on climate change.

The report found that 30 per cent of respondents thought there was a high risk of a global catastrophe over the next two years, with two-thirds fearful of a disastrous event within the next decade.

Following disinformation and misinformation, extreme weather is presented as the second-most-pressing short-term risk.

In the long term, defined as 10 years or more, extreme weather was described as the chief threat, followed by three other environment-related risks: critical change to Earth systems; biodiversity loss and ecosystem collapse; and natural resource shortages.

"An unstable global order characterised by polarising narratives and insecurity, the worsening impacts of extreme weather and economic uncertainty are causing accelerating risks – including misinformation and disinformation – to propagate," said Saadia Zahidi, a managing director of the WEF.

"World leaders must come together to address short-term crises as well as lay the groundwork for a more resilient, sustainable, inclusive future."

Last November, UK Prime Minister Rishi Sunak hosted the first AI Safety Summit at Bletchley Park, Buckinghamshire.

In the build-up to the conference, Sunak announced the establishment of a "world-first" UK AI Safety Institute.

The summit concluded with the signing of the Bletchley Declaration, an agreement among countries including the UK, the United States and China on the "need for international action to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community".