Shane Legg of Google's DeepMind AI lab believes AGI can be achieved by the end of this decade. Wikimedia Commons

The co-founder of Google's DeepMind AI Lab, Shane Legg, believes there is a possibility AI (artificial intelligence) will achieve AGI (artificial general intelligence) by 2028.

During an interview with podcaster Dwarkesh Patel, the top executive said researchers have a 50-50 chance of achieving AGI by 2028, a stance he first set out on his blog back in 2011.

This prediction is significant given that big tech companies have been showing keen interest in the AI space lately. Notably, OpenAI CEO Sam Altman has been advocating for AGI for a while now.

What is AGI and why is it a game-changer?

AGI refers to a machine that can understand the world as well as any human, according to a report by ZDNet. The report suggests an AGI would be as capable as a human at learning to carry out a wide range of tasks.

Altman was recently criticised for predicting that an AGI could one day be hired like a co-worker, though it remains unclear whether we will ever get to that point.

Legg first set his sights on 2028 back in 2001, after reading Ray Kurzweil's groundbreaking book "The Age of Spiritual Machines", which predicts the emergence of superhuman AIs.

"There were two really important points in his book that I came to believe as true," he explained. "One is that computational power would grow exponentially for at least a few decades. And that the quantity of data in the world would grow exponentially for a few decades."

At the start of the last decade, Legg predicted that AGI could well be achieved provided "nothing crazy happens like a nuclear war". In his latest interview, the DeepMind co-founder emphasised the AGI era is likely to arrive by the end of this decade.

Legg noted, however, that his prediction comes with caveats. Definitions of AGI rest on definitions of human intelligence, which is difficult to test precisely because the way humans think is complicated.

"You'll never have a complete set of everything that people can do," Legg said. This includes things like episodic memory, or the ability to recall complete episodes that happened in the past.

Is achieving AGI entirely plausible?

He also admitted that it is not easy to assemble a comprehensive set of tests covering all aspects of human intelligence. However, he suggested that if researchers could create such a battery of tests and an AI model performed well on them, it could be considered AGI.

Patel asked if there could be a simple test to determine whether an AI system had reached general intelligence, such as beating "Minecraft", but Legg disagreed with the idea.

"There is no one thing that would do it, because I think that's the nature of it," the AGI expert said. "It's about general intelligence. So I'd have to make sure [an AI system] could do lots and lots of different things and it didn't have a gap."

The second caveat Legg highlighted is the need to scale up AI training models significantly. This point is particularly relevant in this era, given that AI companies are consuming a lot of energy to develop LLMs (large language models).

Legg believes it is imperative to handle the computational demands of AGI by developing more scalable algorithms.

"There's a lot of incentive to make a more scalable algorithm to harness all this computing data. So I thought it would be very likely that we'll start to discover scalable algorithms to do this," he explained.

According to Legg, computational power is currently at a level that could make AGI achievable. He believes the "first unlocking step" would be to "start training models now with the scale of the data that is beyond what a human can experience in a lifetime".

Legg reiterated that he thinks there's only a 50 per cent chance researchers will achieve AGI before the end of this decade. It is worth noting that Google DeepMind chief Demis Hassabis recently compared AI risk to the climate crisis.