OpenAI is reportedly not working on a humanity-threatening AI, contrary to some reports. (Image: Wikimedia Commons)

The rumour mill has been churning out speculation about OpenAI's "humanity-threatening" AI, dubbed Q*, for a while now. But has the American AI company really built such a technology?

Last week, reports by Reuters and The Information indicated that several OpenAI employees had warned the company's board of directors of the "prowess" and "potential danger" of an internal research project called Q* (pronounced Q Star).

According to the reports, this AI project is capable of solving certain math problems at a grade-school level. While that may not sound impressive, the researchers involved reportedly believed Q* could be a step towards creating artificial general intelligence (AGI).

As per the Reuters report, OpenAI senior executive Mira Murati told staff that a letter about Q* led to CEO Sam Altman's dismissal earlier this month. Reportedly, the board fired Altman to stop him from releasing the supposedly humanity-threatening AI model.

Contrary to these reports, a person familiar with the matter told The Verge that the board never received a letter about such an AI model. The source also noted that Altman's abrupt firing had nothing to do with the company's research progress.

Is Q* as groundbreaking and threatening as it sounds?

Meta's chief AI scientist Yann LeCun and other prominent AI researchers on X (formerly Twitter) believe that Q* is simply an extension of ongoing work at OpenAI and other AI research labs.

Taking to the Elon Musk-owned social media platform, Rick Lamers pointed to an MIT guest lecture that OpenAI co-founder John Schulman gave seven years ago, during which he discussed a mathematical function called Q*.

For the unaware, Lamers writes the Substack newsletter "Coding with Intelligence". Some researchers suggest the "Q" in the name "Q*" stands for "Q-learning".

Q-learning refers to a reinforcement learning technique that enables a model to improve at a task over time by taking actions and being rewarded for the correct ones.
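To make that concrete, here is a minimal sketch of tabular Q-learning in Python. The toy corridor environment, reward and hyperparameters below are illustrative assumptions for the example, not details of OpenAI's Q*:

```python
import random

# An illustrative toy environment (an assumption, not OpenAI's setup): a corridor
# of 6 cells where the agent starts at cell 0 and is rewarded for reaching cell 5.
N_STATES = 6
ACTIONS = [-1, +1]                      # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate

# The Q-table holds one learned value per (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def choose_action(state):
    # Epsilon-greedy: mostly exploit the best-known action, occasionally explore.
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    best = max(Q[state])
    return random.choice([i for i, q in enumerate(Q[state]) if q == best])

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        a = choose_action(state)
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # The core Q-learning update: move Q(s, a) towards the reward plus the
        # discounted value of the best action available in the next state.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print(Q)  # after training, "move right" should carry the higher value in every cell
```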

As far as the asterisk is concerned, researchers suggest it could be a reference to A*, a classic search algorithm that finds the cheapest path between nodes in a graph by exploring routes in order of their estimated total cost.
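Again purely for illustration, here is a minimal A* sketch in Python. The grid, start, goal and Manhattan-distance heuristic are all assumptions chosen for the example:

```python
import heapq

def a_star(start, goal, neighbours, heuristic):
    """Minimal A*: returns the cheapest path from start to goal, or None."""
    # The frontier is ordered by f = g (cost so far) + h (heuristic estimate).
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, step_cost in neighbours(node):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + heuristic(nxt), new_g, nxt, path + [nxt]))
    return None

# Example: a 4x4 grid with unit-cost moves and a Manhattan-distance heuristic.
def neighbours(p):
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 4 and 0 <= y + dy < 4]

goal = (3, 3)
manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
print(a_star((0, 0), goal, neighbours, manhattan))
```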

Now, it is worth noting that both Q-learning and A* have been around for a while. In fact, Google DeepMind used Q-learning to build an AI agent capable of playing Atari 2600 games at a human level, in research first published back in 2013.

Likewise, A* dates back to an academic paper published in 1968. Moreover, researchers at UC Irvine have previously tried to improve A* with Q-learning, which might be along the lines of what OpenAI is currently exploring.
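Nobody outside OpenAI knows what Q* actually is, but purely as a sketch of how the two ideas can be combined, the hand-written heuristic in the A* example above could be swapped for a learned estimate of the remaining cost, here stubbed out as a hypothetical lookup table:

```python
# Speculative illustration only: reuse a_star() and neighbours() from the sketch
# above, but replace the hand-written Manhattan heuristic with a hypothetical
# table of learned cost-to-go estimates, standing in for the kind of value
# function a Q-learning agent could produce. Here it is stubbed with true distances.
learned_cost_to_go = {(x, y): abs(x - 3) + abs(y - 3)
                      for x in range(4) for y in range(4)}

print(a_star((0, 0), (3, 3), neighbours, lambda p: learned_cost_to_go[p]))
```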

What do researchers say about Q*?

Nathan Lambert, a research scientist at the Allen Institute for AI, told TechCrunch he believes Q* is probably connected to approaches in AI "mostly [for] studying high school math problems". If that is the case, Q* is unlikely to be an AI model with the potential to destroy humanity.

"OpenAI even shared work earlier this year improving the mathematical reasoning of language models with a technique called process reward models," Lambert said.

However, he says it will be interesting to see whether improved math abilities do anything other than make ChatGPT a "better code assistant". Meanwhile, Mark Riedl, a computer science professor at Georgia Tech, criticised Reuters' and The Information's reporting on Q*.

Riedl was also critical of the broader media narrative around the American AI company's pursuit of AGI. Citing a source, Reuters indicated that Q* could be a step towards AGI. However, several researchers, including Riedl, disagree with this theory.

"There's no evidence that suggests that large language models [like ChatGPT] or any other technology under development at OpenAI are on a path to AGI or any of the doom scenarios," Riedl told TechCrunch.

Riedl went on to point out that these ideas (Q-learning and A*) are actively being pursued by other researchers across academia and industry, with several papers published on these topics over the past six months or so.

"It's unlikely that researchers at OpenAI have had ideas that have not also been had by the substantial number of researchers also pursuing advances in AI," Riedl added.