OpenAI CEO Sam Altman
TechCrunch/Flickr

Sam Altman might be at the very top of the artificial intelligence world, but former employees allege he does not fully understand the tech he is selling. A recent investigation brought to light surprising claims about the OpenAI CEO. According to people who worked with him, the man running the biggest AI company on the planet often gets basic machine learning concepts mixed up.

These revelations paint a picture of a leader who operates primarily as a businessman rather than a technological innovator. Insiders claim Altman does not fully grasp the underlying mechanisms of the software he sells, a disconnect that has prompted questions about the company's direction.

What Former Employees Say About Altman's Technical Knowledge

The chief executive is far from being a technical expert. People around him say he does not have a deep background in coding or building machine learning models. This gap has become increasingly apparent to the engineering teams building the company's systems.

Multiple developers recalled specific instances where Altman misused or confused foundational technical terminology. A non-technical CEO is hardly unusual in Silicon Valley. Still, because OpenAI is so large and currently valued at around £63.5 billion ($80 billion), the knowledge gap is a concern for investors placing significant trust in a leader who depends on his staff to build the product.

Winning People Over Instead of Writing Code

What makes Altman stand out is his talent for persuading engineers, investors, and the public to back his vision. He routinely convinces these distinct groups that their differing priorities are his priorities, a diplomatic skill that has proven more valuable to the company than any programming language.

Whenever internal critics attempt to challenge his next move, he consistently finds the words to neutralise them. His approach involves establishing boundaries to placate concerned parties, only to dismantle those boundaries later.

'He sets up structures that, on paper, constrain him in the future,' Carroll Wainwright, a former OpenAI researcher, said.

How Altman Handles Safety Structures, According to Insiders

These structures are designed to reassure safety-conscious staff members that development will proceed responsibly. Yet, insiders say, they prove conditional whenever they conflict with the company's expansion goals.

Wainwright elaborated on how the chief executive handles self-imposed limitations once the technology advances. 'But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was.'

These findings are drawn from The New Yorker's 18-month investigation by Ronan Farrow and Andrew Marantz, which drew on more than 100 interviews and never-before-disclosed internal documents. IBTimes UK reported separately on the investigation's broader findings about Altman's conduct at OpenAI.

Altman published a blog post on 10 April describing the New Yorker piece as 'incendiary,' saying he had 'underestimated the power of words and narratives.' He acknowledged a tendency towards being 'conflict-averse' and said it had 'caused great pain for me and OpenAI.' His post came after someone threw a Molotov cocktail at his San Francisco home in the early hours of the same morning, an incident police connected to a suspect later arrested at OpenAI's headquarters. No one was injured.