OpenAI CEO Sam Altman. X / Colin @colin_digital

At the recent Snowflake Summit in San Francisco, OpenAI Chief Executive Sam Altman made a bold prediction: artificial intelligence agents may soon do more than automate routine tasks—they could begin to discover new knowledge. His statement has sparked debate among technologists, policymakers, and ethicists, raising urgent questions about the real capabilities of AI systems and their broader societal impact.

Speaking at the 2025 Snowflake Summit, Altman likened current AI agents to junior employees, capable of handling assignments such as research, content creation, and data analysis.

However, he suggested that the next evolution would see these agents move beyond assistance into generating original insights that could reshape how organisations solve complex problems and unlock new discoveries.

But what does it truly mean for artificial intelligence to discover knowledge? And is this a credible prediction—or just another example of technology hype outpacing reality?

From Automation to Insight: The Next Frontier of AI

The notion of AI agents discovering knowledge implies more than processing data or identifying patterns. It envisions autonomous systems capable of drawing conclusions, generating hypotheses, and even proposing solutions within domains such as science, law, medicine, and finance.

Altman's vision is rooted in emerging technologies such as multi-agent systems and autonomous workflows. These are environments where AI models initiate tasks independently, collaborate with other agents, and iteratively improve outcomes. Tools like Auto-GPT, Devin by Cognition, and OpenAI's own API integrations are early versions of such systems.
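To make the idea concrete, the sketch below shows the basic shape of such an autonomous workflow: a planner agent decomposes a goal into tasks, worker agents execute them, and a critic agent decides whether another round is needed. This is a minimal illustration only, not how Auto-GPT, Devin, or OpenAI's systems are actually built; the call_model() helper and the planner/worker/critic roles are hypothetical stand-ins for real LLM API calls.

```python
# A minimal sketch of an autonomous multi-agent loop. call_model() is a
# hypothetical stand-in for a real LLM API call; a production system would
# parse structured responses rather than pass raw strings around.

def call_model(role: str, prompt: str) -> str:
    """Hypothetical placeholder for an LLM call (e.g. a chat completion)."""
    return f"[{role} output for: {prompt[:40]}...]"

def planner(goal: str) -> list[str]:
    # Ask the model to decompose the goal into discrete tasks.
    plan = call_model("planner", f"Break this goal into tasks: {goal}")
    return [plan]  # A real system would parse the response into a task list.

def worker(task: str, context: str) -> str:
    # Each worker agent executes one task, seeing prior results as context.
    return call_model("worker", f"Context: {context}\nTask: {task}")

def run_agent(goal: str, max_rounds: int = 3) -> str:
    context = ""
    for _ in range(max_rounds):  # Iterative improvement loop.
        for task in planner(goal):
            context += worker(task, context) + "\n"
        # A critic agent judges whether the result is good enough to stop.
        verdict = call_model("critic", f"Is this result complete?\n{context}")
        if "yes" in verdict.lower():
            break
    return context

print(run_agent("Summarise recent findings on protein folding"))
```

The key property this loop illustrates is that no human intervenes between rounds: the agents initiate tasks, exchange intermediate results, and decide for themselves when to iterate again, which is what distinguishes these systems from single-prompt chatbots.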

This evolution moves artificial intelligence from a reactive tool to a proactive collaborator—capable of identifying insights that humans may overlook. Experts believe such systems could revolutionise industries by generating original contributions in scientific research, legal review, and strategic planning.

Some evidence already supports this trajectory. In 2022, Google DeepMind's AlphaFold2 accurately predicted the structures of over 200 million proteins, a scientific breakthrough that demonstrated AI's power to contribute in areas where human knowledge is limited.

Limitations and the Human Element

Despite the excitement, many researchers advise caution. Genuine discovery, in a human sense, involves abstraction, creativity, and contextual judgement—areas where artificial intelligence still falls short.

Professor Gary Marcus, a well-known AI commentator, described Altman's remarks as ambitious but potentially misleading. In an interview with The Financial Times, Marcus said: 'AI can interpolate data and simulate insight. However, understanding meaning, consequence, and ethics remains out of reach. That is what makes human discovery fundamentally different from data processing.'

Another significant challenge lies in reliability and interpretability. Large language models are prone to hallucinations, instances where the system generates false or misleading information with apparent confidence. If a model cannot reliably distinguish fact from fiction, how can it be trusted to discover genuine truths?
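One common mitigation, and only a partial one, is self-consistency checking: sample the same question several times and accept an answer only if the model reproduces it in a clear majority of samples. The sketch below illustrates the idea under stated assumptions; sample_answer() is a hypothetical stand-in for a real model sampled at non-zero temperature, not any specific vendor's API.

```python
# A hedged sketch of a self-consistency check for hallucination mitigation.
# sample_answer() is a hypothetical stand-in for a non-deterministic LLM call.
import random
from collections import Counter

def sample_answer(question: str) -> str:
    # Placeholder: simulates an LLM that usually, but not always, agrees
    # with itself across repeated samples.
    return random.choice(["1912", "1912", "1912", "1913"])

def self_consistent_answer(question: str, n: int = 7, threshold: float = 0.6):
    # Sample the model n times and tally the answers.
    votes = Counter(sample_answer(question) for _ in range(n))
    answer, count = votes.most_common(1)[0]
    # Return the majority answer only if it clears the agreement threshold;
    # otherwise flag the question as unreliable rather than guess.
    return answer if count / n >= threshold else None

result = self_consistent_answer("In what year did the Titanic sink?")
print(result or "No consistent answer; flag for human review.")
```

Checks of this kind reduce, but do not eliminate, the risk: a model can be consistently wrong, which is why critics argue that autonomous discovery still requires human verification.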

Altman acknowledged these concerns, emphasising the importance of safety and transparency. 'It is not about replacing scientists or thinkers,' he said. 'It is about augmenting them.'

Regulation, Ethics, and the Road Ahead

If AI agents are to generate new knowledge, there are far-reaching implications for intellectual property, research ethics, misinformation, and regulatory oversight.

Governments worldwide are racing to keep pace with developments. In the United Kingdom, the AI Safety Institute, launched in late 2023, is already tasked with testing advanced AI systems for safety, reliability, and risk before their wider release. According to the institute's inaugural report, model autonomy is a core area of concern.

Gaia Marcus, Director of the Ada Lovelace Institute, has also underscored the importance of robust governance. 'We are keen for governments to deepen their understanding of what AI models are doing in real-world contexts,' she said. 'We need to ensure that public expectations are not outpaced by private innovation.'

An Uncertain but Promising Future

Sam Altman's prediction that AI agents will discover new knowledge encapsulates the soaring ambition of the tech sector. While some early successes suggest that artificial intelligence can enhance human understanding, the journey from automation to true insight is still unfolding.

For now, the possibility that your next research collaborator could be an AI system is both exciting and unnerving. As with all AI innovation, the potential is immense—but its true value will only be proven through real-world results.