Artificial Intelligence
The debate over 'superintelligence', systems surpassing human capability, has divided experts, policymakers and the public. Pexels

Google's Gemini and OpenAI's ChatGPT, two of the world's leading AI systems, were recently asked whether they would sign the 'Statement on Superintelligence', a new pledge from the Future of Life Institute calling for a global prohibition on developing superintelligent AI until there is broad scientific consensus that it can be done safely and controllably, together with strong public buy-in. Both models declined, but their reasoning revealed distinct approaches to ethics, agency, and policy in artificial intelligence.

The Statement That Sparked the Question

The 'Statement on Superintelligence', released on 22 October by the Future of Life Institute, is brief but unambiguous. It calls for 'a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.'

Superintelligence, as defined by the signatories, refers to a hypothetical form of AI that can outperform humans on virtually all cognitive tasks. It's a goal some of the world's most powerful AI companies openly pursue.

Within a day of publication, the statement had gathered hundreds of signatories, including Apple co-founder Steve Wozniak, Virgin Group's Richard Branson, AI pioneers Yoshua Bengio and Geoffrey Hinton, and former national security officials such as Mike Mullen and Susan Rice. Its backers argue that halting the development of superintelligence is the only way to prevent irreversible risks ranging from mass economic disruption to human extinction.

The Experiment

The two chatbots' responses were posted publicly on LinkedIn by a technology analyst, who put the same straightforward question to each: if they were an academic researcher, would they sign the statement? The resulting answers offered a small but telling window into how different AI systems handle moral and political questions, as well as into the limits of what such systems can meaningfully express.

Gemini: An Academic Survey

Gemini's reply began with a disclaimer: as an AI model, it cannot 'sign' or adopt a personal stance. That procedural refusal reflects the reality that the model lacks both legal personhood and the agency to make commitments.

Having drawn that boundary, Gemini then delivered what resembled an academic survey. It laid out, in balanced terms, the arguments both for and against signing. On one hand, it cited fears of existential risk, misalignment and social disempowerment. On the other hand, it noted the need to continue safety research, the vagueness of the term 'superintelligence,' and the economic and geopolitical pressures driving AI progress.

ChatGPT: A Policy Brief in Disguise

ChatGPT's answer was more assertive and notably analytical. It declined to sign, calling the statement overbroad and impractical.

It presented four key objections:

  1. The statement targets an undefined goal rather than specific high-risk capabilities.
  2. The triggers of 'scientific consensus' and 'public buy-in' are vague and unverifiable.
  3. There is no global enforcement mechanism to monitor or regulate progress toward 'superintelligence.'
  4. A blanket ban could stifle safety and alignment research critical to preventing the very risks the statement highlights.

Instead, ChatGPT proposed a risk-based governance model grounded in measurable safety standards, international coordination, and capability-based oversight, echoing frameworks advocated by AI policy experts and regulators in both the EU and the US.

The Broader Debate

The contrasting responses come amid an intensifying global debate over AI regulation.

The Future of Life Institute's call represents one extreme, urging a halt to superintelligence development until governance frameworks mature. Its critics argue that freezing research without precise definitions or enforcement mechanisms could slow vital progress while doing nothing to stop rogue actors from pressing ahead unsafely.

Policy specialists note that superintelligence lacks an agreed scientific threshold, and that terms like 'broad consensus' are inherently fluid. Without clear criteria, enforcement remains virtually impossible.

The exchange between Gemini and ChatGPT illustrates how even the most advanced AI models interpret the dilemma differently, one procedural and the other analytical, reflecting deeper tensions in how humanity defines control over its own creations.