OpenAI CEO Sam Altman

AI companies are increasingly finding themselves at the centre of a heated global debate about ethics, security, and responsibility.

As governments around the world race to integrate advanced AI systems into defence, intelligence, and public administration, questions have grown about how much control technology firms should have over the use of their tools. Those tensions were laid bare this week when OpenAI chief executive Sam Altman addressed employees during an internal town hall meeting.

Altman delivered a blunt message highlighting the complex relationship between AI developers and governments. While OpenAI can design systems with safeguards and safety guidelines, he told staff that it cannot ultimately decide how sovereign governments deploy those technologies.

His remarks came amid rising scrutiny of the company's growing collaboration with the United States Department of Defence, which has raised ethical concerns within the technology industry.

Altman To Staff: Governments Will Make Operational Decisions

During the internal meeting, Altman reportedly made it clear that OpenAI employees do not have the authority to dictate how governments use the company's technology once it has been provided for official purposes. He explained that operational decisions, particularly those related to military or national security activities, fall entirely within the jurisdiction of the government using the technology.

To illustrate the point, Altman suggested that employees might have personal opinions about various geopolitical decisions, such as the 'Iran strike' or the 'Venezuela invasion', but those views would not influence how governments conduct operations.

The role of OpenAI, it seems, is to build and supply the technology responsibly rather than determine the policies or missions in which it is used.

His comments came after OpenAI entered a controversial agreement with the US Department of Defence to deploy its AI models within classified government systems. The deal followed a breakdown in negotiations between the Pentagon and rival AI company Anthropic, which reportedly declined to allow its systems to be used in certain government contexts because of ethical concerns about surveillance and autonomous weapons.

Altman acknowledged that OpenAI can influence how its technology is designed and what safeguards are included, but once a government adopts the system, the company does not control how day-to-day decisions are made on the ground. This distinction between building technology and governing its use is at the heart of the debate currently unfolding across the AI industry.

Pentagon Partnership Sparks Ethical Debate

The partnership between OpenAI and the US Department of Defence has drawn intense scrutiny of how artificial intelligence could be used in military operations. Critics argue that powerful AI systems could enable surveillance, automated targeting, or strategic decision-making that raises serious ethical and legal questions. Some industry figures believe technology companies should set strict limits on how their tools are used. Anthropic, for example, has publicly opposed allowing its models to be deployed in scenarios such as mass surveillance or fully autonomous weapons systems.

OpenAI, meanwhile, has defended its approach by pointing to safeguards included in its agreements with government partners. According to the company, these measures are designed to prevent uses such as domestic mass surveillance or fully autonomous weapons, although critics argue the restrictions may not be sufficiently strong or legally binding.