OpenAI Classifies ChatGPT Agent as High Bio-Risk, Strengthens Safety Measures to Prevent Abuse
Here's how ChatGPT Agent works.

OpenAI has officially classified its new ChatGPT Agent as a high bio-risk tool, citing concerns about its potential misuse in the creation of biological or chemical weapons.
The decision marks the first time the company has applied this level of risk classification to any of its AI models.
The announcement came on 17 July 2025, following the global launch of ChatGPT Agent. The software, which functions as a highly autonomous virtual assistant, has been hailed as a technological breakthrough.
However, its capabilities have also prompted serious safety reviews. Experts warn that while the tool can dramatically increase productivity, it could also lower the barrier for dangerous misuse by individuals with limited expertise.
OpenAI Flags Bioweapon Concerns
OpenAI's internal safety researchers flagged ChatGPT Agent as a potential tool for abuse in synthetic biology and chemistry. Although the company said there is no direct evidence that the model has enabled 'severe biological harm', testing revealed it could assist non-experts in carrying out harmful actions.
Boaz Barak, a member of OpenAI's technical staff, told Fortune: 'Some might think that biorisk is not real, and models only provide information that could be found via search. That may have been true in 2024, but it is definitely not true today.'
Barak explained that AI models like ChatGPT Agent are now capable of bridging the gap between general information and actionable expertise, which raises the stakes for responsible deployment.
Internal testing and external audits confirmed the model's ability to provide step-by-step assistance that could simplify the creation of biological threats.
What ChatGPT Agent Can Do
ChatGPT Agent operates from a virtual computer environment and combines capabilities from OpenAI's earlier Operator and Deep Research models. According to OpenAI's official release, the assistant can perform a wide range of tasks, from managing calendars and booking services to coding, creating slide decks, and browsing the internet autonomously.
Its key features include both visual and text-based web browsing, allowing users to interact with online content efficiently. It also features an integrated code terminal designed for automation tasks, giving users the ability to execute scripts and solve technical problems without switching platforms.
The agent connects directly to services such as Gmail and GitHub through secure APIs, streamlining workflows and increasing interoperability with common productivity tools. In addition, it supports secure file handling, allowing users to upload and download documents safely.
The system can analyse user data, generate presentations or spreadsheets, and adapt to evolving user needs with minimal input. It requests explicit user approval before taking any sensitive actions and includes controls that allow users to pause or redirect tasks in real time.
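The approval gate described above can be illustrated with a short sketch. This is not OpenAI's implementation; the action names and the `ask_user` callback are hypothetical stand-ins for whatever confirmation UI the agent surfaces, shown only to make the "explicit approval before sensitive actions" pattern concrete:

```python
from dataclasses import dataclass

# Hypothetical examples of actions an agent might treat as sensitive
SENSITIVE_ACTIONS = {"send_email", "make_purchase", "delete_file"}

@dataclass
class Action:
    name: str
    description: str

def run_action(action: Action, ask_user) -> str:
    """Execute an action, pausing for explicit user approval when it is sensitive.

    `ask_user` is a callback returning True if the user approves; declining
    models the user's ability to pause or redirect the task in real time.
    """
    if action.name in SENSITIVE_ACTIONS:
        if not ask_user(f"Approve sensitive action: {action.description}?"):
            return "skipped"  # user declined, so the agent does not proceed
    return "executed"
```

The key design point is that the check happens at execution time, per action, rather than once per session, so oversight remains active throughout the task.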
Safety Measures and Biosecurity Safeguards
In response to identified risks, OpenAI has implemented several layers of protection within ChatGPT Agent. The model is programmed to automatically refuse prompts related to bioweapon development and other high-risk topics.
All prompts and actions are monitored in real time, particularly those linked to biology, chemistry or similarly sensitive subjects. If a prompt is flagged, the system halts the task and initiates a secondary review.
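The halt-and-review flow described above is a common two-stage moderation pattern. The sketch below is an illustrative simplification under assumed names (`RISK_TERMS` as a placeholder for a real classifier, `review_confirms_risk` for the secondary review step), not OpenAI's actual pipeline:

```python
# Illustrative placeholder terms; a production system would use a trained classifier
RISK_TERMS = {"pathogen synthesis", "nerve agent"}

def classify(prompt: str) -> str:
    """First-pass filter: flag prompts touching sensitive biology/chemistry topics."""
    lowered = prompt.lower()
    return "flagged" if any(term in lowered for term in RISK_TERMS) else "clear"

def handle(prompt: str, review_confirms_risk) -> str:
    """Halt flagged tasks and escalate them to a secondary review step.

    `review_confirms_risk` returns True if the deeper review upholds the flag,
    in which case the task stays blocked; otherwise the task may resume.
    """
    if classify(prompt) == "flagged":
        return "blocked" if review_confirms_risk(prompt) else "resumed"
    return "executed"
```

Splitting detection into a cheap first pass and a slower secondary review keeps latency low for the vast majority of benign prompts while still catching borderline cases.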
To further guard against abuse, OpenAI has equipped the system with real-time detection tools that monitor for misuse of specialist knowledge, such as attempts by users without the necessary scientific background to extract actionable expertise.
The software also deletes browsing data immediately after use, manages passwords securely, and includes a 'takeover mode' that allows moderators to override and control inputs during sensitive exchanges.
Additionally, OpenAI has introduced a dynamic rate-limiting mechanism based on risk levels, ensuring that high-risk requests cannot be rapidly repeated or scaled.
High-risk commands are blocked unless the user provides verifiable consent. These permissions are limited to adult users, who are expected to maintain active oversight during operation.
External Collaboration and Oversight
To ensure a robust safety framework, OpenAI partnered with biosecurity experts, academic researchers and government officials to stress-test the model before release. The company hosted dedicated workshops to simulate real-world abuse scenarios and evaluate the model's defences under pressure.
A formal bug bounty programme is also in development, along with ongoing risk audits. OpenAI emphasised that defending against AI-enabled threats requires a global, multi-layered approach.
'A layered defence is the only way to balance innovation and safety at this level,' an OpenAI spokesperson said. 'We are committed to developing tools that support users while preventing harmful misuse.'
The rollout of ChatGPT Agent represents both a leap forward in AI utility and a critical moment in risk management. As OpenAI moves to secure its most powerful models, it hopes to set a new standard for responsible innovation in an increasingly complex digital landscape.
© Copyright IBTimes 2025. All rights reserved.