OpenAI Rolls Out ChatGPT Voice: It's Now Built Directly Into the Chat App
Users can speak naturally while seeing real-time visuals (like maps) and a running text version of the discussion

The way we talk to technology is changing. OpenAI has officially unveiled a major update, rolling out its much-anticipated voice feature directly into the ChatGPT mobile app.
No more typing—you can now have a natural, spoken conversation with your AI assistant, making interaction smoother and more immediate than ever before.
Voice Interaction Now Seamless
OpenAI has introduced a significant upgrade to ChatGPT: a smooth, integrated voice experience that no longer requires switching into a separate mode. The AI powerhouse, led by Sam Altman, confirmed the launch in an X post, stating the capability is rolling out to all users across mobile devices and browsers.
You can now use ChatGPT Voice right inside chat—no separate mode needed.
You can talk, watch answers appear, review earlier messages, and see visuals like images or maps in real time.
Rolling out to all users on mobile and web. Just update your app.
— OpenAI (@OpenAI) November 25, 2025
The update allows people to talk naturally with ChatGPT, view written records of their dialogue as it happens, and receive visual answers—including photos, diagrams, or location data—all on the same screen.
'You can now use ChatGPT Voice right inside chat—no separate mode needed,' OpenAI stated in its announcement, adding that the new experience is accessible simply by updating the app.
Voice Now Part of the Chat
The accompanying video shows how the new speech function operates from start to finish. It opens with a user greeting the assistant: 'Hey, Rocky. Great to have you here.' ChatGPT then answers effortlessly when prompted to explain the latest changes.
'Absolutely, voice is now built right into our chat, so you get a live transcript as we talk. Plus, I can show you things like maps, weather, and more in real time,' the assistant explains. This represents a move away from previous versions of ChatGPT voice, which required users to switch into a dedicated voice mode.
With this upgrade, the interaction feels more like a lively dialogue, combining spoken conversation with on-screen visual elements.
If you prefer the original experience, turn on “Separate mode” under Settings → Voice Mode.
— OpenAI (@OpenAI) November 25, 2025
In a follow-up post, OpenAI noted that those who prefer the original communication method can activate 'Separate mode' under Settings → Voice Mode.
Live Maps, Visuals, and Information
To demonstrate the new abilities, the user asks the assistant: 'Okay, okay, can you show me a map of the best bakeries in the Mission District?' The assistant instantly shows one: 'Sure, here's a map of the top bakeries in the Mission District. As you'd expect, Tartine is right up there as a favorite.'
This on-screen result appears next to the discussion text, showing how effortlessly the audio and visual parts now fit together.
The user keeps exploring the bakery topic, asking about the popular shop's sweet treats. ChatGPT provides a thorough response: 'So at Tartine they've got some beloved pastries like their morning bun, which is buttery and cinnamon-sweet, classic flaky croissants, rich pain au chocolat, and even a frangipane croissant filled with almond cream.'
Even when the user stumbles over the names, the assistant adjusts easily. When the user asks how to pronounce one of the pastries, ChatGPT responds: 'Yeah, no problem, it's pronounced frangipane,' and walks through the pronunciation.
The ease with which mistakes get corrected underlines a main aim of the new version: a genuine, human-style conversation that follows the natural flow of speech.
A Dialogue That Feels Truly Natural
OpenAI's release signals a move towards smoother, multimodal communication, in which speech, text, and images work together effortlessly. Users can speak to ChatGPT while watching answers appear instantly, reviewing previous exchanges, or looking at visuals like maps or pictures generated on the spot.
The company notes that the update is rolling out to all users on both mobile and web, with nothing required beyond updating the app. 'Rolling out to all users on mobile and web,' the X post reads. 'Just update your app.'
The new integration shows OpenAI's ongoing work to make AI conversations feel less robotic, more adaptable, and more practical in daily life. Whether finding nearby locations, requesting clarification, or just talking without using your hands, the speech function seeks to eliminate obstacles between human language and AI support.
As the demonstration finishes, the user offers a happy, brief farewell: 'Okay, amazing. Thank you.'
© Copyright IBTimes 2025. All rights reserved.





















