OpenAI upgrades its transcription and voice-generating AI models
OpenAI is bringing new transcription and voice-generating AI models to its API that the company claims improve upon its previous releases.
For OpenAI, the models fit into its broader “agentic” vision: building automated systems that can independently accomplish tasks on behalf of users. The definition of “agent” might be in dispute, but OpenAI Head of Product Olivier Godement described one interpretation as a chatbot that can speak with a business’s customers.
“We’re going to see more and more agents pop up in the coming months,” Godement told TechCrunch during a briefing. “And so the general theme is helping customers and developers leverage agents that are useful, available, and accurate.”
OpenAI claims that its new text-to-speech model, “gpt-4o-mini-tts,” not only delivers more nuanced and realistic-sounding speech but is also more “steerable” than its previous-gen speech-synthesizing models. Developers can instruct gpt-4o-mini-tts on how to say things in natural language — for example, “speak like a mad scientist” or “use a serene voice, like a mindfulness teacher.”
Jeff Harris, a member of the product staff at OpenAI, told TechCrunch that the goal is to let developers tailor both the voice “experience” and “context.”
“In different contexts, you don’t just want a flat, monotonous voice,” Harris said. “If you’re in a customer support experience and you want the voice to be apologetic because it’s made a mistake, you can actually have the voice have that emotion in it … Our big belief, here, is that developers and users want to really control not just what is spoken, but how things are spoken.”
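To give a sense of what that steerability looks like in practice, here is a minimal sketch using OpenAI's Python SDK, which exposes a text-to-speech endpoint that accepts a natural-language `instructions` parameter alongside the text to be spoken. The voice name and instruction string below are illustrative, not taken from the article:

```python
# Minimal sketch: steering gpt-4o-mini-tts with a natural-language instruction.
# Assumes the openai Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Stream synthesized speech to a file; the `instructions` field tells the
# model *how* to say the text, not just what to say.
with client.audio.speech.with_streaming_response.create(
    model="gpt-4o-mini-tts",
    voice="coral",  # one of the preset voices (illustrative choice)
    input="Sorry about the mix-up with your order; we're fixing it now.",
    instructions="Sound warm and apologetic, like a customer-support agent.",
) as response:
    response.stream_to_file("apology.mp3")
```

Swapping the instruction for something like “speak like a mad scientist” changes the delivery of the same input text, which is the kind of per-context control Harris describes.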