<p>OpenAI has released its newest model, GPT-4o.</p>
<p>GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to <a href="https://www.pnas.org/doi/10.1073/pnas.0903616106">human response time</a> in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is notably better at vision and audio understanding than existing models.</p>
<p>Prior to GPT-4o, you could use <a href="https://openai.com/index/chatgpt-can-now-see-hear-and-speak">Voice Mode</a> to talk to ChatGPT with latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4) on average. To achieve this, Voice Mode is a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio. This process means that the main source of intelligence, GPT-4, loses a lot of information—it can’t directly observe tone, multiple speakers, or background noises, and it can’t output laughter, singing, or express emotion.</p>
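<p>For illustration, the legacy pipeline can be approximated with three separate API calls. The sketch below uses the openai Python SDK; the model names ("whisper-1", "gpt-4", "tts-1"), the voice, and the file paths are assumptions made for this example rather than details confirmed by OpenAI.</p>
<pre><code># Minimal sketch of the three-stage Voice Mode pipeline described above.
# Assumes the openai Python SDK (v1.x) and an OPENAI_API_KEY in the environment;
# model names, voice, and file paths are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Stage 1: transcribe the user's audio to text
# (tone, multiple speakers, and background sound are discarded here).
with open("user_question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# Stage 2: the text-only model produces a text reply from the transcript alone.
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
reply_text = reply.choices[0].message.content

# Stage 3: a separate text-to-speech model synthesizes the reply back to audio.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply_text)
with open("assistant_reply.mp3", "wb") as out:
    out.write(speech.content)
</code></pre>
<p>Running the three stages serially is what produces the multi-second round-trip latencies quoted above, and each hand-off strips information that the single end-to-end GPT-4o model can now retain.</p>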
<p>With GPT-4o, the company trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. Because GPT-4o is OpenAI&rsquo;s first model combining all of these modalities, the company says it is still just scratching the surface of exploring what the model can do and its limitations.</p>
<p>GPT-4o’s text and image capabilities are starting to roll out now in ChatGPT. We are making GPT-4o available in the free tier, and to Plus users with up to 5x higher message limits. We&#8217;ll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks.</p>
<p>Developers can also now access GPT-4o in the API as a text and vision model. GPT-4o is 2x faster, half the price, and has 5x higher rate limits compared to GPT-4 Turbo. OpenAI plans to launch support for GPT-4o&rsquo;s new audio and video capabilities to a small group of trusted partners in the API in the coming weeks.</p>
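<p>A minimal sketch of what that text-and-vision access looks like with the openai Python SDK is shown below; the prompt and image URL are placeholders for the example.</p>
<pre><code># Minimal sketch of calling GPT-4o as a text-and-vision model through the
# Chat Completions API. Assumes the openai Python SDK (v1.x); the prompt text
# and image URL are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
</code></pre>
<p>In practice, GPT-4o is addressed through the same chat completions interface as GPT-4 Turbo, so existing text-only integrations typically only need the model name changed.</p>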
