OpenAI announced the launch of GPT-4o, an iteration of its GPT-4 model that powers its hallmark product, ChatGPT. The latest update “is much faster” and improves “capabilities across text, vision, and audio,” OpenAI CTO Mira Murati said in a livestream announcement on Monday. It’ll be free for all users, and paid users will continue to “have up to five times the capacity limits” of free users, Murati added.
In a blog post, OpenAI says GPT-4o’s capabilities “will be rolled out iteratively (with extended red team access starting today),” though its text and image capabilities are already beginning to appear in ChatGPT today.
OpenAI CEO Sam Altman posted that the model is “natively multimodal,” meaning it can generate content or understand commands in voice, text, or images. Developers who want to tinker with GPT-4o will have access to the API, which is half the price and twice as fast as GPT-4 Turbo, Altman added on X.
Prior to today’s GPT-4o launch, conflicting reports predicted that OpenAI would announce an AI search engine to rival Google and Perplexity, a voice assistant baked into GPT-4, or an entirely new and improved model, GPT-5. Of course, OpenAI timed this launch just ahead of Google I/O, the tech giant’s flagship conference, where we expect to see the launch of various AI products from the Gemini team.