At its Dev Day event on Monday, OpenAI introduced a range of updates to its API offerings, headlined by the release of GPT-5 Pro, its most advanced language model to date. Alongside GPT-5 Pro, the company also revealed Sora 2, a next-generation video creation model, and a more affordable, lightweight voice model designed for real-time applications.
These updates are part of a broader effort to attract developers to OpenAI’s growing ecosystem. The company is positioning its platform as a one-stop solution for building everything from AI agents to fully integrated apps inside ChatGPT.
GPT-5 Pro is expected to resonate with developers working in industries like finance, healthcare, and law, where precision, reliability, and in-depth reasoning are essential. According to CEO Sam Altman, the new model was designed with these high-demand use cases in mind.
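For developers, the new model is reached through the same API surface as earlier GPT models. The sketch below, written against OpenAI’s Python SDK and its Responses API, shows roughly what a call might look like; the model identifier "gpt-5-pro" and the example prompt are assumptions based on the announcement rather than confirmed details.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask a reasoning-heavy question of the kind highlighted in the announcement.
response = client.responses.create(
    model="gpt-5-pro",  # assumed identifier; check OpenAI's model list for the exact name
    input="Review this clause and list the obligations it places on the borrower: ...",
)

print(response.output_text)
```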
OpenAI also acknowledged the increasing importance of voice-based AI interactions. To address that, the company announced gpt-realtime mini, a compact and cost-effective voice model that supports low-latency streaming for speech and audio tasks. Despite being 70% cheaper than the previous top-tier voice model, it retains high-quality output and expressive voice performance.
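Real-time voice models are typically used over a streaming connection rather than one-off request/response calls. The outline below sketches how a session might be opened with OpenAI’s Python SDK; the model identifier "gpt-realtime-mini" and the session settings are assumptions drawn from the announcement, and the actual Realtime API surface should be checked against the official documentation.

```python
import asyncio
from openai import AsyncOpenAI

async def main() -> None:
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

    # Open a streaming session with the new lightweight voice model
    # ("gpt-realtime-mini" is an assumed identifier from the announcement).
    async with client.beta.realtime.connect(model="gpt-realtime-mini") as conn:
        # Request both spoken audio and a text transcript for each response.
        await conn.session.update(session={"modalities": ["audio", "text"]})

        # Send a user turn and ask the model to respond.
        await conn.conversation.item.create(
            item={
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Say hello in one short sentence."}],
            }
        )
        await conn.response.create()

        # Consume streamed events; a real app would play audio deltas as they arrive.
        async for event in conn:
            if event.type == "response.done":
                break

asyncio.run(main())
```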
On the multimedia side, OpenAI rolled out Sora 2, now available in API preview for developers. Sora 2 was launched shortly after the release of the new Sora app, which lets users generate short videos based on text prompts and interact through a TikTok-style feed filled with AI-generated clips.
Altman explained that developers now have access to the same engine powering the Sora app, allowing them to integrate stunning video generation into their own applications. This opens new creative possibilities for media platforms, marketers, and content creators.
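Based on the preview announcement, developers can generate clips through the API; the sketch below shows one plausible flow using OpenAI’s Python SDK. The videos resource, the "sora-2" model identifier, and the polling pattern are assumptions inferred from the preview and may not match the final SDK surface exactly.

```python
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Kick off a video generation job ("sora-2" is an assumed identifier from the preview).
video = client.videos.create(
    model="sora-2",
    prompt="A toy prototype rotating on a turntable, studio lighting, shallow depth of field.",
)

# Video generation is asynchronous, so poll until the job finishes.
while video.status in ("queued", "in_progress"):
    time.sleep(5)
    video = client.videos.retrieve(video.id)

if video.status == "completed":
    # Download the rendered clip to disk.
    content = client.videos.download_content(video.id)
    content.write_to_file("sora_clip.mp4")
else:
    print(f"Generation ended with status: {video.status}")
```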
Sora 2 builds on its predecessor with improvements in realism, visual consistency, and the ability to control camera movements and artistic styles. It offers finer creative direction, giving users tools to craft everything from cinematic scenes to stylized animations with ease.
A standout feature is Sora 2’s improved audio synchronization. It doesn’t just generate speech—it creates ambient audio and environmental sound effects that are accurately aligned with on-screen visuals, enhancing immersion and storytelling capabilities.
OpenAI is positioning Sora 2 as a tool for concept development across creative industries. Whether it’s visualizing early ideas for an ad campaign or transforming a product sketch into a toy prototype—as shown in a live example involving Mattel—OpenAI sees Sora 2 as a bridge between imagination and production-ready content.