OpenAI announces latest capabilities for developers

OpenAI announced several new capabilities for developers, including the availability of OpenAI o1 in the API and updates to the Realtime API.

OpenAI o1 is the company’s reasoning model for complex, multi-step tasks, and it has begun rolling out to developers on usage tier 5 of the API.

Key capabilities enabled by o1 include function calling, structured outputs, developer messages for specifying instructions or context the model should follow, vision inputs, and a new ‘reasoning_effort’ API parameter that lets developers control how long the model thinks before answering.
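For illustration, a request using the new parameter might look like the following sketch with OpenAI’s official Python SDK; the effort value and prompts here are assumptions chosen for the example, not details from the announcement.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # 'reasoning_effort' trades latency and cost against reasoning depth;
    # "low", "medium", and "high" are the documented settings.
    response = client.chat.completions.create(
        model="o1",
        reasoning_effort="medium",
        messages=[
            # developer messages carry instructions the model should follow
            {"role": "developer", "content": "Answer concisely; give only the final result."},
            {"role": "user", "content": "A train covers 300 km in 2.5 hours. What is its average speed?"},
        ],
    )
    print(response.choices[0].message.content)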

The company also claimed that OpenAI o1 uses 60% fewer reasoning tokens on average than o1-preview.

According to OpenAI, the o1 snapshot shipping today is a post-trained version of the o1 model released in ChatGPT two weeks ago. The new snapshot improves on areas of model behavior based on feedback, and it is also being rolled out in ChatGPT.

Additionally, OpenAI updated the Realtime API, which developers can use to create low-latency, natural conversational experiences such as voice assistants, live translation tools, virtual tutors, and interactive customer support systems.

The API now supports WebRTC, an open standard for building real-time voice products that allows video, voice, and generic data to be sent between peers. OpenAI’s WebRTC integration handles audio encoding, streaming, noise suppression, and congestion control.
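In the flow OpenAI describes, a backend first mints a short-lived credential, which the browser then uses to open the WebRTC connection directly, keeping the long-lived API key off the client. A minimal sketch of that server-side step, assuming a session endpoint and response shape along these lines:

    import os

    import requests

    # Request a Realtime session; the response is assumed to carry a
    # short-lived client secret that a browser WebRTC client can present
    # instead of the real API key.
    resp = requests.post(
        "https://api.openai.com/v1/realtime/sessions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-4o-realtime-preview-2024-12-17", "voice": "verse"},
    )
    resp.raise_for_status()
    # handed to the browser; expires shortly after issuance
    ephemeral_key = resp.json()["client_secret"]["value"]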

The update also includes new GPT-4o and GPT-4o mini realtime snapshots, and, citing efficiency improvements, OpenAI is dropping the audio token price by 60% and the cached audio input price by 87.5%.
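To put those discounts in concrete terms with hypothetical figures: a rate of $100 per million audio tokens would fall to $40 under a 60% cut, and a $20-per-million cached audio input rate would fall to $2.50 under an 87.5% cut.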

Other new features in the Realtime API include concurrent out-of-band responses, custom input context, controlled response timing, and an increase in the maximum session length from 15 to 30 minutes.
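Out-of-band responses, for example, let an application request a model response that stays out of the session’s default conversation, which is useful for side tasks like classification or moderation. A sketch of what such a request event might look like in the Realtime API’s JSON event protocol, with illustrative field values:

    import json

    # 'response.create' asks the model to generate a response; setting
    # 'conversation' to "none" is the out-of-band case, so the result is
    # not written back into the session's conversation state.
    event = {
        "type": "response.create",
        "response": {
            "conversation": "none",
            "metadata": {"task": "sentiment"},  # hypothetical tag for matching the reply
            "instructions": "Classify the last user turn as positive, neutral, or negative.",
        },
    }

    # In a live session this JSON would be sent over the WebSocket or the
    # WebRTC data channel, e.g. ws.send(json.dumps(event)).
    print(json.dumps(event, indent=2))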

Next, the fine-tuning API was updated to support Preference Fine-Tuning, which uses Direct Preference Optimization (DPO) to compare pairs of model responses and teach the model to favor the preferred output over the non-preferred one. According to OpenAI, this is particularly useful for subjective tasks where tone, style, and creativity matter.
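Each training example for this method pairs a prompt with a preferred and a non-preferred response. A sketch of preparing one record and starting a job with the Python SDK follows; the file ID, base model, and ‘beta’ hyperparameter are placeholders for illustration:

    import json

    from openai import OpenAI

    # One JSONL record: a prompt plus a preferred and a non-preferred answer.
    example = {
        "input": {"messages": [{"role": "user", "content": "Write a friendly out-of-office reply."}]},
        "preferred_output": [{"role": "assistant", "content": "Hi! I'm away until Monday and will reply then."}],
        "non_preferred_output": [{"role": "assistant", "content": "Unavailable. Expect delays."}],
    }
    with open("preferences.jsonl", "w") as f:
        f.write(json.dumps(example) + "\n")

    client = OpenAI()
    # 'file-abc123' stands in for the ID returned when preferences.jsonl is uploaded.
    job = client.fine_tuning.jobs.create(
        training_file="file-abc123",
        model="gpt-4o-2024-08-06",  # assumed base model
        method={"type": "dpo", "dpo": {"hyperparameters": {"beta": 0.1}}},
    )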

And finally, OpenAI announced a beta of Go and Java SDKs, adding to its existing Python, Node.js, and .NET libraries.

“Our goal is for OpenAI APIs to be easy to use, no matter what programming language you choose,” OpenAI wrote in a blog post.
