OpenAI announced numerous new options for developers who build products and services with its technology, promising the upgrades will “improve performance, flexibility, and cost-efficiency.”

In today’s live announcement — which suffered from audio problems — the OpenAI team first highlighted changes to OpenAI o1, the company’s reasoning model, which can “handle complex multi-step tasks,” according to the company. The model is now available to developers on the company’s highest usage tier; developers currently use it to build automated customer service systems, inform supply chain decisions, and even forecast financial trends.

The new o1 model can also connect to external data and APIs (Application Programming Interfaces, which let different software applications communicate with each other). Developers can also fine-tune o1’s messaging to give their AI applications a specific tone and style, and the model now has vision capabilities, so it can use images to “unlock many more applications in science, manufacturing, or coding, where visual inputs matter.”
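Connecting a model to an external API works by declaring the available function as a JSON schema in the request, so the model can decide when to call it. Here is a minimal sketch of such a request body; the `get_shipment_status` tool, its parameters, and the `"o1"` model identifier are illustrative assumptions, not details from the announcement:

```python
# Sketch of a function-calling request body for an o1-style model.
# The tool name, description, and parameters are hypothetical examples.
def build_tool_request(user_message: str) -> dict:
    """Build a chat request body that exposes one external tool to the model."""
    return {
        "model": "o1",  # assumed model identifier
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_shipment_status",  # hypothetical external API
                    "description": "Look up the status of a shipment by ID.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "shipment_id": {"type": "string"},
                        },
                        "required": ["shipment_id"],
                    },
                },
            }
        ],
    }

request = build_tool_request("Where is shipment 12345?")
```

If the model decides the tool is relevant, its response contains a structured call; the application then executes that call against its own backend and feeds the result back to the model.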


Improvements were also announced for OpenAI’s Realtime API, which developers use for voice assistants, virtual tutors, translation bots, and AI Santa voices. The company’s new WebRTC support lets developers build real-time voice services in JavaScript, with ostensibly better audio quality and more responsive interactions (for example, the Realtime API can start formulating a response while a user is still speaking). OpenAI also announced price reductions for these services.
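In the WebRTC pattern OpenAI described, the application’s server first requests a short-lived session token, which the browser then uses to open the peer connection directly. A minimal sketch of the server-side step, assuming the session endpoint and field names from the announcement-era documentation (the model name, voice name, and request shape here are assumptions that may have changed):

```python
import json

# Sketch: build the body for the assumed session-creation request
# (POST /v1/realtime/sessions). The response would contain an ephemeral
# token the browser passes to its RTCPeerConnection handshake.
def build_session_request(model: str, voice: str) -> dict:
    """Request body asking for an ephemeral Realtime session."""
    return {"model": model, "voice": voice}

body = build_session_request("gpt-4o-realtime-preview", "verse")  # assumed names
payload = json.dumps(body)  # serialized for the HTTPS request
```

Keeping the long-lived API key on the server and handing the browser only an ephemeral token is the point of this two-step flow: the client gets low-latency audio over WebRTC without ever holding the real credential.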

Also of note, OpenAI now offers developers Preference Fine-Tuning, which tailors a model to perform better on “subjective tasks where tone, style, and creativity matter” than so-called Supervised Fine-Tuning does. Catch the full presentation below.
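Preference Fine-Tuning trains on pairs of responses to the same prompt, one labeled preferred and one non-preferred, rather than on single “correct” answers as in Supervised Fine-Tuning. A sketch of what one training example might look like; the field names follow OpenAI’s published preference-pair format as an assumption, and the prompt and completions are invented:

```python
import json

# One preference pair: the same prompt with a preferred and a
# non-preferred completion. Field names are assumed from OpenAI's
# preference fine-tuning format; content is purely illustrative.
example = {
    "input": {
        "messages": [
            {"role": "user", "content": "Write a tagline for a coffee shop."}
        ]
    },
    "preferred_output": [
        {"role": "assistant", "content": "Warm cups, warmer welcomes."}
    ],
    "non_preferred_output": [
        {"role": "assistant", "content": "We sell coffee."}
    ],
}

# Training files are JSONL: one serialized example per line.
jsonl_line = json.dumps(example)
```

Because the signal is “this answer over that one” instead of a single gold answer, the format suits exactly the subjective tone-and-style tasks OpenAI describes, where two answers can both be correct but one reads better.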




