OpenAI Reorganizes Teams to Advance Audio-First AI Devices

OpenAI has consolidated internal teams to accelerate development of advanced audio models ahead of a planned audio-first personal device. The effort signals a broader industry shift toward voice-based interfaces.

By Maria Konash

OpenAI has reorganized several engineering, product, and research groups to focus on audio technology, according to reporting by The Information. The changes, made over the past two months, support development of next-generation audio models and an audio-first personal device expected to debut in roughly a year.

The company is reportedly working on a new audio model targeted for early 2026. The system is designed to sound more natural, handle interruptions, and support overlapping speech, capabilities that remain limited in current AI voice systems. OpenAI is also exploring a range of hardware products, including screenless devices and potentially smart glasses, built around conversational interaction rather than visual interfaces.

The move reflects broader momentum across the technology sector. Meta Platforms recently added advanced audio features to its Ray-Ban smart glasses, while Alphabet has begun testing audio summaries in search. Tesla is integrating xAI’s Grok chatbot into vehicles to enable voice-based control of in-car functions.

OpenAI’s hardware strategy is shaped in part by former Apple design chief Jony Ive, whose firm io was acquired for $6.5 billion in May. Ive has emphasized audio-first design as a way to reduce reliance on screens and rethink how consumers engage with devices.
