OpenAI Releases GPT-5.4 Mini and Nano Models

OpenAI introduced GPT-5.4 mini and nano, smaller models optimized for speed, cost, and high-volume AI workloads such as coding and automation.

By Daniel Mercer

OpenAI has released GPT-5.4 mini and GPT-5.4 nano, introducing more efficient versions of its flagship model designed for high-volume and latency-sensitive applications. The models aim to balance performance, speed, and cost across enterprise and developer use cases.

GPT-5.4 mini delivers significant improvements over GPT-5 mini in coding, reasoning, multimodal understanding, and tool use, while running more than twice as fast. It approaches the performance of the larger GPT-5.4 model on benchmarks such as SWE-Bench Pro and OSWorld-Verified, highlighting strong efficiency gains.

GPT-5.4 nano, the smallest and most cost-efficient variant, targets simpler tasks including classification, data extraction, and lightweight coding workflows. It is designed for scenarios where speed and scalability are critical.

Both models are optimized for real-time applications such as coding assistants, AI subagents, and multimodal systems that process images and screenshots. The release reflects a broader industry shift toward smaller, faster models that deliver strong performance without the cost and latency of large-scale systems.
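The tiering described above suggests a simple routing pattern for developers choosing between the variants. The sketch below is a hypothetical illustration based only on the roles this article describes; the task labels, routing logic, and use of these model names as API identifiers are assumptions, not confirmed OpenAI API details.

```python
# Hypothetical routing sketch: map a task category to the smallest model
# tier the article describes as suited for it. Model ID strings and task
# labels are assumptions for illustration, not confirmed API identifiers.

TASK_TIERS = {
    # GPT-5.4 nano: simpler, high-volume tasks
    "classification": "gpt-5.4-nano",
    "data_extraction": "gpt-5.4-nano",
    "lightweight_coding": "gpt-5.4-nano",
    # GPT-5.4 mini: coding, reasoning, tool use, multimodal
    "coding": "gpt-5.4-mini",
    "tool_use": "gpt-5.4-mini",
    "multimodal": "gpt-5.4-mini",
}

def pick_model(task: str, default: str = "gpt-5.4") -> str:
    """Return the cheapest tier suited to the task; fall back to the full model."""
    return TASK_TIERS.get(task, default)
```

For example, `pick_model("classification")` returns `"gpt-5.4-nano"`, while an unrecognized task falls through to the full `"gpt-5.4"` model.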
