DeepSeek Drops Two New AI Models That Rival GPT-5 and Gemini 3 Pro, and They're Open Source

DeepSeek has launched two new open-source AI models — V3.2 and V3.2-Speciale — that the company says match or exceed leading U.S. systems in advanced reasoning and tool-use tasks.

By Maria Konash
Chinese startup DeepSeek releases two new AI models rivaling GPT-5 and Gemini 3 Pro. Photo: Solen Feyissa / Unsplash

Chinese AI company DeepSeek has announced two next-generation models — DeepSeek-V3.2 and DeepSeek-V3.2-Speciale — both released under an open-source license and positioned as direct competitors to top U.S. frontier systems such as OpenAI’s GPT-5 and Google’s Gemini 3 Pro. According to the company, the new models offer advanced reasoning, long-context capabilities, and improved tool-use performance at significantly lower compute costs.

Two Models Built for High-Level Reasoning

DeepSeek describes V3.2 as a general-purpose reasoning model suitable for everyday use and V3.2-Speciale as a higher-capacity variant optimized for complex, multi-step tasks. Benchmark data published by the company indicates that the Speciale model achieves competitive results on international math and programming competitions, including the International Mathematical Olympiad (IMO), the Harvard–MIT Math Tournament, and the ICPC World Finals.

The models also incorporate an extended context window of 128,000 tokens, enabling long-form analysis and multi-stage workflow planning.

New Architecture Designed to Reduce Compute Costs

A central technical feature of the release is DeepSeek Sparse Attention (DSA), an attention mechanism designed to reduce the computational burden of processing long documents. Unlike traditional transformers, which evaluate every token in relation to all others, DSA selectively focuses on the most relevant segments. DeepSeek claims this reduces compute requirements for long-context tasks by up to 70%, lowering the cost of deployment for both researchers and developers.
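DeepSeek has not published its full DSA implementation in this announcement, but the core idea of selective attention can be illustrated with a minimal top-k sketch in NumPy: each query attends only to its few highest-scoring keys rather than every token, shrinking the effective work per query. The function name and the top-k selection rule here are illustrative, not DeepSeek's actual mechanism.

```python
import numpy as np

def sparse_topk_attention(q, k, v, top_k=4):
    """Toy sparse attention: each query row attends only to its top_k
    highest-scoring keys; all other scores are masked to -inf.
    Shapes: q (n_q, d), k (n_kv, d), v (n_kv, d)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (n_q, n_kv)
    # Threshold per query row: the top_k-th largest score.
    kth = np.sort(scores, axis=-1)[:, -top_k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)  # keep top_k entries
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over survivors
    return weights @ v                                 # (n_q, d)

rng = np.random.default_rng(0)
q = rng.standard_normal((16, 8))
k = rng.standard_normal((16, 8))
v = rng.standard_normal((16, 8))
out = sparse_topk_attention(q, k, v, top_k=4)
```

In a real long-context model the savings come from never computing the masked scores at all; this dense-then-mask version only demonstrates the selection behavior.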

Advances in Tool-Use and Agentic Reasoning

The company also emphasized improvements in its agent framework. Traditional models often lose internal reasoning context when switching between tools, reducing performance in multi-step agent workflows. DeepSeek says its new models maintain internal state across tool operations, enabling more consistent results in scenarios such as coding, research assistance, and structured planning.

Training reportedly included more than 85,000 synthetic agent tasks covering browser navigation, code execution, file manipulation, and multi-tool coordination.
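The state-maintenance behavior described above follows a common agent-loop pattern: the full message history, including every tool call and tool result, is fed back to the model at each step, so no reasoning context is dropped between tool operations. The sketch below is a generic version of that pattern, not DeepSeek's framework; the `fake_model` stub and the `add` tool are invented for illustration.

```python
import json

def run_agent(model_call, tools, user_msg, max_steps=5):
    """Generic tool-use loop: the complete history (user turns, model
    replies, tool results) is passed back on every step, preserving
    context across tool operations."""
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        reply = model_call(messages)           # model sees full history
        messages.append(reply)
        if reply.get("tool") is None:          # no tool requested: done
            return reply["content"], messages
        result = tools[reply["tool"]](**reply.get("args", {}))
        messages.append({"role": "tool", "name": reply["tool"],
                         "content": json.dumps(result)})
    return None, messages

def fake_model(messages):
    # Toy stand-in for a model: request the calculator once, then answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": "",
                "tool": "add", "args": {"a": 2, "b": 3}}
    return {"role": "assistant", "content": "2 + 3 = 5", "tool": None}

answer, history = run_agent(fake_model,
                            {"add": lambda a, b: {"sum": a + b}},
                            "What is 2 + 3?")
```

Because `history` accumulates rather than resets, the model's second call can see both its own earlier tool request and the tool's result, which is the property DeepSeek highlights for multi-step workflows.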

Open-Source Licensing and International Scrutiny

Both models are released under the MIT license, a permissive license that allows copying, modification, and commercial use with only minimal conditions such as preserving the copyright notice. This approach contrasts with the increasingly closed nature of many U.S. frontier models, which are often accessible only through hosted APIs.

However, the open release has drawn regulatory attention. Several European regulators have expressed concerns about data-privacy compliance, and some governments have moved to limit the use of DeepSeek’s products on official devices due to geopolitical sensitivities.

Availability and Next Steps

DeepSeek-V3.2 is already available via the company’s app, web interface, and API. The Speciale version is currently accessible through a temporary API endpoint and is expected to be integrated into the main V3.2 offering by mid-December 2025.
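For developers, DeepSeek's API has historically followed the OpenAI-compatible chat-completions format. The sketch below builds such a request using only the standard library; the endpoint path and the `deepseek-chat` model name reflect DeepSeek's published API conventions but should be verified against the current docs before use, and the actual network call is left commented out.

```python
import json
import os
import urllib.request

# Assumed values based on DeepSeek's OpenAI-compatible API docs;
# confirm both before relying on them.
API_URL = "https://api.deepseek.com/chat/completions"
MODEL = "deepseek-chat"

def build_request(prompt, api_key):
    """Construct (but do not send) a chat-completions HTTP request."""
    payload = {"model": MODEL,
               "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"})

req = build_request("Summarize sparse attention in one sentence.",
                    os.environ.get("DEEPSEEK_API_KEY", "sk-..."))
# To actually send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```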

The release further accelerates China's push into open-source frontier AI. Recent reporting suggests that DeepSeek's rapid growth has already prompted Google to explore deeper engagement with China's open-source ecosystem, underscoring how cost-efficient, open models are reshaping global AI competition across both commercial and academic domains.