Jakarta, INTI – Competition in artificial intelligence is intensifying as DeepSeek introduces its latest AI model, V3.2, designed to meet the demand for faster, more efficient systems that can process long text at lower cost. The launch comes amid growing global interest in AI's ability to support research, the digital industry, and software development.

DeepSeek says V3.2 was created to overcome inefficiencies and limitations in existing open-source models and to answer the long-standing need for high-performance AI. Built on a new architecture with enhanced training mechanisms, the model is positioned as a driver of AI innovation across sectors. Since its introduction, V3.2 has drawn attention for its potential to compete with two AI giants: GPT-5 and Gemini 3 Pro.
Two Model Versions: Regular and Speciale for Different Use Cases
DeepSeek released V3.2 in two main variants.
The V3.2 Regular version is designed as a reasoning assistant for everyday use, while V3.2 Speciale targets high-performance needs, competitions, and heavy technical workloads.
Both models are equipped with DeepSeek Sparse Attention (DSA), a mechanism that lets the model attend to past tokens selectively rather than all at once. Its indexing system identifies the most relevant parts of the text history, making long-context processing far more efficient without sacrificing accuracy.
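The core idea behind this kind of sparse attention can be sketched in a few lines. The snippet below is an illustrative simplification, not DeepSeek's implementation: it scores every cached token with plain dot products (standing in for DSA's learned indexer), keeps only the top-k highest-scoring tokens, and runs ordinary softmax attention over that small subset instead of the full history.

```python
import numpy as np

def sparse_attention(q, K, V, top_k):
    """Attend to only the top_k most relevant cached tokens.

    A toy stand-in for DSA: the real model uses a separate lightweight
    indexer module to score tokens, not raw query-key dot products.
    """
    # Indexer step: score every past token against the current query.
    scores = K @ q                                  # shape: (seq_len,)
    # Keep only the top_k highest-scoring positions.
    idx = np.argsort(scores)[-top_k:]
    K_sel, V_sel = K[idx], V[idx]
    # Standard scaled softmax attention, but over the subset only.
    logits = K_sel @ q / np.sqrt(q.shape[0])
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ V_sel

# Toy usage: 8 cached tokens of dimension 4, attend over the 3 most relevant.
rng = np.random.default_rng(0)
q = rng.standard_normal(4)
K = rng.standard_normal((8, 4))
V = rng.standard_normal((8, 4))
out = sparse_attention(q, K, V, top_k=3)
print(out.shape)
```

Because attention is computed over a fixed-size subset rather than the full history, the per-token cost stops growing with context length, which is where the efficiency gain on long documents comes from.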
DeepSeek also expanded its post-training compute budget to roughly 10 percent of pre-training compute, a sharp increase from the roughly 1 percent allocated two years ago.
Dataset Development and Synthetic Training Environments
To strengthen the model’s capabilities, DeepSeek developed highly specialized datasets focusing on mathematics, logic, programming, and autonomous agent training.
The company also built more than 1,800 synthetic environments, complete with thousands of real-world scenarios based on GitHub issues, to train AI agents to handle real-world challenges.
A Serious Competitor to GPT-5 and Gemini 3 Pro
Across numerous international benchmark tests, DeepSeek V3.2 has shown competitive performance.
In the AIME 2025 benchmark, the model scored 93.1%, only slightly below GPT-5's 94.6%. On the LiveCodeBench programming test, V3.2 scored 83.3%, just behind GPT-5's 84.5%. In the SWE-bench Multilingual software development benchmark, V3.2 achieved 70.2%, well ahead of GPT-5's 55.3%. DeepSeek also led in Terminal-Bench 2.0 with 46.4%, higher than GPT-5's 35.2%, though still below Gemini 3 Pro's 54.2%.
Speciale Version Surges Ahead, Outperforming Gemini 3 Pro in Global Competitions
The DeepSeek V3.2 Speciale variant demonstrates significantly stronger performance.
It earned a gold medal at the 2025 International Olympiad in Informatics (IOI), ranking 10th overall, and secured second place at the 2025 ICPC World Finals.
At the 2025 International Mathematical Olympiad (IMO), the Speciale version again led the competition, supported by integration with the DeepSeek Math V2 module.
The model also solved Codeforces problems using an average of 77,000 tokens per problem, far more than Gemini 3 Pro's roughly 22,000, reflecting heavier token consumption to reach its results.
Remaining Limitations, but Strong Future Potential
Despite its impressive performance, DeepSeek acknowledges that V3.2 still lags behind frontier models in several key areas, including token efficiency. The startup plans to address these gaps through expanded pre-training and dataset enhancement.
DeepSeek V3.2 is now available under the Apache 2.0 license on HuggingFace and via public API access.
Conclusion
DeepSeek V3.2 emerges as a major innovation in the global AI race, offering a more efficient, competitive, and flexible alternative to leading models like GPT-5 and Gemini 3 Pro. With strong performance, DSA-powered architecture, and the Speciale variant’s victories in international competitions, DeepSeek marks a new chapter in Asia’s rise within the AI landscape. Although some limitations remain, the model is projected to become one of the key players in the generative AI ecosystem in the years ahead.