DeepSeek’s R1-0528 Nips at OpenAI’s Heels—Is the AI Underdog Catching Up?

Published: 2025-05-30 10:30:42

DeepSeek’s R1-0528 now ranks right behind OpenAI’s o3 and o4-mini

Move over, OpenAI: DeepSeek’s R1-0528 just clawed its way up the leaderboard, trailing only OpenAI’s own o3 and o4-mini. The AI arms race just got a fresh contender.

No fluff, no hype: raw benchmark rankings don’t lie. While OpenAI still holds the crown, the gap’s narrowing faster than a crypto trader’s profit margins during a market correction.

Behind the scenes: relentless optimization, efficiency wrung out of constrained compute, and maybe a sprinkle of that secret-sauce architecture. But let’s be real: until it moons like an altcoin, the incumbents won’t sweat.

R1-0528 now ranks right behind OpenAI’s o3 and o4-mini

On LiveCodeBench, a benchmark that tests how well AI models generate and reason about code, R1-0528 now ranks just behind OpenAI’s o4-mini and o3 models.

“DeepSeek’s latest upgrade is sharper on reasoning, stronger on math and code, and closing in on top-tier models like Gemini and o3,” said Adina Yakefu, an AI researcher at Hugging Face.

She added that the new version shows “major improvements in inference and hallucination reduction” and proves the start-up is not merely catching up but actively competing.

The rapid progress came after Washington had restricted advanced chips and other technology exports to China. Yet Chinese firms continue to refine their systems. Earlier this month, Baidu and Tencent described ways they are making their models run more efficiently despite limited access to cutting-edge semiconductors.

Nvidia chief executive Jensen Huang criticized export controls on Wednesday. “The U.S. has based its policy on the assumption that China cannot make AI chips,” he said. “That assumption was always questionable, and now it’s clearly wrong. The question is not whether China will have AI. It already does.”

DeepSeek’s distilled model beat Alibaba’s Qwen3 8B by more than 10%

DeepSeek also said it distilled the reasoning steps used in R1-0528 into Alibaba’s Qwen3 8B Base model. According to the company, the resulting model surpassed Qwen3’s performance by more than 10% while being 30 times smaller.

“We believe the chain-of-thought from DeepSeek-R1-0528 will hold significant importance for academic research on reasoning models and industrial work on small models,” the firm stated.
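
For readers curious what “distilling the chain-of-thought” into a smaller model looks like in practice, the sketch below shows the general shape of the technique: a large teacher model’s reasoning traces are used as supervised fine-tuning data for a small student. This is a minimal illustration, not DeepSeek’s actual pipeline; the model names, the toy trace, and the use of Hugging Face’s trl library are assumptions made for the example.

```python
# Minimal sketch of chain-of-thought distillation, NOT DeepSeek's published
# pipeline. Assumes the Hugging Face `datasets` and `trl` packages are
# installed; the model name below is illustrative.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

STUDENT = "Qwen/Qwen3-8B-Base"  # small "student" model to fine-tune

# 1) In the real workflow, the large teacher (R1-0528) generates a reasoning
#    trace plus a final answer for each training prompt. A single hard-coded
#    placeholder stands in for that corpus here.
traces = Dataset.from_list([
    {
        "text": (
            "Problem: 12 * 13 = ?\n"
            "<think>12 * 13 = 12 * 10 + 12 * 3 = 120 + 36 = 156</think>\n"
            "Answer: 156"
        )
    },
])

# 2) Supervised fine-tuning trains the student to reproduce the teacher's
#    traces token by token, which is what transfers the reasoning behaviour.
trainer = SFTTrainer(
    model=STUDENT,  # trl loads the model weights from the hub name
    train_dataset=traces,
    args=SFTConfig(output_dir="qwen3-8b-distilled", max_steps=10),
)
trainer.train()
```

In practice the trace dataset would be generated at scale by the teacher rather than hard-coded, and the fine-tuned student is the model whose scores DeepSeek compares against the original Qwen3 8B.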

According to Reuters, a DeepSeek representative told a WeChat group that the change was a “minor trial upgrade” that was already open for public testing. In response to fiercer competition, Google has discounted some Gemini access tiers, while OpenAI introduced the lower-cost o3-mini model.
