Mistral AI’s New Release Undercuts ChatGPT and Claude—Without the VC Price Tag
Open-source upstart Mistral drops another bombshell model, challenging AI giants with 90% cost savings.
The 'too cheap to ignore' alternative
While Silicon Valley burns cash on trillion-parameter models, Mistral’s latest release delivers comparable performance at startup-friendly prices—proving once again that most AI 'breakthroughs' are just margin calls in disguise.

But unlike the models that made Mistral famous, this one cannot be modified by users or run locally.
Right now, the best option for open-source enthusiasts is Mixtral-8x22B-v0.3, a mixture-of-experts model that combines eight experts of 22 billion parameters each. Besides Mixtral, the company has over a dozen open-source models available.
It’s also initially available for enterprise deployment rather than consumer use via Le Chat, Mistral’s chatbot interface. Mistral AI emphasized the model’s enterprise adaptability, with support for continuous pretraining, full fine-tuning, and integration into corporate knowledge bases for domain-specific applications.
Beta customers across financial services, energy, and healthcare sectors are testing the model for customer service enhancement, business process personalization, and complex dataset analysis.
The API is available immediately on Mistral’s La Plateforme and Amazon SageMaker, with integrations planned for IBM watsonx, NVIDIA NIM, Azure AI Foundry, and Google Cloud Vertex AI.
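For developers who want to kick the tires on La Plateforme, a minimal chat-completion request looks roughly like the sketch below. This is a sketch under assumptions, not official sample code: it uses Mistral's standard chat-completions endpoint, and the model identifier "mistral-medium-latest" is assumed for illustration and may differ from the exact string used at launch.

```python
# Rough sketch of a chat-completion request to Mistral's La Plateforme API.
# Assumes the standard https://api.mistral.ai/v1/chat/completions endpoint;
# the model name "mistral-medium-latest" is illustrative and may not be exact.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-medium-latest",  # assumed identifier for the new model
        "messages": [
            {"role": "user", "content": "Summarize this quarter's support tickets in three bullet points."}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```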
The announcement sparked considerable discussion across social media platforms, with AI researchers praising the cost-efficiency breakthrough while noting the proprietary nature as a potential limitation.
The model’s closed-source status marks a departure from Mistral’s open-weight offerings, though the company hinted at future releases.
"With the launches of Mistral Small in March and Mistral Medium today, it’s no secret that we’re working on something ’large’ over the next few weeks," Mistral’s Head of Developer Relationships Sophia Yang teased in the announcement. "With even our medium-sized model being resoundingly better than flagship open source models such as Llama 4 Maverick, we’re excited to ’open’ up what’s to come."
The new model also tends to hallucinate less than the average model, which is excellent news considering its size. It scores better than Meta’s Llama 4 Maverick, DeepSeek V3, and Amazon Nova Pro, to name a few; right now, the model with the lowest hallucination rate is Google’s recently launched Gemini 2.5 Pro.
This release comes amid impressive business growth for the Paris-based company, which had kept a relatively low profile since releasing Mistral Large 2 last year.
Mistral recently launched an enterprise version of its Le Chat chatbot that integrates with Microsoft SharePoint and Google Drive, with CEO Arthur Mensch telling Reuters they’ve "tripled [their] business in the last 100 days, in particular in Europe and outside of the U.S."
The company, now valued at $6 billion, is flexing its technological independence by operating its own compute infrastructure and reducing reliance on U.S. cloud providers, a strategic move that resonates in Europe amid strained relations following President Trump’s tariffs on tech products.
Whether Mistral’s claim of achieving enterprise-grade performance at consumer-friendly prices holds up in real-world deployment remains to be seen.
But for now, Mistral has positioned Medium 3 as a compelling middle ground in an industry that often assumes bigger (and pricier) equals better.
Edited by Josh Quittner and Sebastian Sinclair