
French AI startup Mistral is back in the spotlight—and they’ve just dropped something that could seriously shake up the AI industry’s cost vs. performance game.
Introducing Mistral Medium 3, the newest large language model in their lineup. It’s fast, efficient, and priced to disrupt: just $0.40 per million input tokens and $2 per million output tokens. And despite the budget-friendly price tag, it’s punching way above its weight. Mistral claims Medium 3 performs on par with—if not better than—Anthropic’s Claude 3.7 Sonnet on major AI benchmarks. It also edges past open models like Meta’s Llama 4 Maverick and Cohere’s Command R+.
In plain terms? You’re getting elite AI capabilities without elite-level pricing.
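To put those rates in perspective, here’s a quick back-of-envelope cost sketch in Python. The per-token rates come from the pricing above; the workload figures (chat volume, tokens per chat) are invented purely for illustration.

```python
# Back-of-envelope cost estimate at Mistral Medium 3's listed rates.
# Rates are from the announcement; the workload numbers are hypothetical.

INPUT_RATE = 0.40 / 1_000_000   # USD per input token
OUTPUT_RATE = 2.00 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for a given token volume."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a hypothetical support bot handling 10,000 chats a day,
# averaging 1,500 input tokens and 400 output tokens per chat.
daily_input = 10_000 * 1_500    # 15M input tokens/day
daily_output = 10_000 * 400     # 4M output tokens/day
print(f"${estimate_cost(daily_input, daily_output):.2f} per day")
```

At those volumes the math works out to roughly $14 a day: $6 for input (15M tokens × $0.40/M) plus $8 for output (4M tokens × $2/M). That order of magnitude is the whole pitch.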
Medium 3 is already making waves in industries like finance, energy, and healthcare, where teams are using it to power customer support, automate workflows, and dig into complex data. It shines especially bright when it comes to STEM and coding tasks—and it’s built for multimodal smarts, too. Best part? It’s flexible. You can deploy it on your own setup (four GPUs and up), or tap into it via Mistral’s API. No vendor lock-in, no drama.
The model is live now on AWS SageMaker, with support for Azure AI Foundry and Google Vertex AI coming soon. And if you’re running an enterprise stack, Mistral’s got you covered there too. They’ve also launched Le Chat Enterprise—a corporate chatbot platform with AI agent tools and integrations for Gmail, Google Drive, and SharePoint. Basically, think ChatGPT, but business-ready.
Mistral is backed by over €1.1 billion in funding, with customers like AXA, BNP Paribas, and Mirakl already on board. And they’re not done yet—there’s talk of a bigger model dropping soon.
Bottom line? Mistral Medium 3 delivers serious AI power, real-world readiness, and an unbeatable price point. Game officially on.