The Team Behind the Engine: A Tribute to Mistral AI & Voxtral
Every time you speak into Vox Bar and watch your words appear on screen, you're using technology built by a small team in Paris who believed AI should belong to everyone. This is their story — and our thank you.
It Started With Three Researchers in Paris
In April 2023, three French AI researchers walked away from two of the biggest names in tech — Google DeepMind and Meta — to start something different. Their names are Arthur Mensch, Guillaume Lample, and Timothée Lacroix. They'd met studying at École Polytechnique, one of France's most prestigious engineering schools, and they shared a conviction: AI is too important to be controlled by a handful of American corporations.
They founded Mistral AI in Paris with a radical premise — build the most powerful AI models in the world and release them as open source. Let anyone download them. Let anyone run them. Let the technology belong to everyone.
The Three Founders
Arthur Mensch — CEO
Arthur came from Google DeepMind, where he worked on large-scale language models. As CEO of Mistral, he's been the driving force behind the company's open-source-first philosophy. His view is simple: AI models should be tools you own, not services you rent. Under his leadership, Mistral has grown from three people to over 400 employees across offices in Paris, London, Palo Alto, Germany, and Singapore.
Guillaume Lample — Chief Scientist
Guillaume spent years at Meta working on large-scale AI research. As Mistral's Chief Scientist, he leads the technical direction — ensuring their models don't just match the big players but genuinely compete with them. The efficiency breakthroughs in Mistral's models, including the Mixture of Experts architecture, bear his fingerprints.
Timothée Lacroix — CTO
Timothée, also from Meta, is the engineering brain behind Mistral's infrastructure. He's responsible for turning research breakthroughs into models that actually work at scale — efficiently, reliably, and in formats that anyone can download and run on their own hardware.
From Startup to $14 Billion in Just Over Two Years
Mistral's growth has been extraordinary:
- June 2023 — Seed round: €105 million. Backed by Lightspeed Venture Partners and Eric Schmidt (former Google CEO).
- December 2023 — Series A: €385 million. Led by Andreessen Horowitz, with Salesforce joining. Valuation passes $2 billion.
- June 2024 — Series B: €600 million. Led by General Catalyst, with NVIDIA, Samsung, IBM, and BNP Paribas joining. Valuation hits $6 billion.
- September 2025 — Series C: €1.7 billion. Led by ASML. Valuation reaches $14 billion. The largest AI funding round in European history.
- February 2026 — Mistral commits €1.2 billion to build AI data centres in Sweden, cementing their position as Europe's AI champion.
But here's what matters most: despite the billions, they kept their commitment to openness. Mistral's open-weight models — Voxtral among them — are available for anyone to download, run, and build on. No gatekeeping. No permission required.
Voxtral: The Model That Powers Your Voice
In July 2025, Mistral released Voxtral — a family of open-source speech models that changed everything for voice AI. Released under the permissive Apache 2.0 licence, Voxtral is free for anyone to use, modify, and build products on.
The Voxtral family includes:
- Voxtral Small — 24.3 billion parameters (~16 GB on your GPU). Production-grade transcription for enterprise use.
- Voxtral Mini — 4.7 billion parameters (~3 GB on your GPU). Optimised for local and edge deployment. This is the model that makes Vox Bar possible.
- Voxtral Mini Transcribe 2 — Enhanced with speaker diarisation, word-level timestamps, and context biasing.
- Voxtral Realtime — Multilingual real-time transcription supporting 13 languages with configurable latency.
In transcription benchmarks, Voxtral consistently outperforms OpenAI's Whisper, GPT-4o mini, and Google's Gemini 2.5 Flash on accuracy, all while being fully open source and runnable on consumer hardware.
Why This Matters to Us
Vox Bar exists because of a decision three researchers made in Paris: to release their work to the world instead of locking it behind an API.
When you speak into Vox Bar and watch your words appear — privately, locally, without a single byte leaving your computer — that's Voxtral running on your GPU. It's a model built by brilliant engineers at Mistral, backed by billions of dollars in funding, trained on vast multilingual datasets, and then given away for free.
Without that decision, the kind of private voice transcription Vox Bar delivers simply wouldn't be possible. Cloud-based alternatives like Otter.ai charge monthly subscriptions and process your voice on remote servers. You can transcribe privately, offline, on your own hardware only because Mistral chose to make their technology open.
We built Vox Bar on Mistral's shoulders. Their open-source commitment made it possible for a small team to deliver frontier-quality voice transcription that runs entirely on your computer. We owe them everything.
The Bigger Picture
Mistral isn't alone in the open-source AI movement — teams like DeepSeek, Meta's Llama team, Google's Gemma team, and Alibaba's Qwen team are all contributing. But Mistral holds a special place because they proved something the industry doubted: a European startup can compete with the biggest AI labs in the world, and do it while keeping their models open.
From three researchers in a Paris office to a $14 billion company with 400+ employees and data centres rising across Europe, Mistral's is one of the great startup stories of this decade. And every time you use Vox Bar, you're part of it.
🇫🇷
Merci, Mistral
To Arthur, Guillaume, Timothée, and the entire Mistral AI team — thank you for believing that AI should belong to everyone.
Vox Bar is built on Voxtral. Open source made this possible.
Experience Voxtral for yourself
Vox Bar brings Mistral's frontier transcription to your desktop. Private. Local. Yours.