Mistral AI News
https://mistral.ai/news
Latest news and updates from Mistral AI.


Voxtral transcribes at the speed of sound.
https://mistral.ai/news/voxtral-transcribe-2
Product | Wed, 04 Feb 2026

Precision diarization, real-time transcription, and a new audio playground.

Today, we're releasing Voxtral Transcribe 2, two next-generation speech-to-text models with state-of-the-art transcription quality, diarization, and ultra-low latency. The family includes Voxtral Mini Transcribe V2 for batch transcription and Voxtral Realtime for live applications. Voxtral Realtime is open-weights under the Apache 2.0 license.

We're also launching an audio playground in Mistral Studio (https://console.mistral.ai/build/audio/speech-to-text) to test transcription instantly, powered by Voxtral Transcribe 2, with diarization and timestamps.

Highlights:

- Voxtral Mini Transcribe V2: state-of-the-art transcription with speaker diarization, context biasing, and word-level timestamps in 13 languages.
- Voxtral Realtime: purpose-built for live transcription with latency configurable down to sub-200ms, enabling voice agents and real-time applications.
- Best-in-class efficiency: industry-leading accuracy at a fraction of the cost, with Voxtral Mini Transcribe V2 achieving the lowest word error rate at the lowest price point.
- Open weights: Voxtral Realtime ships under Apache 2.0 and is deployable on edge devices for privacy-first applications.

Voxtral Realtime is purpose-built for applications where latency matters. Unlike approaches that adapt offline models by processing audio in chunks, Realtime uses a novel streaming architecture that transcribes audio as it arrives. The model delivers transcriptions with delay configurable down to sub-200ms, unlocking a new class of voice-first applications.

[Figure: Word error rate (lower is better) across languages in the FLEURS transcription benchmark.]

At 2.4 seconds of delay, ideal for subtitling, Realtime matches Voxtral Mini Transcribe V2, our latest batch model. At 480ms of delay, it stays within 1-2% word error rate of the batch model, enabling voice agents with near-offline accuracy.

The model is natively multilingual, achieving strong transcription performance in 13 languages: English, Chinese, Hindi, Spanish, Arabic, French, Portuguese, Russian, German, Japanese, Korean, Italian, and Dutch. With a 4B parameter footprint, it runs efficiently on edge devices, ensuring privacy and security for sensitive deployments.
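To make the streaming model concrete: a client pushes small audio chunks as they are captured and reads partial transcripts back over a persistent connection. The sketch below is illustrative only; the endpoint URL, auth-as-query-parameter scheme, and message schema are all assumptions, not Mistral's documented realtime protocol.

```python
# Illustrative sketch only: the realtime endpoint URL, the auth scheme, and
# the message schema below are assumptions for demonstration, not Mistral's
# documented realtime API. Consult the Voxtral Realtime docs for the real
# protocol before building on this.
import asyncio
import json
import os

import websockets  # pip install websockets


async def stream_audio(chunks):
    # Hypothetical endpoint; passing the key as a query parameter is also
    # an assumption made to keep the sketch short.
    url = ("wss://api.mistral.ai/v1/audio/realtime"
           f"?api_key={os.environ['MISTRAL_API_KEY']}")
    async with websockets.connect(url) as ws:
        for chunk in chunks:       # chunk: raw PCM bytes from a mic buffer
            await ws.send(chunk)   # push audio as soon as it is captured
            partial = json.loads(await ws.recv())
            print(partial.get("text", ""))  # print partial transcripts

# asyncio.run(stream_audio(microphone_chunks()))  # wire up a real audio source
```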
We're releasing the model weights under Apache 2.0 on the Hugging Face Hub (https://huggingface.co/mistralai/Voxtral-Mini-4B-Realtime-2602).

[Figure: Average diarization error rate (lower is better) across five English benchmarks (Switchboard, CallHome, AMI-IHM, AMI-SDM, SBCSAE) and the TalkBank multilingual benchmark (German, Spanish, English, Chinese, Japanese).]

[Figure: Average word error rate (lower is better) across the top-10 languages in the FLEURS transcription benchmark.]

Voxtral Mini Transcribe V2 delivers significant improvements in transcription and diarization quality across languages and domains. At approximately 4% word error rate on FLEURS and $0.003/min, Voxtral offers the best price-performance of any transcription API. It outperforms GPT-4o mini Transcribe, Gemini 2.5 Flash, Assembly Universal, and Deepgram Nova on accuracy, and processes audio approximately 3x faster than ElevenLabs' Scribe v2 while matching its quality at one-fifth the cost.

Voxtral Mini Transcribe V2 introduces key capabilities for enterprise deployments:

- Speaker diarization: generate transcriptions with speaker labels and precise start/end times. Ideal for meeting transcription, interview analysis, and multi-party call processing. Note: with overlapping speech, the model typically transcribes one speaker.
- Context biasing: provide up to 100 words or phrases to guide the model toward correct spellings of names, technical terms, or domain-specific vocabulary. Particularly useful for proper nouns or industry terminology that standard models often miss. Context biasing is optimized for English; support for other languages is experimental.
- Word-level timestamps: generate precise start and end timestamps for each word, enabling applications like subtitle generation, audio search, and content alignment.
- Multilingual support: like Realtime, this model supports 13 languages (English, Chinese, Hindi, Spanish, Arabic, French, Portuguese, Russian, German, Japanese, Korean, Italian, and Dutch), and its non-English performance significantly outpaces competitors.
- Noise robustness: maintains transcription accuracy in challenging acoustic environments, such as factory floors, busy call centers, and field recordings.
- Long-form audio: process recordings up to 3 hours in a single request.
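The capabilities above are exposed through the transcription API. A minimal sketch of a batch request follows, assuming an OpenAI-style multipart endpoint; the model id is derived from the docs URL later in this post, and the `diarize`, `timestamp_granularities`, and `context_bias` field names are assumptions to verify against the API reference.

```python
# Minimal sketch of a batch transcription request. The endpoint path follows
# Mistral's audio transcription API; the diarization/timestamp/context-bias
# field names below are assumptions -- check the API reference for the
# exact names before use.
import os

import requests

API_KEY = os.environ["MISTRAL_API_KEY"]

with open("meeting.mp3", "rb") as f:
    resp = requests.post(
        "https://api.mistral.ai/v1/audio/transcriptions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": ("meeting.mp3", f, "audio/mpeg")},
        data={
            "model": "voxtral-mini-transcribe-2602",   # id derived from the docs URL; confirm exact name
            "diarize": "true",                          # assumed field name
            "timestamp_granularities": "word",          # assumed field name
            "context_bias": "Voxtral,Mistral,FLEURS",   # assumed field name
        },
        timeout=300,
    )
resp.raise_for_status()
print(resp.json())
```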
[Figure: Word error rate (lower is better) across languages in the FLEURS transcription benchmark.]

Test Voxtral Transcribe 2 directly in Mistral Studio (https://console.mistral.ai/build/audio/speech-to-text). Upload up to 10 audio files, toggle diarization, choose timestamp granularity, and add context-bias terms for domain-specific vocabulary. The playground supports .mp3, .wav, .m4a, .flac, and .ogg files up to 1GB each.

Voxtral powers voice workflows in diverse applications and industries:

- Meeting intelligence: transcribe multilingual recordings with speaker diarization that clearly attributes who said what and when. At Voxtral's price point, annotate large volumes of meeting content at industry-leading cost efficiency.
- Voice agents: build conversational AI with sub-200ms transcription latency. Connect Voxtral Realtime to your LLM and TTS pipeline for responsive voice interfaces that feel natural.
- Contact centers: transcribe calls in real time, enabling AI systems to analyze sentiment, suggest responses, and populate CRM fields while conversations are still happening. Speaker diarization ensures clear attribution between agents and customers.
- Live subtitling: generate live multilingual subtitles with minimal latency. Context biasing handles proper nouns and technical terminology that trip up generic transcription services (see the timestamp-to-SRT sketch after this list).
- Compliance: monitor and transcribe interactions for regulatory compliance, with diarization providing clear speaker attribution and timestamps enabling precise audit trails.

Both models support GDPR- and HIPAA-compliant deployments through secure on-premise or private cloud setups.
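Word-level timestamps map directly onto subtitle formats such as SRT. A minimal sketch, assuming the transcription response exposes a list of `{"word", "start", "end"}` entries with times in seconds (the exact response schema may differ):

```python
# Convert word-level timestamps into SRT subtitles. The input shape
# (a list of {"word", "start", "end"} dicts, times in seconds) is an
# assumption about the transcription response; adapt to the real schema.
def to_srt(words, max_words_per_cue=8):
    def fmt(t):  # seconds -> "HH:MM:SS,mmm"
        h, rem = divmod(int(t * 1000), 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    lines = []
    for i in range(0, len(words), max_words_per_cue):
        group = words[i : i + max_words_per_cue]
        lines += [
            str(i // max_words_per_cue + 1),
            f"{fmt(group[0]['start'])} --> {fmt(group[-1]['end'])}",
            " ".join(w["word"] for w in group),
            "",
        ]
    return "\n".join(lines)


print(to_srt([{"word": "Bonjour", "start": 0.0, "end": 0.4},
              {"word": "tout", "start": 0.45, "end": 0.6},
              {"word": "le", "start": 0.6, "end": 0.7},
              {"word": "monde", "start": 0.7, "end": 1.1}]))
```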
Voxtral Mini Transcribe V2 (https://docs.mistral.ai/models/voxtral-mini-transcribe-26-02) is available now via API at $0.003 per minute. Try it now in the new Mistral Studio audio playground (https://console.mistral.ai/build/audio/speech-to-text) or in Le Chat (http://chat.mistral.ai).

Voxtral Realtime (https://docs.mistral.ai/models/voxtral-mini-transcribe-realtime-26-02) is available via API at $0.006 per minute and as open weights on Hugging Face (https://huggingface.co/mistralai/Voxtral-Mini-3B-Realtime-2602).

Explore the documentation on Mistral's audio and transcription capabilities (https://docs.mistral.ai/capabilities/audio_transcription).

If you're excited about building world-class speech AI and putting frontier models into the hands of developers everywhere, we'd love to hear from you. Apply to join our team (https://mistral.ai/careers).


Terminally online Mistral Vibe.
https://mistral.ai/news/mistral-vibe-2-0
Product | Tue, 27 Jan 2026

Today, we're releasing Mistral Vibe 2.0, a major upgrade to our terminal-native coding agent, powered by the state-of-the-art Devstral 2 model family. Build custom subagents, clarify before you execute, load skills with slash commands, and configure your own workflows to match how you work.

Vibe empowers you and your team to build, maintain, and ship code faster.

Mistral Vibe is now available on the Le Chat Pro and Team plans, with pay-as-you-go credits for power use, or bring your own API key.

Highlights:

- Mistral Vibe 2.0: custom subagents, multi-choice clarifications, slash-command skills, unified agent modes, and automatic updates.
- Available today on Le Chat Pro and Team plans with pay-as-you-go for extra usage, or bring-your-own-key.
- Devstral 2 moves to paid API access; it remains free on the Experiment plan in Mistral Studio.
- Enterprise services: fine-tuning, reinforcement learning, and code modernization.

Mistral Vibe already gives you terminal-native code automation with natural-language commands, multi-file orchestration, smart references, and full codebase context. With 2.0, we're adding the controls to make it yours:

- Custom subagents: build specialized agents for targeted tasks, such as deploy scripts, PR reviews, and test generation, and invoke them on demand.
- Multi-choice clarifications: Vibe asks before it acts. When intent is ambiguous, it prompts with options instead of guessing.
- Slash-command skills: load skills with /, preconfigured workflows for common tasks like deploying, linting, or generating docs.
- Unified agent modes: configure custom modes that combine tools, permissions, and behaviors. Switch contexts without switching tools.
- Automatic updates: bug fixes and improvements in the Vibe CLI now ship continuously. No manual updates required.
Mistral Vibe is available on Le Chat Pro and Team plans with generous usage for full-time development. Subscribers can continue beyond their limits with pay-as-you-go at API rates until usage resets. Devstral 2 now moves to paid API access.

- Pro: full access to the Mistral Vibe CLI and Devstral 2. Students get 50% off. Ideal for sustained, daily dev work.
- Team: everything in Pro, plus unified billing, management, and priority support.

Build with Devstral directly via Mistral Studio (https://console.mistral.ai/home):

| Model            | Input          | Output         |
|------------------|----------------|----------------|
| Devstral 2       | $0.40/M tokens | $2.00/M tokens |
| Devstral 2 Small | $0.10/M tokens | $0.30/M tokens |

Free API usage remains available on the Experiment plan, ideal for testing and prototyping.
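A minimal sketch of calling Devstral 2 from the API, using the official `mistralai` Python SDK; the model id "devstral-2" is an assumption (check Mistral Studio for the exact name, which may be a dated variant):

```python
# Minimal sketch of calling Devstral 2 through the Mistral API using the
# official `mistralai` Python SDK (pip install mistralai). The model id
# "devstral-2" is an assumption; check Mistral Studio for the exact name.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="devstral-2",  # assumed id; a dated variant may be required
    messages=[
        {"role": "user",
         "content": "Write a Python function that reverses a linked list."}
    ],
)
print(response.choices[0].message.content)
```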
For teams with advanced needs, we offer fine-tuning on internal languages and DSLs, reinforcement learning with your own environment, and end-to-end code modernization: migrate entire codebases to modern stacks without losing business logic or breaking behavior. We already deliver these solutions for some of the world's largest organizations in finance, defense, and infrastructure. Contact us to learn more (https://mistral.ai/contact).

Getting started:

1. Install the Vibe CLI in your terminal.
2. Sign up (https://console.mistral.ai/codestral/cli) to unlock full access.
3. Start building: run vibe in your terminal.
4. Explore the documentation (https://docs.mistral.ai/mistral-vibe/introduction) or join us on X (https://x.com/Mistralvibe) for updates.

If you want to build world-class AI products with us, we'd love to hear from you. Apply to join our team (https://mistral.ai/careers).


Bringing open AI models to the frontier
https://mistral.ai/news/about-mistral-ai
Company | Wed, 27 Sep 2023

Why we're building Mistral AI.

Generative AI, particularly large language models, is revolutionising content creation, knowledge retrieval, and problem-solving by generating human-quality text, content, and commands from human instructions. In the coming years, generative AI will completely redefine our culture and our lives, and the way we interact with machines and with each other.

As in previous ages of software, proprietary solutions were developed first, and we're grateful they revealed the power of generative models to the world. Yet, as with web browsers (WebKit, https://webkit.org/), operating systems (Linux, https://www.kernel.org/), and cloud orchestration (Kubernetes, https://kubernetes.io/), open solutions will quickly outperform proprietary ones for most use cases. They will be driven by the power of community and the commitment to technical excellence that successful open-source projects have always promoted.

At Mistral AI, we believe that an open approach to generative AI is necessary. Community-backed model development is the surest path to fight censorship and bias in a technology shaping our future.

We strongly believe that by training our own models, releasing them openly, and fostering community contributions, we can build a credible alternative to the emerging AI oligopoly. Open-weight generative models will play a pivotal role in the upcoming AI revolution.

Mistral AI's mission is to spearhead the revolution of open models.

Working with open models is the best way for both vendors and users to build a sustainable business around AI solutions. Open models can be finely adapted to solve many core business problems across all industry verticals, in ways unmatched by black-box models. The future will be made of many different specialised models, each adapted to specific tasks, compressed as much as possible, and connected to specific modalities.

In the open-model paradigm, the developer has full control over the engine that powers their application. Model sizes and costs can be adapted to the difficulty of each task, keeping costs and latency under control. For enterprises, deploying open models on their own infrastructure using well-packaged solutions simplifies dependencies and preserves data privacy.

Closed and opaque APIs introduce well-known technical liabilities, in particular IP-leakage risks; in the case of generative AI, they also introduce a cultural liability, since the generated content is fully under the control of the API provider, with limited customization capacity. With model weights at hand, end-user application developers can customise the guardrails and the editorial tone they desire, instead of depending on the choices and biases of black-box model providers.

Open models will also be precious safeguards against the misuse of generative AI. They will allow public institutions and private companies to audit generative systems for flaws, and to detect bad usage of generative models. They are our strongest bet for efficiently detecting misinformation content, whose quantity will inevitably increase in the coming years.

Our ambition is to become the leading supporter of the open generative AI community, and to bring open models to state-of-the-art performance.
We will make them the go-to solution for most generative AI applications. Many of us played pivotal roles in important episodes of LLM development; we're thrilled to be working together on new frontier models with a community-oriented mindset.

In the coming months, Mistral AI will progressively and methodically release new models that close the performance gap between black-box and open solutions, making open solutions the best option for a growing range of enterprise use cases. Simultaneously, we will seek to empower community efforts to improve the next generations of models.

As part of this effort, we're releasing today Mistral 7B (https://mistral.ai/news/announcing-mistral-7b), our first 7B-parameter model, which outperforms all currently available open models up to 13B parameters on all standard English and code benchmarks. This is the result of three months of intense work, in which we assembled the Mistral AI team, rebuilt a top-performance MLOps stack, and designed a sophisticated data-processing pipeline from scratch.

Mistral 7B's performance demonstrates what small models can do with enough conviction. Tracking the smallest model to score above 60% on MMLU is instructive: in two years, it went from Gopher (280B, DeepMind, 2021) to Chinchilla (70B, DeepMind, 2022) to Llama 2 (34B, Meta, July 2023), and now to Mistral 7B.

Mistral 7B is only a first step toward building the frontier models on our roadmap. Yet it can already be used to solve many tasks: summarisation, structuring, and question answering, to name a few. It processes and generates text much faster than large proprietary solutions, and runs at a fraction of their cost.

Mistral 7B is released under the Apache 2.0 license, making it usable without restrictions anywhere.
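Because the weights are open, the model can be loaded with standard tooling. A minimal sketch using Hugging Face transformers and the mistralai/Mistral-7B-v0.1 repository id (the published base-model checkpoint):

```python
# A minimal sketch of running Mistral 7B locally with Hugging Face
# transformers (pip install transformers torch accelerate). Assumes enough
# GPU/CPU memory for a 7B model; quantized variants are lighter options.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Summarise: Mistral 7B is a 7B-parameter open model.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```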
To actively engage with our user community and promote responsible usage of the models and tools we release under open-source licences, we are opening a GitHub repository (https://github.com/mistralai/mistral-src) and a Discord channel (https://discord.com/invite/mistralai). These will provide a platform for collaboration, support, and transparent communication.

We're committed to releasing the strongest open models in parallel with developing our commercial offering. We will propose optimised proprietary models for on-premise and virtual-private-cloud deployment. These models will be distributed as white-box solutions, making both weights and source code available. We are actively working on hosted solutions and dedicated deployment for enterprises.

We're already training much larger models, and are shifting toward novel architectures. Stay tuned for further releases this fall.


Mistral AI Fine-tuning Hackathon
https://mistral.ai/news/2024-ft-hackathon
Company | Wed, 05 Jun 2024

We are thrilled to announce the Mistral AI fine-tuning hackathon, a virtual experience taking place June 5-30, 2024. This is your chance to experiment with our brand-new fine-tuning API and showcase your projects!

Prizes: €2,500 in Mistral API credits each for the top 3 winning projects.

Sign up for API access on La Plateforme (https://console.mistral.ai/) and register for the hackathon using this form: https://forms.gle/wXzs7yzS9XJky8YJ9.

To officially join the hackathon, please submit your project using this form: https://forms.gle/EwLXqf7xesSnNfUi8.


Mistral Compute
https://mistral.ai/news/mistral-compute
Company | Wed, 11 Jun 2025

Frontier AI infrastructure for everyone.

Frontier AI. In your hands. This was the founding north star of Mistral AI, and it continues to guide our mission two years on. It has helped us evolve from a research lab into the world's leading independent AI company, rooted in open science.

Through this journey, however, we have realized that truly democratizing AI requires more than building open models and frontier AI products: we must give everyone across the world the tools and environments to do so themselves, from the infrastructure up.

In particular, we believe that the buildout and ownership of AI infrastructure should not be restricted to two or three powerful companies, but should instead be independent and self-determined.

Frontier AI should be in your hands. In service of that mission, we are proud to announce the latest addition to Mistral AI's portfolio of offerings: Mistral Compute.

Mistral Compute is a new AI infrastructure offering that provides customers a private, integrated stack: GPUs, orchestration, APIs, products, and services in whatever form factor they need, from bare-metal servers to a fully managed PaaS. It is an unprecedented AI infrastructure undertaking in Europe, and a strategic initiative to ensure that nation states, enterprises, and research labs globally remain at the forefront of AI innovation.

The genesis of Mistral Compute was organic. At Mistral AI, we faced our fair share of challenges training models and building state-of-the-art AI products. GPU hardware is expensive and scarce, tooling is patchy, and securing critical workloads is difficult. We learned hard lessons along the way, but eventually built a robust AI platform on which we trained many state-of-the-art models that today serve millions of users. Now we're opening up that platform for anyone to build on.

Mistral Compute empowers sovereigns, companies, and research institutions to completely reimagine the way they build their AI stack. What used to be a choice between a few third-party cloud providers is now the right to build your AI environment to your spec, and own it top to bottom.
The offering will include Mistral AI's training suite, which can accelerate region- and domain-specific AI efforts across nation-wide (https://www.htx.gov.sg/whats-happening/all-news---events/all-news/2025/media-release-htx-inks-contract-with-mistral-ai-and-microsoft-to-boost-ai-model-development-for-home-team) and industry-wide (https://www.cmacgm-group.com/en/news-media/cma-cgm-group-adopts-custom-designed-ai-solutions-mistral-ai) endeavors such as defense technology, pharmaceutical discovery, financial markets, and more.

Customers can use Mistral Compute to train and serve any AI workload they can imagine. It is being developed and managed by a team with decades of experience building state-of-the-art AI and high-performance computing (HPC) infrastructure, and training and serving the world's best AI models.

As a premier NVIDIA partner, Mistral Compute will offer the latest NVIDIA reference architectures, with tens of thousands of GPUs available and rapid expansion in the coming years.

Critically, Mistral Compute aims to provide a competitive edge to entire industries and countries in Europe, the Middle East, Asia, and the Southern Hemisphere that have been waiting for an alternative to US- or China-based cloud and AI providers. This applies equally to the global entities of US or Chinese companies that want to serve international customers, particularly in Europe, with AI that operates and runs regionally.

With a strong emphasis on sustainability and data sovereignty, Mistral Compute is built to meet the stringent requirements of European regulations while minimizing environmental impact through the use of decarbonized energy.

We would like to thank our launch partners Black Forest Labs, BNP Paribas, Kyutai, Mirakl, Orange, Schneider Electric, SLB Groupe, SNCF, Thales, and Veolia for their support of Mistral Compute. If you would like to join them, please send us a note using the contact form.

Along with Mistral Compute, we will continue to make the Mistral AI suite of models, products, and solutions available on-premises and on all leading global clouds through our partners.

Join us on our mission to place frontier AI in everyone's hands.

Interested in Mistral Compute? Get in touch. By submitting the contact form, you agree to our Terms of Service (https://mistral.ai/terms#terms-of-service).
We process your data to respond to your contact request in accordance with our Privacy Policy (https://mistral.ai/terms/#privacy-policy).


Our contribution to a global environmental standard for AI
https://mistral.ai/news/our-contribution-to-a-global-environmental-standard-for-ai
Company | Tue, 22 Jul 2025

At Mistral AI, our mission is to put artificial intelligence in everyone's hands. To this end, we have consistently advocated for openness in AI, with a unique focus on empowering organizations that want to own their AI future.

Today, as AI becomes increasingly integrated into every layer of our economy, it is crucial for developers, policymakers, enterprises, governments, and citizens to better understand the environmental footprint of this transformative technology. At Mistral AI, we believe that we share a collective responsibility with every actor in the value chain to address and mitigate the environmental impacts of our innovations.

Even though some recent initiatives have been launched, such as the Coalition for Sustainable AI announced at the Paris AI Action Summit in February 2025, significant work remains. Without more transparency, it will be impossible for public institutions, enterprises, and even users to compare models, make informed purchasing decisions, fulfil enterprises' extra-financial reporting obligations, or reduce the impacts associated with their use of AI.

In this context, we have conducted a first-of-its-kind comprehensive study to quantify the environmental impacts of our LLMs. This report aims to provide a clear analysis of the environmental footprint of AI and to help set a new standard for our industry.

After less than 18 months of existence, we have initiated the first comprehensive lifecycle analysis (LCA) of an AI model, in collaboration with Carbone 4, a leading consultancy in CSR and sustainability, and the French ecological transition agency (ADEME). To ensure robustness, the study was also peer-reviewed by Resilio and Hubblo, two consultancies specializing in environmental audits in the digital industry.

In addition to complying with the most rigorous standards*, the aim of this analysis was to quantify the environmental impacts of developing and using LLMs across three impact categories: greenhouse gas (GHG) emissions, water use, and resource depletion**. Today, we are disclosing two metrics of long-term importance to our industry:

1. The environmental footprint of training Mistral Large 2. As of January 2025, after 18 months of usage, Large 2 had generated the following impacts:
   - 20.4 ktCO₂e,
   - 281,000 m³ of water consumed,
   - and 660 kg Sb eq (the standard unit for resource depletion).
2. The marginal impacts of inference, specifically one 400-token response from our AI assistant Le Chat (excluding users' terminal devices):
   - 1.14 gCO₂e,
   - 45 mL of water,
   - and 0.16 mg Sb eq.

These figures reflect the scale of computation involved in generative AI, which requires numerous GPUs, often in regions with carbon-intensive electricity and sometimes water stress. They also include "upstream" impacts, such as those from manufacturing servers, and not just energy use.

Given the results of our study, we are convinced that the following three indicators are essential for users, AI developers, and policymakers to fully understand and manage the environmental impacts of LLMs:

1. the absolute impacts of training a model,
2. the marginal impacts of inference,
3. and the ratio of total inference impacts to total lifecycle impacts.

Indicators 1 and 2 could become mandatory public disclosures, while indicator 3 can act as an internal indicator with optional disclosure. The latter is key to getting a complete view of lifecycle impacts, and to ensuring that a model's training phase is amortized rather than wasted.

Our study also shows a strong correlation between a model's size and its footprint. Benchmarks have shown that impacts are roughly proportional to model size: a model 10 times bigger will generate impacts one order of magnitude larger for the same number of generated tokens. This highlights the importance of choosing the right model for the right use case.

It is worth noting that this study is a first approximation: precise calculations are difficult when no standards exist for LLM environmental accountability and no impact factors are publicly available. For instance, a reliable life-cycle inventory of GPUs has yet to be produced, so their embodied impacts had to be approximated, even though they account for a significant portion of total impacts.

To comply with the GHG Protocol Product Standard, future audits in the industry may follow this study's principles: using a location-based approach for electricity emissions and including all significant upstream impacts, i.e., not only those from GPU electricity use but also all other electricity consumption (CPUs, cooling devices, etc.) and hardware manufacturing.
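To give a sense of scale for indicator 3, the two disclosed figures can be related directly. A back-of-the-envelope sketch using only the numbers above (and therefore inheriting everything those figures exclude):

```python
# Back-of-the-envelope scale check using only the figures disclosed above.
# It ignores everything those figures exclude (e.g. user terminals), so it
# illustrates indicator 3 rather than reproducing a result from the study.
TRAINING_GHG_G = 20.4e9      # 20.4 ktCO2e expressed in grams
PER_RESPONSE_GHG_G = 1.14    # gCO2e per 400-token Le Chat response

responses = TRAINING_GHG_G / PER_RESPONSE_GHG_G
print(f"{responses:.2e} responses match the training GHG footprint")
# ~1.79e+10: tens of billions of 400-token responses per training run,
# which is why the inference-to-lifecycle ratio matters at scale.
```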
These results point to two levers for reducing the environmental impact of LLMs.

First, to improve transparency and comparability, AI companies ought to publish the environmental impacts of their models using standardized, internationally recognized frameworks. Where needed, AI-sector-specific standards could be developed to ensure consistency. This could enable the creation of a scoring system, helping buyers and users identify the least carbon-, water-, and material-intensive models.

Second, on the user side, encouraging efficiency practices can make a significant difference:

- developing AI literacy to help people use generative AI in the most effective way,
- choosing the model size best adapted to the user's needs,
- and grouping queries to limit unnecessary computation.

For public institutions in particular, integrating model size and efficiency into procurement criteria could send a strong signal to the market.

Moving forward, we are committed to updating our environmental impact reports and to participating in discussions around the development of international industry standards. We will advocate for greater transparency across the entire AI value chain and help AI adopters make informed decisions about the solutions that best suit their needs. The results will later be available via ADEME's Base Empreinte database, setting a reference for transparency in the AI sector.

By encouraging sufficiency and efficiency practices and publishing standardized environmental impact reports, we can collectively work towards aligning the AI sector with global climate goals. This study is a humble contribution towards a more accessible and sustainable future for AI.

* This study was carried out following the Frugal AI methodology developed by AFNOR and complies with international standards, including the Greenhouse Gas (GHG) Protocol Product Standard and ISO 14040/44.

** The environmental impacts were assessed using standard indicators common in lifecycle analyses (LCAs): greenhouse gas emissions measured by Global Warming Potential over 100 years (GWP100), water consumption measured by Water Consumption Potential (WCP), and material consumption measured by Abiotic Resource Depletion (ADP).

ADP quantifies the depletion of non-renewable resources (such as metals and minerals) by considering both current extraction rates and the estimated reserves of each material. Values are standardized relative to antimony's ADP, which provides a uniform unit since antimony is a scarce resource. For example, extracting 1 kg of gold corresponds to an ADP of 2.35 kg Sb eq, whereas extracting 1 kg of copper corresponds to 0.000000161 kg Sb eq.


Mistral AI raises 1.7B€ to accelerate technological progress with AI
https://mistral.ai/news/mistral-ai-raises-1-7-b-to-accelerate-technological-progress-with-ai
Company | Tue, 09 Sep 2025

We are announcing a Series C funding round of €1.7B at an €11.7B post-money valuation.
This investment fuels our scientific research to keep pushing the frontier of AI and to tackle the most critical and sophisticated technological challenges faced by strategic industries.

The Series C funding round is led by the leading semiconductor equipment manufacturer ASML Holding NV (ASML).

"ASML is proud to enter a strategic partnership with Mistral AI, and to be lead investor in this funding round. The collaboration between Mistral AI and ASML aims to generate clear benefits for ASML customers through innovative products and solutions enabled by AI, and will offer potential for joint research to address future opportunities," said ASML CEO Christophe Fouquet.

The round includes participation from existing investors DST Global, Andreessen Horowitz, Bpifrance, General Catalyst, Index Ventures, Lightspeed, and NVIDIA.

For the last two years, we have advanced AI through cutting-edge research and strategic partnerships with corporate and industrial champions. We will continue to develop custom, decentralized frontier AI solutions that solve the most complex engineering and industrial problems, giving enterprises, the public sector, and industries a competitive edge through state-of-the-art models, tailored solutions, and high-performance compute infrastructure. This funding round reaffirms the company's independence.

"This investment brings together two technology leaders operating in the same value chain. We have the ambition to help ASML and its numerous partners solve current and future engineering challenges through AI, and ultimately to advance the full semiconductor and AI value chain," said Mistral AI CEO Arthur Mensch.


Mistral AI - KI für Deutschland
https://mistral.ai/news/ki-fur-deutschland
Company | Wed, 19 Nov 2025

At Mistral AI, we have believed from the very beginning in building frontier AI that Europe can own, giving people and organizations the ability to shape and determine the role of AI in their future.

As we expand our presence, we are proud to deepen our commitment to German citizens through several long-term strategic partnerships with German enterprises and institutions.

We are building a multiyear partnership with SAP to deliver a fully sovereign AI stack for Germany and Europe, integrating our models into SAP's AI Foundation and co-developing industry-specific solutions for Europe's most complex industries and administrations.

With Helsing, we are accelerating the development of vision-language-action models for real-world defense and security applications. This will support Europe's strategic autonomy and strengthen its technological independence in critical sectors.

As Europe accelerates its search for greater digital autonomy, we have reaffirmed our founding principles to European leaders: at Mistral AI, we want to put AI into everyone's hands so that our customers can own their AI journey.
In Germany, this means benefiting from the best AI solutions and talent without sacrificing strategic autonomy or sending critical data abroad.

To that end, we are expanding our presence in Germany, significantly growing our local team and opening an office there in the coming months.

Our work in Germany demonstrates our shared ambitions and Germany's pivotal role in securing Europe's AI autonomy. Today marks an important step forward.

Read the German version of this blog post here: https://mistral.ai/de/news/ki-fur-deutschland.


Meet Mistral AI: Episode 1, Arthur Mensch
https://mistral.ai/news/meet-mistral-ai-e1
Company | Thu, 11 Dec 2025

Meet the people who help make the AI magic happen.

In our new video series, Meet Mistral AI, we speak to the people behind AI, the people who help make the magic happen.

Our first episode features Arthur Mensch, CEO and co-founder of Mistral AI, in an interview with Howard Cohen, Head of North American Communications. Watch to get Arthur's perspective.


Devstral
https://mistral.ai/news/devstral
Research | Wed, 21 May 2025

Introducing the best open-source model for coding agents.

Today we introduce Devstral, our agentic LLM for software engineering tasks. Devstral was built in a collaboration between Mistral AI and All Hands AI (https://www.all-hands.dev/), and it outperforms all open-source models on SWE-Bench Verified by a large margin. We release Devstral under the Apache 2.0 license.

[Figure: Devstral performance on SWE-Bench Verified.]

While typical LLMs are excellent at atomic coding tasks such as writing standalone functions or code completion, they currently struggle with real-world software engineering problems. Real-world development requires contextualising code within a large codebase, identifying relationships between disparate components, and spotting subtle bugs in intricate functions.

Devstral is designed to tackle this problem. It is trained to solve real GitHub issues, and it runs over code-agent scaffolds such as OpenHands or SWE-Agent, which define the interface between the model and the test cases.
Below, we show Devstral's performance on the popular SWE-Bench Verified benchmark, a dataset of 500 real-world GitHub issues that have been manually screened for correctness.

Devstral achieves a score of 46.8% on SWE-Bench Verified, outperforming prior open-source state-of-the-art models by more than six percentage points. When evaluated under the same test scaffold (OpenHands, provided by All Hands AI), Devstral exceeds far larger models such as DeepSeek-V3-0324 (671B) and Qwen3 232B-A22B.

We also compared Devstral to closed and open models evaluated under any scaffold, including scaffolds custom-built for the model. Here, Devstral achieves substantially better performance than a number of closed-source alternatives; for example, it surpasses the recent GPT-4.1-mini by over 20 percentage points.

Devstral is light enough to run on a single RTX 4090 or a Mac with 32GB of RAM, making it an ideal choice for local deployment and on-device use. Coding platforms such as OpenHands (https://github.com/All-Hands-AI/OpenHands) allow the model to interact with local codebases and resolve issues quickly. To try it yourself, see the documentation (https://docs.all-hands.dev/modules/usage/llms/local-llms) or the tutorial video (https://www.youtube.com/watch?v=oV9tAkS2Xic).

The model's performance also makes it a suitable choice for agentic coding on privacy-sensitive repositories in enterprises, especially those subject to stringent security and compliance requirements.

Finally, if you're building or using an agentic coding IDE, plugin, or environment, Devstral is a great addition to your model selector.

We release this model for free under the Apache 2.0 license for the community to build on, customize, and use to accelerate autonomous software development. To try it yourself, head over to the model card (https://huggingface.co/mistralai/Devstral-Small-2505).

The model is also available on our API under the name devstral-small-2505, at the same price as Mistral Small 3.1: $0.1/M input tokens and $0.3/M output tokens.

Should you choose to self-deploy, you can download the model from Hugging Face (https://huggingface.co/mistralai/Devstral-Small-2505), Ollama (https://ollama.com/library/devstral), Kaggle (https://www.kaggle.com/models/mistral-ai/devstral-small-2505), Unsloth (https://docs.unsloth.ai/basics/devstral), or LM Studio (https://lmstudio.ai/model/devstral-small-2505-MLX) starting today.
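For local self-deployment, a minimal sketch of querying Devstral through Ollama's local REST API, assuming Ollama is installed and the model has been pulled from the library entry linked above:

```python
# Minimal sketch of querying a locally self-deployed Devstral through
# Ollama's REST API, assuming `ollama pull devstral` has already been run
# (the Ollama library entry is linked above).
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",   # Ollama's default local endpoint
    json={
        "model": "devstral",
        "messages": [{"role": "user",
                      "content": "Explain this repo's failing test."}],
        "stream": False,                  # one JSON object instead of a stream
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```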
For enterprise deployments that require fine-tuning on private codebases, or higher-fidelity customization such as continued pre-training or distilling Devstral's capabilities into other models, please contact us (https://mistral.ai/contact) to connect with our applied AI team.

Devstral is a research preview and we welcome feedback! We're hard at work on a larger agentic coding model that will be available in the coming weeks.

Interested in discussing how we can help your team put Devstral to use, or in our portfolio of models, products, and solutions? Contact us (https://mistral.ai/contact) and we'll be happy to help.


Codestral Embed
https://mistral.ai/news/codestral-embed
Research | Wed, 28 May 2025

The new state-of-the-art embedding model for code.

We are excited to release Codestral Embed, our first embedding model specialized for code. It performs especially well in retrieval use cases on real-world code data.

Codestral Embed significantly outperforms the leading code embedders on the market today: Voyage Code 3, Cohere Embed v4.0, and OpenAI's large embedding model.

Codestral Embed can output embeddings at different dimensions and precisions, trading retrieval quality against storage costs. Even at dimension 256 with int8 precision, Codestral Embed still outperforms any competing model. The dimensions of our embeddings are ordered by relevance: for any integer target dimension n, you can keep the first n dimensions for a smooth trade-off between quality and cost.

[Figure: Retrieval quality versus storage cost across embedding dimensions and precisions.]

Below, we show the performance of Codestral Embed across several categories; the benchmarks in each category are detailed in the table at the end of this post.

SWE-Bench is based on a dataset of real-world GitHub issues and their corresponding fixes, and is especially relevant to retrieval-augmented generation for coding agents. Text2Code (GitHub) contains benchmarks relevant to providing context for code completion or editing. We believe these two categories are especially relevant to code assistants.
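A minimal sketch of embedding code snippets with the model via the `mistralai` Python SDK; the `output_dimension` and `output_dtype` keyword names for truncated and quantized embeddings are assumptions to verify against the embeddings API reference:

```python
# Minimal sketch of embedding code snippets with codestral-embed-2505 via
# the `mistralai` SDK (pip install mistralai). The `output_dimension` and
# `output_dtype` keyword names are assumptions -- verify them against the
# embeddings API reference before relying on them.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

snippets = [
    "def dedupe(xs):\n    return list(dict.fromkeys(xs))",
    "SELECT id FROM users WHERE active = 1;",
]

resp = client.embeddings.create(
    model="codestral-embed-2505",
    inputs=snippets,
    output_dimension=256,   # assumed: keep only the 256 most relevant dims
    output_dtype="int8",    # assumed: quantized precision to cut storage
)
print(len(resp.data), "embeddings of dim", len(resp.data[0].embedding))
```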
Codestral Embed is optimized for high-performance code retrieval and semantic understanding. It enables a range of practical applications across development workflows, especially when working with large-scale code corpora:

- Retrieval-augmented generation: Codestral Embed facilitates rapid and efficient context retrieval for code completion, editing, or explanation tasks. It is ideal for AI-powered software engineering in copilots or coding-agent frameworks.
- Semantic code search: the model enables accurate search over relevant code snippets from natural-language or code queries. It is suitable for use within developer tools, documentation systems, and copilots.
- Similarity and duplicate detection: the model's embeddings can identify near-duplicate or functionally similar code segments, even with significant lexical variation. This supports use cases such as identifying reusable code, or detecting copy-paste reuse to enforce licensing policies.
- Code clustering and analytics: Codestral Embed supports unsupervised grouping of code by functionality or structure. This is useful for analyzing repository composition, identifying emergent architecture patterns, and feeding automated documentation and categorization systems.

Codestral Embed is available on our API under the name codestral-embed-2505, at a price of $0.15 per million tokens. It is also available on our batch API (https://docs.mistral.ai/capabilities/batch/) at a 50% discount. For on-prem deployments, please contact us (https://mistral.ai/contact) to connect with our applied AI team.

Please check our docs (https://docs.mistral.ai/capabilities/embeddings/code_embeddings/) to get started, and our cookbook (https://colab.research.google.com/github/mistralai/cookbook/blob/main/mistral/embeddings/code_embedding.ipynb) for examples of how to use Codestral Embed for code-agent retrieval.

For retrieval use cases, while you can use the full context size of 8192 tokens, it is often more effective to chunk your dataset. We recommend chunks of 3000 characters with 1000 characters of overlap; larger chunks tend to hurt retrieval performance. Refer to our cookbook for more information about chunking.
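That recommendation translates directly into a sliding window over each file: 3000-character chunks advancing by 2000 characters, so consecutive chunks share 1000 characters. A minimal sketch:

```python
# Sliding-window chunking following the recommendation above: 3000-character
# chunks with 1000 characters of overlap (i.e. a stride of 2000 characters).
def chunk(text: str, size: int = 3000, overlap: int = 1000) -> list[str]:
    stride = size - overlap
    return [text[i : i + size]
            for i in range(0, max(len(text) - overlap, 1), stride)]


source = open("repo_file.py").read()      # any file from your corpus
chunks = chunk(source)
print(f"{len(source)} chars -> {len(chunks)} chunks")
# Each chunk can then be embedded individually, as in the sketch above.
```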
You can find the details of the benchmarks we used to evaluate our model in the table below. We report the average score per category, and the macro average (the mean of the category scores).

| Benchmark | Description | Category |
|---|---|---|
| swebench_lite | — | SWE-Bench |
| CodeSearchNet Code -> Code | Given real-world code from GitHub, retrieve the code that appears in the same context | code2code |
| CodeSearchNet doc2code | Given a docstring from real-world GitHub code, retrieve the corresponding code | Text2Code (GitHub) |
| CommitPack | Given a commit message from real-world GitHub code, retrieve the corresponding modified files | Text2Code (GitHub) |
| Spider | Retrieve SQL code given a query | Text2SQL |
| WikiSQL | Retrieve SQL code given a query | Text2SQL |
| Synthetic Text2SQL | Retrieve SQL code given a query | Text2SQL |
| DM code contests | Match problem descriptions to correct solutions for programming-competition websites (the corpus contains correct and incorrect solutions for each problem) | Text2Code (Algorithms) |
| APPS | Match problem descriptions to solutions for programming-competition websites | Text2Code (Algorithms) |
| CodeChef | Match problem descriptions to solutions for programming-competition websites | Text2Code (Algorithms) |
| MBPP+ | Match algorithmic questions to solutions for mostly basic Python programs | Text2Code (Algorithms) |
| DS 1000 | Match data science questions to implementations | Text2Code (Data Science) |


Magistral
https://mistral.ai/news/magistral
Research | Tue, 10 Jun 2025

Stands to reason.

Announcing Magistral, the first reasoning model by Mistral AI, excelling in domain-specific, transparent, and multilingual reasoning.

The best human thinking isn't linear; it weaves through logic, insight, uncertainty, and discovery. Reasoning language models let us augment and delegate complex thinking and deep understanding to AI, improving our ability to work through problems requiring precise, step-by-step deliberation and analysis.

But this space is still nascent. A lack of the specialized depth needed for domain-specific problems, limited transparency, and inconsistent reasoning in the desired language are just some of the known limitations of early thinking models.

Today, we're excited to announce our latest contribution to AI research with Magistral, our first reasoning model.
Released in both open and enterprise versions, Magistral is designed to think things through — in ways familiar to us — while bringing expertise across professional domains, transparent reasoning that you can follow and verify, and deep multilingual flexibility.</p> <p>A one-shot physics simulation showcasing gravity, friction, and collisions with Magistral Medium in Preview.</p> <img src="https://cms.mistral.ai/assets/d73ee721-ea92-46f5-af77-79674fdb4163.png?width=1600&amp;height=635"/> <p>Magistral is a dual-release model focused on real-world reasoning and feedback-driven improvement.</p> <p>We’re releasing the model in two variants: Magistral Small — a 24B-parameter open-source version — and Magistral Medium — a more powerful, enterprise version.</p> <p>Magistral Medium scored 73.6% on AIME 2024, and 90% with majority voting @64. Magistral Small scored 70.7% and 83.3%, respectively.</p> <p>Reason natively — Magistral’s chain-of-thought works across global languages and alphabets.</p> <p>Suited for a wide range of enterprise use cases — from structured calculations and programmatic logic to decision trees and rule-based systems.</p> <p>With the new Think mode and Flash Answers in Le Chat, you can get responses at 10x the speed of most competitors.</p> <p>The release is supported by our <a href="https://arxiv.org/pdf/2506.10910">latest paper</a> covering comprehensive evaluations of Magistral, our training infrastructure, reinforcement learning algorithm, and novel observations for training reasoning models.</p> <p>As we’ve open-sourced Magistral Small, we welcome the community to examine, modify, and build upon its architecture and reasoning processes to further accelerate the emergence of thinking language models. Our earlier open models have already been leveraged by the community for exciting projects like <a href="https://www.futurehouse.org/research-announcements/ether0-a-scientific-reasoning-model-for-chemistry">ether0</a> and <a href="https://huggingface.co/NousResearch/DeepHermes-3-Mistral-24B-Preview">DeepHermes 3</a>.</p> <p>Magistral is fine-tuned for multi-step logic, improving interpretability and providing a traceable thought process in the user’s language, unlike general-purpose models.</p> <p>We aim to iterate on the model quickly, starting with this release. Expect the models to improve constantly.</p> <p>The model excels at maintaining high-fidelity reasoning across numerous languages. Magistral is especially well suited to reasoning in languages including English, French, Spanish, German, Italian, Arabic, Russian, and Simplified Chinese.</p> <p>Prompt and response in Arabic with Magistral Medium in Preview in Le Chat.</p> <p>With Flash Answers in Le Chat, Magistral Medium achieves up to 10x faster token throughput than most competitors. This enables real-time reasoning and user feedback, at scale.</p> <p>Speed comparison of Magistral Medium in Preview in Le Chat against ChatGPT.</p> <p>Magistral is ideal for general-purpose use requiring longer thought processing and better accuracy than non-reasoning LLMs.
From legal research and financial forecasting to software development and creative storytelling — this model solves multi-step challenges where transparency and precision are critical.</p> <p>Building on our flagship <a href="https://mistral.ai/models">models</a>, Magistral is designed for research, strategic planning, operational optimization, and data-driven decision making — whether executing risk assessment and modelling with multiple factors, or calculating optimal delivery windows under constraints.</p> <p>Legal, finance, healthcare, and government professionals get traceable reasoning that meets compliance requirements. Every conclusion can be traced back through its logical steps, providing auditability for high-stakes environments with domain-specialized AI.</p> <p>Magistral enhances coding and development use cases: compared to non-reasoning models, it significantly improves project planning, backend architecture, frontend design, and data engineering through sequenced, multi-step actions involving external tools or APIs.</p> <p>Our early tests indicated that Magistral is an excellent creative companion. We highly recommend it for creative writing and storytelling, with the model capable of producing coherent or — if needed — delightfully eccentric copy.</p> <p>Magistral Small is an open-weight model available for self-deployment under the Apache 2.0 license. You can download it from Hugging Face: <a href="https://huggingface.co/mistralai/Magistral-Small-2506">https://huggingface.co/mistralai/Magistral-Small-2506</a></p> <p>You can try out a preview version of Magistral Medium in <a href="http://chat.mistral.ai">Le Chat</a> or via API on <a href="http://console.mistral.ai">La Plateforme</a>.</p>
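<p>As a quick illustration, here is a minimal sketch of querying Magistral Medium through the chat API. It assumes the mistralai Python client; the model identifier "magistral-medium-latest" is illustrative, so check the docs for the exact name available on your account.</p>

```python
# Minimal sketch: query Magistral Medium (preview) through the chat API.
# Assumes the `mistralai` Python client; "magistral-medium-latest" is an
# illustrative identifier -- verify the exact model name in the docs.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
response = client.chat.complete(
    model="magistral-medium-latest",
    messages=[{
        "role": "user",
        "content": "A train leaves at 9:12 and arrives at 11:47. "
                   "How long is the journey? Think step by step.",
    }],
)
# Reasoning models return a traceable chain of thought followed by the answer.
print(response.choices[0].message.content)
```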
<p>Magistral Medium is also available on Amazon SageMaker, and soon on IBM WatsonX, Azure AI, and Google Cloud Marketplace.</p> <p>For enterprise and custom solutions, including on-premises deployments, <a href="https://mistral.ai/contact">contact our sales team</a>.</p> <p>Magistral represents a significant contribution by Mistral AI to the open-source community, with input from seasoned experts and interns. And we’re keen to grow our family to further shape future AI innovation.</p> <p>If you’re interested in joining us on our mission to democratize artificial intelligence, we welcome your applications to <a href="https://mistral.ai/careers">join our team</a>!</p> https://mistral.ai/news/magistral Research Tue, 10 Jun 2025 00:00:00 +0000 Upgrading agentic coding capabilities with the new Devstral models https://mistral.ai/news/devstral-2507 Upgrading agentic coding capabilities with the new Devstral models | Mistral AI <p>Today, we introduce Devstral Medium, as well as an upgrade to Devstral Small. These models are released through a collaboration between Mistral AI and <a href="https://www.all-hands.dev/">All Hands AI</a> 🙌, with a strong emphasis on generalization to different prompts and agentic scaffolds.</p> <p>The new Devstral Small 1.1 is released under the Apache 2.0 license, and is state-of-the-art amongst open models for code agents. Devstral Medium is available through our API, and sets a new point on the cost/performance Pareto frontier, surpassing Gemini 2.5 Pro and GPT-4.1 at a quarter of the price.</p> <p>As with the previous version of Devstral Small, we release Devstral Small 1.1 under the Apache 2.0 license. While the architecture remains the same, with only 24B parameters, Devstral Small 1.1 comes with significant improvements over its predecessor:</p> <p>Devstral Small 1.1 achieves a score of 53.6% on SWE-Bench Verified, setting a new state of the art for open models without test-time scaling.</p> <p>Devstral Small 1.1 excels when paired with OpenHands, and also demonstrates better generalization to different prompts and coding environments. Its versatility is further enhanced by support for both Mistral function calling and XML formats, making it adaptable to a wide range of applications and agentic scaffolds.</p> <img src="https://cms.mistral.ai/assets/a8227ebf-fba7-4ad7-9d6b-83ce5da42a3e.png?width=1920&amp;height=1080"/> <p>Devstral Medium builds upon the strengths of Devstral Small and takes performance to the next level with a score of 61.6% on SWE-Bench Verified. Devstral Medium is available through our public API, and offers exceptional performance at a competitive price point, making it an ideal choice for businesses and developers looking for a high-quality, cost-effective model.</p> <p>For those who prefer on-premise solutions, Devstral Medium can be deployed directly on private infrastructure, offering enhanced data privacy and control. We also support custom fine-tuning for Devstral Medium, allowing enterprises to customize the model for specific use cases and achieve optimal performance tailored to their requirements.</p> <img src="https://cms.mistral.ai/assets/ae27ff96-c7b6-4a6b-a3a1-0e02e156198a.png?width=2046&amp;height=996"/> <p>Both models are available through our API under the following names:</p> <p>devstral-small-2507, at the same price as Mistral Small 3.1: $0.1/M input tokens and $0.3/M output tokens.</p> <p>devstral-medium-2507, at the same price as Mistral Medium 3: $0.4/M input tokens and $2/M output tokens.</p>
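<p>As a quick illustration, here is a minimal sketch that calls both models by the API names above. It assumes the mistralai Python client and an API key in MISTRAL_API_KEY.</p>

```python
# Minimal sketch: call the new Devstral models by the API names quoted above.
# Assumes the `mistralai` Python client; adjust if your client version differs.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
for model in ("devstral-small-2507", "devstral-medium-2507"):
    response = client.chat.complete(
        model=model,
        messages=[{"role": "user",
                   "content": "Write a Python function that reverses a linked list."}],
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content[:300])  # preview the answer
```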
</p> <a href="https://mistral.ai/news/mistral-code">Mistral Code</a> <a href="https://docs.mistral.ai/guides/finetuning/">finetuning API</a> <a href="https://mistral.ai/contact">contact us</a> <p>We are dedicated to open-sourcing our most accessible and impactful models, ensuring the open-source community can easily utilize and benefit from our advanced technology. While Devstral Small is easily usable for local deployment and available under the Apache 2.0 license for everyone to use and build upon, Devstral Medium is available on our API and offers high performance for developers and enterprises. </p> <a href="https://mistral.ai/en/news">News</a> <a href="https://mistral.ai/en/models">Models</a> <a href="https://mistral.ai/en/services">AI Services</a> <p>The next chapter of AI is yours.</p> <a href="https://chat.mistral.ai/">Try le Chat </a> <a href="https://console.mistral.ai/">Build on AI Studio </a> <a href="https://mistral.ai/contact">Talk to an expert </a> https://mistral.ai/news/devstral-2507 Research Thu, 10 Jul 2025 00:00:00 +0000 Voxtral https://mistral.ai/news/voxtral Introducing frontier open source speech understanding models. <p>Introducing frontier open source speech understanding models.</p> <img src="https://cms.mistral.ai/assets/ec026954-d85f-4b11-94fd-d26fc8e13ae2.png?width=2206&amp;height=1190"/> <p>Voice was humanity’s first interface—long before writing or typing, it let us share ideas, coordinate work, and build relationships. As digital systems become more capable, voice is returning as our most natural form of human-computer interaction.</p> <p>Yet today’s systems remain limited—unreliable, proprietary, and too brittle for real-world use. Closing this gap demands tools with exceptional transcription, deep understanding, multilingual fluency, and open, flexible deployment.</p> <p>We release the Voxtral models to accelerate this future. These state‑of‑the‑art speech understanding models are  available in two sizes—a 24B variant for production-scale applications and a 3B variant for local and edge deployments. Both versions are released under the Apache 2.0 license, and are also available on our <a href="https://docs.mistral.ai/capabilities/audio/#transcription">API</a>. The API routes transcription queries to a transcribe-optimized version of Voxtral Mini (Voxtral Mini Transcribe) that delivers unparalleled cost and latency-efficiency. For a comprehensive understanding of the research and development behind Voxtral, please refer to our detailed research paper, available for download <a href="https://arxiv.org/abs/2507.13264">here</a>.</p> <a href="https://docs.mistral.ai/capabilities/audio/#transcription">API</a> <a href="https://arxiv.org/abs/2507.13264">here</a> <p>Until recently, gaining truly usable speech intelligence in production meant choosing between two trade-offs:</p> <p>Open-source ASR systems with high word error rates and limited semantic understanding</p> <p>Closed, proprietary APIs that combine strong transcription with language understanding, but at significantly higher cost and with less control over deployment</p> <p>Voxtral bridges this gap. It offers state-of-the-art accuracy and native semantic understanding in the open, at less than half the price of comparable APIs. This makes high-quality speech intelligence accessible and controllable at scale. 
</p> <p>Both Voxtral models go beyond simple transcription with capabilities that include:</p> <p>Long-form context: With a 32k-token context length, Voxtral handles audio up to 30 minutes for transcription, or 40 minutes for understanding</p> <p>Built-in Q&amp;A and summarization: Supports asking questions directly about the audio content or generating structured summaries, without the need to chain separate ASR and language models (see the sketch after this list)</p> <p>Natively multilingual: Automatic language detection and state-of-the-art performance in the world’s most widely used languages (English, Spanish, French, Portuguese, Hindi, German, Dutch, Italian, to name a few), helping teams serve global audiences with a single system</p> <p>Function-calling straight from voice: Enables direct triggering of backend functions, workflows, or API calls based on spoken user intents, turning voice interactions into actionable system commands without intermediate parsing steps</p> <p>Highly capable at text: Retains the text understanding capabilities of its language-model backbone, Mistral Small 3.1</p> <p>These capabilities make the Voxtral models ideal for real-world interactions and downstream actions, such as summaries, answers, analysis, and insights. For cost-sensitive use cases, Voxtral Mini Transcribe outperforms OpenAI Whisper at less than half the price. For premium use cases, Voxtral Small matches the performance of ElevenLabs Scribe, also at less than half the price.</p>
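<p>The following minimal sketch shows the built-in Q&amp;A capability: asking a question directly about an audio file in a single chat call. It assumes the mistralai Python client; the input_audio content part and the model name voxtral-small-latest follow the public audio documentation at the time of writing, so verify the exact schema for your client version. The file name is illustrative.</p>

```python
# Minimal sketch: ask a question directly about an audio file, without a
# separate ASR step. Assumes the `mistralai` Python client; the "input_audio"
# content part and the "voxtral-small-latest" name follow the public audio
# docs -- verify the exact schema for your client version.
import base64
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
audio_b64 = base64.b64encode(open("earnings_call.mp3", "rb").read()).decode()

response = client.chat.complete(
    model="voxtral-small-latest",
    messages=[{
        "role": "user",
        "content": [
            {"type": "input_audio", "input_audio": audio_b64},
            {"type": "text", "text": "Summarize the three main risks mentioned."},
        ],
    }],
)
print(response.choices[0].message.content)
```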
<p>To assess Voxtral’s transcription capabilities, we evaluate it on a range of English and multilingual benchmarks. For each task, we report the macro-average word error rate (lower is better) across languages. For English, we report a short-form (&lt;30-second) and long-form (&gt;30-second) average.</p> <p>Voxtral comprehensively outperforms Whisper large-v3, the current leading open-source speech transcription model. It beats GPT-4o mini Transcribe and Gemini 2.5 Flash across all tasks, and achieves state-of-the-art results on English short-form and Mozilla Common Voice, surpassing ElevenLabs Scribe and demonstrating its strong multilingual capabilities.</p> <img src="https://cms.mistral.ai/assets/3f77f74c-f1a9-4a42-af04-3c72c78fd295.png?width=1600&amp;height=995"/> <p>When evaluated across languages in FLEURS, Voxtral Small outperforms Whisper on every task, achieving state-of-the-art performance in a number of European languages.</p> <img src="https://cms.mistral.ai/assets/6a3aaf3e-7041-41c6-9cd0-14cde11e3cba.png?width=1600&amp;height=900"/> <p>Macro-average details:</p> <p>En short-form: LibriSpeech Clean, LibriSpeech Other, GigaSpeech, VoxPopuli, Switchboard, CHiME-4, SPGISpeech</p> <p>En long-form: Earnings-21 10-m, Earnings-22 10-m</p> <p>Mozilla Common Voice 15.1: English, French, German, Spanish, Italian, Portuguese, Dutch, Hindi</p> <p>FLEURS: English, French, German, Spanish, Italian, Portuguese, Dutch, Hindi, Arabic</p> <p>Voxtral Small and Mini are capable of answering questions directly from speech, or from an audio file combined with a text-based prompt. To evaluate audio understanding capabilities, we create speech-synthesized versions of three common text understanding tasks. We also evaluate the models on an in-house Audio Understanding (AU) benchmark, where the model is tasked with answering challenging questions on 40 long-form audio examples. Finally, we assess speech translation capabilities on the FLEURS-Translation benchmark.</p> <p>Voxtral Small is competitive with GPT-4o-mini and Gemini 2.5 Flash across all tasks, achieving state-of-the-art performance in speech translation.</p> <img src="https://cms.mistral.ai/assets/d77c4d21-84a9-437f-b9c9-3b27725261ff.png?width=2746&amp;height=1482"/> <p>Voxtral retains the text capabilities of its language-model backbone, enabling Voxtral Mini and Voxtral Small to be used as drop-in replacements for Ministral and Mistral Small 3.1, respectively.</p> <img src="https://cms.mistral.ai/assets/977d9b1c-a999-4e6a-9b84-87654cee6ec5.png?width=1600&amp;height=958"/> <p>Whether you’re prototyping on a laptop, running private workloads on-premises, or scaling to production in the cloud, getting started is straightforward.</p> <p>Download and run locally: Both Voxtral (24B) and Voxtral Mini (3B) are available to <a href="https://huggingface.co/mistralai/">download</a> on Hugging Face</p> <p>Try the API: Integrate frontier speech intelligence into your application with a single <a href="https://console.mistral.ai/">API call</a> (see the sketch after this list). Pricing starts at $0.001 per minute, making high-quality transcription and understanding affordable at scale. Check out our documentation <a href="https://docs.mistral.ai/capabilities/audio/">here</a>.</p> <p>Try it on Le Chat: Try Voxtral in <a href="http://chat.mistral.ai">Le Chat</a>’s voice mode (rolling out to all users in the next couple of weeks)—on web or mobile. Record or upload audio, get transcriptions, ask questions, or generate summaries.</p>
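<p>Here is a minimal sketch of the API path above, sending a file to the transcribe-optimized Voxtral Mini endpoint. It assumes the mistralai Python client; the model name voxtral-mini-latest and the method signature follow the public docs at the time of writing, so verify against the current documentation. The file name is illustrative.</p>

```python
# Minimal sketch: transcribe an audio file with the transcribe-optimized
# Voxtral Mini endpoint. Assumes the `mistralai` Python client; the model
# name and signature are taken from the public docs -- verify before use.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
with open("meeting.mp3", "rb") as f:  # illustrative file name
    transcript = client.audio.transcriptions.complete(
        model="voxtral-mini-latest",
        file={"file_name": "meeting.mp3", "content": f},
    )
print(transcript.text)
```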
<p>We also offer capabilities for Voxtral designed for enterprises with higher security, scale, or domain-specific requirements. Please <a href="https://mistral.ai/contact">reach out to us</a> if you are considering:</p> <p>Private deployment at production scale: Our solutions team can help you set up Voxtral for production-scale inference entirely within your own infrastructure. This is ideal for use cases in regulated industries with strict data-privacy requirements, and includes guidance and tooling for deploying Voxtral across multiple GPUs or nodes, with quantized builds optimized for production throughput and cost efficiency.</p> <p>Domain-specific fine-tuning: Work with our <a href="https://mistral.ai/services">applied AI</a> team to adapt Voxtral to specialized contexts—such as legal, medical, customer support, or internal knowledge bases—improving accuracy for your use case.</p> <p>Advanced context: We’re inviting design partners to build support for speaker identification, emotion detection, advanced diarization, and even longer context windows to meet a wider variety of needs out of the box.</p> <p>Dedicated integration support: Priority access to engineering resources and consulting to help integrate Voxtral cleanly into your existing workflows, products, or data pipelines.</p> <p>We will be hosting a live webinar with our friends at Inworld (check out their cool speech-to-speech <a href="https://inworld-mistral-demo.inworld.ai/index.html">demo</a> with Voxtral and Inworld TTS!) to showcase how you can build end-to-end voice-powered agents on Wednesday, Aug 6. If you’re interested, please register <a href="https://lu.ma/zzgc68zw">here</a>.</p> <p>We’re working on making our audio capabilities more feature-rich in the coming months. In addition to speech understanding, we will soon support:</p> <p>Speaker segmentation</p> <p>Audio markups such as age and emotion</p> <p>Word-level timestamps</p> <p>Non-speech audio recognition</p> <p>And more!</p> <p>We’re excited to see what you will build with Voxtral.</p> <p>The release of our Voxtral models marks a significant step forward, but our journey is far from over. Our ambition is to build the most natural, delightful, near-human voice interfaces, and there’s a lot more work to do. We are actively expanding our nascent audio team and looking for talented research scientists and engineers who share our ambition.</p> <p>If you’re interested in joining us on our mission to democratize artificial intelligence, we welcome your applications to <a href="https://mistral.ai/careers">join our team</a>!</p> https://mistral.ai/news/voxtral Research Tue, 15 Jul 2025 00:00:00 +0000 Announcing Codestral 25.08 and the Complete Mistral Coding Stack for Enterprise https://mistral.ai/news/codestral-25-08 Announcing Codestral 25.08 and the Complete Mistral Coding Stack for Enterprise | Mistral AI <p>How the world’s leading enterprises are using integrated coding solutions from Mistral AI to cut development, review, and testing time by 50%—and why the playbook now fits every company that wants AI-native software development.</p> <img src="https://cms.mistral.ai/assets/973b0f2f-1139-4630-be69-7c8de535c775.png?width=1727&amp;height=497"/> <p>Over the past year, AI coding assistants have introduced powerful capabilities, such as multi-file reasoning, contextual suggestions, and natural-language agents, all directly within the IDE. Despite these improvements, however, adoption inside enterprise environments has been slow. The reasons have less to do with model performance or the interface, and more with how these tools are built, deployed, and governed.</p> <p>Key limitations holding back enterprise teams include:</p> <p>Deployment constraints: Most AI coding tools are SaaS-only, with no options for VPC, on-prem, or air-gapped environments. This is a hard blocker for organizations in finance, defense, healthcare, and other regulated industries.</p> <p>Limited customization: Enterprises often need to adapt models to their own codebases and development conventions.
Without access to model weights, post-training workflows, or extensibility, teams are locked out of leveraging the full value of their codebases.</p> <p>Fragmented architecture: Agents, embeddings, completions, and plugins are frequently decoupled across vendors—leading to integration drift, inconsistent context handling, and operational overhead. Moreover, coding copilots are not well integrated into full enterprise platforms, such as product development tools, CRMs, and customer issue trackers.</p> <p>No unified observability or control: Teams lack visibility into how AI is being used across the development lifecycle. Without telemetry, audit trails, and centralized controls, it’s difficult to scale AI usage responsibly or measure real ROI.</p> <p>Incompatibility with internal toolchains: Many assistants operate in closed environments, making it hard to connect with internal CI/CD pipelines, knowledge bases, or static analysis frameworks.</p> <p>For enterprises, these limitations aren’t edge cases—they’re baseline requirements. Solving them is what separates a good developer tool from an AI-native software development platform.</p> <p>Our approach to enterprise coding isn’t a bundle of isolated tools. It’s an integrated system designed to support enterprise-grade software development across every stage—from code suggestion to autonomous pull requests.</p> <img src="https://cms.mistral.ai/assets/da3fc2a0-146a-4394-90f7-3997a58230af.png?width=1710&amp;height=1376"/> <p>It starts with fast, reliable completion—and scales up to full codebase understanding and multi-file automation.</p> <p>At the foundation of the stack is Codestral, Mistral’s family of code generation models built specifically for high-precision fill-in-the-middle (FIM) completion. These models are optimized for production engineering environments: latency-sensitive, context-aware, and self-deployable.</p> <p>Today, we announce its latest update. Codestral 25.08 delivers measurable upgrades over prior versions:</p> <p>+30% increase in accepted completions</p> <p>+10% more retained code after suggestion</p> <p>50% fewer runaway generations, improving confidence in longer edits</p> <p>Improved performance on academic benchmarks for short- and long-context FIM completion</p> <p>These improvements were validated in live IDE usage across production codebases. The model supports a wide range of languages and tasks, and is deployable across cloud, VPC, or on-prem environments—with no architectural changes required.</p> <p>Codestral 25.08 also brings improvements to chat mode:</p> <p>Instruction following: +5% on IFEval v8</p> <p>Code abilities: +5% on average across MultiPL-E</p> <img src="https://cms.mistral.ai/assets/a11e458a-15f4-40c0-aa98-5af69ff961e1.png?width=1527&amp;height=1151"/> <img src="https://cms.mistral.ai/assets/397701a8-1849-4cdd-aff1-a672388640a5.png?width=1505&amp;height=1135"/> <img src="https://cms.mistral.ai/assets/9dd41059-f1ac-478a-b9ae-3c78926f72f2.png?width=1505&amp;height=1166"/>
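<p>To make the FIM setting concrete, here is a minimal sketch of a fill-in-the-middle request. It assumes the mistralai Python client and the codestral-latest alias; self-hosted deployments would substitute their own endpoint and model name.</p>

```python
# Minimal sketch: fill-in-the-middle (FIM) completion, the task Codestral is
# built for. Assumes the `mistralai` Python client and the "codestral-latest"
# alias; self-hosted deployments use their own endpoint and model name.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
response = client.fim.complete(
    model="codestral-latest",
    prompt="def fibonacci(n: int) -> int:\n    ",   # code before the cursor
    suffix="\n\nprint(fibonacci(10))",               # code after the cursor
    max_tokens=128,
)
print(response.choices[0].message.content)
```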
</p> <a href="https://docs.mistral.ai/capabilities/embeddings/code_embeddings/">Codestral Embed</a> <p>Key advantages include:</p> <p>High-recall, low-latency search across massive monorepos and poly-repos. Developers can find internal logic, validation routines, or domain-specific utilities using natural language.</p> <p>Flexible embedding outputs, with configurable dimensions (e.g., 256-dim, INT8) that balance retrieval quality with storage efficiency—while outperforming alternatives even at lower dimensionality</p> <p>Private deployment for maximum control, ensuring no data leakage via third-party APIs. All embedding inference and index storage can run within enterprise infrastructure</p> <p>This embedding layer serves as both the context foundation for agentic workflows and the retrieval engine powering in‑IDE code search features—without sacrificing privacy, performance, or precision.</p> <p>With relevant context surfaced, AI can take meaningful action. <a href="https://mistral.ai/news/devstral">Devstral</a>, powered by the <a href="https://github.com/All-Hands-AI/OpenHands">OpenHands </a>agent scaffold, enables enterprise-ready agentic coding workflows. It’s built specifically for engineering tasks—cross-file refactors, test generation, and PR authoring—using structured, context-rich reasoning.</p> <a href="https://mistral.ai/news/devstral">Devstral</a> <a href="https://github.com/All-Hands-AI/OpenHands">OpenHands </a> <p>Standout capabilities include:</p> <p>Top open‑model performance on SWE‑Bench Verified: Devstral Small 1.1 scores 53.6%, and Devstral Medium reaches 61.6%, outperforming Claude 3.5, GPT‑4.1‑mini, and other open models by wide margins</p> <p>Flexible architecture for any environment: Devstral is available in multiple sizes. The open-weight Devstral Small (24B, Apache-2.0) runs efficiently on a single Nvidia RTX 4090 or Mac with 32 GB RAM—ideal for self-hosted, air-gapped, or experimental workflows. The larger Devstral Medium is available through enterprise partnerships and our API for more advanced code understanding and planning capabilities.</p> <p>Open model for extensibility: Teams can fine-tune Devstral Small on proprietary code, build custom agents, or embed it directly into CI/CD workflows—without licensing lock-in. 
<p>With relevant context surfaced, AI can take meaningful action. <a href="https://mistral.ai/news/devstral">Devstral</a>, powered by the <a href="https://github.com/All-Hands-AI/OpenHands">OpenHands</a> agent scaffold, enables enterprise-ready agentic coding workflows. It’s built specifically for engineering tasks—cross-file refactors, test generation, and PR authoring—using structured, context-rich reasoning.</p> <p>Standout capabilities include:</p> <p>Top open-model performance on SWE-Bench Verified: Devstral Small 1.1 scores 53.6%, and Devstral Medium reaches 61.6%, outperforming Claude 3.5, GPT-4.1-mini, and other open models by wide margins</p> <p>Flexible architecture for any environment: Devstral is available in multiple sizes. The open-weight Devstral Small (24B, Apache 2.0) runs efficiently on a single NVIDIA RTX 4090 or a Mac with 32 GB RAM—ideal for self-hosted, air-gapped, or experimental workflows. The larger Devstral Medium is available through enterprise partnerships and our API for more advanced code understanding and planning capabilities.</p> <p>Open model for extensibility: Teams can fine-tune Devstral Small on proprietary code, build custom agents, or embed it directly into CI/CD workflows—without licensing lock-in. For production environments requiring higher model performance, Devstral Medium is available with enterprise-grade support, including the ability for companies to post-train and fine-tune.</p> <p>Delivering agentic automation within private infrastructure lets engineering organizations reduce friction, ensure compliance, and speed up delivery with repeatable, auditable AI workflows.</p> <p>All capabilities in the Mistral stack—completion, semantic search, and agentic workflows—are surfaced through Mistral Code, a native plugin for JetBrains and VS Code.</p> <img src="https://cms.mistral.ai/assets/f5361c96-bf6c-4876-8a1c-c9fd6e80a612.png?width=1152&amp;height=840"/> <p>It provides:</p> <p>Inline completions using Codestral 25.08, optimized for FIM and multi-line editing</p> <p>One-click task automations like “Write commit message”, “Fix function”, or “Add docstring”, powered by Devstral</p> <p>Context awareness from Git diffs, terminal history, and static analysis tools</p> <p>Integrated semantic search, backed by Codestral Embed</p> <p>Mistral Code is built to support enterprise deployment requirements:</p> <p>Deploy in any environment: cloud, self-managed VPC, or fully on-prem (GA in Q3)</p> <p>No mandatory telemetry, and no external API calls for inference or search</p> <p>SSO, audit logging, and usage controls for secure, policy-compliant adoption</p> <p>Usage observability via the Mistral Console, including metrics on AI-generated code, suggestion acceptance, and agent usage</p> <p>These features give engineering, platform, and security teams the ability to roll out AI tooling safely, incrementally, and with full visibility.</p> <p>The Mistral coding stack integrates autocomplete, semantic retrieval, and agentic workflows directly into the IDE—while giving platform teams control over deployment, observability, and security. Consider a typical development task:</p> <p>Say a developer is working on a payments service written in Python. A recent update to a third-party billing API means they need to update the integration logic and add proper error handling.</p> <p>They start by navigating to the billing handler. As they modify the function signature, Codestral fills in the expected parameters and suggests a first-pass implementation, reducing the need to copy patterns from other services.</p> <p>Before changing the retry logic, they need to understand how similar failures are handled elsewhere. Instead of switching to Slack or searching GitHub manually, they enter a query directly in the IDE: “How do we handle Stripe timeouts in the checkout flow?” The embedding index, running locally, returns a helper module from another service that wraps retry logic with exponential backoff.</p> <p>They copy the pattern into their own handler—but realize three other services are using outdated retry code. They invoke a Devstral-powered agent from within the IDE: “Replace all uses of retry_with_sleep in the billing and checkout services with the new retry_exponential helper, and update the docs.” Devstral scans the codebase using the same embeddings, makes the required edits across files, and generates a draft PR. The agent also writes a changelog and updates the README section on error handling.</p> <p>The developer reviews the PR, confirms the logic, and merges it.
A cross-service update that previously would have required search, coordination, and hand-written boilerplate now completes in one editing session—with traceable, reviewable output.</p> <p>At the organization level, this same workflow unlocks broader advantages:</p> <p>Every component in the stack can be self-hosted or run on-prem, giving teams control over data, latency, and deployment architecture.</p> <p>Observability is built in. The Mistral Console tracks usage patterns, model acceptance rates, and agent adoption, providing the data needed to tune rollout and measure ROI.</p> <p>Security and compliance controls—including SSO, audit logging, and telemetry configuration—make it easy to integrate with internal policies and infrastructure.</p> <p>No stitching required. Because completion, search, and agents share architecture, context handling, and support boundaries, teams avoid the drift, overhead, and security gaps of piecing together third-party tools.</p> <p>The result is a development workflow that’s both faster and easier to govern—designed for individual productivity and organizational scale.</p> <p>The Mistral coding stack is already being used in production by organizations across consulting, finance, transportation, and industry—each with different requirements, but shared constraints around data control, deployment flexibility, and internal code complexity.</p> <p>Capgemini has rolled out the stack across global delivery teams to accelerate development while maintaining code ownership and compliance across clients in defense, telecom, and energy.</p> <p>Abanca, a leading bank in Spain operating under European banking regulations, uses Mistral’s models in a fully self-hosted deployment to meet data residency and network isolation requirements—without sacrificing usability.</p> <p>SNCF, the French national railway company, uses agentic workflows to modernize legacy Java systems safely and incrementally, with human oversight built into the loop.</p> <p>“Leveraging Mistral’s Codestral has been a game changer in the adoption of private coding assistants for our client projects in regulated industries.
We have evolved from basic support for some development activities to systematic value for our development teams.”</p> <p>— Alban Alev, VP, Head of Solutioning at Capgemini France</p> <p>In addition, several tier-1 global banks and industrial manufacturers are actively piloting or scaling adoption across their engineering teams—driven by requirements that hosted copilots and fragmented tooling can’t support.</p> <p>These use cases reflect a growing shift: organizations are no longer looking for isolated assistants—they’re adopting integrated AI systems that match the complexity, security posture, and velocity of modern enterprise software development.</p> <p>The full Mistral coding stack—Codestral 25.08, Devstral, Codestral Embed, and the Mistral Code IDE extension—is available today for enterprise deployment.</p> <p>Teams can start with autocomplete and semantic search, then expand to agentic workflows and private deployments at their own pace.</p> <p>To begin:</p> <p>Install Mistral Code from the <a href="https://plugins.jetbrains.com/plugin/27493-mistral-code-enterprise">JetBrains</a> or <a href="https://marketplace.visualstudio.com/items?itemName=mistralai.mistral-code">VS Code</a> marketplace</p> <p>Connect to your preferred deployment modality (cloud, VPC, or on-prem)</p> <p>If you would like to use the models for your own copilot, get your keys at <a href="http://console.mistral.ai">console.mistral.ai</a>. For more information on Mistral’s coding solutions, please visit our <a href="https://mistral.ai/solutions/coding">website</a> and <a href="https://docs.mistral.ai/">documentation</a>.</p> <p>To evaluate on-prem options or enterprise-scale deployments, or to schedule a hands-on pilot, fill out the demand form on this page. A member of the Mistral team will follow up to help tailor the rollout to your environment.</p> https://mistral.ai/news/codestral-25-08 Research Wed, 30 Jul 2025 00:00:00 +0000 Introducing Mistral 3 https://mistral.ai/news/mistral-3 The next generation of open multimodal and multilingual AI <p>The next generation of open multimodal and multilingual AI</p> <p>Today, we announce Mistral 3, the next generation of Mistral models. Mistral 3 includes three state-of-the-art small, dense models (14B, 8B, and 3B) and Mistral Large 3 – our most capable model to date – a sparse mixture-of-experts model trained with 41B active and 675B total parameters. All models are released under the Apache 2.0 license.
Open-sourcing our models in a variety of compressed formats empowers the developer community and puts AI in people’s hands through distributed intelligence.</p> <p>The Ministral models represent the best performance-to-cost ratio in their category. At the same time, Mistral Large 3 joins the ranks of frontier instruction-fine-tuned open-source models.</p> <img src="https://cms.mistral.ai/assets/98aeee04-e1c3-43b7-b90e-c51da84d5e56.png?width=1905&amp;height=1242"/> <img src="https://cms.mistral.ai/assets/bdf27a12-76fd-4e62-be9b-938f14288a9a.png?width=1346&amp;height=1115"/> <p>Mistral Large 3 is one of the best permissively licensed open-weight models in the world, trained from scratch on 3,000 NVIDIA H200 GPUs. Mistral Large 3 is Mistral’s first mixture-of-experts model since the seminal Mixtral series, and represents a substantial step forward in pretraining at Mistral. After post-training, the model achieves parity with the best instruction-tuned open-weight models on the market on general prompts, while also demonstrating image understanding and best-in-class performance on multilingual conversations (i.e., non-English/Chinese).</p> <p>Mistral Large 3 debuts at #2 in the OSS non-reasoning models category (#6 amongst OSS models overall) on the <a href="https://lmarena.ai/leaderboard/text">LMArena leaderboard</a>.</p> <img src="https://cms.mistral.ai/assets/4626af3d-7554-4d50-9c0e-041fe7111ece.png?width=1905&amp;height=1242"/> <p>We release both the base and instruction-fine-tuned versions of Mistral Large 3 under the Apache 2.0 license, providing a strong foundation for further customization across the enterprise and developer communities. A reasoning version is coming soon!</p> <p>Working in conjunction with vLLM and Red Hat, we have made Mistral Large 3 readily accessible to the open-source community. We’re releasing a checkpoint in NVFP4 format, built with <a href="https://github.com/vllm-project/llm-compressor">llm-compressor</a>. This optimized checkpoint lets you run Mistral Large 3 efficiently on Blackwell NVL72 systems and on a single 8×A100 or 8×H100 node using <a href="https://github.com/vllm-project/vllm">vLLM</a>.</p>
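<p>As a sketch of what this looks like in practice, the snippet below loads an NVFP4 checkpoint with vLLM for offline inference on an 8-GPU node. The repository id is illustrative (check the Hugging Face collection for the exact name), and it assumes a vLLM build with NVFP4 support; serving via vLLM's OpenAI-compatible server follows the same pattern.</p>

```python
# Minimal sketch: load an NVFP4 checkpoint with vLLM for offline inference.
# Assumes vLLM is installed with NVFP4 support; the repo id
# "mistralai/Mistral-Large-3-Instruct-NVFP4" is hypothetical -- check the
# Hugging Face collection for the exact name.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-Large-3-Instruct-NVFP4",  # hypothetical repo id
    tensor_parallel_size=8,  # e.g., a single 8xA100 or 8xH100 node
)
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain mixture-of-experts routing in two sentences."],
                       params)
print(outputs[0].outputs[0].text)
```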
<p>Delivering advanced open-source AI models requires broad optimization, achieved through a <a href="https://blogs.nvidia.com/blog/mistral-frontier-open-models/">partnership with NVIDIA</a>. All our new Mistral 3 models, from Large 3 to Ministral 3, were trained on NVIDIA Hopper GPUs to tap high-bandwidth HBM3e memory for frontier-scale workloads. NVIDIA’s extreme co-design approach brings hardware, software, and models together. NVIDIA engineers enabled inference support in <a href="https://github.com/NVIDIA/TensorRT-LLM">TensorRT-LLM</a> and <a href="https://github.com/sgl-project/sglang">SGLang</a> for the complete Mistral 3 family, enabling efficient low-precision execution.</p> <p>For Large 3’s sparse MoE architecture, NVIDIA integrated state-of-the-art Blackwell attention and MoE kernels, added support for prefill/decode disaggregated serving, and collaborated with Mistral on speculative decoding, enabling developers to efficiently serve long-context, high-throughput workloads on GB200 NVL72 and beyond. On the edge, NVIDIA delivers optimized deployments of the Ministral models on <a href="http://nvidia.com/en-us/products/workstations/dgx-spark/">DGX Spark</a>, <a href="https://www.nvidia.com/en-us/ai-on-rtx/">RTX PCs and laptops</a>, and <a href="https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/">Jetson devices</a>, giving developers a consistent, high-performance path to run these open models from data center to robot.</p> <p>We are very thankful for the collaboration and want to thank vLLM, Red Hat, and NVIDIA in particular.</p> <img src="https://cms.mistral.ai/assets/ea1fcc83-5bad-400e-b63a-35c8a8c0bf9c.png?width=1726&amp;height=1062"/> <p>For edge and local use cases, we release the Ministral 3 series, available in three model sizes: 3B, 8B, and 14B parameters. For each model size, we release base, instruct, and reasoning variants to the community, each with image understanding capabilities, all under the Apache 2.0 license. Combined with the models’ native multimodal and multilingual capabilities, the Ministral 3 family offers a model for every enterprise or developer need.</p> <p>Furthermore, Ministral 3 achieves the best cost-to-performance ratio of any OSS model. In real-world use cases, the number of generated tokens and model size matter equally. The Ministral instruct models match or exceed the performance of comparable models while often producing an order of magnitude fewer tokens.
</p> <p>For settings where accuracy is the only concern, the Ministral reasoning variants can think longer to produce state-of-the-art accuracy in their weight class: for instance, 85% on AIME ’25 with our 14B variant.</p> <img src="https://mistral.ai/_next/image?url=https%3A%2F%2Fcms.mistral.ai%2Fassets%2Fcacf82e0-4772-4aeb-9ba9-5049858b5426.png%3Fwidth%3D1905%26height%3D1242&amp;w=3840&amp;q=75"/> <img src="https://mistral.ai/_next/image?url=https%3A%2F%2Fcms.mistral.ai%2Fassets%2F9ebc68c0-4cf0-4fac-9eae-c80c36a92d85.png%3Fwidth%3D1905%26height%3D1242&amp;w=3840&amp;q=75"/> <img src="https://mistral.ai/_next/image?url=https%3A%2F%2Fcms.mistral.ai%2Fassets%2Fe1d05b89-fe53-48c7-93fc-6048e9908e07.png%3Fwidth%3D1905%26height%3D1242&amp;w=3840&amp;q=75"/> <img src="https://mistral.ai/_next/image?url=https%3A%2F%2Fcms.mistral.ai%2Fassets%2F2e78ca03-e926-4061-b48d-48be95d3f15d.png%3Fwidth%3D1905%26height%3D1242&amp;w=3840&amp;q=75"/> <img src="https://mistral.ai/_next/image?url=https%3A%2F%2Fcms.mistral.ai%2Fassets%2Ff32ee051-b9fe-4567-851d-7218075bcb6a.png%3Fwidth%3D1905%26height%3D1242&amp;w=3840&amp;q=75"/> <img src="https://mistral.ai/_next/image?url=https%3A%2F%2Fcms.mistral.ai%2Fassets%2F1821c1b0-d4f7-4e99-bd32-7eafcbfaecb3.png%3Fwidth%3D1905%26height%3D1242&amp;w=3840&amp;q=75"/> <img src="https://mistral.ai/_next/image?url=https%3A%2F%2Fcms.mistral.ai%2Fassets%2Fd03bf1e5-824a-4593-8fd0-db148aab2a5a.png%3Fwidth%3D1905%26height%3D1242&amp;w=3840&amp;q=75"/> <img src="https://mistral.ai/_next/image?url=https%3A%2F%2Fcms.mistral.ai%2Fassets%2Fe76e313e-06d2-48ea-9126-950d4bc948bd.png%3Fwidth%3D1905%26height%3D1242&amp;w=3840&amp;q=75"/> <img src="https://mistral.ai/_next/image?url=https%3A%2F%2Fcms.mistral.ai%2Fassets%2F0c6c1567-0648-4b4e-803f-8683a3e49171.png%3Fwidth%3D1905%26height%3D1242&amp;w=3840&amp;q=75"/> <p>Mistral 3 is available today on <a href="https://console.mistral.ai/home">Mistral AI Studio</a>, <a href="https://aws.amazon.com/about-aws/whats-new/2025/12/mistral-large-3-ministral-3-family-available-amazon-bedrock/">Amazon Bedrock</a>, Azure Foundry, Hugging Face (<a href="https://huggingface.co/collections/mistralai/mistral-large-3">Large 3</a> &amp; <a href="https://huggingface.co/collections/mistralai/ministral-3">Ministral</a>), <a href="https://modal.com/docs/examples/ministral3_inference">Modal</a>, IBM WatsonX, OpenRouter, Fireworks, Unsloth AI, and Together AI. Support on NVIDIA NIM and AWS SageMaker is coming soon.</p> <p>For organizations seeking tailored AI solutions, Mistral AI offers <a href="https://mistral.ai/solutions/custom-model-training">custom model training services</a> to fine-tune or fully adapt our models to your specific needs. Whether optimizing for domain-specific tasks, enhancing performance on proprietary datasets, or deploying models in unique environments, our team collaborates with you to build AI systems that align with your goals.
For enterprise-grade deployments, custom training ensures your AI solution delivers maximum impact securely, efficiently, and at scale.</p> <p>The future of AI is open. Mistral 3 redefines what’s possible with a family of models built for frontier intelligence, multimodal flexibility, and unmatched customization. Whether you’re deploying edge-optimized solutions with Ministral 3 or pushing the boundaries of reasoning with Mistral Large 3, this release puts state-of-the-art AI directly into your hands.</p> <p>Frontier performance, open access: Achieve closed-source-level results with the transparency and control of open-source models.</p> <p>Multimodal and multilingual: Build applications that understand text, images, and complex logic across 40+ native languages.</p> <p>Scalable efficiency: From 3B to 675B parameters, choose the model that fits your needs, from edge devices to enterprise workflows.</p> <p>Agentic and adaptable: Deploy for coding, creative collaboration, document analysis, or tool-use workflows with precision.</p> <p>Explore the model documentation:</p> <p><a href="https://docs.mistral.ai/models/ministral-3-3b-25-12">Ministral 3 3B-25-12</a></p> <p><a href="https://docs.mistral.ai/models/ministral-3-8b-25-12">Ministral 3 8B-25-12</a></p> <p><a href="https://docs.mistral.ai/models/ministral-3-14b-25-12">Ministral 3 14B-25-12</a></p> <p><a href="https://docs.mistral.ai/models/mistral-large-3-25-12">Mistral Large 3</a></p> <p>Technical documentation for customers is available on our <a href="https://legal.mistral.ai/">AI Governance Hub</a>.</p> <p>Start building: <a href="https://huggingface.co/collections/mistralai/ministral-3">Ministral 3</a> and <a href="https://huggingface.co/collections/mistralai/mistral-large-3">Large 3</a> on Hugging Face, or deploy via <a href="https://console.mistral.ai/home">Mistral AI’s platform</a> for instant API access and <a href="https://mistral.ai/pricing#api-pricing">API pricing</a>.</p> <p>Customize for your needs: Need a tailored solution? <a href="https://mistral.ai/contact">Contact our team</a> to explore fine-tuning or enterprise-grade training.</p> <p>Share your projects, questions, or breakthroughs with us: <a href="https://x.com/MistralAI">Twitter/X</a>, <a href="https://discord.com/invite/mistralai">Discord</a>, or <a href="https://github.com/mistralai">GitHub</a>.</p> <p>We believe that the future of AI should be built on transparency, accessibility, and collective progress.
With this release, we invite the world to explore, build, and innovate with us, unlocking new possibilities in reasoning, efficiency, and real-world applications.</p> <p>Together, let’s turn understanding into action.</p> https://mistral.ai/news/mistral-3 Research Tue, 02 Dec 2025 00:00:00 +0000 Introducing: Devstral 2 and Mistral Vibe CLI. https://mistral.ai/news/devstral-2-vibe-cli Devstral 2 <p>Devstral 2</p> <p>Mistral Vibe CLI</p> <p>State-of-the-art, open-source agentic coding models and CLI agent.</p> <p>█░ Date: Dec 9, 2025</p> <p>█░ Category: Research</p> <p>█░ Author: Mistral AI</p> <p>Today, we're releasing Devstral 2—our next-generation coding model family, available in two sizes: Devstral 2 (123B) and Devstral Small 2 (24B). Devstral 2 ships under a modified MIT license, while Devstral Small 2 uses Apache 2.0. Both are open-source and permissively licensed to accelerate distributed intelligence.</p> <p>Devstral 2 is currently free to use via <a href="https://console.mistral.ai/">our API</a>.</p> <p>We are also introducing Mistral Vibe, a native CLI built for Devstral that enables end-to-end code automation.</p> <p>Devstral 2: SOTA open model for code agents, with a fraction of the parameters of its competitors, achieving 72.2% on SWE-bench Verified.</p> <p>Up to 7x more cost-efficient than Claude Sonnet at real-world tasks.</p> <p>Mistral Vibe CLI: Native, open-source agent in your terminal, solving software engineering tasks autonomously.</p> <p>Devstral Small 2: 24B-parameter model available via API or deployable locally on consumer hardware.</p> <p>Compatible with on-prem deployment and custom fine-tuning.</p> <p>Devstral 2 is a 123B-parameter dense transformer supporting a 256K context window. It reaches 72.2% on SWE-bench Verified—establishing it as one of the best open-weight models while remaining highly cost-efficient. Released under a modified MIT license, Devstral sets the open state of the art for code agents.</p> <p>Devstral Small 2 scores 68.0% on SWE-bench Verified, placing firmly among models up to five times its size while being capable of running locally on consumer hardware.</p> <img src="https://cms.mistral.ai/assets/d295e716-acbe-4d05-8764-861ca2f2a2eb.png?width=1686&amp;height=1093"/> <img src="https://cms.mistral.ai/assets/9c36eef1-2b4c-4fb8-8ef0-d531116ec53a.png?width=1686&amp;height=1093"/> <p>Devstral 2 (123B) and Devstral Small 2 (24B) are 5x and 28x smaller than DeepSeek V3.2, and 8x and 41x smaller than Kimi K2—proving that compact models can match or exceed the performance of much larger competitors. Their reduced size makes deployment practical on limited hardware, lowering barriers for developers, small businesses, and hobbyists.</p> <img src="https://cms.mistral.ai/assets/3c7a5ea7-d83f-4dc4-9129-965c321bb379.png?width=1686&amp;height=969"/> <img src="https://cms.mistral.ai/assets/49e0d71c-436c-4334-9fff-fa68c9f60380.png?width=1686&amp;height=969"/> <p>Devstral 2 supports exploring codebases and orchestrating changes across multiple files while maintaining architecture-level context.
It tracks framework dependencies, detects failures, and retries with corrections—solving challenges like bug fixing and modernizing legacy systems.</p> <p>The model can be fine-tuned to prioritize specific languages or optimize for large enterprise codebases.</p> <p>We evaluated Devstral 2 against DeepSeek V3.2 and Claude Sonnet 4.5 using human evaluations conducted by an independent annotation provider, with tasks scaffolded through Cline. Devstral 2 shows a clear advantage over DeepSeek V3.2, with a 42.8% win rate versus a 28.6% loss rate. However, Claude Sonnet 4.5 remains significantly preferred, indicating that a gap with closed-source models persists.</p> <img src="https://cms.mistral.ai/assets/48b2b0fc-f8d8-44da-a3a2-4961aad2f10e.png?width=1371&amp;height=670"/> <img src="https://cms.mistral.ai/assets/542495d8-31d9-4053-a426-df9dccc58ef1.png?width=1371&amp;height=670"/> <p>“Devstral 2 is at the frontier of open-source coding models. In Cline, it delivers a tool-calling success rate on par with the best closed models; it's a remarkably smooth driver. This is a massive contribution to the open-source ecosystem.” — Cline</p> <p>“Devstral 2 was one of our most successful stealth launches yet, surpassing 17B tokens in the first 24 hours. Mistral AI is moving at Kilo Speed with a cost-efficient model that truly works at scale.” — Kilo Code</p> <p>Devstral Small 2, a 24B-parameter model with the same 256K context window, released under Apache 2.0, brings these capabilities to a compact, locally deployable form. Its size enables fast inference, tight feedback loops, and easy customization—with a fully private, on-device runtime. It also supports image inputs and can power multimodal agents.</p> <p>Mistral Vibe CLI is an open-source command-line coding assistant powered by Devstral. It explores, modifies, and executes changes across your codebase using natural language—in your terminal or integrated into your preferred IDE via the Agent Communication Protocol. It is released under the Apache 2.0 license.</p> <p>Vibe CLI provides an interactive chat interface with tools for file manipulation, code searching, version control, and command execution. Key features:</p> <p>Project-aware context: Automatically scans your file structure and Git status to provide relevant context</p> <p>Smart references: Reference files with @ autocomplete, execute shell commands with !, and use slash commands for configuration changes</p> <p>Multi-file orchestration: Understands your entire codebase—not just the file you're editing—enabling architecture-level reasoning that can halve your PR cycle time</p> <p>Persistent history, autocompletion, and customizable themes</p> <p>You can run Vibe CLI programmatically for scripting, toggle auto-approval for tool execution, configure local models and providers through a simple config.toml, and control tool permissions to match your workflow.</p> <p>Devstral 2 is currently offered free via <a href="http://console.mistral.ai">our API</a>.
<p>We’ve partnered with leading, open agent tools <a href="https://kilo.ai/">Kilo Code</a> and <a href="https://cline.bot/">Cline</a> to bring Devstral 2 to where you already build.</p>
<p>Mistral Vibe CLI is available as an extension in <a href="https://zed.dev/extensions">Zed</a>, so you can use it directly inside your IDE.</p>
<p>Devstral 2 is optimized for data center GPUs and requires a minimum of 4 H100-class GPUs for deployment. You can try it today on <a href="http://build.nvidia.com">build.nvidia.com</a>. Devstral Small 2 is built for single-GPU operation and runs across a broad range of NVIDIA systems, including DGX Spark and GeForce RTX. NVIDIA NIM support will be available soon.</p>
<p>Devstral Small also runs on consumer-grade GPUs and on CPU-only configurations with no dedicated GPU.</p>
<p>For optimal performance, we recommend a temperature of 0.2 and following the best practices defined for <a href="https://github.com/mistralai/mistral-vibe/blob/main/README.md">Mistral Vibe CLI</a>.</p>
<p>We’re excited to see what you will build with Devstral 2, Devstral Small 2, and Vibe CLI!</p>
<p>Share your projects, questions, or discoveries with us on <a href="https://x.com/mistralai">X/Twitter</a>, <a href="https://discord.com/invite/mistralai">Discord</a>, or <a href="https://github.com/mistralai">GitHub</a>.</p>
<p>If you’re interested in shaping open-source research and building world-class interfaces that bring truly open, frontier AI to users, we welcome you to <a href="https://mistral.ai/careers">apply to join our team</a>.</p>
https://mistral.ai/news/devstral-2-vibe-cli Research Tue, 09 Dec 2025 00:00:00 +0000
Empowering product development with an agentic workflow https://mistral.ai/news/agentic-workflows-from-meetings-to-dev-tickets
<p>Transform meeting transcripts into development tasks with Mistral AI: from call transcript to PRD to engineering tickets.</p>
<p>Product development teams face constant pressure to move quickly while maintaining alignment across stakeholders. Traditional methods of converting stakeholder discussions into actionable development plans often involve manual, time-consuming processes that can introduce errors and delays.
By using AI agents, teams can dramatically accelerate this workflow while improving accuracy and consistency.</p>
<img src="https://cms.mistral.ai/assets/72a46eef-f789-48d9-bb0d-8a8773a44088.png?width=550&amp;height=120"/>
<p>The journey from initial product discussions to actual development typically involves multiple manual steps. Product managers often spend hours converting raw meeting notes into structured documentation, and engineers waste valuable time interpreting requirements and breaking them down into actionable tasks. This process can create bottlenecks, especially as organizations scale and the volume of product initiatives increases.</p>
<p>But what if we could automate this entire workflow? What if we could take a meeting transcript and automatically generate both a comprehensive PRD and a set of actionable development tickets? This would not only save time but ensure consistent documentation and improved alignment across teams.</p>
<p>Consider a typical product planning cycle. Stakeholders meet to discuss new features, but the follow-up work of documentation and task creation consumes valuable time that could be spent on actual development. By implementing an automated agentic workflow, teams can dramatically reduce this overhead.</p>
<p>Using our TranscriptToPRDTicket agentic workflow, powered by Mistral AI LLMs, we have created a system that automatically processes meeting transcripts, generates detailed Product Requirements Documents (PRDs), and creates actionable development tickets. This end-to-end automation ensures teams can move from discussion to development with minimal manual intervention.</p>
<img src="https://cms.mistral.ai/assets/e46ccb7f-6602-4d31-9391-048eb35ef529.png?width=1920&amp;height=750"/>
<p>At its core, our agentic solution leverages two key components: PRDAgent and TicketCreationAgent, both powered by Mistral Large 2, a state-of-the-art LLM. The workflow follows a clear progression from <a href="https://github.com/mistralai/cookbook/blob/main/mistral/agents/transcript_linearticket_agent/lechat_product_call_trascript.pdf">meeting transcripts</a> to a PRD to development tickets.</p>
<p>This automated pipeline ensures consistency and traceability while dramatically reducing manual effort. The entire process is orchestrated by the advanced language-understanding capabilities and structured-output mechanisms of Mistral AI's LLMs. At the end of the workflow, it automatically generates structured tickets in project management tools like Linear, as illustrated in the image below.</p>
<img src="https://cms.mistral.ai/assets/228d61ba-597f-428a-8e0a-759de45de271.png?width=1920&amp;height=839"/>
<p>Mistral AI's LLMs provide the ideal foundation for this agentic workflow through their powerful natural language understanding and structured output capabilities, with Mistral Large 2 powering both agents.</p>
<p>If you're interested in implementing this agentic workflow in your organization, we've provided a complete implementation in our <a href="https://colab.research.google.com/github/mistralai/cookbook/blob/main/mistral/agents/transcript_linearticket_agent/TranscriptToLinearTicketAgent.ipynb">Google Colab notebook</a>. This resource will help you get started quickly and customize the workflow for your specific needs. A condensed sketch of the two-agent pipeline is shown below.</p>
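<p>The following is a simplified, illustrative sketch of the pipeline rather than the notebook's exact code: prompts are abbreviated, the Linear integration step is omitted, and the transcript file name is hypothetical. It assumes the mistralai Python SDK.</p>

```python
# Illustrative two-step pipeline: transcript -> PRD -> tickets.
# Simplified sketch, not the notebook's exact code; the Linear step is
# omitted and the prompts are abbreviated.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
MODEL = "mistral-large-latest"

def generate_prd(transcript: str) -> str:
    """PRDAgent: turn a raw meeting transcript into a structured PRD."""
    response = client.chat.complete(
        model=MODEL,
        messages=[
            {"role": "system", "content": "You are a product manager. Convert the meeting transcript into a PRD with goals, requirements, and open questions."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

def generate_tickets(prd: str) -> str:
    """TicketCreationAgent: break a PRD into actionable engineering tickets."""
    response = client.chat.complete(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Break this PRD into engineering tickets, each with a title, description, and acceptance criteria. Return JSON."},
            {"role": "user", "content": prd},
        ],
        response_format={"type": "json_object"},  # JSON mode for machine-readable output
    )
    return response.choices[0].message.content

prd = generate_prd(open("meeting_transcript.txt").read())  # hypothetical input file
print(generate_tickets(prd))
```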
https://mistral.ai/news/agentic-workflows-from-meetings-to-dev-tickets Solutions Tue, 04 Mar 2025 00:00:00 +0000
Evaluating RAG with LLM as a Judge https://mistral.ai/news/llm-as-rag-judge
<p>Using Mistral Models for LLM as a Judge (With Structured Outputs)</p>
<p>Large Language Models (LLMs) are rapidly becoming essential tools for creating widely-used applications. But making sure these models perform as expected is much easier said than done. Evaluating LLM systems isn't just about verifying that the outputs are coherent, but also about making sure the answers are relevant and meet the necessary requirements.</p>
<p>Retrieval-Augmented Generation (RAG) systems have become a popular way to boost LLM capabilities. By pairing an LLM with a data retrieval system, LLMs can generate responses that are not only coherent but also grounded in relevant and current information. This helps cut down on moments when the model sounds confident but may actually be hallucinating.</p>
<p>However, evaluating whether these RAG systems are performant isn't straightforward. It's not just about whether the output generated by the LLM sounds correct; it's also about verifying at the source whether the retrieved information is relevant and accurate. Traditional methods often miss these nuances, making a comprehensive evaluation framework essential.</p>
<p>Taking a step back, many domains face this same problem: there may be no clear quantitative metrics or evaluation data to measure performance against, and the measures of success may be more qualitative and nuanced.</p>
<p>In scenarios where evaluations must be run at scale and human evaluators are scarce, “LLM as a Judge” has become a popular solution for evaluating the answers of LLM systems.</p>
<p>An “LLM as a Judge” typically works by creating a “judge LLM” that, given the answer of a “generator LLM”, is instructed to grade that answer on a given scale. This scale can be numerical (e.g. a 1-10 or 0-3 scale), binary (e.g. True or False), or qualitative (e.g. “Excellent”, “Good”, “Okay”, “Bad”). The grades can then be averaged across all questions of an evaluation dataset for an overall weighted score.</p>
<p>In the instruction prompt of an “LLM as a Judge” you can define custom criteria for the LLM to evaluate answers on. For example, you can create specific grading criteria checking whether responses are accurate, relevant, and grounded.
Mistral’s models can be effectively used as both the generator and judge components of LLM systems.</p>
<img src="https://cms.mistral.ai/assets/d0f68ba1-77f4-4d28-ad97-4c62b34e4787.png?width=2461&amp;height=1408"/>
<p>The RAG Triad is a popular framework for evaluating RAG systems, designed to check the reliability and contextual integrity of LLM responses against the retrieved data sources. It was introduced by <a href="https://www.trulens.org/getting_started/core_concepts/rag_triad/">TruLens</a>, and several similar frameworks, such as <a href="https://docs.ragas.io/en/latest/concepts/metrics/overview/">RAGAS</a>, have been created to evaluate RAG systems more qualitatively.</p>
<p>The RAG Triad focuses on three key areas:</p>
<p>1. Context Relevance: This metric checks if the retrieved documents align well with the user's query. It ensures that the information used to generate the response is relevant and appropriate, setting the stage for a coherent and useful answer.</p>
<p>2. Groundedness: This metric verifies if the generated response is based accurately on the retrieved context. It ensures that the LLM's output is factual and reliable, reducing the risk of those pesky hallucinations.</p>
<p>3. Answer Relevance: This metric assesses how well the final response addresses the user's original query. It ensures that the generated answer is helpful and on point, aligning with the user's intent and providing valuable insights.</p>
<p>By using the RAG Triad, evaluators can get a comprehensive view of the LLM's performance, spot potential issues, and ensure that the responses are accurate, reliable, and contextually sound. This framework helps build trustworthy interactions between humans, AI, and knowledge bases.</p>
<img src="https://cms.mistral.ai/assets/42c8c934-bcf2-4450-b33a-5b90079f4ec0.png?width=2700&amp;height=1502"/>
<p>This is particularly useful for evaluating a RAG LLM system end-to-end, as it includes assessing both the retrieval and generation steps. It helps evaluate not just the LLM answer, but also its correctness relative to the source data retrieved.</p>
<p>Custom structured outputs are a recent feature of our API that lets you control the output format of an LLM and return values in a structured, machine-readable way for greater reliability.</p>
<p>Mistral's models, together with structured outputs, offer a practical way to implement the RAG Triad and combat common evaluation challenges. By setting clear criteria and a schema for values, structured outputs help you build a more reliable “LLM as a Judge” for evaluating LLM systems.</p>
<p>Structured outputs provide a natural framework for defining new, reusable criteria and enforcing a format that is easily machine-readable for evaluation, as the sketch below illustrates.</p>
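<p>Here is a minimal sketch of such a judge, grading a RAG answer on the triad with the Python SDK's structured-output helper. The schema, prompt, and 0-3 scale are illustrative choices, not the cookbook's exact code.</p>

```python
# Minimal sketch: an LLM-as-a-judge for the RAG Triad using structured outputs.
# The Pydantic schema, prompt, and 0-3 scale are illustrative choices.
import os

from mistralai import Mistral
from pydantic import BaseModel, Field

class RAGTriadGrade(BaseModel):
    context_relevance: int = Field(description="0-3: do the retrieved documents match the query?")
    groundedness: int = Field(description="0-3: is the answer supported by the retrieved context?")
    answer_relevance: int = Field(description="0-3: does the answer address the original query?")
    justification: str

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Toy inputs standing in for a real RAG pipeline's outputs.
query = "When was the Eiffel Tower completed?"
context = "The Eiffel Tower was completed in March 1889 for the World's Fair."
answer = "It was completed in 1889."

result = client.chat.parse(
    model="mistral-large-latest",
    messages=[
        {"role": "system", "content": "You are a strict judge. Grade the answer on each RAG Triad metric from 0 (worst) to 3 (best)."},
        {"role": "user", "content": f"Query: {query}\nRetrieved context: {context}\nAnswer: {answer}"},
    ],
    response_format=RAGTriadGrade,  # the SDK validates the output against the schema
)
grade = result.choices[0].message.parsed  # a RAGTriadGrade instance
print(grade.context_relevance, grade.groundedness, grade.answer_relevance)
```

<p>Averaging the three fields across an evaluation dataset then yields the aggregate triad scores described above.</p>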
<p>Evaluating LLM systems is crucial for developing reliable AI applications. The RAG Triad, along with Mistral's structured outputs, provides a robust framework for assessing LLM performance with LLM as a judge. By focusing on context relevance, groundedness, and answer relevance, we can ensure that LLM responses are more accurate and meaningful. This approach enhances the user experience and the reliability of AI-driven applications, creating more trustworthy interactions with the AI system.</p>
<p>You can find the full code for LLM as a Judge for RAG here: <a href="https://github.com/mistralai/cookbook/blob/main/mistral/evaluation/RAG_evaluation.ipynb">https://github.com/mistralai/cookbook/blob/main/mistral/evaluation/RAG_evaluation.ipynb</a></p>
<p>Interested in more custom work with the Mistral AI team? <a href="https://mistral.ai/contact">Contact us</a> for solution support.</p>
https://mistral.ai/news/llm-as-rag-judge Solutions Wed, 09 Apr 2025 00:00:00 +0000
Announcing AI for Citizens https://mistral.ai/news/ai-for-citizens
<p>Empowering countries to use AI to transform public action and catalyze innovation for the benefit of their citizens.</p>
<p>It’s clear that artificial intelligence will have a significant and lasting impact not only on companies, but also on governments and societies. However, in the rush to put AI to use, it all too often seems that AI is something that happens to people and countries, an inevitability beyond their influence that leaves them at the mercy of closed, opaque systems designed and operated by distant, behemoth corporations.</p>
<p>At Mistral AI, we’ve believed from the very beginning in <a href="https://mistral.ai/about">frontier AI in your hands</a>, i.e. in giving people and organizations the ability to shape and determine the role of AI in their future. That’s why we're excited to introduce <a href="https://mistral.ai/solutions/ai-for-citizens">AI for Citizens</a>, a collaborative initiative to help States and public institutions strategically harness AI for their people by transforming public services, catalyzing innovation, and ensuring competitiveness. We're already working with governments, defense forces, public sector agencies, and educational institutions across the world—<a href="https://mistral.ai/solutions/ai-for-citizens">including France, Luxembourg, Singapore, the Netherlands, England, Switzerland, and many more</a>—to put this vision into practice.</p>
<p>Today’s AI landscape is dominated by closed, opaque offerings with limited flexibility, encouraging a “one size fits all” approach to AI. As public institutions look for solutions to help them implement AI strategies, those offerings come with significant limitations and risks.</p>
<p>To sum it up, "one size fits all" AI offerings cannot meet the tactical or strategic needs of regions and countries.
Understanding local languages, incorporating local context, and aligning with the unique priorities of each region and country requires AI solutions designed and tailored to the specific use cases and requirements of those regions, countries, and their people. Not only that, but “one size fits all” AI offerings take control of AI's impact on their future out of the hands of countries and their citizens.</p>
<p>The AI for Citizens initiative provides an alternative to “one size fits all” AI solutions. It is grounded in the principles of openness, collaboration, choice, and autonomy, empowering countries to choose the infrastructure, technologies, and partners that best help them reach their unique objectives.</p>
<p>AI for Citizens works in close partnership with governments and local entities to build solutions tailored to local needs and goals, offering Mistral AI's technology, expertise, and experience to accelerate local AI strategies and ecosystems. This initiative provides:</p>
<p>From top-tier AI models to AI-powered chat and coding assistants, Mistral AI offers a full portfolio of technologies that enable development of AI solutions. As a leader in open technologies for AI, Mistral is uniquely able to build on open technologies to foster innovation and prevent lock-in.</p>
<p>Choose self-hosting for full control, utilize AI datacenters operated by Mistral AI and its local partners to get easier access to cutting-edge infrastructure, or take advantage of SaaS offerings and serverless APIs from Mistral AI.</p>
<p>Mistral AI solutions allow you to ensure that data is hosted within sovereign boundaries to maintain control, security, and compliance with national regulations.</p>
<p>Implement customized research and development programs, including co-training and verticalization of models optimized to meet local language, cultural, and use-case needs with enhanced capabilities.</p>
<p>Gain access to valuable roadmaps from Mistral AI to guide your development and deployment.</p>
<p>This initiative uniquely unlocks opportunities to deliver on key strategic priorities for AI.</p>
<p>Mistral AI is already working closely with countries to help deliver on these possibilities, supporting sovereign AI strategies that respect local languages, laws, and cultural values. Our work is rooted in the belief that AI should strengthen national institutions rather than make governments dependent on, and beholden to, foreign AI giants. We want to enable governments to become self-reliant with the most powerful technology of our time, delivering better services, responding to citizen needs, and preserving their unique heritage for future generations.
By investing in local expertise, secure deployments, and long-term partnerships, we aim to help nations build resilient, future-ready public systems that advance research, improve education, and protect cultural continuity in an increasingly digital world.</p>
<p>If you're looking for a partner to help you in your AI journey, we'd love to talk with you. Simply fill out this <a href="https://mistral.ai/solutions/ai-for-citizens#contact-us">form</a> to get started. For more information, visit <a href="https://mistral.ai/solutions/ai-for-citizens">mistral.ai/solutions/ai-for-citizens</a>.</p>
https://mistral.ai/news/ai-for-citizens Solutions Thu, 03 Jul 2025 00:00:00 +0000
Unlocking the potential of vision language models on satellite imagery through fine-tuning https://mistral.ai/news/unlocking-potential-vision-language-models-satellite-imagery-fine-tuning
<p>Fine-tuning foundation models is transforming how we apply AI to real-world problems. By adapting pre-trained models to specific domains, we can unlock dramatically better performance on specialized tasks. Today, we’re excited to share how fine-tuning Pixtral-12B on satellite imagery leads to significant improvements over the base model, showcasing the power of domain-specific adaptation.</p>
<p>Fine-tuning large language models can be resource-intensive, but techniques like Low-Rank Adaptation (LoRA) make it far more efficient. LoRA works by injecting small, trainable rank-decomposition matrices into the model's weights, allowing targeted adaptation without modifying the full model. It enables developers to adapt models to specific tasks, whether it's learning domain-specific vocabulary, adopting a particular tone, or embedding specialized knowledge, without retraining the entire model.</p>
<p>This method shines when prompt engineering or few-shot examples fall short. Complex prompts can become intricate and hard to maintain, often producing inconsistent results. With LoRA-based fine-tuning, a handful of curated examples can steer the model more reliably, achieving better performance with less overhead.</p>
<p>Satellite imagery is a highly specialized visual domain with critical applications across the global economy. From tracking deforestation and monitoring environmental change to detecting emerging threats, these images power high-stakes decision-making in government, agriculture, defense, and climate science. To extract reliable insights, models must be finely specialized to the unique patterns and semantics of satellite data. This is where fine-tuning Pixtral-12B comes in, bridging the gap between general-purpose vision models and domain-specific expertise.</p>
<p>To demonstrate the impact of fine-tuning, we used the Aerial Image Dataset (AID) introduced by Xia et al. under a Public Domain license. This benchmark involves classifying satellite images into detailed scene categories.
Many of these classes (such as dense residential vs. medium residential, or ambiguous terms like "center") are difficult for general vision-language models to handle without domain-specific context. Fine-tuning provides the model with that context, enabling more accurate and nuanced classification.</p>
<img src="https://cms.mistral.ai/assets/db21e46d-22c1-4144-9171-f142cd628cfb.png?width=1216&amp;height=1600"/>
<p>Note that smaller, specialized vision models could potentially achieve comparable performance levels. This article aims to guide you through the process of effectively fine-tuning Mistral’s Vision Language Model (VLM) using a straightforward example, and to demonstrate its impact on basic classification metrics. More advanced applications of fine-tuning could include interactions like "speak with an image" or generating image captions.</p>
<p>We began with a traditional classification setup, splitting the dataset into 8,000 training samples and 2,000 test samples. Using a system prompt that listed all target classes and enforced a structured output format, we achieved reasonable baseline results. However, performance varied significantly across classes, especially those with subtle visual distinctions. Additionally, because the language model isn't explicitly constrained to the label set, it occasionally hallucinated non-existent or invalid class names, highlighting the limitations of purely prompt-based approaches.</p>
<p>Classification system prompt:</p>
<p>Classify the following image into the category it belongs to. - The category labels are Desert; BareLand; RailwayStation; Mountain; Parking; River; Church; MediumResidential; Commercial; Forest; Airport; Bridge; Park; Farmland; SparseResidential; BaseballField; School; Playground; Square; Stadium; Meadow; Beach; Resort; DenseResidential; Port; StorageTanks; Pond; Viaduct; Industrial; Center. - Output your result using exclusively the following schema: {'image_description': FieldInfo(annotation=str, required=True), 'label': FieldInfo(annotation=str, required=True)} - Put your results between a json tag ```json ```</p>
<img src="https://cms.mistral.ai/assets/124bda95-d39f-47cc-bad2-7d3e824e0f94.png?width=666&amp;height=670"/>
<p>Some classes can be quite challenging to differentiate without prior knowledge or specific criteria. For example, consider the images below showing a "Playground" on the left and a "Stadium" on the right. The base Pixtral model classifies both as "Stadium." Upon closer inspection, the main difference is the presence of seats surrounding the sports field. We anticipate that fine-tuning will help capture these nuances.</p>
<img src="https://cms.mistral.ai/assets/4ec94226-782b-4ccf-a1b2-b125f00ca2d2.png?width=640&amp;height=300"/>
<p>To improve these results, we fine-tuned Pixtral-12B using Mistral’s <a href="https://docs.mistral.ai/capabilities/finetuning/text_vision_finetuning/">fine-tuning API</a>. The fine-tuning strategy consisted of providing the "assistant response" with the right label for a given system prompt and input image. No extensive hyperparameter tuning was needed here, making the process efficient and cost-effective. A sketch of the setup is shown below.</p>
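<p>Below is a rough sketch of what this setup could look like with the Python SDK: one training example in the chat format, followed by launching the job. The JSONL field layout, image URL, model identifier, and hyperparameter values are assumptions for illustration; consult the fine-tuning docs for the exact schema and accepted parameters.</p>

```python
# Sketch of the fine-tuning setup, assuming the mistralai Python SDK.
# Each training example pairs the classification system prompt and an image
# with the correct label as the assistant response. The image URL and model
# identifier below are hypothetical.
import json
import os

from mistralai import Mistral

SYSTEM_PROMPT = "Classify the following image into the category it belongs to. ..."

example = {
    "messages": [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": [
            {"type": "image_url", "image_url": "https://example.com/aid/playground_042.jpg"},
        ]},
        {"role": "assistant", "content": '{"image_description": "Sports field without surrounding seats.", "label": "Playground"}'},
    ]
}

with open("aid_train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")  # in practice, one line per training sample

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
training_file = client.files.upload(
    file={"file_name": "aid_train.jsonl", "content": open("aid_train.jsonl", "rb")},
)
job = client.fine_tuning.jobs.create(
    model="pixtral-12b-latest",  # assumed identifier for Pixtral-12B
    training_files=[{"file_id": training_file.id, "weight": 1}],
    hyperparameters={"training_steps": 100, "learning_rate": 1e-4},  # start small
)
print(job.id, job.status)
```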
<p>Another option is to launch fine-tuning jobs via the <a href="https://console.mistral.ai/build/finetuned-models">La Plateforme UI</a>.</p>
<img src="https://cms.mistral.ai/assets/295623e0-3b22-455d-8708-2d1bc35138ac.png?width=1600&amp;height=816"/>
<p>Selecting hyperparameters</p>
<p>Choosing the right hyperparameters is crucial for successful fine-tuning. Here are some tips:</p>
<p>Learning rate: Start with a small learning rate to avoid overshooting the optimal weights.</p>
<p>Batch size: Use a batch size that fits within your computational resources while providing stable gradients.</p>
<p>Epochs: Begin with a single epoch and monitor performance. Additional epochs can be added if necessary, but we recommend keeping a close eye on the risk of overfitting.</p>
<p>Note: direct calls to the Mistral fine-tuning API give you more control over the hyperparameters. On La Plateforme, you only need to provide the desired learning rate and number of epochs; the fine-tuning engine then computes the optimal batch size based on your dataset size and internal benchmarks for the optimal number of tokens per batch.</p>
<img src="https://cms.mistral.ai/assets/fc28729a-e565-425c-b36c-b5bb3b744be7.png?width=1019&amp;height=670"/>
<p>After fine-tuning, we observed a dramatic improvement in classification metrics for all classes (e.g. overall accuracy increased from 0.56 to 0.91). The model's performance became more consistent across classes, and hallucinations were significantly reduced (from 5% to 0.1%). While the results are not 100% perfect, the improvement was substantial, especially considering the limited budget (≤$10) and the relatively small number of samples (8,000 distributed over 30 classes).</p>
<img src="https://cms.mistral.ai/assets/65b67020-43c0-45fe-9116-d2961d757287.png?width=851&amp;height=672"/>
<p>Fine-tuning Pixtral-12B on satellite imagery demonstrates the effectiveness of techniques like LoRA in achieving remarkable improvements in model performance. This approach is not only cost-effective but also scalable, making it ideal for a wide range of applications. Typical examples involve highly specialized data, often proprietary, that are underrepresented in traditional VLM training sets: medical image captioning, detailed reports from surveillance images, transcription of ancient manuscripts, etc.</p>
<p>For more details on the implementation, have a look at our cookbook: <a href="https://github.com/mistralai/cookbook/blob/main/mistral/fine_tune/pixtral_finetune_on_satellite_data.ipynb">https://github.com/mistralai/cookbook/blob/main/mistral/fine_tune/pixtral_finetune_on_satellite_data.ipynb</a></p>
<p>Interested in more custom work with the Mistral AI team?
Contact us for solution support and discover how we can help you achieve your AI goals.</p>
https://mistral.ai/news/unlocking-potential-vision-language-models-satellite-imagery-fine-tuning Solutions Fri, 01 Aug 2025 00:00:00 +0000
Purr-fectly informed https://mistral.ai/news/mistral-afp
<p>Le Chat and AFP team up to deliver AI powered by news, providing Le Chat users with richer, more reliable, and more accurate responses.</p>
<p>At Mistral AI, our mission is to place AI in everyone's hands. This means democratizing access to advanced AI interfaces, making trustworthy information accessible, and ensuring that everyone can use AI with confidence and trust.</p>
<p>Today, we're thrilled to announce a significant step forward in this journey: a global partnership with Agence France-Presse (AFP), one of the leading global news agencies, known for providing fast, comprehensive, and verified coverage of the events shaping our world and of the issues affecting our daily lives.</p>
<p>Le Chat, our AI assistant, will now have access to AFP's newswire stories. This integration allows users to receive responses enriched with factual information that adheres to the highest journalistic standards.</p>
<p>In a world inundated with information, the integration of AFP's content will provide Le Chat users with richer and more accurate responses. This enhancement is particularly valuable for enterprises adopting generative AI, ensuring they have access to reliable and sourced information.</p>
<p>This partnership with AFP is just one of the many steps we're taking to ensure that our users have the best tools at their disposal. Whether you're using Le Chat for creative projects, research, or any other task, you can now do so with the confidence that your information is backed by one of the world's leading news agencies.</p>
<p>We will be rolling out the integration with AFP to all le Chat users in the coming weeks. Stay tuned for updates!</p>
https://mistral.ai/news/mistral-afp Product Thu, 16 Jan 2025 00:00:00 +0000
The all new le Chat: Your AI assistant for life and work https://mistral.ai/news/all-new-le-chat
<p>Brand new features, iOS and Android apps, and Pro, Team, and Enterprise tiers.</p>
<p>Access the latest news, plan everyday life, track projects, upload and summarize documents, and do much, much more with <a href="https://mistral.ai/en/products/le-chat">le Chat</a>.</p>
<p>We’re also introducing <a href="https://chat.mistral.ai/upgrade/plans">Pro</a> and <a href="https://chat.mistral.ai/upgrade/plans">Team</a> tiers of service, as well as an <a href="https://mistral.ai/en/products/le-chat">Enterprise</a> tier in private preview.</p>
<p>And starting today, le Chat is available on <a href="https://apps.apple.com/us/app/le-chat-by-mistral-ai/id6740410176">iOS</a>, <a href="https://play.google.com/store/apps/details?id=ai.mistral.chat">Android</a>, and soon, on private infrastructure for businesses.</p>
<p>Powered by the highest-performing, lowest-latency Mistral models and the fastest inference engines on the planet, le Chat reasons, reflects, and responds faster than any other chat assistant, at up to ~1,000 words per second. We call this feature <a href="https://help.mistral.ai/en/articles/268659-flash-answers">Flash Answers</a>, and it’s currently available in preview to all users.</p>
<p>Le Chat combines the high-quality pre-trained knowledge of Mistral models with recent information balanced across <a href="https://help.mistral.ai/en/articles/267605-web-search">web search</a>, robust <a href="https://mistral.ai/news/mistral-afp/">journalism</a>, social media, and multiple other sources to provide nuanced, evidence-based responses.</p>
<p>Le Chat’s <a href="https://help.mistral.ai/en/articles/268648-image-understanding">image</a> and <a href="https://help.mistral.ai/en/articles/268650-document-understanding">document</a> understanding is powered by the best vision and optical character recognition (OCR) models in the industry, ensuring high accuracy across complex arbitrary files such as PDFs, spreadsheets, log files, and intricate—sometimes even indecipherable—imagery.</p>
<p>We’re also introducing <a href="https://help.mistral.ai/en/articles/268657-code-interpreter">Code Interpreter</a> in le Chat, which enables users to run sandboxed code, perform scientific analysis, create visualizations, and run simulations. This feature makes le Chat a practical tool for validating algorithms and exploring data insights.</p>
<p>Le Chat’s <a href="https://help.mistral.ai/en/articles/268651-image-generation">image generation</a> is powered by Black Forest Labs Flux Ultra, currently the leading image generation model.
Use le Chat to generate anything you can imagine: from photorealistic images to shareable content and corporate creatives.</p>
<img src="https://cms.mistral.ai/assets/38eebc5f-5ade-41ed-8adc-985da65db329.png?width=1600&amp;height=515"/>
<p>Aligned with Mistral AI’s mission of democratizing AI, le Chat offers the vast majority of its features for free (latest models, journalism, image generation, document uploads, and more), with upgraded limits for power users starting at $14.99 per month in the Pro tier.</p>
<p>For enterprise use cases, we just launched the private preview of le Chat Enterprise, which allows you to deploy le Chat in your secure environment, connect it with custom tools, and use customized models that best understand your enterprise context. Please <a href="https://mistral.ai/contact/">let us know</a> if you’d like to get early access.</p>
<p>You will soon be able to plug le Chat into your work environment (documents, email, messaging systems, databases) with granular access control and create multi-step agents to automate the boring parts of your work.</p>
<p>With le Chat now available on mobile, you can take it anywhere and elevate your everyday experiences.</p>
<p>From historical facts to complex scientific concepts, le Chat provides well-reasoned, evidence-based answers with relevant context and detailed citations.</p>
<p>We’re introducing Memories (rolling out to all users soon), an opt-in feature that helps le Chat learn and remember more about you and your preferences. This enables you to use le Chat for personalized learning, tracking progress towards your goals, and rediscovering conversations and insights from way back.</p>
<p>From travel plans to to-do lists, le Chat’s comprehensive information access, combined with document and micro-app creation capabilities, helps you manage tasks, milestones, and events to stay on top of everyday life.</p>
<p>Transform your ideas into functional tools with le Chat Agents. Whether you want to streamline daily tasks, track personal finance, or automate scheduling, le Chat can help you design and deploy micro-apps that enhance your productivity and efficiency.</p>
<p>In a different country? Le Chat can help you translate. Going on a holiday? Le Chat can check the weather and help you pack appropriately. Going on a diet? Le Chat can read nutrition labels and help you meal prep.</p>
<p>Keep up with breaking news, sports scores, stock trends, global events, and hundreds of other topics. Le Chat helps you stay connected to the latest goings-on of the world, ensuring that you never miss a beat.</p>
<p>Using Memories, le Chat contextualizes recommendations with a deep understanding of user preferences.
Across books, movies, restaurants, podcasts, activities, and more, le Chat can be trusted to provide exciting and meaningful suggestions and recommendations.</p>
<p>With its advanced image generation capabilities, free-form <a href="https://help.mistral.ai/en/articles/265279-canvas">Canvas</a>, and the ability to create custom front-ends, le Chat can help you bring your imagination to life, across research mockups, essays, posters, even birthday cards.</p>
<p>With its advanced document understanding capabilities, le Chat makes it easy to upload and analyze user manuals, scientific literature, financial statements, photos, scans, and more. If it's digital, it’s likely that le Chat can understand and simplify it for you.</p>
<p>Le Chat can not only help you live life a little more fully, but also make work more enjoyable.</p>
<p>For software engineers looking for a handy copilot, le Chat now comes with advanced coding capabilities, along with improved code generation and completion. Le Chat is used by thousands of developers today for debugging, scripting, testing, code optimization, and demoing.</p>
<p>Le Chat can help you clean and preprocess data, perform statistical analysis, and generate visualizations of your data. With its ability to understand and interpret complex datasets, le Chat can assist in building predictive models and testing hypotheses. These capabilities are powered by le Chat’s new Code Interpreter feature.</p>
<p>From financial analysis to legal research, compliance posture analysis, and employee onboarding, le Chat supports a wide range of corporate functions, empowering you to help your company operate more efficiently. Le Chat can help draft contracts, inspect and verify statements, and provide insights from the corporate knowledge corpus combined with the latest information from the internet, while being mindful about privacy, moderation, and information security.</p>
<p>With a nuanced understanding of tone and voice, and access to global up-to-date information, le Chat can help you perform deep market research, craft personalized drafts for sales outreach, create compelling social media copy, and optimize content for SEO. Le Chat can also analyze and respond to customer feedback and provide insights into client behavior and preferences.</p>
<p>Le Chat can assist with meeting summarization, email management, and document generation, helping you elevate your productivity. With multi-tool task automation coming soon, le Chat will be able to help you automate tasks that require tab and tool switching, including scheduling meetings, generating to-dos, and automating follow-ups.</p>
<p>You can download le Chat on the <a href="https://apps.apple.com/us/app/le-chat-by-mistral-ai/id6740410176">App Store</a> or <a href="https://play.google.com/store/apps/details?id=ai.mistral.chat">Play Store</a>, or try it at <a href="https://chat.mistral.ai/chat">chat.mistral.ai</a>. No credit card required. Click <a href="https://mistral.ai/en/products/le-chat">here</a> to find out more about le Chat.</p>
<p>To experience le Chat without limits, try <a href="https://chat.mistral.ai/upgrade/plans">le Chat Pro</a>. The Pro tier gives you expanded access to the fastest tier of service, the latest AI capabilities, and all new features. And students get over 50% off.</p>
<p>If you would like to use le Chat with your teammates at work, try <a href="https://chat.mistral.ai/upgrade/plans">le Chat Team</a>. Le Chat Team brings unified billing and management, plus priority support from Mistral AI.</p>
<p>To be part of our private beta for le Chat Enterprise, please <a href="https://mistral.ai/contact/">contact us</a>.</p>
https://mistral.ai/news/all-new-le-chat Product Thu, 06 Feb 2025 00:00:00 +0000
Introducing Le Chat Enterprise https://mistral.ai/news/le-chat-enterprise
<p>Your Enterprise. Your AI.</p>
<p>Today, we’re proud to introduce <a href="https://mistral.ai/products/le-chat">Le Chat Enterprise</a> — a feature-rich AI assistant, powered by our brand new <a href="https://mistral.ai/news/mistral-medium-3">Mistral Medium 3 model</a>. Solving enterprise AI challenges like tool fragmentation, insecure knowledge integration, rigid models, and slow ROI, it delivers a unified AI platform for all organizational work.</p>
<p>Building on the foundation of Le Chat’s productivity tools, the new plan includes a range of new capabilities, all rolling out over the next two weeks.</p>
<p>We’re also announcing several big improvements to Le Chat Pro and Team — our plans for individuals and growing teams.</p>
<p>Le Chat Enterprise aims to provide the AI productivity your team needs in one platform that is fully private and deeply customizable. Plus, our world-class AI engineering team offers support all the way through to value delivery.</p>
<p>Empower your team to be even more productive, more competitive, more everything.</p>
<p>Transform complex tasks into achievable outcomes with AI that speaks every professional language.</p>
<p>Whether your team is analyzing data, writing code, or creating content, they can access cross-domain expertise through intuitive interfaces designed for both technical and non-technical users.</p>
<p>Unlock intelligence from your enterprise data, starting with Google Drive, SharePoint, OneDrive, Google Calendar, and Gmail.
More connectors are coming soon, including templates to build your own.</p>
<p>Get improved, personalized answers by connecting Le Chat to your knowledge.</p>
<p>Organize external data sources, documents, and web content into complete knowledge bases for the most relevant answers.</p>
<p>Preview files quickly with Auto Summary for faster consumption.</p>
<p>Le Chat enables your team to maintain a handy personal library of frequently used documents across uploaded files as well as Drive / SharePoint. Cite, extract, and analyze critical information.</p>
<p>We’re also adding MCP support soon, so your organization can easily connect Le Chat to even more enterprise systems.</p>
<p>Automate routine tasks with AI agents, connected to your apps and libraries for contextual understanding across tools. Le Chat will enable your team to easily build custom assistants that match your own requirements — no code required.</p>
<p>Deploy Le Chat anywhere: self-hosted, in your public or private cloud, or as a service hosted in the Mistral cloud. Privacy-first data connections to enterprise tools — with strict ACL adherence — ensure full data protection and safety.</p>
<p>Build your AI strategy with true flexibility — Mistral AI gives you the independence to choose your ideal infrastructure, without lock-in.</p>
<p>We offer deep customizability and full control across the stack, from the models and the platform all the way to the interfaces.</p>
<p>You can customize your AI experience through bespoke integrations with your team’s enterprise data and custom platform and model capabilities, like personalizing your assistant with stored memories. Or take it further by enabling user feedback loops for continuous model self-improvement.</p>
<p>You'll have full control of your implementation within your security domain while providing employees access to SOTA intelligence.</p>
<p>Additionally, we provide comprehensive audit logging and storage.</p>
<p>Leverage Mistral applied AI expertise to tailor models to fit your exact use case.
We provide hands-on assistance from the world’s best AI engineers and scientists across deployment, solutioning, safety, and beyond.</p>
<p>Experience frontier artificial intelligence with the Le Chat Pro, Team, and Enterprise plans, suited to your organization’s needs.</p>
<p>Le Chat Enterprise is now available in Google Cloud Marketplace, and will soon be on Azure AI and AWS Marketplace.</p>
<p><a href="https://mistral.ai/contact/">Contact us</a> to learn more about how Le Chat Enterprise can transform your organization.</p>
<p>To get started with Le Chat today, try it at <a href="http://chat.mistral.ai">chat.mistral.ai</a>, or download our mobile app from the <a href="https://apps.apple.com/us/app/le-chat-by-mistral-ai/id6740410176">App Store</a> or <a href="https://play.google.com/store/apps/details?id=ai.mistral.chat">Play Store</a> — no credit card needed.</p>
https://mistral.ai/news/le-chat-enterprise Product Wed, 07 May 2025 00:00:00 +0000
Build AI agents with the Mistral Agents API https://mistral.ai/news/agents-api
<img src="https://cms.mistral.ai/assets/f2a4b295-ff64-4c16-a42a-14f858c65766.png?width=1080&amp;height=457"/>
<p>Today we announce our new Agents API, a major step forward in making AI more capable, more useful, and an active problem-solver.</p>
<p>Traditional language models excel at generating text but are limited in their ability to perform actions or maintain context. Our new Agents API addresses these limitations by combining Mistral's powerful language models with:</p>
<p>Built-in connectors for code execution, web search, image generation, and MCP tools</p>
<p>Persistent memory across conversations</p>
<p>Agentic orchestration capabilities</p>
<p>The Agents API complements our <a href="https://docs.mistral.ai/capabilities/completion/">Chat Completion API</a> by offering a dedicated framework that simplifies implementing agentic use cases. It serves as the backbone of enterprise-grade agentic platforms.</p>
<p>By providing a reliable framework for AI agents to handle complex tasks, maintain context, and coordinate multiple actions, the Agents API enables enterprises to use AI in more practical and impactful ways.</p>
<p>Explore the diverse applications of Mistral’s Agents API across various sectors:</p>
<p>An agentic workflow built with Mistral's Agents API in which an agent interacts with GitHub and oversees a developer agent, powered by Devstral, to write code.
The agent is granted full authority over GitHub, showcasing automated software development task management.</p>
<a href="https://github.com/mistralai/cookbook/tree/main/mistral/agents/agents_api/github_agent">Read our cookbook</a>
<p>An intelligent task coordination assistant powered by our Agents API, using a multi-server MCP architecture to transform call transcripts into PRDs and actionable Linear issues, and to track project deliverables.</p>
<a href="https://github.com/mistralai/cookbook/tree/main/mistral/agents/agents_api/prd_linear_ticket">Read our cookbook</a>
<p>A financial advisory agent constructed with our Agents API, orchestrating multiple MCP servers to source financial metrics, compile insights, and archive results securely.</p>
<a href="https://github.com/mistralai/cookbook/tree/main/mistral/agents/agents_api/financial_analyst">Read our cookbook</a>
<p>A powerful AI travel assistant that helps users plan their trips, book accommodations, and manage travel needs.</p>
<a href="https://github.com/mistralai/cookbook/tree/main/mistral/agents/agents_api/travel_assistant">Read our cookbook</a>
<p>An AI-powered food diet companion designed to help users establish goals, log meals, receive personalized food suggestions, track their daily achievements, and discover dining options that align with their nutritional targets.</p>
<a href="https://github.com/mistralai/cookbook/tree/main/mistral/agents/agents_api/food_diet_companion">Read our cookbook</a>
<p>Each agent can be equipped with powerful built-in connectors, which are tools that are deployed and ready for agents to call on demand, and MCP tools:</p>
<a href="https://docs.mistral.ai/agents/connectors/code_interpreter/">Code execution</a>
<p>The Agents API can use the code execution connector, empowering developers to create agents that execute Python code in a secure sandboxed environment. This enables agents to tackle a wide range of tasks, including mathematical calculations and analysis, data visualization and plotting, and scientific computing.</p>
<a href="https://docs.mistral.ai/agents/connectors/image_generation/">Image generation</a>
<p>The image generation connector tool, powered by Black Forest Labs FLUX1.1 [pro] Ultra, enables agents to create images for diverse applications. This feature can be leveraged for various use cases such as generating visual aids for educational content, creating custom graphics for marketing materials, or even producing artistic images.</p>
<a href="https://docs.mistral.ai/agents/connectors/document_library/">Document library</a>
<p>Document Library is a built-in connector tool that enables agents to access documents from Mistral Cloud. It powers the integrated RAG functionality, strengthening agents’ knowledge by leveraging the content of user-uploaded documents.</p>
<a href="https://docs.mistral.ai/agents/connectors/websearch/">Web search</a>
<p>The Agents API offers web search as a connector, enabling developers to combine Mistral models with diverse, up-to-date information from web search, reputable news sources, and more. This integration facilitates the delivery of up-to-date, informed, evidence-supported responses.</p>
<p>Agents with web search capabilities show a significant improvement in performance. In the SimpleQA benchmark, Mistral Large and Mistral Medium with web search achieve scores of 75% and 82.32%, respectively, compared to 23% and 22.08% without web search. A sketch of a web-search agent is shown below.</p>
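<p>As a minimal sketch, assuming the Python SDK's beta Agents namespace (the agent's name, instructions, and question are illustrative), creating and querying a web-search agent could look like this:</p>

```python
# Minimal sketch: an agent with the built-in web-search connector, assuming
# the Python SDK's beta Agents namespace. Names and prompts are illustrative.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="news-agent",
    description="Answers questions using up-to-date web results.",
    instructions="Use web search to ground every answer in current sources.",
    tools=[{"type": "web_search"}],  # built-in connector, no deployment needed
)

# Start a stateful conversation; the API retains context server-side.
conversation = client.beta.conversations.start(
    agent_id=agent.id,
    inputs="What did Mistral AI announce this week?",
)
print(conversation.outputs[-1])
```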
<a href="https://docs.mistral.ai/agents/mcp/">MCP tools</a>
<p>The Agents API SDK can also leverage tools built on the Model Context Protocol (MCP)—an open, standardized protocol that enables seamless integration between agents and external systems. MCP tools provide a flexible and extensible interface for agents to access real-world context, including APIs, databases, user data, documents, and other dynamic resources. Check out the <a href="#demo-github">GitHub</a>, <a href="#demo-finance">Financial Analyst</a>, and <a href="#demo-linear">Linear</a> MCP demos to learn how to use MCP tools with Mistral Agents in action.</p>
<p>The Agents API provides robust conversation management through a flexible and stateful conversation system. Each conversation retains its context, allowing for seamless and coherent interactions over time.</p>
<a href="https://docs.mistral.ai/agents/agents_basics/#conversations">Conversation management</a>
<p>There are two ways to start a conversation: with a previously created agent, or directly with a model. Each conversation maintains a structured history through conversation entries, ensuring that context is preserved across interactions.</p>
<a href="https://docs.mistral.ai/agents/agents_basics/#continue-a-conversation-working">Stateful interactions and conversation branching</a>
<p>Developers are no longer required to track conversation history themselves; they can view past conversations, continue any conversation, or initiate new conversation paths from any point.</p>
<a href="https://docs.mistral.ai/agents/agents_basics/#streaming-output-working">Streaming output</a>
<p>The API also supports streaming outputs, both when starting a conversation and when continuing a previous one. This feature allows for real-time updates and interactions.</p>
<p>The true power of our Agents API lies in its ability to orchestrate multiple agents to solve complex problems. Through dynamic orchestration, agents can be added to or removed from a conversation as needed—each one contributing its unique capabilities to tackle different parts of a problem.</p>
<a href="https://docs.mistral.ai/agents/handoffs/#create-an-agentic-workflow">Creating an agentic workflow</a>
<p>To build a workflow with handoffs, start by creating all the necessary agents. You can create as many agents as needed, each with specific tools and models, to form a tailored workflow.</p>
<a href="https://docs.mistral.ai/agents/handoffs/">Agent handoffs</a>
<p>Once agents are created, define which agents can hand off tasks to others. For example, a finance agent might delegate tasks to a web search agent or a calculator agent based on the conversation's needs.</p>
<p>Handoffs enable a seamless chain of actions. A single request can trigger tasks across multiple agents, each handling specific parts of the request. This collaborative approach allows for efficient and effective problem-solving, unlocking powerful possibilities for real-world applications. A sketch of a handoff chain is shown below.</p>
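<p>A handoff chain could be sketched as follows, again assuming the Python SDK's beta Agents namespace; the models, agent names, and example request are illustrative:</p>

```python
# Sketch of an agent handoff chain, assuming the Python SDK's beta Agents
# namespace. Models, names, and the example request are illustrative.
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

websearch_agent = client.beta.agents.create(
    model="mistral-medium-latest",
    name="websearch-agent",
    instructions="Look up current facts and figures on the web.",
    tools=[{"type": "web_search"}],
)
finance_agent = client.beta.agents.create(
    model="mistral-large-latest",
    name="finance-agent",
    instructions="Answer financial questions; delegate fact lookups.",
)

# Declare which agents the finance agent may hand tasks off to.
finance_agent = client.beta.agents.update(
    agent_id=finance_agent.id,
    handoffs=[websearch_agent.id],
)

# A single request can now trigger work across both agents.
conversation = client.beta.conversations.start(
    agent_id=finance_agent.id,
    inputs="Compare the current EUR/USD rate with last month's average.",
)
print(conversation.outputs[-1])
```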
<p>To get started, check out our <a href="https://docs.mistral.ai/agents/agents_introduction">docs</a>, create your first agent, and start building!</p> https://mistral.ai/news/agents-api Product Tue, 27 May 2025 00:00:00 +0000 Introducing Mistral Code https://mistral.ai/news/mistral-code Introducing Mistral Code | Mistral AI <p>Software engineering teams in enterprises can finally bring frontier-grade AI coding into their workflow in a secure, compliant manner. Mistral Code is an AI-powered coding assistant that bundles powerful models, an in-IDE assistant, local deployment options, and enterprise tooling into one fully supported package, so developers can 10X their productivity with the full backing of their IT and security teams.</p> <p>Mistral Code builds on the proven open-source project <a href="https://github.com/continuedev/continue">Continue</a>, reinforced with the controls and observability that large enterprises require. Private beta is open today for <a href="https://plugins.jetbrains.com/plugin/27493-mistral-code-enterprise">JetBrains IDEs</a> and <a href="https://marketplace.visualstudio.com/items?itemName=mistralai.mistral-code">VSCode</a>, with general availability planned soon. Mistral Code is a continuation of our efforts to make developers successful with AI, following last month’s releases of <a href="https://mistral.ai/news/devstral">Devstral</a> and <a href="https://mistral.ai/news/codestral-embed">Codestral Embed</a>.</p> <img src="https://cms.mistral.ai/assets/6b71f3c5-d753-41b2-8ac5-6ec41534f0f0.png?width=1078&amp;height=348"/> <p>Our goal with Mistral Code is simple: deliver best-in-class coding models to enterprise developers, enabling everything from instant completions to multi-step refactoring—through an integrated platform deployable in the cloud, on reserved capacity, or air-gapped on-prem GPUs.</p> <p>Unlike typical SaaS copilots, all parts of the stack—from models to code—are delivered by one provider subject to a single set of SLAs, and every line of code resides inside the customer’s enterprise boundary.</p>
<p>When we spoke with VPs of engineering, platform leads, and CISOs, they surfaced four recurring blockers that stop mainstream copilots at the proof-of-concept stage:</p> <p>Limited connectivity to proprietary repos and internal services.</p> <p>Minimal customisation of the underlying models or prompts.</p> <p>Shallow task coverage that ends at “autocomplete” instead of finishing multi-step work.</p> <p>Fragmented SLAs spread across one vendor for the plug-in, another for the model, and a third for infra.</p> <p>Mistral Code addresses those pain points with a single, vertically integrated offering: models, plugin, admin controls, and 24x7 support—so platform teams retain visibility and can tie AI-powered productivity back to ROI.</p> <img src="https://cms.mistral.ai/assets/0c2975a6-0964-4a39-9af7-6cabb8bec71e.png?width=1454&amp;height=531"/> <p>At its core, Mistral Code is powered by four models that are state of the art in coding:</p> <p><a href="https://mistral.ai/news/codestral">Codestral</a> for fill-in-the-middle / code autocomplete</p> <p><a href="https://mistral.ai/news/codestral-embed">Codestral Embed</a> for code search and retrieval</p> <p><a href="https://mistral.ai/news/devstral">Devstral</a> for agentic coding</p> <p>And <a href="https://mistral.ai/news/mistral-medium-3">Mistral Medium</a> for chat assistance</p>
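<p>To make the fill-in-the-middle idea concrete, here is a small request against the public API using the mistralai Python SDK and the generic codestral-latest alias; this stands in for what the IDE plugin sends on every completion, and the exact field names should be checked against the FIM docs.</p>

import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Fill-in-the-middle: the model completes the gap between prompt and suffix,
# which is the shape an in-IDE autocomplete request takes.
response = client.fim.complete(
    model="codestral-latest",
    prompt="def is_palindrome(s: str) -> bool:\n    ",
    suffix="\n\nprint(is_palindrome('level'))",
)

print(response.choices[0].message.content)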
<a href="https://www.groupe-sncf.com/en">SNCF</a>, France's national railway company, is empowering its 4000 developers with AI through Mistral Code Serverless, and <a href="https://www.capgemini.com/news/press-releases/capgemini-partners-with-mistral-ai-to-spearhead-the-adoption-of-new-frontier-generative-ai-models/">Capgemini</a>, our first global systems-integrator partner, is to deploy Mistral Code on-premises for 1500+ developers in the service of client projects in regulated industries.</p> <a href="https://www.abanca.com/es/">Abanca</a> <a href="https://www.groupe-sncf.com/en">SNCF</a> <a href="https://www.capgemini.com/news/press-releases/capgemini-partners-with-mistral-ai-to-spearhead-the-adoption-of-new-frontier-generative-ai-models/">Capgemini</a> <img src="https://cms.mistral.ai/assets/46ca04b7-f62f-4855-b888-2c9ff09f90ce.png?width=1054&amp;height=937"/> <p>We owe a debt of gratitude to the Continue community for pioneering the original developer experience; our fork preserves their extensibility but layers on significant improvements to multi-line editing, chat, and enterprise-grade extras such as fine-grained RBAC, audit logging, issue resolution / suggestion acceptance metrics, and so on. While our initial release is a private beta to collect customer feedback, we are excited to announce general availability soon, and aim to start making contributions to the upstream repo in forthcoming releases. </p> <p><a href="https://mistral.ai/contact">Request access</a> from your Mistral account team to spin up a pilot. You can choose serverless, cloud, or self-hosted deployment—and get coding with frontier intelligence in minutes.</p> <a href="https://mistral.ai/contact">Request access</a> <a href="https://mistral.ai/en/news">News</a> <a href="https://mistral.ai/en/models">Models</a> <a href="https://mistral.ai/en/services">AI Services</a> <p>The next chapter of AI is yours.</p> <a href="https://chat.mistral.ai/">Try le Chat </a> <a href="https://console.mistral.ai/">Build on AI Studio </a> <a href="https://mistral.ai/contact">Talk to an expert </a> https://mistral.ai/news/mistral-code Product Wed, 04 Jun 2025 00:00:00 +0000 Le Chat dives deep. https://mistral.ai/news/le-chat-dives-deep Introducing Deep Research (Preview), plus Audio-in, Projects, and other updates. <p>Introducing Deep Research (Preview), plus Audio-in, Projects, and other updates.</p> <p>AI assistants are at their best when they let you go deeper in your thinking, keep your conversation flowing, and maintain contextual continuity. Today, we’re making Le Chat even more capable, more intuitive — and more fun — with a collection of powerful new features to help you research more thoroughly, express yourself more naturally, and keep your interactions — text, voice, and images — organized in context.</p> <p>Deep Research (Preview) mode: Lightning fast, structured research reports on even the most complex topics.</p> <p>Voice mode: Talk to Le Chat instead of typing with our new Voxtral model.</p> <p>Natively multilingual reasoning: Tap into thoughtful answers, powered by our <a href="https://mistral.ai/news/magistral">reasoning model</a> — Magistral.</p> <a href="https://mistral.ai/news/magistral">reasoning model</a> <p>Projects: Organize your conversations into context-rich folders.</p> <p>Advanced image editing directly in Le Chat, in partnership with Black Forest Labs.</p> <p>Research mode turns Le Chat into a coordinated research assistant that can plan, clarify your needs, search, and synthesize. 
Ask a meaty question, and it will break it down, gather credible sources, and build a structured, reference-backed report that’s easy to follow.</p> <p>It’s powered by our tool-augmented Deep Research agent (in preview), but designed to feel simple, transparent, and genuinely helpful, as though you’re collaborating with a well-organized research partner.</p> <p>Voice mode lets you talk to Le Chat just like you’d talk to a person — no typing needed. You can brainstorm ideas while walking, get quick answers while running errands, or transcribe a meeting. It’s powered by our new voice-in model, Voxtral, and built for natural, low-latency speech recognition that keeps up with your pace.</p> <p>Think mode helps you reason through complex questions with clear, thoughtful answers, powered by our reasoning model — Magistral. It’s great for drafting a proposal in Spanish, clarifying a legal concept in Japanese, or just thinking things through in whatever language you’re most comfortable with. Le Chat can also code-switch between languages mid-sentence.</p> <p>Projects help you stay organized by grouping related chats into focused, context-rich spaces. Each project has its own default Library and remembers which tools and settings you’ve enabled, so things stay the way you left them. You can upload files directly into a project, pull in content from your Libraries, and keep everything — conversations, documents, and ideas — in one place. Perfect for planning a move, designing a new product feature, or keeping long-running workstreams on track.</p> <img src="https://cms.mistral.ai/assets/6e528d1e-dcf8-42ef-94ec-81eedf407278.png?width=3358&amp;height=1888"/> <p>Unlike typical text-to-image tools, Le Chat lets you create and then edit your images with simple prompts like “remove the object” or “place me in another city”, and the model transforms the scene while preserving characters and detail. It’s ideal for making consistent edits across a series, keeping people, objects, and design elements recognizable from one image to the next.</p> <img src="https://cms.mistral.ai/assets/a552d04b-f239-4151-b35a-034446ba6fc1.png?width=1658&amp;height=471"/> <p>You can try out all of these new features in Le Chat at <a href="http://chat.mistral.ai">chat.mistral.ai</a>, or by downloading the mobile app from the App Store or Play Store. No credit card needed.</p> <p><a href="https://mistral.ai/contact">Reach out to us</a> to discover how Le Chat Enterprise can transform your organization.</p> <p>If you’re interested in joining us on our mission to build world-class AI products, we welcome your application to <a href="https://mistral.ai/careers">join our team</a>!</p> https://mistral.ai/news/le-chat-dives-deep Product Thu, 17 Jul 2025 00:00:00 +0000 Le Chat. Custom MCP connectors. Memories. https://mistral.ai/news/le-chat-mcp-connectors-memories Le Chat now integrates with 20+ enterprise platforms—powered by MCP—and remembers what matters with Memories.
<p>Le Chat now integrates with 20+ enterprise platforms—powered by MCP—and remembers what matters with Memories.</p> <p>The widest enterprise-ready <a href="https://chat.mistral.ai/connections">connector directory (beta)</a>, with custom extensibility, making it easy to bring your workflows into your AI assistant.</p> <p>Directory of 20+ secure connectors—spanning data, productivity, development, automation, commerce, and custom integrations. Search, summarize, and act in tools like Databricks, Snowflake, GitHub, Atlassian, Asana, Outlook, Box, Stripe, Zapier, and more.</p> <p>Custom extensibility: Add your own MCP connectors to broaden coverage and drive more precise actions and insights.</p> <p>Flexible deployment: Run on mobile, in your browser, or deploy on-premises or in your cloud.</p> <p>Context that carries: introducing <a href="https://chat.mistral.ai/memories">Memories (beta)</a>.</p> <p>Highly personalized responses based on your preferences and facts.</p> <p>Careful and reliable memory handling: saves what matters, skips sensitive or fleeting info.</p> <p>Complete control over what to store, edit, or delete.</p> <p>And… fast import of your memories from ChatGPT.</p> <p>Everything available on the Free plan.</p> <p>Today, we’re releasing 20+ secure, MCP-powered connectors in Le Chat, enabling you to search, summarize, and take action in your business-critical tools. Le Chat’s connector directory spans essential categories, simplifying how you integrate your workflows in chats.</p> <img src="https://cms.mistral.ai/assets/92e9c189-28ba-4d83-b8c9-058850e1ad30.png?width=1920&amp;height=1920"/> <p>The new-look <a href="https://chat.mistral.ai/connections">Connectors directory</a> opens direct pipelines into enterprise tools, turning Le Chat into a single surface for data, documents, and actions.</p>
</p> <a href="https://chat.mistral.ai/connections">Connectors directory</a> <img src="https://cms.mistral.ai/assets/91cf7ae4-1e23-4557-ba81-11a9f1153f45.svg"/> <p>Summarizing customer reviews in Databricks, then raising a ticket in Asana to address the top issues.</p> <img src="https://cms.mistral.ai/assets/231fb518-99d4-4f85-84bd-0fba406c78f9.svg?width=null&amp;height=null"/> <p>Reviewing open pull requests in GitHub, then creating Jira issues for follow-up and documenting the changes in Notion.</p> <img src="https://cms.mistral.ai/assets/4a059f07-6386-4859-9cc1-7f7dcc9788d5.svg?width=null&amp;height=null"/> <p>Comparing financial obligations across legal documents in Box, then uploading a concise summary back into Box.</p> <img src="https://cms.mistral.ai/assets/a61455d1-664c-4ef8-9e89-9552df4191b2.svg?width=null&amp;height=null"/> <p>Summarizing active issues from Jira, then drafting a Confluence sprint overview page for team planning.</p> <img src="https://cms.mistral.ai/assets/03bc7129-9871-432a-9bed-02209a47af32.svg?width=null&amp;height=null"/> <p>Retrieving business payment insights from Stripe, then logging anomalies as a development project and task in Linear.</p> <p>Learn more about Connectors in our <a href="https://help.mistral.ai/en/collections/911943-connectors">Help Center</a>.</p> <a href="https://help.mistral.ai/en/collections/911943-connectors">Help Center</a> <p>For everything else, you can now <a href="https://help.mistral.ai/en/articles/393572-configuring-a-custom-connector">connect to any remote MCP server</a> of choice—even if it’s not listed in the <a href="https://chat.mistral.ai/connections">Connectors directory</a>—to query, cross-reference, and perform actions on any tool in your stack.</p> <a href="https://help.mistral.ai/en/articles/393572-configuring-a-custom-connector">connect to any remote MCP server</a> <a href="https://chat.mistral.ai/connections">Connectors directory</a> <p>Admin users can confidently control which connectors are available to whom in their organization, with on-behalf authentication, ensuring users only access data they’re permitted to.</p> <p>Deploy Le Chat your way—self-hosted, in your private or public cloud, or as a fully managed service in the Mistral Cloud. <a href="https://mistral.ai/contact">Talk to our team</a> about enterprise deployments.</p> <a href="https://mistral.ai/contact">Talk to our team</a> <p>Memories in Le Chat carry your context across conversations, retrieving insights, decisions, and references from the past when needed. 
<p>Admin users can confidently control which connectors are available to whom in their organization, with on-behalf-of authentication ensuring users only access data they’re permitted to see.</p> <p>Deploy Le Chat your way—self-hosted, in your private or public cloud, or as a fully managed service in the Mistral Cloud. <a href="https://mistral.ai/contact">Talk to our team</a> about enterprise deployments.</p> <p>Memories in Le Chat carry your context across conversations, retrieving insights, decisions, and references from the past when needed. They power more relevant responses, adaptive recommendations tailored to you, and richer answers infused with the specifics of your work—delivering a faster, more relevant, and fully personalized experience.</p> <p>Memories score high in our evaluations for accuracy and reliability: saving what’s important, avoiding forbidden or sensitive inferences, ignoring ephemeral content, and retrieving the right information without hallucinations.</p> <img src="https://cms.mistral.ai/assets/a0e581b9-125b-4ec7-b03a-9fea8eacbd63.png?width=1600&amp;height=1200"/> <p>Most importantly, you stay in full control—add, edit, update, or remove any entry at any time, with <a href="https://help.mistral.ai/en/articles/396497-how-do-you-handle-my-data-when-using-the-memories-feature">clear privacy settings</a> and selective memory handling you can trust.</p> <p>Both Connectors and Memories are available to all Le Chat users.</p> <p>Try out the new features at <a href="http://chat.mistral.ai">chat.mistral.ai</a>, or by downloading the Le Chat by Mistral AI mobile app from the <a href="https://apps.apple.com/fr/app/le-chat-by-mistral-ai/id6740410176">App Store</a> or <a href="https://play.google.com/store/apps/details?id=ai.mistral.chat&amp;hl=fr">Google Play Store</a>, for free; no credit card needed.</p> <p><a href="https://mistral.ai/contact">Reach out to us</a> to learn how Le Chat Enterprise can transform your mission-critical work.</p> <p>Join our webinar on September 9 to dive into Le Chat’s new MCP capabilities with the Mistral team. Learn key insights, ask your questions, and prepare to build cutting-edge projects—all before the hackathon begins.</p> <a href="https://www.linkedin.com/events/7368644665111158785/">Sign up now.</a> <p>Gather with the best AI engineers for a 2-day overnight hackathon (Sep. 13-14) and turn ideas into reality using your custom MCPs in Le Chat. Network with peers, get hands-on guidance from Mistral experts, and push the boundaries of what’s possible.</p> <a href="https://cerebralvalley.ai/e/mistral-mcp-hackathon">Join us.</a> <p>If you’re interested in joining us on our mission to build world-class AI products, we welcome your application to <a href="https://mistral.ai/careers">join our team</a>!</p> https://mistral.ai/news/le-chat-mcp-connectors-memories Product Tue, 02 Sep 2025 00:00:00 +0000 Make Memory work for you.
https://mistral.ai/news/memory Designing transparency and control into AI recall. <p>Designing transparency and control into AI recall.</p> <p>As conversational AIs get more capable, our expectations grow with them. We don’t just want faster answers; we want tools that remember, adapt, and fit the way we work. That’s where <a href="https://chat.mistral.ai/memories">Memories (beta)</a> come in. And with them come new questions: What should an AI remember? How should it recall? And what does it take for you to trust it?</p> <p>When we talked to users across personal and professional settings, three consistent needs stood out: transparency, control, and focus. People want the ability to ask, “What do you know about me?” and get a clear answer. They also want memory to stay on-task. As one Le Chat user put it: “I need a hammer, not a friend.”</p> <p>For some, that means citations: show me exactly which conversation or file a response is pulling from. For others, it means scoped recall: project-specific details, not casual asides from months ago. Whatever the preference, one thing is clear: memory should stay visible, editable, and under your control.</p> <p>Many AI tools today store information automatically. Some resurface it without warning; others only recall when you explicitly ask.</p> <p>Le Chat takes a hybrid approach. It saves useful information automatically, like jotting down a note while you talk. But recall is designed to be smart, timely, and visible. You’ll always see what memory is in play, with links to the source.</p> <p>That design comes from a simple philosophy: an AI assistant should help you think better, not leave you guessing what it’s doing. Here’s how we put that into practice.</p> <p>You’ll always know when memory is being used. Le Chat clearly shows when it’s recalling something, where it came from, and why it’s relevant. Think of it like clickable receipts.</p> <p>Memory is something you manage, not something that manages you.</p> <p>You own your memories. Export them. Import from elsewhere. Memories in Le Chat are portable and interoperable by design, because control shouldn’t stop at the interface.</p> <p>Memory makes your assistant more useful over time, without getting in the way. That can look like:</p> <p>Recalling how you solved a similar problem last quarter</p> <p>Surfacing a past insight you’d forgotten</p> <p>Connecting your current query to something you said in another thread</p> <p>One user asked a follow-up about a legal policy weeks after uploading a PDF. Le Chat instantly found the right section in the document, without needing a re-upload or re-explanation. That kind of connective recall saves time and unlocks flow. No retraining. No do-overs.</p> <p>We introduced <a href="https://chat.mistral.ai/memories">Memory Insights</a>, lightweight prompts that help you explore what Le Chat remembers and how it can help. They surface trends, suggest summaries, and point out moments worth revisiting, all based on your own data, and all editable. It’s a simple way to turn memory from passive storage into active signal.
Download Le Chat on the <a href="https://apps.apple.com/fr/app/le-chat-by-mistral-ai/id6740410176">App Store</a> or <a href="https://play.google.com/store/apps/details?id=ai.mistral.chat&amp;hl=fr">Google Play</a> to try memory on mobile.</p> <p>We’re continuing to improve how memory works: trimming noise, speeding up recall, and making it easier to organize long-term. You’ll see updates soon that let you sort memories into categories, instantly forget something, and get clearer visibility into which memory was used and when.</p> <p>Under the hood, we’ve built a graph-based architecture that balances performance with context-awareness, so memory doesn’t just get longer, it gets smarter. AI is still early, and models will change fast. Memories are what keep your assistant anchored in your context, even as everything else shifts. Not a feature. Not a friend. A system you can trust, and one that grows with you.</p> https://mistral.ai/news/memory Product Tue, 02 Sep 2025 00:00:00 +0000 Introducing Mistral AI Studio. https://mistral.ai/news/ai-studio The Production AI Platform. <p>The Production AI Platform.</p> <img src="https://mistral.ai/_next/image?url=%2Fimg%2Fai-studio%2Fai-studio-ui.svg&amp;w=1920&amp;q=75&amp;dpl=07108aa8fed6d71e441bef7dec41bdd1f4e09fe61ccf552aeee47ac38f755716363934366365343962616637383730303038623261653938"/> <p>Enterprise AI teams have built dozens of prototypes—copilots, chat interfaces, summarization tools, internal Q&amp;A. The models are capable, the use cases are clear, and the business appetite is there.</p> <p>What’s missing is a reliable path to production and a robust system to support it. Teams are blocked not by model performance, but by the inability to:</p> <p>Track how outputs change across model or prompt versions</p> <p>Reproduce results or explain regressions</p> <p>Monitor real usage and collect structured feedback</p> <p>Run evaluations tied to their own domain-specific benchmarks</p> <p>Fine-tune models using proprietary data, privately and incrementally</p> <p>Deploy governed workflows that satisfy security, compliance, and privacy constraints</p> <p>As a result, most AI adoption stalls at the prototype stage. Models get hardcoded into apps without evaluation harnesses. Prompts get tuned manually in Notion docs. Deployments run as one-off scripts. And it’s difficult to tell whether accuracy improved or got worse.
There’s a gap between the pace of experimentation and the maturity of production primitives.</p> <p>In talking to hundreds of enterprise customers, we have found that the real bottleneck is the lack of a system for turning AI into a reliable, observable, and governed capability.</p> <p>Operationalizing AI therefore requires infrastructure that supports continuous improvement, safety, and control—at the speed AI workflows demand.</p> <p>The core requirements we consistently hear from enterprise AI teams include:</p> <p>Built-in evaluation: Internal benchmarks that reflect business-specific success criteria (not generic leaderboard metrics).</p> <p>Traceable feedback loops: A way to collect real usage data, label it, and turn it into datasets that drive the next iteration.</p> <p>Provenance and versioning: Across prompts, models, datasets, and judges, with the ability to compare iterations, track regressions, and revert safely.</p> <p>Governance: Built-in audit trails, access controls, and environment boundaries that meet enterprise security and compliance standards.</p> <p>Flexible deployment: The ability to run AI workflows close to their systems, across hybrid, VPC, or on-prem infrastructure, and to migrate between them without re-architecting.</p> <p>Today, most teams build this piecemeal. They repurpose tools meant for DevOps, MLOps, or experimentation. But the LLM stack has new abstractions: prompts ship daily, models change weekly, and evaluation is real-time and use-case-specific.</p> <p>Closing that loop from prompts to production is what separates teams that experiment with AI from those that run it as a dependable system.</p> <p>Mistral AI Studio brings the same infrastructure, observability, and operational discipline that power Mistral’s own large-scale systems—now packaged for enterprise teams that need to build, evaluate, and run AI in production.</p> <img src="https://cms.mistral.ai/assets/59272ab1-8e3c-45ea-bf60-781df15af5e9.png?width=1616&amp;height=828"/> <p>At Mistral AI, we operate AI systems that serve millions of users across complex workloads. Building and maintaining those systems required us to solve the hard problems: how to instrument feedback loops at scale, measure quality reliably, retrain and deploy safely, and maintain governance across distributed environments.</p> <p>AI Studio productizes those solutions. It captures the primitives that make production AI systems sustainable and repeatable—the ability to observe, to execute durably, and to govern. Those primitives form the three pillars of the platform: Observability, Agent Runtime, and AI Registry.</p> <p>Observability in AI Studio provides full visibility into what’s happening, why, and how to improve it. The Explorer lets teams filter and inspect traffic, build datasets, and identify regressions. Judges, which can be built and tested in their own Judge Playground, define evaluation logic and score outputs at scale; the sketch below shows the underlying pattern. Campaigns and Datasets automatically convert production interactions into curated evaluation sets. Experiments, Iterations, and Dashboards make improvement measurable, not anecdotal.</p> <p>With these capabilities, AI builder teams can trace outcomes back to prompts, prompts back to versions, and versions back to real usage—closing the feedback loop with data, not intuition.</p> <img src="https://cms.mistral.ai/assets/4a877207-6fbc-46d8-85d4-549a4225e76a.webp?width=1920&amp;height=1920"/>
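<p>The Judge Playground itself is a product surface rather than a public API in this post, but the pattern underneath is LLM-as-judge scoring. Purely as an illustration, here is a minimal version built on the standard chat API with the mistralai Python SDK; the rubric, model choice, and JSON shape are our own assumptions.</p>

import json
import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

RUBRIC = (
    "You are an evaluation judge. Score the answer to the question for "
    'factual accuracy from 1 to 5. Reply as JSON: {"score": int, "reason": str}.'
)

def judge(question: str, answer: str) -> dict:
    # One model grades another model's output against a fixed rubric.
    response = client.chat.complete(
        model="mistral-medium-latest",
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

print(judge("What is the capital of Australia?", "Sydney"))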
<p>The Agent Runtime is the execution backbone of AI Studio. It runs every agent, from simple single-step tasks to complex multi-step business flows, with durability, transparency, and reproducibility.</p> <p>Each agent operates inside a stateful, fault-tolerant runtime built on Temporal, which guarantees consistent behavior across retries, long-running tasks, and chained calls. The runtime manages large payloads, offloads documents to object storage, and generates static graphs that make execution paths auditable and easy to share.</p>
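<p>Temporal's programming model is what supplies that durability: workflow state is persisted, and a failed step is retried without rerunning the steps that already completed. The sketch below is a generic illustration of that pattern using the temporalio Python SDK, not AI Studio's actual code, and it assumes a Temporal server listening on localhost:7233.</p>

import asyncio
from datetime import timedelta

from temporalio import activity, workflow
from temporalio.client import Client
from temporalio.worker import Worker

@activity.defn
async def call_model(prompt: str) -> str:
    # Stand-in for a model or tool call; Temporal retries it on failure.
    return f"echo: {prompt}"

@workflow.defn
class AgentStep:
    @workflow.run
    async def run(self, prompt: str) -> str:
        # Workflow progress is persisted, so a crash resumes here
        # instead of restarting the whole flow.
        return await workflow.execute_activity(
            call_model, prompt, start_to_close_timeout=timedelta(minutes=5)
        )

async def main() -> None:
    client = await Client.connect("localhost:7233")
    async with Worker(
        client, task_queue="agents", workflows=[AgentStep], activities=[call_model]
    ):
        result = await client.execute_workflow(
            AgentStep.run, "summarize Q3 numbers", id="agent-step-1", task_queue="agents"
        )
        print(result)

if __name__ == "__main__":
    asyncio.run(main())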
<p>Every execution emits telemetry and evaluation data that flow directly into Observability for measurement and governance. AI Studio supports hybrid, dedicated, and self-hosted deployments, so enterprises can run agents wherever their infrastructure requires while maintaining the same durability, traceability, and control.</p> <img src="https://cms.mistral.ai/assets/9493d519-7c5c-48df-97e9-371e8313117e.webp?width=1920&amp;height=1920"/> <p>The AI Registry is the system of record for every asset across the AI lifecycle—agents, models, datasets, judges, tools, and workflows. It tracks lineage, ownership, and versioning end to end. The Registry enforces access controls, moderation policies, and promotion gates before deployment. It integrates directly with Observability (for metrics and evaluations) and with the Agent Runtime (for orchestration and deployment). This unified view enables true governance and reuse: every asset is discoverable, auditable, and portable across environments.</p> <img src="https://cms.mistral.ai/assets/546449b9-c5d4-404f-914a-b07b82fb82f3.webp?width=1920&amp;height=1920"/> <p>Together, these pillars form the production fabric for enterprise AI. AI Studio connects creation, observation, and governance into a single operational loop—the same system discipline that lets Mistral run AI at scale, now in the hands of enterprise teams.</p> <p>Enterprises are entering a new phase of AI adoption. The challenge is no longer access to capable-enough models—it’s the ability to operate them reliably, safely, and at scale. That shift demands production infrastructure built for observability, durability, and governance from day one.</p> <p>Mistral AI Studio represents that next step: a platform born from real operational experience, designed for teams that want to move past pilots and run AI as a core system. It unifies the three production pillars—Observability, Agent Runtime, and AI Registry—into one closed loop where every improvement is measurable and every deployment accountable.</p> <p>With AI Studio, enterprises gain the same production discipline that powers Mistral’s own large-scale systems:</p> <p>Transparent feedback loops and continuous evaluation</p> <p>Durable, reproducible workflows across environments</p> <p>Unified governance and asset traceability</p> <p>Hybrid and self-hosted deployment with full data ownership</p> <p>This is how AI moves from experimentation to dependable operations—secure, observable, and under your control.</p> <p>If your organization is ready to operationalize AI with the same rigor as software systems, sign up for the private beta of AI Studio.</p> <p>Go to production with Mistral AI Studio.</p> https://mistral.ai/news/ai-studio Product Fri, 24 Oct 2025 00:00:00 +0000 Introducing Mistral OCR 3 https://mistral.ai/news/mistral-ocr-3 Achieving a new frontier for both accuracy and efficiency in document processing. <p>Achieving a new frontier for both accuracy and efficiency in document processing.</p> <img src="https://mistral.ai/_next/image?url=%2Fimg%2Focr-2%2Fhero-letter.png&amp;w=1920&amp;q=75&amp;dpl=07108aa8fed6d71e441bef7dec41bdd1f4e09fe61ccf552aeee47ac38f755716363934366365343962616637383730303038623261653938"/> <p>Just had dinner. Did not get home until nearly 8 pm. as I am now very busy at the office. Westcott came today and is trying to raise money at last minute. I have to hand over balance of work to the liquidators &amp; also finish off books before shipping them to N. York tomorrow. Glad to say it rained heavily the whole day yesterday, which kept things quiet politically, but of course, it was rotten getting to office back. Went to bed at 9-20 pm. I am not going out tonight. Will martial law, but things look better today as the teams are running &amp; the P.O. is open &amp; I can post this tomorrow. Will be out all day tomorrow as I have invited 6 Chinese &amp; Mr Westcott to tiffin. Will go to Eddie's Cafe on Broadway as I believe it is good &amp; has music. At 6 pm. I am invited to a Chinese dinner which M. H. is giving at his home for me. I bought some socks to-day &amp; studs for shirt. Just thought on - I gave your empty ear-rings to Armenian shop to get Ural stones put in, but he was not able to go to town last week, so perhaps he has now been &amp; I shall take a walk there now &amp; get them back. Don't expect he has got any to fit.</p> <p>Breakthrough performance: 74% overall win rate over Mistral OCR 2 on forms, scanned documents, complex tables, and handwriting.</p> <p>State-of-the-art accuracy, outperforming both enterprise document processing solutions and AI-native OCR solutions.</p> <p>Now powers the Document AI Playground in <a href="https://console.mistral.ai/build/document-ai/ocr-playground">Mistral AI Studio</a>, a simple drag-and-drop interface for parsing PDFs and images into clean text or structured JSON.</p> <p>Major upgrade over Mistral OCR 2 on forms, handwritten content, low-quality scans, and tables.</p> <p>Mistral OCR 3 is designed to extract text and embedded images from a wide range of documents with exceptional fidelity. It supports markdown output enriched with HTML-based table reconstruction, enabling downstream systems to understand not just document content but also structure. As a much smaller model than most competing solutions, it is available at an industry-leading price of $2 per 1,000 pages, with a 50% Batch API discount reducing the cost to $1 per 1,000 pages.</p> <p>Developers can integrate the model (mistral-ocr-2512) via the API, and users can leverage Document AI, a UI that parses documents into text or structured JSON instantly.</p> <a href="https://console.mistral.ai/build/document-ai/ocr-playground">Visit playground</a>
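<p>A minimal API call, assuming the mistralai Python SDK and the model id quoted above; the document payload shape follows the current Document AI docs, and the URL is a hypothetical placeholder.</p>

import os

from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# OCR a publicly reachable PDF; pages come back as markdown, with tables
# reconstructed as HTML and embedded images optionally returned as base64.
response = client.ocr.process(
    model="mistral-ocr-2512",
    document={
        "type": "document_url",
        "document_url": "https://example.com/annual-report.pdf",  # hypothetical URL
    },
    include_image_base64=False,
)

for page in response.pages:
    print(page.markdown)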
<img src="https://mistral.ai/_next/image?url=%2Fimg%2Focr-2%2Fdemos%2Fcomplex_table_na_v2.jpg&amp;w=1200&amp;q=75&amp;dpl=07108aa8fed6d71e441bef7dec41bdd1f4e09fe61ccf552aeee47ac38f755716363934366365343962616637383730303038623261653938"/> <p>TABLE 21. Doctoral degrees awarded to men, by major field group: 1966-2012</p> <p>S&amp;E = science and engineering. NOTE: See appendix B for specific fields that are included in each category. SOURCE: National Science Foundation, National Center for Science and Engineering Statistics, Survey of Earned Doctorates.</p> <p>To raise the bar, we introduced more challenging internal benchmarks based on real business use cases from customers. We then evaluated several models across the domains highlighted below, comparing their outputs to ground truth using a fuzzy-match metric for accuracy.</p> <img src="https://cms.mistral.ai/assets/71a86b4e-b67e-49c0-b2a2-57ffdb42717f.png?width=2377&amp;height=1318"/> <img src="https://cms.mistral.ai/assets/00408f9b-0cb7-447c-b4f8-bdefc4a3f3dc.png?width=2445&amp;height=1242"/> <p>Whereas most OCR solutions today specialize in specific document types, Mistral OCR 3 is designed to excel at processing the vast majority of document types found in organizations and everyday settings.</p> <p>Handwriting: Mistral OCR accurately interprets cursive, mixed-content annotations, and handwritten text layered over printed forms.</p> <p>Forms: Improved detection of boxes, labels, handwritten entries, and dense layouts. Works well on invoices, receipts, compliance forms, government documents, and more.</p> <p>Scanned &amp; complex documents: Significantly more robust to compression artifacts, skew, distortion, low DPI, and background noise.</p> <p>Complex tables: Reconstructs table structures with headers, merged cells, multi-row blocks, and column hierarchies. Outputs HTML table tags with colspan/rowspan to fully preserve layout.</p> <p>Mistral OCR 3 is a significant upgrade across all languages and document form factors compared to Mistral OCR 2.</p> <img src="https://cms.mistral.ai/assets/1682ebdd-99f1-46d4-9d26-36a716c6f2fb.png?width=1294&amp;height=943"/> <p>Mistral OCR 3 is ideal for both high-volume enterprise pipelines and interactive document workflows. Developers can use it for:</p> <p>Extracting text and images into markdown for downstream agents and knowledge systems</p> <p>Automated parsing of forms, invoices, and operational documents</p> <p>End-to-end document understanding pipelines</p> <p>Digitization of handwritten or historical documents</p> <p>Any other document-to-knowledge transformation application</p> <p>Our early customers are using Mistral OCR 3 to process invoices into structured fields, digitize company archives, extract clean text from technical and scientific reports, and improve enterprise search.</p> <p>“OCR remains foundational for enabling generative AI and agentic AI,” said Tim Law, IDC Director of Research for AI and Automation.
“Those organizations that can efficiently and cost-effectively extract text and embedded images with high fidelity will unlock value and will gain a competitive advantage from their data by providing richer context.”</p> <p>Access the model either through the API or via the new Document AI Playground interface, both in <a href="https://console.mistral.ai/build/document-ai/ocr-playground">Mistral AI Studio</a>. Mistral OCR 3 is fully backward compatible with Mistral OCR 2. For more details, head over to <a href="https://docs.mistral.ai/capabilities/document_ai/basic_ocr">mistral.ai/docs</a>.</p> https://mistral.ai/news/mistral-ocr-3 Product Wed, 17 Dec 2025 00:00:00 +0000