Divisital 2 offers a high‑performing, openly licensed coding model that could broaden access to AI‑assisted development, driving productivity gains and intensifying competition with proprietary solutions.
The video announces the launch of Mistral AI’s next‑generation coding model, Divisital 2, positioning it as an open‑weight, high‑performance alternative for software developers. Two variants are released: a 123‑billion‑parameter model under a modified MIT license and a 24‑billion‑parameter model under Apache 2.0, both engineered specifically for code generation, repository‑wide editing, debugging, and multi‑file reasoning.
Key performance data shows Divisital 2 achieving 72.2% on the SWE‑Bench Verified benchmark, placing it among the top open‑source coding models, while the smaller 24B version still posts an impressive 68% score. The models are touted as ready for immediate use, offering developers the ability to issue natural‑language commands—such as “refactor this function and update related modules”—and receive automated, multi‑file updates accompanied by step‑by‑step explanations.
The video highlights a concrete usage scenario: a developer can hand the model a high‑level request, and Divisital 2 will parse the entire codebase, modify several files, and generate a clear change log. This end‑to‑end workflow exemplifies the model’s capacity for repo‑wide reasoning, a capability traditionally reserved for proprietary solutions like GitHub Copilot or OpenAI’s Codex.
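The workflow described above can be sketched as a single natural-language request bundled with repository context. This is a minimal illustration only: the model identifier `divisital-2`, the chat-completions payload shape, and the convention of inlining files under path headers are all assumptions, not documented API details from the announcement.

```python
# Hypothetical sketch: packaging a repo-wide refactor instruction plus the
# relevant source files into a chat-completions-style payload. The model
# name and file-attachment convention are assumptions for illustration.

def build_refactor_request(instruction: str, files: dict[str, str]) -> dict:
    """Combine a natural-language instruction and repository files
    into one request the model can reason over as a whole."""
    # Inline each file under a path header so the model can perform
    # multi-file edits and reference paths in its change log.
    context = "\n\n".join(
        f"### {path}\n{source}" for path, source in sorted(files.items())
    )
    return {
        "model": "divisital-2",  # assumed identifier
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a coding assistant. Apply the requested edit "
                    "across all relevant files and explain each change."
                ),
            },
            {"role": "user", "content": f"{instruction}\n\n{context}"},
        ],
    }

request = build_refactor_request(
    "Refactor this function and update related modules",
    {
        "utils.py": "def helper():\n    return 42\n",
        "app.py": "from utils import helper\n",
    },
)
```

In a real integration, the returned payload would be posted to the provider's chat endpoint; the response would then contain the modified files and the step-by-step explanation the video describes.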
If the claims hold up in broader testing, Divisital 2 could democratize advanced AI‑assisted development by providing a powerful, freely available tool that lowers the barrier to entry for smaller firms and open‑source projects. Its open licensing may accelerate integration into IDEs, CI pipelines, and enterprise tooling, potentially reshaping the competitive landscape of AI‑driven software engineering.