MiniMax-M2 Is the New King of Open Source LLMs (Especially for Agentic Tool Calling)
Why It Matters
MiniMax-M2 gives enterprises a state‑of‑the‑art, cost‑efficient LLM that can be self‑hosted and freely customized for autonomous tool‑use, reducing dependence on expensive proprietary APIs and lowering compute overhead for AI‑driven automation.
Summary
MiniMax-M2, the latest open‑source LLM from Chinese startup MiniMax, has claimed the top spot among open‑weight models on the Artificial Analysis Intelligence Index and posted near‑proprietary scores on agentic tool‑calling benchmarks (τ²‑Bench 77.2, BrowseComp 44.0, FinSearchComp‑global 65.5). Built on a sparse Mixture‑of‑Experts architecture with 230 billion total parameters but only 10 billion active per forward pass, it can be served on as few as four NVIDIA H100 GPUs at FP8 precision. The model is released under an MIT license and is available on Hugging Face, GitHub, and ModelScope, as well as via MiniMax's API, which supports both the OpenAI and Anthropic API formats, with competitive pricing of $0.30 per million input tokens and $1.20 per million output tokens. Independent benchmarks place its reasoning and coding abilities close to GPT‑5 (thinking) and Claude Sonnet 4.5, making it the highest‑performing open model for real‑world agentic and developer workflows.
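Because MiniMax's API accepts the OpenAI request format, an agentic tool‑calling request to MiniMax-M2 looks like a standard OpenAI chat‑completions payload. Below is a minimal sketch of such a payload; the model identifier and the `get_weather` tool are illustrative assumptions (the article only states that the OpenAI and Anthropic formats are supported), so consult MiniMax's API documentation for the exact endpoint and model name.

```python
# Sketch of an OpenAI-format tool-calling request body for MiniMax-M2.
# The model identifier and the get_weather tool are hypothetical examples;
# check MiniMax's API docs for the real endpoint and model name.
import json

payload = {
    "model": "MiniMax-M2",  # assumed identifier, per the article's model name
    "messages": [
        {"role": "user", "content": "What is the weather in Shanghai right now?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Fetch the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# An OpenAI-compatible client would POST this JSON body to the
# chat-completions endpoint; the model responds with a tool_call
# that the calling agent executes before continuing the conversation.
print(json.dumps(payload, indent=2))
```

Since the payload is format‑compatible, existing OpenAI SDK clients and agent frameworks can be pointed at MiniMax's endpoint by changing only the base URL and API key.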