MiniMax M2, an open ~200-billion-parameter mixture-of-experts (MoE) model that activates only about 10 billion parameters per token at inference, is being touted as a frontier alternative that outperforms many proprietary models on key benchmarks. The model ranks fifth on the Artificial Analysis index, shows strong coding and agentic tool-use performance, and claims faster inference thanks to its sparse expert activation. MiniMax M2 can be run locally (reportedly on four GPUs via vLLM) or accessed through a low-cost API, and the makers publish recommended inference settings. The team also offers free trial access on its own platform.
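As a rough illustration of the local-serving path, the sketch below queries a vLLM OpenAI-compatible endpoint from Python. The model identifier, endpoint URL, and sampling values are assumptions for illustration only, not figures confirmed by the article; in practice they should be replaced with the settings the MiniMax team actually recommends.

```python
# Minimal sketch: querying a locally served MiniMax M2 instance through
# vLLM's OpenAI-compatible API. Assumes the server was started with
# something like `vllm serve MiniMaxAI/MiniMax-M2 --tensor-parallel-size 4`
# (model ID and flags are assumptions, not confirmed by the article).
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint on port 8000 by default;
# the API key is unused by a local server but required by the client.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2",  # hypothetical model ID
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    temperature=1.0,   # placeholder sampling settings; substitute the
    top_p=0.95,        # publisher's recommended inference values
    max_tokens=512,
)
print(response.choices[0].message.content)
```

The same client code works against a hosted API by pointing `base_url` at the provider's endpoint and supplying a real key, which is the main practical difference between the local and API routes described above.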