K2.5 shows that open‑source models can approach proprietary performance at far lower cost, accelerating adoption of advanced multimodal and agentic AI across businesses.
Moonshot AI has released Kimi K2.5, billed as the most powerful open‑source model to date, extending the K2 family with native multimodal vision and an expanded agentic architecture.
The model retains the trillion‑parameter mixture‑of‑experts backbone, activating 32 billion parameters per inference, and was trained on roughly 15 trillion mixed visual‑text tokens. A new “Coding with Vision” feature lets it generate code from UI screenshots or video workflows, while the self‑directed agent swarm can spawn up to 100 sub‑agents and issue up to 1,500 tool calls in a single session, cutting execution time by roughly 4.5× thanks to Parallel Agent Reinforcement Learning (PARL).
In the reviewer’s benchmark suite, K2.5 lands fifth overall with a 64% score, trailing Gemini 3 Pro (100%) and Claude Opus 4.5 Max (74%). It outperforms Claude Sonnet 4.5 and DeepSeek V3.2, scores 72% on coding and 96.1% on AIME 2025, and costs only $0.27 per run, significantly cheaper than comparable proprietary models. The model ships with OpenAI‑compatible APIs, VS Code integration, and INT4 quantization for efficient deployment.
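Because the release advertises OpenAI‑compatible APIs, existing OpenAI client code should work against a K2.5 deployment by repointing the base URL. A minimal sketch of the request shape, where the endpoint URL and model identifier are illustrative assumptions rather than documented values:

```python
import json

# Assumed local deployment exposing an OpenAI-compatible /v1 route;
# the URL and model name below are hypothetical, not official values.
BASE_URL = "http://localhost:8000/v1"

payload = {
    "model": "kimi-k2.5",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "Generate HTML/CSS for this login screen."}
    ],
    "temperature": 0.6,
}

# With the official `openai` package one would send this as:
#   client = OpenAI(base_url=BASE_URL, api_key="...")
#   client.chat.completions.create(**payload)
print(json.dumps(payload, indent=2))
```

The point of OpenAI compatibility is exactly this: no new SDK is needed, only a different base URL and model name in otherwise unchanged client code.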
Because K2.5 delivers high‑end multimodal and agentic performance at a fraction of the price, it lowers the barrier for enterprises and developers building complex AI workflows without licensing costly closed‑source services. As open‑weight models catch up to industry leaders, that could reshape the competitive landscape.