Moonshot AI's Kimi K2, a 1-trillion-parameter mixture-of-experts model that activates only 32 billion parameters per token, claims state-of-the-art performance, surpassing GPT-5, Claude, and Grok-4 on a range of benchmarks including the demanding Humanity's Last Exam. The model features a 256,000-token context window and interleaved tool use, and it was trained with quantization-aware int4 precision, which Moonshot says delivers up to twice the efficiency of FP8 and cuts inference costs by up to sixfold. Moonshot, valued at less than 0.1% of OpenAI's valuation, released Kimi K2 as an open-weight model, offering comparable capabilities at a fraction of the training expense. Demonstrations show the model handling complex multi-step tasks (research, data analysis, visualization, and code generation) without falling into hallucination loops.
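To make the int4 claim concrete, here is a minimal sketch of how quantization-aware training typically works: the forward pass simulates int4 rounding so the model learns to tolerate it, while gradients still flow to the full-precision weights. This is an illustrative "fake quantization" pass in PyTorch, not Moonshot's actual training code; the function name and per-tensor scaling scheme are assumptions for the example.

```python
import torch

def fake_quant_int4(w: torch.Tensor) -> torch.Tensor:
    """Simulate int4 rounding in the forward pass while keeping
    full-precision gradients via a straight-through estimator.
    (Hypothetical helper; real QAT pipelines often use per-channel
    or per-group scales rather than this per-tensor scale.)"""
    qmax = 7.0                    # signed int4 range is [-8, 7]
    scale = w.abs().max() / qmax  # per-tensor symmetric scale
    q = torch.clamp(torch.round(w / scale), -8, 7) * scale
    # Straight-through estimator: forward uses the quantized values,
    # backward treats the rounding as identity, so gradients reach w.
    return w + (q - w).detach()

# Usage: apply during the forward pass so training adapts to int4
# rounding before the weights are finally stored in int4.
w = torch.randn(4096, 4096, requires_grad=True)
y = fake_quant_int4(w) @ torch.randn(4096, 8)
y.sum().backward()  # gradients flow to w despite the rounding
```

The payoff of training this way, rather than quantizing after the fact, is that the weights settle into values that survive int4 rounding, which is why the model can be served at int4 without the accuracy loss post-hoc quantization usually incurs.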