Kimi K2 Thinking


👋 Hello from Kimi Team!

Introducing Kimi K2 Thinking, a 1T-parameter open-source reasoning model. It posts state-of-the-art scores of 44.9% on Humanity's Last Exam (HLE) and 60.2% on BrowseComp (overall SOTA, not just open-source SOTA).

> Trillion-parameter MoE, trained for $4.6M, 4x cheaper than peers.
> INT4 inference: 4-bit quantized weights, <1.2s latency at 256K context.
> Full step-by-step reasoning, 200+ tool calls, and GPT-5-level self-correction.
> Fully open (MIT license) with an OpenAI-compatible API; weights are live on huggingface today, agentic mode next week.
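The INT4 inference claim above refers to 4-bit weight quantization. As a toy illustration only (not the model's actual scheme, which is unspecified here), a symmetric per-tensor INT4 quantizer maps each float weight onto an integer in [-7, 7] via a single scale factor:

```python
def quantize_int4(weights):
    """Symmetric per-tensor INT4: map floats to integers in [-7, 7]."""
    # One scale for the whole tensor; guard against an all-zero tensor.
    scale = max(abs(w) for w in weights) / 7.0 or 1.0
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from INT4 values."""
    return [v * scale for v in q]

weights = [0.12, -0.5, 0.33, 0.07]
q, scale = quantize_int4(weights)       # q = [2, -7, 5, 1]
approx = dequantize_int4(q, scale)      # close to the originals
```

Real deployments typically quantize per-channel or per-group and pack two 4-bit values per byte; this sketch only shows why 4-bit storage roughly quarters memory versus FP16 while keeping weights approximately recoverable.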

We’re thrilled to ship a SOTA model that’s fully open. Can’t wait to see what you all build! 🙂
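Since the API is advertised as OpenAI-compatible, calling it should look like any OpenAI-style chat completion request. A minimal stdlib-only sketch follows; the base URL, model name, and `KIMI_API_KEY` environment variable are assumptions for illustration, so check the official docs for the real values:

```python
import json
import os
import urllib.request

# Assumed base URL; replace with the value from the provider's docs.
BASE_URL = os.environ.get("KIMI_BASE_URL", "https://api.moonshot.ai/v1")

def build_chat_request(prompt: str, model: str = "kimi-k2-thinking") -> dict:
    """Build an OpenAI-compatible /chat/completions request body."""
    return {
        "model": model,  # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
    }

def ask(prompt: str) -> str:
    """POST the request and return the assistant's reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['KIMI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__" and "KIMI_API_KEY" in os.environ:
    print(ask("In one sentence, what is a mixture-of-experts model?"))
```

Because the request and response shapes mirror the OpenAI Chat Completions format, existing OpenAI client SDKs should also work by pointing their `base_url` at the provider's endpoint.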
