MoonshotAI: Kimi K2 0905

Model ID: moonshotai/kimi-k2-0905

Kimi K2 0905 is the September update of [Kimi K2 0711](moonshotai/kimi-k2). It is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It supports long-context inference up to 256k tokens, extended from the previous 128k. This update improves agentic coding with higher accuracy and better generalization across scaffolds, and enhances frontend coding with more aesthetic and functional outputs for web, 3D, and related tasks. Kimi K2 is optimized for agentic capabilities, including advanced tool use, reasoning, and code synthesis. It excels across coding (LiveCodeBench, SWE-bench), reasoning (ZebraLogic, GPQA), and tool-use (Tau2, AceBench) benchmarks. The model is trained with a novel stack incorporating the MuonClip optimizer for stable large-scale MoE training.
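For reference, here is a minimal request sketch, assuming the model is served behind an OpenAI-compatible chat completions endpoint; the base URL and API-key variable names below are placeholders for illustration, not values documented on this page.

```python
# Minimal sketch: calling Kimi K2 0905 through an assumed OpenAI-compatible
# chat completions endpoint. The base_url and PROVIDER_API_KEY names are
# placeholders, not endpoints documented in this listing.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # hypothetical endpoint
    api_key=os.environ["PROVIDER_API_KEY"],          # hypothetical env var name
)

response = client.chat.completions.create(
    model="moonshotai/kimi-k2-0905",  # the model ID listed above
    messages=[
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": "Refactor this function to remove the nested loops."},
    ],
)
print(response.choices[0].message.content)
```

The same `chat.completions.create` call accepts OpenAI-style `tools` definitions on endpoints that support them, which is the usual way to exercise the agentic tool-use capability described above.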

Pricing per 1M Tokens

| Token Type | Price |
| --- | --- |
| Input (Prompt) | $0.40 |
| Output (Completion) | $2.00 |
| Cache Read | $0.15 |
| Cache Write | Free |
| Image | N/A |
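To make the rates above concrete, the sketch below estimates the cost of a single request from the listed per-1M-token prices. The token counts in the example are invented for illustration, and cache writes are omitted since they are listed as free.

```python
# Rough per-request cost from the listed per-1M-token rates.
INPUT_PRICE_PER_M = 0.40    # $ per 1M prompt tokens
OUTPUT_PRICE_PER_M = 2.00   # $ per 1M completion tokens
CACHE_READ_PER_M = 0.15     # $ per 1M cached prompt tokens read

def request_cost(prompt_tokens: int, completion_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the dollar cost of one request at the listed rates."""
    uncached = max(prompt_tokens - cached_tokens, 0)
    return (
        uncached * INPUT_PRICE_PER_M
        + cached_tokens * CACHE_READ_PER_M
        + completion_tokens * OUTPUT_PRICE_PER_M
    ) / 1_000_000

# Example: a 50k-token prompt (20k served from cache) and a 2k-token completion.
print(f"${request_cost(50_000, 2_000, cached_tokens=20_000):.4f}")  # $0.0190
```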

Specifications

| Specification | Value |
| --- | --- |
| Context Length | 131K |
| Max Output Tokens | N/A |
| Input Modalities | Text |
| Output Modalities | Text |
| Tokenizer | Other |
| Instruct Type | N/A |
| Top Provider Context | 131K |
| Top Provider Max Output | N/A |
| Moderated | No |
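Because the context listed here is 131K tokens, very long prompts benefit from a pre-flight size check. The sketch below uses a crude 4-characters-per-token estimate, since the tokenizer is listed only as "Other"; accurate counts would require the provider's own tokenizer tooling.

```python
# Rough pre-flight check that a prompt fits the advertised context window.
# Uses a ~4 characters/token heuristic; the exact tokenizer is unspecified.
CONTEXT_LIMIT = 131_072  # "131K" from the spec table, assumed to mean 131,072 tokens

def fits_context(messages: list[dict], reserved_for_output: int = 4_096) -> bool:
    """Return True if the estimated prompt size leaves room for the reply."""
    chars = sum(len(m.get("content") or "") for m in messages)
    estimated_prompt_tokens = chars // 4
    return estimated_prompt_tokens + reserved_for_output <= CONTEXT_LIMIT

messages = [{"role": "user", "content": "Summarize this report: ..."}]
print(fits_context(messages))  # True for short prompts
```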

Last updated: March 23, 2026

First tracked: March 23, 2026