Inception: Mercury 2

InceptionID: inception/mercury-2

Mercury 2 is an extremely fast reasoning LLM and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving >1,000 tokens/sec on standard GPUs. Mercury 2 is more than 5x faster than leading speed-optimized LLMs like Claude 4.5 Haiku and GPT 5 Mini, at a fraction of the cost. Mercury 2 supports tunable reasoning levels, 128K context, native tool use, and schema-aligned JSON output. It is built for coding workflows where latency compounds, for real-time voice and search, and for agent loops, and it is OpenAI API compatible. Read more in the [blog post](https://www.inceptionlabs.ai/blog/introducing-mercury-2).
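Since the model is OpenAI API compatible, a request to it looks like a standard chat completions payload. The sketch below only builds that payload; the base URL, the `mercury-2` model identifier, and the `reasoning_effort` parameter name are assumptions for illustration, not confirmed Inception API details.

```python
import json

# Assumed OpenAI-compatible endpoint (hypothetical URL, check the docs).
BASE_URL = "https://api.inceptionlabs.ai/v1"

def build_request(prompt: str, effort: str = "low") -> dict:
    """Build an OpenAI-style chat completion payload for Mercury 2."""
    return {
        "model": "mercury-2",        # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 50_000,        # model's stated max output tokens
        "reasoning_effort": effort,  # tunable reasoning level (assumed name)
    }

payload = build_request("Summarize diffusion LLMs in one sentence.")
body = json.dumps(payload)  # POST this to f"{BASE_URL}/chat/completions"
```

Because the payload shape matches the OpenAI chat completions format, existing OpenAI SDK clients should work by pointing their base URL at the Inception endpoint.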

Pricing per 1M Tokens

| Token Type | Price per 1M Tokens |
| --- | --- |
| Input (Prompt) | $0.25 |
| Output (Completion) | $0.75 |
| Cache Read | $0.02 |
| Cache Write | Free |
| Image | N/A |
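The per-token rates above translate into request cost as a simple weighted sum. A minimal sketch, using only the rates from the pricing table (cache writes are free, so they add nothing):

```python
def cost_usd(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimated request cost using the published per-1M-token rates."""
    IN, OUT, CACHE_READ = 0.25, 0.75, 0.02  # USD per 1M tokens
    uncached = input_tokens - cached_tokens
    return (uncached * IN + cached_tokens * CACHE_READ + output_tokens * OUT) / 1_000_000

# e.g. a 10K-token prompt (half served from cache) with a 2K-token completion:
print(cost_usd(10_000, 2_000, cached_tokens=5_000))  # → 0.00285
```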

Specifications

| Specification | Value |
| --- | --- |
| Context Length | 128K |
| Max Output Tokens | 50K |
| Input Modalities | Text |
| Output Modalities | Text |
| Tokenizer | Other |
| Instruct Type | N/A |
| Top Provider Context | 128K |
| Top Provider Max Output | 50K |
| Moderated | No |


Last updated: March 23, 2026

First tracked: March 23, 2026