DeepSeek: R1 Distill Qwen 32B

Model ID: deepseek/deepseek-r1-distill-qwen-32b

DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), fine-tuned on outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Notable benchmark results include:

- AIME 2024 pass@1: 72.6
- MATH-500 pass@1: 94.3
- CodeForces rating: 1691

Fine-tuning on DeepSeek R1's outputs gives the model performance competitive with much larger frontier models.
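For context on how the model ID above is used in practice, here is a minimal sketch of calling it through an OpenAI-compatible chat completions endpoint. The base URL, endpoint path, and `API_KEY` environment variable are assumptions for illustration, not details confirmed by this page.

```python
# Minimal sketch: querying deepseek/deepseek-r1-distill-qwen-32b through an
# assumed OpenAI-compatible chat completions endpoint. Substitute your
# provider's base URL and credential handling.
import os
import requests

API_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed endpoint
API_KEY = os.environ["API_KEY"]  # hypothetical variable name

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "deepseek/deepseek-r1-distill-qwen-32b",
        "messages": [
            {"role": "user", "content": "Prove that sqrt(2) is irrational."}
        ],
    },
    timeout=120,  # reasoning-distilled models often produce long completions
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because reasoning-distilled models tend to emit lengthy chains of thought before the final answer, a generous client timeout is worth setting.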

Pricing per 1M Tokens

Input (Prompt): $0.29
Output (Completion): $0.29
Cache Read: Free
Cache Write: Free
Image: N/A
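Because input and output are billed at the same flat rate and cache reads and writes are free, estimating a request's cost is a single multiplication. A minimal sketch, assuming the listed rates:

```python
# Cost estimate at the listed flat pricing: $0.29 per 1M tokens for both
# prompt and completion. Cache reads/writes are free, so they add nothing.
PRICE_PER_M = 0.29  # USD per 1,000,000 tokens, input and output alike

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the USD cost of one request at the listed rates."""
    return (prompt_tokens + completion_tokens) / 1_000_000 * PRICE_PER_M

# Example: a 2,000-token prompt plus an 8,000-token completion costs
# (2000 + 8000) / 1e6 * 0.29 = $0.0029.
print(f"${request_cost(2_000, 8_000):.4f}")
```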

Specifications

Context Length: 33K
Max Output Tokens: 33K
Input Modalities: Text
Output Modalities: Text
Tokenizer: Qwen
Instruct Type: deepseek-r1
Top Provider Context: 33K
Top Provider Max Output: 33K
Moderated: No
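Since the tokenizer is listed as Qwen, a pre-flight token count against the 33K context window can use the Qwen 2.5 32B tokenizer linked above as a stand-in. A sketch using Hugging Face `transformers`; the exact 33,000-token limit and the output reserve are assumptions based on the rounded "33K" figure:

```python
# Pre-flight token count against the ~33K context window, using the Qwen 2.5
# 32B tokenizer as a stand-in for the listed "Qwen" tokenizer.
from transformers import AutoTokenizer

CONTEXT_LIMIT = 33_000  # approximate; the page lists the limit only as "33K"

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-32B")

def fits_in_context(prompt: str, reserve_for_output: int = 4_096) -> bool:
    """True if the prompt leaves at least `reserve_for_output` tokens free."""
    n_tokens = len(tokenizer.encode(prompt))
    return n_tokens + reserve_for_output <= CONTEXT_LIMIT

print(fits_in_context("Prove that sqrt(2) is irrational."))
```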

Last updated: March 23, 2026

First tracked: March 23, 2026