DeepSeek: DeepSeek V3.1 Terminus

Model ID: deepseek/deepseek-v3.1-terminus

DeepSeek-V3.1 Terminus is an update to [DeepSeek V3.1](/deepseek/deepseek-chat-v3.1) that retains the model's original capabilities while addressing issues reported by users, including language consistency and agent behaviour, and further optimizes performance in coding and search agents. It is a large hybrid reasoning model (671B parameters, 37B active) that supports both thinking and non-thinking modes. It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 128K tokens, and uses FP8 microscaling for efficient inference. Users can control the reasoning behaviour with the `reasoning` `enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config). The model improves tool use, code generation, and reasoning efficiency, achieving performance comparable to DeepSeek-R1 on difficult benchmarks while responding more quickly. It supports structured tool calling, code agents, and search agents, making it well suited to research, coding, and agentic workflows.
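A minimal sketch of toggling thinking mode, following the `reasoning` `enabled` boolean described above and the OpenRouter chat completions endpoint from the linked docs. The prompt text and the `build_request` helper name are illustrative, not part of any official SDK:

```python
import json

# OpenRouter chat completions endpoint (from the OpenRouter docs).
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, thinking: bool) -> dict:
    """Build a request body for DeepSeek V3.1 Terminus.

    `reasoning.enabled` switches the hybrid model between its
    thinking and non-thinking modes.
    """
    return {
        "model": "deepseek/deepseek-v3.1-terminus",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"enabled": thinking},
    }

# The body would be POSTed with an `Authorization: Bearer <key>` header,
# e.g. with the requests library:
#   requests.post(API_URL,
#                 headers={"Authorization": f"Bearer {api_key}"},
#                 data=json.dumps(build_request("Hello", thinking=True)))
```

Setting `"enabled": False` requests the faster non-thinking mode; omitting the `reasoning` field falls back to the provider's default behaviour.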

Pricing per 1M Tokens

Input (Prompt): $0.21
Output (Completion): $0.79
Cache Read: $0.13
Cache Write: Free
Image: N/A
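Given the per-1M-token rates above, a request's cost can be estimated with a small helper. This is a hypothetical sketch (the `estimate_cost` function is not part of any API); it assumes cache writes are free and images are unsupported, per the table:

```python
# Per-1M-token rates from the pricing table above (USD).
RATES_PER_M = {"input": 0.21, "output": 0.79, "cache_read": 0.13}

def estimate_cost(input_tokens: int, output_tokens: int,
                  cache_read_tokens: int = 0) -> float:
    """Estimate the USD cost of one request from its token counts."""
    return (input_tokens * RATES_PER_M["input"]
            + output_tokens * RATES_PER_M["output"]
            + cache_read_tokens * RATES_PER_M["cache_read"]) / 1_000_000

# e.g. 1M prompt tokens plus 500K completion tokens costs about $0.605.
```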

Specifications

Context Length: 164K
Max Output Tokens: N/A
Input Modalities: Text
Output Modalities: Text
Tokenizer: DeepSeek
Instruct Type: deepseek-v3.1
Top Provider Context: 164K
Top Provider Max Output: N/A
Moderated: No

Last updated: March 23, 2026

First tracked: March 23, 2026