OpenAI: gpt-oss-120b

Model ID: openai/gpt-oss-120b

gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
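As a sketch of how the configurable reasoning depth and native function calling surface in an OpenAI-compatible Chat Completions request (the `reasoning_effort` value and the `get_weather` tool schema here are illustrative assumptions; check your provider's docs for the exact fields it accepts):

```python
import json

# Hypothetical request payload for an OpenAI-compatible Chat Completions
# endpoint serving gpt-oss-120b; field names follow the common convention.
payload = {
    "model": "openai/gpt-oss-120b",
    "reasoning_effort": "high",  # configurable reasoning depth: low / medium / high
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    # Native function calling: declare a tool the model may choose to invoke.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

body = json.dumps(payload)
# The serialized body would then be POSTed to the provider's
# /v1/chat/completions endpoint with an Authorization header.
```

The same payload shape works for structured output generation: constrain the response by adding a `response_format` field where the provider supports it.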

Pricing per 1M Tokens

Input (Prompt): $0.04
Output (Completion): $0.19
Cache Read: Free
Cache Write: Free
Image: N/A
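At these rates, cost scales linearly with token counts. A small helper (an illustrative sketch based on the table above, not an official pricing API) makes the arithmetic concrete:

```python
# Per-token rates from the pricing table ($ per 1M tokens).
INPUT_PER_M = 0.04
OUTPUT_PER_M = 0.19

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request; cache reads and writes are free."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 10K-token prompt with a 2K-token completion.
cost = request_cost(10_000, 2_000)  # 0.0004 + 0.00038 = 0.00078
```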

Specifications

Context Length: 131K
Max Output Tokens: N/A
Input Modalities: Text
Output Modalities: Text
Tokenizer: GPT
Instruct Type: N/A
Top Provider Context: 131K
Top Provider Max Output: N/A
Moderated: No


Last updated: March 23, 2026

First tracked: March 23, 2026