MiniMax: MiniMax-01
MiniMaxID: minimax/minimax-01
MiniMax-01 combines MiniMax-Text-01 for text generation and MiniMax-VL-01 for image understanding. It has 456 billion parameters, with 45.9 billion activated per inference, and can handle a context of up to 4 million tokens. The text model adopts a hybrid architecture combining Lightning Attention, Softmax Attention, and Mixture-of-Experts (MoE). The image model adopts the "ViT-MLP-LLM" framework and is trained on top of the text model. To read more about the release, see: https://www.minimaxi.com/en/news/minimax-01-series-2
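Given the model ID above, a request can be sketched as follows. This is a minimal illustration assuming an OpenAI-compatible chat-completions payload; the `build_request` helper and its defaults are hypothetical, not part of any official SDK.

```python
import json

def build_request(prompt, max_tokens=1024):
    """Build a hypothetical chat-completions payload for minimax/minimax-01.

    Assumes an OpenAI-compatible message schema; adapt to the
    provider's actual API before sending.
    """
    return {
        "model": "minimax/minimax-01",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Summarize Lightning Attention in one sentence.")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to whatever chat-completions endpoint the provider exposes, with authentication headers as required.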
Pricing per 1M Tokens
| Type | Price |
| --- | --- |
| Input (Prompt) | $0.20 |
| Output (Completion) | $1.10 |
| Cache Read | Free |
| Cache Write | Free |
| Image | N/A |
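The rates above translate into per-request cost as a simple weighted sum. A small sketch, assuming the listed rates of $0.20 per 1M input tokens and $1.10 per 1M output tokens (the `estimate_cost` helper is illustrative only):

```python
# Listed rates in USD per 1M tokens (from the pricing table above).
INPUT_RATE_PER_M = 0.20
OUTPUT_RATE_PER_M = 1.10

def estimate_cost(input_tokens, output_tokens):
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 1M-token prompt with a 10k-token completion.
print(f"${estimate_cost(1_000_000, 10_000):.4f}")  # → $0.2110
```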
Specifications
| Specification | Value |
| --- | --- |
| Context Length | 1.0M |
| Max Output Tokens | 1.0M |
| Input Modalities | Text + Image |
| Output Modalities | Text |
| Tokenizer | Other |
| Instruct Type | N/A |
| Top Provider Context | 1.0M |
| Top Provider Max Output | 1.0M |
| Moderated | No |
Last updated: March 23, 2026
First tracked: March 23, 2026