Updated March 25, 2026 · Based on independent benchmark data
GPT-5.2-Codex leads in intelligence with a score of 54.0 vs 49.6. MiniMax M2 is 6.7x cheaper on input at $0.26/1M tokens vs $1.75/1M. For speed, GPT-5.2-Codex wins at 68 tok/s vs 44 tok/s.
| Metric | MiniMax M2 | OpenAI GPT-5.2-Codex |
|---|---|---|
| Intelligence Score | 49.6 | 54.0 |
| Coding Score | 41.9 | 53.1 |
| Math Score | N/A | N/A |
| Speed (tok/s) | 44 | 68 |
| Latency (TTFT) | 2.02s | 75.14s |
| Input Price / 1M tokens | $0.26 | $1.75 |
| Output Price / 1M tokens | $1.00 | $14.00 |
| Context Window | 197K | 400K |
| Max Output Tokens | 197K | 128K |
| Input Modalities | Text | Text + Image |
| Output Modalities | Text | Text |
| Free Tier | No | No |
GPT-5.2-Codex outperforms MiniMax M2 on the intelligence index, scoring 54.0 to 49.6. For coding tasks, GPT-5.2-Codex also has the edge, with a coding score of 53.1 vs 41.9.
GPT-5.2-Codex streams output significantly faster at 68 tok/s compared to MiniMax M2's 44 tok/s, roughly 1.5x the throughput. Time to first token, however, is 2.02s for MiniMax M2 vs 75.14s for GPT-5.2-Codex, which strongly affects perceived responsiveness in interactive applications.
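Throughput and first-token latency pull in opposite directions here, so the fairer comparison is total time to a complete response. A back-of-the-envelope sketch using the benchmark figures above (real-world values will vary):

```python
# Rough end-to-end latency model: time to first token plus streaming time.
# The TTFT and tok/s figures are the benchmark numbers quoted above.

def response_time(ttft_s: float, speed_tok_s: float, output_tokens: int) -> float:
    """Estimated seconds until the full response has streamed."""
    return ttft_s + output_tokens / speed_tok_s

# For a 1,000-token response:
minimax_m2 = response_time(2.02, 44, 1000)    # ~24.7 s
gpt52_codex = response_time(75.14, 68, 1000)  # ~89.8 s
```

On these numbers, the much lower TTFT means MiniMax M2 finishes a typical response sooner despite its lower streaming speed; GPT-5.2-Codex only catches up on very long outputs.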
MiniMax M2 is more affordable at $0.26/1M input tokens ($1.00/1M output), while GPT-5.2-Codex costs $1.75/1M input ($14.00/1M output). That makes GPT-5.2-Codex roughly 6.7x more expensive per input token, which adds up significantly at scale. For a typical workload of 100 requests per day at 2,000 tokens each, MiniMax M2 would cost approximately $1.56/month vs $10.50/month for GPT-5.2-Codex in input costs alone.
GPT-5.2-Codex offers a larger context window at 400K tokens compared to MiniMax M2's 197K. This means GPT-5.2-Codex can process roughly 200 pages of text in a single request vs 98 pages for MiniMax M2. For output length, MiniMax M2 can generate up to 197K tokens per response vs 128K for GPT-5.2-Codex.
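The page estimates above imply a conversion of about 2,000 tokens per page; tokenizers and page densities vary, so treat this as a rough heuristic:

```python
# Tokens-to-pages heuristic implied by the figures above (400K -> ~200 pages).
TOKENS_PER_PAGE = 2000  # assumption; actual density depends on tokenizer and layout

def pages(context_tokens: int) -> int:
    """Approximate number of text pages that fit in a context window."""
    return context_tokens // TOKENS_PER_PAGE

pages(400_000)  # 200 pages (GPT-5.2-Codex)
pages(197_000)  # 98 pages (MiniMax M2)
```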
Choose MiniMax M2 when cost is the priority. Choose GPT-5.2-Codex when you need higher intelligence (54.0), stronger coding performance (53.1), faster streaming output (68 tok/s), or a larger context window (400K).
GPT-5.2-Codex scores higher on coding benchmarks (53.1 vs 41.9), making it the better choice for programming tasks.
MiniMax M2 is cheaper at $0.26/1M input tokens vs $1.75/1M for GPT-5.2-Codex.
GPT-5.2-Codex streams output faster, producing 68 tok/s compared to MiniMax M2's 44 tok/s.
No, MiniMax M2 does not support image input. GPT-5.2-Codex does.
Data last synced: March 25, 2026
GPT-5.2-Codex has a larger context window at 400K tokens compared to MiniMax M2's 197K.
It depends on your priorities. GPT-5.2-Codex scores higher on intelligence (54.0) and coding (53.1), but MiniMax M2 may be the better fit for budget-conscious projects or latency-sensitive applications, given its 2.02s time to first token.