Updated March 26, 2026 · Based on independent benchmark data
MiMo-V2-Pro and GPT-5.4 mini (xhigh) are virtually tied on intelligence (49.2 vs 48.1). For speed, GPT-5.4 mini (xhigh) wins at 218 tok/s vs 91 tok/s.
| Metric | MiMo-V2-Pro | GPT-5.4 mini (xhigh) |
|---|---|---|
| Intelligence Score | 49.2 | 48.1 |
| Coding Score | 41.4 | 51.5 |
| Math Score | N/A | N/A |
| Speed (tok/s) | 91 tok/s | 218 tok/s |
| Latency (TTFT) | 1.55s | 7.45s |
| Input Price / 1M tokens | $1.00 | $0.75 |
| Output Price / 1M tokens | $3.00 | $4.50 |
| Context Window | N/A | N/A |
MiMo-V2-Pro and GPT-5.4 mini (xhigh) perform similarly on overall intelligence, scoring 49.2 and 48.1 respectively. For coding tasks, GPT-5.4 mini (xhigh) has the edge with a coding score of 51.5 vs 41.4.
GPT-5.4 mini (xhigh) generates output significantly faster at 218 tok/s compared to MiMo-V2-Pro's 91 tok/s, making it 2.4x faster for streaming responses. Time to first token is 1.55s for MiMo-V2-Pro vs 7.45s for GPT-5.4 mini (xhigh), which affects perceived responsiveness in interactive applications.
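The speed and latency figures above pull in opposite directions, so total response time depends on output length. A minimal sketch using only the TTFT and tok/s numbers from the table (and assuming a constant streaming rate, which real deployments won't exactly match):

```python
# Sketch: end-to-end streaming time from the TTFT and throughput
# figures in the table above, assuming a constant token rate.

def total_time(ttft_s: float, tok_per_s: float, n_tokens: int) -> float:
    """Seconds until the full response has streamed."""
    return ttft_s + n_tokens / tok_per_s

MIMO = {"ttft": 1.55, "tps": 91}    # MiMo-V2-Pro
GPT = {"ttft": 7.45, "tps": 218}    # GPT-5.4 mini (xhigh)

for n in (200, 500, 1000, 2000):
    t_mimo = total_time(MIMO["ttft"], MIMO["tps"], n)
    t_gpt = total_time(GPT["ttft"], GPT["tps"], n)
    print(f"{n:>5} tokens: MiMo-V2-Pro {t_mimo:6.2f}s  "
          f"GPT-5.4 mini (xhigh) {t_gpt:6.2f}s")
```

With these figures, MiMo-V2-Pro's much lower TTFT means it actually delivers short responses (roughly under 900 tokens) sooner end-to-end, while GPT-5.4 mini (xhigh)'s higher throughput wins on longer generations.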
GPT-5.4 mini (xhigh) has cheaper input at $0.75/1M tokens vs $1.00/1M for MiMo-V2-Pro, but more expensive output at $4.50/1M vs $3.00/1M. For a typical workload of 100 requests per day at 2,000 input tokens each, MiMo-V2-Pro would cost approximately $6.00/month vs $4.50/month for GPT-5.4 mini (xhigh) in input costs alone; output-heavy workloads can tilt the total the other way.
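Because the input and output prices point in opposite directions, it helps to estimate a full monthly bill. A sketch using the table's prices; the 500 output tokens per request is a hypothetical figure added for illustration, not from the source:

```python
# Sketch: monthly cost from the per-1M-token prices in the table.
# The output-token volume below is a hypothetical assumption.

def monthly_cost(in_price: float, out_price: float,
                 req_per_day: int, in_tok: int, out_tok: int,
                 days: int = 30) -> float:
    """USD per month; prices are per 1M tokens."""
    tokens_in = req_per_day * in_tok * days
    tokens_out = req_per_day * out_tok * days
    return (tokens_in * in_price + tokens_out * out_price) / 1_000_000

# 100 requests/day at 2,000 input tokens (the example above),
# plus a hypothetical 500 output tokens per request.
mimo = monthly_cost(1.00, 3.00, 100, 2000, 500)
gpt = monthly_cost(0.75, 4.50, 100, 2000, 500)
print(f"MiMo-V2-Pro: ${mimo:.2f}/mo  GPT-5.4 mini (xhigh): ${gpt:.2f}/mo")
```

Under these illustrative numbers, the higher output price flips the comparison: once output tokens are counted, MiMo-V2-Pro comes out slightly cheaper ($10.50 vs $11.25 per month).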
Choose GPT-5.4 mini (xhigh) when you need stronger coding performance (51.5 vs 41.4) or faster output (218 tok/s vs 91 tok/s); choose MiMo-V2-Pro when low time to first token (1.55s vs 7.45s) or cheaper output tokens matter more.
GPT-5.4 mini (xhigh) scores higher on coding benchmarks (51.5 vs 41.4), making it the better choice for programming tasks.
GPT-5.4 mini (xhigh) has cheaper input pricing at $0.75/1M tokens vs $1.00/1M for MiMo-V2-Pro, though its output pricing is higher ($4.50/1M vs $3.00/1M).
GPT-5.4 mini (xhigh) is faster, producing output at 218 tok/s compared to MiMo-V2-Pro's 91 tok/s.
No. Neither model supports image input; both accept and produce text only.
| Metric | MiMo-V2-Pro | GPT-5.4 mini (xhigh) |
|---|---|---|
| Max Output Tokens | N/A | N/A |
| Input Modalities | Text | Text |
| Output Modalities | Text | Text |
| Free Tier | No | No |
Both models perform similarly on intelligence benchmarks. Choose based on specific needs: pricing, speed, context window, or provider ecosystem.