Updated March 26, 2026 · Based on independent benchmark data
GLM-5 (Reasoning) and MiMo-V2-Pro are virtually tied on intelligence (49.8 vs 49.2). For speed, MiMo-V2-Pro wins at 91 tok/s vs 66 tok/s.
| Metric | GLM-5 (Reasoning) | MiMo-V2-Pro |
|---|---|---|
| Intelligence Score | 49.8 | 49.2 |
| Coding Score | 44.2 | 41.4 |
| Math Score | N/A | N/A |
| Speed (tok/s) | 66 | 91 |
| Latency (TTFT) | 0.98s | 1.55s |
| Input Price / 1M tokens | $1.00 | $1.00 |
| Output Price / 1M tokens | $3.20 | $3.00 |
| Context Window | N/A | N/A |
| Max Output Tokens | N/A | N/A |
| Input Modalities | Text | Text |
GLM-5 (Reasoning) and MiMo-V2-Pro perform similarly on overall intelligence, scoring 49.8 and 49.2 respectively. For coding tasks, GLM-5 (Reasoning) has the edge with a coding score of 44.2 vs 41.4.
MiMo-V2-Pro generates output significantly faster at 91 tok/s compared to GLM-5 (Reasoning)'s 66 tok/s, making it roughly 1.4x faster for streaming responses. Time to first token favors GLM-5 (Reasoning) at 0.98s vs 1.55s for MiMo-V2-Pro, so GLM-5 (Reasoning) starts replying sooner, which matters for perceived responsiveness in interactive applications.
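The trade-off between TTFT and throughput can be sketched with a back-of-envelope latency model (end-to-end time ≈ TTFT + output tokens ÷ tok/s). The 500-token response length is an assumed example, not a figure from the benchmarks, and the model ignores network and queuing variance:

```python
def response_time(ttft_s: float, toks_per_s: float, output_tokens: int) -> float:
    """Rough end-to-end latency: time to first token plus streaming time."""
    return ttft_s + output_tokens / toks_per_s

# Figures from the comparison table; 500 output tokens is an assumption
glm5 = response_time(0.98, 66, 500)
mimo = response_time(1.55, 91, 500)

print(f"GLM-5 (Reasoning): {glm5:.1f}s")  # ~8.6s
print(f"MiMo-V2-Pro:       {mimo:.1f}s")  # ~7.0s
```

For long responses, MiMo-V2-Pro's higher throughput dominates; for very short replies, GLM-5 (Reasoning)'s lower TTFT can make it finish first.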
Input pricing is identical at $1.00/1M tokens for both models; MiMo-V2-Pro is slightly cheaper on output at $3.00/1M vs $3.20/1M for GLM-5 (Reasoning). For a typical workload of 100 requests per day at 2,000 tokens each, either model would cost approximately $6.00/month in input tokens alone, so any cost difference comes entirely from output volume.
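The arithmetic above can be sketched as a small cost helper. Prices come from the table; the 500-output-token figure per request is an assumption added to show where the two bills diverge:

```python
def monthly_cost(requests_per_day: int, in_toks: int, out_toks: int,
                 in_price: float, out_price: float, days: int = 30) -> float:
    """Monthly USD cost given per-1M-token input/output prices."""
    total_in = requests_per_day * in_toks * days / 1_000_000
    total_out = requests_per_day * out_toks * days / 1_000_000
    return total_in * in_price + total_out * out_price

# Hypothetical workload: 100 requests/day, 2,000 input + 500 output tokens each
glm5 = monthly_cost(100, 2000, 500, 1.00, 3.20)  # ~$10.80
mimo = monthly_cost(100, 2000, 500, 1.00, 3.00)  # ~$10.50

print(f"GLM-5 (Reasoning): ${glm5:.2f}/month")
print(f"MiMo-V2-Pro:       ${mimo:.2f}/month")
```

With identical input pricing, the gap scales only with output tokens: at this volume the difference is about $0.30/month, or $0.20 per 1M output tokens.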
Choose GLM-5 (Reasoning) when you need stronger coding performance (44.2). Choose MiMo-V2-Pro when you need faster output (91 tok/s).
GLM-5 (Reasoning) scores higher on coding benchmarks (44.2 vs 41.4), making it the better choice for programming tasks.
Input pricing is the same for both models at $1.00/1M tokens; MiMo-V2-Pro is slightly cheaper on output at $3.00/1M vs $3.20/1M for GLM-5 (Reasoning).
MiMo-V2-Pro is faster, producing output at 91 tok/s compared to GLM-5 (Reasoning)'s 66 tok/s.
No. Neither GLM-5 (Reasoning) nor MiMo-V2-Pro supports image input; both are text-only.
Data last synced: March 26, 2026
| Metric | GLM-5 (Reasoning) | MiMo-V2-Pro |
|---|---|---|
| Output Modalities | Text | Text |
| Free Tier | No | No |
Both models perform similarly on intelligence benchmarks. Choose based on specific needs: pricing, speed, context window, or provider ecosystem.