Updated March 25, 2026 · Based on independent benchmark data
OpenAI: GPT-5.2-Codex leads in intelligence with a score of 54.0 vs 51.3 for GPT-5.2 Chat.
| Metric | OpenAI: GPT-5.2-Codex | OpenAI: GPT-5.2 Chat |
|---|---|---|
| Intelligence Score | 54.0 | 51.3 |
| Coding Score | 53.1 | 48.7 |
| Math Score | N/A | 99.0 |
| Speed (tok/s) | 68 tok/s | 69 tok/s |
| Latency (TTFT) | 75.14s | 72.74s |
| Input Price / 1M tokens | $1.75 | $1.75 |
| Output Price / 1M tokens | $14 | $14 |
| Context Window | 400K | 128K |
OpenAI: GPT-5.2-Codex outperforms GPT-5.2 Chat on the Artificial Analysis intelligence index, scoring 54.0 compared to 51.3. For coding tasks, GPT-5.2-Codex also has the edge, with a coding score of 53.1 vs 48.7.
Both models deliver similar output speeds: 68 tok/s for GPT-5.2-Codex and 69 tok/s for GPT-5.2 Chat. Time to first token is high for both, at 72.74s for GPT-5.2 Chat vs 75.14s for GPT-5.2-Codex, which affects perceived responsiveness in interactive applications.
The two models are priced identically: $1.75/1M input tokens and $14/1M output tokens. For a typical workload of 100 requests per day at 2,000 tokens each, either model would cost approximately $10.50/month in input costs alone.
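The monthly figure above can be reproduced with a few lines of arithmetic. This is a sketch of the input-cost estimate only; the workload numbers (100 requests/day, 2,000 tokens each, 30-day month) are the article's illustrative assumptions, not measured usage.

```python
# Estimate monthly input-token cost for the example workload.
REQUESTS_PER_DAY = 100
TOKENS_PER_REQUEST = 2_000
DAYS_PER_MONTH = 30                 # simplifying assumption
PRICE_PER_MILLION_INPUT = 1.75      # USD per 1M input tokens (both models)

monthly_tokens = REQUESTS_PER_DAY * TOKENS_PER_REQUEST * DAYS_PER_MONTH
monthly_cost = monthly_tokens / 1_000_000 * PRICE_PER_MILLION_INPUT
print(f"{monthly_tokens:,} tokens -> ${monthly_cost:.2f}/month")
# → 6,000,000 tokens -> $10.50/month
```

Output costs scale the same way at $14/1M tokens, so total spend depends mostly on how many tokens each response generates.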
OpenAI: GPT-5.2-Codex offers a larger context window at 400K tokens compared to GPT-5.2 Chat's 128K. That means GPT-5.2-Codex can process roughly 200 pages of text in a single request vs about 64 pages for GPT-5.2 Chat. For output length, GPT-5.2-Codex can generate up to 128K tokens per response vs 16K for GPT-5.2 Chat.
Choose OpenAI: GPT-5.2-Codex when you need higher intelligence (54.0), stronger coding performance (53.1), or a larger context window (400K).
OpenAI: GPT-5.2-Codex scores higher on coding benchmarks (53.1 vs 48.7), making it the better choice for programming tasks.
Neither model is cheaper: both cost $1.75/1M input tokens and $14/1M output tokens.
OpenAI: GPT-5.2 Chat is marginally faster, producing output at 69 tok/s compared to GPT-5.2-Codex's 68 tok/s, a negligible difference in practice.
Yes, both models support image input; GPT-5.2 Chat additionally accepts file input.
Benchmark data by Artificial Analysis
Data last synced: March 25, 2026
| Metric | OpenAI: GPT-5.2-Codex | OpenAI: GPT-5.2 Chat |
|---|---|---|
| Max Output Tokens | 128K | 16K |
| Input Modalities | Text + Image | File + Image + Text |
| Output Modalities | Text | Text |
| Free Tier | No | No |
OpenAI: GPT-5.2-Codex has a larger context window at 400K compared to OpenAI: GPT-5.2 Chat's 128K.
It depends on your priorities. OpenAI: GPT-5.2-Codex scores higher on intelligence (54.0) and coding (53.1), but GPT-5.2 Chat may be better for specific use cases such as workflows that need file input, since pricing and speed are effectively identical.