Updated March 25, 2026 · Based on independent benchmark data
OpenAI: GPT-5.2-Codex and Anthropic: Claude Opus 4 are virtually tied on intelligence (54.0 vs 53.0). OpenAI: GPT-5.2-Codex is 8.6x cheaper at $1.75/1M tokens vs $15/1M. For speed, OpenAI: GPT-5.2-Codex wins at 68 tok/s vs 48 tok/s.
| Metric | OpenAI: GPT-5.2-Codex | Anthropic: Claude Opus 4 |
|---|---|---|
| Intelligence Score | 54.0 | 53.0 |
| Coding Score | 53.1 | 48.1 |
| Math Score | N/A | N/A |
| Speed (tok/s) | 68 tok/s | 48 tok/s |
| Latency (TTFT) | 75.14s | 11.42s |
| Input Price / 1M tokens | $1.75 | $15 |
| Output Price / 1M tokens | $14 | $75 |
| Context Window | 400K | 200K |
| Max Output Tokens | 128K | 32K |
| Input Modalities | Text + Image | Image + Text + File |
| Output Modalities | Text | Text |
| Free Tier | No | No |
OpenAI: GPT-5.2-Codex and Anthropic: Claude Opus 4 perform similarly on overall intelligence, scoring 54.0 and 53.0 respectively. For coding tasks, OpenAI: GPT-5.2-Codex has the edge with a coding score of 53.1 vs 48.1.
OpenAI: GPT-5.2-Codex generates output significantly faster at 68 tok/s compared to Anthropic: Claude Opus 4's 48 tok/s, making it 1.4x faster for streaming responses. Time to first token is 11.42s for Anthropic: Claude Opus 4 vs 75.14s for OpenAI: GPT-5.2-Codex, which affects perceived responsiveness in interactive applications.
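The throughput and latency figures above can be combined into a rough end-to-end estimate, assuming total response time ≈ TTFT + output tokens ÷ throughput (a simplified model; the function name is illustrative, the numbers come from the table):

```python
# Rough end-to-end latency model: total time = TTFT + output_tokens / throughput.
def response_time(ttft_s: float, tok_per_s: float, output_tokens: int) -> float:
    return ttft_s + output_tokens / tok_per_s

# Figures from the comparison table above, for a 1,000-token reply.
gpt_codex = response_time(ttft_s=75.14, tok_per_s=68, output_tokens=1000)
claude_opus = response_time(ttft_s=11.42, tok_per_s=48, output_tokens=1000)

print(f"GPT-5.2-Codex: {gpt_codex:.1f}s")   # ~89.8s
print(f"Claude Opus 4: {claude_opus:.1f}s") # ~32.3s
```

Under this simple model, Anthropic: Claude Opus 4's much lower TTFT outweighs OpenAI: GPT-5.2-Codex's throughput advantage for short and medium-length replies.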
OpenAI: GPT-5.2-Codex is more affordable at $1.75/1M input tokens ($14/1M output), while Anthropic: Claude Opus 4 costs $15/1M input ($75/1M output). That makes Anthropic: Claude Opus 4 8.6x more expensive per token, which can add up significantly at scale. For a typical workload of 100 requests per day at 2,000 tokens each, OpenAI: GPT-5.2-Codex would cost approximately $10.50/month vs $90.00/month for Anthropic: Claude Opus 4 in input costs alone.
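The monthly figures above follow from simple per-token arithmetic; a minimal sketch (prices and workload taken from this comparison, the helper name is illustrative):

```python
# Monthly input cost = requests/day * tokens/request * days, priced per 1M tokens.
def monthly_input_cost(price_per_1m: float, requests_per_day: int,
                       tokens_per_request: int, days: int = 30) -> float:
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1_000_000 * price_per_1m

print(monthly_input_cost(1.75, 100, 2000))  # 10.5  (GPT-5.2-Codex)
print(monthly_input_cost(15.0, 100, 2000))  # 90.0  (Claude Opus 4)
```

Note this covers input tokens only; output tokens ($14 vs $75 per 1M) widen the gap further for generation-heavy workloads.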
OpenAI: GPT-5.2-Codex offers a larger context window at 400K tokens compared to Anthropic: Claude Opus 4's 200K. For output length, OpenAI: GPT-5.2-Codex can generate up to 128K tokens per response vs 32K for Anthropic: Claude Opus 4.
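A quick way to apply the context and output limits above is a fit check, assuming output tokens count against the context window, as is typical (the dictionary keys and function name are illustrative, the limits come from the table):

```python
# Context and output limits from the comparison table above.
CONTEXT = {"gpt-5.2-codex": 400_000, "claude-opus-4": 200_000}
MAX_OUTPUT = {"gpt-5.2-codex": 128_000, "claude-opus-4": 32_000}

def fits(model: str, prompt_tokens: int, output_tokens: int) -> bool:
    """Check that prompt plus requested output fit the model's limits."""
    return (prompt_tokens + output_tokens <= CONTEXT[model]
            and output_tokens <= MAX_OUTPUT[model])

print(fits("claude-opus-4", 250_000, 4_000))  # False: prompt alone exceeds 200K
print(fits("gpt-5.2-codex", 250_000, 4_000))  # True: well under 400K
```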
Choose OpenAI: GPT-5.2-Codex when you need stronger coding performance (53.1 vs 48.1), faster output (68 tok/s), or lower per-token cost.
OpenAI: GPT-5.2-Codex scores higher on coding benchmarks (53.1 vs 48.1), making it the better choice for programming tasks.
OpenAI: GPT-5.2-Codex is cheaper at $1.75/1M input tokens vs $15/1M for Anthropic: Claude Opus 4.
OpenAI: GPT-5.2-Codex is faster, producing output at 68 tok/s compared to Anthropic: Claude Opus 4's 48 tok/s.
Yes, OpenAI: GPT-5.2-Codex supports image input. Anthropic: Claude Opus 4 also supports images.
Benchmark data by Artificial Analysis
Data last synced: March 25, 2026
OpenAI: GPT-5.2-Codex has a larger context window at 400K compared to Anthropic: Claude Opus 4's 200K.
Both models perform similarly on intelligence benchmarks. Choose based on specific needs: pricing, speed, context window, or provider ecosystem.