Updated March 25, 2026 · Based on independent benchmark data
Anthropic: Claude Sonnet 4 leads in intelligence with a score of 51.7 vs 48.4, while Google: Nano Banana Pro (Gemini 3 Pro Image Preview) wins on speed at 119 tok/s vs 65 tok/s.
| Metric | Anthropic: Claude Sonnet 4 | Google: Nano Banana Pro (Gemini 3 Pro Image Preview) |
|---|---|---|
| Intelligence Score | 51.7 | 48.4 |
| Coding Score | 50.9 | 46.5 |
| Math Score | N/A | 95.7 |
| Speed (tok/s) | 65 tok/s | 119 tok/s |
| Latency (TTFT) | 38.68s | 21.66s |
| Input Price / 1M tokens | $3.00 | $2.00 |
| Output Price / 1M tokens | $15.00 | $12.00 |
| Context Window | 200K | 66K |
| Max Output Tokens | 64K | 33K |
| Input Modalities | Image + Text + File | Image + Text |
| Output Modalities | Text | Image + Text |
| Free Tier | No | No |
Anthropic: Claude Sonnet 4 outperforms Google: Nano Banana Pro (Gemini 3 Pro Image Preview) on the Artificial Analysis intelligence index with a score of 51.7 compared to 48.4. For coding tasks, Anthropic: Claude Sonnet 4 has the edge with a coding score of 50.9 vs 46.5.
Google: Nano Banana Pro (Gemini 3 Pro Image Preview) generates output significantly faster at 119 tok/s compared to Anthropic: Claude Sonnet 4's 65 tok/s, making it 1.8x faster for streaming responses. Time to first token is 21.66s for Google: Nano Banana Pro (Gemini 3 Pro Image Preview) vs 38.68s for Anthropic: Claude Sonnet 4, which affects perceived responsiveness in interactive applications.
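How throughput and time to first token combine can be seen with a back-of-envelope estimate: total wall-clock time for a streamed response is TTFT plus generation time (output tokens divided by throughput). This sketch plugs in the figures from the table above for a hypothetical 1,000-token response:

```python
# Back-of-envelope latency estimate: time until the full response has
# streamed is time-to-first-token plus generation time.
# TTFT and tok/s figures come from the comparison table above.

def response_time(ttft_s: float, tok_per_s: float, output_tokens: int) -> float:
    """Estimated seconds until the full response finishes streaming."""
    return ttft_s + output_tokens / tok_per_s

sonnet = response_time(ttft_s=38.68, tok_per_s=65, output_tokens=1000)
banana = response_time(ttft_s=21.66, tok_per_s=119, output_tokens=1000)

print(f"Claude Sonnet 4: {sonnet:.1f}s")   # ~54.1s
print(f"Nano Banana Pro: {banana:.1f}s")   # ~30.1s
```

For short responses the TTFT gap dominates; for long responses the tok/s gap dominates. Both favor Nano Banana Pro here.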
Google: Nano Banana Pro (Gemini 3 Pro Image Preview) is more affordable at $2.00/1M input tokens ($12/1M output), while Anthropic: Claude Sonnet 4 costs $3.00/1M input ($15/1M output). For a typical workload of 100 requests per day at 2,000 tokens each, Anthropic: Claude Sonnet 4 would cost approximately $18.00/month vs $12.00/month for Google: Nano Banana Pro (Gemini 3 Pro Image Preview) in input costs alone.
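The workload figures quoted above can be reproduced directly: 100 requests per day at 2,000 input tokens each over a 30-day month is 6M input tokens, priced per million. A minimal sketch of that arithmetic:

```python
# Reproducing the article's example workload: 100 requests/day at
# 2,000 input tokens each, over a 30-day month, input pricing only.

REQUESTS_PER_DAY = 100
TOKENS_PER_REQUEST = 2_000
DAYS_PER_MONTH = 30

def monthly_input_cost(price_per_million_usd: float) -> float:
    """Monthly input-token spend in USD at the given per-1M-token price."""
    tokens = REQUESTS_PER_DAY * TOKENS_PER_REQUEST * DAYS_PER_MONTH  # 6M tokens
    return tokens / 1_000_000 * price_per_million_usd

print(monthly_input_cost(3.00))  # Claude Sonnet 4 -> 18.0
print(monthly_input_cost(2.00))  # Nano Banana Pro -> 12.0
```

Output tokens are excluded here, matching the article's "input costs alone" framing; at $15.00 vs $12.00 per 1M output tokens, the same gap direction holds.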
Anthropic: Claude Sonnet 4 offers a larger context window at 200K tokens compared to Google: Nano Banana Pro (Gemini 3 Pro Image Preview)'s 66K. This means Anthropic: Claude Sonnet 4 can process roughly 100 pages of text in a single request vs 33 pages for Google: Nano Banana Pro (Gemini 3 Pro Image Preview). For output length, Anthropic: Claude Sonnet 4 can generate up to 64K tokens per response vs 33K for Google: Nano Banana Pro (Gemini 3 Pro Image Preview).
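The page counts above imply a conversion rate of roughly 2,000 tokens per page (200K tokens ≈ 100 pages) — an assumption of this article, not a fixed standard. A quick sketch of the capacity check under that assumption:

```python
# Rough context-window capacity check, using the article's implied
# conversion of ~2,000 tokens per page (an assumption, not a standard;
# actual tokens-per-page varies with formatting and tokenizer).

TOKENS_PER_PAGE = 2_000

def pages_that_fit(context_window_tokens: int) -> int:
    """Whole pages of text that fit in a single request."""
    return context_window_tokens // TOKENS_PER_PAGE

print(pages_that_fit(200_000))  # Claude Sonnet 4 -> 100 pages
print(pages_that_fit(66_000))   # Nano Banana Pro -> 33 pages
```

Whatever per-page rate you assume, the 200K vs 66K ratio means Claude Sonnet 4 fits roughly three times as much text per request.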
Choose Anthropic: Claude Sonnet 4 when you need higher intelligence (51.7), stronger coding performance (50.9), larger context window (200K). Choose Google: Nano Banana Pro (Gemini 3 Pro Image Preview) when you need faster output (119 tok/s).
Anthropic: Claude Sonnet 4 scores higher on coding benchmarks (50.9 vs 46.5), making it the better choice for programming tasks.
Google: Nano Banana Pro (Gemini 3 Pro Image Preview) is cheaper at $2.00/1M input tokens vs $3.00/1M for Anthropic: Claude Sonnet 4.
Google: Nano Banana Pro (Gemini 3 Pro Image Preview) is faster, producing output at 119 tok/s compared to Anthropic: Claude Sonnet 4's 65 tok/s.
Yes, Anthropic: Claude Sonnet 4 supports image input. Google: Nano Banana Pro (Gemini 3 Pro Image Preview) also supports images.
Benchmark data by Artificial Analysis
Data last synced: March 25, 2026
Anthropic: Claude Sonnet 4 has a larger context window at 200K compared to Google: Nano Banana Pro (Gemini 3 Pro Image Preview)'s 66K.
It depends on your priorities. Anthropic: Claude Sonnet 4 scores higher on intelligence (51.7), but Google: Nano Banana Pro (Gemini 3 Pro Image Preview) may be better for specific use cases like budget-conscious projects or speed-critical applications.