Best AI Models for Coding in 2026

Whether you're building apps, debugging, or writing scripts, these are the models that score highest on coding benchmarks, ranked by coding evaluation data from Artificial Analysis.

| # | Model | Provider | Intelligence | Coding | Speed | Price/1M tokens |
|---|-------|----------|--------------|--------|-------|-----------------|
| 1 | Gemini 3.1 Pro Preview | Google | 57.2 | 55.5 | 117 tok/s | $2.00 |
| 2 | Gemini 3.1 Pro Preview Custom Tools | Google | 57.2 | 55.5 | 117 tok/s | $2.00 |
| 3 | Claude Opus 4 | Anthropic | 46.5 | 47.6 | 44 tok/s | $15 |
| 4 | Nano Banana Pro (Gemini 3 Pro Image Preview) | Google | 48.4 | 46.5 | 119 tok/s | $2.00 |
| 5 | Claude Sonnet 4 | Anthropic | 44.4 | 46.4 | 44 tok/s | $3.00 |
| 6 | GPT-5 | OpenAI | 44.4 | 43.9 | 221 tok/s | $1.25 |
| 7 | MiniMax M2 | MiniMax | 49.6 | 41.9 | 44 tok/s | $0.26 |
| 8 | Gemini 3 Pro Preview | Google | 41.3 | 39.4 | 116 tok/s | $2.00 |
| 9 | GPT-5 Codex | OpenAI | 44.6 | 38.9 | 170 tok/s | $1.25 |
| 10 | o3 Deep Research | OpenAI | 38.4 | 38.4 | 77 tok/s | $10 |
| 11 | o3 Pro | OpenAI | 38.4 | 38.4 | 77 tok/s | $20 |
| 12 | Gemini 3 Flash Preview | Google | 35.0 | 37.8 | 186 tok/s | $0.50 |
| 13 | DeepSeek V3.2 | DeepSeek | 41.7 | 36.7 | 36 tok/s | $0.26 |
| 14 | DeepSeek V3.2 Speciale | DeepSeek | 41.7 | 36.7 | 36 tok/s | $0.40 |
| 15 | GPT-5.4 Nano | OpenAI | 44.6 | 36.0 | 61 tok/s | $0.20 |
| 16 | GPT-5.4 Mini | OpenAI | 44.6 | 36.0 | 61 tok/s | $0.75 |
| 17 | o1 | OpenAI | 23.7 | 34.0 | 0 tok/s | $15 |
| 18 | Claude 3 Haiku | Anthropic | 37.1 | 32.6 | 118 tok/s | $0.25 |
| 19 | Claude Haiku 4.5 | Anthropic | 37.1 | 32.6 | 118 tok/s | $1.00 |
| 20 | Gemini 2.5 Pro | Google | 34.6 | 31.9 | 120 tok/s | $1.25 |
| 21 | Gemini 2.5 Pro Preview 06-05 | Google | 34.6 | 31.9 | 120 tok/s | $1.25 |
| 22 | Step 3.5 Flash | StepFun | 37.8 | 31.6 | 78 tok/s | $0.10 |
| 23 | Nemotron 3 Super | NVIDIA | 36.0 | 31.2 | 402 tok/s | $0.10 |
| 24 | Nova Pro 1.0 | Amazon | 35.7 | 30.4 | 151 tok/s | $0.80 |
| 25 | Claude 3.5 Sonnet | Anthropic | 15.9 | 30.2 | 0 tok/s | $6.00 |
| 26 | Gemini 2.5 Flash Lite Preview 09-2025 | Google | 33.5 | 30.1 | 214 tok/s | $0.10 |
| 27 | Gemini 3.1 Flash Lite Preview | Google | 33.5 | 30.1 | 214 tok/s | $0.25 |
| 28 | MiniMax M2.5 | MiniMax | 36.1 | 29.2 | 46 tok/s | $0.20 |
| 29 | MiniMax M2.7 | MiniMax | 36.1 | 29.2 | 46 tok/s | $0.30 |
| 30 | gpt-oss-120b | OpenAI | 33.3 | 28.6 | 289 tok/s | $0.04 |
| 31 | DeepSeek V3.1 Terminus | DeepSeek | 28.1 | 28.4 | 0 tok/s | $0.21 |
| 32 | Claude 3.7 Sonnet | Anthropic | 34.7 | 27.6 | 0 tok/s | $3.00 |
| 33 | Claude 3.7 Sonnet (thinking) | Anthropic | 34.7 | 27.6 | 0 tok/s | $3.00 |
| 34 | Grok 4 Fast | xAI | 35.1 | 27.4 | 137 tok/s | $0.20 |
| 35 | o4 Mini Deep Research | OpenAI | 33.1 | 25.6 | 140 tok/s | $2.00 |
| 36 | Grok 4 | xAI | 29.7 | 25.4 | 87 tok/s | $3.00 |
| 37 | Grok 3 Mini | xAI | 32.1 | 25.2 | 199 tok/s | $0.30 |
| 38 | Mistral Small 4 | Mistral AI | 26.9 | 24.3 | 0 tok/s | $0.15 |
| 39 | R1 0528 | DeepSeek | 27.1 | 24.0 | 0 tok/s | $0.45 |
| 40 | Grok Code Fast 1 | xAI | 28.7 | 23.7 | 172 tok/s | $0.20 |
| 41 | Devstral 2 2512 | Mistral AI | 22.0 | 23.7 | 79 tok/s | $0.40 |
| 42 | Mistral Large | Mistral AI | 22.8 | 22.7 | 60 tok/s | $2.00 |
| 43 | DeepSeek V3 0324 | DeepSeek | 22.3 | 22.0 | 0 tok/s | $0.20 |
| 44 | GPT-5 Mini | OpenAI | 20.7 | 21.9 | 92 tok/s | $0.25 |
| 45 | GPT-5 Chat | OpenAI | 21.8 | 21.2 | 116 tok/s | $1.25 |
| 46 | Devstral Small 1.1 | Mistral AI | 19.5 | 20.7 | 204 tok/s | $0.10 |
| 47 | o1-pro | OpenAI | 30.8 | 20.5 | 115 tok/s | $150 |
| 48 | Claude Opus 4.5 | Anthropic | 18.0 | 19.5 | 0 tok/s | $5.00 |
| 49 | Claude Opus 4.1 | Anthropic | 18.0 | 19.5 | 0 tok/s | $15 |
| 50 | Claude Opus 4.6 | Anthropic | 18.0 | 19.5 | 0 tok/s | $5.00 |
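To read the Price/1M and Speed columns together, here is a minimal sketch that estimates the cost and generation time of a single request. The token count and model figures below are illustrative (taken from the GPT-5 row above), and this assumes a flat per-token price; real providers often price input and output tokens differently.

```python
def estimate(tokens: int, price_per_1m: float, tok_per_s: float):
    """Estimate cost (USD) and generation time (seconds) for a request.

    tokens       -- number of tokens generated
    price_per_1m -- price in USD per 1M tokens (the table's Price/1M column)
    tok_per_s    -- output speed in tokens per second (the table's Speed column)
    """
    cost = tokens / 1_000_000 * price_per_1m
    # A 0 tok/s entry means no speed data, so time is undefined.
    seconds = tokens / tok_per_s if tok_per_s else float("nan")
    return cost, seconds

# Example: 50,000 tokens from GPT-5 ($1.25/1M, 221 tok/s, per the table)
cost, seconds = estimate(50_000, 1.25, 221)
print(f"${cost:.4f}, {seconds:.1f}s")  # → $0.0625, 226.2s
```

Note that the Speed column measures raw output throughput, not end-to-end latency; time-to-first-token is not captured here.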

Benchmark data by Artificial Analysis
