Best LLM API Models for Coding

This guide highlights models whose public metadata indicates coding, programming, developer, or software-oriented use cases.

Models listed: 50
Cost example tokens: 1M + 500K
Normalized prices: USD / 1M

Quick shortlist

Start with Claude Opus 4.7.

This guide leads with coding-oriented models ordered by current popularity rank, so developers can shortlist practical options faster.

Lead model: 🔥 Claude Opus 4.7
Provider: Anthropic
Sample cost: $17.5
Context: 1M

The ranking is a discovery aid, not a final recommendation. Always compare the model against your workload and verify provider pricing before production use.

How to read this ranking

Models are included when their public model metadata indicates coding, programming, developer, software, or code generation use cases.
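As a sketch of that inclusion rule, a simple keyword filter over a model's use-case metadata might look like the following (the `use_cases` field name and substring matching are assumptions for illustration, not the actual schema):

```python
# Keywords taken from the inclusion criteria above.
CODING_KEYWORDS = ("coding", "programming", "developer", "software", "code generation")

def is_coding_model(use_cases: str) -> bool:
    """Return True when a model's use-case metadata mentions a coding keyword."""
    text = use_cases.lower()
    return any(keyword in text for keyword in CODING_KEYWORDS)

assert is_coding_model("Agentic coding and tool use")
assert not is_coding_model("General chat and translation")
```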

Model Ranking

Browse all models
| Model | Provider | Prompt ($/1M) | Output ($/1M) | Example Cost | Your Cost | Context | Rank | Release |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 🔥 Claude Opus 4.7 | Anthropic | $5 | $25 | $17.5 | $17.5 | 1M | #2 | 2026-04-16 |
| 🔥 Claude Sonnet 4.6 | Anthropic | $3 | $15 | $10.5 | $10.5 | 1M | #3 | 2026-02-17 |
| 🔥 Kimi K2.6 | MoonshotAI | $0.74 | $3.5 | $2.49 | $2.49 | 262.14K | #5 | 2026-04-20 |
| 🔥 Gemini 3 Flash Preview | Google | $0.5 | $3 | $2 | $2 | 1.05M | #6 | 2025-12-17 |
| 🔥 DeepSeek V4 Pro | DeepSeek | $0.435 | $0.87 | $0.87 | $0.87 | 1.05M | #8 | 2026-04-24 |
| New 🔥 Ring-2.6-1T (free) | inclusionAI | $0 | $0 | $0 | $0 | 262.14K | #10 | 2026-05-08 |
| 🔥 Gemini 2.5 Flash | Google | $0.3 | $2.5 | $1.55 | $1.55 | 1.05M | #13 | 2025-06-17 |
| 🔥 Owl Alpha | OpenRouter | $0 | $0 | $0 | $0 | 1.05M | #17 | 2026-04-28 |
| CoBuddy (free) | Baidu Qianfan | $0 | $0 | $0 | $0 | 131.07K | Unranked | 2026-05-06 |
| Granite 4.1 8B | IBM | $0.05 | $0.1 | $0.1 | $0.1 | 131.07K | Unranked | 2026-04-30 |
| Mistral Medium 3.5 | Mistral | $1.5 | $7.5 | $5.25 | $5.25 | 262.14K | Unranked | 2026-04-30 |
| Laguna XS.2 (free) | Poolside | $0 | $0 | $0 | $0 | 131.07K | Unranked | 2026-04-28 |
| Laguna M.1 (free) | Poolside | $0 | $0 | $0 | $0 | 131.07K | Unranked | 2026-04-28 |
| Qwen3.6 Max Preview | Qwen | $1.04 | $6.24 | $4.16 | $4.16 | 262.14K | Unranked | 2026-04-27 |
| MiMo-V2.5-Pro | Xiaomi | $1 | $3 | $2.5 | $2.5 | 1.05M | Unranked | 2026-04-22 |
| GPT-5.4 Image 2 | OpenAI | $8 | $15 | $15.5 | $15.5 | 272K | Unranked | 2026-04-21 |
| GLM 5.1 | Z.ai | $1.05 | $3.5 | $2.8 | $2.8 | 202.75K | Unranked | 2026-04-07 |
| GLM 5V Turbo | Z.ai | $1.2 | $4 | $3.2 | $3.2 | 202.75K | Unranked | 2026-04-01 |
| KAT-Coder-Pro V2 | Kwaipilot | $0.3 | $1.2 | $0.9 | $0.9 | 256K | Unranked | 2026-03-27 |
| GPT-5.4 Mini | OpenAI | $0.75 | $4.5 | $3 | $3 | 400K | Unranked | 2026-03-17 |
| Qwen3.5-9B | Qwen | $0.04 | $0.15 | $0.11 | $0.11 | 262.14K | Unranked | 2026-03-10 |
| GPT-5.4 | OpenAI | $2.5 | $15 | $10 | $10 | 1.05M | Unranked | 2026-03-05 |
| GPT-5.3-Codex | OpenAI | $1.75 | $14 | $8.75 | $8.75 | 400K | Unranked | 2026-02-24 |
| Gemini 3.1 Pro Preview | Google | $2 | $12 | $8 | $8 | 1.05M | Unranked | 2026-02-19 |
| MiniMax M2.5 | MiniMax | $0.15 | $1.15 | $0.72 | $0.72 | 196.61K | Unranked | 2026-02-12 |
| MiniMax M2.5 (free) | MiniMax | $0 | $0 | $0 | $0 | 196.61K | Unranked | 2026-02-12 |
| GLM 5 | Z.ai | $0.6 | $1.92 | $1.56 | $1.56 | 202.75K | Unranked | 2026-02-11 |
| Claude Opus 4.6 | Anthropic | $5 | $25 | $17.5 | $17.5 | 1M | Unranked | 2026-02-04 |
| Qwen3 Coder Next | Qwen | $0.11 | $0.8 | $0.51 | $0.51 | 262.14K | Unranked | 2026-02-04 |
| Kimi K2.5 | MoonshotAI | $0.4 | $1.9 | $1.35 | $1.35 | 262.14K | Unranked | 2026-01-27 |
| GPT Audio | OpenAI | $2.5 | $10 | $7.5 | $7.5 | 128K | Unranked | 2026-01-19 |
| GPT Audio Mini | OpenAI | $0.6 | $2.4 | $1.8 | $1.8 | 128K | Unranked | 2026-01-19 |
| GLM 4.7 Flash | Z.ai | $0.06 | $0.4 | $0.26 | $0.26 | 202.75K | Unranked | 2026-01-19 |
| GPT-5.2-Codex | OpenAI | $1.75 | $14 | $8.75 | $8.75 | 400K | Unranked | 2026-01-14 |
| MiniMax M2.1 | MiniMax | $0.29 | $0.95 | $0.76 | $0.76 | 196.61K | Unranked | 2025-12-23 |
| GLM 4.7 | Z.ai | $0.4 | $1.75 | $1.27 | $1.27 | 202.75K | Unranked | 2025-12-22 |
| Nemotron 3 Nano 30B A3B | NVIDIA | $0.05 | $0.2 | $0.15 | $0.15 | 262.14K | Unranked | 2025-12-14 |
| Nemotron 3 Nano 30B A3B (free) | NVIDIA | $0 | $0 | $0 | $0 | 256K | Unranked | 2025-12-14 |
| GPT-5.2 Pro | OpenAI | $21 | $168 | $105 | $105 | 400K | Unranked | 2025-12-10 |
| Devstral 2 2512 | Mistral | $0.4 | $2 | $1.4 | $1.4 | 262.14K | Unranked | 2025-12-09 |
| Relace Search | Relace | $1 | $3 | $2.5 | $2.5 | 256K | Unranked | 2025-12-08 |
| Rnj 1 Instruct | EssentialAI | $0.15 | $0.15 | $0.22 | $0.22 | 32.77K | Unranked | 2025-12-07 |
| GPT-5.1-Codex-Max | OpenAI | $1.25 | $10 | $6.25 | $6.25 | 400K | Unranked | 2025-12-04 |
| Claude Opus 4.5 | Anthropic | $5 | $25 | $17.5 | $17.5 | 200K | Unranked | 2025-11-24 |
| GPT-5.1-Codex | OpenAI | $1.25 | $10 | $6.25 | $6.25 | 400K | Unranked | 2025-11-13 |
| GPT-5.1-Codex-Mini | OpenAI | $0.25 | $2 | $1.25 | $1.25 | 400K | Unranked | 2025-11-13 |
| MiniMax M2 | MiniMax | $0.255 | $1 | $0.76 | $0.76 | 196.61K | Unranked | 2025-10-23 |
| GPT-5 Image | OpenAI | $10 | $10 | $15 | $15 | 400K | Unranked | 2025-10-14 |
| Llama 3.3 Nemotron Super 49B V1.5 | NVIDIA | $0.1 | $0.4 | $0.3 | $0.3 | 131.07K | Unranked | 2025-10-10 |
| ERNIE 4.5 21B A3B Thinking | Baidu | $0.07 | $0.28 | $0.21 | $0.21 | 131.07K | Unranked | 2025-10-09 |

Pricing FAQ

How is the sample workload cost calculated?

The sample workload uses 1 million input tokens plus 500 thousand output tokens, then applies each model's normalized USD price per 1 million tokens.
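That calculation can be written out directly; the prices below are taken from the Claude Opus 4.7 and GPT-5.4 rows in the table above:

```python
def sample_cost(prompt_usd_per_m: float, output_usd_per_m: float) -> float:
    """Sample workload: 1M input tokens + 500K output tokens,
    with prices normalized to USD per 1M tokens."""
    input_m, output_m = 1.0, 0.5  # token counts in millions
    return input_m * prompt_usd_per_m + output_m * output_usd_per_m

# Claude Opus 4.7: $5 prompt, $25 output -> $17.5, matching the table
assert sample_cost(5, 25) == 17.5
# GPT-5.4: $2.5 prompt, $15 output -> $10
assert sample_cost(2.5, 15) == 10.0
```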

Why do input and output token prices matter separately?

Many applications are output-token heavy, while retrieval and classification workloads may be input-token heavy. Comparing both prices helps avoid picking a model that is cheap for the wrong workload shape.
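To see why workload shape flips the ranking, compare two rows from the table, Kimi K2.6 ($0.74 prompt / $3.5 output) and MiMo-V2.5-Pro ($1 prompt / $3 output), under an input-heavy and an output-heavy mix; the token counts here are illustrative, not a benchmark:

```python
def workload_cost(prompt_price: float, output_price: float,
                  input_m: float, output_m: float) -> float:
    """Cost in USD, with prices per 1M tokens and token counts in millions."""
    return prompt_price * input_m + output_price * output_m

kimi = lambda i, o: workload_cost(0.74, 3.5, i, o)  # Kimi K2.6
mimo = lambda i, o: workload_cost(1.0, 3.0, i, o)   # MiMo-V2.5-Pro

# Input-heavy retrieval shape (10M in, 0.5M out): Kimi is cheaper (~$9.15 vs $11.5).
assert kimi(10, 0.5) < mimo(10, 0.5)
# Output-heavy assistant shape (1M in, 5M out): MiMo is cheaper ($16 vs ~$18.24).
assert mimo(1, 5) < kimi(1, 5)
```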

Should I verify prices before production use?

Yes. AI Model Matrix normalizes public pricing metadata for comparison, but provider availability, limits, and prices can change. Always verify the final contract or provider dashboard before production use.

Related Guides

Cheapest LLM APIs

Sort models by estimated workload cost and normalized token prices.

Largest Context Windows

Find models for long documents, retrieval, and codebase context.

Coding Models

Compare code-oriented models by cost, context, and popularity rank.

Free Models

Browse zero-price models for prototypes and evaluation.

RAG Models

Start from large context windows and practical input-cost constraints.

Chatbot Costs

Find budget-sensitive models for output-heavy assistant traffic.
