LMC ValueScore is a tilted Cobb-Douglas index that weights benchmark quality 60 percent and blended price efficiency 40 percent on a 0-100 scale. It replaces naive quality-divided-by-price rankings that implode near free tiers. 341 scored models, updated hourly. Read the full methodology.
Blended price uses 85 percent input and 15 percent output weighting, reflecting real chat and RAG workload patterns. Reasoning models have their output contribution multiplied by a family-specific expansion factor.
| Rank | Model | Lab | LMC ValueScore | Quality | Blended $/M |
|---|---|---|---|---|---|
| 1 | Gemma 4 31B | Google | 82 | 81 | $0.167 |
| 2 | Gemini 2.5 Flash Lite Preview 09-2025 | Google | 82 | 79 | $0.145 |
| 3 | Gemini 2.5 Flash Lite | Google | 82 | 79 | $0.145 |
| 4 | Gemma 4 31B (free) | Google | 81 | 81 | Free |
| 5 | Gemma 4 26B A4B | Google | 81 | 73 | $0.101 |
| 6 | MiniMax M2.5 (free) | MiniMax | 80 | 78 | Free |
| 7 | Hy3 preview | Tencent | 79 | 69 | $0.095 |
| 8 | Gemini 2.0 Flash | Google | 78 | 72 | $0.145 |
| 9 | Qwen3.5-Flash | Alibaba | 78 | 69 | $0.094 |
| 10 | DeepSeek V4 Flash | DeepSeek | 77 | 72 | $0.161 |
| 11 | Qwen3.5-9B | Alibaba | 77 | 67 | $0.057 |
| 12 | Grok 4.1 Fast | xAI | 76 | 78 | $0.245 |
| 13 | Gemma 4 26B A4B (free) | Google | 76 | 73 | Free |
| 14 | GLM 4.5 Air (free) | Zhipu AI | 75 | 71 | Free |
| 15 | Qwen3 235B A22B Instruct 2507 | Alibaba | 75 | 65 | $0.075 |
| 16 | MiniMax M2.5 | MiniMax | 74 | 78 | $0.300 |
| 17 | Qwen3.5-35B-A3B | Alibaba | 74 | 76 | $0.269 |
| 18 | Step 3.5 Flash | StepFun | 74 | 67 | $0.130 |
| 19 | Llama 3.3 70B Instruct | Meta | 74 | 67 | $0.133 |
| 20 | GPT-5.1-Codex-Mini | OpenAI | 73 | 87 | $0.513 |
| 21 | GPT-5.4 Nano | OpenAI | 73 | 79 | $0.358 |
| 22 | Grok 4 Fast | xAI | 73 | 73 | $0.245 |
| 23 | GLM 4.5 Air | Zhipu AI | 72 | 71 | $0.238 |
| 24 | GPT-4o-mini | OpenAI | 72 | 69 | $0.218 |
| 25 | Qwen3 Next 80B A3B Instruct (free) | Alibaba | 72 | 67 | Free |
| 26 | GLM 4.7 Flash | Zhipu AI | 72 | 64 | $0.111 |
| 27 | Gemini 3.1 Flash Lite Preview | Google | 71 | 80 | $0.438 |
| 28 | DeepSeek V3 0324 | DeepSeek | 71 | 72 | $0.286 |
| 29 | DeepSeek V3.1 | DeepSeek | 71 | 69 | $0.240 |
| 30 | Llama 3.3 70B Instruct (free) | Meta | 71 | 66 | Free |
| 31 | Qwen3 30B A3B Thinking 2507 | Alibaba | 71 | 64 | $0.128 |
| 32 | Qwen3.5-27B | Alibaba | 70 | 77 | $0.400 |
| 33 | DeepSeek V3.2 | DeepSeek | 70 | 70 | $0.271 |
| 34 | DeepSeek V3.2 Exp | DeepSeek | 70 | 70 | $0.291 |
| 35 | Llama 4 Maverick | Meta | 70 | 67 | $0.218 |
| 36 | Qwen3 30B A3B | Alibaba | 70 | 64 | $0.144 |
| 37 | Qwen3 VL 235B A22B Instruct | Alibaba | 69 | 69 | $0.302 |
| 38 | Qwen3 Next 80B A3B Instruct | Alibaba | 69 | 67 | $0.242 |
| 39 | Qwen3.5-122B-A10B | Alibaba | 68 | 78 | $0.533 |
| 40 | MiniMax M2 | MiniMax | 68 | 72 | $0.367 |
| 41 | Qwen3 8B | Alibaba | 68 | 61 | $0.102 |
| 42 | Phi 4 | Microsoft | 68 | 60 | $0.076 |
| 43 | DeepSeek V4 Pro | DeepSeek | 67 | 76 | $0.500 |
| 44 | DeepSeek V3.1 Terminus | DeepSeek | 67 | 69 | $0.372 |
| 45 | Qwen3 Next 80B A3B Thinking | Alibaba | 67 | 64 | $0.200 |
| 46 | Trinity Large Preview | arcee-ai | 67 | 64 | $0.195 |
| 47 | Gemini 3 Flash Preview | Google | 66 | 88 | $0.875 |
| 48 | Gemini 2.5 Flash | Google | 66 | 79 | $0.630 |
| 49 | MiniMax M2.1 | MiniMax | 66 | 70 | $0.389 |
| 50 | DeepSeek V3 | DeepSeek | 66 | 70 | $0.406 |
| 51 | Qwen3.5 397B A17B | Alibaba | 65 | 80 | $0.683 |
| 52 | Gemma 2 27B | Google | 65 | 77 | $0.650 |
| 53 | Qwen3.6 Plus | Alibaba | 65 | 75 | $0.569 |
| 54 | MiniMax M2-her | MiniMax | 65 | 69 | $0.435 |
| 55 | Trinity Large Thinking | arcee-ai | 65 | 65 | $0.315 |
| 56 | Llama 3.3 Nemotron Super 49B V1.5 | NVIDIA | 65 | 61 | $0.145 |
| 57 | MiniMax M2.7 | MiniMax | 64 | 68 | $0.434 |
| 58 | Qwen3 235B A22B Thinking 2507 | Alibaba | 64 | 65 | $0.351 |
| 59 | Gemini 2.0 Flash Lite | Google | 64 | 59 | $0.109 |
| 60 | GLM 4.7 | Zhipu AI | 63 | 73 | $0.603 |
| 61 | Llama 3.1 70B Instruct | Meta | 63 | 65 | $0.400 |
| 62 | GPT-4o-mini Search Preview | OpenAI | 63 | 61 | $0.218 |
| 63 | GLM 5 | Zhipu AI | 62 | 78 | $0.798 |
| 64 | MiMo-V2.5 | Xiaomi | 62 | 72 | $0.640 |
| 65 | GLM 4.6 | Zhipu AI | 62 | 71 | $0.617 |
| 66 | GLM 4.6V | Zhipu AI | 62 | 65 | $0.390 |
| 67 | Grok 3 Mini Beta | xAI | 62 | 63 | $0.330 |
| 68 | gpt-oss-20b | OpenAI | 62 | 57 | $0.047 |
| 69 | MiniMax M1 | MiniMax | 61 | 71 | $0.670 |
| 70 | GLM 4.5 | Zhipu AI | 60 | 75 | $0.840 |
| 71 | Qwen3 VL 235B A22B Thinking | Alibaba | 60 | 68 | $0.611 |
| 72 | Mercury 2 | Inception | 60 | 61 | $0.325 |
| 73 | Mistral Large 3 2512 | Mistral AI | 59 | 67 | $0.650 |
| 74 | GPT-5 Mini | OpenAI | 59 | 64 | $0.513 |
| 75 | Grok 4.20 | xAI | 58 | 89 | $1.44 |
| 76 | Kimi K2.6 | Moonshot AI | 56 | 76 | $1.16 |
| 77 | GPT-5.4 Mini | OpenAI | 55 | 79 | $1.31 |
| 78 | Composer 2 | Cursor | 55 | 66 | $0.800 |
| 79 | ERNIE 4.5 300B A47B | Baidu | 55 | 60 | $0.403 |
| 80 | gpt-oss-20b (free) | OpenAI | 55 | 57 | Free |
| 81 | MiMo-V2.5-Pro | Xiaomi | 54 | 76 | $1.30 |
| 82 | Grok 4.3 | xAI | 53 | 76 | $1.44 |
| 83 | GLM 5.1 | Zhipu AI | 53 | 76 | $1.42 |
| 84 | MiMo-V2-Pro | Xiaomi | 53 | 74 | $1.30 |
| 85 | Nova 2 Lite | Amazon | 53 | 61 | $0.630 |
| 86 | GPT-4o-mini (2024-07-18) | OpenAI | 53 | 56 | $0.218 |
| 87 | GLM 4.5V | Zhipu AI | 52 | 62 | $0.780 |
| 88 | Qwen3 Max Thinking | Alibaba | 51 | 68 | $1.25 |
| 89 | Qwen3 Max | Alibaba | 50 | 67 | $1.25 |
| 90 | Kimi K2.5 | Moonshot AI | 50 | 59 | $0.674 |
| 91 | Llama 4 Scout | Meta | 50 | 54 | $0.113 |
| 92 | Olmo 3 32B Think | Allen AI | 49 | 55 | $0.203 |
| 93 | GLM 5 Turbo | Zhipu AI | 48 | 71 | $1.62 |
| 94 | Claude Haiku 4.5 | Anthropic | 48 | 70 | $1.60 |
| 95 | Llama 3 70B Instruct | Meta | 48 | 57 | $0.545 |
| 96 | Qwen3.6 Max Preview | Alibaba | 47 | 75 | $1.82 |
| 97 | Grok 4.20 Multi-Agent | xAI | 44 | 88 | $2.60 |
| 98 | GPT-5.1-Codex-Max | OpenAI | 44 | 88 | $2.56 |
| 99 | GPT-5 Codex | OpenAI | 44 | 88 | $2.56 |
| 100 | GPT-5 | OpenAI | 44 | 88 | $2.56 |
| 101 | GPT-5.1 | OpenAI | 44 | 87 | $2.56 |
| 102 | GPT-5.1-Codex | OpenAI | 44 | 87 | $2.56 |
| 103 | GPT-5.1 Chat | OpenAI | 44 | 87 | $2.56 |
| 104 | Phi 4 Mini Instruct | Microsoft | 44 | 53 | $0.121 |
| 105 | Gemini 2.5 Pro | Google | 43 | 84 | $2.56 |
| 106 | Gemini 2.5 Pro Preview 06-05 | Google | 43 | 84 | $2.56 |
| 107 | Gemini 2.5 Pro Preview 05-06 | Google | 43 | 84 | $2.56 |
| 108 | Claude 3.5 Haiku | Anthropic | 41 | 58 | $1.28 |
| 109 | GPT-5 Chat | OpenAI | 38 | 71 | $2.56 |
| 110 | Composer 2 Fast | Cursor | 38 | 66 | $2.40 |
| 111 | Qwen3 235B A22B | Alibaba | 38 | 54 | $0.660 |
| 112 | GPT-4.1 Mini | OpenAI | 38 | 54 | $0.580 |
| 113 | R1 0528 | DeepSeek | 37 | 79 | $3.00 |
| 114 | Mistral Large | Mistral AI | 36 | 66 | $2.60 |
| 115 | Mixtral 8x22B Instruct | Mistral AI | 35 | 63 | $2.60 |
| 116 | GPT-4.1 | OpenAI | 34 | 67 | $2.90 |
| 117 | Kimi K2 Thinking | Moonshot AI | 34 | 53 | $0.885 |
| 118 | GPT-5.2-Codex | OpenAI | 33 | 90 | $3.59 |
| 119 | GPT-5.2 | OpenAI | 33 | 90 | $3.59 |
| 120 | GPT-5.3-Codex | OpenAI | 33 | 89 | $3.59 |
| 121 | Kimi K2 0905 | Moonshot AI | 33 | 52 | $0.640 |
| 122 | GPT-5.3 Chat | OpenAI | 32 | 87 | $3.59 |
| 123 | Gemini 3.1 Pro Preview Custom Tools | Google | 32 | 81 | $3.50 |
| 124 | Gemini 3.1 Pro Preview | Google | 32 | 81 | $3.50 |
| 125 | GPT-5.2 Chat | OpenAI | 30 | 77 | $3.59 |
| 126 | Grok 3 Mini | xAI | 30 | 51 | $0.330 |
| 127 | R1 | DeepSeek | 29 | 73 | $3.60 |
| 128 | Claude 3 Haiku | Anthropic | 29 | 50 | $0.400 |
| 129 | GPT-4o (2024-08-06) | OpenAI | 28 | 71 | $3.63 |
| 130 | GPT-4o | OpenAI | 28 | 71 | $3.63 |
| 131 | GPT-4o Audio | OpenAI | 28 | 70 | $3.63 |
| 132 | GPT-4o Search Preview | OpenAI | 28 | 70 | $3.63 |
| 133 | Kimi K2 0711 | Moonshot AI | 28 | 51 | $0.830 |
| 134 | Mistral Large 2407 | Mistral AI | 27 | 56 | $2.60 |
| 135 | Command R (08-2024) | Cohere | 25 | 49 | $0.218 |
| 136 | o4 Mini | OpenAI | 22 | 81 | $4.24 |
| 137 | GPT-5.4 | OpenAI | 21 | 92 | $4.38 |
| 138 | o4 Mini High | OpenAI | 20 | 72 | $4.24 |
| 139 | Devstral Small 1.1 | Mistral AI | 20 | 47 | $0.130 |
| 140 | GPT-5 Nano | OpenAI | 17 | 46 | $0.102 |
| 141 | GPT-4o (2024-11-20) | OpenAI | 16 | 53 | $3.63 |
| 142 | GPT-5.4 Pro | OpenAI | 15 | 92 | $52.50 |
| 143 | GPT-5.2 Pro | OpenAI | 15 | 91 | $43.05 |
| 144 | Claude Opus 4.6 (Fast) | Anthropic | 15 | 90 | $48.00 |
| 145 | Claude Opus 4.6 | Anthropic | 15 | 90 | $8.00 |
| 146 | GPT-5 Pro | OpenAI | 15 | 89 | $30.75 |
| 147 | Grok 4 | xAI | 15 | 88 | $4.80 |
| 148 | o3 Deep Research | OpenAI | 15 | 87 | $68.50 |
| 149 | o3 Pro | OpenAI | 15 | 87 | $137.00 |
| 150 | o3 | OpenAI | 15 | 87 | $13.70 |
| 151 | Claude Sonnet 4.6 | Anthropic | 14 | 85 | $4.80 |
| 152 | Claude Opus 4.5 | Anthropic | 14 | 85 | $8.00 |
| 153 | Claude Sonnet 4.5 | Anthropic | 14 | 82 | $4.80 |
| 154 | Claude Opus 4 | Anthropic | 14 | 82 | $24.00 |
| 155 | o4 Mini Deep Research | OpenAI | 14 | 81 | $7.70 |
| 156 | Claude Opus 4.7 | Anthropic | 14 | 79 | $8.00 |
| 157 | GPT-5.5 Pro | OpenAI | 14 | 79 | $52.50 |
| 158 | GPT-5.5 | OpenAI | 14 | 79 | $8.75 |
| 159 | Claude Opus 4.1 | Anthropic | 13 | 75 | $24.00 |
| 160 | Claude 3.7 Sonnet (thinking) | Anthropic | 13 | 75 | $4.80 |
| 161 | o3 Mini | OpenAI | 13 | 75 | $7.54 |
| 162 | Claude Sonnet 4 | Anthropic | 13 | 74 | $4.80 |
| 163 | o1 | OpenAI | 13 | 74 | $84.75 |
| 164 | Grok 3 | xAI | 13 | 74 | $4.80 |
| 165 | Grok 3 Beta | xAI | 13 | 74 | $4.80 |
| 166 | o1-pro | OpenAI | 13 | 73 | $1477.50 |
| 167 | Claude 3.7 Sonnet | Anthropic | 13 | 73 | $4.80 |
| 168 | GPT-4o (2024-05-13) | OpenAI | 13 | 71 | $6.50 |
| 169 | Command A | Cohere | 13 | 51 | $3.63 |
| 170 | GPT-4 Turbo | OpenAI | 12 | 67 | $13.00 |
| 171 | GPT-4 (older v0314) | OpenAI | 12 | 65 | $34.50 |
| 172 | GPT-4 | OpenAI | 12 | 65 | $34.50 |
| 173 | o3 Mini High | OpenAI | 12 | 63 | $7.54 |
| 174 | Devstral 2 2512 | Mistral AI | 12 | 46 | $0.640 |
| 175 | GPT-4 Turbo Preview | OpenAI | 11 | 60 | $13.00 |
| 176 | GPT-4 Turbo (older v1106) | OpenAI | 11 | 60 | $13.00 |
| 177 | Devstral Medium | Mistral AI | 11 | 45 | $0.640 |
| 178 | Llama 3.1 8B Instruct | Meta | 11 | 44 | $0.025 |
| 179 | Command R (08-2024) | Cohere | 9 | 49 | $3.63 |
| 180 | GPT-4.1 Nano | OpenAI | 7 | 42 | $0.145 |
| 181 | R1 Distill Llama 70B | DeepSeek | 5 | 42 | $1.55 |
| 182 | gpt-oss-120b | OpenAI | 5 | 41 | $0.060 |
| 183 | Ring-2.6-1T (free) | inclusionai | 4 | 40 | Free |
| 184 | CoBuddy (free) | Baidu | 4 | 40 | Free |
| 185 | Granite 4.1 8B | IBM | 4 | 40 | $0.057 |
| 186 | Nemotron 3 Nano Omni (free) | NVIDIA | 4 | 40 | Free |
| 187 | Laguna XS.2 (free) | poolside | 4 | 40 | Free |
| 188 | Laguna M.1 (free) | poolside | 4 | 40 | Free |
| 189 | Qwen3.6 Flash | Alibaba | 4 | 40 | $0.438 |
| 190 | Qwen3.6 35B A3B | Alibaba | 4 | 40 | $0.277 |
| 191 | Ling-2.6-flash | inclusionai | 4 | 40 | $0.104 |
| 192 | Qianfan-OCR-Fast (free) | Baidu | 4 | 40 | Free |
| 193 | Lyria 3 Pro Preview | Google | 4 | 40 | Free |
| 194 | Lyria 3 Clip Preview | Google | 4 | 40 | Free |
| 195 | KAT-Coder-Pro V2 | Kuaishou | 4 | 40 | $0.435 |
| 196 | Reka Edge | rekaai | 4 | 40 | $0.100 |
| 197 | Mistral Small 4 | Mistral AI | 4 | 40 | $0.218 |
| 198 | Nemotron 3 Super (free) | NVIDIA | 4 | 40 | Free |
| 199 | Nemotron 3 Super | NVIDIA | 4 | 40 | $0.144 |
| 200 | Seed-2.0-Mini | ByteDance | 4 | 40 | $0.145 |
| 201 | LFM2-24B-A2B | Liquid AI | 4 | 40 | $0.044 |
| 202 | Qwen3.5 Plus 2026-02-15 | Alibaba | 4 | 40 | $0.455 |
| 203 | Qwen3 Coder Next | Alibaba | 4 | 40 | $0.213 |
| 204 | Solar Pro 3 | Upstage | 4 | 40 | $0.218 |
| 205 | LFM2.5-1.2B-Thinking (free) | Liquid AI | 4 | 40 | Free |
| 206 | LFM2.5-1.2B-Instruct (free) | Liquid AI | 4 | 40 | Free |
| 207 | Seed 1.6 Flash | ByteDance | 4 | 40 | $0.109 |
| 208 | MiMo-V2-Flash | Xiaomi | 4 | 40 | $0.130 |
| 209 | Nemotron 3 Nano 30B A3B (free) | NVIDIA | 4 | 40 | Free |
| 210 | Nemotron 3 Nano 30B A3B | NVIDIA | 4 | 40 | $0.073 |
| 211 | Rnj 1 Instruct | essentialai | 4 | 40 | $0.150 |
| 212 | Ministral 3 14B 2512 | Mistral AI | 4 | 40 | $0.200 |
| 213 | Ministral 3 8B 2512 | Mistral AI | 4 | 40 | $0.150 |
| 214 | Ministral 3 3B 2512 | Mistral AI | 4 | 40 | $0.100 |
| 215 | Trinity Mini | arcee-ai | 4 | 40 | $0.061 |
| 216 | DeepSeek V3.2 Speciale | DeepSeek | 4 | 40 | $0.309 |
| 217 | gpt-oss-safeguard-20b | OpenAI | 4 | 40 | $0.109 |
| 218 | Nemotron Nano 12B 2 VL (free) | NVIDIA | 4 | 40 | Free |
| 219 | Qwen3 VL 32B Instruct | Alibaba | 4 | 40 | $0.151 |
| 220 | Granite 4.0 Micro | IBM | 4 | 40 | $0.031 |
| 221 | Qwen3 VL 8B Thinking | Alibaba | 4 | 40 | $0.304 |
| 222 | Qwen3 VL 8B Instruct | Alibaba | 4 | 40 | $0.143 |
| 223 | ERNIE 4.5 21B A3B Thinking | Baidu | 4 | 40 | $0.102 |
| 224 | Qwen3 VL 30B A3B Thinking | Alibaba | 4 | 40 | $0.345 |
| 225 | Qwen3 VL 30B A3B Instruct | Alibaba | 4 | 40 | $0.189 |
| 226 | Tongyi DeepResearch 30B A3B | Alibaba | 4 | 40 | $0.144 |
| 227 | Qwen3 Coder Flash | Alibaba | 4 | 40 | $0.312 |
| 228 | Qwen Plus 0728 (thinking) | Alibaba | 4 | 40 | $0.338 |
| 229 | Qwen Plus 0728 | Alibaba | 4 | 40 | $0.338 |
| 230 | Nemotron Nano 9B V2 (free) | NVIDIA | 4 | 40 | Free |
| 231 | Nemotron Nano 9B V2 | NVIDIA | 4 | 40 | $0.058 |
| 232 | Grok Code Fast 1 | xAI | 4 | 40 | $0.395 |
| 233 | ERNIE 4.5 21B A3B | Baidu | 4 | 40 | $0.102 |
| 234 | ERNIE 4.5 VL 28B A3B | Baidu | 4 | 40 | $0.203 |
| 235 | Codestral 2508 | Mistral AI | 4 | 40 | $0.390 |
| 236 | Qwen3 Coder 30B A3B Instruct | Alibaba | 4 | 40 | $0.100 |
| 237 | Qwen3 30B A3B Instruct 2507 | Alibaba | 4 | 40 | $0.122 |
| 238 | Qwen3 Coder 480B A35B (free) | Alibaba | 4 | 40 | Free |
| 239 | Qwen3 Coder 480B A35B | Alibaba | 4 | 40 | $0.457 |
| 240 | UI-TARS 7B | ByteDance | 4 | 40 | $0.115 |
| 241 | Hunyuan A13B Instruct | Tencent | 4 | 40 | $0.205 |
| 242 | Mistral Small 3.2 24B | Mistral AI | 4 | 40 | $0.094 |
| 243 | Gemma 3n 4B | Google | 4 | 40 | $0.069 |
| 244 | Spotlight | arcee-ai | 4 | 40 | $0.180 |
| 245 | Llama Guard 4 12B | Meta | 4 | 40 | $0.180 |
| 246 | Qwen3 14B | Alibaba | 4 | 40 | $0.087 |
| 247 | Qwen3 32B | Alibaba | 4 | 40 | $0.110 |
| 248 | Mistral Small 3.1 24B | Mistral AI | 4 | 40 | $0.382 |
| 249 | Gemma 3 4B | Google | 4 | 40 | $0.046 |
| 250 | Gemma 3 12B | Google | 4 | 40 | $0.054 |
| 251 | Reka Flash 3 | rekaai | 4 | 40 | $0.115 |
| 252 | Gemma 3 27B | Google | 4 | 40 | $0.092 |
| 253 | Saba | Mistral AI | 4 | 40 | $0.260 |
| 254 | Llama Guard 3 8B | Meta | 4 | 40 | $0.413 |
| 255 | Qwen VL Plus | Alibaba | 4 | 40 | $0.177 |
| 256 | Qwen-Turbo | Alibaba | 4 | 40 | $0.047 |
| 257 | Qwen2.5 VL 72B Instruct | Alibaba | 4 | 40 | $0.325 |
| 258 | Qwen-Plus | Alibaba | 4 | 40 | $0.338 |
| 259 | Mistral Small 3 | Mistral AI | 4 | 40 | $0.054 |
| 260 | MiniMax-01 | MiniMax | 4 | 40 | $0.335 |
| 261 | Nova Lite 1.0 | Amazon | 4 | 40 | $0.087 |
| 262 | Nova Micro 1.0 | Amazon | 4 | 40 | $0.051 |
| 263 | Llama 3.2 11B Vision Instruct | Meta | 4 | 40 | $0.245 |
| 264 | Qwen2.5 72B Instruct | Alibaba | 4 | 40 | $0.366 |
| 265 | Mistral Nemo | Mistral AI | 4 | 40 | $0.022 |
| 266 | SWE-1.5 | Windsurf | 4 | 40 | Free |
| 267 | ALLaM 7B Instruct (preview) | HUMAIN | 4 | 40 | Free |
| 268 | ALLaM 2 7B Instruct | HUMAIN | 4 | 40 | Free |
| 269 | ALLaM 34B | HUMAIN | 4 | 40 | Free |
| 270 | Falcon-H1-Arabic 34B Instruct | TII | 4 | 40 | Free |
| 271 | Falcon-H1-Arabic 7B Instruct | TII | 4 | 40 | Free |
| 272 | Falcon-H1-Arabic 3B Instruct | TII | 4 | 40 | Free |
| 273 | Falcon Arabic 7B Instruct | TII | 4 | 40 | Free |
| 274 | Falcon3 10B Instruct | TII | 4 | 40 | Free |
| 275 | Falcon3 7B Instruct | TII | 4 | 40 | Free |
| 276 | gpt-oss-120b (free) | OpenAI | 4 | 40 | Free |
| 277 | Anthropic Claude Haiku Latest | ~anthropic | 3 | 40 | $1.60 |
| 278 | OpenAI GPT Mini Latest | ~openai | 3 | 40 | $1.31 |
| 279 | MoonshotAI Kimi Latest | ~moonshotai | 3 | 40 | $1.16 |
| 280 | Google Gemini Flash Latest | ~google | 3 | 40 | $0.875 |
| 281 | Qwen3.5 Plus 2026-04-20 | Alibaba | 3 | 40 | $0.700 |
| 282 | Qwen3.6 27B | Alibaba | 3 | 40 | $0.752 |
| 283 | Ling-2.6-1T | inclusionai | 3 | 40 | $0.630 |
| 284 | GLM 5V Turbo | Zhipu AI | 3 | 40 | $1.62 |
| 285 | MiMo-V2-Omni | Xiaomi | 3 | 40 | $0.640 |
| 286 | Seed-2.0-Lite | ByteDance | 3 | 40 | $0.513 |
| 287 | Aion-2.0 | aion-labs | 3 | 40 | $0.920 |
| 288 | Palmyra X5 | Writer | 3 | 40 | $1.41 |
| 289 | GPT Audio Mini | OpenAI | 3 | 40 | $0.870 |
| 290 | Seed 1.6 | ByteDance | 3 | 40 | $0.513 |
| 291 | Cogito v2.1 671B | deepcogito | 3 | 40 | $1.25 |
| 292 | Qwen3 Coder Plus | Alibaba | 3 | 40 | $1.04 |
| 293 | Mistral Medium 3.1 | Mistral AI | 3 | 40 | $0.640 |
| 294 | ERNIE 4.5 VL 424B A47B | Baidu | 3 | 40 | $0.545 |
| 295 | Virtuoso Large | arcee-ai | 3 | 40 | $0.817 |
| 296 | Coder Large | arcee-ai | 3 | 40 | $0.545 |
| 297 | Aion-1.0-Mini | aion-labs | 3 | 40 | $0.805 |
| 298 | Qwen VL Max | Alibaba | 3 | 40 | $0.754 |
| 299 | Qwen-Max | Alibaba | 3 | 40 | $1.51 |
| 300 | Sonar | Perplexity | 3 | 40 | $1.00 |
| 301 | Nova Pro 1.0 | Amazon | 3 | 40 | $1.16 |
| 302 | GPT-3.5 Turbo (older v0613) | OpenAI | 3 | 40 | $1.15 |
| 303 | GPT-3.5 Turbo | OpenAI | 3 | 40 | $0.650 |
| 304 | Qwen2.5 7B Instruct | Alibaba | 3 | 38 | $0.049 |
| 305 | Mistral Medium 3.5 | Mistral AI | 2 | 40 | $2.40 |
| 306 | Google Gemini Pro Latest | ~google | 2 | 40 | $3.50 |
| 307 | GPT Audio | OpenAI | 2 | 40 | $3.63 |
| 308 | Jamba Large 1.7 | AI21 Labs | 2 | 40 | $2.90 |
| 309 | Sonar Reasoning Pro | Perplexity | 2 | 40 | $2.90 |
| 310 | Sonar Deep Research | Perplexity | 2 | 40 | $2.90 |
| 311 | Mistral Large 2411 | Mistral AI | 2 | 40 | $2.60 |
| 312 | Pixtral Large 2411 | Mistral AI | 2 | 40 | $2.60 |
| 313 | GPT-3.5 Turbo 16k | OpenAI | 2 | 40 | $3.15 |
| 314 | ALLaM 1 13B Instruct | HUMAIN | 2 | 39 | $1.80 |
| 315 | Falcon Mamba 7B Instruct | TII | 2 | 38 | Free |
| 316 | R1 Distill Qwen 32B | DeepSeek | 2 | 37 | $0.594 |
| 317 | Command R7B (12-2024) | Cohere | 2 | 36 | $0.054 |
| 318 | GPT Chat Latest | OpenAI | 1 | 40 | $8.75 |
| 319 | Anthropic Claude Sonnet Latest | ~anthropic | 1 | 40 | $4.80 |
| 320 | OpenAI GPT Latest | ~openai | 1 | 40 | $8.75 |
| 321 | Claude Opus Latest | ~anthropic | 1 | 40 | $8.00 |
| 322 | Nova Premier 1.0 | Amazon | 1 | 40 | $4.00 |
| 323 | Sonar Pro Search | Perplexity | 1 | 40 | $4.80 |
| 324 | Sonar Pro | Perplexity | 1 | 40 | $4.80 |
| 325 | Aion-1.0 | aion-labs | 1 | 40 | $4.60 |
| 326 | Inflection 3 Productivity | Inflection | 1 | 36 | $3.63 |
| 327 | Inflection 3 Pi | Inflection | 1 | 36 | $3.63 |
| 328 | GLM 4 32B | Zhipu AI | 1 | 36 | $0.100 |
| 329 | Qwen2.5 Coder 32B Instruct | Alibaba | 1 | 35 | $0.711 |
| 330 | GPT-3.5 Turbo Instruct | OpenAI | 1 | 35 | $1.57 |
| 331 | Llama 3 8B Instruct | Meta | 1 | 34 | $0.040 |
| 332 | Llama 3.2 3B Instruct (free) | Meta | 1 | 33 | Free |
| 333 | Llama 3.2 3B Instruct | Meta | 1 | 33 | $0.094 |
| 334 | autofixer-01 | Vercel | 1 | 32 | Free |
| 335 | WizardLM-2 8x22B | Microsoft | 0 | 28 | $0.620 |
| 336 | Mellum | JetBrains | 0 | 26 | Free |
| 337 | Maestro Reasoning | arcee-ai | 0 | 26 | $1.26 |
| 338 | Gemini 3.1 Flash Lite | Google | 0 | 24 | $0.438 |
| 339 | Llama 3.2 1B Instruct | Meta | 0 | 18 | $0.053 |
| 340 | Mistral 7B Instruct v0.1 | Mistral AI | 0 | 16 | $0.122 |
| 341 | Mistral Medium 3 | Mistral AI | 0 | 15 | $0.640 |
LMC ValueScore is a tilted Cobb-Douglas index (quality weighted 60 percent) gated by a smooth production-readiness sigmoid, with price anchors frozen to Q2 2026: p10 = $0.087/M, p90 = $4.80/M.
Optimize for value when running high-volume workloads like classification, summarization, or data extraction, where a small accuracy delta does not change the outcome. Every point of price-efficiency advantage compounds quickly at scale.
Optimize for quality when accuracy is critical: complex reasoning, code generation for production, or agentic workflows where errors compound. A flagship that costs 10x more can save 100x in debugging and rework.
A naive score/price ratio explodes as price approaches zero, which is why free models would always dominate a simple division. LMC ValueScore instead uses a tilted Cobb-Douglas utility index (Q^0.6 times P^0.4, where P is a 0-100 price-efficiency score) with a smooth sigmoid gate centered at a composite quality of 50. The gate prevents unproven free models from dominating, and the 60/40 tilt keeps quality slightly more important than price efficiency, which matches what production buyers actually weigh.
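The pieces described above can be sketched in Python. The 60/40 exponents, the frozen p10/p90 anchors, the soft floor of 1, and the gate at composite 50 are stated in this methodology; the log-linear price interpolation and the gate steepness `k=0.25` are illustrative assumptions, not the published formula.

```python
import math

P10, P90 = 0.087, 4.80  # frozen Q2 2026 price anchors, $/M blended

def price_efficiency(blended_price: float) -> float:
    """Map blended $/M to a 0-100 score with a soft floor of 1.
    Log-linear interpolation between the anchors is an assumption."""
    if blended_price <= 0:        # free tier: maximum raw efficiency
        return 100.0
    t = (math.log(blended_price) - math.log(P10)) / (math.log(P90) - math.log(P10))
    return max(1.0, min(100.0, 100.0 * (1.0 - t)))

def readiness_gate(quality: float, midpoint: float = 50.0, k: float = 0.25) -> float:
    """Smooth sigmoid gate centered at composite quality 50 (k assumed)."""
    return 1.0 / (1.0 + math.exp(-k * (quality - midpoint)))

def value_score(quality: float, blended_price: float) -> int:
    """Tilted Cobb-Douglas: quality^0.6 * price_efficiency^0.4, gated."""
    raw = (quality ** 0.6) * (price_efficiency(blended_price) ** 0.4)
    return round(raw * readiness_gate(quality))
```

Under these assumptions, `value_score(81, 0.167)` returns 82 (Gemma 4 31B) and `value_score(90, 8.00)` returns 15 (Claude Opus 4.6), consistent with the table above.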
Early versions of the formula sent premium flagships to an LMC ValueScore of 0 whenever their price passed the empirical p90 anchor. That produced a PR-disastrous "Claude = 0, Gemini = 82" headline without distinguishing a $4.80/M model from a $140/M one. The final v3 formula applies a soft floor of 1 to the price-efficiency component so quality still shows through. Claude Opus 4.6, at composite quality 90, lands around 15 rather than 0, which is honest: it is clearly not a best-value pick at $8.00/M blended, but it remains one of the highest-quality models available.
LMC ValueScore uses empirical p10 and p90 anchors computed from the priced text-model catalog: p10 = $0.087/M and p90 = $4.80/M as of Q2 2026. We deliberately freeze these quarterly rather than recomputing them on every refresh. Morningstar and MSCI rating systems take the same approach: unstable anchors produce unstable rankings that churn for reasons unrelated to the underlying models, and that churn erodes both search visibility and user trust.
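A minimal sketch of the quarterly anchor refresh, assuming the catalog is a flat list of blended prices and that free (zero-price) listings are excluded before taking percentiles:

```python
import statistics

def quarterly_anchors(blended_prices):
    """Compute (p10, p90) over the priced catalog; free models excluded.
    The returned pair is frozen until the next quarterly refresh."""
    priced = [p for p in blended_prices if p > 0]
    cuts = statistics.quantiles(priced, n=10)  # nine decile cut points
    return cuts[0], cuts[8]                    # p10 and p90
```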
Yes. Reasoning models bill hidden "thinking" tokens on top of the visible output, so the list output price understates real cost. LMC ValueScore multiplies the output-token contribution by a family-specific expansion factor: o1 at 8x, o1-pro at 15x, o3 at 10x, R1 at 8x, QwQ at 6x, and so on. These factors come from the o1 system card, the DeepSeek R1 paper, and published cookbook traces. That is why R1 lands around 29-37 on LMC ValueScore even though its nominal per-token price looks competitive.
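The expansion factors quoted above reduce to a simple lookup. The dict keys, the function name, and the default of 1.0 for non-reasoning families are illustrative assumptions about how the table is keyed:

```python
# Family-specific output expansion factors cited above; families not
# listed are treated as non-reasoning (no hidden thinking tokens).
EXPANSION = {"o1": 8.0, "o1-pro": 15.0, "o3": 10.0, "r1": 8.0, "qwq": 6.0}

def effective_output_price(list_output_per_m: float, family: str) -> float:
    """Inflate the list output price by the family's expansion factor."""
    return list_output_per_m * EXPANSION.get(family, 1.0)
```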
Real chat and RAG workloads are input-heavy: you stuff a lot of context (system prompt, user history, retrieved documents) and get relatively short answers back. Anthropic published caching statistics showing median input-to-output ratios around 5:1 to 10:1 for production traffic, and OpenAI DevDay data agrees. A flat average would bias against reasoning models and against any provider with expensive output tokens. 85/15 is the tightest fit to the median production workload we can defend with public data.
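The 85/15 split and the reasoning expansion combine into one blend (parameter names are illustrative):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  expansion: float = 1.0) -> float:
    """85% input / 15% output weighting; reasoning models pass their
    family expansion factor so hidden thinking tokens are priced in."""
    return 0.85 * input_per_m + 0.15 * output_per_m * expansion
```

For a hypothetical reasoning model at $0.50/M in and $2.00/M out with an 8x expansion, the blend is 0.85 * 0.50 + 0.15 * 2.00 * 8 = $2.825/M, well above its nominal list prices.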
Gemma 4 31B from Google currently leads the rankings with an LMC ValueScore of 82, combining a composite quality score of 81 with a blended price of $0.167 per million tokens. The full ranked table above updates hourly.