Daily rank changes, score trends, and performance data for all coding AI models. Scores update hourly from live model data.
| # | Model | Provider | Score | 24h | 7d | State | 14d Trend |
|---|---|---|---|---|---|---|---|
| 1 | GPT-5.4 Pro | OpenAI | 91.9 | 0 | 0 | stable | |
| 2 | GPT-5.4 | OpenAI | 91.9 | 0 | 0 | stable | |
| 3 | GPT-5.2 Pro | OpenAI | 90.5 | 0 | 0 | stable | |
| 4 | Claude Opus 4.6 (Fast) | Anthropic | 90.4 | 0 | 0 | stable | |
| 5 | Claude Opus 4.6 | Anthropic | 90.4 | 0 | 0 | stable | |
| 6 | GPT-5.2-Codex | OpenAI | 89.7 | 0 | 0 | stable | |
| 7 | GPT-5.2 | OpenAI | 89.7 | 0 | 0 | stable | |
| 8 | Grok 4.20 | xAI | 88.8 | 0 | 0 | stable | |
| 9 | GPT-5.3-Codex | OpenAI | 88.7 | 0 | 0 | stable | |
| 10 | GPT-5 Pro | OpenAI | 88.7 | 0 | 0 | stable | |
| 11 | Gemini 3 Flash Preview | Google | 88.4 | 0 | 0 | stable | |
| 12 | Grok 4 | xAI | 88.1 | 0 | 0 | stable | |
| 13 | Grok 4.20 Multi-Agent | xAI | 87.9 | 0 | 0 | stable | |
| 14 | GPT-5.1-Codex-Max | OpenAI | 87.8 | 0 | 0 | stable | |
| 15 | GPT-5 Codex | OpenAI | 87.8 | 0 | 0 | stable | |
| 16 | GPT-5 | OpenAI | 87.8 | 0 | 0 | stable | |
| 17 | GPT-5.3 Chat | OpenAI | 87.4 | 0 | 0 | stable | |
| 18 | GPT-5.1 | OpenAI | 87.0 | 0 | 0 | stable | |
| 19 | GPT-5.1-Codex | OpenAI | 87.0 | 0 | 0 | stable | |
| 20 | GPT-5.1-Codex-Mini | OpenAI | 87.0 | 0 | 0 | stable | |
| 21 | o3 Deep Research | OpenAI | 86.7 | 0 | 0 | stable | |
| 22 | o3 Pro | OpenAI | 86.7 | 0 | 0 | stable | |
| 23 | o3 | OpenAI | 86.7 | 0 | 0 | stable | |
| 24 | GPT-5.1 Chat | OpenAI | 86.6 | 0 | 0 | stable | |
| 25 | Claude Sonnet 4.6 | Anthropic | 85.2 | 0 | 0 | stable | |
| 26 | Claude Opus 4.5 | Anthropic | 85.1 | 0 | 0 | stable | |
| 27 | Gemini 2.5 Pro | Google | 83.5 | 0 | 0 | stable | |
| 28 | Gemini 2.5 Pro Preview 06-05 | Google | 83.5 | 0 | 0 | stable | |
| 29 | Gemini 2.5 Pro Preview 05-06 | Google | 83.5 | 0 | 0 | stable | |
| 30 | Claude Sonnet 4.5 | Anthropic | 82.4 | 0 | 0 | stable | |
| 31 | Claude Opus 4 | Anthropic | 82.1 | 0 | 0 | stable | |
| 32 | o4 Mini Deep Research | OpenAI | 81.4 | 0 | 0 | stable | |
| 33 | o4 Mini | OpenAI | 81.4 | 0 | 0 | stable | |
| 34 | Gemini 3.1 Pro Preview Custom Tools | Google | 81.1 | 0 | 0 | stable | |
| 35 | Gemini 3.1 Pro Preview | Google | 81.1 | 0 | 0 | stable | |
| 36 | Gemma 4 31B (free) | Google | 80.5 | 0 | 0 | stable | |
| 37 | Gemma 4 31B | Google | 80.5 | 0 | 0 | stable | |
| 38 | Gemini 3.1 Flash Lite Preview | Google | 79.6 | 0 | 0 | stable | |
| 39 | Qwen3.5 397B A17B | Alibaba | 79.6 | 0 | 0 | stable | |
| 40 | R1 0528 | DeepSeek | 79.4 | 0 | 0 | stable | |
| 41 | Claude Opus 4.7 | Anthropic | 79.3 | 0 | 0 | stable | |
| 42 | GPT-5.4 Nano | OpenAI | 79.3 | 0 | 0 | stable | |
| 43 | GPT-5.4 Mini | OpenAI | 79.3 | 0 | 0 | stable | |
| 44 | Gemini 2.5 Flash Lite Preview 09-2025 | Google | 79.1 | 0 | 0 | stable | |
| 45 | Gemini 2.5 Flash Lite | Google | 79.1 | 0 | 0 | stable | |
| 46 | Gemini 2.5 Flash | Google | 79.1 | 0 | 0 | stable | |
| 47 | GPT-5.5 Pro | OpenAI | 78.8 | 0 | 0 | stable | |
| 48 | GPT-5.5 | OpenAI | 78.8 | 0 | 0 | stable | |
| 49 | MiniMax M2.5 (free) | MiniMax | 78.2 | 0 | 0 | stable | |
| 50 | MiniMax M2.5 | MiniMax | 78.2 | 0 | 0 | stable | |
| 51 | GLM 5 | Zhipu AI | 78.0 | 0 | +1 | stable | |
| 52 | Grok 4.1 Fast | xAI | 78.0 | 0 | -1 | stable | |
| 53 | Qwen3.5-122B-A10B | Alibaba | 77.8 | 0 | 0 | stable | |
| 54 | Gemma 2 27B | Google | 77.4 | 0 | 0 | stable | |
| 55 | Qwen3.5-27B | Alibaba | 76.9 | 0 | 0 | stable | |
| 56 | GPT-5.2 Chat | OpenAI | 76.6 | 0 | +1 | stable | |
| 57 | Grok 4.3 | xAI | 76.4 | 0 | -1 | stable | |
| 58 | GLM 5.1 | Zhipu AI | 76.1 | 0 | 0 | stable | |
| 59 | Qwen3.5-35B-A3B | Alibaba | 76.0 | 0 | 0 | stable | |
| 60 | Kimi K2.6 | Moonshot AI | 75.9 | 0 | +2 | stable | |
| 61 | MiMo-V2.5-Pro | Xiaomi | 75.8 | 0 | 0 | stable | |
| 62 | DeepSeek V4 Pro | DeepSeek | 75.7 | 0 | -2 | stable | |
| 63 | Claude Opus 4.1 | Anthropic | 75.1 | 0 | 0 | stable | |
| 64 | GLM 4.5 | Zhipu AI | 75.1 | 0 | 0 | stable | |
| 65 | Qwen3.6 Plus | Alibaba | 74.7 | 0 | +1 | stable | |
| 66 | Claude 3.7 Sonnet (thinking) | Anthropic | 74.7 | 0 | -1 | stable | |
| 67 | Qwen3.6 Max Preview | Alibaba | 74.5 | 0 | 0 | stable | |
| 68 | o3 Mini | OpenAI | 74.5 | 0 | 0 | stable | |
| 69 | Claude Sonnet 4 | Anthropic | 74.4 | 0 | 0 | stable | |
| 70 | MiMo-V2-Pro | Xiaomi | 73.8 | 0 | 0 | stable | |
| 71 | o1 | OpenAI | 73.6 | 0 | 0 | stable | |
| 72 | Grok 3 | xAI | 73.5 | 0 | 0 | stable | |
| 73 | Grok 3 Beta | xAI | 73.5 | 0 | 0 | stable | |
| 74 | Gemma 4 26B A4B (free) | Google | 73.0 | 0 | 0 | stable | |
| 75 | Gemma 4 26B A4B | Google | 73.0 | 0 | 0 | stable | |
| 76 | R1 | DeepSeek | 73.0 | 0 | 0 | stable | |
| 77 | GLM 4.7 | Zhipu AI | 72.7 | 0 | 0 | stable | |
| 78 | o1-pro | OpenAI | 72.7 | 0 | 0 | stable | |
| 79 | Claude 3.7 Sonnet | Anthropic | 72.7 | 0 | 0 | stable | |
| 80 | Grok 4 Fast | xAI | 72.5 | 0 | 0 | stable | |
| 81 | Gemini 2.0 Flash | Google | 72.3 | 0 | 0 | stable | |
| 82 | DeepSeek V4 Flash | DeepSeek | 72.1 | 0 | +1 | stable | |
| 83 | o4 Mini High | OpenAI | 72.1 | 0 | -1 | stable | |
| 84 | MiniMax M2 | MiniMax | 72.0 | 0 | 0 | stable | |
| 85 | DeepSeek V3 0324 | DeepSeek | 71.8 | 0 | +1 | stable | |
| 86 | MiMo-V2.5 | Xiaomi | 71.7 | 0 | -1 | stable | |
| 87 | GPT-4o (2024-08-06) | OpenAI | 71.2 | 0 | 0 | stable | |
| 88 | GPT-4o (2024-05-13) | OpenAI | 71.2 | 0 | 0 | stable | |
| 89 | GPT-4o | OpenAI | 71.2 | 0 | 0 | stable | |
| 90 | GLM 5 Turbo | Zhipu AI | 70.9 | 0 | 0 | stable | |
| 91 | MiniMax M1 | MiniMax | 70.8 | 0 | 0 | stable | |
| 92 | GLM 4.6 | Zhipu AI | 70.7 | 0 | 0 | stable | |
| 93 | GLM 4.5 Air (free) | Zhipu AI | 70.7 | 0 | 0 | stable | |
| 94 | GLM 4.5 Air | Zhipu AI | 70.7 | 0 | 0 | stable | |
| 95 | GPT-5 Chat | OpenAI | 70.5 | 0 | 0 | stable | |
| 96 | GPT-4o Audio | OpenAI | 70.4 | 0 | 0 | stable | |
| 97 | GPT-4o Search Preview | OpenAI | 70.4 | 0 | 0 | stable | |
| 98 | DeepSeek V3.2 | DeepSeek | 70.3 | 0 | 0 | stable | |
| 99 | DeepSeek V3.2 Exp | DeepSeek | 70.2 | 0 | 0 | stable | |
| 100 | MiniMax M2.1 | MiniMax | 69.9 | 0 | 0 | stable | |
| 101 | Claude Haiku 4.5 | Anthropic | 69.5 | 0 | 0 | stable | |
| 102 | DeepSeek V3 | DeepSeek | 69.5 | 0 | 0 | stable | |
| 103 | Qwen3 VL 235B A22B Instruct | Alibaba | 69.4 | 0 | 0 | stable | |
| 104 | DeepSeek V3.1 Terminus | DeepSeek | 69.4 | 0 | 0 | stable | |
| 105 | GPT-4o-mini | OpenAI | 69.3 | 0 | 0 | stable | |
| 106 | MiniMax M2-her | MiniMax | 69.1 | 0 | 0 | stable | |
| 107 | DeepSeek V3.1 | DeepSeek | 69.1 | 0 | 0 | stable | |
| 108 | Hy3 preview | Tencent | 69.0 | +94 | +234 | preliminary | |
| 109 | Qwen3.5-Flash | Alibaba | 68.7 | -1 | -1 | stable | |
| 110 | MiniMax M2.7 | MiniMax | 68.4 | -1 | 0 | stable | |
| 111 | Qwen3 Max Thinking | Alibaba | 68.2 | -1 | -2 | stable | |
| 112 | Qwen3 VL 235B A22B Thinking | Alibaba | 67.7 | -1 | -1 | stable | |
| 113 | Qwen3 Max | Alibaba | 67.4 | -1 | -1 | stable | |
| 114 | Llama 4 Maverick | Meta | 67.1 | -1 | -1 | stable | |
| 115 | Mistral Large 3 2512 | Mistral AI | 67.0 | -1 | -1 | stable | |
| 116 | Qwen3 Next 80B A3B Instruct (free) | Alibaba | 67.0 | -1 | -1 | stable | |
| 117 | Qwen3 Next 80B A3B Instruct | Alibaba | 67.0 | -1 | -1 | stable | |
| 118 | GPT-4.1 | OpenAI | 66.9 | -1 | -1 | stable | |
| 119 | Step 3.5 Flash | StepFun | 66.8 | -1 | 0 | stable | |
| 120 | Llama 3.3 70B Instruct | Meta | 66.8 | -1 | -2 | stable | |
| 121 | GPT-4 Turbo | OpenAI | 66.7 | -1 | -1 | stable | |
| 122 | Qwen3.5-9B | Alibaba | 66.5 | -1 | -1 | stable | |
| 123 | Mistral Large | Mistral AI | 65.9 | -1 | -1 | stable | |
| 124 | Llama 3.3 70B Instruct (free) | Meta | 65.7 | -1 | -1 | stable | |
| 125 | Composer 2 | Cursor | 65.7 | -1 | -1 | stable | |
| 126 | Composer 2 Fast | Cursor | 65.7 | -1 | -1 | stable | |
| 127 | Qwen3 235B A22B Thinking 2507 | Alibaba | 65.3 | -1 | 0 | stable | |
| 128 | Llama 3.1 70B Instruct | Meta | 65.3 | -1 | 0 | stable | |
| 129 | Trinity Large Thinking | arcee-ai | 65.2 | -1 | -3 | stable | |
| 130 | GLM 4.6V | Zhipu AI | 64.8 | -1 | -1 | stable | |
| 131 | GPT-4 (older v0314) | OpenAI | 64.8 | -1 | -1 | stable | |
| 132 | GPT-4 | OpenAI | 64.8 | -1 | -1 | stable | |
| 133 | Qwen3 235B A22B Instruct 2507 | Alibaba | 64.7 | -1 | -1 | stable | |
| 134 | Qwen3 30B A3B Thinking 2507 | Alibaba | 64.1 | -1 | -1 | stable | |
| 135 | GPT-5 Mini | OpenAI | 63.9 | -1 | -1 | stable | |
| 136 | GLM 4.7 Flash | Zhipu AI | 63.7 | -1 | -1 | stable | |
| 137 | Qwen3 Next 80B A3B Thinking | Alibaba | 63.7 | -1 | -1 | stable | |
| 138 | Qwen3 30B A3B | Alibaba | 63.7 | -1 | -1 | stable | |
| 139 | Trinity Large Preview | arcee-ai | 63.6 | -1 | -1 | stable | |
| 140 | Mixtral 8x22B Instruct | Mistral AI | 63.4 | -1 | -1 | stable | |
| 141 | Grok 3 Mini Beta | xAI | 63.1 | -1 | -1 | stable | |
| 142 | o3 Mini High | OpenAI | 63.1 | -1 | -1 | stable | |
| 143 | GLM 4.5V | Zhipu AI | 61.5 | -1 | -1 | stable | |
| 144 | Mercury 2 | Inception | 61.0 | -1 | -1 | stable | |
| 145 | GPT-4o-mini Search Preview | OpenAI | 60.8 | -1 | -1 | stable | |
| 146 | Llama 3.3 Nemotron Super 49B V1.5 | NVIDIA | 60.6 | -1 | -1 | stable | |
| 147 | Qwen3 8B | Alibaba | 60.6 | -1 | -1 | stable | |
| 148 | Nova 2 Lite | Amazon | 60.5 | -1 | -1 | stable | |
| 149 | Phi 4 | Microsoft | 60.2 | -1 | -1 | stable | |
| 150 | GPT-4 Turbo Preview | OpenAI | 59.8 | -1 | -1 | stable | |
| 151 | GPT-4 Turbo (older v1106) | OpenAI | 59.8 | -1 | -1 | stable | |
| 152 | ERNIE 4.5 300B A47B | Baidu | 59.6 | -1 | -1 | stable | |
| 153 | Kimi K2.5 | Moonshot AI | 59.1 | -1 | -1 | stable | |
| 154 | Gemini 2.0 Flash Lite | Google | 59.0 | -1 | -1 | stable | |
| 155 | Claude 3.5 Haiku | Anthropic | 58.3 | -1 | -1 | stable | |
| 156 | gpt-oss-20b | OpenAI | 57.4 | -1 | 0 | stable | |
| 157 | Llama 3 70B Instruct | Meta | 57.0 | -1 | 0 | stable | |
| 158 | gpt-oss-20b (free) | OpenAI | 56.6 | -1 | 0 | stable | |
| 159 | GPT-4o-mini (2024-07-18) | OpenAI | 56.4 | -1 | 0 | stable | |
| 160 | Mistral Large 2407 | Mistral AI | 56.1 | -1 | 0 | stable | |
| 161 | Olmo 3 32B Think | Allen AI | 54.9 | -1 | 0 | stable | |
| 162 | Llama 4 Scout | Meta | 54.2 | -1 | 0 | stable | |
| 163 | Qwen3 235B A22B | Alibaba | 54.0 | -1 | 0 | stable | |
| 164 | GPT-4.1 Mini | OpenAI | 53.6 | -1 | 0 | stable | |
| 165 | Kimi K2 Thinking | Moonshot AI | 53.3 | -1 | 0 | stable | |
| 166 | GPT-4o (2024-11-20) | OpenAI | 52.9 | -1 | 0 | stable | |
| 167 | Phi 4 Mini Instruct | Microsoft | 52.7 | -1 | +175 | preliminary | |
| 168 | Kimi K2 0905 | Moonshot AI | 52.4 | -1 | -1 | stable | |
| 169 | Kimi K2 0711 | Moonshot AI | 51.4 | -1 | -1 | stable | |
| 170 | Command A | Cohere | 50.8 | -1 | -1 | stable | |
| 171 | Grok 3 Mini | xAI | 50.6 | -1 | -1 | stable | |
| 172 | Claude 3 Haiku | Anthropic | 50.4 | -1 | -1 | stable | |
| 173 | Command R+ (08-2024) | Cohere | 48.7 | 0 | 0 | stable | |
| 174 | Command R (08-2024) | Cohere | 48.7 | -1 | -1 | stable | |
| 175 | Devstral Small 1.1 | Mistral AI | 47.2 | -1 | -1 | stable | |
| 176 | GPT-5 Nano | OpenAI | 46.0 | -1 | -1 | stable | |
| 177 | Devstral 2 2512 | Mistral AI | 45.5 | -1 | -1 | stable | |
| 178 | Devstral Medium | Mistral AI | 45.3 | -1 | -1 | stable | |
| 179 | Llama 3.1 8B Instruct | Meta | 43.8 | -1 | 0 | stable | |
| 180 | R1 Distill Llama 70B | DeepSeek | 42.4 | -1 | 0 | stable | |
| 181 | GPT-4.1 Nano | OpenAI | 42.1 | -1 | 0 | stable | |
| 182 | gpt-oss-120b | OpenAI | 40.5 | -1 | 0 | stable | |
| 183 | Ring-2.6-1T (free) | inclusionai | 40.0 | -1 | +159 | preliminary | |
| 184 | CoBuddy (free) | Baidu | 40.0 | -1 | +158 | preliminary | |
| 185 | GPT Chat Latest | OpenAI | 40.0 | -1 | +157 | preliminary | |
| 186 | Granite 4.1 8B | IBM | 40.0 | -1 | -3 | stable | |
| 187 | Mistral Medium 3.5 | Mistral AI | 40.0 | -1 | +155 | preliminary | |
| 188 | Nemotron 3 Nano Omni (free) | NVIDIA | 40.0 | -1 | -4 | stable | |
| 189 | Laguna XS.2 (free) | poolside | 40.0 | -1 | -4 | stable | |
| 190 | Laguna M.1 (free) | poolside | 40.0 | -1 | -4 | stable | |
| 191 | Anthropic Claude Haiku Latest | ~anthropic | 40.0 | -1 | -4 | stable | |
| 192 | OpenAI GPT Mini Latest | ~openai | 40.0 | -1 | -4 | stable | |
| 193 | Google Gemini Pro Latest | ~google | 40.0 | -1 | -4 | stable | |
| 194 | MoonshotAI Kimi Latest | ~moonshotai | 40.0 | -1 | -4 | stable | |
| 195 | Google Gemini Flash Latest | ~google | 40.0 | -1 | -4 | stable | |
| 196 | Anthropic Claude Sonnet Latest | ~anthropic | 40.0 | -1 | -4 | stable | |
| 197 | OpenAI GPT Latest | ~openai | 40.0 | -1 | -4 | stable | |
| 198 | Qwen3.5 Plus 2026-04-20 | Alibaba | 40.0 | -1 | -4 | stable | |
| 199 | Qwen3.6 Flash | Alibaba | 40.0 | -1 | -4 | stable | |
| 200 | Qwen3.6 35B A3B | Alibaba | 40.0 | -1 | -4 | stable | |
| 201 | Qwen3.6 27B | Alibaba | 40.0 | -1 | -4 | stable | |
| 202 | Ling-2.6-1T | inclusionai | 40.0 | -1 | +140 | preliminary | |
| 203 | Ling-2.6-flash | inclusionai | 40.0 | 0 | -3 | stable | |
| 204 | Claude Opus Latest | ~anthropic | 40.0 | 0 | -3 | stable | |
| 205 | Qianfan-OCR-Fast (free) | Baidu | 40.0 | 0 | -3 | stable | |
| 206 | GLM 5V Turbo | Zhipu AI | 40.0 | 0 | -3 | stable | |
| 207 | Lyria 3 Pro Preview | Google | 40.0 | 0 | -3 | stable | |
| 208 | Lyria 3 Clip Preview | Google | 40.0 | 0 | -3 | stable | |
| 209 | KAT-Coder-Pro V2 | Kuaishou | 40.0 | 0 | -3 | stable | |
| 210 | Reka Edge | rekaai | 40.0 | 0 | -3 | stable | |
| 211 | MiMo-V2-Omni | Xiaomi | 40.0 | 0 | -3 | stable | |
| 212 | Mistral Small 4 | Mistral AI | 40.0 | 0 | -3 | stable | |
| 213 | Nemotron 3 Super (free) | NVIDIA | 40.0 | 0 | -3 | stable | |
| 214 | Nemotron 3 Super | NVIDIA | 40.0 | 0 | -3 | stable | |
| 215 | Seed-2.0-Lite | ByteDance | 40.0 | 0 | -3 | stable | |
| 216 | Seed-2.0-Mini | ByteDance | 40.0 | 0 | -3 | stable | |
| 217 | LFM2-24B-A2B | Liquid AI | 40.0 | 0 | -3 | stable | |
| 218 | Aion-2.0 | aion-labs | 40.0 | 0 | -3 | stable | |
| 219 | Qwen3.5 Plus 2026-02-15 | Alibaba | 40.0 | 0 | -3 | stable | |
| 220 | Qwen3 Coder Next | Alibaba | 40.0 | 0 | -3 | stable | |
| 221 | Solar Pro 3 | Upstage | 40.0 | 0 | -3 | stable | |
| 222 | Palmyra X5 | Writer | 40.0 | 0 | -3 | stable | |
| 223 | LFM2.5-1.2B-Thinking (free) | Liquid AI | 40.0 | 0 | -3 | stable | |
| 224 | LFM2.5-1.2B-Instruct (free) | Liquid AI | 40.0 | 0 | -3 | stable | |
| 225 | GPT Audio | OpenAI | 40.0 | 0 | -3 | stable | |
| 226 | GPT Audio Mini | OpenAI | 40.0 | 0 | -3 | stable | |
| 227 | Seed 1.6 Flash | ByteDance | 40.0 | 0 | -3 | stable | |
| 228 | Seed 1.6 | ByteDance | 40.0 | 0 | -3 | stable | |
| 229 | MiMo-V2-Flash | Xiaomi | 40.0 | 0 | -3 | stable | |
| 230 | Nemotron 3 Nano 30B A3B (free) | NVIDIA | 40.0 | 0 | -3 | stable | |
| 231 | Nemotron 3 Nano 30B A3B | NVIDIA | 40.0 | 0 | -3 | stable | |
| 232 | Rnj 1 Instruct | essentialai | 40.0 | 0 | -3 | stable | |
| 233 | Ministral 3 14B 2512 | Mistral AI | 40.0 | 0 | -3 | stable | |
| 234 | Ministral 3 8B 2512 | Mistral AI | 40.0 | 0 | -3 | stable | |
| 235 | Ministral 3 3B 2512 | Mistral AI | 40.0 | 0 | -3 | stable | |
| 236 | Trinity Mini | arcee-ai | 40.0 | 0 | -3 | stable | |
| 237 | DeepSeek V3.2 Speciale | DeepSeek | 40.0 | 0 | -3 | stable | |
| 238 | Cogito v2.1 671B | deepcogito | 40.0 | 0 | -3 | stable | |
| 239 | Nova Premier 1.0 | Amazon | 40.0 | 0 | -3 | stable | |
| 240 | Sonar Pro Search | Perplexity | 40.0 | 0 | -3 | stable | |
| 241 | gpt-oss-safeguard-20b | OpenAI | 40.0 | 0 | -3 | stable | |
| 242 | Nemotron Nano 12B 2 VL (free) | NVIDIA | 40.0 | 0 | -3 | stable | |
| 243 | Qwen3 VL 32B Instruct | Alibaba | 40.0 | 0 | -2 | stable | |
| 244 | Granite 4.0 Micro | IBM | 40.0 | 0 | -2 | stable | |
| 245 | Qwen3 VL 8B Thinking | Alibaba | 40.0 | 0 | -2 | stable | |
| 246 | Qwen3 VL 8B Instruct | Alibaba | 40.0 | 0 | -2 | stable | |
| 247 | ERNIE 4.5 21B A3B Thinking | Baidu | 40.0 | 0 | -2 | stable | |
| 248 | Qwen3 VL 30B A3B Thinking | Alibaba | 40.0 | 0 | -2 | stable | |
| 249 | Qwen3 VL 30B A3B Instruct | Alibaba | 40.0 | 0 | -2 | stable | |
| 250 | Qwen3 Coder Plus | Alibaba | 40.0 | 0 | -2 | stable | |
| 251 | Tongyi DeepResearch 30B A3B | Alibaba | 40.0 | 0 | -2 | stable | |
| 252 | Qwen3 Coder Flash | Alibaba | 40.0 | 0 | -2 | stable | |
| 253 | Qwen Plus 0728 (thinking) | Alibaba | 40.0 | 0 | -2 | stable | |
| 254 | Qwen Plus 0728 | Alibaba | 40.0 | 0 | -2 | stable | |
| 255 | Nemotron Nano 9B V2 (free) | NVIDIA | 40.0 | 0 | -2 | stable | |
| 256 | Nemotron Nano 9B V2 | NVIDIA | 40.0 | 0 | -2 | stable | |
| 257 | Grok Code Fast 1 | xAI | 40.0 | 0 | -2 | stable | |
| 258 | Mistral Medium 3.1 | Mistral AI | 40.0 | 0 | -2 | stable | |
| 259 | ERNIE 4.5 21B A3B | Baidu | 40.0 | 0 | -2 | stable | |
| 260 | ERNIE 4.5 VL 28B A3B | Baidu | 40.0 | 0 | -2 | stable | |
| 261 | Jamba Large 1.7 | AI21 Labs | 40.0 | 0 | -2 | stable | |
| 262 | Codestral 2508 | Mistral AI | 40.0 | 0 | -2 | stable | |
| 263 | Qwen3 Coder 30B A3B Instruct | Alibaba | 40.0 | 0 | -2 | stable | |
| 264 | Qwen3 30B A3B Instruct 2507 | Alibaba | 40.0 | 0 | -2 | stable | |
| 265 | Qwen3 Coder 480B A35B (free) | Alibaba | 40.0 | 0 | -2 | stable | |
| 266 | Qwen3 Coder 480B A35B | Alibaba | 40.0 | 0 | -2 | stable | |
| 267 | UI-TARS 7B | ByteDance | 40.0 | 0 | -2 | stable | |
| 268 | Hunyuan A13B Instruct | Tencent | 40.0 | 0 | -1 | stable | |
| 269 | ERNIE 4.5 VL 424B A47B | Baidu | 40.0 | 0 | -1 | stable | |
| 270 | Mistral Small 3.2 24B | Mistral AI | 40.0 | 0 | -1 | stable | |
| 271 | Gemma 3n 4B | Google | 40.0 | 0 | 0 | stable | |
| 272 | Spotlight | arcee-ai | 40.0 | 0 | 0 | stable | |
| 273 | Virtuoso Large | arcee-ai | 40.0 | 0 | 0 | stable | |
| 274 | Coder Large | arcee-ai | 40.0 | 0 | 0 | stable | |
| 275 | Llama Guard 4 12B | Meta | 40.0 | 0 | 0 | stable | |
| 276 | Qwen3 14B | Alibaba | 40.0 | 0 | 0 | stable | |
| 277 | Qwen3 32B | Alibaba | 40.0 | 0 | 0 | stable | |
| 278 | Mistral Small 3.1 24B | Mistral AI | 40.0 | 0 | 0 | stable | |
| 279 | Gemma 3 4B | Google | 40.0 | 0 | +1 | stable | |
| 280 | Gemma 3 12B | Google | 40.0 | 0 | +2 | stable | |
| 281 | Reka Flash 3 | rekaai | 40.0 | 0 | +2 | stable | |
| 282 | Gemma 3 27B | Google | 40.0 | 0 | +3 | stable | |
| 283 | Sonar Reasoning Pro | Perplexity | 40.0 | 0 | +3 | stable | |
| 284 | Sonar Pro | Perplexity | 40.0 | 0 | +3 | stable | |
| 285 | Sonar Deep Research | Perplexity | 40.0 | 0 | +3 | stable | |
| 286 | Saba | Mistral AI | 40.0 | 0 | +3 | stable | |
| 287 | Llama Guard 3 8B | Meta | 40.0 | 0 | +3 | stable | |
| 288 | Qwen VL Plus | Alibaba | 40.0 | 0 | +3 | stable | |
| 289 | Aion-1.0 | aion-labs | 40.0 | 0 | +3 | stable | |
| 290 | Aion-1.0-Mini | aion-labs | 40.0 | 0 | +3 | stable | |
| 291 | Qwen VL Max | Alibaba | 40.0 | 0 | +3 | stable | |
| 292 | Qwen-Turbo | Alibaba | 40.0 | 0 | +3 | stable | |
| 293 | Qwen2.5 VL 72B Instruct | Alibaba | 40.0 | 0 | +3 | stable | |
| 294 | Qwen-Plus | Alibaba | 40.0 | 0 | +3 | stable | |
| 295 | Qwen-Max | Alibaba | 40.0 | 0 | +3 | stable | |
| 296 | Mistral Small 3 | Mistral AI | 40.0 | 0 | +3 | stable | |
| 297 | Sonar | Perplexity | 40.0 | 0 | +3 | stable | |
| 298 | MiniMax-01 | MiniMax | 40.0 | 0 | +3 | stable | |
| 299 | Nova Lite 1.0 | Amazon | 40.0 | 0 | +3 | stable | |
| 300 | Nova Micro 1.0 | Amazon | 40.0 | 0 | +3 | stable |
Models are ranked on a benchmark-driven score from 0 to 100. Benchmark performance from LMArena, MMLU, GPQA, HumanEval, SWE-bench, and 15+ other standardized evaluations is the primary signal (90% of the score); capabilities and context window serve as tiebreakers (10%). Scores update hourly from live API data across 290+ coding models.
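A minimal sketch of how such a 90/10 weighted composite could be computed. Only the split is from the description above; the field names, the per-benchmark averaging, and the context-window normalization are illustrative assumptions, not the site's actual implementation:

```python
def composite_score(benchmarks, capability_score, context_window,
                    max_context=2_000_000):
    """Blend benchmark results (90%) with capability/context tiebreakers (10%).

    `benchmarks` maps evaluation name -> a result normalized to 0-100.
    The normalization and `max_context` cap are assumptions for illustration.
    """
    benchmark_avg = sum(benchmarks.values()) / len(benchmarks)
    # Tiebreaker: average of a 0-100 capability score and scaled context size.
    context_part = min(context_window / max_context, 1.0) * 100
    tiebreaker = (capability_score + context_part) / 2
    return round(0.9 * benchmark_avg + 0.1 * tiebreaker, 1)

score = composite_score(
    {"SWE-bench": 74.5, "HumanEval": 92.0, "MMLU": 88.1, "GPQA": 71.3},
    capability_score=85.0,
    context_window=400_000,
)
print(score)
```

Because the benchmark term dominates at 90%, two models with near-identical benchmark averages (as in many tied rows above) separate only through the capability and context tiebreaker.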
The 24h column shows how many positions a model moved up or down in the last 24 hours, while the 7d column shows the change over the past week. Green values with a + indicate rank improvements, red values indicate drops, and 0 means the rank was unchanged.
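The delta columns can be derived by diffing two rank snapshots; this is a hedged sketch, and the snapshot shape (model name mapped to rank) is an assumption:

```python
def rank_deltas(current, previous):
    """Positions moved since an earlier snapshot: positive = moved up.

    `current` and `previous` map model name -> rank (1 = best). Models
    without history get None, matching the idea that new entries have
    no meaningful delta yet.
    """
    deltas = {}
    for model, rank in current.items():
        old = previous.get(model)
        deltas[model] = (old - rank) if old is not None else None
    return deltas

# Example mirroring the GLM 5 / Grok 4.1 Fast swap in the table above.
yesterday = {"GLM 5": 52, "Grok 4.1 Fast": 51}
today = {"GLM 5": 51, "Grok 4.1 Fast": 52}
print(rank_deltas(today, yesterday))  # {'GLM 5': 1, 'Grok 4.1 Fast': -1}
```

Note the sign convention: rank numbers shrink as a model improves, so the delta is `old - new`, which makes an upward move a positive value as in the 24h and 7d columns.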
The state classification reflects a model's ranking consistency. "Stable" means the model has maintained its position reliably. "Held" indicates it is holding steady but with some variance. "Fragile" means the model's rank is fluctuating and may shift significantly. "Preliminary" is assigned to newly tracked models without enough history.
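One way to express that four-state classification in code. The description defines only the labels, so the spread thresholds and the 14-snapshot history minimum below are assumptions for illustration:

```python
def classify_state(rank_history, min_history=14, stable_spread=1, held_spread=3):
    """Map a model's recent rank history to a state label.

    `rank_history` is a list of recent rank snapshots. The cutoffs here
    are hypothetical; the source does not publish exact thresholds.
    """
    if len(rank_history) < min_history:
        return "preliminary"        # not enough tracked history yet
    spread = max(rank_history) - min(rank_history)
    if spread <= stable_spread:
        return "stable"             # position maintained reliably
    if spread <= held_spread:
        return "held"               # holding steady with some variance
    return "fragile"                # fluctuating, may shift significantly

print(classify_state([12] * 14))             # stable
print(classify_state([12, 14, 11, 13] * 4))  # held
print(classify_state([5, 6]))                # preliminary
```

Under this reading, the many "preliminary" rows above with large 7d swings (e.g. +234) are simply artifacts of a model entering the tracker partway through the window, not genuine rank movement.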