This is a dedicated landing page for core queries such as "llm leaderboard", "ai ranking", and "ai rankings", highlighting the current ranking order, the gaps between top models, and recent real movement.
| # | Model | Provider | Score | 7-Day Change | Input Price |
|---|---|---|---|---|---|
| #1 | GPT-5.4 Pro | OpenAI | 94 | +3 | $30.00/M |
| #2 | GPT-5.4 | OpenAI | 94 | +3 | $2.50/M |
| #3 | GPT-5.2 Pro | OpenAI | 93 | +16 | $21.00/M |
| #4 | GPT-5.2 | OpenAI | 93 | +16 | $1.75/M |
| #5 | Claude Opus 4.6 | Anthropic | 92 | +2 | $5.00/M |
| #6 | GPT-5 Pro | OpenAI | 92 | +24 | $15.00/M |
| #7 | o3 Deep Research | OpenAI | 92 | +30 | $10.00/M |
| #8 | Claude Opus 4.5 | Anthropic | 90 | +21 | $5.00/M |
| #9 | GPT-5 | OpenAI | 90 | +34 | $1.25/M |
| #10 | Gemini 3 Flash Preview | Google | 89 | +12 | $0.500/M |
| #11 | Claude Sonnet 4.6 | Anthropic | 89 | -5 | $3.00/M |
| #12 | Claude Sonnet 4.5 | Anthropic | 89 | +6 | $3.00/M |
| #13 | o3 Pro | OpenAI | 87 | +55 | $20.00/M |
| #14 | Grok 4.1 Fast | xAI | 87 | -12 | $0.200/M |
| #15 | GPT-5.4 Nano | OpenAI | 87 | -6 | $0.200/M |
| #16 | GPT-5.4 Mini | OpenAI | 87 | -6 | $0.750/M |
| #17 | Grok 4.20 Beta | xAI | 86 | -16 | $2.00/M |
| #18 | Grok 4 | xAI | 86 | +41 | $3.00/M |
| #19 | Gemini 3.1 Pro Preview | Google | 86 | -4 | $2.00/M |
| #20 | o3 | OpenAI | 85 | +59 | $2.00/M |
| #21 | GPT-5.1 | OpenAI | 85 | +2 | $1.25/M |
| #22 | MiMo-V2-Omni | Xiaomi | 85 | +4 | $0.400/M |
| #23 | MiMo-V2-Pro | Xiaomi | 85 | +1 | $1.00/M |
| #24 | Seed-2.0-Lite | ByteDance | 85 | +3 | $0.250/M |
| #25 | Qwen3.5-9B | Alibaba | 85 | +10 | $0.050/M |
This page ranks live coding and language models using LM Market Cap composite scores, which combine capabilities, pricing, context, output capacity, versatility, and recency into a single leaderboard surface.
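The composite score described above can be sketched as a weighted average of normalized sub-scores. This is only an illustration: the factor names, weights, and example values below are assumptions, not LM Market Cap's published methodology.

```python
# Hypothetical sketch of a composite leaderboard score: a weighted average
# over normalized 0-100 sub-scores. Weights and values are made up for
# illustration; the real scoring formula is not published on this page.

def composite_score(subscores: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Weighted average of sub-scores, each assumed to be on a 0-100 scale."""
    total_weight = sum(weights.values())
    return sum(subscores[k] * weights[k] for k in weights) / total_weight

# Assumed weighting over the factors named in the text.
weights = {
    "capabilities": 0.40, "pricing": 0.15, "context": 0.10,
    "output_capacity": 0.10, "versatility": 0.15, "recency": 0.10,
}
# Example sub-scores for a hypothetical model.
subscores = {
    "capabilities": 96, "pricing": 80, "context": 90,
    "output_capacity": 88, "versatility": 92, "recency": 100,
}

print(composite_score(subscores, weights))  # prints 92.0
```

A single scalar like this is what lets very different models (cheap fast ones, expensive frontier ones) share one rank order.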
The models directory is for browsing the full catalog across providers and categories; this page is the competitive LLM ranking surface for users who want the top-performing models in rank order.
The linked rank-change and trends surfaces now compare live rankings against the latest archived weekly snapshot rather than synthetic movement estimates.
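The snapshot comparison described above amounts to diffing each model's position in the live ranking against its position in the archived one. A minimal sketch, assuming simple ordered lists of model names (the site's actual data schema is not shown here):

```python
# Illustrative rank-change computation: positive delta means the model moved
# up since the archived snapshot. List shapes and names are assumptions.

def rank_changes(live: list[str], snapshot: list[str]) -> dict[str, int]:
    """Map each model in the live ranking to (old index - new index)."""
    snap_pos = {model: i for i, model in enumerate(snapshot)}
    return {
        model: snap_pos[model] - i   # old position minus current position
        for i, model in enumerate(live)
        if model in snap_pos         # new entrants have no delta yet
    }

live = ["GPT-5.4 Pro", "Claude Opus 4.6", "o3"]
snapshot = ["Claude Opus 4.6", "GPT-5.4 Pro", "o3"]
print(rank_changes(live, snapshot))
# prints {'GPT-5.4 Pro': 1, 'Claude Opus 4.6': -1, 'o3': 0}
```

Diffing against a real archived snapshot, rather than estimating movement, means every arrow on the page traces back to a concrete prior ranking.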