Which AI models are the most consistent over time? This report analyzes rank changes, state classifications, and sparkline volatility across 300 tracked models to produce a stability score from 0 to 100.
| Tier | Models |
|---|---|
| Rock Solid | 238 |
| Consistent | 61 |
| Variable | 1 |
| Volatile | 0 |
Top 20 models with the highest stability scores. These models maintain consistent rankings with minimal volatility.
| # | Model | Provider | Score | Stability | 24h | 7d |
|---|---|---|---|---|---|---|
| 1 | Claude Opus 4.6 (Fast) | Anthropic | 90.4 | 100 | 0 | 0 |
| 2 | Grok 4.20 | xAI | 88.8 | 100 | 0 | 0 |
| 3 | Grok 4.20 Multi-Agent | xAI | 87.9 | 100 | 0 | 0 |
| 4 | Claude Sonnet 4.6 | Anthropic | 85.2 | 100 | 0 | 0 |
| 5 | Gemma 4 31B (free) | Google | 80.5 | 100 | 0 | 0 |
| 6 | Claude Opus 4.7 | Anthropic | 79.3 | 100 | 0 | 0 |
| 7 | GPT-5.4 Nano | OpenAI | 79.3 | 100 | 0 | 0 |
| 8 | GPT-5.4 Mini | OpenAI | 79.3 | 100 | 0 | 0 |
| 9 | Grok 4.1 Fast | xAI | 78.0 | 100 | 0 | -1 |
| 10 | Grok 4.3 | xAI | 76.4 | 100 | 0 | -1 |
| 11 | GLM 5.1 | Zhipu AI | 76.1 | 100 | 0 | 0 |
| 12 | Kimi K2.6 | Moonshot AI | 75.9 | 100 | 0 | +2 |
| 13 | DeepSeek V4 Pro | DeepSeek | 75.7 | 100 | 0 | -2 |
| 14 | Qwen3.6 Max Preview | Alibaba | 74.5 | 100 | 0 | 0 |
| 15 | Gemma 4 26B A4B (free) | Google | 73.0 | 100 | 0 | 0 |
| 16 | Gemma 4 26B A4B | Google | 73.0 | 100 | 0 | 0 |
| 17 | Grok 4 Fast | xAI | 72.5 | 100 | 0 | 0 |
| 18 | DeepSeek V4 Flash | DeepSeek | 72.1 | 100 | 0 | +1 |
| 19 | Trinity Large Preview | arcee-ai | 63.6 | 100 | -1 | -1 |
| 20 | gpt-oss-120b | OpenAI | 40.5 | 100 | -1 | 0 |
Bottom 20 models with the lowest stability scores. These models show significant ranking fluctuations or inconsistent states.
| # | Model | Provider | Score | Stability | 24h | 7d |
|---|---|---|---|---|---|---|
| 1 | Hy3 preview | Tencent | 69.0 | 54 | +94 | +234 |
| 2 | Ling-2.6-1T | inclusionai | 40.0 | 74 | -1 | +140 |
| 3 | Mistral Medium 3.5 | Mistral AI | 40.0 | 74 | -1 | +155 |
| 4 | GPT Chat Latest | OpenAI | 40.0 | 74 | -1 | +157 |
| 5 | CoBuddy (free) | Baidu | 40.0 | 74 | -1 | +158 |
| 6 | Ring-2.6-1T (free) | inclusionai | 40.0 | 74 | -1 | +159 |
| 7 | Phi 4 Mini Instruct | Microsoft | 52.7 | 74 | -1 | +175 |
| 8 | Trinity Large Thinking | arcee-ai | 65.2 | 76 | -1 | -3 |
| 9 | Llama 3.3 70B Instruct | Meta | 66.8 | 79 | -1 | -2 |
| 10 | Nova Micro 1.0 | Amazon | 40.0 | 81 | 0 | +3 |
| 11 | Aion-1.0-Mini | aion-labs | 40.0 | 81 | 0 | +3 |
| 12 | Aion-1.0 | aion-labs | 40.0 | 81 | 0 | +3 |
| 13 | Llama Guard 3 8B | Meta | 40.0 | 81 | 0 | +3 |
| 14 | Qwen3.5 Plus 2026-02-15 | Alibaba | 40.0 | 81 | 0 | -3 |
| 15 | Seed-2.0-Mini | ByteDance | 40.0 | 81 | 0 | -3 |
| 16 | Seed-2.0-Lite | ByteDance | 40.0 | 81 | 0 | -3 |
| 17 | MiMo-V2-Omni | Xiaomi | 40.0 | 81 | 0 | -3 |
| 18 | GLM 5V Turbo | Zhipu AI | 40.0 | 81 | 0 | -3 |
| 19 | Mistral Small 4 | Mistral AI | 40.0 | 82 | 0 | -3 |
| 20 | Command R (08-2024) | Cohere | 48.7 | 82 | -1 | -1 |
Aggregated stability metrics per provider. Providers are ranked by their average stability score across all models.
| Provider | Models | Avg Stability |
|---|---|---|
| essentialai | 1 | 97.4 |
| deepcogito | 1 | 97.0 |
| AI21 Labs | 1 | 96.1 |
| Kuaishou | 1 | 95.3 |
| ~anthropic | 3 | 95.3 |
| Writer | 1 | 95.1 |
| xAI | 11 | 94.7 |
| NVIDIA | 9 | 93.3 |
| Upstage | 1 | 93.2 |
| poolside | 2 | 93.0 |
| ~openai | 2 | 93.0 |
| (unknown) | 2 | 93.0 |
| ~moonshotai | 1 | 93.0 |
| Inception | 1 | 92.4 |
| (unknown) | 23 | 92.3 |
| Moonshot AI | 5 | 92.1 |
| Anthropic | 14 | 92.0 |
| DeepSeek | 12 | 91.9 |
| MiniMax | 8 | 91.3 |
| Alibaba | 49 | 91.3 |
| IBM | 2 | 90.4 |
| Amazon | 4 | 90.3 |
| Baidu | 7 | 90.2 |
| arcee-ai | 6 | 90.0 |
| OpenAI | 57 | 90.0 |
| Xiaomi | 5 | 89.7 |
| Zhipu AI | 12 | 89.3 |
| Liquid AI | 3 | 88.8 |
| rekaai | 2 | 88.8 |
| Mistral AI | 18 | 88.3 |
| Perplexity | 5 | 87.9 |
| Allen AI | 1 | 87.7 |
| aion-labs | 3 | 86.0 |
| StepFun | 1 | 85.4 |
| Cohere | 3 | 84.7 |
| Meta | 9 | 84.5 |
| ByteDance | 5 | 84.4 |
| inclusionai | 3 | 82.7 |
| Cursor | 2 | 82.0 |
| Microsoft | 2 | 78.0 |
| Tencent | 2 | 75.8 |
How stability scores are distributed across all 300 tracked models.
Our stability scoring system uses three key signals to measure how consistently a model performs over time.
- **Rank changes.** The most direct measure of stability. Models lose up to 25 points for large 24-hour rank changes (5 points per rank position moved) and up to 21 points for 7-day changes (3 points per position). Models that hold their rank tightly score higher.
- **State classification.** Each model has a state reflecting its overall reliability. Models in a "stable" state receive a 10-point bonus, while "fragile" models are penalized 15 points. This captures systemic reliability beyond simple rank movement.
- **Sparkline volatility.** The 14-day sparkline data reveals hidden volatility. We compute the standard deviation of the sparkline and subtract up to 20 points. Even models that end where they started can be penalized if they oscillated wildly along the way.
The stability score starts at 100 and is reduced based on three factors: 24-hour rank changes (up to -25 points, at 5 per position moved), 7-day rank changes (up to -21 points, at 3 per position), and sparkline volatility measured by standard deviation (up to -20 points). Models in a "stable" state get a +10 bonus, while "fragile" models lose 15 points.
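The formula above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the report's actual implementation: the function name, the use of population standard deviation, and the clamp to the 0-100 range are assumptions.

```python
import statistics

def stability_score(rank_24h, rank_7d, sparkline, state):
    """Sketch of the stability formula described above (assumed implementation).

    rank_24h / rank_7d: rank positions moved over each window (sign ignored).
    sparkline: list of 14 daily rank values.
    state: "stable", "fragile", or anything else (treated as neutral).
    """
    score = 100.0
    score -= min(25, 5 * abs(rank_24h))   # 24h penalty: 5 pts per position, capped at 25
    score -= min(21, 3 * abs(rank_7d))    # 7d penalty: 3 pts per position, capped at 21
    volatility = statistics.pstdev(sparkline)
    score -= min(20, volatility)          # sparkline volatility penalty, capped at 20
    if state == "stable":
        score += 10                       # stable-state bonus
    elif state == "fragile":
        score -= 15                       # fragile-state penalty
    return max(0.0, min(100.0, score))    # clamp to the report's 0-100 range

# A model that held its rank perfectly in a stable state maxes out:
print(stability_score(0, 0, [12] * 14, "stable"))  # -> 100.0
```

Note that a flat sparkline contributes no penalty at all, which is why so many models in the top table sit at exactly 100.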
Models are classified into four tiers based on their stability score: "Rock Solid" (85-100) means extremely consistent performance with minimal fluctuation. "Consistent" (70-84) means generally reliable with minor variations. "Variable" (50-69) shows noticeable ranking fluctuations. "Volatile" (below 50) indicates significant instability and unpredictable performance.
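Using the thresholds stated above, the tier assignment reduces to a simple cascade (the function name is an assumption; the cutoffs come from the text):

```python
def stability_tier(score):
    """Map a 0-100 stability score to the report's four tiers."""
    if score >= 85:
        return "Rock Solid"   # 85-100: extremely consistent
    if score >= 70:
        return "Consistent"   # 70-84: generally reliable
    if score >= 50:
        return "Variable"     # 50-69: noticeable fluctuations
    return "Volatile"         # below 50: significant instability

print(stability_tier(90.4))  # -> Rock Solid
```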
Stability indicates how predictably a model will perform over time. A highly rated but volatile model may deliver inconsistent results, which is problematic for production applications requiring reliable output quality. Stable models provide more predictable performance, making them safer choices for mission-critical workloads even if they do not always hold the top rank.