Open WebUI is a self-hosted interface for running LLMs locally or via API. Any model works, but the experience is best with models that support vision, streaming, and a large context window for document analysis.
## Best Models for Open WebUI
### Top 50 by tool-optimized score
Scored by: benchmark performance (90%) from MMLU, GPQA, HumanEval, SWE-bench, and 15+ standardized evaluations, with capabilities and context as tiebreakers (10%).
| # | Model | Benchmark | Score | Output ($/M tokens) |
|---|---|---|---|---|
| 1 | Grok 4.1 Fast | Arena Elo: 1467 | 94 | $0.50 |
| 2 | Gemma 4 31B | Arena Elo: 1451 | 93 | $0.38 |
| 3 | Gemma 4 26B A4B | Arena Elo: 1438 | 92 | $0.33 |
| 4 | Kimi K2.6 | Arena Elo: 1462 | 91 | $3.50 |
| 5 | Gemini 3.1 Pro Preview | Arena Elo: 1494 | 91 | $12.00 |
| 6 | Grok 4.3 | Arena Elo: 1455 | 90 | $2.50 |
| 7 | Gemma 4 31B (free) | | 90 | Free |
| 8 | Qwen3.6 Plus | Arena Elo: 1448 | 90 | $1.95 |
| 9 | Qwen3.5 397B A17B | Arena Elo: 1446 | 90 | $2.34 |
| 10 | Qwen3 VL 235B A22B Instruct | Arena Elo: 1415 | 90 | $0.88 |
| 11 | Grok 4 Fast | Arena Elo: 1421 | 90 | $0.50 |
| 12 | Gemini 3.1 Flash Lite Preview | Arena Elo: 1438 | 89 | $1.50 |
| 13 | Qwen3.5-Flash | Arena Elo: 1398 | 89 | $0.26 |
| 14 | MiMo-V2.5 | Arena Elo: 1423 | 88 | $2.00 |
| 15 | Grok 4.20 | | 88 | $2.50 |
| 16 | GPT-5.1-Codex-Mini | | 88 | $2.00 |
| 17 | Claude 3.5 Haiku | HumanEval: 88.1% | 88 | $4.00 |
| 18 | Claude Opus 4.7 | Arena Elo: 1491 | 87 | $25.00 |
| 19 | Gemma 4 26B A4B (free) | | 87 | Free |
| 20 | Qwen3.5-122B-A10B | Arena Elo: 1418 | 87 | $2.08 |
| 21 | Gemini 2.5 Flash Lite Preview 09-2025 | | 87 | $0.40 |
| 22 | Gemini 2.5 Flash Lite | | 87 | $0.40 |
| 23 | GPT-4o-mini | HumanEval: 87.2% | 87 | $0.60 |
| 24 | GPT-5.5 | Arena Elo: 1475 | 86 | $30.00 |
| 25 | Qwen3.5-27B | Arena Elo: 1406 | 86 | $1.56 |
| 26 | GPT-5.2-Codex | | 86 | $14.00 |
| 27 | Grok 4.20 Multi-Agent | | 85 | $6.00 |
| 28 | Qwen3.5-35B-A3B | Arena Elo: 1397 | 85 | $1.00 |
| 29 | GPT-5.3-Codex | | 85 | $14.00 |
| 30 | GPT-5.2 Chat | Arena Elo: 1477 | 85 | $14.00 |
| 31 | GPT-5.1-Codex-Max | | 85 | $10.00 |
| 32 | GPT-5.1-Codex | | 85 | $10.00 |
| 33 | GPT-5 Codex | | 85 | $10.00 |
| 34 | Gemini 2.0 Flash Lite | Arena Elo: 1353 | 85 | $0.30 |
| 35 | GPT-5.4 Nano | | 84 | $1.25 |
| 36 | GPT-5.4 Mini | | 84 | $4.50 |
| 37 | GPT-5.4 Pro | | 84 | $180.00 |
| 38 | Claude Opus 4.1 | Arena Elo: 1449 | 84 | $75.00 |
| 39 | Claude Opus 4.6 (Fast) | | 83 | $150.00 |
| 40 | Gemini 3 Flash Preview | SWE-bench: 78% | 83 | $3.00 |
| 41 | GPT-5.2 Pro | | 83 | $168.00 |
| 42 | GLM 4.6V | Arena Elo: 1378 | 83 | $0.90 |
| 43 | Gemini 2.5 Pro Preview 06-05 | | 83 | $10.00 |
| 44 | Gemini 2.5 Pro Preview 05-06 | | 83 | $10.00 |
| 45 | Gemini 3.1 Pro Preview Custom Tools | | 82 | $12.00 |
| 46 | o4 Mini Deep Research | | 82 | $8.00 |
| 47 | GPT-5 Pro | | 82 | $120.00 |
| 48 | Qwen3 VL 235B A22B Thinking | Arena Elo: 1396 | 82 | $2.60 |
| 49 | GPT-4.1 Nano | Arena Elo: 1322 | 82 | $0.40 |
| 50 | Claude 3 Haiku | HumanEval: 76.8% | 82 | $1.25 |
Based on our analysis of coding benchmarks, capability matching, and pricing, Grok 4.1 Fast currently ranks #1 for Open WebUI. Rankings are rebuilt as benchmark, pricing, and provider data refresh.
We score models using benchmark performance (90%) from LMArena, HumanEval, SWE-bench, MMLU, and 15+ standardized evaluations. Capabilities and context serve as tiebreakers (10%). Only models with the capabilities Open WebUI needs are included in the tool-specific rankings.
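As a rough illustration of that weighting, the sketch below computes a composite score from hypothetical per-benchmark numbers in Python. The function name, the normalization, and every input value are assumptions made for the example; the exact formula behind these rankings isn't published here.

```python
def composite_score(benchmarks: dict[str, float], capability_bonus: float) -> float:
    """Illustrative composite: 90% average of normalized benchmark scores,
    10% capability/context tiebreaker. A sketch, not the site's formula."""
    benchmark_avg = sum(benchmarks.values()) / len(benchmarks)  # each on a 0-100 scale
    return 0.9 * benchmark_avg + 0.1 * capability_bonus

# Hypothetical inputs: normalized scores per evaluation, plus a 0-100 tiebreaker
score = composite_score(
    {"MMLU": 92.0, "HumanEval": 95.0, "SWE-bench": 88.0, "GPQA": 90.0},
    capability_bonus=85.0,  # e.g., credit for vision + streaming + large context
)
print(round(score))  # 91 with these made-up numbers
```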
We currently track 341 AI models compatible with Open WebUI. This includes models from OpenAI, Anthropic, Google, DeepSeek, and other providers accessible via API.
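Because Open WebUI talks to these providers over OpenAI-compatible APIs, a quick way to sanity-check a candidate model (including the streaming support mentioned above) is a direct streaming call with the `openai` Python package. The base URL, API key, and model ID below are placeholders, not values for any specific provider.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # hypothetical OpenAI-compatible endpoint
    api_key="sk-...",                       # your provider key
)

stream = client.chat.completions.create(
    model="your-model-id",  # e.g., a model from the table above
    messages=[{"role": "user", "content": "Summarize this document."}],
    stream=True,  # Open WebUI renders tokens as they arrive
)
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```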
Many open-source models are compatible with Open WebUI through API providers like OpenRouter, Together AI, and Groq. Check our rankings to see which open-source models perform best.
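As one way to explore what's available, the sketch below lists models exposed by OpenRouter's OpenAI-compatible endpoint using the same `openai` Python package; the API key and filter string are placeholders. Together AI and Groq expose similar OpenAI-compatible APIs at their own base URLs.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter key
)

# models.list() is part of the standard OpenAI-compatible surface
for model in client.models.list():
    if "qwen" in model.id.lower():  # e.g., look for open-weight Qwen variants
        print(model.id)
```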
Rankings refresh whenever the underlying benchmark, pricing, and catalog sources update. Some signals move faster than others, so the page reflects the latest verified data from each source.