Lovable is an AI-powered app builder that generates full-stack web applications from natural language descriptions. It benefits from models with strong reasoning and large output windows.
Best Models for Lovable
Top 50 by tool-optimized score
Scored by benchmark performance (90%), drawn from MMLU, GPQA, HumanEval, SWE-bench, and 15+ standardized evaluations, with capabilities and context as tiebreakers (10%).
| # | Model | Arena Elo | Score | Output ($/M tokens) |
|---|---|---|---|---|
| 1 | Claude Opus 4.7 | 1491 | 87 | $25.00 |
| 2 | Gemini 3.1 Pro Preview | 1494 | 87 | $12.00 |
| 3 | GPT-5.5 | 1475 | 85 | $30.00 |
| 4 | MiMo-V2.5-Pro | 1464 | 85 | $3.00 |
| 5 | GLM 5.1 | 1471 | 85 | $3.50 |
| 6 | Grok 4.1 Fast | 1467 | 85 | $0.50 |
| 7 | Grok 4.3 | 1455 | 84 | $2.50 |
| 8 | Qwen3.6 Max Preview | 1457 | 84 | $6.24 |
| 9 | DeepSeek V4 Pro | 1463 | 84 | $0.87 |
| 10 | Kimi K2.6 | 1462 | 84 | $3.50 |
| 11 | GLM 5 | 1457 | 84 | $1.92 |
| 12 | Claude Opus 4.6 (Fast) | n/a | 83 | $150.00 |
| 13 | Gemma 4 31B | 1451 | 83 | $0.38 |
| 14 | Qwen3.6 Plus | 1448 | 83 | $1.95 |
| 15 | MiMo-V2-Pro | 1447 | 83 | $3.00 |
| 16 | GPT-5.4 Pro | n/a | 83 | $180.00 |
| 17 | Qwen3.5 397B A17B | 1446 | 83 | $2.34 |
| 18 | GLM 4.7 | 1443 | 83 | $1.75 |
| 19 | GPT-5.2 Pro | n/a | 83 | $168.00 |
| 20 | Claude Opus 4.1 | 1449 | 83 | $75.00 |
| 21 | DeepSeek V4 Flash | 1433 | 82 | $0.28 |
| 22 | Gemma 4 26B A4B | 1438 | 82 | $0.33 |
| 23 | Grok 4.20 | n/a | 82 | $2.50 |
| 24 | Gemini 3.1 Flash Lite Preview | 1438 | 82 | $1.50 |
| 25 | GPT-5.3-Codex | n/a | 82 | $14.00 |
| 26 | GPT-5.2-Codex | n/a | 82 | $14.00 |
| 27 | GPT-5 Pro | n/a | 82 | $120.00 |
| 28 | Hy3 preview | 1418 | 81 | $0.26 |
| 29 | MiMo-V2.5 | 1423 | 81 | $2.00 |
| 30 | Qwen3.5-122B-A10B | 1418 | 81 | $2.08 |
| 31 | GPT-5.1-Codex-Max | n/a | 81 | $10.00 |
| 32 | GPT-5.1-Codex | n/a | 81 | $10.00 |
| 33 | GPT-5.1-Codex-Mini | n/a | 81 | $2.00 |
| 34 | o3 Deep Research | n/a | 81 | $40.00 |
| 35 | GLM 4.6 | 1426 | 81 | $1.90 |
| 36 | GPT-5 Codex | n/a | 81 | $10.00 |
| 37 | Grok 4 Fast | 1421 | 81 | $0.50 |
| 38 | o3 Pro | n/a | 81 | $80.00 |
| 39 | MiniMax M2.7 | 1407 | 80 | $1.20 |
| 40 | Qwen3.5-27B | 1406 | 80 | $1.56 |
| 41 | Qwen3.5-35B-A3B | 1397 | 79 | $1.00 |
| 42 | Qwen3.5-Flash | 1398 | 79 | $0.26 |
| 43 | Claude Opus 4.6 (SWE-bench: 83.7%) | n/a | 79 | $25.00 |
| 44 | Step 3.5 Flash | 1393 | 79 | $0.30 |
| 45 | DeepSeek V3.2 Exp | 1423 | 79 | $0.41 |
| 46 | DeepSeek V3.1 Terminus | 1416 | 79 | $0.95 |
| 47 | Gemini 2.5 Pro Preview 06-05 | n/a | 79 | $10.00 |
| 48 | Gemini 2.5 Pro Preview 05-06 | n/a | 79 | $10.00 |
| 49 | Gemma 4 31B (free) | n/a | 78 | Free |
| 50 | Trinity Large Thinking | 1380 | 78 | $0.85 |
Based on our analysis of coding benchmarks, capability matching, and pricing, Claude Opus 4.7 currently ranks #1 for Lovable. Rankings are rebuilt as benchmark, pricing, and provider data refresh.
We score models using benchmark performance (90%) from LMArena, HumanEval, SWE-bench, MMLU, and 15+ standardized evaluations. Capabilities and context serve as tiebreakers (10%). Only models with the capabilities Lovable needs are included in the tool-specific rankings.
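The blend described above can be sketched as a simple weighted average. This is an illustrative reconstruction, not our exact formula: the per-benchmark weighting and the `capability_fit`/`context_fit` inputs are assumptions, and all values are assumed to be normalized to a 0–100 scale.

```python
def tool_score(benchmarks: dict[str, float],
               capability_fit: float,
               context_fit: float) -> float:
    """Blend benchmark performance (90%) with capability/context tiebreakers (10%).

    Illustrative only: a simple mean stands in for the real per-benchmark
    weighting, and the tiebreaker inputs are hypothetical.
    """
    bench = sum(benchmarks.values()) / len(benchmarks)
    tiebreak = (capability_fit + context_fit) / 2
    return round(0.9 * bench + 0.1 * tiebreak, 1)

# Hypothetical inputs: bench = 88.0, tiebreak = 85.0 → 0.9*88.0 + 0.1*85.0 = 87.7
score = tool_score({"SWE-bench": 80.0, "HumanEval": 90.0, "MMLU": 94.0},
                   capability_fit=90.0, context_fit=80.0)
```

Because benchmarks dominate at 90%, two models with similar evaluation results only separate on capability and context fit.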
We currently track 341 AI models compatible with Lovable. This includes models from OpenAI, Anthropic, Google, DeepSeek, and other providers accessible via API.
Many open-source models are compatible with Lovable through API providers like OpenRouter, Together AI, and Groq. Check our rankings to see which open-source models perform best.
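As a concrete sketch of the API route above, OpenRouter exposes an OpenAI-compatible chat-completions endpoint; the model slug and prompt below are illustrative placeholders, and a real call requires an `OPENROUTER_API_KEY`.

```python
import json
import os
import urllib.request

# OpenRouter's OpenAI-compatible chat-completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat payload (model slug is illustrative)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send(payload: dict) -> dict:
    """POST the payload; requires OPENROUTER_API_KEY in the environment."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build (but don't send) a request for a hypothetical open-source model slug.
payload = build_request("deepseek/deepseek-chat", "Scaffold a to-do app")
```

The same payload shape works across providers like Together AI and Groq, which also expose OpenAI-compatible endpoints; only the base URL, key, and model slug change.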
Rankings refresh whenever the underlying benchmark, pricing, and catalog sources update. Some signals therefore change faster than others, and the page reflects the latest verified source data available.