Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass.
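The MoE design means only a small subset of the model's total parameters is exercised for any given token. The sketch below illustrates the general top-1 routing idea with toy NumPy shapes; the router, expert sizes, and gating details are illustrative assumptions, not Meta's actual Llama 4 implementation.

```python
# Minimal sketch of mixture-of-experts (MoE) routing with a top-1 router.
# Shapes and names are toy assumptions, not the real Llama 4 layer sizes.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 128   # Maverick routes across 128 experts
D_MODEL = 64        # toy hidden size for the sketch

router_w = rng.standard_normal((D_MODEL, NUM_EXPERTS))       # router projection
experts = rng.standard_normal((NUM_EXPERTS, D_MODEL, D_MODEL))  # one toy FFN per expert

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route each token to its single highest-scoring expert.

    Only the selected expert's weights run per token, which is why the
    "active" parameter count (17B) is far below the total across all experts.
    """
    logits = x @ router_w                          # (tokens, NUM_EXPERTS)
    chosen = logits.argmax(axis=-1)                # top-1 expert per token
    gate = np.exp(logits - logits.max(-1, keepdims=True))
    gate = gate / gate.sum(-1, keepdims=True)      # softmax gate values
    out = np.empty_like(x)
    for i, e in enumerate(chosen):
        out[i] = gate[i, e] * (x[i] @ experts[e])
    return out

tokens = rng.standard_normal((4, D_MODEL))
print(moe_forward(tokens).shape)  # (4, 64)
```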
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Benchmarks | 70 | 30% | +21.0 |
| Pricing | 99 | 15% | +14.9 |
| Capabilities | 50 | 20% | +10.0 |
| Context Window | 96 | 10% | +9.6 |
| Recency | 60 | 15% | +9.0 |
| Output Capacity | 70 | 10% | +7.0 |
LMSYS Arena Elo: 1325 (70.8th percentile), weighted at 30% of the composite score.
View the current model against the same provider's recent release cadence.
- Llama Guard 4 12B (coding)
- Llama 4 Maverick (current model, coding)
- Llama 4 Scout (coding)
- Llama Guard 3 8B (coding)
- Llama 3.3 70B Instruct (free) (coding)
- Llama 3.3 70B Instruct (coding)
- Llama 3.2 3B Instruct (free) (coding)
- Llama 3.2 3B Instruct (coding)
Community and practitioner feedback adds real-world signals on top of benchmarks and pricing.
Share your experience with Llama 4 Maverick and help the community make better decisions.
Cost estimator
Saves $42.79 per month compared to the category average.
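A hedged sketch of how such a monthly estimate can be derived from token volume and per-million-token prices. The workload and prices below are placeholder assumptions for illustration only; the $42.79/month savings quoted above is the page's figure and is not reproduced by these made-up inputs.

```python
# Cost-estimator sketch: monthly cost from token volume and prices,
# compared against a category-average price. All numbers here are
# hypothetical placeholders, not the page's actual pricing data.
def monthly_cost(input_mtok: float, output_mtok: float,
                 price_in: float, price_out: float) -> float:
    """Monthly USD cost given millions of tokens and $/1M-token prices."""
    return input_mtok * price_in + output_mtok * price_out

# Hypothetical workload: 50M input tokens and 10M output tokens per month.
usage_in, usage_out = 50.0, 10.0

model_cost = monthly_cost(usage_in, usage_out, 0.20, 0.60)          # assumed model prices
category_avg_cost = monthly_cost(usage_in, usage_out, 0.90, 2.70)   # assumed category average

print(f"Model:            ${model_cost:.2f}/month")
print(f"Category average: ${category_avg_cost:.2f}/month")
print(f"Savings:          ${category_avg_cost - model_cost:.2f}/month")
```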