Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17B of its 109B total parameters per token. It supports native multimodal input, accepting both text and images.
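For reference, here is a minimal sketch of calling the model through an OpenAI-compatible API. The base URL, model slug, and key placeholder are assumptions; substitute the values your provider documents.

```python
# A minimal sketch of querying Llama 4 Scout through an OpenAI-compatible
# gateway. The base_url and model slug below are assumptions, not confirmed
# values from this page; check your provider's documentation.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed gateway endpoint
    api_key="YOUR_API_KEY",                   # placeholder credential
)

response = client.chat.completions.create(
    model="meta-llama/llama-4-scout",         # assumed model slug
    messages=[
        {"role": "user", "content": "Summarize MoE routing in two sentences."}
    ],
)
print(response.choices[0].message.content)
```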
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Benchmarks | 52 | 30% | +15.5 |
| Pricing | 100 | 15% | +15.0 |
| Capabilities | 67 | 20% | +13.3 |
| Recency | 60 | 15% | +9.0 |
| Context Window | 88 | 10% | +8.8 |
| Output Capacity | 70 | 10% | +7.0 |
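The impact column is consistent with a simple weighted sum: each signal's strength (0–100) is multiplied by its weight, and the composite score is the total. Here is a hedged sketch of that arithmetic using the table's values; small discrepancies (e.g. 15.6 vs. +15.5) presumably come from rounding of the underlying strengths.

```python
# Recompute each signal's impact as strength * weight, then sum them
# into a composite score. Values are taken from the table above.
signals = {
    "Benchmarks":      (52, 0.30),
    "Pricing":         (100, 0.15),
    "Capabilities":    (67, 0.20),
    "Recency":         (60, 0.15),
    "Context Window":  (88, 0.10),
    "Output Capacity": (70, 0.10),
}

for name, (strength, weight) in signals.items():
    print(f"{name}: {strength * weight:+.1f}")

composite = sum(s * w for s, w in signals.values())
print(f"Composite score: {composite:.1f}")  # ~68.8 with these inputs
```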
The list below places this model in the context of Meta's recent shipping cadence.
- Llama Guard 4 12B
- Llama 4 Maverick
- Llama 4 Scout (current)
- Llama Guard 3 8B
- Llama 3.3 70B Instruct (free)
- Llama 3.3 70B Instruct
- Llama 3.2 3B Instruct (free)
- Llama 3.2 3B Instruct
Community and practitioner feedback adds real-world signal on top of benchmarks and pricing.
Share your experience with Llama 4 Scout and help the community make better decisions.
Cost Estimator
Estimated savings: $44.04/month versus the category average.
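As a rough illustration of where a figure like $44.04/month could come from, the sketch below compares a model's per-token pricing against a category-average price over an assumed monthly volume. All prices and volumes here are placeholders, not the estimator's actual inputs.

```python
# Illustrative monthly-cost comparison. The per-million-token prices and
# the monthly token volume are assumed for demonstration only; the page's
# estimator uses its own pricing data and usage profile.
def monthly_cost(m_input, m_output, in_price, out_price):
    """USD cost for m_input/m_output million tokens at $/M-token prices."""
    return m_input * in_price + m_output * out_price

# Assumed workload: 30M input and 10M output tokens per month.
model_cost = monthly_cost(30, 10, in_price=0.08, out_price=0.30)  # assumed model pricing
avg_cost = monthly_cost(30, 10, in_price=0.90, out_price=1.50)    # assumed category average

print(f"Model:   ${model_cost:.2f}/month")
print(f"Average: ${avg_cost:.2f}/month")
print(f"Savings: ${avg_cost - model_cost:.2f}/month")
```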