An Arabic-first, instruction-tuned 7B model from the HUMAIN ALLaM program (research originated at the SDAIA National Center for AI; paper: arXiv:2407.15390, ICLR 2025). Per the Hugging Face model card, the preview is pretrained from scratch in two stages, 4T English tokens followed by 1.2T mixed Arabic/English tokens, and then instruction-tuned on curated Arabic and English data. It is published on Hugging Face as ALLaM-AI/ALLaM-7B-Instruct-preview, with a byte-identical mirror under humain-ai/ALLaM-7B-Instruct-preview. The architecture is Llama-family: 32 layers, 4096 hidden size, a 64,000-entry Arabic-aware vocabulary, and a 4096-token context window. The full ALLaM program consumed roughly 5M A100 GPU-hours. The model is strong on Arabic instruction-following and knowledge tasks, with an independent AraLingBench score of 74.0.
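For context on how the checkpoint is typically consumed, here is a minimal loading sketch using Hugging Face transformers. The repo id is taken from the description above; the chat template, dtype, and generation settings are illustrative assumptions, not details confirmed by the model card.

```python
# Minimal sketch: loading the preview checkpoint with Hugging Face
# transformers. Assumes the standard AutoModelForCausalLM interface
# and that the repo ships a chat template; generation settings are
# illustrative, not confirmed defaults.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ALLaM-AI/ALLaM-7B-Instruct-preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 7B weights fit in roughly 14 GB at bf16
    device_map="auto",
)

# Arabic prompt: "What is the capital of Saudi Arabia?"
messages = [{"role": "user", "content": "ما هي عاصمة المملكة العربية السعودية؟"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Keep prompt plus output within the 4096-token context window noted above.
output = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The mirror repo humain-ai/ALLaM-7B-Instruct-preview is described as byte-identical, so either id should behave the same here.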
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Pricing | 100 | 25% | +25.0 |
| Output Capacity | 60 | 15% | +9.0 |
| Context Window | 57 | 15% | +8.6 |
| Recency | 56 | 15% | +8.4 |
| Capabilities | 17 | 30% | +5.0 |
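To rounding, Impact is Strength × Weight (e.g., 57 × 0.15 ≈ +8.6). A quick sketch of that arithmetic follows; note that Capabilities computes to +5.1 against the table's +5.0, which suggests the displayed strength of 17 is itself rounded. The "composite" total is my reading of the table, not a label from the source.

```python
# Sketch of the scoring arithmetic implied by the table above:
# impact = strength * weight, rounded to one decimal place.
# The exact rounding rules are an assumption; Capabilities prints
# +5.1 here vs the table's +5.0, hinting the strength 17 is rounded.
signals = {
    "Pricing":         (100, 0.25),
    "Output Capacity": (60,  0.15),
    "Context Window":  (57,  0.15),
    "Recency":         (56,  0.15),
    "Capabilities":    (17,  0.30),
}

total = 0.0
for name, (strength, weight) in signals.items():
    impact = round(strength * weight, 1)
    total += impact
    print(f"{name:16s} {strength:>3} x {weight:.2f} = +{impact:.1f}")

print(f"{'Composite':16s} = {total:.1f}")
```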
Community and practitioner feedback adds real-world signal on top of benchmarks and pricing.
Pricing: Free.