Arabic-first instruction-tuned 7B model from the HUMAIN ALLaM program (research originated at the SDAIA National Center for AI; paper: arXiv:2407.15390, ICLR 2025). Per the Hugging Face model card, the preview is trained from scratch in two stages: 4T English tokens followed by 1.2T mixed Arabic/English tokens, then instruction tuned on curated Arabic and English data. Published on Hugging Face as ALLaM-AI/ALLaM-7B-Instruct-preview with a byte-identical mirror under humain-ai/ALLaM-7B-Instruct-preview. Llama-family architecture with 32 layers, 4096 hidden size, 64000-entry Arabic-aware vocabulary, and a 4096-token context window. The full ALLaM program consumed roughly 5M A100 GPU-hours. Strong on Arabic instruction-following and knowledge tasks, with an independent AraLingBench score of 74.0.
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Pricing | 100 | 25% | +25.0 |
| Output Capacity | 60 | 15% | +9.0 |
| Context Window | 57 | 15% | +8.6 |
| Recency | 56 | 15% | +8.4 |
| Capabilities | 17 | 30% | +5.0 |
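The per-signal impacts above can be reproduced with a short sketch. The scoring rule (impact = strength × weight, summed into an overall score) is an assumption inferred from the listed values, not a documented formula; signal names and numbers come from the table.

```python
# Minimal sketch of the weighted signal score, assuming
# impact = strength * weight (inferred from the table, not documented).
signals = {
    "Pricing":         (100, 0.25),
    "Output Capacity": (60,  0.15),
    "Context Window":  (57,  0.15),
    "Recency":         (56,  0.15),
    "Capabilities":    (17,  0.30),
}

def impact(strength: float, weight: float) -> float:
    """Per-signal contribution: strength scaled by its weight."""
    return strength * weight

for name, (s, w) in signals.items():
    print(f"{name}: +{impact(s, w):.1f}")

total = sum(impact(s, w) for s, w in signals.values())
print(f"overall: {total:.1f}")
```

Note that per-row impacts in the table appear rounded to one decimal place, so the displayed rows may not sum exactly to the overall score.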
View the current model in the context of the same provider's recent release cadence.
Community and practitioner feedback adds real-world signal on top of benchmarks and pricing.
Share your experience with ALLaM 7B Instruct (preview) and help the community make better decisions.