by TII
A pure state-space language model from TII, announced in August 2024. It has 7 billion parameters and an attention-free state-space (SSLM) architecture that processes arbitrary-length sequences in constant memory; TII has demonstrated generation of 130,000+ tokens with no growth in memory use. Training covered approximately 5.5 trillion tokens, drawn primarily from RefinedWeb plus technical and code data from public sources and a curated final-stage mix. TII reports an average benchmark score of 64.09 across ARC, HellaSwag, MMLU, Winogrande, TruthfulQA, and GSM8K, outperforming Mistral-7B-v0.1 (60.97) and roughly matching gemma-7B (63.75). Distributed under the TII Falcon Mamba 7B License 1.0.
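As a quick orientation, here is a minimal usage sketch with Hugging Face transformers. It assumes the published checkpoint id `tiiuae/falcon-mamba-7b-instruct`, a transformers release with FalconMamba support (4.44+), and that the instruct checkpoint ships a chat template; treat it as a sketch under those assumptions, not a definitive recipe.

```python
# Minimal usage sketch. Assumptions: the Hugging Face checkpoint id
# "tiiuae/falcon-mamba-7b-instruct", transformers >= 4.44 (FalconMamba
# support), and a chat template on the instruct checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-mamba-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain what a state-space language model is in two sentences."
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Attention-free SSM: per-token generation cost and memory stay constant
# with sequence length, because there is no KV cache growing with context.
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The absence of a KV cache is what makes the memory footprint independent of how many tokens are generated, which is the property behind the 130,000+ token demonstration above.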
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Pricing | 100 | 25% | +25.0 |
| Context Window | 72 | 15% | +10.7 |
| Output Capacity | 65 | 15% | +9.8 |
| Capabilities | 17 | 30% | +5.0 |
| Recency | 22 | 15% | +3.4 |
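The Impact column appears to follow Impact ≈ Strength × Weight. Below is a hedged sketch of that aggregation; the listed impacts differ from the raw products by up to 0.1, which suggests the displayed strengths are themselves rounded.

```python
# Hypothetical reconstruction of the signal table's aggregation: each row's
# impact looks like strength * weight. Small mismatches (e.g. 72 * 0.15 =
# 10.8 vs. the listed +10.7) suggest the displayed strengths are rounded.
signals = {
    "Pricing":         (100, 0.25),
    "Context Window":  (72,  0.15),
    "Output Capacity": (65,  0.15),
    "Capabilities":    (17,  0.30),
    "Recency":         (22,  0.15),
}

total = 0.0
for name, (strength, weight) in signals.items():
    impact = strength * weight
    total += impact
    print(f"{name:<16} {impact:+.1f}")
print(f"{'Total':<16} {total:+.1f}")
```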
View this model against the provider’s recent shipping cadence.
- Falcon-H1-Arabic 34B Instruct (coding)
- Falcon-H1-Arabic 7B Instruct (coding)
- Falcon-H1-Arabic 3B Instruct (coding)
- Falcon Arabic 7B Instruct (coding)
- Falcon3 10B Instruct (coding)
- Falcon3 7B Instruct (coding)
- Falcon Mamba 7B Instruct (coding, current)
Community and practitioner feedback adds real-world signal on top of benchmarks and pricing.
Share your experience with Falcon Mamba 7B Instruct and help the community make better decisions.
Pricing: Free