MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It pairs a hybrid Mixture-of-Experts (MoE) architecture with a custom "lightning attention" mechanism, allowing it to process sequences of up to 1 million tokens while maintaining competitive FLOP efficiency. With 456 billion total parameters and 45.9 billion active per token, this variant is optimized for complex, multi-step reasoning tasks. Trained via a custom reinforcement learning pipeline (CISPO), M1 excels at long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open models such as DeepSeek R1 and Qwen3-235B.
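The efficiency claim rests on the ratio of active to total parameters, so a quick back-of-envelope check may help. This is a minimal sketch using the figures quoted above; the 2N FLOPs-per-token estimate is a common rule of thumb for a dense forward pass, not a published MiniMax number.

```python
# Rough check of the MoE sparsity figures quoted above: only a small
# fraction of the weights is active for any given token, which is why
# per-token compute stays low despite the large total parameter count.
TOTAL_PARAMS = 456e9    # total parameters (from the model card)
ACTIVE_PARAMS = 45.9e9  # parameters activated per token

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
# ~2 FLOPs per active parameter per token is the usual dense-forward
# rule of thumb (an assumption here, not an official figure).
flops_per_token = 2 * ACTIVE_PARAMS

print(f"Active fraction per token: {active_fraction:.1%}")    # ~10.1%
print(f"Approx. forward FLOPs per token: {flops_per_token:.2e}")  # ~9.18e10
```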
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Benchmarks | 62 | 30% | +18.7 |
| Recency | 82 | 15% | +12.2 |
| Capabilities | 50 | 20% | +10.0 |
| Context Window | 95 | 10% | +9.5 |
| Output Capacity | 77 | 10% | +7.7 |
| Pricing | 2 | 15% | +0.3 |
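The Impact column appears to be Strength × Weight, with small rounding differences in the displayed values (e.g. 62 × 30% = 18.6 vs. the listed +18.7). Below is a minimal sketch of the composite score under that assumed weighting scheme:

```python
# Hypothetical reconstruction of the signal-score table above:
# impact = strength * weight, summed into a composite score.
signals = {
    # name: (strength, weight)
    "Benchmarks":      (62, 0.30),
    "Recency":         (82, 0.15),
    "Capabilities":    (50, 0.20),
    "Context Window":  (95, 0.10),
    "Output Capacity": (77, 0.10),
    "Pricing":         (2,  0.15),
}

total = 0.0
for name, (strength, weight) in signals.items():
    impact = strength * weight
    total += impact
    print(f"{name:<16} strength={strength:>3}  weight={weight:.0%}  impact={impact:+.1f}")

print(f"Composite score: {total:.1f}")  # ~58.4
```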
Community and practitioner feedback adds real-world signal on top of benchmarks and pricing.
Cost estimator
Estimated savings of $33.14 per month versus the category average, based on data from verified sources.