MiniMax-01 is a model series that combines MiniMax-Text-01 for text generation and MiniMax-VL-01 for image understanding. It has 456 billion parameters, with 45.9 billion parameters activated per inference, and can handle a context of up to 4 million tokens. The text model adopts a hybrid architecture that combines Lightning Attention, Softmax Attention, and Mixture-of-Experts (MoE). The image model adopts the "ViT-MLP-LLM" framework and is trained on top of the text model. To read more about the release, see: https://www.minimaxi.com/en/news/minimax-01-series-2
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Output Capacity | 100 | 15% | +15.0 |
| Context Window | 95 | 15% | +14.3 |
| Capabilities | 33 | 30% | +10.0 |
| Recency | 54 | 15% | +8.0 |
| Pricing | 1 | 25% | +0.3 |
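The table above suggests each impact value is the signal's strength multiplied by its weight, with the weights summing to 100%. The sketch below is an assumption about that scoring; the signal names and the rounding behavior are taken from the table, not from any published formula.

```python
# Hypothetical reconstruction of the per-signal scoring shown in the table:
# impact ≈ strength × weight, where the weights sum to 100%.
# Signal names and values are copied from the table; the exact rounding
# the site uses is an assumption (some rows differ by ~0.1).
signals = {
    "Output Capacity": (100, 0.15),
    "Context Window": (95, 0.15),
    "Capabilities": (33, 0.30),
    "Recency": (54, 0.15),
    "Pricing": (1, 0.25),
}

def impact(strength: float, weight: float) -> float:
    """Weighted contribution of one signal to the overall score."""
    return round(strength * weight, 1)

# Overall score is the sum of the weighted contributions.
total = sum(impact(s, w) for s, w in signals.values())
```

Note that the weights (15% + 15% + 30% + 15% + 25%) sum to exactly 100%, so the total is a true weighted average of the signal strengths.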
Community and practitioner feedback adds real-world signals on top of benchmarks and pricing.
Cost estimator: saves $37.84 per month compared with the category average.