LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per input. It introduces a shortcut-connected MoE design to reduce communication overhead and achieve high throughput, while maintaining training stability through scaling strategies such as hyperparameter transfer, deterministic computation, and multi-stage optimization. This release, LongCat-Flash-Chat, is a non-thinking foundation model optimized for conversational and agentic tasks. It supports context windows up to 128K tokens and shows competitive performance across reasoning, coding, instruction-following, and domain benchmarks, with particular strengths in tool use and complex multi-step interactions.
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Benchmarks | 67 | 30% | +20.0 |
| Recency | 97 | 15% | +14.5 |
| Capabilities | 50 | 20% | +10.0 |
| Output Capacity | 85 | 10% | +8.5 |
| Context Window | 81 | 10% | +8.1 |
| Pricing | 1 | 15% | +0.1 |
Community and practitioner feedback adds real-world signals on top of benchmarks and pricing.
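The impact column in the table above appears to be each signal's strength multiplied by its weight. A minimal sketch of that scoring, assuming simple round-to-one-decimal (the table itself seems to round some products, e.g. 67 × 30% = 20.1 is shown as +20.0):

```python
# Hypothetical reconstruction of the weighted-signal score.
# Strengths and weights come from the table; the function name and
# rounding behavior are assumptions, not a documented formula.
signals = {
    "Benchmarks":      (67, 0.30),
    "Recency":         (97, 0.15),
    "Capabilities":    (50, 0.20),
    "Output Capacity": (85, 0.10),
    "Context Window":  (81, 0.10),
    "Pricing":         (1,  0.15),
}

def weighted_impacts(signals):
    """Return per-signal contributions (strength * weight) and their total."""
    impacts = {
        name: round(strength * weight, 1)
        for name, (strength, weight) in signals.items()
    }
    total = round(sum(strength * weight for strength, weight in signals.values()), 1)
    return impacts, total

impacts, total = weighted_impacts(signals)
```

Because the weights sum to 100%, the total is effectively a weighted average of the signal strengths on a 0–100 scale.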
Cost estimator: saves $38.74 per month versus the category average, based on verified sources.