Qwen3-Coder-30B-A3B-Instruct is a 30.5B-parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, and agentic tool use. Built on the Qwen3 architecture, it supports a native context length of 256K tokens (extendable to 1M with YaRN) and performs strongly on tasks involving function calls, browser use, and structured code completion. The model is optimized for instruction following without "thinking mode" and integrates well with OpenAI-compatible tool-use formats.
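As a minimal sketch of what "OpenAI-compatible tool-use format" means in practice, the snippet below builds a chat-completions request payload with one tool definition. The model identifier follows the Hugging Face naming convention, and the `run_tests` tool is a hypothetical example, not part of any real API; adapt both to your provider.

```python
import json

# Assumed model identifier (Hugging Face convention); adjust for your provider.
MODEL = "Qwen/Qwen3-Coder-30B-A3B-Instruct"

def build_tool_call_request(user_prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload with one tool.

    The "run_tests" tool is purely illustrative.
    """
    tools = [
        {
            "type": "function",
            "function": {
                "name": "run_tests",  # hypothetical tool for illustration
                "description": "Run the project's test suite and return results.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "path": {
                            "type": "string",
                            "description": "Test file or directory to run",
                        }
                    },
                    "required": ["path"],
                },
            },
        }
    ]
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": tools,
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

payload = build_tool_call_request("Fix the failing test in tests/test_parser.py")
print(json.dumps(payload, indent=2))
```

Because the format is OpenAI-compatible, this same payload shape works with standard OpenAI client libraries pointed at any endpoint serving the model.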
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Capabilities | 50 | 30% | +15.0 |
| Recency | 90 | 15% | +13.4 |
| Context Window | 83 | 15% | +12.4 |
| Output Capacity | 75 | 15% | +11.3 |
| Pricing | 0 | 25% | +0.1 |
Community and practitioner feedback adds real-world signals on top of benchmarks and pricing.
Cost Estimator
Saves $41.24 per month compared with the category average.