by Alibaba · Rank #160 · Score 69.1
Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports up to 262,144 tokens of context. This "thinking-only" variant enhances structured logical reasoning, mathematics, science, and long-form generation, showing strong benchmark performance across AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux. It always operates in reasoning mode, emitting its chain of thought terminated by a `</think>` tag, and is designed for long outputs (up to 81,920 tokens) in challenging domains. The model is instruction-tuned and excels at step-by-step reasoning, tool use, agentic workflows, and multilingual tasks. This release is the most capable open-source variant in the Qwen3-235B series, surpassing many closed models on structured reasoning use cases.
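Because the model always emits reasoning, downstream code usually needs to separate the chain of thought from the final answer. A minimal sketch, assuming (as the description suggests) that generated text ends its reasoning with a `</think>` tag and that the opening tag is supplied by the chat template rather than appearing in the output:

```python
def split_reasoning(output_text: str) -> tuple[str, str]:
    """Split a thinking-model completion into (reasoning, answer).

    Assumption: the output contains at most one closing </think> tag,
    with the chain of thought before it and the final answer after it.
    """
    marker = "</think>"
    if marker in output_text:
        thinking, _, answer = output_text.partition(marker)
        return thinking.strip(), answer.strip()
    # No marker: treat the whole completion as the answer.
    return "", output_text.strip()
```

If the serving stack already strips or structures the reasoning (some OpenAI-compatible servers return it in a separate field), this post-processing step is unnecessary.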
| Signal | Normalized | Weight | Contribution | Freshness |
|---|---|---|---|---|
| Capabilities (`capability`) | 66.7 | 30% | 20.0 | 2026-03-26T07:25:06.742Z |
| Pricing (`pricing_tier`) | 1.5 | 25% | 0.4 | 2026-03-26T07:25:06.742Z |
| Context Window (`context_window`) | 81.2 | 15% | 12.2 | 2026-03-26T07:25:06.742Z |
| Recency (`recency`) | 88.4 | 15% | 13.3 | 2026-03-26T07:25:06.742Z |
| Output Capacity (`output_capacity`) | 20.0 | 15% | 3.0 | 2026-03-26T07:25:06.742Z |
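Each contribution in the table is simply the normalized signal times its weight; a small reconstruction of that arithmetic, using the values above (note the listed overall score of 69.1 is evidently not the plain sum of these contributions, so the final aggregation presumably involves an additional step not shown here):

```python
# (normalized value, weight) per signal, copied from the score table.
signals = {
    "capability":      (66.7, 0.30),
    "pricing_tier":    (1.5,  0.25),
    "context_window":  (81.2, 0.15),
    "recency":         (88.4, 0.15),
    "output_capacity": (20.0, 0.15),
}

# contribution = normalized * weight, e.g. 66.7 * 0.30 = 20.0
contributions = {name: value * weight
                 for name, (value, weight) in signals.items()}
```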
| Feature | Supported |
|---|---|
| Vision | No |
| Reasoning | Yes |
| JSON mode | Yes |
| Streaming | Yes |
| Function calling | Yes |
| Web search | No |
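The supported features above map onto standard fields of an OpenAI-style chat-completion request. A sketch of such a payload, where the model ID string and the `calculator` tool schema are illustrative assumptions rather than values taken from official documentation (in practice you would likely use JSON mode or tools, not both, in one request):

```python
# Hypothetical request payload exercising streaming, JSON mode, and
# function calling; field names follow the OpenAI-compatible convention.
payload = {
    "model": "Qwen3-235B-A22B-Thinking-2507",  # assumed model ID
    "messages": [{"role": "user", "content": "What is 17 * 24?"}],
    "stream": True,                              # streaming: yes
    "response_format": {"type": "json_object"},  # JSON mode: yes
    "tools": [{                                  # function calling: yes
        "type": "function",
        "function": {
            "name": "calculator",  # hypothetical tool
            "description": "Evaluate an arithmetic expression.",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }],
}
```

Vision inputs and built-in web search would be rejected, consistent with the "No" rows in the table.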