by Alibaba · Rank #127 · Score 65.3
Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports a context window of up to 262,144 tokens.
| Signal | Normalized | Weight | Contribution | Freshness |
|---|---|---|---|---|
| Capabilities (`capability`) | 66.7 | 20% | 13.3 | 2026-05-10T07:50:50.046Z |
| Benchmarks (`benchmark`) | 64.3 | 30% | 19.3 | 2026-05-10T07:50:50.046Z |
| Pricing (`pricing_tier`) | 98.5 | 15% | 14.8 | 2026-05-10T07:50:50.046Z |
| Context Window (`context_window`) | 81.2 | 10% | 8.1 | 2026-05-10T07:50:50.046Z |
| Recency (`recency`) | 80.2 | 15% | 12.0 | 2026-05-10T07:50:50.046Z |
| Output Capacity (`output_capacity`) | 20.0 | 10% | 2.0 | 2026-05-10T07:50:50.046Z |
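Each contribution in the table appears to be the normalized signal value scaled by its weight. A minimal sketch of that per-signal computation, using the signal keys and values from the table (the headline score may include adjustments not reproduced here):

```python
# Hypothetical reconstruction of the per-signal contributions shown above:
# contribution = normalized_value * weight, rounded to one decimal place.
signals = {
    "capability":      (66.7, 0.20),
    "benchmark":       (64.3, 0.30),
    "pricing_tier":    (98.5, 0.15),
    "context_window":  (81.2, 0.10),
    "recency":         (80.2, 0.15),
    "output_capacity": (20.0, 0.10),
}

contributions = {
    name: round(value * weight, 1)
    for name, (value, weight) in signals.items()
}
# e.g. contributions["benchmark"] == 19.3, matching the table row.
```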
| Capability | Supported |
|---|---|
| Vision | No |
| Reasoning | Yes |
| JSON Mode | Yes |
| Streaming | Yes |
| Function Calling | Yes |
| Web Search | No |
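The capability flags above can serve as a preflight check before a request is built, rejecting features the model does not support. A minimal sketch, with flag names transcribed from the table (a hypothetical structure, not an official schema):

```python
# Capability flags transcribed from the table above; the dict layout and
# feature names are illustrative, not an official API.
CAPABILITIES = {
    "vision": False,
    "reasoning": True,
    "json_mode": True,
    "streaming": True,
    "function_calling": True,
    "web_search": False,
}

def supports(feature: str) -> bool:
    """Return True if the model card lists the feature as supported."""
    return CAPABILITIES.get(feature, False)

def unsupported(required: list[str]) -> list[str]:
    """Return the subset of required features the model lacks."""
    return [f for f in required if not supports(f)]
```

For example, a request needing JSON mode and vision would fail the check on `vision` alone, since the other features are listed as supported.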