Tongyi DeepResearch is an agentic large language model developed by Tongyi Lab. It is a mixture-of-experts model with 30 billion total parameters, of which only 3 billion are activated per token. Optimized for long-horizon, deep information-seeking tasks, it delivers state-of-the-art performance on benchmarks such as Humanity's Last Exam, BrowseComp, BrowseComp-ZH, WebWalkerQA, GAIA, xbench-DeepSearch, and FRAMES, making it a strong choice for complex agentic search, reasoning, and multi-step problem-solving compared with prior models.

The model is built on a fully automated synthetic-data pipeline that scales across pre-training, fine-tuning, and reinforcement learning. Large-scale continual pre-training on diverse agentic data strengthens reasoning and keeps the model's knowledge current. Training also uses end-to-end on-policy reinforcement learning with a customized Group Relative Policy Optimization (GRPO), including token-level gradients and negative-sample filtering for stability. At inference time, the model supports a standard ReAct loop for evaluating core abilities, plus an IterResearch-based 'Heavy' mode that scales test-time compute for maximum performance. It is well suited to advanced research agents, tool use, and heavy inference workflows.
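The ReAct pattern mentioned above interleaves model reasoning with tool calls until a final answer is produced. The minimal sketch below shows that control flow (Thought → Action → Observation, repeated); the stub "model" and the single `search` tool are illustrative stand-ins, not Tongyi DeepResearch's actual API or prompt format.

```python
def search(query: str) -> str:
    """Toy tool: a canned lookup standing in for a real web-search call."""
    corpus = {"capital of France": "Paris"}
    return corpus.get(query, "no result")


def stub_model(transcript: str) -> str:
    """Scripted stand-in for the LLM: picks the next step from the transcript."""
    if "Observation:" not in transcript:
        return "Thought: I should look this up.\nAction: search[capital of France]"
    return "Thought: The observation answers the question.\nFinal Answer: Paris"


def react_loop(question: str, max_steps: int = 5) -> str:
    """Run Thought/Action/Observation rounds until a final answer appears."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = stub_model(transcript)
        transcript += "\n" + step
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        if "Action: search[" in step:
            query = step.split("Action: search[", 1)[1].split("]", 1)[0]
            # Feed the tool result back so the next step can reason over it.
            transcript += f"\nObservation: {search(query)}"
    return "no answer"


print(react_loop("What is the capital of France?"))  # -> Paris
```

The Heavy mode differs mainly in how it manages this loop: IterResearch periodically restructures the accumulated context instead of letting the transcript grow unboundedly.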
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Capabilities | 67 | 30% | +20.0 |
| Recency | 98 | 15% | +14.8 |
| Output Capacity | 85 | 15% | +12.8 |
| Context Window | 81 | 15% | +12.2 |
| Pricing | 1 | 25% | +0.1 |
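Each impact in the table is approximately the signal's strength scaled by its weight (e.g. 67 × 30% ≈ +20.0). A minimal sketch of that weighted-sum scoring follows; the site's exact normalization and rounding are unknown, so small discrepancies with the table values are expected.

```python
# Signal scores from the table: name -> (strength, weight).
signals = {
    "Capabilities":    (67, 0.30),
    "Recency":         (98, 0.15),
    "Output Capacity": (85, 0.15),
    "Context Window":  (81, 0.15),
    "Pricing":         (1,  0.25),
}

# Assumed scoring rule: impact = strength * weight.
impacts = {name: s * w for name, (s, w) in signals.items()}
total = sum(impacts.values())

for name, impact in impacts.items():
    print(f"{name:15s} +{impact:5.1f}")
print(f"{'Total':15s} {total:6.1f}")
```

Note the weights sum to 100%, so the total is a convex combination of the signal strengths; the low Pricing strength is what keeps the overall score down despite strong capability and recency signals.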
Community and practitioner feedback adds real-world signals on top of benchmark results and pricing.
Cost Estimator
Saves an estimated $40.56 per month compared with the category average.