Qwen2.5-Coder-7B-Instruct is a 7B-parameter instruction-tuned language model optimized for code-related tasks such as code generation, code reasoning, and bug fixing. Built on the Qwen2.5 architecture, it incorporates RoPE, SwiGLU, RMSNorm, and grouped-query attention (GQA), and supports context lengths of up to 128K tokens via YaRN-based extrapolation. It is trained on a large corpus of source code, synthetic data, and text-code grounding data, giving robust performance across programming languages and agentic coding workflows. The model is part of the Qwen2.5-Coder family, is compatible with tools such as vLLM for efficient deployment, and is released under the Apache 2.0 license.
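Per Qwen's published guidance for the Qwen2.5 family, the extended context is enabled by adding a YaRN `rope_scaling` entry to the model's `config.json`. The fragment below is a sketch of that recipe assuming a 32K native window scaled 4x to 128K; check the checkpoint's own model card for the exact values:

```json
{
  "rope_scaling": {
    "type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

Serving frameworks such as vLLM read this field from the config when loading the model, so no code change is needed beyond the config edit.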
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Recency | 70 | 15% | +10.5 |
| Benchmarks | 31 | 30% | +9.3 |
| Context Window | 72 | 10% | +7.2 |
| Capabilities | 33 | 20% | +6.7 |
| Output Capacity | 20 | 10% | +2.0 |
| Pricing | 0 | 15% | +0.0 |
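The Impact column appears to be a weighted score, `impact = strength × weight`. A minimal sketch of that calculation, using the table's values (the displayed +6.7 for Capabilities suggests the underlying strengths are rounded, since 33 × 20% is 6.6):

```python
# Reconstruct the weighted-signal scores from the table above.
# Each signal contributes impact = strength * weight.
signals = {
    "Recency":         (70, 0.15),
    "Benchmarks":      (31, 0.30),
    "Context Window":  (72, 0.10),
    "Capabilities":    (33, 0.20),
    "Output Capacity": (20, 0.10),
    "Pricing":         (0,  0.15),
}

impacts = {name: round(strength * weight, 1)
           for name, (strength, weight) in signals.items()}
total = round(sum(impacts.values()), 1)  # overall weighted score
```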