Gemma 4 26B A4B IT is an instruction-tuned Mixture-of-Experts (MoE) model from Google DeepMind. Although it has 25.2B total parameters, only 3.8B are active per token during inference, delivering near-31B-class quality at a fraction of the compute cost. It supports multimodal input, including text, images, and video (up to 60 s at 1 fps), and offers a 256K-token context window, native function calling, a configurable thinking/reasoning mode, and structured output. Released under Apache 2.0.
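The function-calling support mentioned above is typically exercised through a tool-definition block in the request. A minimal sketch of such a payload, assuming the model is served behind an OpenAI-compatible endpoint under the name `gemma-4-26b-a4b-it` (the model identifier and the `get_weather` tool are illustrative assumptions, not from this page):

```python
import json

# Sketch of a chat request with one tool definition. The server URL is
# omitted; only the payload shape is shown. Model name and tool schema
# are hypothetical.
payload = {
    "model": "gemma-4-26b-a4b-it",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# Serialize as it would be sent over the wire.
body = json.dumps(payload)
print(body[:40])
```

If the model decides to call the tool, the response would carry a structured tool-call object rather than plain text, which the caller executes and feeds back in a follow-up message.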
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Benchmarks | 72 | 30% | +21.6 |
| Capabilities | 83 | 20% | +16.7 |
| Recency | 100 | 15% | +15.0 |
| Pricing | 100 | 15% | +14.9 |
| Output Capacity | 90 | 10% | +9.0 |
| Context Window | 86 | 10% | +8.6 |
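The score in the table above is a weighted sum: each signal's impact is its strength times its weight. A minimal sketch of that arithmetic, using the displayed values (the small mismatches, e.g. 83 × 20% = 16.6 vs. the shown +16.7, suggest the displayed strengths are rounded from finer-grained values):

```python
# Signal -> (strength, weight) as displayed in the table.
signals = {
    "Benchmarks":      (72, 0.30),
    "Capabilities":    (83, 0.20),
    "Recency":         (100, 0.15),
    "Pricing":         (100, 0.15),
    "Output Capacity": (90, 0.10),
    "Context Window":  (86, 0.10),
}

# Overall score = sum of strength * weight across all signals.
total = sum(strength * weight for strength, weight in signals.values())
print(round(total, 1))
```

Both the recomputed impacts and the displayed impact column sum to the same overall score of 85.8 out of 100.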
View the current model in the context of the same provider's recent release cadence.
| Model | Category |
|---|---|
| Gemma 4 26B A4B (current model) | coding |
| Gemma 4 31B | coding |
| Lyria 3 Pro Preview | coding |
| Lyria 3 Clip Preview | coding |
| Gemini 3.1 Flash Lite Preview | coding |
| Nano Banana 2 (Gemini 3.1 Flash Image Preview) | image generation |
| Gemini 3.1 Pro Preview Custom Tools | coding |
| Gemini 3.1 Pro Preview | coding |
Community and practitioner feedback adds real-world signals on top of benchmarks and pricing.
Share your experience with Gemma 4 26B A4B and help the community make better decisions.
Pricing, benchmarks, and service status come from different data sources, so each refreshes on its own cadence. The last verification time for each user-facing data surface is shown separately here.
Cost estimator

Saves an estimated $38.88 per month versus the category average, based on verified sources.