GLM-4.5V is a vision-language foundation model for multimodal agent applications. Built on a Mixture-of-Experts (MoE) architecture with 106B total parameters (12B activated), it achieves state-of-the-art results in video understanding, image Q&A, OCR, and document parsing, with strong gains in front-end web coding, grounding, and spatial reasoning. It offers a hybrid inference mode: a "thinking mode" for deep reasoning and a "non-thinking mode" for fast responses. Reasoning behavior can be toggled via the `enabled` boolean of the `reasoning` parameter. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config)
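As a minimal sketch, here is how the `reasoning` toggle might look in a request to the OpenRouter chat completions endpoint. The endpoint and the `reasoning.enabled` field follow the linked docs; the model slug `z-ai/glm-4.5v`, the use of the `requests` library, and the placeholder API key are assumptions for illustration.

```python
import requests

# Sketch: toggle GLM-4.5V's thinking mode via OpenRouter.
# OPENROUTER_API_KEY is a placeholder; supply your own key.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer OPENROUTER_API_KEY"},
    json={
        "model": "z-ai/glm-4.5v",  # assumed slug for this listing
        "messages": [{"role": "user", "content": "Describe this page."}],
        # Set to False to use the fast non-thinking mode instead.
        "reasoning": {"enabled": True},
    },
)
print(response.json()["choices"][0]["message"]["content"])
```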
| Signal | Strength | Weight | Impact |
|---|---|---|---|
| Capabilities | 83 | 30% | +25.0 |
| Recency | 91 | 15% | +13.7 |
| Context Window | 76 | 15% | +11.5 |
| Output Capacity | 70 | 15% | +10.5 |
| Pricing | 2 | 25% | +0.5 |
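From the numbers above, each Impact value appears to be approximately Strength × Weight (e.g., 83 × 0.30 ≈ +25.0). The page does not state this formula explicitly, so the following is a hypothetical reconstruction that also sums the weighted contributions into a composite score:

```python
# Assumed scoring model: Impact = Strength * Weight, summed to a
# composite. Values are copied from the signal table above.
signals = {
    "Capabilities":    (83, 0.30),
    "Recency":         (91, 0.15),
    "Context Window":  (76, 0.15),
    "Output Capacity": (70, 0.15),
    "Pricing":         (2,  0.25),
}

composite = 0.0
for name, (strength, weight) in signals.items():
    impact = strength * weight  # per-signal contribution
    composite += impact
    print(f"{name:<16} {impact:+.1f}")
print(f"{'Composite':<16} {composite:+.1f}")
```

Small differences from the table (e.g., 76 × 0.15 = 11.4 vs. the listed +11.5) suggest the page rounds or weights slightly differently than this sketch.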
Community and practitioner feedback adds real-world signal on top of benchmarks and pricing.
Share your experience with GLM 4.5V and help the community make better decisions.
Cost Estimator
You save $31.72/month versus the category average.