Anthropic (14 models) vs Cohere (4 models) - compared across composite scores, pricing, capabilities, and context windows.
| Anthropic | Score | Cohere | Score |
|---|---|---|---|
| Claude Opus 4.6 (Fast) | 90 | Command A | 51 |
| Claude Opus 4.6 | 90 | Command R+ (08-2024) | 49 |
| Claude Sonnet 4.6 | 85 | Command R7B (12-2024) | 36 |

| Capability | Anthropic | Cohere | Leader |
|---|---|---|---|
| Vision | 14/14 | 0/4 | Anthropic |
| Reasoning | 12/14 | 0/4 | Anthropic |
| Function Calling | 14/14 | 2/4 | Anthropic |
| JSON Mode | 8/14 | 4/4 | Cohere |
| Web Search | 13/14 | 0/4 | Anthropic |
| Streaming | 14/14 | 4/4 | Tie |
| Image Output | 0/14 | 0/4 | Tie |

| Metric | Anthropic | Cohere |
|---|---|---|
| Cheapest Input (per 1M tokens) | $0.250 Claude 3 Haiku | $0.037 Command R7B (12-2024) |
| Cheapest Output (per 1M tokens) | $1.25 | $0.150 |
| Most Expensive Input (per 1M tokens) | $30.00 Claude Opus 4.6 (Fast) | $2.50 Command A |
| Most Expensive Output (per 1M tokens) | $150.00 | $10.00 |
| Free Models | 0 | 0 |
| Max Context Window | 1.0M | 256K |

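Per-1M-token prices translate directly into per-request costs. A minimal sketch (the `cost_per_request` helper and the 10K-in/1K-out workload are illustrative assumptions; prices come from the tables on this page):

```python
def cost_per_request(input_tokens: int, output_tokens: int,
                     input_per_m: float, output_per_m: float) -> float:
    """Estimate USD cost of one request from per-1M-token prices."""
    return (input_tokens / 1e6) * input_per_m + (output_tokens / 1e6) * output_per_m

# Illustrative workload: 10K-token prompt, 1K-token reply.
claude_sonnet_46 = cost_per_request(10_000, 1_000, 3.00, 15.00)  # Claude Sonnet 4.6
command_a = cost_per_request(10_000, 1_000, 2.50, 10.00)         # Command A

print(f"Claude Sonnet 4.6: ${claude_sonnet_46:.4f}")  # $0.0450
print(f"Command A:         ${command_a:.4f}")         # $0.0350
```

Note that output tokens dominate the bill for generation-heavy workloads, since output rates run 4-5x the input rates for every model listed.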
| Model | Score | Input $/M | Output $/M |
|---|---|---|---|
| Claude Opus 4.6 (Fast) | 90 | $30.00 | $150.00 |
| Claude Opus 4.6 | 90 | $5.00 | $25.00 |
| Claude Sonnet 4.6 | 85 | $3.00 | $15.00 |
| Claude Opus 4.5 | 85 | $5.00 | $25.00 |
| Claude Sonnet 4.5 | 82 | $3.00 | $15.00 |
| Claude Opus 4 | 82 | $15.00 | $75.00 |
| Claude Opus 4.7 | 79 | $5.00 | $25.00 |
| Claude Opus 4.1 | 75 | $15.00 | $75.00 |
| Claude 3.7 Sonnet (thinking) | 75 | $3.00 | $15.00 |
| Claude Sonnet 4 | 74 | $3.00 | $15.00 |
| Claude 3.7 Sonnet | 73 | $3.00 | $15.00 |
| Claude Haiku 4.5 | 70 | $1.00 | $5.00 |
| Claude 3.5 Haiku | 58 | $0.800 | $4.00 |
| Claude 3 Haiku | 50 | $0.250 | $1.25 |

| Model | Score | Input $/M | Output $/M |
|---|---|---|---|
| Command A | 51 | $2.50 | $10.00 |
| Command R+ (08-2024) | 49 | $2.50 | $10.00 |
| Command R (08-2024) | 49 | $0.150 | $0.600 |
| Command R7B (12-2024) | 36 | $0.037 | $0.150 |

Anthropic's cheapest option, Claude 3 Haiku at $1.25/M output tokens, still includes vision, which spans all 14 Anthropic models (reasoning covers 12 of 14), while Cohere's $0.150/M Command models lack these features entirely. The 39-point performance gap between top models (Claude Opus 4.6 at 90/100 vs Command A at 51/100) suggests Anthropic optimizes for capability density over price competition.
Cohere focuses on text-only workflows, with just 2 of 4 models supporting function calling and zero vision support, while Anthropic provides 100% vision coverage and 86% reasoning coverage (12/14) across its 14 models. This specialization allows Cohere to price 15x lower at the high end ($10 vs $150/M output tokens) but limits it to traditional NLP tasks.
The 1M-token context window enables processing entire codebases or document collections in a single prompt, critical for enterprise RAG systems, where Anthropic's 93% web search coverage (13/14 models) also outperforms Cohere's 0%. At enterprise scale, the $140/M output-token premium for Anthropic's top tier over Cohere's often costs less than the engineering required to chunk workflows around Cohere's 256K limit.
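The chunking overhead is easy to quantify; a minimal sketch (the 800K-token corpus size and the `chunks_needed` helper are illustrative assumptions):

```python
import math

def chunks_needed(corpus_tokens: int, context_window: int) -> int:
    """Number of separate prompts needed to cover a corpus under a context limit."""
    return max(1, math.ceil(corpus_tokens / context_window))

corpus = 800_000  # e.g. a large codebase (illustrative)
print(chunks_needed(corpus, 1_000_000))  # 1 -> fits in Anthropic's 1M window
print(chunks_needed(corpus, 256_000))    # 4 -> must be split for Cohere's 256K limit
```

Real RAG pipelines also need chunk overlap and result merging, so the engineering cost grows faster than the raw chunk count suggests.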
Cohere's Command R+ at $10/M output tokens costs a third less than Anthropic's comparable mid-tier Sonnet models ($15/M) and 15x less than Anthropic's top tier ($150/M), while function calling remains available on half of Cohere's lineup, making it viable for high-volume classification or extraction tasks. The single open source model in Cohere's portfolio also enables on-premise deployment, impossible with Anthropic's fully proprietary stack of 14 models.
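At volume, that mid-tier output-price difference compounds; a rough sketch (the 500M-token monthly volume is an illustrative assumption, per-1M rates come from the tables above):

```python
monthly_output_tokens = 500_000_000  # illustrative monthly volume

command_r_plus_rate = 10.00  # $/M output, Command R+ (08-2024)
claude_sonnet_rate = 15.00   # $/M output, Claude Sonnet 4.6

cohere_monthly = monthly_output_tokens / 1e6 * command_r_plus_rate
anthropic_monthly = monthly_output_tokens / 1e6 * claude_sonnet_rate

print(f"Cohere:    ${cohere_monthly:,.0f}/mo")     # $5,000/mo
print(f"Anthropic: ${anthropic_monthly:,.0f}/mo")  # $7,500/mo
```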
Cohere's 0% reasoning coverage across 4 models versus Anthropic's 86% (12/14 models) indicates a focus on retrieval and generation rather than complex analysis. Cohere's narrow output-price band ($0.15-$10/M) and 46/100 average score target cost-sensitive production workloads like semantic search, while Anthropic's $1.25-$150/M range with a 76/100 average serves diverse use cases from chatbots to research assistants.
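The portfolio averages can be recomputed directly from the score tables above:

```python
# Composite scores copied from the per-provider model tables on this page.
anthropic_scores = [90, 90, 85, 85, 82, 82, 79, 75, 75, 74, 73, 70, 58, 50]
cohere_scores = [51, 49, 49, 36]

print(round(sum(anthropic_scores) / len(anthropic_scores)))  # 76
print(round(sum(cohere_scores) / len(cohere_scores)))        # 46
```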