OpenAI (67 models) vs Anthropic (14 models) - compared across composite scores, pricing, capabilities, and context windows.
| Capability | OpenAI | Anthropic | Leader (by model count) |
|---|---|---|---|
| Vision | 45/67 | 14/14 | OpenAI |
| Reasoning | 37/67 | 12/14 | OpenAI |
| Function Calling | 57/67 | 14/14 | OpenAI |
| JSON Mode | 63/67 | 8/14 | OpenAI |
| Web Search | 28/67 | 13/14 | OpenAI |
| Streaming | 65/67 | 14/14 | OpenAI |
| Image Output | 4/67 | 0/14 | OpenAI |
| Metric | OpenAI | Anthropic |
|---|---|---|
| Cheapest Input (per 1M tokens) | $0.030 gpt-oss-20b | $0.250 Claude 3 Haiku |
| Cheapest Output (per 1M tokens) | $0.140 | $1.25 |
| Most Expensive Input (per 1M tokens) | $150.00 o1-pro | $30.00 Claude Opus 4.6 (Fast) |
| Most Expensive Output (per 1M tokens) | $600.00 | $150.00 |
| Free Models | 2 | 0 |
| Max Context Window | 1.1M | 1.0M |
Top 20 OpenAI models by composite score:

| Model | Score | Input $/M | Output $/M |
|---|---|---|---|
| GPT-5.4 Pro | 92 | $30.00 | $180.00 |
| GPT-5.4 | 92 | $2.50 | $15.00 |
| GPT-5.2 Pro | 91 | $21.00 | $168.00 |
| GPT-5.2-Codex | 90 | $1.75 | $14.00 |
| GPT-5.2 | 90 | $1.75 | $14.00 |
| GPT-5.3-Codex | 89 | $1.75 | $14.00 |
| GPT-5 Pro | 89 | $15.00 | $120.00 |
| GPT-5.1-Codex-Max | 88 | $1.25 | $10.00 |
| GPT-5 Codex | 88 | $1.25 | $10.00 |
| GPT-5 | 88 | $1.25 | $10.00 |
| GPT-5.3 Chat | 87 | $1.75 | $14.00 |
| GPT-5.1 | 87 | $1.25 | $10.00 |
| GPT-5.1-Codex | 87 | $1.25 | $10.00 |
| GPT-5.1-Codex-Mini | 87 | $0.250 | $2.00 |
| o3 Deep Research | 87 | $10.00 | $40.00 |
| o3 Pro | 87 | $20.00 | $80.00 |
| o3 | 87 | $2.00 | $8.00 |
| GPT-5.1 Chat | 87 | $1.25 | $10.00 |
| o4 Mini Deep Research | 81 | $2.00 | $8.00 |
| o4 Mini | 81 | $1.10 | $4.40 |
All 14 Anthropic models by composite score:

| Model | Score | Input $/M | Output $/M |
|---|---|---|---|
| Claude Opus 4.6 (Fast) | 90 | $30.00 | $150.00 |
| Claude Opus 4.6 | 90 | $5.00 | $25.00 |
| Claude Sonnet 4.6 | 85 | $3.00 | $15.00 |
| Claude Opus 4.5 | 85 | $5.00 | $25.00 |
| Claude Sonnet 4.5 | 82 | $3.00 | $15.00 |
| Claude Opus 4 | 82 | $15.00 | $75.00 |
| Claude Opus 4.7 | 79 | $5.00 | $25.00 |
| Claude Opus 4.1 | 75 | $15.00 | $75.00 |
| Claude 3.7 Sonnet (thinking) | 75 | $3.00 | $15.00 |
| Claude Sonnet 4 | 74 | $3.00 | $15.00 |
| Claude 3.7 Sonnet | 73 | $3.00 | $15.00 |
| Claude Haiku 4.5 | 70 | $1.00 | $5.00 |
| Claude 3.5 Haiku | 58 | $0.800 | $4.00 |
| Claude 3 Haiku | 50 | $0.250 | $1.25 |
OpenAI's sprawling portfolio includes 5 open-source models and 2 free options, creating entry points at $0.030/M input tokens compared to Anthropic's minimum of $0.250/M. However, this breadth yields a 49/100 average score versus Anthropic's focused 55/100: OpenAI's bottom-tier models dilute overall performance, even though its top model, GPT-5.4, edges out Anthropic's best, Claude Opus 4.6, by just 2 points (92 vs 90).
Anthropic achieves 100% vision coverage (14/14 models) and 100% function-calling support, compared to OpenAI's 67% (45/67) and 85% (57/67) respectively. OpenAI's reasoning coverage sits at just 55% (37/67 models) versus Anthropic's 86% (12/14), suggesting Anthropic prioritizes feature completeness across a small lineup while OpenAI carries more specialized or legacy models.
OpenAI's output pricing spans $0.140 to $600/M tokens, targeting everything from hobbyists to enterprise deployments, while Anthropic's $1.25 to $150/M range focuses on production use cases. This pricing philosophy aligns with model counts: OpenAI's 67 models serve diverse market segments, including 2 free options, while Anthropic's 14 models target users willing to pay premium prices for consistent quality.
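The pricing arithmetic above is easy to sanity-check in code. The sketch below computes per-request cost from the per-1M-token rates quoted in this article's tables; the token counts in the example are made-up workload assumptions, not measured values.

```python
# Per-request cost from (input $/M, output $/M) rates, as listed in the
# score tables above. Token counts in the example are hypothetical.
PRICES = {
    "gpt-5.4": (2.50, 15.00),
    "claude-sonnet-4.6": (3.00, 15.00),
    "claude-3-haiku": (0.25, 1.25),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request: tokens * rate, scaled per million."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 2_000, 500):.6f}")
```

At this scale the spread is stark: the same request costs roughly 11x more on GPT-5.4 than on Claude 3 Haiku, which is why the bottom of each provider's range matters as much as the top.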
Anthropic delivers 100% coverage across vision and function calling, with 86% reasoning support across its 14 models, while OpenAI requires cherry-picking from 67 models to find the 27 that support all three capabilities (40% coverage). For teams needing guaranteed multimodal features, Anthropic's higher price floor ($0.250/M input tokens) eliminates the capability uncertainty that OpenAI's fragmented portfolio introduces.
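That cherry-picking step amounts to a set-containment filter over a model catalog. A minimal sketch, using a hypothetical catalog (the entries and capability flags below are illustrative, not a full provider listing):

```python
# Filter a model catalog down to models supporting every required
# capability. Catalog entries here are illustrative examples only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    name: str
    capabilities: frozenset

CATALOG = [
    Model("gpt-5.4", frozenset({"vision", "reasoning", "function_calling"})),
    Model("o4-mini", frozenset({"reasoning", "function_calling"})),
    Model("claude-opus-4.6", frozenset({"vision", "reasoning", "function_calling"})),
]

def with_capabilities(models, required):
    """Keep only models whose capability set contains every required flag."""
    required = frozenset(required)
    return [m for m in models if required <= m.capabilities]

multimodal = with_capabilities(CATALOG, {"vision", "reasoning", "function_calling"})
print([m.name for m in multimodal])  # o4-mini is filtered out (no vision)
```

With a uniform lineup like Anthropic's, this filter is a no-op; with a mixed lineup it becomes a required step in model selection.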
Both providers max out at approximately 1M tokens of context (OpenAI at 1.1M, Anthropic at 1.0M), but OpenAI likely concentrates that capability in its premium models, given its 49/100 average score. With Anthropic averaging 55/100 across just 14 models, developers get more consistent access to large context windows without navigating OpenAI's 67-model maze to find comparable performance.
OpenAI's 5 open-source models and 2 free tiers enable local development and testing before production deployment, while Anthropic's lack of free options forces an immediate cost commitment starting at $0.250/M input tokens. However, Anthropic's 93% web-search coverage (13/14 models) versus OpenAI's 42% (28/67) provides better integrated retrieval for RAG applications without additional tooling.
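To see what that cost commitment looks like in practice, a back-of-the-envelope monthly estimate at the Claude 3 Haiku rates quoted above ($0.250/M input, $1.25/M output); the request volume and token sizes are hypothetical assumptions:

```python
# Monthly spend estimate: requests/day * days * per-request cost.
# Rates match the Claude 3 Haiku row above; volumes are hypothetical.
def monthly_cost(requests_per_day, in_tokens, out_tokens,
                 in_price_per_m, out_price_per_m, days=30):
    per_request = (in_tokens * in_price_per_m +
                   out_tokens * out_price_per_m) / 1_000_000
    return requests_per_day * days * per_request

# 10,000 requests/day, 1,500 input + 400 output tokens each.
print(f"${monthly_cost(10_000, 1_500, 400, 0.25, 1.25):,.2f}/month")
# → $262.50/month
```

Even at Anthropic's cheapest tier, a moderate workload carries a real monthly bill, whereas an open-source model can absorb the same traffic at the cost of self-hosting.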