OpenAI (67 models) vs Google (29 models) - compared across composite scores, pricing, capabilities, and context windows.
| Capability | OpenAI | Google | Leader |
|---|---|---|---|
| Vision | 45/67 | 25/29 | OpenAI |
| Reasoning | 37/67 | 17/29 | OpenAI |
| Function Calling | 57/67 | 19/29 | OpenAI |
| JSON Mode | 63/67 | 26/29 | OpenAI |
| Web Search | 28/67 | 16/29 | OpenAI |
| Streaming | 65/67 | 27/29 | OpenAI |
| Image Output | 4/67 | 4/29 | Tie |
| Metric | OpenAI | Google |
|---|---|---|
| Cheapest Input (per 1M tokens) | $0.030 gpt-oss-20b | $0.040 Gemma 3 4B |
| Cheapest Output (per 1M tokens) | $0.140 | $0.080 |
| Most Expensive Input (per 1M tokens) | $150.00 o1-pro | $2.00 Gemini 3.1 Pro Preview Custom Tools |
| Most Expensive Output (per 1M tokens) | $600.00 | $12.00 |
| Free Models | 2 | 4 |
| Max Context Window | 1.1M | 1.0M |
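Per-1M-token prices scale linearly, so the dollar cost of a single request is just a weighted sum of its input and output token counts. A minimal sketch, using the GPT-5 prices from the table below:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_m: float, output_per_m: float) -> float:
    """Cost in dollars for one request, given per-1M-token prices."""
    return (input_tokens / 1_000_000) * input_per_m \
         + (output_tokens / 1_000_000) * output_per_m

# 10k input / 2k output tokens on GPT-5 ($1.25 in, $10.00 out per 1M):
cost = request_cost(10_000, 2_000, 1.25, 10.00)
print(f"${cost:.4f}")  # $0.0325
```

Note how output pricing dominates even at a 5:1 input:output ratio, which is why the output columns below matter most for generation-heavy workloads.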
| OpenAI Model | Score | Input $/M | Output $/M |
|---|---|---|---|
| GPT-5.4 Pro | 92 | $30.00 | $180.00 |
| GPT-5.4 | 92 | $2.50 | $15.00 |
| GPT-5.2 Pro | 91 | $21.00 | $168.00 |
| GPT-5.2-Codex | 90 | $1.75 | $14.00 |
| GPT-5.2 | 90 | $1.75 | $14.00 |
| GPT-5.3-Codex | 89 | $1.75 | $14.00 |
| GPT-5 Pro | 89 | $15.00 | $120.00 |
| GPT-5.1-Codex-Max | 88 | $1.25 | $10.00 |
| GPT-5 Codex | 88 | $1.25 | $10.00 |
| GPT-5 | 88 | $1.25 | $10.00 |
| GPT-5.3 Chat | 87 | $1.75 | $14.00 |
| GPT-5.1 | 87 | $1.25 | $10.00 |
| GPT-5.1-Codex | 87 | $1.25 | $10.00 |
| GPT-5.1-Codex-Mini | 87 | $0.250 | $2.00 |
| o3 Deep Research | 87 | $10.00 | $40.00 |
| o3 Pro | 87 | $20.00 | $80.00 |
| o3 | 87 | $2.00 | $8.00 |
| GPT-5.1 Chat | 87 | $1.25 | $10.00 |
| o4 Mini Deep Research | 81 | $2.00 | $8.00 |
| o4 Mini | 81 | $1.10 | $4.40 |
| Google Model | Score | Input $/M | Output $/M |
|---|---|---|---|
| Gemini 3 Flash Preview | 88 | $0.500 | $3.00 |
| Gemini 2.5 Pro | 84 | $1.25 | $10.00 |
| Gemini 2.5 Pro Preview 06-05 | 84 | $1.25 | $10.00 |
| Gemini 2.5 Pro Preview 05-06 | 84 | $1.25 | $10.00 |
| Gemini 3.1 Pro Preview Custom Tools | 81 | $2.00 | $12.00 |
| Gemini 3.1 Pro Preview | 81 | $2.00 | $12.00 |
| Gemma 4 31B (free) | 81 | Free | Free |
| Gemma 4 31B | 81 | $0.130 | $0.380 |
| Gemini 3.1 Flash Lite Preview | 80 | $0.250 | $1.50 |
| Gemini 2.5 Flash Lite Preview 09-2025 | 79 | $0.100 | $0.400 |
| Gemini 2.5 Flash Lite | 79 | $0.100 | $0.400 |
| Gemini 2.5 Flash | 79 | $0.300 | $2.50 |
| Gemma 2 27B | 77 | $0.650 | $0.650 |
| Gemma 4 26B A4B (free) | 73 | Free | Free |
| Gemma 4 26B A4B | 73 | $0.060 | $0.330 |
| Gemini 2.0 Flash | 72 | $0.100 | $0.400 |
| Gemini 2.0 Flash Lite | 59 | $0.075 | $0.300 |
| Lyria 3 Pro Preview | 40 | Free | Free |
| Lyria 3 Clip Preview | 40 | Free | Free |
| Gemma 3n 4B | 40 | $0.060 | $0.120 |
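One way to read the two model tables together is score per output dollar, a rough value heuristic (not an official metric) computed from a handful of paid rows above:

```python
# (model, composite score, output $/M) taken from the tables above; paid models only.
models = [
    ("GPT-5.4", 92, 15.00),
    ("GPT-5.1-Codex-Mini", 87, 2.00),
    ("Gemini 3 Flash Preview", 88, 3.00),
    ("Gemini 2.5 Flash Lite", 79, 0.40),
    ("Gemma 4 26B A4B", 73, 0.33),
]

# Rank by composite score per dollar of output pricing.
ranked = sorted(models, key=lambda m: m[1] / m[2], reverse=True)
for name, score, price in ranked:
    print(f"{name}: {score / price:.1f} points per output $")
```

On this crude measure the small open-weight and Flash-Lite models dominate, while flagships like GPT-5.4 trade efficiency for absolute score.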
OpenAI's closed-source focus allows tighter optimization: their average score of 49/100 across the full catalog beats Google's 45/100 despite Google's larger open-source portfolio. Google's open Gemma variants span a wide 40-81 range, while OpenAI's proprietary GPT-5.4 tops the field at 92, suggesting different philosophical approaches to model development and distribution.
Google's aggressive pricing, with options as low as $0.040/M tokens, reflects their infrastructure advantage and willingness to compete on cost, though OpenAI's open-weight gpt-oss-20b actually undercuts that floor at $0.030/M. At the top end, Google's most expensive model caps at $12.00/M output while OpenAI extends to $600.00/M, indicating OpenAI targets premium enterprise use cases where performance justifies 50x higher costs.
OpenAI's near-universal function calling support (57 of 67 models, 85%) reflects their API-first strategy for enterprise integration, while Google's 19 of 29 (66%) suggests they prioritize the feature mainly in newer Gemini models. This gap is critical for production deployments, where tool use is often the primary integration point.
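Function calling in these APIs works by attaching JSON-schema tool definitions to a request. A minimal sketch of such a payload; the `get_weather` tool and the model name are made-up examples, and the overall shape follows OpenAI's `tools` parameter:

```python
import json

# Made-up example tool; the schema shape follows OpenAI's `tools` parameter.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# The body a client would POST alongside the conversation messages.
payload = {
    "model": "gpt-5",  # placeholder model name
    "messages": [{"role": "user", "content": "Weather in Oslo?"}],
    "tools": tools,
}
print(json.dumps(payload, indent=2))
```

The model replies with a structured tool call (function name plus JSON arguments) instead of free text, which is what makes the capability load-bearing for production integrations.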
Despite Google's search dominance, their AI models do not lead this category outright: OpenAI ships more models with web search integration in absolute terms (28 of 67, or 42%, versus Google's 16 of 29), likely reflecting a strategic separation between Google Search and their LLM products. It is a mild capability inversion that the company built on search cedes the absolute lead in search-connected models to OpenAI, even if Google's per-model coverage (55%) is higher.
Google actually leads in vision capability coverage, with 25 of 29 models (86%) supporting vision versus OpenAI's 45 of 67 (67%), yet OpenAI's top model still outscores Google's best by 4 points (92 vs 88). This suggests Google prioritizes multimodal features across their portfolio while OpenAI concentrates performance in flagship models, reflecting different approaches to capability distribution.
Google's 2x larger free tier (4 free models to OpenAI's 2) provides more experimentation options for prototyping, especially with 15 total open-source models available for self-hosting versus OpenAI's 5. For production workloads under 1M tokens/month, Google's free offerings, which range from the Lyria previews at 40 up to Gemma 4 31B at 81, may suffice, while OpenAI's cheapest paid option starts at $0.030/M input tokens.
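For the under-1M-tokens/month case, the break-even question is simple arithmetic. A sketch assuming a hypothetical 3:1 input:output token split (the split ratio is an assumption, not data from the tables):

```python
def monthly_cost(total_tokens: int, input_per_m: float, output_per_m: float,
                 input_share: float = 0.75) -> float:
    """Monthly dollar cost, assuming `input_share` of tokens are input tokens."""
    inp = total_tokens * input_share
    out = total_tokens * (1 - input_share)
    return inp / 1e6 * input_per_m + out / 1e6 * output_per_m

# 1M tokens/month on OpenAI's cheapest model (gpt-oss-20b: $0.03 in / $0.14 out):
print(f"${monthly_cost(1_000_000, 0.03, 0.14):.4f}")  # $0.0575, vs $0 on a free Gemma tier
```

At this scale the paid floor is pennies per month, so the free tier's real value is in avoiding billing setup and rate negotiations rather than the dollars themselves.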