Complete pricing breakdown for all 13 DeepSeek API models. Compare input and output costs per million tokens for DeepSeek R1, V3, and Chat models. Includes a cost calculator and side-by-side comparison with OpenAI and Anthropic.
DeepSeek is a Chinese AI research lab that has gained significant attention for developing high-performance open-source language models. Founded in 2023 and headquartered in Hangzhou, China, DeepSeek focuses on building AI systems that push the boundaries of reasoning, coding, and general intelligence -- while keeping costs dramatically lower than Western competitors.
Their flagship DeepSeek R1 reasoning model made waves by matching OpenAI o1-level performance at a fraction of the price. The DeepSeek V3 model delivers GPT-4o-class capabilities for general tasks, coding, and multilingual understanding. All DeepSeek models are released with open weights, meaning developers can self-host them or access them through the official API with pay-per-token pricing.
| Model | Input $/1M | Output $/1M |
|---|---|---|
| DeepSeek V4 Flash | $0.140 | $0.280 |
| R1 Distill Qwen 32B | $0.290 | $0.290 |
| DeepSeek V3.2 | $0.252 | $0.378 |
| DeepSeek V3.2 Exp | $0.270 | $0.410 |
| DeepSeek V3.2 Speciale | $0.287 | $0.431 |
| DeepSeek V3.1 | $0.150 | $0.750 |
| DeepSeek V3 0324 | $0.200 | $0.770 |
| R1 Distill Llama 70B | $0.700 | $0.800 |
| DeepSeek V4 Pro | $0.435 | $0.870 |
| DeepSeek V3 | $0.320 | $0.890 |
| DeepSeek V3.1 Terminus | $0.270 | $0.950 |
| R1 0528 | $0.500 | $2.15 |
| R1 | $0.700 | $2.50 |
See how DeepSeek API pricing stacks up against OpenAI (GPT) and Anthropic (Claude) models. DeepSeek is known for offering comparable performance at significantly lower prices. All prices in USD per million tokens.
| DeepSeek Model | Input $/1M | Output $/1M |
|---|---|---|
| DeepSeek V4 Flash | $0.140 | $0.280 |
| R1 Distill Qwen 32B | $0.290 | $0.290 |
| DeepSeek V3.2 | $0.252 | $0.378 |
| DeepSeek V3.2 Exp | $0.270 | $0.410 |
| DeepSeek V3.2 Speciale | $0.287 | $0.431 |
| DeepSeek V3.1 | $0.150 | $0.750 |
| DeepSeek V3 0324 | $0.200 | $0.770 |
| R1 Distill Llama 70B | $0.700 | $0.800 |
| OpenAI Model | Input $/1M | Output $/1M |
|---|---|---|
| gpt-oss-120b (free) | Free | Free |
| gpt-oss-20b (free) | Free | Free |
| Sora | Free | Free |
| gpt-oss-20b | $0.030 | $0.140 |
| gpt-oss-120b | $0.039 | $0.180 |
| gpt-oss-safeguard-20b | $0.075 | $0.300 |
| GPT-5 Nano | $0.050 | $0.400 |
| GPT-4.1 Nano | $0.100 | $0.400 |
| Anthropic Model | Input $/1M | Output $/1M |
|---|---|---|
| Claude 3 Haiku | $0.250 | $1.25 |
| Claude 3.5 Haiku | $0.800 | $4.00 |
| Claude Haiku 4.5 | $1.00 | $5.00 |
| Claude Sonnet 4.6 | $3.00 | $15.00 |
| Claude Sonnet 4.5 | $3.00 | $15.00 |
| Claude Sonnet 4 | $3.00 | $15.00 |
| Claude 3.7 Sonnet | $3.00 | $15.00 |
| Claude 3.7 Sonnet (thinking) | $3.00 | $15.00 |
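The tables above make per-workload comparison straightforward. A minimal sketch in Python, with per-million-token prices hardcoded from the tables (the model selection is illustrative, not exhaustive):

```python
# Per-million-token prices (input, output) in USD, copied from the tables above.
PRICES = {
    "DeepSeek V3.2": (0.252, 0.378),
    "GPT-4.1 Nano": (0.100, 0.400),
    "Claude Sonnet 4.5": (3.00, 15.00),
}

def workload_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost for a workload with the given token counts."""
    inp, out = PRICES[model]
    return input_tokens / 1e6 * inp + output_tokens / 1e6 * out

# Example: a month of traffic totaling 10M input and 5M output tokens.
for model in PRICES:
    print(f"{model}: ${workload_cost(model, 10_000_000, 5_000_000):.2f}")
```

Swapping in any row from the tables gives the same apples-to-apples monthly figure.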
Cost projections for DeepSeek API usage. Based on ~1,000 input tokens and ~500 output tokens per request. Aggressive pricing means even high-volume workloads stay affordable.
| Model | Input $/1M | Output $/1M |
|---|---|---|
| DeepSeek V4 Flash | $0.140 | $0.280 |
| DeepSeek V3.2 Exp | $0.270 | $0.410 |
| DeepSeek V3 0324 | $0.200 | $0.770 |
| DeepSeek V3 | $0.320 | $0.890 |
| R1 | $0.700 | $2.50 |
Note: Actual costs vary with prompt length, response length, and batch processing. DeepSeek offers some of the most competitive pricing in the industry, with additional discounts for cached input tokens. Try the interactive calculator for custom estimates.
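The projection above (~1,000 input and ~500 output tokens per request) reduces to a two-line formula. A sketch in Python, with prices taken from the table:

```python
def request_cost(input_price: float, output_price: float,
                 input_tokens: int = 1000, output_tokens: int = 500) -> float:
    """USD cost of one request, given per-million-token prices."""
    return input_tokens / 1e6 * input_price + output_tokens / 1e6 * output_price

# DeepSeek V4 Flash at $0.140 in / $0.280 out (from the table above):
per_request = request_cost(0.140, 0.280)
print(f"per request: ${per_request:.6f}, 100k requests: ${per_request * 100_000:.2f}")
```

The same function works for any model in the tables; only the two price arguments change.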
DeepSeek uses a byte-pair encoding tokenizer optimized for both English and Chinese text. Chinese characters typically use fewer tokens than with Western tokenizers, making DeepSeek particularly cost-effective for multilingual workloads. DeepSeek also offers cache hit discounts, charging just 10% of the standard rate for previously processed input tokens.
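The 10% cache-hit rate means the effective input price depends on how much of your prompt repeats. A quick sketch of the blended rate (the 60% hit ratio in the example is an assumption, not a DeepSeek figure):

```python
def effective_input_rate(standard_rate: float, cache_hit_ratio: float,
                         cache_discount: float = 0.10) -> float:
    """Blended per-million input price when a fraction of input tokens
    are cache hits billed at cache_discount * standard rate (10% per
    DeepSeek's cache pricing); the rest are billed at the full rate."""
    return standard_rate * (cache_hit_ratio * cache_discount + (1 - cache_hit_ratio))

# DeepSeek V3.1 at $0.150/1M input, assuming 60% of input tokens hit the cache:
print(f"${effective_input_rate(0.150, 0.60):.3f}/1M effective input rate")
```

With a stable system prompt dominating the input, hit ratios can be high, which is why the cache discount matters as much as the headline price.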
DeepSeek R1 is a reasoning model that uses chain-of-thought to solve complex problems, competing with OpenAI o1 at a fraction of the cost. DeepSeek V3 is the general-purpose model optimized for speed and broad capabilities including coding, translation, and analysis. R1 costs more due to extended reasoning but delivers superior accuracy on hard tasks.
All DeepSeek models are released with open weights under permissive licenses. This means you can self-host models on your own infrastructure, eliminating per-token costs entirely for high-volume workloads. The API provides a convenient managed option with pay-as-you-go pricing for teams that prefer not to manage infrastructure.
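Whether self-hosting beats the API is a simple break-even calculation. A sketch under stated assumptions: the $2,000/month infrastructure figure is entirely hypothetical (real GPU costs vary widely), and the model ignores input tokens and engineering time:

```python
def breakeven_tokens_per_month(monthly_infra_usd: float,
                               api_price_per_million: float) -> float:
    """Monthly output-token volume above which a flat infrastructure cost
    undercuts linear pay-per-token API pricing (simplified: output only)."""
    return monthly_infra_usd / api_price_per_million * 1_000_000

# Hypothetical $2,000/month GPU server vs R1 output at $2.50/1M tokens:
print(f"{breakeven_tokens_per_month(2000, 2.50):,.0f} output tokens/month")
```

Below the break-even volume, the managed API is cheaper; above it, self-hosting the open weights starts to pay off.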
DeepSeek is already one of the cheapest API providers, but you can save further by using cached input tokens for repeated prompts, setting appropriate max_tokens limits, and choosing the right model for each task. Use V3 for general tasks and only upgrade to R1 when complex reasoning is needed.
Compare with GPT-4o, o3, and all OpenAI model costs.
Compare with Claude Opus 4, Sonnet 4, and all Anthropic models.
Compare with Gemini 2.5 Pro, Flash, and all Google models.
Find the most affordable models across all providers.
DeepSeek R1 pricing is $0.700/1M input tokens and $2.50/1M output tokens. DeepSeek R1 is a reasoning model that competes with OpenAI o1, offering chain-of-thought reasoning at a fraction of the cost.
DeepSeek does not currently offer any free models through its API. DeepSeek is known for extremely competitive pricing, often significantly cheaper than OpenAI and Anthropic equivalents. As a Chinese AI lab focused on open-source, DeepSeek keeps costs low while delivering state-of-the-art performance.
DeepSeek's average output price is $0.882/1M tokens across 13 paid models. OpenAI offers 67 models with varying price points. DeepSeek models are typically much more affordable than OpenAI equivalents -- DeepSeek R1 delivers reasoning capabilities comparable to o1 at a fraction of the cost, and DeepSeek V3 competes with GPT-4o at significantly lower prices.
DeepSeek V3 is priced at $0.320/1M input and $0.890/1M output tokens. It is DeepSeek's flagship general-purpose model with a 131K context window, offering strong performance across coding, reasoning, and multilingual tasks at highly competitive rates.
DeepSeek charges per token with notably aggressive pricing, often 90% cheaper than comparable Western models. Their tokenizer is optimized for both English and Chinese, making it especially cost-effective for multilingual applications. Cache hits are billed at just 10% of the standard input rate, and reasoning tokens in R1 are priced separately from standard output.