Rank, compare, benchmark, price, and track 366+ AI models across coding, image, and video generation with open methodology and live market movement.
AI Market Cap is strongest when the methodology, governance, evidence, and analysis are visible directly in the product, not hidden behind a claim.
Start from the pages that make the product feel citable and trusted.
See how rankings are built, weighted, and explained.
Inspect score components and signal-level contributions.
Track launches, provider velocity, and recent additions.
Use the product as a workflow: shortlist candidates, compare tradeoffs, estimate costs, and get a recommendation before you commit.
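As a rough illustration, that workflow reduces to a filter-and-rank pass over the model catalog. The Python sketch below is a toy, not the platform's recommendation engine: the record fields, the prices, and the score-per-dollar heuristic are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    provider: str
    score: float          # composite score, 0-100
    output_price: float   # USD per 1M output tokens

def recommend(models, max_price, top_n=3):
    """Shortlist models under a price ceiling, then rank by score per dollar."""
    shortlist = [m for m in models if m.output_price <= max_price]
    shortlist.sort(key=lambda m: m.score / max(m.output_price, 0.01), reverse=True)
    return shortlist[:top_n]

# Illustrative catalog entries; the prices here are invented, not live data.
catalog = [
    Model("GPT-5.4 Pro", "OpenAI", score=92, output_price=10.0),
    Model("Grok 4.20", "xAI", score=85, output_price=2.5),
]
for m in recommend(catalog, max_price=3.0):
    print(f"{m.name} ({m.provider}): score {m.score}, ${m.output_price}/1M output tokens")
```

With a $3/1M ceiling only the value-tier model survives the shortlist, which is why the filter step runs before the ranking step.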
Keep using the platform after you choose a model. Track movement, watch reliability, and revisit decisions when the market shifts.
Track models and notification preferences in one place.
Follow gainers, losers, hot models, and new entrants.
Review cheapest, best-value, and premium-tier pricing.
Check live health, uptime signals, and incidents.
Follow pricing, value, significance, and stability trends.
Compare quality against price across the catalog.
Current: GPT-5.4 Pro by OpenAI holds #1 in LLM with a score of 92.
Yesterday: Ring-2.6-1T (free) by inclusionai entered LLM rankings at #183.
Current: GPT-5.4 Image 2 by OpenAI holds #1 in Image Generation with a score of 92.
18 days ago: GPT-5.4 Image 2 by OpenAI entered Image Generation rankings at #1.
Current: Wan 2.1 T2V by Wan AI holds #1 in Video Generation with a score of 15.
GPT-5.4 Pro (OpenAI): top-ranked model for software engineering, code generation, and debugging tasks.
Grok 4.20 (xAI): highest-ranked model under $3 per 1M output tokens, with strong performance per dollar (cost math sketched below).
Lyria 3 Pro Preview (OpenAI): top open-weight model you can self-host, with zero API cost and full control.
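For the value pick, the per-dollar arithmetic is straightforward. A minimal sketch, assuming per-1M-token pricing: the $3/1M output figure mirrors the value threshold above, while the input price and monthly volumes are invented.

```python
def monthly_cost(input_tokens, output_tokens, input_price, output_price):
    """USD cost for a month's usage, with prices quoted per 1M tokens."""
    return (input_tokens / 1e6) * input_price + (output_tokens / 1e6) * output_price

# e.g. 50M input and 10M output tokens at an assumed $1.00 in / $3.00 out per 1M:
print(monthly_cost(50e6, 10e6, input_price=1.00, output_price=3.00))  # -> 80.0
```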
Top providers by models tracked: OpenAI (69 models), Alibaba (52), Google (31), Mistral AI (24), Anthropic (17), and Meta (14).
At a glance: 366+ models tracked, 55+ providers, hourly updates, API access, and signal-level scoring across the LLM, Image Generation, and Video Generation categories.
The overview dashboard provides a bird's-eye view of the entire AI model ecosystem, showing market trends, provider dominance, score distributions, capability coverage, and pricing patterns across 350+ models from 55+ providers. Data is updated hourly.
The composite score (0-100) weights benchmark performance at 90%, drawn from MMLU, GPQA, HumanEval, SWE-bench, and 15+ standardized evaluations, with capabilities and context window serving as tiebreakers at 10%. Each signal is normalized before weighting to ensure fair comparison.
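A minimal sketch of that weighting, assuming min-max normalization across the tracked models; the signal values below are illustrative, not the live methodology.

```python
def normalize(values):
    """Min-max normalize raw signal values across models to 0..1."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def composite(benchmark_signals, tiebreaker_signals):
    """0-100 score: 90% mean of normalized benchmarks, 10% tiebreakers."""
    bench = sum(benchmark_signals) / len(benchmark_signals)
    tie = sum(tiebreaker_signals) / len(tiebreaker_signals)
    return 100 * (0.9 * bench + 0.1 * tie)

# Normalize one raw signal (e.g. a benchmark accuracy) across three models:
print(normalize([91.0, 86.0, 62.0]))  # -> [1.0, ~0.83, 0.0]

# Score one model from its normalized benchmark and tiebreaker signals:
print(round(composite([0.95, 0.91, 0.97, 0.88], [1.0, 0.8]), 1))  # -> 92.5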
Hot models show sustained ranking improvements over the past 7 days. Rising models have improved in the last 24 hours. Falling models have dropped in rank. Stable models hold consistent positions. These indicators help identify momentum in the AI model landscape.
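As a sketch, those labels reduce to comparisons over rank snapshots. The snapshot windows in the code below follow the description above; the exact rule for "hot" is an assumption.

```python
def momentum(rank_7d_ago, rank_24h_ago, rank_now):
    """Classify ranking momentum from rank snapshots (lower rank = better)."""
    if rank_now < rank_24h_ago < rank_7d_ago:
        return "hot"      # sustained improvement across the whole 7-day window
    if rank_now < rank_24h_ago:
        return "rising"   # improved in the last 24 hours
    if rank_now > rank_24h_ago:
        return "falling"  # dropped in rank
    return "stable"       # position unchanged

print(momentum(12, 8, 5))  # -> hot
print(momentum(5, 7, 6))   # -> rising
```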