Comprehensive benchmark across 6 categories (math, coding, reasoning, data analysis, instruction following, language) using contamination-resistant, regularly updated questions.
Why it matters: Contamination-resistant by design, since new questions are released regularly. When LiveBench launched, even top models scored below 70%, making it highly discriminating.
Top Model: o4 Mini High (87.3%)
Average Score: 64.7% (across 51 models)
Models Tested: 51 (metric: average score)
Human Baseline: - (score range: 0%–100%)
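A model's overall LiveBench score is the average across the six categories listed above. The sketch below illustrates that aggregation; the category values are invented for illustration and do not correspond to any real model.

```python
# Hypothetical sketch: the overall LiveBench score as the mean of the six
# category scores. All numbers below are invented for illustration.
category_scores = {
    "math": 72.0,
    "coding": 61.5,
    "reasoning": 68.0,
    "data_analysis": 59.0,
    "instruction_following": 74.5,
    "language": 55.0,
}

# Overall score = unweighted mean of category scores.
overall = sum(category_scores.values()) / len(category_scores)
print(f"Overall LiveBench score: {overall:.1f}%")  # prints 65.0%
```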
LiveBench Scores: Top 25 Models
Ranked by LiveBench score (%). The full table lists every model with a reported LiveBench score, ordered by highest average score.
LiveBench is a standardized evaluation that measures AI model performance on specific tasks. It provides comparable scores across different models, helping developers choose the right model for their needs.
o4 Mini High currently holds the top score on the LiveBench benchmark. See our full rankings table above for the complete leaderboard with 51 models.
We update benchmark data from multiple sources including HuggingFace open-source model leaderboards and LMArena. Scores are refreshed regularly as new evaluations are published and new models are released.
No. While LiveBench is an important indicator, real-world performance depends on many factors including pricing, latency, context window, and specific task requirements. We recommend using our composite score which weighs multiple benchmarks and practical factors.
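One way to combine a benchmark result with practical factors, as described above, is a weighted sum of normalized scores. The function and weights below are hypothetical, not the site's actual composite-score formula.

```python
# Hypothetical composite score: weighs a LiveBench result against practical
# factors (pricing, latency, context window). Weights are invented.
def composite_score(livebench_pct, price_score, latency_score, context_score,
                    weights=(0.5, 0.2, 0.2, 0.1)):
    """Weighted sum of factor scores, each normalized to a 0-100 scale."""
    factors = (livebench_pct, price_score, latency_score, context_score)
    return sum(w * f for w, f in zip(weights, factors))

# Example: a model with a strong benchmark score but middling price/latency.
score = composite_score(87.3, 60.0, 55.0, 90.0)
print(round(score, 2))  # prints 75.65
```

A model that tops the raw leaderboard can still rank lower on such a composite if it is expensive or slow, which is why a single benchmark score should not decide model choice on its own.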