Humanity's Last Exam (HLE) comprises 2,500 expert-level questions spanning mathematics, the sciences, and the humanities, designed as "the final closed-ended academic evaluation" — a test that even top models fail most of.
Why it matters: HLE is the hardest widely used academic benchmark — top models still miss roughly 60–65% of questions, showing how far current systems remain from genuine expert-level reasoning.
Top Model: GPT-5.4 — 39%
Average Score: 21.5% (across 29 models)
Models Tested: 29 (metric: accuracy)
Human Baseline: — (not reported)
Score Range: 0%–100%
HLE Scores - Top 25 Models
Ranked by HLE score (%)
All models with a reported HLE score, ranked from highest to lowest accuracy.
HLE (Humanity's Last Exam) is a closed-ended academic benchmark of expert-written questions across a wide range of subjects. Because every model answers the same fixed question set, accuracy scores are directly comparable across models, helping developers gauge relative reasoning capability.
GPT-5.4 currently holds the top score on the HLE benchmark. See our full rankings table above for the complete leaderboard with 29 models.
We update benchmark data from multiple sources, including HuggingFace open-source model leaderboards and LMArena. Scores are refreshed as new evaluations are published and new models are released.
No. While HLE is an important indicator, real-world performance depends on many factors including pricing, latency, context window, and specific task requirements. We recommend using our composite score which weighs multiple benchmarks and practical factors.
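To make the idea of a composite score concrete, here is a minimal sketch of how benchmark results and practical factors could be blended with a weighted average. The weights, factor names, and scores below are purely illustrative assumptions — the site's actual weighting scheme is not described here.

```python
# Hypothetical composite score: a weighted average over several factors,
# each normalized to a 0-100 scale. Weights and factor names are
# illustrative assumptions, not the site's published methodology.

def composite_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-factor scores (each on a 0-100 scale)."""
    total_weight = sum(weights[name] for name in scores)
    return sum(s * weights[name] for name, s in scores.items()) / total_weight

# Example: blend an HLE accuracy with two other (hypothetical) factors.
weights = {"hle": 0.4, "coding": 0.3, "latency": 0.3}
scores = {"hle": 39.0, "coding": 70.0, "latency": 80.0}
print(round(composite_score(scores, weights), 1))  # prints 60.6
```

Dividing by the sum of the weights keeps the result on the same 0-100 scale even if a model is missing some factors, which is why the denominator is computed over only the factors actually present.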