BBH (BIG-Bench Hard) is a suite of 23 challenging tasks from BIG-Bench on which prior language model evaluations did not outperform the average human rater. It tests multi-step reasoning, including logical deduction, causal reasoning, and algorithmic thinking.
Why it matters: One of the best tests of structured reasoning ability. Frontier-model scores range from roughly 60% to 95%, providing good differentiation.
Top Model
93.1%
Claude 3.5 Sonnet
Average Score
86.5%
Across 40 models
Models Tested
40
Metric: accuracy
Human Baseline
-
Score Range: 0%–100%
BBH Scores - Top 25 Models
Ranked by BBH score (%)
All models with a reported BBH score, ranked by highest accuracy.
BBH is a standardized evaluation that measures AI model performance on a fixed set of reasoning tasks. Because every model is scored the same way, results are directly comparable, helping developers choose the right model for their needs.
Claude 3.5 Sonnet currently holds the top score on the BBH benchmark. See our full rankings table above for the complete leaderboard with 40 models.
We update benchmark data from multiple sources including HuggingFace open-source model leaderboards and LMArena. Scores are refreshed regularly as new evaluations are published and new models are released.
No. While BBH is an important indicator, real-world performance depends on many factors including pricing, latency, context window, and specific task requirements. We recommend using our composite score which weighs multiple benchmarks and practical factors.
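As a sketch of what a composite score like the one mentioned above might look like, here is a weighted average over several normalized metrics. The metric names, weights, and numbers are illustrative assumptions, not the actual formula used for our rankings:

```python
# Hypothetical composite score: a weighted average of normalized (0-100)
# benchmark results and practical factors. All names and weights below
# are illustrative assumptions.

def composite_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average over whichever metrics a model actually reports."""
    # Only use weights for metrics this model has; renormalize over them.
    available = {k: w for k, w in weights.items() if k in metrics}
    total = sum(available.values())
    if total == 0:
        raise ValueError("no overlap between reported metrics and weights")
    return sum(metrics[k] * w for k, w in available.items()) / total

# Example weighting: reasoning benchmarks dominate, practical factors count too.
weights = {"bbh": 0.4, "mmlu": 0.3, "latency": 0.15, "price": 0.15}
model = {"bbh": 93.1, "mmlu": 88.0, "latency": 75.0, "price": 60.0}
print(round(composite_score(model, weights), 1))  # → 83.9
```

Renormalizing over the available metrics means a model missing one benchmark is not unfairly penalized; it is simply scored on what it reports.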