Tests commonsense reasoning by asking models to complete sentences in a natural way. Designed to be trivial for humans but challenging for models.
Why it matters: Fundamental commonsense reasoning test. Saturated for frontier models (>95%) but still useful for evaluating smaller models.
Top model
96%
Llama 3.1 405B
Average score
93.7%
across 7 models
Models tested
7
Metric: accuracy
Human baseline
95.6%
Score range: 0%–100%
All models with a reported HellaSwag score, ranked by highest accuracy.
HellaSwag is a standardized commonsense-reasoning benchmark: given a sentence context, the model must pick the most plausible continuation from four candidate endings, which were adversarially filtered to be easy for humans but hard for models. Its scores are comparable across models, helping developers choose the right model for their needs.
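To illustrate the metric, here is a minimal sketch of how an accuracy score like HellaSwag's is computed: the model selects one ending index per item, and accuracy is the fraction of items where that choice matches the dataset's gold label. The data below is made up for illustration and is not from the actual benchmark.

```python
def accuracy(predictions, gold_labels):
    """Fraction of items where the predicted ending index matches the gold label."""
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

# Toy example: 4 items, each with 4 candidate endings (indices 0-3).
preds = [2, 0, 3, 1]   # model's chosen ending per item (hypothetical)
gold  = [2, 0, 3, 0]   # dataset's correct ending per item (hypothetical)

print(f"{accuracy(preds, gold):.1%}")  # -> 75.0%
```

A real evaluation harness scores each of the four endings by the model's log-likelihood and picks the highest-scoring one, but the final aggregation is exactly this simple ratio.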
Llama 3.1 405B currently holds the top score on the HellaSwag benchmark. See our full rankings table above for the complete leaderboard with 7 models.
We update benchmark data from multiple sources including HuggingFace Open LLM Leaderboard and LMArena. Scores are refreshed regularly as new evaluations are published and new models are released.
No. While HellaSwag is an important indicator, real-world performance depends on many factors, including pricing, latency, context window, and specific task requirements. We recommend using our composite score, which weights multiple benchmarks alongside these practical factors.