Tests whether models follow explicit, verifiable constraints like 'write in more than 400 words' or 'mention AI at least 3 times'. All instructions have objectively verifiable criteria.
Why it matters: Measures instruction-following precision, critical for production applications. Models that score well here are more reliable in structured tasks.
Top Model: Gemini 3 Pro (93.5%)
Average Score: 83.8% across 46 models
Models Tested: 46 (metric: prompt-level accuracy)
Human Baseline: –
Score Range: 0%–100%
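The metric above, prompt-level accuracy, counts a prompt as correct only when every constraint attached to it is satisfied. A sketch of how that aggregate might be computed (an illustrative helper, not the official scoring script):

```python
def prompt_level_accuracy(results: list[list[bool]]) -> float:
    # results: one inner list of per-constraint pass/fail booleans per prompt.
    # A prompt scores 1 only if all of its constraints pass.
    passed = sum(1 for constraints in results if all(constraints))
    return passed / len(results)

# Three prompts: all constraints pass, one constraint fails, all pass.
prompt_level_accuracy([[True, True], [True, False], [True]])  # → 2/3
```

This all-or-nothing scoring is stricter than averaging individual constraints, which is why prompt-level accuracy is the headline number.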
IFEval Scores – Top 25 Models
All models with a reported IFEval score, ranked by prompt-level accuracy (%).
IFEval is a standardized evaluation that measures how reliably AI models follow explicit, verifiable instructions. It provides comparable scores across different models, helping developers choose the right model for their needs.
Gemini 3 Pro currently holds the top score on the IFEval benchmark. See our full rankings table above for the complete leaderboard with 46 models.
We update benchmark data from multiple sources including HuggingFace open-source model leaderboards and LMArena. Scores are refreshed regularly as new evaluations are published and new models are released.
No. While IFEval is an important indicator, real-world performance depends on many factors, including pricing, latency, context window, and specific task requirements. We recommend using our composite score, which weights multiple benchmarks and practical factors.