arxiv:2402.14992

tinyBenchmarks: evaluating LLMs with fewer examples

Published on Feb 22, 2024

Abstract

Tools and small subsets of benchmarks can efficiently and reliably evaluate the performance of large language models.

AI-generated summary

The versatility of large language models (LLMs) has led to the creation of diverse benchmarks that thoroughly test a variety of language model abilities. These benchmarks consist of tens of thousands of examples, making evaluation of LLMs very expensive. In this paper, we investigate strategies to reduce the number of evaluations needed to assess the performance of an LLM on several key benchmarks. For example, we show that to accurately estimate the performance of an LLM on MMLU, a popular multiple-choice QA benchmark consisting of 14K examples, it is sufficient to evaluate this LLM on 100 curated examples. We release evaluation tools and tiny versions of popular benchmarks: Open LLM Leaderboard, MMLU, HELM, and AlpacaEval 2.0. Our empirical analysis demonstrates that these tools and tiny benchmarks are sufficient to reliably and efficiently reproduce the original evaluation results.
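
To illustrate how such a tiny benchmark can be used in practice, here is a minimal sketch, assuming the `tinyBenchmarks` pip package and the `tinyBenchmarks/tinyMMLU` dataset on the Hugging Face Hub. The column names, the `my_model_answer` helper, and the exact `tb.evaluate` signature are assumptions for illustration, not the authors' exact code.

```python
# Illustrative sketch: estimate full-MMLU accuracy from the ~100 curated tinyMMLU examples.
# Assumed dependencies: pip install datasets tinyBenchmarks numpy
import numpy as np
from datasets import load_dataset
import tinyBenchmarks as tb  # assumed package name from the paper's released tooling

# Load the curated tiny subset (assumed repo id and split name).
tiny_mmlu = load_dataset("tinyBenchmarks/tinyMMLU", split="test")

def my_model_answer(prompt: str) -> int:
    """Hypothetical helper: run your own model on the prompt and return its answer index."""
    raise NotImplementedError

# Score the model on each tiny example: 1.0 if correct, 0.0 otherwise.
# The fields "input_formatted" and "answer" are assumptions about the dataset schema.
y = np.array([
    float(my_model_answer(ex["input_formatted"]) == ex["answer"])
    for ex in tiny_mmlu
])

# Map the per-example scores to estimates of performance on the full benchmark
# (assumed to return IRT-based estimates such as 'irt', 'pirt', 'gpirt').
estimates = tb.evaluate(y, "mmlu")
print(estimates)
```

The point of the design is that the 100 examples are curated (and combined with IRT-style weighting) so that performance on the subset predicts performance on the full 14K-example benchmark, which is why such a small sample can reproduce the original evaluation results.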

Community

Really loved the work @felipemaiapolo @LucasWeber @borgr @syuekai @moonfolk, and I cited this work in my recent research. I recently published a tiny dataset and a synthetic generator tied to the actual human-curated tiny dataset for comparison, and I have seen strong results on benchmarks with <10 KB datasets. In my case I focused on micro-tiny (n < 100).

Models citing this paper 1

Datasets citing this paper 8

Spaces citing this paper 0

Collections including this paper 6