---
annotations_creators:
- LM-generated
language:
- eng
license: mit
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids:
- document-retrieval
dataset_info:
- config_name: corpus
  features:
configs:
  data_files:
  - split: test
    path: queries/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
  <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">LitSearchRetrieval</h1>
  <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
  <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>

This dataset contains the query set and retrieval corpus for the paper *LitSearch: A Retrieval Benchmark for Scientific Literature Search*. LitSearch comprises 597 realistic literature-search queries about recent ML and NLP papers, constructed from a combination of (1) questions generated by GPT-4 based on paragraphs containing inline citations in research papers and (2) questions about recently published papers, written manually by their authors. All LitSearch questions were manually examined or edited by experts to ensure high quality.

|               |                                             |
|---------------|---------------------------------------------|
| Task category | t2t (text-to-text)                          |
| Domains       | Academic, Non-fiction, Written              |
| Reference     | https://github.com/princeton-nlp/LitSearch |
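
The corpus and queries are stored as separate configurations with a single `test` split (see the YAML header above). As a minimal sketch, they can also be loaded directly with the `datasets` library; the repository id `mteb/LitSearchRetrieval` and the config names are assumed here from the hosting organization and the YAML header, so adjust them if the dataset is published under a different name:

```python
from datasets import load_dataset

# Assumed repository id and config names; not confirmed by the card itself.
corpus = load_dataset("mteb/LitSearchRetrieval", "corpus", split="test")
queries = load_dataset("mteb/LitSearchRetrieval", "queries", split="test")

print(len(corpus), "documents,", len(queries), "queries")
```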

## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

tasks = mteb.get_tasks(tasks=["LitSearchRetrieval"])
evaluator = mteb.MTEB(tasks=tasks)

# YOUR_MODEL is a placeholder for a model name or a loaded embedding model.
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```

<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
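
For example, a minimal end-to-end run with a small off-the-shelf embedding model might look like the sketch below; the model name and output folder are illustrative choices, not part of this dataset:

```python
import mteb

tasks = mteb.get_tasks(tasks=["LitSearchRetrieval"])
evaluator = mteb.MTEB(tasks=tasks)

# Any embedding model known to mteb works; this one is only an example.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

# Scores are written as JSON files under the given output folder.
evaluator.run(model, output_folder="results/LitSearchRetrieval")
```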

## Citation

If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).

```bibtex
@article{ajith2024litsearch,
  author = {Ajith, Anirudh and Xia, Mengzhou and Chevalier, Alexis and Goyal, Tanya and Chen, Danqi and Gao, Tianyu},
  title = {LitSearch: A Retrieval Benchmark for Scientific Literature Search},
  year = {2024},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title = {MMTEB: Massive Multilingual Text Embedding Benchmark},
  author = {Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2502.13595},
  year = {2025},
  url = {https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal = {arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}
```

# Dataset Statistics
<details>
  <summary>Dataset Statistics</summary>

The following are the descriptive statistics for the task. They can also be obtained programmatically:

```python
import mteb

task = mteb.get_task("LitSearchRetrieval")

desc_stats = task.metadata.descriptive_stats
```

```json
{
    "test": {
        "num_samples": 64780,
        "number_of_characters": 58371129,
        "num_documents": 64183,
        "min_document_length": 0,
        "average_document_length": 908.135035757132,
        "max_document_length": 18451,
        "unique_documents": 64183,
        "num_queries": 597,
        "min_query_length": 37,
        "average_query_length": 141.20268006700167,
        "max_query_length": 327,
        "unique_queries": 597,
        "none_queries": 0,
        "num_relevant_docs": 639,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 1.07035175879397,
        "max_relevant_docs_per_query": 5,
        "unique_relevant_docs": 574,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
```
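
For instance, assuming `descriptive_stats` is keyed by split as shown above, individual figures can be read back directly:

```python
import mteb

task = mteb.get_task("LitSearchRetrieval")
stats = task.metadata.descriptive_stats["test"]

# A few of the values reported in the JSON above
print(stats["num_documents"])                    # 64183
print(stats["num_queries"])                      # 597
print(stats["average_relevant_docs_per_query"])  # ~1.07
```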

</details>

---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*