Data preview (qrels):

query-id          corpus-id       score
q_unique_11108    unique_3753     0
q_unique_11108    unique_11108    1
q_unique_12471    unique_9424     0

query-id is a string column with 100 unique values, corpus-id is a string of 8-12 characters, and score is a float64 equal to 0 or 1. Each query appears with its full candidate pool; in the preview, only unique_11108 is scored 1 for query q_unique_11108, and every other candidate is scored 0.
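If you want to inspect these judgments yourself, the following is a minimal sketch using the datasets library. The repository id mteb/BIRCO-WTB and the qrels config and split names are assumptions inferred from this card, so adjust them to the actual repository layout.

# A minimal sketch for inspecting the qrels shown above.
# Assumptions: the dataset lives at "mteb/BIRCO-WTB" on the Hugging Face Hub
# and the relevance judgments are exposed as a "qrels" config with a "test"
# split; adjust these names if the repository is laid out differently.
from datasets import load_dataset

qrels = load_dataset("mteb/BIRCO-WTB", "qrels", split="test")
df = qrels.to_pandas()

# Each row is a (query-id, corpus-id, score) triple with score 0 or 1.
print(df.head())

# Count how many candidates are scored 1 for each query.
print(df[df["score"] == 1].groupby("query-id").size().describe())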

BIRCO-WTB

An MTEB dataset
Massive Text Embedding Benchmark

A retrieval task using the WhatsThatBook dataset from BIRCO. The dataset contains 100 queries, each an ambiguous description of a book, and each query has a candidate pool of 50 book descriptions. The objective is to retrieve the correct book description.

Task category t2t
Domains Fiction
Reference https://github.com/BIRCO-benchmark/BIRCO

How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

import mteb

tasks = mteb.get_tasks(tasks=["BIRCO-WTB"])
evaluator = mteb.MTEB(tasks=tasks)

model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)

To learn more about how to run models on MTEB tasks, check out the GitHub repository.
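For a concrete end-to-end run, the sketch below fills the YOUR_MODEL placeholder with a small open embedding model; the checkpoint name and output folder are illustrative choices, not values prescribed by this card.

import mteb

# The checkpoint below is an illustrative choice, not one prescribed by this
# card; any embedding model supported by mteb.get_model works here.
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")

tasks = mteb.get_tasks(tasks=["BIRCO-WTB"])
evaluator = mteb.MTEB(tasks=tasks)

# Scores (e.g. nDCG@10) are written as JSON files under output_folder.
results = evaluator.run(model, output_folder="results/all-MiniLM-L6-v2")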

Citation

If you use this dataset, please cite the dataset as well as mteb, as this dataset likely includes additional processing as part of the MMTEB contribution.


@misc{wang2024bircobenchmarkinformationretrieval,
  archiveprefix = {arXiv},
  author = {Xiaoyue Wang and Jianyou Wang and Weili Cao and Kaicheng Wang and Ramamohan Paturi and Leon Bergen},
  eprint = {2402.14151},
  primaryclass = {cs.IR},
  title = {BIRCO: A Benchmark of Information Retrieval Tasks with Complex Objectives},
  url = {https://arxiv.org/abs/2402.14151},
  year = {2024},
}


@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
  author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2502.13595},
  year={2025},
  url={https://arxiv.org/abs/2502.13595},
  doi = {10.48550/arXiv.2502.13595},
}

@article{muennighoff2022mteb,
  author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
  title = {MTEB: Massive Text Embedding Benchmark},
  publisher = {arXiv},
  journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
  doi = {10.48550/ARXIV.2210.07316},
}

Dataset Statistics

The following are the descriptive statistics for the task, which can also be obtained with:

import mteb

task = mteb.get_task("BIRCO-WTB")

desc_stats = task.metadata.descriptive_stats
{
    "test": {
        "num_samples": 1867,
        "number_of_characters": 2009048,
        "num_documents": 1767,
        "min_document_length": 523,
        "average_document_length": 1091.0662139219016,
        "max_document_length": 1598,
        "unique_documents": 1767,
        "num_queries": 100,
        "min_query_length": 475,
        "average_query_length": 811.34,
        "max_query_length": 1376,
        "unique_queries": 100,
        "none_queries": 0,
        "num_relevant_docs": 5043,
        "min_relevant_docs_per_query": 50,
        "average_relevant_docs_per_query": 1.0,
        "max_relevant_docs_per_query": 51,
        "unique_relevant_docs": 1767,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
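To re-derive the query-side numbers rather than rely on the cached metadata, a sketch along the following lines should work. The queries config name and its text column are assumptions about the repository layout; the qrels column names follow the preview above.

# Sketch: recompute a few of the statistics above directly from the data.
# Assumptions: "queries" and "qrels" configs exist at mteb/BIRCO-WTB, the
# queries expose their text under a "text" column, and the qrels columns
# match the preview ("query-id", "corpus-id", "score").
from datasets import load_dataset

queries = load_dataset("mteb/BIRCO-WTB", "queries", split="test")
qrels = load_dataset("mteb/BIRCO-WTB", "qrels", split="test").to_pandas()

lengths = [len(text) for text in queries["text"]]
print("num_queries:", len(lengths))
print("average_query_length:", sum(lengths) / len(lengths))

relevant = qrels[qrels["score"] > 0].groupby("query-id").size()
print("average_relevant_docs_per_query:", relevant.mean())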

This dataset card was automatically generated using MTEB
