datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---
mteb/NanoClimateFeverRetrieval | mteb | 2025-05-06T20:20:43Z | 0 | 0 | [
"task_categories:text-retrieval",
"task_ids:fact-checking",
"task_ids:fact-checking-retrieval",
"annotations_creators:expert-annotated",
"multilinguality:monolingual",
"source_datasets:mteb/climate-fever",
"language:eng",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2012.00614",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2025-05-06T20:20:24Z | null | ---
annotations_creators:
- expert-annotated
language:
- eng
license: cc-by-4.0
multilinguality: monolingual
source_datasets:
- mteb/climate-fever
task_categories:
- text-retrieval
task_ids:
- fact-checking
- fact-checking-retrieval
dataset_info:
- config_name: corpus
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  - name: title
    dtype: string
  splits:
  - name: train
    num_bytes: 5630737
    num_examples: 3408
  download_size: 3317653
  dataset_size: 5630737
- config_name: qrels
  features:
  - name: query-id
    dtype: string
  - name: corpus-id
    dtype: string
  - name: score
    dtype: int64
  splits:
  - name: train
    num_bytes: 5545
    num_examples: 148
  download_size: 3975
  dataset_size: 5545
- config_name: queries
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 7044
    num_examples: 50
  download_size: 7058
  dataset_size: 7044
configs:
- config_name: corpus
  data_files:
  - split: train
    path: corpus/train-*
- config_name: qrels
  data_files:
  - split: train
    path: qrels/train-*
- config_name: queries
  data_files:
  - split: train
    path: queries/train-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">NanoClimateFeverRetrieval</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
NanoClimateFever is a small version of the BEIR CLIMATE-FEVER dataset, which adopts the FEVER methodology and consists of 1,535 real-world claims regarding climate change.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Non-fiction, Academic, News |
| Reference | https://arxiv.org/abs/2012.00614 |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["NanoClimateFeverRetrieval"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{diggelmann2021climatefever,
archiveprefix = {arXiv},
author = {Thomas Diggelmann and Jordan Boyd-Graber and Jannis Bulian and Massimiliano Ciaramita and Markus Leippold},
eprint = {2012.00614},
primaryclass = {cs.CL},
title = {CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims},
year = {2021},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following JSON contains the descriptive statistics for the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("NanoClimateFeverRetrieval")
desc_stats = task.metadata.descriptive_stats
```
```json
{
    "train": {
        "num_samples": 3458,
        "number_of_characters": 5525784,
        "num_documents": 3408,
        "min_document_length": 33,
        "average_document_length": 1619.531690140845,
        "max_document_length": 6619,
        "unique_documents": 3408,
        "num_queries": 50,
        "min_query_length": 38,
        "average_query_length": 128.4,
        "max_query_length": 265,
        "unique_queries": 50,
        "none_queries": 0,
        "num_relevant_docs": 148,
        "min_relevant_docs_per_query": 1,
        "average_relevant_docs_per_query": 2.96,
        "max_relevant_docs_per_query": 5,
        "unique_relevant_docs": 115,
        "num_instructions": null,
        "min_instruction_length": null,
        "average_instruction_length": null,
        "max_instruction_length": null,
        "unique_instructions": null,
        "num_top_ranked": null,
        "min_top_ranked_per_query": null,
        "average_top_ranked_per_query": null,
        "max_top_ranked_per_query": null
    }
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
Rogudev/whiskey_dataset | Rogudev | 2025-05-06T20:19:53Z | 9 | 0 | [
"license:mit",
"region:us"
] | [] | 2025-04-30T16:10:18Z | null | ---
license: mit
---
# Dataset for whiskey_classificator
## How is this dataset generated?
This dataset comes from a function that creates a synthetic dataset emulating data that could be used in a real whiskey classification task.
```python
import pandas as pd
import numpy as np
import random
def generate_whiskey(num_rows=500):
    """
    Generate a balanced and shuffled DataFrame with whiskey data across all price categories.

    Parameters:
    - num_rows: int — Number of rows to generate (default 500).

    Returns:
    - pd.DataFrame — Shuffled whiskey dataset.
    """
    # constants
    brands = ["Macallan", "Glenfiddich", "Yamazaki", "Lagavulin", "Jack Daniel's",
              "Buffalo Trace", "Balvenie", "Ardbeg", "Jameson", "Highland Park"]
    types = ["Scotch", "Bourbon", "Rye", "Japanese", "Irish"]
    regions = {
        "Scotch": ["Islay", "Speyside", "Highlands", "Lowlands"],
        "Bourbon": ["Kentucky", "Tennessee"],
        "Rye": ["Canada", "USA"],
        "Japanese": ["Honshu", "Hokkaido"],
        "Irish": ["Dublin", "Cork"]
    }
    cask_types = ["Sherry", "Bourbon", "Port", "Wine", "Rum"]
    bottling_types = ["Single Malt", "Blended", "Single Cask", "Cask Strength"]
    category_definitions = {
        "Basic": (25, 49),
        "Standard": (50, 88),
        "Premium": (89, 128),
        "Exclusive": (129, 278),
        "Luxury": (279, 500)
    }
    categories = list(category_definitions.keys())
    num_classes = len(categories)
    per_class = num_rows // num_classes
    remainder = num_rows % num_classes
    data = []
    for i, category in enumerate(categories):
        count = per_class + (1 if i < remainder else 0)
        price_min, price_max = category_definitions[category]
        for _ in range(count):
            brand = random.choice(brands)
            w_type = random.choice(types)
            region = random.choice(regions[w_type])
            # 10% chance of no age statement (0), otherwise 3-30 years
            age = np.random.choice([0, *range(3, 31)], p=[0.1] + [0.9 / 28] * 28)
            abv = round(random.uniform(40, 60), 1)
            cask = random.choice(cask_types)
            bottling = random.choice(bottling_types)
            limited = np.random.rand() < 0.15
            release_year = random.randint(1990, 2025)
            awards = np.random.poisson(1.5)
            avg_rating = round(np.random.normal(85 + (age / 30) * 10 + awards, 3), 1)
            price = round(random.uniform(price_min, price_max), 2)
            # rating category (ordinal)
            if avg_rating < 85:
                rating_category = "Low"
            elif avg_rating < 90:
                rating_category = "Medium"
            elif avg_rating < 95:
                rating_category = "High"
            else:
                rating_category = "Excellent"
            whiskey_name = f"{brand} {age if age else 'NAS'} {cask} Cask"
            data.append([
                whiskey_name, brand, w_type, age, abv, region, cask,
                bottling, price, limited, release_year, avg_rating,
                awards, rating_category, category
            ])
    columns = [
        "whiskey_name", "brand", "type", "age", "abv", "region", "cask_type",
        "bottling_type", "retail_price_usd", "is_limited_edition",
        "release_year", "average_rating", "award_wins", "rating_category", "category"
    ]
    # Create the DataFrame and shuffle it
    df = pd.DataFrame(data, columns=columns)
    df = df.sample(frac=1, random_state=42).reset_index(drop=True)
    return df
``` |
iabd04/estados_materia_dataset | iabd04 | 2025-05-06T20:15:22Z | 30 | 0 | [
"task_categories:text-classification",
"language:es",
"license:cc-by-nc-4.0",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | 2025-05-04T15:30:00Z | null | ---
license: cc-by-nc-4.0
task_categories:
- text-classification
language:
- es
pretty_name: Estados Materia
size_categories:
- n<1K
---
# States of Matter Dataset
This dataset contains **400 synthetic samples** that simulate physico-chemical conditions of different materials, with the goal of predicting their **physical state**: `Sólido` (solid), `Líquido` (liquid), `Gas`, or `Plasma`. It was designed specifically for training supervised classification models.
## Dataset objective
The purpose of this dataset is to provide structured data that allows a Machine Learning model to **correctly predict the physical state of a material** from the continuous variables that describe it.
## Dataset structure
The dataset consists of a single split named train, with 400 records in tabular format. Each row includes four numeric (float32) input variables and one categorical (string) target variable.
You can see the full structure in the [`dataset_infos.json`](./dataset_infos.json) file.
## Column descriptions
| Column | Type | Description |
|-----------------|---------|-----------------------------------------------------------------------------------|
| `Temperatura` | float | Temperature (ºC). A key driver of state transitions. |
| `Presión` | float | Ambient pressure (Pa). Affects the transition point between states. |
| `Densidad` | float | Density of the material (g/cm³). Expected to vary across the different states. |
| `Nivel_Energía` | float | Energy level, encoded numerically: `Bajo` (0), `Medio` (1), or `Alto` (2). |
| `Estado` | string | **Dependent (target) variable**. Takes one of the values `Sólido`, `Líquido`, `Gas`, or `Plasma`. |
## Independent variables
The independent variables are those used as input by the Machine Learning algorithms to infer the value of the target variable (`Estado`). In this dataset, the following columns act as independent variables:
- **`Temperatura`**: Directly influences phase changes. The higher the temperature, the more likely the material is in a gaseous or plasma state.
- **`Presión`**: Plays a key role in transitions between states, especially between liquid and gas. High pressure can compress the material, affecting its density and state.
- **`Densidad`**: An intrinsic property of the material that varies significantly between solids (high density), liquids, gases (low density), and plasmas.
- **`Nivel_Energía`**: A categorical variable (`Bajo` (0), `Medio` (1), `Alto` (2)) that summarizes other internal and external factors related to the energy contained in the system. It is strongly correlated with the higher-excitation states (Gas and Plasma).
These variables were selected for their physico-chemical relevance and their ability to represent, in simplified form, an environment in which the states of matter change under certain conditions. They are essential for the model to learn valid, generalizable patterns.
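As a small illustration, the numeric encoding of `Nivel_Energía` described above can be sketched as follows. The label-to-code mapping comes from this card; the helper function names are hypothetical:

```python
# Assumed encoding per this card: Bajo -> 0, Medio -> 1, Alto -> 2.
NIVEL_ENERGIA_CODES = {"Bajo": 0, "Medio": 1, "Alto": 2}

def encode_nivel_energia(label: str) -> float:
    """Return the numeric code stored in the dataset (a float32 column)."""
    return float(NIVEL_ENERGIA_CODES[label])

def decode_nivel_energia(code: float) -> str:
    """Inverse mapping, useful when inspecting model inputs."""
    inverse = {v: k for k, v in NIVEL_ENERGIA_CODES.items()}
    return inverse[int(code)]
```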
## Dependent variable
The **`Estado`** column is the target variable the model tries to predict. It is a **multinomial categorical** variable with 4 classes:
- `Sólido`
- `Líquido`
- `Gas`
- `Plasma`
## Dataset statistics
- Total samples: **400**
- Number of predictor features: **4**
- Number of classes: **4**
- Class distribution: **balanced** (approximately 100 samples per class)
> **Note:** Although this is a synthetic dataset, it was generated following logical patterns so that models can generalize real-world behavior.
## License
This dataset is distributed under the **Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)** license. You may use it freely for academic and non-commercial purposes. |
Asap7772/dapo-hint-generator-qwen3-14b-filtered-lr1e6-0-5000 | Asap7772 | 2025-05-06T20:06:39Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T20:06:32Z | null | ---
dataset_info:
  features:
  - name: problem
    dtype: string
  - name: answer
    dtype: string
  - name: data_source
    dtype: string
  - name: source_prompt
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: ability
    dtype: string
  - name: reward_model
    struct:
    - name: ground_truth
      dtype: string
    - name: style
      dtype: string
  - name: extra_info
    struct:
    - name: index
      dtype: string
  - name: completion
    dtype: string
  - name: note1
    dtype: string
  - name: note2
    dtype: string
  - name: note3
    dtype: string
  - name: note4
    dtype: string
  - name: note5
    dtype: string
  - name: all_hints
    dtype: string
  splits:
  - name: train
    num_bytes: 62723579
    num_examples: 5000
  download_size: 28874717
  dataset_size: 62723579
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
aarontrinh02/math_pipeline_part10 | aarontrinh02 | 2025-05-06T20:00:16Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T20:00:12Z | null | ---
dataset_info:
  features:
  - name: query_positive
    dtype: string
  - name: instruction_positive
    dtype: string
  - name: document_positive
    dtype: string
  - name: query_negative
    dtype: string
  - name: instruction_negative
    dtype: string
  - name: hard_negative_document_1
    dtype: string
  - name: hard_negative_document_2
    dtype: string
  splits:
  - name: train
    num_bytes: 1246205
    num_examples: 500
  download_size: 603937
  dataset_size: 1246205
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
jiang784/ptm-naming-elements | jiang784 | 2025-05-06T19:59:48Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-06T19:45:47Z | null | ---
license: apache-2.0
pretty_name: PTM-NAMING
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset contains extracted naming components and their predicted categories from Pre-trained Model (PTM) names sourced from Hugging Face. The extraction and categorization are performed using an OpenAI GPT model based on a predefined schema and prompt.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
This dataset provides a structured analysis of Hugging Face model names; the source code that produces it is available in the repository linked under Dataset Sources below.
It is generated by a Python script (`extractor.py`) that processes a list of model names (filtered by download counts from an input CSV like `data/HF_pkgs.csv`). The script sends batches of these names to an OpenAI GPT model (`o4-mini-2025-04-16` by default) which, guided by a system prompt and a JSON schema, identifies constituent components within each model name (e.g., "bert", "base", "uncased") and assigns a category to each component (e.g., "Architecture", "Size", "Style"). The output is a JSON file (`data/hf_pkg_elements.json`) and a CSV file (`data/hf_pkg_elements.csv`) detailing these components and categories for each analyzed model name. This allows for systematic study of PTM naming conventions.
- **Curated by:** The PTM-Naming Elements Extractor script and the underlying OpenAI model. The initial list of model names is sourced from Hugging Face.
- **License:** Apache-2.0
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** [https://github.com/wenxin-jiang/PTM-Naming-Elements-Extractor](https://github.com/wenxin-jiang/PTM-Naming-Elements-Extractor)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
This dataset can be used to:
- Analyze common naming patterns and conventions in Pre-trained Models (PTMs) on Hugging Face.
- Understand the distribution of different types of components (e.g., architecture, size, dataset, language) in PTM names.
- Train models to predict or suggest PTM names based on their characteristics.
- Facilitate searching and categorization of PTMs based on parsed name components.
- Serve as a basis for further research into the evolution of PTM naming practices.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
- The dataset should not be considered an exhaustive or perfectly accurate representation of all PTM naming components. The accuracy is dependent on the OpenAI GPT model's performance and the defined schema.
- It should not be used to make definitive judgments about model capabilities solely based on its name components.
- The dataset reflects naming conventions at the time of data collection and may not capture future trends.
- Using the dataset to generate misleading or nonsensical PTM names is out of scope.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset is provided in two formats:
1. **JSON (`hf_pkg_elements.json`):** A dictionary where keys are the original Hugging Face model names (`context_id`). Each value is a list of component mappings. Each component mapping is an object with:
* `component`: (string) The extracted part of the model name (e.g., "bert", "base", "uncased").
* `category`: (string) The predicted category for the component (e.g., "Architecture", "Size", "Style", "Dataset", "Language", "Organization", "Checkpoint_Info", "Quantization", "Framework_Host", "Task_Specific", "Version_Release", "Modifier", "Other_Identifier"). The categories are defined in `schema.py`.
Example JSON structure for one entry:
```json
{
  "bert-base-uncased": [
    {"component": "bert", "category": "Architecture"},
    {"component": "base", "category": "Size"},
    {"component": "uncased", "category": "Style"}
  ]
}
```
2. **CSV (`hf_pkg_elements.csv`):** A tabular format with the following columns:
* `model_name`: (string) The original Hugging Face model name (e.g., "org/bert-base-uncased").
* `namespace`: (string) The part of the model name before the first '/', if present (e.g., "org"). Otherwise, empty.
* `model_part`: (string) The part of the model name after the first '/', or the full name if no '/' is present (e.g., "bert-base-uncased").
* `component`: (string) The extracted part of the `model_part` (e.g., "bert").
* `category`: (string) The predicted category for the `component` (e.g., "Architecture").
Each row in the CSV represents a single extracted component from a model name.
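The JSON-to-CSV flattening described above can be sketched as follows. This is an illustrative reimplementation based on the field descriptions in this card, not the actual `extractor.py` code:

```python
# Illustrative sketch of the JSON -> CSV flattening described in this card.
# One input entry with its component mappings, as in the JSON example above.
elements = {
    "org/bert-base-uncased": [
        {"component": "bert", "category": "Architecture"},
        {"component": "base", "category": "Size"},
        {"component": "uncased", "category": "Style"},
    ]
}

rows = []
for model_name, mappings in elements.items():
    # namespace = text before the first '/', model_part = the remainder;
    # if there is no '/', the whole name is the model part.
    namespace, sep, model_part = model_name.partition("/")
    if not sep:
        namespace, model_part = "", model_name
    for m in mappings:
        rows.append({
            "model_name": model_name,
            "namespace": namespace,
            "model_part": model_part,
            "component": m["component"],
            "category": m["category"],
        })
```

Each dictionary in `rows` corresponds to one CSV row with the five columns listed above.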
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The primary motivation for creating this dataset is to systematically understand and catalog the naming conventions used for Pre-trained Models (PTMs) available on platforms like Hugging Face. As the number of PTMs grows, their names become increasingly complex, encoding information about their architecture, size, training data, specific tasks, etc. This dataset aims to deconstruct these names into meaningful components and categorize them, facilitating better discoverability, analysis, and potentially automated processing of PTMs.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
The source data consists of model names from the Hugging Face Hub. Specifically, the script uses an input CSV file (default: `HF_pkgs.csv`) which is expected to contain at least `context_id` (the model name, e.g., "username/model-name") and `downloads` (the number of downloads for the model).
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
1. **Loading Packages:** Model names (`context_id`) are loaded from the specified CSV file (`CSV_FILE_PATH`).
2. **Filtering:** Models are filtered based on a minimum download count (`MIN_DOWNLOADS`, default 1000) to focus on more popular/established models.
3. **Sampling (Optional):** If `NUM_MODELS` is specified, a random sample of the filtered model names is selected for processing. Otherwise, all filtered models are processed.
4. **Batching:** The selected model names are divided into batches (`BATCH_SIZE`, default 100).
5. **API Interaction:** For each batch:
* Model names are simplified: if a name contains `/`, only the part after the `/` is sent to the API (e.g., "model-name" from "username/model-name"). Full names are used if they don't contain `/`.
* A request is made to the OpenAI API (model: `MODEL_NAME`, default "o4-mini-2025-04-16").
* The API call includes a system prompt (`BACKGROUND_PROMPT` from `system_prompt.py`) that provides context and instructions for the task.
* The API is instructed to return a JSON response conforming to a specific schema (`JSON_SCHEMA` from `schema.py`), which defines the expected structure for "PackageAnalysis" including "name" and "componentMapping" (with "component" and "category").
6. **Parsing Response:** The JSON response from OpenAI is parsed. The simplified names in the response are mapped back to their original full model names.
7. **Retry Mechanism:** If a batch fails, the script retries up to `MAX_RERUNS` (default 3) times, reducing the batch size by half with each retry and using exponential backoff.
8. **Output:**
* Results are incrementally saved to a JSON file (`OUTPUT_JSON_PATH`, default `data/hf_pkg_elements.json`).
* After each batch (or skipped batch), the accumulated JSON data is converted and saved to a CSV file (`OUTPUT_CSV_PATH`, default `data/hf_pkg_elements.csv`).
9. **Libraries Used:** `os`, `csv`, `json`, `time`, `traceback`, `argparse`, `random`, `pandas`, `openai`, `loguru`, `tqdm`.
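The retry scheme in step 7 can be sketched as follows. `process_with_retries` and its parameters are hypothetical names chosen for illustration, not the actual implementation:

```python
import time

def process_with_retries(batch, call_api, max_reruns=3, base_delay=1.0):
    """Sketch of the retry scheme: on failure, halve the chunk size
    and back off exponentially before trying again."""
    size = max(1, len(batch))
    for attempt in range(max_reruns):
        try:
            results = []
            # process the batch in chunks of the current size
            for i in range(0, len(batch), size):
                results.extend(call_api(batch[i:i + size]))
            return results
        except Exception:
            size = max(1, size // 2)                 # halve the chunk size
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    return None  # give up after max_reruns attempts
```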
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
The primary source data (the model names and their download counts) are produced by the Hugging Face community, which includes individual researchers, academic institutions, and commercial entities who create and upload models to the Hugging Face Hub. The script itself does not generate these initial model names but processes them.
### Annotations
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
The dataset itself is a result of an annotation process where "annotation" refers to the extraction of components from model names and the assignment of categories to these components.
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
The annotation process is automated using the OpenAI GPT model (`o4-mini-2025-04-16` or as specified by `MODEL_NAME`).
- **Annotator:** OpenAI GPT model.
- **Annotation Task:** Given a (potentially simplified) model name, identify its constituent parts and assign a predefined category to each part.
- **Guidelines:** The primary guidelines are provided through:
* The `BACKGROUND_PROMPT` (defined in `system_prompt.py`), which instructs the model on how to approach the task of breaking down PTM names.
* The `JSON_SCHEMA` (defined in `schema.py`), which dictates the output format, including the list of valid categories for components. The `strict: True` parameter in the API call enforces this schema.
- **Tooling:** The `extractor.py` script orchestrates the process, including batching, API calls, and response parsing.
- **Validation:** The script checks for the presence of `packageAnalysis`, `name`, and `componentMapping` in the API response. Failed batches are retried. However, the semantic correctness of the component breakdown and category assignment relies on the GPT model's interpretation of the prompt and schema.
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
The "annotator" in this context is the OpenAI GPT model specified by `MODEL_NAME` (e.g., `o4-mini-2025-04-16`). The process is automated, and the script's authors defined the prompts and schema that guide the model.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
The source data (Hugging Face model names) are generally public identifiers for software artifacts and do not inherently contain personal or sensitive information beyond usernames or organization names that might be part of the model ID (e.g., "username/model-name"). The script does not process or add any other form of personal or sensitive information.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
- **Model Dependence and Accuracy:** The quality of component extraction and categorization heavily depends on the capabilities and potential biases of the chosen OpenAI GPT model (`MODEL_NAME`). The model might misinterpret names, hallucinate components, or assign incorrect categories. Its performance can vary with the complexity and novelty of PTM names.
- **Prompt and Schema Influence:** The `BACKGROUND_PROMPT` and `JSON_SCHEMA` significantly guide the model. Any ambiguities, limitations, or biases in their design will be reflected in the output. The predefined categories in the schema might not cover all nuances or future naming trends.
- **Input Data Bias:** The dataset is derived from Hugging Face model names. If the input CSV (`data/HF_pkgs.csv`) is not representative of all PTMs, or if the `MIN_DOWNLOADS` filter is too restrictive or too lenient, the resulting dataset might exhibit biases (e.g., towards more popular models or models from certain organizations).
- **Simplification of Names:** The script sends only the part of the model name after a '/' to the API (if a '/' exists). While this simplifies processing, it might remove context (the namespace/organization) that could be relevant for interpreting the model name itself for the LLM.
- **Cost Considerations:** Generating or updating the dataset incurs costs associated with OpenAI API usage, proportional to the number of tokens processed.
- **Snapshot in Time:** The dataset reflects the PTM naming landscape at the time of its generation. Naming conventions evolve, so the dataset may become outdated.
- **Limited Scope of "Component":** The definition of a "component" is guided by the prompt and schema, which might not align with all possible interpretations of PTM name segmentation.
- **No Ground Truth Validation:** The script does not automatically validate the correctness of the LLM's output against a human-annotated ground truth. The `parse_api_response` function checks for structural validity but not semantic accuracy.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
- Users should be aware of the above limitations and critically assess the dataset's suitability for their specific use case.
- When using the dataset for analytical purposes, consider the potential impact of model biases and the chosen `MIN_DOWNLOADS` threshold.
- For critical applications, consider manual validation of a subset of the data or using multiple LLMs/prompts for cross-verification.
- Acknowledge the version of the OpenAI model used if citing or using the data in research, as model updates can change behavior.
- Be mindful that the categories are predefined and may not be exhaustive.
## Dataset Card Contact
Wenxin Jiang, Ph.D., ECE@Purdue, Email: jiang784@purdue.edu
|
proton98/test-distill2 | proton98 | 2025-05-06T19:46:37Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T19:46:34Z | null | ---
dataset_info:
  features:
  - name: sql_prompt
    dtype: string
  - name: sql_context
    dtype: string
  - name: sql
    dtype: string
  - name: sql_explanation
    dtype: string
  - name: generation
    sequence: string
  - name: distilabel_metadata
    list:
    - name: raw_input_text_generation_0
      list:
      - name: content
        dtype: string
      - name: role
        dtype: string
    - name: raw_output_text_generation_0
      dtype: string
    - name: statistics_text_generation_0
      struct:
      - name: input_tokens
        dtype: int64
      - name: output_tokens
        dtype: int64
  - name: model_name
    dtype: string
  splits:
  - name: train
    num_bytes: 107319
    num_examples: 20
  download_size: 67593
  dataset_size: 107319
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
archivartaunik/MinskGemini_new_version | archivartaunik | 2025-05-06T18:28:06Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T18:28:01Z | null | ---
dataset_info:
features:
- name: chunk_filename
dtype: string
- name: start_ms
dtype: int64
- name: end_ms
dtype: int64
- name: start_time
dtype: string
- name: end_time
dtype: string
- name: text
dtype: string
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 1947431.0
num_examples: 24
download_size: 1948205
dataset_size: 1947431.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reasoning-proj/exp_rob_dfiltered_DeepSeek_R1_Distill_Qwen_1_5B_madversarial_insert_w_t10 | reasoning-proj | 2025-05-06T17:52:12Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T16:48:58Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer_content
dtype: string
- name: reference_answer
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: question_license
dtype: string
- name: question_source
dtype: string
- name: model_name
dtype: string
- name: verifier_score
dtype: int64
- name: mutated_answer_content
dtype: string
splits:
- name: train
num_bytes: 788325
num_examples: 50
download_size: 314168
dataset_size: 788325
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kkteru/alpaca_farm_human_ann_train_chat | kkteru | 2025-05-06T17:31:57Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T17:31:56Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 10982591
num_examples: 17701
download_size: 5540061
dataset_size: 10982591
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/hero_run_3_math | mlfoundations-dev | 2025-05-06T17:31:53Z | 6 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T06:01:39Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: instruction_seed
dtype: string
- name: response_seed
dtype: string
- name: _source
dtype: string
- name: gpt41_mini_response
dtype: string
- name: __original_row_idx
dtype: int64
- name: length
dtype: int64
- name: ms_id
dtype: int64
splits:
- name: train
num_bytes: 16155172624
num_examples: 850000
download_size: 463834425
dataset_size: 16155172624
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
flyingbugs/OpenR1-Math-220k-pruned-keep-0.75-end-start-0.5 | flyingbugs | 2025-05-06T16:38:05Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T16:36:51Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: problem_type
dtype: string
- name: question_type
dtype: string
- name: source
dtype: string
- name: uuid
dtype: string
- name: is_reasoning_complete
sequence: bool
- name: generations
sequence: string
- name: correctness_math_verify
sequence: bool
- name: correctness_llama
sequence: bool
- name: finish_reasons
sequence: string
- name: correctness_count
dtype: int64
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 4701787252
num_examples: 93733
download_size: 2040887094
dataset_size: 4701787252
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Martingkc/MediBert_Dataset | Martingkc | 2025-05-06T16:21:10Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T16:10:18Z | null | ---
dataset_info:
features:
- name: Image Index
dtype: string
- name: Texts
dtype: string
- name: View Position
dtype: string
- name: Image Features
sequence: float64
- name: Text Features
sequence: float64
- name: Atelectasis
dtype: int64
- name: Cardiomegaly
dtype: int64
- name: Effusion
dtype: int64
- name: Infiltration
dtype: int64
- name: Mass
dtype: int64
- name: Nodule
dtype: int64
- name: Pneumonia
dtype: int64
- name: Pneumothorax
dtype: int64
- name: Consolidation
dtype: int64
- name: Edema
dtype: int64
- name: Emphysema
dtype: int64
- name: Fibrosis
dtype: int64
- name: Hernia
dtype: int64
- name: Pleural_Thickening
dtype: int64
- name: No_Finding
dtype: int64
- name: Image
dtype: image
splits:
- name: train
num_bytes: 965181538.2
num_examples: 2380
- name: test
num_bytes: 965181538.2
num_examples: 2380
download_size: 1929861734
dataset_size: 1930363076.4
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ilahgel/dataset_augmentedbygpt | ilahgel | 2025-05-06T15:49:28Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T15:49:24Z | null | ---
dataset_info:
features:
- name: equipment_id
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
- name: explanation
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 11172437
num_examples: 50272
download_size: 2295671
dataset_size: 11172437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ma921/golden-hh-tokenized-mistral_noise0 | ma921 | 2025-05-06T15:14:47Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T15:14:45Z | null | ---
dataset_info:
features:
- name: sft_input_ids
sequence: int64
- name: pos_input_ids
sequence: int64
- name: neg_input_ids
sequence: int64
splits:
- name: train
num_bytes: 19145216
num_examples: 12066
download_size: 4444162
dataset_size: 19145216
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shylee/eval_DP_pengripA_downDims1_cropNo224_freeze0_16_1_ema0_1e-4_ckpt330000 | shylee | 2025-05-06T15:10:27Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-06T14:44:45Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1036,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.FrontCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.TopCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.WristCam": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
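The `data_path` and `video_path` entries above are Python format-string templates; resolving them for a given episode is straightforward. The helper below is a sketch for illustration, not part of the LeRobot API:

```python
# Resolve the parquet and video paths for one episode, following the
# templates in meta/info.json (chunks_size groups episodes into chunks of 1000).
DATA_PATH = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
VIDEO_PATH = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"
CHUNKS_SIZE = 1000

def episode_paths(episode_index: int, video_key: str) -> tuple[str, str]:
    chunk = episode_index // CHUNKS_SIZE
    return (
        DATA_PATH.format(episode_chunk=chunk, episode_index=episode_index),
        VIDEO_PATH.format(
            episode_chunk=chunk, episode_index=episode_index, video_key=video_key
        ),
    )

# Episode 0 of this dataset resolves to:
#   data/chunk-000/episode_000000.parquet
#   videos/chunk-000/observation.images.FrontCam/episode_000000.mp4
```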
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
Mxode/Chinese-Multimodal-Instruct | Mxode | 2025-05-06T15:04:50Z | 221 | 2 | [
"task_categories:visual-question-answering",
"task_categories:image-to-text",
"license:cc-by-sa-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"visual-question-answering",
"image-to-text"
] | 2025-05-01T04:38:57Z | 2 | ---
configs:
- config_name: samples
default: true
data_files:
- split: train
path: samples.parquet
license: cc-by-sa-4.0
task_categories:
- visual-question-answering
- image-to-text
---
<h1 align="center">
Chinese (Visual) Multimodal Instruction Dataset
</h1>
<p align="center">
<a href="https://github.com/Mxoder/Maxs-Awesome-Datasets" target="_blank">💻 Github Repo</a> <br>
</p>
This project aims to build a high-quality, large-scale **Chinese (visual) multimodal instruction dataset**; it is still under construction 🚧💦
---
> [!Important]
> This dataset is still **WIP (Work in Progress)**; the Dataset Viewer currently shows 100 example entries.
>
> The initial estimated size is roughly 1~2M samples (excluding datasets from other sources), all in multi-turn dialogue format.

> [!Tip]
> [2025/05/05] The images have been fully uploaded; the text portion is awaiting upload.
|
AdaptiveML/bird_v4.2_chess_new | AdaptiveML | 2025-05-06T15:04:40Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T15:01:23Z | null | ---
dataset_info:
features:
- name: db_id
dtype: string
- name: question
dtype: string
- name: evidence
dtype: string
- name: SQL
dtype: string
- name: schema
dtype: string
- name: gt_obj
dtype: string
splits:
- name: train
num_bytes: 817906437
num_examples: 9428
- name: dev
num_bytes: 41886334
num_examples: 1534
download_size: 389910235
dataset_size: 859792771
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
---
|
ma921/oasst1-tokenized-qwen2.5_noise0 | ma921 | 2025-05-06T14:58:07Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T14:58:03Z | null | ---
dataset_info:
features:
- name: sft_input_ids
sequence: int64
- name: pos_input_ids
sequence: int64
- name: neg_input_ids
sequence: int64
splits:
- name: train
num_bytes: 104153972
num_examples: 16419
download_size: 27651264
dataset_size: 104153972
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pengguilan/DPO_dataset_from_lima | pengguilan | 2025-05-06T14:15:07Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T12:54:37Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 756204
num_examples: 200
download_size: 253118
dataset_size: 756204
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mteb/KorHateClassification | mteb | 2025-05-06T12:37:19Z | 0 | 0 | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-scoring",
"task_ids:sentiment-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-annotated",
"multilinguality:monolingual",
"language:kor",
"license:cc-by-sa-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2005.12503",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2025-05-06T12:37:15Z | null | ---
annotations_creators:
- expert-annotated
language:
- kor
license: cc-by-sa-4.0
multilinguality: monolingual
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-scoring
- sentiment-classification
- hate-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 221668
num_examples: 2048
- name: test
num_bytes: 51373
num_examples: 471
download_size: 190060
dataset_size: 273041
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">KorHateClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
The dataset was created to provide the first human-labeled Korean corpus for
toxic speech detection, drawn from a Korean online entertainment news aggregator. Recently,
two young Korean celebrities suffered a series of tragic incidents that led two
major Korean web portals to close the comments sections on their platforms. However, this only
serves as a temporary solution, and the fundamental issue remains unsolved. This dataset
is intended to improve Korean hate speech detection. Annotation was performed by 32 annotators,
consisting of 29 annotators from the crowdsourcing platform DeepNatural AI and three NLP researchers.
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Social, Written |
| Reference | https://paperswithcode.com/dataset/korean-hatespeech-dataset |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["KorHateClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{moon2020beep,
archiveprefix = {arXiv},
author = {Jihyung Moon and Won Ik Cho and Junbum Lee},
eprint = {2005.12503},
primaryclass = {cs.CL},
title = {BEEP! Korean Corpus of Online News Comments for Toxic Speech Detection},
year = {2020},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("KorHateClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"train": {
"num_samples": 2048,
"number_of_characters": 79006,
"number_texts_intersect_with_train": null,
"min_text_length": 4,
"average_text_length": 38.5771484375,
"max_text_length": 130,
"unique_text": 2048,
"unique_labels": 3,
"labels": {
"1": {
"count": 648
},
"2": {
"count": 904
},
"0": {
"count": 496
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/ContractNLISharingWithEmployeesLegalBenchClassification | mteb | 2025-05-06T11:59:27Z | 0 | 0 | [
"task_categories:text-classification",
"annotations_creators:expert-annotated",
"multilinguality:monolingual",
"language:eng",
"license:cc-by-4.0",
"modality:text",
"arxiv:2308.11462",
"arxiv:2110.01799",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2025-05-06T11:59:23Z | null | ---
annotations_creators:
- expert-annotated
language:
- eng
license: cc-by-4.0
multilinguality: monolingual
task_categories:
- text-classification
task_ids: []
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 2776
num_examples: 8
- name: test
num_bytes: 95604
num_examples: 170
download_size: 47087
dataset_size: 98380
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">ContractNLISharingWithEmployeesLegalBenchClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
This task is a subset of ContractNLI, and consists of determining whether a clause from an NDA provides that the Receiving Party may share some Confidential Information with some of the Receiving Party's employees.
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Legal, Written |
| Reference | https://huggingface.co/datasets/nguha/legalbench |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["ContractNLISharingWithEmployeesLegalBenchClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{guha2023legalbench,
archiveprefix = {arXiv},
author = {Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},
eprint = {2308.11462},
primaryclass = {cs.CL},
title = {LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models},
year = {2023},
}
@article{koreeda2021contractnli,
author = {Koreeda, Yuta and Manning, Christopher D},
journal = {arXiv preprint arXiv:2110.01799},
title = {ContractNLI: A dataset for document-level natural language inference for contracts},
year = {2021},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("ContractNLISharingWithEmployeesLegalBenchClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 170,
"number_of_characters": 93267,
"number_texts_intersect_with_train": 0,
"min_text_length": 87,
"average_text_length": 548.6294117647059,
"max_text_length": 2493,
"unique_text": 170,
"unique_labels": 2,
"labels": {
"1": {
"count": 88
},
"0": {
"count": 82
}
}
},
"train": {
"num_samples": 8,
"number_of_characters": 2680,
"number_texts_intersect_with_train": null,
"min_text_length": 126,
"average_text_length": 335.0,
"max_text_length": 706,
"unique_text": 8,
"unique_labels": 2,
"labels": {
"1": {
"count": 4
},
"0": {
"count": 4
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/CUADGoverningLawLegalBenchClassification | mteb | 2025-05-06T11:54:09Z | 0 | 0 | [
"task_categories:text-classification",
"annotations_creators:expert-annotated",
"multilinguality:monolingual",
"language:eng",
"license:cc-by-4.0",
"modality:text",
"arxiv:2308.11462",
"arxiv:2103.06268",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2025-05-06T11:54:05Z | null | ---
annotations_creators:
- expert-annotated
language:
- eng
license: cc-by-4.0
multilinguality: monolingual
task_categories:
- text-classification
task_ids: []
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1917
num_examples: 6
- name: test
num_bytes: 264457
num_examples: 876
download_size: 121055
dataset_size: 266374
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">CUADGoverningLawLegalBenchClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
This task was constructed from the CUAD dataset. It consists of determining if the clause specifies which state/country’s law governs the contract.
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Legal, Written |
| Reference | https://huggingface.co/datasets/nguha/legalbench |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["CUADGoverningLawLegalBenchClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{guha2023legalbench,
archiveprefix = {arXiv},
author = {Neel Guha and Julian Nyarko and Daniel E. Ho and Christopher Ré and Adam Chilton and Aditya Narayana and Alex Chohlas-Wood and Austin Peters and Brandon Waldon and Daniel N. Rockmore and Diego Zambrano and Dmitry Talisman and Enam Hoque and Faiz Surani and Frank Fagan and Galit Sarfaty and Gregory M. Dickinson and Haggai Porat and Jason Hegland and Jessica Wu and Joe Nudell and Joel Niklaus and John Nay and Jonathan H. Choi and Kevin Tobia and Margaret Hagan and Megan Ma and Michael Livermore and Nikon Rasumov-Rahe and Nils Holzenberger and Noam Kolt and Peter Henderson and Sean Rehaag and Sharad Goel and Shang Gao and Spencer Williams and Sunny Gandhi and Tom Zur and Varun Iyer and Zehua Li},
eprint = {2308.11462},
primaryclass = {cs.CL},
title = {LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models},
year = {2023},
}
@article{hendrycks2021cuad,
author = {Hendrycks, Dan and Burns, Collin and Chen, Anya and Ball, Spencer},
journal = {arXiv preprint arXiv:2103.06268},
title = {Cuad: An expert-annotated nlp dataset for legal contract review},
year = {2021},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("CUADGoverningLawLegalBenchClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 876,
"number_of_characters": 253930,
"number_texts_intersect_with_train": 0,
"min_text_length": 60,
"average_text_length": 289.8744292237443,
"max_text_length": 2402,
"unique_text": 876,
"unique_labels": 2,
"labels": {
"1": {
"count": 438
},
"0": {
"count": 438
}
}
},
"train": {
"num_samples": 6,
"number_of_characters": 1845,
"number_texts_intersect_with_train": null,
"min_text_length": 97,
"average_text_length": 307.5,
"max_text_length": 838,
"unique_text": 6,
"unique_labels": 2,
"labels": {
"1": {
"count": 3
},
"0": {
"count": 3
}
}
}
}
```
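As a quick sanity check, the aggregates reported above are internally consistent: the average text length equals the total character count divided by the number of samples. A minimal offline sketch, with the values copied from the JSON above (this is not part of the MTEB API):

```python
# Sanity-check the reported descriptive statistics for this task.
test_stats = {"num_samples": 876, "number_of_characters": 253930}
train_stats = {"num_samples": 6, "number_of_characters": 1845}

def average_text_length(stats: dict) -> float:
    """Average characters per sample, as reported on the card."""
    return stats["number_of_characters"] / stats["num_samples"]

print(average_text_length(test_stats))   # ≈ 289.8744, matching the card
print(average_text_length(train_stats))  # 307.5, matching the card
```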
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/CyrillicTurkicLangClassification | mteb | 2025-05-06T11:19:54Z | 0 | 0 | [
"task_categories:text-classification",
"task_ids:language-identification",
"annotations_creators:derived",
"multilinguality:monolingual",
"language:bak",
"language:chv",
"language:kaz",
"language:kir",
"language:krc",
"language:rus",
"language:sah",
"language:tat",
"language:tyv",
"license:cc-by-nc-4.0",
"modality:text",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2025-05-06T11:19:48Z | null | ---
annotations_creators:
- derived
language:
- bak
- chv
- kaz
- kir
- krc
- rus
- sah
- tat
- tyv
license: cc-by-nc-4.0
multilinguality: monolingual
task_categories:
- text-classification
task_ids:
- language-identification
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 13028257
num_examples: 72000
- name: validation
num_bytes: 1633483
num_examples: 9000
- name: test
num_bytes: 375171
num_examples: 2048
download_size: 9046362
dataset_size: 15036911
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">CyrillicTurkicLangClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Cyrillic dataset covering eight Turkic languages spoken in Russia and the former USSR, plus Russian (nine labels in total)
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Web, Written |
| Reference | https://huggingface.co/datasets/tatiana-merz/cyrillic_turkic_langs |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["CyrillicTurkicLangClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@inproceedings{goldhahn2012building,
author = {Goldhahn, Dirk and Eckart, Thomas and Quasthoff, Uwe},
booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)},
title = {Building Large Monolingual Dictionaries at the Leipzig Corpora Collection: From 100 to 200 Languages},
year = {2012},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("CyrillicTurkicLangClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 2048,
"number_of_characters": 191378,
"number_texts_intersect_with_train": 0,
"min_text_length": 15,
"average_text_length": 93.4462890625,
"max_text_length": 253,
"unique_text": 2048,
"unique_labels": 9,
"labels": {
"2": {
"count": 228
},
"3": {
"count": 227
},
"8": {
"count": 228
},
"5": {
"count": 227
},
"6": {
"count": 228
},
"0": {
"count": 227
},
"7": {
"count": 227
},
"1": {
"count": 228
},
"4": {
"count": 228
}
}
},
"train": {
"num_samples": 72000,
"number_of_characters": 6640175,
"number_texts_intersect_with_train": null,
"min_text_length": 15,
"average_text_length": 92.22465277777778,
"max_text_length": 255,
"unique_text": 72000,
"unique_labels": 9,
"labels": {
"8": {
"count": 8000
},
"3": {
"count": 8000
},
"7": {
"count": 8000
},
"5": {
"count": 8000
},
"2": {
"count": 8000
},
"1": {
"count": 8000
},
"6": {
"count": 8000
},
"4": {
"count": 8000
},
"0": {
"count": 8000
}
}
}
}
```
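The averages above can be reproduced directly from the reported totals. A small offline check, with the values copied from the JSON above (a sketch, not part of the MTEB API):

```python
# Verify that average_text_length == number_of_characters / num_samples
# for both splits of CyrillicTurkicLangClassification.
splits = {
    "test": (2048, 191378),    # (num_samples, number_of_characters)
    "train": (72000, 6640175),
}
for name, (n, chars) in splits.items():
    print(name, chars / n)  # test: 93.4462890625, train: ≈ 92.2247
```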
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/Core17InstructionRetrieval | mteb | 2025-05-06T11:19:00Z | 0 | 0 | [
"task_categories:text-ranking",
"annotations_creators:derived",
"multilinguality:monolingual",
"language:eng",
"license:mit",
"modality:text",
"arxiv:2403.15246",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-ranking"
] | 2025-05-06T11:18:45Z | null | ---
annotations_creators:
- derived
language:
- eng
license: mit
multilinguality: monolingual
task_categories:
- text-ranking
task_ids: []
dataset_info:
- config_name: corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 44843804
num_examples: 19899
download_size: 27963474
dataset_size: 44843804
- config_name: instruction
features:
- name: query-id
dtype: string
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 13675
num_examples: 40
download_size: 7443
dataset_size: 13675
- config_name: qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 311980
num_examples: 9480
download_size: 93738
dataset_size: 311980
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 5050
num_examples: 40
download_size: 3561
dataset_size: 5050
- config_name: top_ranked
features:
- name: query-id
dtype: string
- name: corpus-ids
sequence: string
splits:
- name: test
num_bytes: 498500
num_examples: 40
download_size: 213125
dataset_size: 498500
configs:
- config_name: corpus
data_files:
- split: test
path: corpus/test-*
- config_name: instruction
data_files:
- split: test
path: instruction/test-*
- config_name: qrels
data_files:
- split: test
path: qrels/test-*
- config_name: queries
data_files:
- split: test
path: queries/test-*
- config_name: top_ranked
data_files:
- split: test
path: top_ranked/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">Core17InstructionRetrieval</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
Measuring retrieval instruction following ability on Core17 narratives for the FollowIR benchmark.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | News, Written |
| Reference | https://arxiv.org/abs/2403.15246 |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["Core17InstructionRetrieval"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{weller2024followir,
archiveprefix = {arXiv},
author = {Orion Weller and Benjamin Chang and Sean MacAvaney and Kyle Lo and Arman Cohan and Benjamin Van Durme and Dawn Lawrie and Luca Soldaini},
eprint = {2403.15246},
primaryclass = {cs.IR},
title = {FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions},
year = {2024},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("Core17InstructionRetrieval")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 19939,
"number_of_characters": 44459412,
"num_documents": 19899,
"min_document_length": 8,
"average_document_length": 2234.0329664807277,
"max_document_length": 2960,
"unique_documents": 19899,
"num_queries": 40,
"min_query_length": 55,
"average_query_length": 109.75,
"max_query_length": 278,
"unique_queries": 40,
"none_queries": 0,
"num_relevant_docs": 9480,
"min_relevant_docs_per_query": 135,
"average_relevant_docs_per_query": 43.6,
"max_relevant_docs_per_query": 379,
"unique_relevant_docs": 4739,
"num_instructions": 40,
"min_instruction_length": 102,
"average_instruction_length": 13015,
"max_instruction_length": 837,
"unique_instructions": 40,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
}
}
```
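The corpus- and query-level aggregates above tie out against the overall character count: documents and queries together account for the reported `number_of_characters`. A quick offline check using the values from the JSON above (a sketch, not part of the MTEB API):

```python
# Cross-check the Core17InstructionRetrieval character totals.
num_docs, avg_doc_len = 19899, 2234.0329664807277
num_queries, avg_query_len = 40, 109.75

total_chars = num_docs * avg_doc_len + num_queries * avg_query_len
print(round(total_chars))  # 44459412, the reported number_of_characters
```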
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
quyanh/redbench-v1_decontaminated | quyanh | 2025-05-06T10:37:18Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T10:14:50Z | null | ---
dataset_info:
- config_name: AdvBench
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 4444580.410659676
num_examples: 1076
download_size: 1099471
dataset_size: 4444580.410659676
- config_name: CatQA
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 2242943.4600262125
num_examples: 543
download_size: 596927
dataset_size: 2242943.4600262125
- config_name: CoCoNot
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 5700298.296199214
num_examples: 1380
download_size: 1308135
dataset_size: 5700298.296199214
- config_name: CoNA
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 735255.8671909131
num_examples: 178
download_size: 255594
dataset_size: 735255.8671909131
- config_name: CoSafe
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 4341314.137177807
num_examples: 1051
download_size: 1252494
dataset_size: 4341314.137177807
- config_name: ControversialInstructions
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 165226.0375709917
num_examples: 40
download_size: 67688
dataset_size: 165226.0375709917
- config_name: CyberattackAssistance
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 4052168.5714285714
num_examples: 981
download_size: 1380005
dataset_size: 4052168.5714285714
- config_name: DAN
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 1606823.2153778942
num_examples: 389
download_size: 411373
dataset_size: 1606823.2153778942
- config_name: DeMET
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 119788.87723896898
num_examples: 29
download_size: 50740
dataset_size: 119788.87723896898
- config_name: DiaSafety
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 2057064.1677588467
num_examples: 498
download_size: 583809
dataset_size: 2057064.1677588467
- config_name: DoNotAnswer
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 3870419.9301004806
num_examples: 937
download_size: 995150
dataset_size: 3870419.9301004806
- config_name: ForbiddenQuestions
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 1413352
num_examples: 390
download_size: 397264
dataset_size: 1413352
- config_name: GEST
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 14684464.089121887
num_examples: 3555
download_size: 3142341
dataset_size: 14684464.089121887
- config_name: GPTFuzzer
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 392411.8392311053
num_examples: 95
download_size: 121232
dataset_size: 392411.8392311053
- config_name: GandalfIgnoreInstructions
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 462632.90519877675
num_examples: 112
download_size: 153458
dataset_size: 462632.90519877675
- config_name: GandalfSummarization
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 53698.4622105723
num_examples: 13
download_size: 43716
dataset_size: 53698.4622105723
- config_name: HarmBench
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 1493569
num_examples: 320
download_size: 419115
dataset_size: 1493569
- config_name: HarmfulQ
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 813738.2350371341
num_examples: 197
download_size: 240650
dataset_size: 813738.2350371341
- config_name: HarmfulQA
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 8001070.869375273
num_examples: 1937
download_size: 2049297
dataset_size: 8001070.869375273
- config_name: JADE
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 330452.0751419834
num_examples: 80
download_size: 133497
dataset_size: 330452.0751419834
- config_name: JBBBehaviours
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 384150.5373525557
num_examples: 93
download_size: 105224
dataset_size: 384150.5373525557
- config_name: KorNAT
features:
- name: prompt
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 80258
num_examples: 14
download_size: 38792
dataset_size: 80258
- config_name: LatentJailbreak
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 10012697.876802096
num_examples: 2424
download_size: 2482559
dataset_size: 10012697.876802096
- config_name: MaliciousInstruct
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 413065.09392747923
num_examples: 100
download_size: 152998
dataset_size: 413065.09392747923
- config_name: MaliciousInstructions
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 404803.79204892967
num_examples: 98
download_size: 130632
dataset_size: 404803.79204892967
- config_name: MedSafetyBench
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 3717585.845347313
num_examples: 900
download_size: 1193872
dataset_size: 3717585.845347313
- config_name: MoralExceptQA
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
struct:
- name: human.response
dtype: float64
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 903099
num_examples: 148
download_size: 213091
dataset_size: 903099
- config_name: ORBench
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 5448328.588903451
num_examples: 1319
download_size: 1700064
dataset_size: 5448328.588903451
- config_name: PhysicalSafetyInstructions
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 413065.09392747923
num_examples: 100
download_size: 204901
dataset_size: 413065.09392747923
- config_name: QHarm
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 396542.4901703801
num_examples: 96
download_size: 142601
dataset_size: 396542.4901703801
- config_name: SGBench
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 3866289.279161206
num_examples: 936
download_size: 1025021
dataset_size: 3866289.279161206
- config_name: SGXSTest
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 413065.09392747923
num_examples: 100
download_size: 167129
dataset_size: 413065.09392747923
- config_name: SafeText
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 0.0
num_examples: 0
download_size: 3266
dataset_size: 0.0
- config_name: StrongREJECT
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 937657.7632153779
num_examples: 227
download_size: 334463
dataset_size: 937657.7632153779
- config_name: ToxiGen
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 3882811.8829183048
num_examples: 940
download_size: 1022196
dataset_size: 3882811.8829183048
- config_name: WMDP
features:
- name: prompt
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 21642650
num_examples: 3668
download_size: 6007583
dataset_size: 21642650
- config_name: XSTest
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 1850531.620795107
num_examples: 448
download_size: 498192
dataset_size: 1850531.620795107
- config_name: XSafety
features:
- name: prompt
dtype: string
- name: choices
dtype: string
- name: answer
dtype: string
- name: task
dtype: string
- name: subtask
dtype: string
- name: category
dtype: string
- name: domain
dtype: string
- name: language
dtype: string
- name: source
dtype: string
- name: risk_response
dtype: string
- name: risk_property
dtype: string
- name: domain_response
dtype: string
- name: domain_property
dtype: string
- name: subdataset
dtype: string
splits:
- name: train
num_bytes: 11557561.328090869
num_examples: 2798
download_size: 3097933
dataset_size: 11557561.328090869
configs:
- config_name: AdvBench
data_files:
- split: train
path: AdvBench/train-*
- config_name: CatQA
data_files:
- split: train
path: CatQA/train-*
- config_name: CoCoNot
data_files:
- split: train
path: CoCoNot/train-*
- config_name: CoNA
data_files:
- split: train
path: CoNA/train-*
- config_name: CoSafe
data_files:
- split: train
path: CoSafe/train-*
- config_name: ControversialInstructions
data_files:
- split: train
path: ControversialInstructions/train-*
- config_name: CyberattackAssistance
data_files:
- split: train
path: CyberattackAssistance/train-*
- config_name: DAN
data_files:
- split: train
path: DAN/train-*
- config_name: DeMET
data_files:
- split: train
path: DeMET/train-*
- config_name: DiaSafety
data_files:
- split: train
path: DiaSafety/train-*
- config_name: DoNotAnswer
data_files:
- split: train
path: DoNotAnswer/train-*
- config_name: ForbiddenQuestions
data_files:
- split: train
path: ForbiddenQuestions/train-*
- config_name: GEST
data_files:
- split: train
path: GEST/train-*
- config_name: GPTFuzzer
data_files:
- split: train
path: GPTFuzzer/train-*
- config_name: GandalfIgnoreInstructions
data_files:
- split: train
path: GandalfIgnoreInstructions/train-*
- config_name: GandalfSummarization
data_files:
- split: train
path: GandalfSummarization/train-*
- config_name: HarmBench
data_files:
- split: train
path: HarmBench/train-*
- config_name: HarmfulQ
data_files:
- split: train
path: HarmfulQ/train-*
- config_name: HarmfulQA
data_files:
- split: train
path: HarmfulQA/train-*
- config_name: JADE
data_files:
- split: train
path: JADE/train-*
- config_name: JBBBehaviours
data_files:
- split: train
path: JBBBehaviours/train-*
- config_name: KorNAT
data_files:
- split: train
path: KorNAT/train-*
- config_name: LatentJailbreak
data_files:
- split: train
path: LatentJailbreak/train-*
- config_name: MaliciousInstruct
data_files:
- split: train
path: MaliciousInstruct/train-*
- config_name: MaliciousInstructions
data_files:
- split: train
path: MaliciousInstructions/train-*
- config_name: MedSafetyBench
data_files:
- split: train
path: MedSafetyBench/train-*
- config_name: MoralExceptQA
data_files:
- split: train
path: MoralExceptQA/train-*
- config_name: ORBench
data_files:
- split: train
path: ORBench/train-*
- config_name: PhysicalSafetyInstructions
data_files:
- split: train
path: PhysicalSafetyInstructions/train-*
- config_name: QHarm
data_files:
- split: train
path: QHarm/train-*
- config_name: SGBench
data_files:
- split: train
path: SGBench/train-*
- config_name: SGXSTest
data_files:
- split: train
path: SGXSTest/train-*
- config_name: SafeText
data_files:
- split: train
path: SafeText/train-*
- config_name: StrongREJECT
data_files:
- split: train
path: StrongREJECT/train-*
- config_name: ToxiGen
data_files:
- split: train
path: ToxiGen/train-*
- config_name: WMDP
data_files:
- split: train
path: WMDP/train-*
- config_name: XSTest
data_files:
- split: train
path: XSTest/train-*
- config_name: XSafety
data_files:
- split: train
path: XSafety/train-*
---
|
SayantanJoker/processed_seamless_align_hindi_new_chunk_46 | SayantanJoker | 2025-05-06T10:22:02Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T10:20:35Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 2662496217.0
num_examples: 10000
download_size: 2547829459
dataset_size: 2662496217.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SayantanJoker/processed_seamless_align_hindi_new_chunk_17 | SayantanJoker | 2025-05-06T09:40:18Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T09:38:51Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 2682105462.0
num_examples: 10000
download_size: 2539386917
dataset_size: 2682105462.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SayantanJoker/processed_seamless_align_hindi_new_chunk_7 | SayantanJoker | 2025-05-06T09:25:38Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T09:24:15Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 2607074974.0
num_examples: 10000
download_size: 2483608876
dataset_size: 2607074974.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
iseddik/poison_tr_0.1_128b | iseddik | 2025-05-06T08:49:58Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T08:49:56Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: poisoned_index
dtype: int64
splits:
- name: train
num_bytes: 47589
num_examples: 128
download_size: 33091
dataset_size: 47589
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
iseddik/clean_tr_0.1_128b | iseddik | 2025-05-06T08:49:26Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T08:49:24Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 47741.952
num_examples: 128
download_size: 33973
dataset_size: 47741.952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mteb/ArguAna-Fa | mteb | 2025-05-06T08:17:58Z | 0 | 0 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:derived",
"multilinguality:monolingual",
"source_datasets:mteb/arguana",
"language:fas",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2025-05-06T08:17:42Z | null | ---
annotations_creators:
- derived
language:
- fas
license: unknown
multilinguality: monolingual
source_datasets:
- mteb/arguana
task_categories:
- text-retrieval
task_ids:
- document-retrieval
dataset_info:
- config_name: corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: test
num_bytes: 14546179
num_examples: 8674
download_size: 6447438
dataset_size: 14546179
- config_name: qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: test
num_bytes: 111736
num_examples: 1406
download_size: 24447
dataset_size: 111736
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: test
num_bytes: 2694250
num_examples: 1406
download_size: 1193251
dataset_size: 2694250
configs:
- config_name: corpus
data_files:
- split: test
path: corpus/test-*
- config_name: qrels
data_files:
- split: test
path: qrels/test-*
- config_name: queries
data_files:
- split: test
path: queries/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">ArguAna-Fa</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
ArguAna-Fa
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Blog |
| Reference | https://huggingface.co/datasets/MCINext/arguana-fa |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["ArguAna-Fa"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("ArguAna-Fa")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 10080,
"number_of_characters": 9458841,
"num_documents": 8674,
"min_document_length": 1,
"average_document_length": 918.7068249942356,
"max_document_length": 4427,
"unique_documents": 8674,
"num_queries": 1406,
"min_query_length": 189,
"average_query_length": 1059.7283072546231,
"max_query_length": 4234,
"unique_queries": 1406,
"none_queries": 0,
"num_relevant_docs": 1406,
"min_relevant_docs_per_query": 1,
"average_relevant_docs_per_query": 1.0,
"max_relevant_docs_per_query": 1,
"unique_relevant_docs": 1406,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
}
}
```
</details>
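The statistics above show exactly one relevant document per query. As a rough illustration of how the three configs fit together (using toy in-memory rows shaped like the `corpus`, `queries`, and `qrels` schemas — not real data from this dataset), the configs join on their id fields like this:

```python
# Toy rows mimicking the config schemas above (illustrative values only).
corpus = [{"_id": "c1", "title": "", "text": "first document"},
          {"_id": "c2", "title": "", "text": "second document"}]
queries = [{"_id": "q1", "text": "a query"}]
qrels = [{"query-id": "q1", "corpus-id": "c2", "score": 1}]

# Build a relevance lookup: query _id -> set of relevant corpus _ids.
relevant = {}
for row in qrels:
    relevant.setdefault(row["query-id"], set()).add(row["corpus-id"])

print(relevant)  # {'q1': {'c2'}}
```

Retrieval metrics such as nDCG are then computed by comparing a model's ranked corpus ids per query against this lookup.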
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/AngryTweetsClassification | mteb | 2025-05-06T08:15:53Z | 0 | 0 | [
"task_categories:text-classification",
"task_ids:sentiment-analysis",
"task_ids:sentiment-scoring",
"task_ids:sentiment-classification",
"task_ids:hate-speech-detection",
"annotations_creators:human-annotated",
"multilinguality:monolingual",
"language:dan",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2025-05-06T08:15:41Z | null | ---
annotations_creators:
- human-annotated
language:
- dan
license: cc-by-4.0
multilinguality: monolingual
task_categories:
- text-classification
task_ids:
- sentiment-analysis
- sentiment-scoring
- sentiment-classification
- hate-speech-detection
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 416154
num_examples: 2411
- name: test
num_bytes: 184365
num_examples: 1047
download_size: 392885
dataset_size: 600519
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">AngryTweetsClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
A sentiment dataset with 3 classes (positiv, negativ, neutral) for Danish tweets
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Social, Written |
| Reference | https://aclanthology.org/2021.nodalida-main.53/ |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["AngryTweetsClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@inproceedings{pauli2021danlp,
author = {Pauli, Amalie Brogaard and Barrett, Maria and Lacroix, Oph{\'e}lie and Hvingelby, Rasmus},
booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)},
pages = {460--466},
title = {DaNLP: An open-source toolkit for Danish Natural Language Processing},
year = {2021},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("AngryTweetsClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 1047,
"number_of_characters": 163484,
"number_texts_intersect_with_train": 0,
"min_text_length": 9,
"average_text_length": 156.14517669531998,
"max_text_length": 327,
"unique_text": 1044,
"unique_labels": 3,
"labels": {
"neutral": {
"count": 363
},
"positiv": {
"count": 282
},
"negativ": {
"count": 402
}
}
},
"train": {
"num_samples": 2411,
"number_of_characters": 368784,
"number_texts_intersect_with_train": null,
"min_text_length": 1,
"average_text_length": 152.95893819991704,
"max_text_length": 338,
"unique_text": 2410,
"unique_labels": 3,
"labels": {
"positiv": {
"count": 648
},
"neutral": {
"count": 852
},
"negativ": {
"count": 911
}
}
}
}
```
</details>
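As a quick sanity check on the class balance reported above, the test-split label counts imply a majority-class baseline (always predicting `negativ`) of roughly 38% accuracy — a minimal sketch using only those counts:

```python
# Test-split label counts taken from the statistics above.
counts = {"neutral": 363, "positiv": 282, "negativ": 402}
total = sum(counts.values())                 # 1047 examples
majority_acc = max(counts.values()) / total  # always predict "negativ"
print(round(majority_acc, 3))                # 0.384
```

Any classifier evaluated on this split should comfortably beat this baseline to demonstrate real signal.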
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
worstchan/Belle_1.4M-SLAM-Omni | worstchan | 2025-05-06T08:04:59Z | 1,958 | 1 | [
"task_categories:question-answering",
"language:zh",
"license:gpl-3.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2412.15649",
"region:us"
] | [
"question-answering"
] | 2024-12-20T09:11:26Z | null | ---
license: gpl-3.0
dataset_info:
features:
- name: split_name
dtype: string
- name: index
dtype: int64
- name: round
dtype: int64
- name: question
dtype: string
- name: question_audio
struct:
- name: array
sequence: float32
- name: path
dtype: string
- name: sampling_rate
dtype: int64
- name: answer
dtype: string
- name: answer_cosyvoice_speech_token
sequence: int64
- name: answer_snac
dtype: string
splits:
- name: train
num_bytes: 800059817200
num_examples: 1400398
download_size: 792877562556
dataset_size: 800059817200
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- question-answering
language:
- zh
size_categories:
- 1M<n<10M
---
# Belle_1.4M
*This dataset is prepared for the reproduction of [SLAM-Omni](https://arxiv.org/abs/2412.15649).*
This is a **multi-round Chinese spoken dialogue** training dataset. For code and usage examples, please refer to the related GitHub repository: [X-LANCE/SLAM-LLM (examples/s2s)](https://github.com/X-LANCE/SLAM-LLM/tree/main/examples/s2s)
## 🔧 Modifications
1. **Data Filtering**: We removed excessively long samples.
2. **Speech Response Tokens**: We used [CosyVoice](https://github.com/FunAudioLLM/CosyVoice) to synthesize corresponding semantic speech tokens for the speech response. These tokens, represented as `answer_cosyvoice_speech_token`, are included as model training targets.
3. **User Instruction Speech**: We synthesized speech for user instructions using CosyVoice, with timbres randomly selected from 1,010 Chinese prompts in the [seed-tts-eval](https://github.com/BytedanceSpeech/seed-tts-eval) subset to ensure diversity.
## 🙏 Acknowledgment
The original dataset was sourced from [Belle_train_3.5M_CN](https://huggingface.co/datasets/BelleGroup/train_3.5M_CN). We thank the Belle Group for their open-source contribution.
## 📄 Citation
If you find our work helpful, please consider citing:
```bibtex
@article{chen2024slam,
title={SLAM-Omni: Timbre-Controllable Voice Interaction System with Single-Stage Training},
author={Chen, Wenxi and Ma, Ziyang and Yan, Ruiqi and Liang, Yuzhe and Li, Xiquan and Xu, Ruiyang and Niu, Zhikang and Zhu, Yanqiao and Yang, Yifan and Liu, Zhanxun and others},
journal={arXiv preprint arXiv:2412.15649},
year={2024}
}
``` |
Sooraj87/med-data | Sooraj87 | 2025-05-06T06:21:25Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T06:21:24Z | null | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: __index_level_0__
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 2271829
num_examples: 1000
download_size: 1304108
dataset_size: 2271829
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
huutuan/LeViSQA-200sample | huutuan | 2025-05-06T05:59:27Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T05:58:58Z | null | ---
dataset_info:
features:
- name: _id
dtype: string
- name: speech
dtype: audio
- name: transcription
dtype: string
- name: questions
dtype: string
- name: answers
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 1201469914.2630787
num_examples: 200
download_size: 777839295
dataset_size: 1201469914.2630787
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/d1_science_all_large_10k | mlfoundations-dev | 2025-05-06T05:19:25Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T05:19:13Z | null | ---
dataset_info:
features:
- name: instruction_seed
dtype: string
- name: _source
dtype: string
- name: gpt41_mini_response
dtype: string
- name: __original_row_idx
dtype: int64
- name: length
dtype: int64
- name: domain
dtype: string
- name: r1_response
dtype: string
- name: r1_reasoning_content
dtype: string
- name: extract_solution
dtype: string
- name: url
dtype: string
- name: filename
dtype: string
- name: success
dtype: bool
- name: page_count
dtype: int64
- name: page_number
dtype: int64
- name: question_choices_solutions
dtype: string
- name: extracted_question
dtype: string
- name: extracted_answer_choices
sequence: string
- name: matched_solution
dtype: string
- name: qa_validation_outputs
dtype: bool
- name: classifier_reasoning
dtype: string
- name: is_organic_chemistry
dtype: bool
- name: ms_id
dtype: int64
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: final_reasoning_trace
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 716420520.0949367
num_examples: 10000
download_size: 316275097
dataset_size: 716420520.0949367
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/openthoughts2_code_10k | mlfoundations-dev | 2025-05-06T05:04:53Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T05:04:36Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: _domain
dtype: string
- name: system
dtype: string
- name: problem
dtype: string
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: question
dtype: string
- name: source
dtype: string
- name: id
dtype: int64
- name: extracted_instruction
dtype: string
splits:
- name: train
num_bytes: 263414468.80182666
num_examples: 10000
download_size: 110745763
dataset_size: 263414468.80182666
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/e1_science_longest_qwq_together_0.3k | mlfoundations-dev | 2025-05-06T04:45:49Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T04:45:42Z | null | ---
dataset_info:
features:
- name: instruction_seed
dtype: string
- name: _source
dtype: string
- name: gpt41_mini_response
dtype: string
- name: __original_row_idx
dtype: int64
- name: length
dtype: int64
- name: domain
dtype: string
- name: r1_response
dtype: string
- name: r1_reasoning_content
dtype: string
- name: extract_solution
dtype: string
- name: url
dtype: string
- name: filename
dtype: string
- name: success
dtype: bool
- name: page_count
dtype: int64
- name: page_number
dtype: int64
- name: question_choices_solutions
dtype: string
- name: extracted_question
dtype: string
- name: extracted_answer_choices
sequence: string
- name: matched_solution
dtype: string
- name: qa_validation_outputs
dtype: bool
- name: classifier_reasoning
dtype: string
- name: is_organic_chemistry
dtype: bool
- name: ms_id
dtype: int64
- name: qwq_thinking_trajectory
dtype: string
- name: qwq_attempt
dtype: string
- name: qwq_response
sequence: string
- name: _majority_responses
sequence: string
- name: verified_qwq_response
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 261132236.55
num_examples: 316
download_size: 122282190
dataset_size: 261132236.55
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
genex-world/Genex-DB-World-Exploration | genex-world | 2025-05-06T03:21:17Z | 286 | 0 | [
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"arxiv:2412.09624",
"region:us"
] | [] | 2025-04-23T04:14:07Z | null | ---
dataset_info:
features:
- name: video
dtype: video
splits:
- name: view
num_examples: 4
- name: realistic
num_examples: 3700
- name: low_texture
num_examples: 8400
- name: anime
num_examples: 900
- name: real_world
num_examples: 2400
configs:
- config_name: default
data_files:
- split: view
path: view/*.mp4
- split: realistic
path: Realistic/*.mp4
- split: low_texture
path: Low-Texture/*.mp4
- split: anime
path: Anime/*.mp4
- split: real_world
path: Real-World/*.mp4
size_categories:
- 10K<n<100K
license: cc-by-4.0
---
# GenEx-DB-World-Exploration 🎬🌍
This is the video version of the GenEx-DB dataset.
It contains forward navigation paths captured by panoramic cameras.
Each path advances 0.4 m per frame, for 50 frames in total.
Each example is a single `.mp4` video reconstructed from the original frame folders.
## 📂 Splits
| Split Name    | Description                                              |
|---------------|----------------------------------------------------------|
| `realistic`   | 📸 Unreal 5 City Sample renders                          |
| `low_texture` | 🏜️ Blender low-texture synthetic scenes                  |
| `anime`       | 🌸 Unity stylized/anime scenes                           |
| `real_world`  | 🎥 Handheld real-world clips collected on the JHU campus |
## 🏗️ Structure
```
Genex-DB-World-Exploration/
├── Realistic/
│   ├── video001.mp4
│   └── …
├── Low-Texture/
│   └── …
├── Anime/
│   └── …
└── Real-World/
    └── …
```
Each file is named `<video_id>.mp4` and contains 50 (or 97 for `real_world`) frames at 10 FPS.
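Given the stated step size (0.4 m per frame) and frame rate (10 FPS), per-clip duration and traversed distance follow directly. A small sanity-check sketch (frame counts taken from this card; the 0.4 m step is stated for the rendered paths, so treat the `real_world` distance as nominal):

```python
# Derive clip duration and traversed distance from the frame counts
# stated on this card (10 FPS, 0.4 m per frame).
FPS = 10
METERS_PER_FRAME = 0.4

frame_counts = {
    "realistic": 50,
    "low_texture": 50,
    "anime": 50,
    "real_world": 97,  # handheld clips are longer
}

for split, frames in frame_counts.items():
    duration_s = frames / FPS
    distance_m = frames * METERS_PER_FRAME
    print(f"{split}: {duration_s:.1f} s, ~{distance_m:.1f} m traversed")
```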
## 🚀 Usage
```python
from datasets import load_dataset
# Load the "anime" split directly from the Hub
ds = load_dataset("genex-world/Genex-DB-World-Exploration", split="anime")

# Inspect one example; the "video" column decodes lazily to a video object
example = ds[0]
print(example["video"])
```
## ✨ BibTex
```
@misc{lu2025genexgeneratingexplorableworld,
title={GenEx: Generating an Explorable World},
author={Taiming Lu and Tianmin Shu and Junfei Xiao and Luoxin Ye and Jiahao Wang and Cheng Peng and Chen Wei and Daniel Khashabi and Rama Chellappa and Alan Yuille and Jieneng Chen},
year={2025},
eprint={2412.09624},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2412.09624},
}
``` |
osama24sy/llama3.1-8b-it-coutdown-game-7k-qwq-r64-v0.2-countdown-v0.3 | osama24sy | 2025-05-06T03:12:14Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T03:12:13Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: numbers
sequence: int64
- name: target
dtype: int64
- name: operations
sequence: string
- name: response
dtype: string
- name: token_count
dtype: int64
splits:
- name: train
num_bytes: 1622865
num_examples: 150
download_size: 602887
dataset_size: 1622865
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/distill-r1-qwen-1.5b-hmmt-feb-25-4096-with-labels-prm-indices_38400_46080 | kaiwenw | 2025-05-06T03:07:14Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T03:06:49Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1134888781
num_examples: 7680
download_size: 670961721
dataset_size: 1134888781
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/distill-r1-qwen-1.5b-hmmt-feb-25-4096-with-labels-prm-indices_61440_69120 | kaiwenw | 2025-05-06T02:48:10Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T02:47:47Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1131365794
num_examples: 7680
download_size: 668098928
dataset_size: 1131365794
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.2_num-company_3_dataset_2_for_gen_2_v2 | HungVu2003 | 2025-05-06T02:47:43Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T02:47:42Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2572712
num_examples: 14998
download_size: 1336141
dataset_size: 2572712
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/distill-r1-qwen-1.5b-hmmt-feb-25-4096-with-old-prm-indices_0_7680 | kaiwenw | 2025-05-06T02:41:24Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T02:41:11Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1134755270
num_examples: 7680
download_size: 267352027
dataset_size: 1134755270
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ParkSY/data_nerf_depthanything_depth_normalmap | ParkSY | 2025-05-06T01:46:26Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T01:46:21Z | null | ---
dataset_info:
features:
- name: input_image
dtype: string
- name: edit_prompt
dtype: string
- name: edited_image
dtype: string
- name: label
dtype: int64
- name: depthmap
dtype: string
- name: normalmap
dtype: string
splits:
- name: train
num_bytes: 3128138
num_examples: 9828
download_size: 39548
dataset_size: 3128138
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Jennny/eng-prm-test | Jennny | 2025-05-06T00:46:48Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:46:46Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2397071
num_examples: 1200
download_size: 983649
dataset_size: 2397071
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mhr2004/nevir-original-mhr2004-roberta-large-anion-1e-06-256-stsb-lr2e-05-bs32-pred | mhr2004 | 2025-05-06T00:31:28Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-06T00:31:26Z | null | ---
dataset_info:
features:
- name: input_ids_1
sequence: int64
- name: att_1
sequence: int64
- name: query
dtype: string
- name: doc_1
dtype: string
- name: doc_2
dtype: string
- name: input_ids_2
sequence: int64
- name: att_2
sequence: int64
- name: label
dtype: int64
- name: pair_id
dtype: int64
- name: pred
dtype: int64
splits:
- name: train
num_bytes: 49610561
num_examples: 2766
download_size: 2469482
dataset_size: 49610561
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
justus27/s2-numina | justus27 | 2025-05-05T23:18:25Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T23:02:22Z | null | ---
dataset_info:
features:
- name: problem_id
dtype: string
- name: task_type
dtype: string
- name: prompt
dtype: string
- name: verification_info
dtype: string
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 283538059
num_examples: 735773
download_size: 114223231
dataset_size: 283538059
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
samahadhoud/decomposed-tikz-dataset-80-end | samahadhoud | 2025-05-05T22:28:54Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T22:28:10Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: png
dtype: image
- name: code
dtype: string
splits:
- name: train
num_bytes: 736552396.57
num_examples: 68695
download_size: 676079721
dataset_size: 736552396.57
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.0_num-company_3_dataset_1_for_gen_19_v2 | HungVu2003 | 2025-05-05T22:06:29Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T22:06:28Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6619719
num_examples: 12500
download_size: 3373397
dataset_size: 6619719
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ieuniversity/group_2_submission | ieuniversity | 2025-05-05T21:52:14Z | 433 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-17T13:20:12Z | null | ---
dataset_info:
features:
- name: ID
dtype: string
- name: CLASE
dtype: string
splits:
- name: train
num_bytes: 895475
num_examples: 25808
download_size: 501513
dataset_size: 895475
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_2_for_gen_7_v2 | HungVu2003 | 2025-05-05T21:42:27Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:42:26Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 814852
num_examples: 12500
download_size: 561364
dataset_size: 814852
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Chenlu123/numia_prompt_ppo | Chenlu123 | 2025-05-05T21:38:24Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T17:16:24Z | null | ---
dataset_info:
features:
- name: data_source
dtype: string
- name: ability
dtype: string
- name: reward_model
struct:
- name: ground_truth
dtype: string
- name: style
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 448356055.7457226
num_examples: 312448
download_size: 224564284
dataset_size: 448356055.7457226
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MarsRedderd/coin-images | MarsRedderd | 2025-05-05T21:36:42Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T21:36:40Z | null | ---
dataset_info:
features:
- name: folder
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 265326
num_examples: 3749
download_size: 36274
dataset_size: 265326
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MBZUAI-IFM/R1_distilled_brain_teasers_filtered_final | MBZUAI-IFM | 2025-05-05T20:10:46Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T20:10:43Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: puzzle_id
dtype: string
- name: reconstruction
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: distrator1
dtype: string
- name: distrator2
dtype: string
- name: unsure
dtype: string
- name: DSR1_reasoning_content
dtype: string
- name: DSR1_content
dtype: string
- name: output
dtype: string
- name: instruction
dtype: string
- name: answerKey
dtype: string
- name: choices
struct:
- name: label
sequence: string
- name: text
sequence: string
- name: original_question
dtype: string
- name: has_forbidden
dtype: bool
splits:
- name: train
num_bytes: 24033616
num_examples: 2345
download_size: 10953571
dataset_size: 24033616
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jdchang/qsharp-bt-mixture | jdchang | 2025-05-05T19:55:28Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T19:54:46Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: reward
sequence: bool
- name: roll_in_ids
sequence:
sequence: int32
- name: roll_outs_ids
sequence:
sequence: int32
- name: processed_answer
sequence: string
splits:
- name: train
num_bytes: 2433860777
num_examples: 27194
download_size: 688707061
dataset_size: 2433860777
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/distill-r1-qwen-1.5b-hmmt-feb-24-4096-with-labels-prm-indices_23040_30720 | kaiwenw | 2025-05-05T19:20:17Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T19:19:50Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1121700952
num_examples: 7680
download_size: 670958341
dataset_size: 1121700952
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.0_num-company_2_dataset_0_for_gen_10_v2 | HungVu2003 | 2025-05-05T19:14:35Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T19:14:33Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 981223
num_examples: 12500
download_size: 633869
dataset_size: 981223
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.8_num-company_3_dataset_1_for_gen_6 | HungVu2003 | 2025-05-05T19:11:14Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T19:11:09Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2883478
num_examples: 12498
download_size: 1308514
dataset_size: 2883478
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/distill-r1-qwen-1.5b-hmmt-feb-24-4096-with-old-prm-indices_15360_23040 | kaiwenw | 2025-05-05T18:56:14Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T18:56:03Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1130349195
num_examples: 7680
download_size: 266993315
dataset_size: 1130349195
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
amrosama/arabic_english_dataset_for_lang_translations_tasks | amrosama | 2025-05-05T18:27:27Z | 3 | 0 | [
"license:apache-2.0",
"region:us",
"translation",
"arabic"
] | [] | 2024-05-11T15:16:11Z | null | ---
license: apache-2.0
tags:
- translation
- arabic
--- |
MBZUAI-IFM/riddlesenseplusplus_evaluated | MBZUAI-IFM | 2025-05-05T17:57:56Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T17:57:54Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: metadata
dtype: string
- name: dataset_source
dtype: string
splits:
- name: train
num_bytes: 1267424
num_examples: 397
download_size: 601693
dataset_size: 1267424
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/distill-r1-qwen-1.5b-aime-25-4096-with-old-prm-indices_0_7680 | kaiwenw | 2025-05-05T17:02:37Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T17:02:26Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1031600947
num_examples: 7680
download_size: 239593968
dataset_size: 1031600947
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/distill-r1-qwen-1.5b-aime-25-4096-with-old-prm-indices_53760_61440 | kaiwenw | 2025-05-05T17:02:21Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T17:02:11Z | null | ---
dataset_info:
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1025824877
num_examples: 7680
download_size: 238270157
dataset_size: 1025824877
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kaiwenw/distill-r1-qwen-1.5b-aime-24-4096-with-labels-prm | kaiwenw | 2025-05-05T16:25:30Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T15:55:42Z | null | ---
dataset_info:
- config_name: indices_0_7680
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1023956244
num_examples: 7680
download_size: 617586529
dataset_size: 1023956244
- config_name: indices_107520_115200
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1017325551
num_examples: 7680
download_size: 613390391
dataset_size: 1017325551
- config_name: indices_115200_122880
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1020491797
num_examples: 7680
download_size: 615100745
dataset_size: 1020491797
- config_name: indices_15360_23040
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1026219794
num_examples: 7680
download_size: 618093222
dataset_size: 1026219794
- config_name: indices_23040_30720
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1021861659
num_examples: 7680
download_size: 615007459
dataset_size: 1021861659
- config_name: indices_30720_38400
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1020002616
num_examples: 7680
download_size: 613899406
dataset_size: 1020002616
- config_name: indices_38400_46080
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1024811701
num_examples: 7680
download_size: 618375505
dataset_size: 1024811701
- config_name: indices_46080_53760
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1024116214
num_examples: 7680
download_size: 616654955
dataset_size: 1024116214
- config_name: indices_53760_61440
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1014971869
num_examples: 7680
download_size: 611103486
dataset_size: 1014971869
- config_name: indices_61440_69120
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1026457214
num_examples: 7680
download_size: 618610320
dataset_size: 1026457214
- config_name: indices_69120_76800
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1024570847
num_examples: 7680
download_size: 616563479
dataset_size: 1024570847
- config_name: indices_76800_84480
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1028915104
num_examples: 7680
download_size: 620150325
dataset_size: 1028915104
- config_name: indices_7680_15360
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1019043475
num_examples: 7680
download_size: 614058001
dataset_size: 1019043475
- config_name: indices_84480_92160
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1013502838
num_examples: 7680
download_size: 611049399
dataset_size: 1013502838
- config_name: indices_92160_99840
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1015782555
num_examples: 7680
download_size: 612128279
dataset_size: 1015782555
- config_name: indices_99840_107520
features:
- name: message_id
dtype: string
- name: problem
dtype: string
- name: answer
dtype: int64
- name: processed_answer
dtype: string
- name: responses
dtype: string
- name: reward
dtype: bool
- name: prompt_len
dtype: int64
- name: response_len
dtype: int64
- name: classifier_scores
sequence: float64
splits:
- name: train
num_bytes: 1016733372
num_examples: 7680
download_size: 611771187
dataset_size: 1016733372
configs:
- config_name: indices_0_7680
data_files:
- split: train
path: indices_0_7680/train-*
- config_name: indices_107520_115200
data_files:
- split: train
path: indices_107520_115200/train-*
- config_name: indices_115200_122880
data_files:
- split: train
path: indices_115200_122880/train-*
- config_name: indices_15360_23040
data_files:
- split: train
path: indices_15360_23040/train-*
- config_name: indices_23040_30720
data_files:
- split: train
path: indices_23040_30720/train-*
- config_name: indices_30720_38400
data_files:
- split: train
path: indices_30720_38400/train-*
- config_name: indices_38400_46080
data_files:
- split: train
path: indices_38400_46080/train-*
- config_name: indices_46080_53760
data_files:
- split: train
path: indices_46080_53760/train-*
- config_name: indices_53760_61440
data_files:
- split: train
path: indices_53760_61440/train-*
- config_name: indices_61440_69120
data_files:
- split: train
path: indices_61440_69120/train-*
- config_name: indices_69120_76800
data_files:
- split: train
path: indices_69120_76800/train-*
- config_name: indices_76800_84480
data_files:
- split: train
path: indices_76800_84480/train-*
- config_name: indices_7680_15360
data_files:
- split: train
path: indices_7680_15360/train-*
- config_name: indices_84480_92160
data_files:
- split: train
path: indices_84480_92160/train-*
- config_name: indices_92160_99840
data_files:
- split: train
path: indices_92160_99840/train-*
- config_name: indices_99840_107520
data_files:
- split: train
path: indices_99840_107520/train-*
---
|
gunnybd01/Shortermpotential_smr | gunnybd01 | 2025-05-05T15:48:28Z | 31 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T03:44:36Z | null | ---
dataset_info:
features:
- name: Keys
dtype: string
- name: Indicators
dtype: string
- name: Considerations
dtype: string
- name: ShortTermPCT
dtype: float64
splits:
- name: train
num_bytes: 8175947
num_examples: 3900
download_size: 3079970
dataset_size: 8175947
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PHBD/nchs-birth-rates-for-females-by-age-group-united | PHBD | 2025-05-05T15:06:05Z | 0 | 0 | [
"language:en",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"hhs",
"cdc",
"nchs"
] | [] | 2025-05-05T15:06:04Z | null | ---
language:
- en
pretty_name: 'NCHS - Birth Rates for Females by Age Group: United States'
tags:
- hhs
- cdc
- nchs
---
# NCHS - Birth Rates for Females by Age Group: United States
## Description
This dataset includes birth rates for females by age group in the United States since 1940.
The number of states in the reporting area has differed over time. In 1915 (when the birth registration area was established), 10 states and the District of Columbia reported births; by 1933, 48 states and the District of Columbia were reporting births, with the last two states, Alaska and Hawaii, added to the registration area in 1959 and 1960, when these regions gained statehood. Reporting area information is detailed in references 1 and 2 below. Trend lines for 1909–1958 are based on live births adjusted for under-registration; beginning with 1959, trend lines are based on registered live births.
## Dataset Details
- **Publisher**: Centers for Disease Control and Prevention
- **Temporal Coverage**: 1940/2018
- **Geographic Coverage**: 50 states and District of Columbia
- **Last Modified**: 2025-04-21
- **Contact**: National Center for Health Statistics (births@cdc.gov)
## Source
Original data can be found at: https://www.cdc.gov/nchs/data_access/vitalstatsonline.htm
## Usage
You can load this dataset using:
```python
from datasets import load_dataset
dataset = load_dataset("PHBD/nchs-birth-rates-for-females-by-age-group-united")
```
## License
This dataset is licensed under https://www.usa.gov/government-works
|
PHBD/efforts-to-sustain-education-and-subsidized-meal-p | PHBD | 2025-05-05T15:05:51Z | 0 | 0 | [
"language:en",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"hhs",
"cdc",
"covid-19"
] | [] | 2025-05-05T15:05:50Z | null | ---
language:
- en
pretty_name: Efforts to sustain education and subsidized meal programs during COVID-19-related
school closures, United States, March-June 2020
tags:
- hhs
- cdc
- covid-19
---
# Efforts to sustain education and subsidized meal programs during COVID-19-related school closures, United States, March-June 2020
## Description
Data on distance learning and supplemental feeding programs were collected from a stratified sample of 600 school districts. School districts were divided into quartiles based on the percentage of students eligible for free/reduced-price lunch, an indicator of family economic status, as reported by the National Center for Education Statistics (https://nces.ed.gov/ccd/). A simple random sample was taken in each stratum, and sample size per stratum was calculated using 95% confidence interval of 50% ± 10%. Data on the availability and method of delivery of both distance learning and supplemental feeding programs were collected from publicly available announcements on school district websites and their official social media pages (Facebook, Twitter). Google searches were performed for news resources when information was not available from online district sources.
## Dataset Details
- **Publisher**: Centers for Disease Control and Prevention
- **Last Modified**: 2022-01-12
- **Contact**: Nicole Zviedrite (jmu6@cdc.gov)
## Source
Original data can be found at: https://data.cdc.gov/d/jkmz-c8jz
## Usage
You can load this dataset using:
```python
from datasets import load_dataset
dataset = load_dataset("PHBD/efforts-to-sustain-education-and-subsidized-meal-p")
```
## License
This dataset is licensed under https://www.usa.gov/government-works
|
TheRealPilot638/Falcon3-1B-dvts-4_no_chunking_H200 | TheRealPilot638 | 2025-05-05T14:38:20Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T03:30:21Z | null | ---
dataset_info:
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-256--m-4--iters-40--look-1--seed-1--agg_strategy--last--evals
features:
- name: n
dtype: 'null'
- name: acc_naive
dtype: 'null'
- name: acc_weighted
dtype: 'null'
- name: acc_maj
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1125
dataset_size: 0
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-256--m-4--iters-40--look-1--seed-2--agg_strategy--last--evals
features:
- name: n
dtype: 'null'
- name: acc_naive
dtype: 'null'
- name: acc_weighted
dtype: 'null'
- name: acc_maj
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1125
dataset_size: 0
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-256--m-4--iters-40--look-1--seed-3--agg_strategy--last--evals
features:
- name: n
dtype: 'null'
- name: acc_naive
dtype: 'null'
- name: acc_weighted
dtype: 'null'
- name: acc_maj
dtype: 'null'
splits:
- name: train
num_bytes: 0
num_examples: 0
download_size: 1125
dataset_size: 0
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-0--agg_strategy--last
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: pred
dtype: string
- name: completion_tokens
dtype: int64
- name: scores
sequence:
sequence: float64
- name: agg_scores
sequence: float64
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
splits:
- name: train
num_bytes: 4197703
num_examples: 500
download_size: 1042134
dataset_size: 4197703
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-0--agg_strategy--last--evals
features:
- name: n
dtype: int64
- name: acc_naive
dtype: float64
- name: acc_weighted
dtype: float64
- name: acc_maj
dtype: float64
splits:
- name: train
num_bytes: 32
num_examples: 1
download_size: 1961
dataset_size: 32
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-1--agg_strategy--last
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: pred
dtype: string
- name: completion_tokens
dtype: int64
- name: scores
sequence:
sequence: float64
- name: agg_scores
sequence: float64
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
splits:
- name: train
num_bytes: 4159527
num_examples: 500
download_size: 1039824
dataset_size: 4159527
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-2--agg_strategy--last
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: pred
dtype: string
- name: completion_tokens
dtype: int64
- name: scores
sequence:
sequence: float64
- name: agg_scores
sequence: float64
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
splits:
- name: train
num_bytes: 4146444
num_examples: 500
download_size: 1046606
dataset_size: 4146444
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-3--agg_strategy--last
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: subject
dtype: string
- name: level
dtype: int64
- name: unique_id
dtype: string
- name: completions
sequence: string
- name: pred
dtype: string
- name: completion_tokens
dtype: int64
- name: scores
sequence:
sequence: float64
- name: agg_scores
sequence: float64
- name: pred_weighted@1
dtype: string
- name: pred_maj@1
dtype: string
- name: pred_naive@1
dtype: string
- name: pred_weighted@2
dtype: string
- name: pred_maj@2
dtype: string
- name: pred_naive@2
dtype: string
- name: pred_weighted@4
dtype: string
- name: pred_maj@4
dtype: string
- name: pred_naive@4
dtype: string
splits:
- name: train
num_bytes: 4168750
num_examples: 500
download_size: 1047608
dataset_size: 4168750
configs:
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-256--m-4--iters-40--look-1--seed-1--agg_strategy--last--evals
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-256--m-4--iters-40--look-1--seed-1--agg_strategy--last--evals/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-256--m-4--iters-40--look-1--seed-2--agg_strategy--last--evals
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-256--m-4--iters-40--look-1--seed-2--agg_strategy--last--evals/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-256--m-4--iters-40--look-1--seed-3--agg_strategy--last--evals
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-256--m-4--iters-40--look-1--seed-3--agg_strategy--last--evals/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-0--agg_strategy--last
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-0--agg_strategy--last/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-0--agg_strategy--last--evals
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-0--agg_strategy--last--evals/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-1--agg_strategy--last
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-1--agg_strategy--last/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-2--agg_strategy--last
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-2--agg_strategy--last/train-*
- config_name: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-3--agg_strategy--last
data_files:
- split: train
path: HuggingFaceH4_MATH-500--T-0.8--top_p-1.0--n-4--m-4--iters-40--look-1--seed-3--agg_strategy--last/train-*
---
|
Caesarisnotasalad/data_2 | Caesarisnotasalad | 2025-05-05T11:04:43Z | 104 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T11:02:10Z | null | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: model
dtype: string
- name: instruction
dtype: string
- name: task_category
dtype: string
- name: other_task_category
sequence: string
- name: difficulty
dtype: string
- name: intent
dtype: string
- name: knowledge
dtype: string
- name: input_quality
dtype: string
- name: quality_explanation
dtype: string
- name: llama_guard_2
dtype: string
- name: instruct_reward
dtype: float64
- name: min_neighbor_distance
dtype: float64
- name: repeat_count
dtype: int64
- name: min_similar_uuid
dtype: string
- name: instruction_length
dtype: int64
splits:
- name: train
num_bytes: 1006281037.674
num_examples: 896962
download_size: 479040621
dataset_size: 1006281037.674
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Kallia/stock-news-summaries-processed-finetuning | Kallia | 2025-05-05T08:38:42Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T08:38:33Z | null | ---
dataset_info:
features:
- name: article
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 5894458.695375927
num_examples: 2266
- name: validation
num_bytes: 736157.021531945
num_examples: 283
- name: test
num_bytes: 738758.2830921285
num_examples: 284
download_size: 4613500
dataset_size: 7369374.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ParkSY/data_nerf_nerfdepth_normalmap | ParkSY | 2025-05-05T06:59:20Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T06:59:16Z | null | ---
dataset_info:
features:
- name: input_image
dtype: string
- name: edit_prompt
dtype: string
- name: edited_image
dtype: string
- name: label
dtype: int64
- name: depthmap
dtype: string
- name: normalmap
dtype: string
splits:
- name: train
num_bytes: 1024967
num_examples: 3549
download_size: 104131
dataset_size: 1024967
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zhengbang0707/REFUEL_it2_mask1_v2_llama3_test | zhengbang0707 | 2025-05-05T05:45:17Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T05:45:16Z | null | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: reject
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen_token
sequence: int64
- name: reject_token
sequence: int64
- name: chosen_mask
sequence: int64
- name: chosen_mask_user
sequence: int64
- name: reject_mask
sequence: int64
- name: reject_mask_user
sequence: int64
- name: chosen_reward_list
sequence: float64
- name: reject_reward_list
sequence: float64
- name: chosen_reward_list_new
sequence: float64
- name: reject_reward_list_new
sequence: float64
- name: chosen_reward
dtype: float64
- name: reject_reward
dtype: float64
splits:
- name: train
num_bytes: 53174161
num_examples: 500
download_size: 2589330
dataset_size: 53174161
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
VGraf/paraphrase_train_dev_8maxturns_truncated2048 | VGraf | 2025-05-05T05:09:58Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-05T05:09:50Z | null | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: id
dtype: string
- name: source
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 129881181
num_examples: 9281
download_size: 61734145
dataset_size: 129881181
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/openthoughts2_math_100k | mlfoundations-dev | 2025-05-05T04:02:03Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T04:01:30Z | null | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: _domain
dtype: string
- name: system
dtype: string
- name: problem
dtype: string
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: question
dtype: string
- name: source
dtype: string
- name: id
dtype: int64
- name: extracted_instruction
dtype: string
splits:
- name: train
num_bytes: 1470156799.5440912
num_examples: 100000
download_size: 647386299
dataset_size: 1470156799.5440912
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.8_num-company_3_dataset_0_for_gen_5 | HungVu2003 | 2025-05-05T01:55:24Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-05T01:55:23Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 6719782
num_examples: 12498
download_size: 2631930
dataset_size: 6719782
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
singhjagpreet/Gurbani-BaniDB | singhjagpreet | 2025-05-04T23:53:30Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T23:53:29Z | null | ---
dataset_info:
features:
- name: shabad_id
dtype: int64
- name: source_uni
dtype: string
- name: source_eng
dtype: string
- name: writer
dtype: string
- name: ang
dtype: int64
- name: verse_id
dtype: int64
- name: verse
dtype: string
- name: english_meaning
dtype: string
- name: punjabi_meaning
dtype: string
splits:
- name: train
num_bytes: 38044
num_examples: 72
download_size: 19182
dataset_size: 38044
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
averntech/AvernK1Bolt-Itimidation | averntech | 2025-05-04T22:41:00Z | 0 | 0 | [
"task_categories:text-classification",
"task_categories:text-generation",
"license:apache-2.0",
"region:us"
] | [
"text-classification",
"text-generation"
] | 2025-05-04T16:54:16Z | null | ---
license: apache-2.0
task_categories:
- text-classification
- text-generation
pretty_name: 'Avern Itimidation (K1-Bolt) '
---
# Avern Itimidation (K1-Bolt)
The founding dataset used in all Avern K1-series models. It gives the base model an Avern personality before the other datasets are added.
It will differ across K1-series models (e.g., K1-Ultra). |
GitBag/block-q-sharp_ds-distilled-qwen-1.5b-ppo-kl-1e-4-ec-0.001-16384_actor_hmmt-feb-24_eval | GitBag | 2025-05-04T22:01:33Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T22:01:31Z | null | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: response_0
dtype: string
- name: response_1
dtype: string
- name: response_2
dtype: string
- name: response_3
dtype: string
- name: response_4
dtype: string
- name: response_5
dtype: string
- name: response_6
dtype: string
- name: response_7
dtype: string
- name: response_8
dtype: string
- name: response_9
dtype: string
- name: response_10
dtype: string
- name: response_11
dtype: string
- name: response_12
dtype: string
- name: response_13
dtype: string
- name: response_14
dtype: string
- name: response_15
dtype: string
- name: response_16
dtype: string
- name: response_17
dtype: string
- name: response_18
dtype: string
- name: response_19
dtype: string
- name: response_20
dtype: string
- name: response_21
dtype: string
- name: response_22
dtype: string
- name: response_23
dtype: string
- name: response_24
dtype: string
- name: response_25
dtype: string
- name: response_26
dtype: string
- name: response_27
dtype: string
- name: response_28
dtype: string
- name: response_29
dtype: string
- name: response_30
dtype: string
- name: response_31
dtype: string
- name: eval_0
dtype: float64
- name: eval_1
dtype: float64
- name: eval_2
dtype: float64
- name: eval_3
dtype: float64
- name: eval_4
dtype: float64
- name: eval_5
dtype: float64
- name: eval_6
dtype: float64
- name: eval_7
dtype: float64
- name: eval_8
dtype: float64
- name: eval_9
dtype: float64
- name: eval_10
dtype: float64
- name: eval_11
dtype: float64
- name: eval_12
dtype: float64
- name: eval_13
dtype: float64
- name: eval_14
dtype: float64
- name: eval_15
dtype: float64
- name: eval_16
dtype: float64
- name: eval_17
dtype: float64
- name: eval_18
dtype: float64
- name: eval_19
dtype: float64
- name: eval_20
dtype: float64
- name: eval_21
dtype: float64
- name: eval_22
dtype: float64
- name: eval_23
dtype: float64
- name: eval_24
dtype: float64
- name: eval_25
dtype: float64
- name: eval_26
dtype: float64
- name: eval_27
dtype: float64
- name: eval_28
dtype: float64
- name: eval_29
dtype: float64
- name: eval_30
dtype: float64
- name: eval_31
dtype: float64
splits:
- name: train
num_bytes: 39694870
num_examples: 30
download_size: 14183542
dataset_size: 39694870
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_1.0_alpha_0.4_num-company_2_dataset_0_for_gen_1_v2 | HungVu2003 | 2025-05-04T21:22:54Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T21:22:53Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 2915606
num_examples: 15000
download_size: 1578395
dataset_size: 2915606
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Roshal/AI4EO_DatasetsDiversity_Evals | Roshal | 2025-05-04T19:19:38Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-04T18:47:03Z | null | ---
license: apache-2.0
---
|
user074/concat_cleaned_gsm8k_math_8 | user074 | 2025-05-04T18:21:42Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T17:51:25Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 92211153
num_examples: 14310
download_size: 20119888
dataset_size: 92211153
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jumava/adv-ele | jumava | 2025-05-04T18:02:14Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T18:02:12Z | null | ---
dataset_info:
features:
- name: ADV
dtype: string
- name: ELE
dtype: string
splits:
- name: train
num_bytes: 430918.56140350876
num_examples: 1732
- name: test
num_bytes: 107978.43859649122
num_examples: 434
download_size: 296569
dataset_size: 538897.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
mteb/VideoRetrieval | mteb | 2025-05-04T16:11:43Z | 14 | 0 | [
"task_categories:text-retrieval",
"multilinguality:monolingual",
"language:cmn",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2203.03367",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2024-11-28T10:51:02Z | null | ---
language:
- cmn
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids: []
dataset_info:
- config_name: corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: dev
num_bytes: 8580491
num_examples: 100930
download_size: 7277662
dataset_size: 8580491
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: int64
splits:
- name: dev
num_bytes: 27968
num_examples: 1000
download_size: 17445
dataset_size: 27968
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: dev
num_bytes: 34156
num_examples: 1000
download_size: 29116
dataset_size: 34156
configs:
- config_name: corpus
data_files:
- split: dev
path: corpus/dev-*
- config_name: default
data_files:
- split: dev
path: data/dev-*
- config_name: queries
data_files:
- split: dev
path: queries/dev-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">VideoRetrieval</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
VideoRetrieval
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | None |
| Reference | https://arxiv.org/abs/2203.03367 |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["VideoRetrieval"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@misc{long2022multicprmultidomainchinese,
archiveprefix = {arXiv},
author = {Dingkun Long and Qiong Gao and Kuan Zou and Guangwei Xu and Pengjun Xie and Ruijie Guo and Jian Xu and Guanjun Jiang and Luxi Xing and Ping Yang},
eprint = {2203.03367},
primaryclass = {cs.IR},
title = {Multi-CPR: A Multi Domain Chinese Dataset for Passage Retrieval},
url = {https://arxiv.org/abs/2203.03367},
year = {2022},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
  year = {2022},
  url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following code contains the descriptive statistics from the task. These can also be obtained using:
```python
import mteb
task = mteb.get_task("VideoRetrieval")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"dev": {
"num_samples": 101930,
"number_of_characters": 3141126,
"num_documents": 100930,
"min_document_length": 1,
"average_document_length": 31.048855642524522,
"max_document_length": 5869,
"unique_documents": 100930,
"num_queries": 1000,
"min_query_length": 2,
"average_query_length": 7.365,
"max_query_length": 19,
"unique_queries": 1000,
"none_queries": 0,
"num_relevant_docs": 1000,
"min_relevant_docs_per_query": 1,
"average_relevant_docs_per_query": 1.0,
"max_relevant_docs_per_query": 1,
"unique_relevant_docs": 1000,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/GerDaLIRSmall | mteb | 2025-05-04T16:09:30Z | 48 | 0 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:derived",
"multilinguality:monolingual",
"language:deu",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-retrieval"
] | 2024-03-30T07:42:00Z | null | ---
annotations_creators:
- derived
language:
- deu
license: mit
multilinguality: monolingual
task_categories:
- text-retrieval
task_ids:
- document-retrieval
config_names:
- corpus
tags:
- mteb
- text
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: test
num_examples: 14320
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_examples: 9969
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_examples: 12234
configs:
- config_name: default
data_files:
- split: test
path: qrels/test.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">GerDaLIRSmall</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
The dataset consists of documents, passages and relevance labels in German. In contrast to the original dataset, only documents that have corresponding queries in the query set are chosen to create a smaller corpus for evaluation purposes.
| | |
|---------------|---------------------------------------------|
| Task category | t2t |
| Domains | Legal, Written |
| Reference | https://github.com/lavis-nlp/GerDaLIR |
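The corpus-reduction step described above (keeping only documents that appear as relevant in the query set) can be sketched as follows. This is a toy illustration with made-up IDs, not the actual preprocessing script:

```python
# Shrink a corpus to only the documents referenced by the qrels.
# IDs and records here are invented for illustration.
corpus = [
    {"_id": "d1", "text": "Urteil A ..."},
    {"_id": "d2", "text": "Urteil B ..."},
    {"_id": "d3", "text": "Urteil C ..."},
]
qrels = [
    {"query-id": "q1", "corpus-id": "d2", "score": 1.0},
    {"query-id": "q2", "corpus-id": "d3", "score": 1.0},
]

kept_ids = {r["corpus-id"] for r in qrels}          # documents with a query
small_corpus = [d for d in corpus if d["_id"] in kept_ids]
print([d["_id"] for d in small_corpus])  # ['d2', 'd3']
```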
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["GerDaLIRSmall"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@inproceedings{wrzalik-krechel-2021-gerdalir,
abstract = {We present GerDaLIR, a German Dataset for Legal Information Retrieval based on case documents from the open legal information platform Open Legal Data. The dataset consists of 123K queries, each labelled with at least one relevant document in a collection of 131K case documents. We conduct several baseline experiments including BM25 and a state-of-the-art neural re-ranker. With our dataset, we aim to provide a standardized benchmark for German LIR and promote open research in this area. Beyond that, our dataset comprises sufficient training data to be used as a downstream task for German or multilingual language models.},
address = {Punta Cana, Dominican Republic},
author = {Wrzalik, Marco and
Krechel, Dirk},
booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2021},
month = nov,
pages = {123--128},
publisher = {Association for Computational Linguistics},
title = {{G}er{D}a{LIR}: A {G}erman Dataset for Legal Information Retrieval},
url = {https://aclanthology.org/2021.nllp-1.13},
year = {2021},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following descriptive statistics were computed for the task. They can also be obtained using:
```python
import mteb
task = mteb.get_task("GerDaLIRSmall")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 22203,
"number_of_characters": 209081381,
"num_documents": 9969,
"min_document_length": 151,
"average_document_length": 19707.823653325308,
"max_document_length": 427235,
"unique_documents": 9969,
"num_queries": 12234,
"min_query_length": 150,
"average_query_length": 1031.0680889324833,
"max_query_length": 23560,
"unique_queries": 12234,
"none_queries": 0,
"num_relevant_docs": 14320,
"min_relevant_docs_per_query": 1,
"average_relevant_docs_per_query": 1.1705084191597188,
"max_relevant_docs_per_query": 9,
"unique_relevant_docs": 9969,
"num_instructions": null,
"min_instruction_length": null,
"average_instruction_length": null,
"max_instruction_length": null,
"unique_instructions": null,
"num_top_ranked": null,
"min_top_ranked_per_query": null,
"average_top_ranked_per_query": null,
"max_top_ranked_per_query": null
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
mteb/multilingual-scala-classification | mteb | 2025-05-04T16:08:12Z | 112 | 1 | [
"task_categories:text-classification",
"task_ids:acceptability-classification",
"annotations_creators:human-annotated",
"multilinguality:multilingual",
"language:dan",
"language:nno",
"language:nob",
"language:swe",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.13595",
"arxiv:2210.07316",
"region:us",
"mteb",
"text"
] | [
"text-classification"
] | 2024-04-29T20:11:40Z | null | ---
annotations_creators:
- human-annotated
language:
- dan
- nno
- nob
- swe
license: cc-by-sa-4.0
multilinguality: multilingual
task_categories:
- text-classification
task_ids:
- acceptability-classification
dataset_info:
- config_name: Danish
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 139194
num_examples: 1024
- name: test
num_bytes: 281517
num_examples: 2048
- name: full_train
num_bytes: 733506
num_examples: 5342
- name: val
num_bytes: 32942
num_examples: 256
download_size: 700593
dataset_size: 1187159
- config_name: Norwegian_b
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 126028
num_examples: 1024
- name: test
num_bytes: 258103
num_examples: 2048
- name: full_train
num_bytes: 3221649
num_examples: 25946
- name: val
num_bytes: 31302
num_examples: 256
download_size: 2161548
dataset_size: 3637082
- config_name: Norwegian_n
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 136251
num_examples: 1024
- name: test
num_bytes: 268761
num_examples: 2048
- name: full_train
num_bytes: 3062138
num_examples: 22800
- name: val
num_bytes: 33910
num_examples: 256
download_size: 2088966
dataset_size: 3501060
- config_name: Swedish
features:
- name: text
dtype: string
- name: corruption_type
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 135999
num_examples: 1024
- name: test
num_bytes: 262897
num_examples: 2048
- name: full_train
num_bytes: 1014513
num_examples: 7446
- name: val
num_bytes: 36681
num_examples: 256
download_size: 807624
dataset_size: 1450090
configs:
- config_name: Danish
data_files:
- split: train
path: Danish/train-*
- split: test
path: Danish/test-*
- split: full_train
path: Danish/full_train-*
- split: val
path: Danish/val-*
- config_name: Norwegian_b
data_files:
- split: train
path: Norwegian_b/train-*
- split: test
path: Norwegian_b/test-*
- split: full_train
path: Norwegian_b/full_train-*
- split: val
path: Norwegian_b/val-*
- config_name: Norwegian_n
data_files:
- split: train
path: Norwegian_n/train-*
- split: test
path: Norwegian_n/test-*
- split: full_train
path: Norwegian_n/full_train-*
- split: val
path: Norwegian_n/val-*
- config_name: Swedish
data_files:
- split: train
path: Swedish/train-*
- split: test
path: Swedish/test-*
- split: full_train
path: Swedish/full_train-*
- split: val
path: Swedish/val-*
tags:
- mteb
- text
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->
<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
<h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">ScalaClassification</h1>
<div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
<div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>
ScaLa is a linguistic acceptability dataset for the mainland Scandinavian languages, automatically constructed from dependency annotations in Universal Dependencies treebanks.
Published as part of 'ScandEval: A Benchmark for Scandinavian Natural Language Processing'.
| | |
|---------------|---------------------------------------------|
| Task category | t2c |
| Domains | Fiction, News, Non-fiction, Blog, Spoken, Web, Written |
| Reference | https://aclanthology.org/2023.nodalida-1.20/ |
## How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code:
```python
import mteb
task = mteb.get_tasks(["ScalaClassification"])
evaluator = mteb.MTEB(task)
model = mteb.get_model(YOUR_MODEL)
evaluator.run(model)
```
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->
To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).
## Citation
If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).
```bibtex
@inproceedings{nielsen-2023-scandeval,
address = {T{\'o}rshavn, Faroe Islands},
author = {Nielsen, Dan},
booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)},
editor = {Alum{\"a}e, Tanel and
Fishel, Mark},
month = may,
pages = {185--201},
publisher = {University of Tartu Library},
title = {{S}cand{E}val: A Benchmark for {S}candinavian Natural Language Processing},
url = {https://aclanthology.org/2023.nodalida-1.20},
year = {2023},
}
@article{enevoldsen2025mmtebmassivemultilingualtext,
title={MMTEB: Massive Multilingual Text Embedding Benchmark},
author={Kenneth Enevoldsen and Isaac Chung and Imene Kerboua and Márton Kardos and Ashwin Mathur and David Stap and Jay Gala and Wissam Siblini and Dominik Krzemiński and Genta Indra Winata and Saba Sturua and Saiteja Utpala and Mathieu Ciancone and Marion Schaeffer and Gabriel Sequeira and Diganta Misra and Shreeya Dhakal and Jonathan Rystrøm and Roman Solomatin and Ömer Çağatan and Akash Kundu and Martin Bernstorff and Shitao Xiao and Akshita Sukhlecha and Bhavish Pahwa and Rafał Poświata and Kranthi Kiran GV and Shawon Ashraf and Daniel Auras and Björn Plüster and Jan Philipp Harries and Loïc Magne and Isabelle Mohr and Mariya Hendriksen and Dawei Zhu and Hippolyte Gisserot-Boukhlef and Tom Aarsen and Jan Kostkan and Konrad Wojtasik and Taemin Lee and Marek Šuppa and Crystina Zhang and Roberta Rocca and Mohammed Hamdy and Andrianos Michail and John Yang and Manuel Faysse and Aleksei Vatolin and Nandan Thakur and Manan Dey and Dipam Vasani and Pranjal Chitale and Simone Tedeschi and Nguyen Tai and Artem Snegirev and Michael Günther and Mengzhou Xia and Weijia Shi and Xing Han Lù and Jordan Clive and Gayatri Krishnakumar and Anna Maksimova and Silvan Wehrli and Maria Tikhonova and Henil Panchal and Aleksandr Abramov and Malte Ostendorff and Zheng Liu and Simon Clematide and Lester James Miranda and Alena Fenogenova and Guangyu Song and Ruqiya Bin Safi and Wen-Ding Li and Alessia Borghini and Federico Cassano and Hongjin Su and Jimmy Lin and Howard Yen and Lasse Hansen and Sara Hooker and Chenghao Xiao and Vaibhav Adlakha and Orion Weller and Siva Reddy and Niklas Muennighoff},
publisher = {arXiv},
journal={arXiv preprint arXiv:2502.13595},
year={2025},
url={https://arxiv.org/abs/2502.13595},
doi = {10.48550/arXiv.2502.13595},
}
@article{muennighoff2022mteb,
author = {Muennighoff, Niklas and Tazi, Nouamane and Magne, Lo{\"\i}c and Reimers, Nils},
title = {MTEB: Massive Text Embedding Benchmark},
publisher = {arXiv},
journal={arXiv preprint arXiv:2210.07316},
year = {2022},
url = {https://arxiv.org/abs/2210.07316},
doi = {10.48550/ARXIV.2210.07316},
}
```
# Dataset Statistics
<details>
<summary> Dataset Statistics</summary>
The following descriptive statistics were computed for the task. They can also be obtained using:
```python
import mteb
task = mteb.get_task("ScalaClassification")
desc_stats = task.metadata.descriptive_stats
```
```json
{
"test": {
"num_samples": 8192,
"number_of_characters": 839257,
"number_texts_intersect_with_train": 0,
"min_text_length": 13,
"average_text_length": 102.4483642578125,
"max_text_length": 613,
"unique_text": 8192,
"unique_labels": 2,
"labels": {
"0": {
"count": 4096
},
"1": {
"count": 4096
}
},
"hf_subset_descriptive_stats": {
"Danish": {
"num_samples": 2048,
"number_of_characters": 224132,
"number_texts_intersect_with_train": 0,
"min_text_length": 13,
"average_text_length": 109.439453125,
"max_text_length": 443,
"unique_text": 2048,
"unique_labels": 2,
"labels": {
"0": {
"count": 1024
},
"1": {
"count": 1024
}
}
},
"Norwegian_b": {
"num_samples": 2048,
"number_of_characters": 201596,
"number_texts_intersect_with_train": 0,
"min_text_length": 18,
"average_text_length": 98.435546875,
"max_text_length": 397,
"unique_text": 2048,
"unique_labels": 2,
"labels": {
"1": {
"count": 1024
},
"0": {
"count": 1024
}
}
},
"Norwegian_n": {
"num_samples": 2048,
"number_of_characters": 212059,
"number_texts_intersect_with_train": 0,
"min_text_length": 18,
"average_text_length": 103.54443359375,
"max_text_length": 349,
"unique_text": 2048,
"unique_labels": 2,
"labels": {
"1": {
"count": 1024
},
"0": {
"count": 1024
}
}
},
"Swedish": {
"num_samples": 2048,
"number_of_characters": 201470,
"number_texts_intersect_with_train": 0,
"min_text_length": 17,
"average_text_length": 98.3740234375,
"max_text_length": 613,
"unique_text": 2048,
"unique_labels": 2,
"labels": {
"1": {
"count": 1024
},
"0": {
"count": 1024
}
}
}
}
},
"train": {
"num_samples": 4096,
"number_of_characters": 421198,
"number_texts_intersect_with_train": null,
"min_text_length": 14,
"average_text_length": 102.83154296875,
"max_text_length": 402,
"unique_text": 4096,
"unique_labels": 2,
"labels": {
"1": {
"count": 2048
},
"0": {
"count": 2048
}
},
"hf_subset_descriptive_stats": {
"Danish": {
"num_samples": 1024,
"number_of_characters": 110271,
"number_texts_intersect_with_train": null,
"min_text_length": 14,
"average_text_length": 107.6865234375,
"max_text_length": 392,
"unique_text": 1024,
"unique_labels": 2,
"labels": {
"1": {
"count": 512
},
"0": {
"count": 512
}
}
},
"Norwegian_b": {
"num_samples": 1024,
"number_of_characters": 97878,
"number_texts_intersect_with_train": null,
"min_text_length": 18,
"average_text_length": 95.583984375,
"max_text_length": 350,
"unique_text": 1024,
"unique_labels": 2,
"labels": {
"1": {
"count": 512
},
"0": {
"count": 512
}
}
},
"Norwegian_n": {
"num_samples": 1024,
"number_of_characters": 107913,
"number_texts_intersect_with_train": null,
"min_text_length": 20,
"average_text_length": 105.3837890625,
"max_text_length": 402,
"unique_text": 1024,
"unique_labels": 2,
"labels": {
"1": {
"count": 512
},
"0": {
"count": 512
}
}
},
"Swedish": {
"num_samples": 1024,
"number_of_characters": 105136,
"number_texts_intersect_with_train": null,
"min_text_length": 19,
"average_text_length": 102.671875,
"max_text_length": 326,
"unique_text": 1024,
"unique_labels": 2,
"labels": {
"1": {
"count": 512
},
"0": {
"count": 512
}
}
}
}
}
}
```
</details>
---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)* |
nhagar/culturax_urls | nhagar | 2025-05-04T16:02:06Z | 786 | 0 | [
"task_categories:text-generation",
"size_categories:1B<n<10B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | 2025-04-20T18:58:48Z | null | ---
task_categories:
- text-generation
size_categories:
- 1B<n<10B
---
# Dataset Card for culturax_urls
This dataset provides the URLs and top-level domains associated with training records in [uonlp/CulturaX](https://huggingface.co/datasets/uonlp/CulturaX). It is part of a [collection of datasets](https://huggingface.co/collections/nhagar/llm-urls-neurips-681698adac0862be6c65c72b) curated to make exploring LLM training datasets more straightforward and accessible.
## Dataset Details
### Dataset Description
This dataset was created by downloading the source data, extracting URLs and top-level domains, and retaining only those record identifiers. In doing so, it allows researchers and practitioners to explore the contents of these training datasets without having to manage terabytes of raw text. You can explore the pipeline used to construct this dataset on [GitHub](https://github.com/NHagar/cc-genealogy).
- **Curated by:** [Nick Hagar](https://huggingface.co/nhagar) and [Jack Bandy](https://huggingface.co/jackbandy)
- **License:** Same as source dataset
### Dataset Sources
- **Repository:** [uonlp/CulturaX](https://huggingface.co/datasets/uonlp/CulturaX)
## Uses
This dataset is intended to allow researchers and practitioners to analyze the contents of large LLM training datasets without having to wade through terabytes of unwieldy text data.
### Direct Use
The main use case for these data is to explore the contents of LLM training datasets at scale. This might involve:
- Identifying the most-used websites
- Categorizing URLs to understand domain- or topic-level dataset composition
- Comparing URLs across datasets
- Digging into inclusion/exclusion patterns for a particular website
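For example, a quick domain-frequency count can be sketched with the Python standard library alone. Note this is an illustrative sketch, not part of the dataset's pipeline: the URLs below are made up, and the dataset's own `domain` column (built with `tldextract`) is the more reliable field to aggregate on.

```python
from collections import Counter
from urllib.parse import urlparse

def top_domains(urls, n=3):
    # Count the most frequent hosts in a list of URLs. urlparse is a
    # rough stdlib stand-in; the dataset's `domain` column was produced
    # with tldextract, which handles multi-part suffixes (e.g. .co.uk).
    counts = Counter(urlparse(u).netloc for u in urls)
    return counts.most_common(n)

urls = [
    "https://en.wikipedia.org/wiki/Language_model",
    "https://en.wikipedia.org/wiki/Transformer",
    "https://example.com/post/1",
]
print(top_domains(urls))
# [('en.wikipedia.org', 2), ('example.com', 1)]
```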
### Out-of-Scope Use
This dataset is not intended to replicate or replace the source data, nor is it intended to enable large-scale scraping of the URLs listed. For source text, refer to the original dataset.
## Dataset Structure
This dataset contains every record with a URL from the source dataset. It contains two columns:
- `url`: The raw URL associated with each record
- `domain`: The top-level domain for each URL, extracted with `tldextract`
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed] |
nhagar/infimm-webmath-40b_urls | nhagar | 2025-05-04T15:59:41Z | 2 | 0 | [
"task_categories:text-generation",
"language:en",
"language:zh",
"license:odc-by",
"size_categories:10M<n<100M",
"region:us"
] | [
"text-generation"
] | 2025-04-27T23:15:20Z | null | ---
license: odc-by
task_categories:
- text-generation
language:
- en
- zh
size_categories:
- 10M<n<100M
---
# Dataset Card for infimm-webmath-40b_urls
This dataset provides the URLs and top-level domains associated with training records in [Infi-MM/InfiMM-WebMath-40B](https://huggingface.co/datasets/Infi-MM/InfiMM-WebMath-40B). It is part of a [collection of datasets](https://huggingface.co/collections/nhagar/llm-urls-neurips-681698adac0862be6c65c72b) curated to make exploring LLM training datasets more straightforward and accessible.
## Dataset Details
### Dataset Description
This dataset was created by downloading the source data, extracting URLs and top-level domains, and retaining only those record identifiers. In doing so, it allows researchers and practitioners to explore the contents of these training datasets without having to manage terabytes of raw text. You can explore the pipeline used to construct this dataset on [GitHub](https://github.com/NHagar/cc-genealogy).
- **Curated by:** [Nick Hagar](https://huggingface.co/nhagar) and [Jack Bandy](https://huggingface.co/jackbandy)
- **License:** Same as source dataset
### Dataset Sources
- **Repository:** [Infi-MM/InfiMM-WebMath-40B](https://huggingface.co/datasets/Infi-MM/InfiMM-WebMath-40B)
## Uses
This dataset is intended to allow researchers and practitioners to analyze the contents of large LLM training datasets without having to wade through terabytes of unwieldy text data.
### Direct Use
The main use case for these data is to explore the contents of LLM training datasets at scale. This might involve:
- Identifying the most-used websites
- Categorizing URLs to understand domain- or topic-level dataset composition
- Comparing URLs across datasets
- Digging into inclusion/exclusion patterns for a particular website
### Out-of-Scope Use
This dataset is not intended to replicate or replace the source data, nor is it intended to enable large-scale scraping of the URLs listed. For source text, refer to the original dataset.
## Dataset Structure
This dataset contains every record with a URL from the source dataset. It contains two columns:
- `url`: The raw URL associated with each record
- `domain`: The top-level domain for each URL, extracted with `tldextract`
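As a rough illustration of how a `domain` value relates to a raw `url`, here is a stdlib-only approximation. It is a sketch, not the actual extraction step: the pipeline uses `tldextract`, which consults the Public Suffix List and correctly handles suffixes like `.co.uk` that this naive version gets wrong.

```python
from urllib.parse import urlparse

def naive_registered_domain(url: str) -> str:
    # Approximate the registered domain by keeping the last two host
    # labels. tldextract (used to build this dataset) is more accurate
    # because it knows multi-part public suffixes such as .co.uk.
    host = urlparse(url).netloc.split(":")[0]  # drop any port
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

print(naive_registered_domain("https://blog.example.com/post?id=7"))
# example.com
```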
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed] |
arjunsama/tts_orpheus_valorant_viper_v2.5 | arjunsama | 2025-05-04T09:29:59Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T09:29:49Z | null | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: train
num_bytes: 77510524.0
num_examples: 445
download_size: 67023534
dataset_size: 77510524.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Hkang/summarize_sft-test_lm-pythia1b-oai-summary-PPO-0KL-newrm_12K_seed-42_numex-250 | Hkang | 2025-05-04T06:48:34Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T06:48:33Z | null | ---
dataset_info:
features:
- name: id
dtype: string
- name: subreddit
dtype: string
- name: title
dtype: string
- name: post
dtype: string
- name: summary
dtype: string
- name: query_input_ids
sequence: int64
- name: query_attention_mask
sequence: int64
- name: query
dtype: string
- name: reference_response
dtype: string
- name: reference_response_input_ids
sequence: int64
- name: reference_response_attention_mask
sequence: int64
- name: reference_response_token_len
dtype: int64
- name: query_reference_response
dtype: string
- name: query_reference_response_input_ids
sequence: int64
- name: query_reference_response_attention_mask
sequence: int64
- name: query_reference_response_token_response_label
sequence: int64
- name: query_reference_response_token_len
dtype: int64
- name: model_response
dtype: string
splits:
- name: test
num_bytes: 6868072
num_examples: 250
download_size: 1158476
dataset_size: 6868072
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
spiralworks/openreview-iclr-decision-2025 | spiralworks | 2025-05-04T04:20:01Z | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-04T04:19:34Z | null | ---
dataset_info:
features:
- name: forum_id
dtype: string
- name: forum_title
dtype: string
- name: forum_authors
sequence: string
- name: forum_abstract
dtype: string
- name: forum_keywords
sequence: string
- name: forum_decision
dtype: string
- name: forum_pdf_url
dtype: string
- name: forum_url
dtype: string
- name: note_id
dtype: string
- name: note_type
dtype: string
- name: note_created
dtype: int64
- name: note_replyto
dtype: string
- name: note_readers
sequence: string
- name: note_signatures
sequence: string
- name: venue
dtype: string
- name: year
dtype: string
- name: note_text
dtype: string
splits:
- name: train
num_bytes: 1572432705
num_examples: 381588
download_size: 452272080
dataset_size: 1572432705
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.6_num-company_3_dataset_0_for_gen_13 | HungVu2003 | 2025-05-03T16:41:46Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T16:41:44Z | null | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 7304720
num_examples: 12500
download_size: 1973081
dataset_size: 7304720
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
roaminwind/NUMINAMATH_MOMENTUM_5000_complete | roaminwind | 2025-05-03T12:08:00Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T12:06:17Z | null | ---
dataset_info:
features:
- name: index
dtype: int64
- name: query
dtype: string
- name: solution
dtype: string
- name: reasoning_text
dtype: string
- name: final_answer
dtype: string
- name: num_steps
dtype: int64
- name: max_momentum
dtype: float64
splits:
- name: train
num_bytes: 17184247
num_examples: 5000
download_size: 8155710
dataset_size: 17184247
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chiyuanhsiao/text_L2-regular-14_trivia_qa-audio-score | chiyuanhsiao | 2025-05-03T06:43:29Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T06:43:25Z | null | ---
dataset_info:
features:
- name: question
dtype: string
- name: question_id
dtype: string
- name: question_source
dtype: string
- name: entity_pages
sequence:
- name: doc_source
dtype: string
- name: filename
dtype: string
- name: title
dtype: string
- name: wiki_context
dtype: string
- name: search_results
sequence:
- name: description
dtype: string
- name: filename
dtype: string
- name: rank
dtype: int32
- name: title
dtype: string
- name: url
dtype: string
- name: search_context
dtype: string
- name: answer
struct:
- name: aliases
sequence: string
- name: normalized_aliases
sequence: string
- name: matched_wiki_entity_name
dtype: string
- name: normalized_matched_wiki_entity_name
dtype: string
- name: normalized_value
dtype: string
- name: type
dtype: string
- name: value
dtype: string
- name: my_prediction_text
dtype: string
- name: text_score
dtype: int64
splits:
- name: validation
num_bytes: 74599653
num_examples: 1000
download_size: 31218183
dataset_size: 74599653
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
AlSamCur123/LesserShareGPTNotWorking | AlSamCur123 | 2025-05-03T03:29:56Z | 54 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-08T01:01:20Z | null | ---
license: apache-2.0
---
|
Aravindh25/eval_trossen_pick_tshirt_3cam_v2m5V5 | Aravindh25 | 2025-05-03T01:53:51Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"tutorial"
] | [
"robotics"
] | 2025-05-03T01:52:31Z | null | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "trossen_ai_solo",
"total_episodes": 5,
"total_frames": 8942,
"total_tasks": 1,
"total_videos": 15,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:5"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"main_joint_0",
"main_joint_1",
"main_joint_2",
"main_joint_3",
"main_joint_4",
"main_joint_5",
"main_joint_6"
]
},
"observation.images.cam_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
VGraf/mt_dependent_user_2_turns | VGraf | 2025-05-03T00:35:28Z | 41 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-09T15:30:17Z | null | ---
dataset_info:
features:
- name: conv
list:
- name: user
dtype: string
- name: sys
dtype: string
- name: id
dtype: string
- name: do_inference
dtype: bool
- name: inst
dtype: string
- name: key
dtype: int64
- name: prompt
dtype: string
- name: entity
dtype: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 203495
num_examples: 600
download_size: 83466
dataset_size: 203495
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: user_reference
data_files:
- split: test
path: data/train-*
---
|
cchoi1/kodcode-complete_1000_qwen7b_att_iter0_att40_sol5_dedup_dpo_10000 | cchoi1 | 2025-05-03T00:28:33Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-03T00:28:30Z | null | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: task_id
dtype: string
splits:
- name: train
num_bytes: 22153857.309409436
num_examples: 5662
- name: test
num_bytes: 5540420.690590562
num_examples: 1416
download_size: 7167234
dataset_size: 27694278.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
tacab/Asr_agri_somalii | tacab | 2025-05-02T17:59:34Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T17:55:53Z | null | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: cleaned_text
dtype: string
splits:
- name: train
num_bytes: 597094141.956
num_examples: 2778
download_size: 397302337
dataset_size: 597094141.956
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|