Tasks: Text Ranking
Modalities: Text
Formats: parquet
Languages: French
Size: 10K - 100K

README.md

- Written
---
<!-- adapted from https://github.com/huggingface/huggingface_hub/blob/v0.30.2/src/huggingface_hub/templates/datasetcard_template.md -->

<div align="center" style="padding: 40px 20px; background-color: white; border-radius: 12px; box-shadow: 0 2px 10px rgba(0, 0, 0, 0.05); max-width: 600px; margin: 0 auto;">
  <h1 style="font-size: 3.5rem; color: #1a1a1a; margin: 0 0 20px 0; letter-spacing: 2px; font-weight: 700;">AlloprofReranking</h1>
  <div style="font-size: 1.5rem; color: #4a4a4a; margin-bottom: 5px; font-weight: 300;">An <a href="https://github.com/embeddings-benchmark/mteb" style="color: #2c5282; font-weight: 600; text-decoration: none;" onmouseover="this.style.textDecoration='underline'" onmouseout="this.style.textDecoration='none'">MTEB</a> dataset</div>
  <div style="font-size: 0.9rem; color: #2c5282; margin-top: 10px;">Massive Text Embedding Benchmark</div>
</div>

This dataset was provided by AlloProf, an organisation in Quebec, Canada, offering resources and a help forum curated by a large number of teachers for students on all subjects taught in primary and secondary school.

|               |                                                     |
|---------------|-----------------------------------------------------|
| Task category | t2t                                                 |
| Domains       | Web, Academic, Written                              |
| Reference     | https://huggingface.co/datasets/antoinelb7/alloprof |
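
If you want to inspect the underlying data outside of `mteb`, the sketch below uses the Hugging Face `datasets` library. The repository id `mteb/AlloprofReranking` is an assumption based on this card's title, and the config names are listed at runtime rather than guessed.

```python
from datasets import get_dataset_config_names, load_dataset

# assumed repository id for this card; adjust if the data lives elsewhere
repo_id = "mteb/AlloprofReranking"

# reranking tasks are often stored as several configs, so list them first
configs = get_dataset_config_names(repo_id)
print(configs)

# load the "test" split (the split reported in the statistics below) of the first config
ds = load_dataset(repo_id, configs[0], split="test")
print(ds)
```
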
## How to evaluate on this task

You can evaluate an embedding model on this dataset using the following code:

```python
import mteb

task = mteb.get_task("AlloprofReranking")
evaluator = mteb.MTEB(tasks=[task])

# example model name; substitute the embedding model you want to evaluate
model = mteb.get_model("intfloat/multilingual-e5-small")
evaluator.run(model)
```

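To keep scores for later comparison, `MTEB.run` can also write its output to disk. Below is a minimal sketch, assuming a recent `mteb` version: the model name and output folder are placeholders, and the exact attributes on the returned result objects may differ between versions.

```python
import mteb

task = mteb.get_task("AlloprofReranking")
evaluator = mteb.MTEB(tasks=[task])

# placeholder model; any embedding model supported by mteb should work
model = mteb.get_model("intfloat/multilingual-e5-small")

# write the evaluation output to a folder and keep the in-memory results
results = evaluator.run(model, output_folder="results/AlloprofReranking")
for result in results:
    print(result.task_name, result.scores)
```
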
<!-- Datasets want link to arxiv in readme to autolink dataset with paper -->

To learn more about how to run models on `mteb` tasks, check out the [GitHub repository](https://github.com/embeddings-benchmark/mteb).

## Citation

If you use this dataset, please cite the dataset as well as [mteb](https://github.com/embeddings-benchmark/mteb), as this dataset likely includes additional processing as a part of the [MMTEB Contribution](https://github.com/embeddings-benchmark/mteb/tree/main/docs/mmteb).

```bibtex

@misc{lef23,
  author = {Lefebvre-Brossard, Antoine and Gazaille, Stephane and Desmarais, Michel C.},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International},
  doi = {10.48550/ARXIV.2302.07738},
  keywords = {Computation and Language (cs.CL), Information Retrieval (cs.IR), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
  publisher = {arXiv},
  title = {Alloprof: a new French question-answer education dataset and its use in an information retrieval case study},
  url = {https://arxiv.org/abs/2302.07738},
  year = {2023},
}

@article{enevoldsen2025mmtebmassivemultilingualtext,
  title={MMTEB: Massive Multilingual Text Embedding Benchmark},
```

# Dataset Statistics

<details>
<summary> Dataset Statistics</summary>

The following code contains the descriptive statistics from the task. These can also be obtained using:

```python
import mteb

task = mteb.get_task("AlloprofReranking")

desc_stats = task.metadata.descriptive_stats
```
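
The JSON block below is simply these statistics rendered as text. Assuming `descriptive_stats` comes back as a plain, JSON-serialisable dictionary (an assumption about `mteb` internals), it can be reproduced like this:

```python
import json

import mteb

task = mteb.get_task("AlloprofReranking")

# pretty-print the descriptive statistics, e.g. to compare against the JSON below
print(json.dumps(task.metadata.descriptive_stats, indent=4))
```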

```json
{
    "test": {
        "max_top_ranked_per_query": 37
    }
}
```

</details>

---
*This dataset card was automatically generated using [MTEB](https://github.com/embeddings-benchmark/mteb)*