parquet-converter committed
Commit 19eb241 · 1 Parent(s): 54ddf91

Update parquet files
README.md DELETED
@@ -1,47 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - expert-generated
- language:
- - en
- license:
- - apache-2.0
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - extended|other-MS^2
- - extended|other-Cochrane
- task_categories:
- - summarization
- - text2text-generation
- paperswithcode_id: multi-document-summarization
- pretty_name: MSLR Shared Task
- ---
-
- This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except that the input source documents of its `validation` split have been replaced with documents retrieved by a __sparse__ retriever. The retrieval pipeline used (see the code sketch after this README diff):
-
- - __query__: the `background` field of each example
- - __corpus__: the union of all documents in the `train`, `validation` and `test` splits, where a document is the concatenation of the `title` and `abstract`
- - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- - __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set to the original number of input documents for each example; because `k` equals the number of relevant documents, Rprec, Precision@k and Recall@k coincide in the tables below
-
- Retrieval results on the `train` set:
-
- | Recall@100 | Rprec  | Precision@k | Recall@k |
- | ---------- | ------ | ----------- | -------- |
- | 0.4333     | 0.2163 | 0.2163      | 0.2163   |
-
- Retrieval results on the `validation` set:
-
- | Recall@100 | Rprec  | Precision@k | Recall@k |
- | ---------- | ------ | ----------- | -------- |
- | 0.3780     | 0.1827 | 0.1827      | 0.1827   |
-
- Retrieval results on the `test` set:
-
- | Recall@100 | Rprec  | Precision@k | Recall@k |
- | ---------- | ------ | ----------- | -------- |
- | 0.3928     | 0.1898 | 0.1898      | 0.1898   |
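For reference, here is a minimal, hedged sketch of the BM25 pipeline described in the deleted card. It assumes PyTerrier's default settings and a corpus built by concatenating each document's `title` and `abstract`; the helper names (`build_index`, `oracle_retrieve`) and the `docno`/`text` field layout are illustrative, not part of the original repository.

```python
# Illustrative sketch only: BM25 retrieval with an "oracle" top-k, per the deleted README.
import pyterrier as pt

if not pt.started():
    pt.init()

def build_index(examples, index_dir="./ms2_bm25_index"):
    """Index the union of all source documents (one entry per pmid, text = title + abstract)."""
    def doc_iter():
        for ex in examples:  # union of train/validation/test examples
            for pmid, title, abstract in zip(ex["pmid"], ex["title"], ex["abstract"]):
                yield {"docno": pmid, "text": f"{title} {abstract}"}
    return pt.IterDictIndexer(index_dir).index(doc_iter())

def oracle_retrieve(index_ref, example):
    """Retrieve k documents for one example, where k is its original number of input documents."""
    k = len(example["pmid"])  # "oracle" top-k strategy
    bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25")
    # Query is the review's background field; raw text may need cleaning for Terrier's query parser.
    hits = bm25.search(example["background"])
    return hits.head(k)["docno"].tolist()
```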
data/validation-00000-of-00001-2e02b7e0067d77aa.parquet → allenai--ms2_sparse_oracle/parquet-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9b410d5ab40c5b9500cd6bab8e390f2c0ed6fdf074e14ab02a7182742b16707a
- size 52886168
+ oid sha256:de969dfdea534611f50945709e0d2d24657fa727688854b88a4966679540c213
+ size 39897356

data/train-00000-of-00002-77362a46fb7ee87b.parquet → allenai--ms2_sparse_oracle/parquet-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3000a20a04142a09f51fa099c7c386295e61755b3401f95aacf0785513be230e
- size 153498083
+ oid sha256:d1f7d34f75013a3d898ef384f415da5b6df170cc9b8d9a522fbb03463d4f388a
+ size 308537894

data/test-00000-of-00001-2a72cba309e88d60.parquet → allenai--ms2_sparse_oracle/parquet-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:55c4a064d8d274f5ae035701fb452a316b82260e3f5263afbd4c84d049ab709d
- size 39899568
+ oid sha256:0a02391b7ed493352937a3e7ca6b84a727a15eb9748667ebe1824582eb4a1a73
+ size 52849909

data/train-00001-of-00002-8c66d8133e80be99.parquet DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ff0c03c1a9f9eeb7cc43e2ab214b27ca3f6ecf13c9ae0cf74fa2fa866d2aefb9
- size 154468027
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"allenai--ms2_sparse_oracle": {"description": "The Multidocument Summarization for Literature Review (MSLR) Shared Task aims to study how medical\nevidence from different clinical studies are summarized in literature reviews. Reviews provide the\nhighest quality of evidence for clinical care, but are expensive to produce manually.\n(Semi-)automation via NLP may facilitate faster evidence synthesis without sacrificing rigor.\nThe MSLR shared task uses two datasets to assess the current state of multidocument summarization\nfor this task, and to encourage the development of modeling contributions, scaffolding tasks, methods\nfor model interpretability, and improved automated evaluation methods in this domain.\n", "citation": "@inproceedings{DeYoung2021MS2MS,\n title = {MS\u02c62: Multi-Document Summarization of Medical Studies},\n author = {Jay DeYoung and Iz Beltagy and Madeleine van Zuylen and Bailey Kuehl and Lucy Lu Wang},\n booktitle = {EMNLP},\n year = {2021}\n}\n@article{Wallace2020GeneratingN,\n title = {Generating (Factual?) Narrative Summaries of RCTs: Experiments with Neural Multi-Document Summarization},\n author = {Byron C. Wallace and Sayantani Saha and Frank Soboczenski and Iain James Marshall},\n year = 2020,\n journal = {AMIA Annual Symposium},\n volume = {abs/2008.11293}\n}\n", "homepage": "https://github.com/allenai/mslr-shared-task", "license": "Apache-2.0", "features": {"review_id": {"dtype": "string", "id": null, "_type": "Value"}, "pmid": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "title": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "abstract": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "target": {"dtype": "string", "id": null, "_type": "Value"}, "background": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "mslr2022", "config_name": "ms2", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 622737230, "num_examples": 14188, "dataset_name": "ms2_sparse_oracle"}, "test": {"name": "test", "num_bytes": 81506673, "num_examples": 1667, "dataset_name": "ms2_sparse_oracle"}, "validation": {"name": "validation", "num_bytes": 106328079, "num_examples": 2021, "dataset_name": "ms2_sparse_oracle"}}, "download_checksums": null, "download_size": 400751846, "post_processing_size": null, "dataset_size": 810571982, "size_in_bytes": 1211323828}}
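For completeness, a hedged usage sketch: the renamed parquet split files above can be loaded directly with the `datasets` library's parquet builder, and each example then exposes the features listed in the deleted `dataset_infos.json` (`review_id`, `pmid`, `title`, `abstract`, `target`, `background`). The local paths below are assumptions based on the renamed files in this commit.

```python
# Illustrative only: load the converted parquet splits with the Hugging Face `datasets` parquet builder.
from datasets import load_dataset

ds = load_dataset(
    "parquet",
    data_files={
        "train": "allenai--ms2_sparse_oracle/parquet-train.parquet",
        "validation": "allenai--ms2_sparse_oracle/parquet-validation.parquet",
        "test": "allenai--ms2_sparse_oracle/parquet-test.parquet",
    },
)

# Features per dataset_infos.json: review_id, pmid, title, abstract, target, background
example = ds["validation"][0]
print(example["review_id"], len(example["pmid"]))
```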