Commit 3a5bb46 (0 parents), committed by GuillemIIC

feat: initial dataset release
.gitattributes ADDED
@@ -0,0 +1,59 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,110 @@
---
dataset_info:
- config_name: dedup
  features:
  - name: text
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 85241511
    num_examples: 30844
  download_size: 48607995
  dataset_size: 85241511
- config_name: original
  features:
  - name: text
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 105400009
    num_examples: 35996
  download_size: 60150578
  dataset_size: 105400009
configs:
- config_name: dedup
  data_files:
  - split: train
    path: dedup/train-*
- config_name: original
  data_files:
  - split: train
    path: original/train-*
  default: true
license: mit
task_categories:
- text-generation
- mask-generation
language:
- es
tags:
- Clinical
- Spanish
size_categories:
- 10K<n<100K
---

# ClinText-SP Dataset Card

## Dataset Description
**ClinText-SP** is the largest publicly available Spanish clinical corpus designed to support research in clinical natural language processing. It aggregates a rich collection of clinical texts from diverse open sources, including medical journals, annotated corpora from shared tasks, and supplementary sources such as Wikipedia and medical textbooks.

The dataset contains:
- **35,996 samples** with an average of ~700 tokens per sample
- **Approximately 25.62M tokens** in total

ClinText-SP offers a balanced mix of long, well-structured clinical case reports and shorter, schematic texts, making it ideal for a variety of clinical NLP tasks.

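A minimal loading sketch with the `datasets` library, using the two configs declared in the metadata above; the repo id `IIC/ClinText-SP` is an assumption inferred from the card title and the IIC organization:

```python
from datasets import load_dataset

# "original" is the default config (35,996 examples); "dedup" is the
# fuzzy-deduplicated subset (30,844 examples). Repo id assumed.
original = load_dataset("IIC/ClinText-SP", "original", split="train")
dedup = load_dataset("IIC/ClinText-SP", "dedup", split="train")

print(original)                    # columns: "text", "source"
print(original[0]["text"][:300])   # peek at the first clinical text
```
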
## Data Sources
The corpus is built from three primary source types (a short inspection sketch follows this list):
- **Medical Journals:** Clinical case reports from specialized Spanish-language journals.
- **Annotated Corpora:** Datasets released for Spanish clinical shared tasks.
- **Other Sources:** Additional clinical knowledge extracted from Wikipedia and selected medical textbooks to complement the dataset.

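Each example also carries a `source` field (see the features above). A minimal sketch for checking how many examples each provenance label contributes, again assuming the `IIC/ClinText-SP` repo id; the label values themselves are not documented in this card:

```python
from collections import Counter

from datasets import load_dataset

# Count examples per provenance label in the "source" column.
ds = load_dataset("IIC/ClinText-SP", "original", split="train")
counts = Counter(ds["source"])
for source, n in counts.most_common():
    print(f"{source}: {n}")
```
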
## Data Preprocessing
- **Cleaning & Extraction:** Texts were parsed and cleaned from PDF, HTML, and other formats. Extraneous formatting, HTML artifacts, and non-essential metadata (e.g., author names) were removed.
- **Customized Strategies:** Tailored regex-based heuristics and LLM-assisted methods (using Qwen2.5) were employed to extract clinical case information accurately.
- **Deduplication & Language Filtering:** Fuzzy deduplication (using MinHash) removed near-duplicate entries, and non-Spanish texts were filtered out with the Python `langdetect` library (a minimal sketch of both steps follows this list).

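A minimal sketch of these two steps, using `datasketch` for MinHash-based near-duplicate detection and `langdetect` for language filtering; the threshold, signature size, and tokenization below are illustrative assumptions, not the parameters used to build ClinText-SP:

```python
from datasketch import MinHash, MinHashLSH
from langdetect import DetectorFactory, detect

DetectorFactory.seed = 0  # make langdetect deterministic


def minhash(text: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature over the set of whitespace tokens."""
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):
        m.update(token.encode("utf-8"))
    return m


def clean_corpus(texts, threshold: float = 0.8):
    """Keep Spanish texts and drop near-duplicates (illustrative settings)."""
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for i, text in enumerate(texts):
        try:
            if detect(text) != "es":   # language filter
                continue
        except Exception:              # empty or undecidable text
            continue
        sig = minhash(text)
        if lsh.query(sig):             # a near-duplicate was already kept
            continue
        lsh.insert(str(i), sig)
        kept.append(text)
    return kept
```
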
## Intended Use
ClinText-SP is intended for:
- **Training and Benchmarking:** Facilitating the development of Spanish clinical NLP models, including encoder-based models such as [RigoBERTa Clinical](https://huggingface.co/IIC/RigoBERTa-Clinical).
- **Domain-Adaptive Pretraining:** Serving as a robust resource for adapting language models to the clinical domain (a tokenization sketch follows this list).
- **Research and Application:** Advancing clinical language understanding and supporting applications in healthcare AI.

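A minimal sketch of preparing the corpus for masked-language-model domain-adaptive pretraining with `transformers`; the tokenizer checkpoint is only an illustrative choice and this is not the recipe used to train RigoBERTa Clinical:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Any Spanish encoder checkpoint works here; this one is only an example.
tokenizer = AutoTokenizer.from_pretrained("PlanTL-GOB-ES/roberta-base-bne")

ds = load_dataset("IIC/ClinText-SP", "dedup", split="train")


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)


tokenized = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

# Dynamic masking for fill-mask style continued pretraining; pass the
# collator and the tokenized dataset to a Trainer to run the adaptation.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
```
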
## Limitations and Biases
- **Biases:** The dataset may reflect biases inherent to the selected sources and may not cover every clinical specialty.
- **Coverage:** Although broad, the dataset may not fully capture clinical nuance across all medical fields.
- **Data Quality:** Data quality varies because of the diversity of sources and extraction methods.

For more detailed information, please check the [original paper](https://arxiv.org/abs/2503.18594).

## Citation
If you use ClinText-SP in your research, please cite the work as follows:

**BibTeX:**

```bibtex
@misc{subies2025clintextsprigobertaclinicalnew,
      title={ClinText-SP and RigoBERTa Clinical: a new set of open resources for Spanish Clinical NLP},
      author={Guillem García Subies and Álvaro Barbero Jiménez and Paloma Martínez Fernández},
      year={2025},
      eprint={2503.18594},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2503.18594},
}
```

**APA:**

```
Subies, G. G., Barbero Jiménez, Á., & Martínez Fernández, P. (2025). ClinText-SP and RigoBERTa Clinical: A new set of open resources for Spanish Clinical NLP. arXiv. https://arxiv.org/abs/2503.18594
```

## Dataset Card Authors and Contact

Guillem García Subies: guillem.garcia@iic.uam.es, 100500844@alumnos.uc3m.es
dedup/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa36b9ac218309fa6bacb062a344fbcbfea1360abac5d29b40f0f8b1866d41a8
size 48607995
original/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ec52e7d8fbbf5149d86d0fff99a36f074ae113a2e0c83fafd8f223cc98143d25
size 60150578
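
These LFS pointers resolve to the actual Parquet shards on the Hub, so a split can also be read directly with pandas over the `hf://` filesystem (requires `huggingface_hub`; the repo id `IIC/ClinText-SP` is assumed):

```python
import pandas as pd

# Read one Parquet shard straight from the Hub; huggingface_hub provides
# the hf:// fsspec filesystem that pandas uses here. Repo id assumed.
df = pd.read_parquet(
    "hf://datasets/IIC/ClinText-SP/dedup/train-00000-of-00001.parquet"
)
print(df.shape)                           # expected (30844, 2)
print(df["source"].value_counts().head())  # provenance breakdown
```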