Commit 95ff2a3 (verified) by EmaRimoldi
1 Parent(s): a9449ee

Upload folder using huggingface_hub

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "word_embedding_dimension": 384,
+     "pooling_mode_cls_token": false,
+     "pooling_mode_mean_tokens": true,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false,
+     "pooling_mode_weightedmean_tokens": false,
+     "pooling_mode_lasttoken": false,
+     "include_prompt": true
+ }
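
The pooling configuration above selects mean pooling over 384-dimensional token embeddings (all other pooling modes are disabled). As a rough sketch, assuming the standard `sentence_transformers.models` API, an equivalent module could be built like this:

```python
from sentence_transformers import models

# Sketch: a Pooling module matching 1_Pooling/config.json
# (mean pooling over 384-dimensional token embeddings).
pooling = models.Pooling(
    word_embedding_dimension=384,
    pooling_mode_mean_tokens=True,
    pooling_mode_cls_token=False,
    pooling_mode_max_tokens=False,
)
```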
README.md CHANGED
@@ -1,223 +1,173 @@
  ---
- library_name: transformers
  tags:
- - rag
- - retrieval-augmented-generation
- - mcqa
- - qwen3
- - epfl
  ---

- # Model Card for EmaRimoldi/MNLP_M2_rag_model

- <!-- Provide a quick summary of what the model is/does. -->
- #This model is a fine-tuned Retrieval-Augmented Generation (RAG-Sequence) system, built to answer advanced STEM multiple-choice and short-answer questions by retrieving relevant context from a curated EPFL STEM corpus and then generating grounded answers.


- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
-
- - **Developed by:** Ema Rimoldi (EPFL CS-552 MNLP course)
- - **Funded by [optional]:** EPFL Natural Language Processing Lab
- - **Model type:** RAG-Sequence (Retrieval-Augmented Generation)
- - **Language(s) (NLP):** English
- - **License:** Apache-2.0
- - **Finetuned from model [optional]:** Qwen3-0.6B-Base
-
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** https://huggingface.co/EmaRimoldi/MNLP_M2_rag_model
- - **Dataset:** https://huggingface.co/datasets/EmaRimoldi/MNLP_M2_rag_dataset
- - **Document encoder:** https://huggingface.co/EmaRimoldi/MNLP_M2_document_encoder
- - **Retriever index:** FAISS index stored under https://huggingface.co/datasets/EmaRimoldi/MNLP_M2_documents
-
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- Call the RAG pipeline to ground answers in retrieved EPFL STEM documents:
-
- ```python
- from transformers import RagTokenizer, RagSequenceForGeneration
-
- tokenizer = RagTokenizer.from_pretrained("EmaRimoldi/MNLP_M2_rag_model")
- model = RagSequenceForGeneration.from_pretrained("EmaRimoldi/MNLP_M2_rag_model")
-
- input_dict = tokenizer.prepare_seq2seq_batch(
-     question="What is the Carnot engine?",
-     n_docs=5,
-     return_tensors="pt"
- )
- generated = model.generate(**input_dict)
- print(tokenizer.batch_decode(generated, skip_special_tokens=True))
  ```


- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**

- [More Information Needed]

- **APA:**

- [More Information Needed]

- ## Glossary [optional]

- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Model Card Authors [optional]

- [More Information Needed]

- ## Model Card Contact

- [More Information Needed]
  ---
+ language: en
+ license: apache-2.0
+ library_name: sentence-transformers
  tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+ - transformers
+ datasets:
+ - s2orc
+ - flax-sentence-embeddings/stackexchange_xml
+ - ms_marco
+ - gooaq
+ - yahoo_answers_topics
+ - code_search_net
+ - search_qa
+ - eli5
+ - snli
+ - multi_nli
+ - wikihow
+ - natural_questions
+ - trivia_qa
+ - embedding-data/sentence-compression
+ - embedding-data/flickr30k-captions
+ - embedding-data/altlex
+ - embedding-data/simple-wiki
+ - embedding-data/QQP
+ - embedding-data/SPECTER
+ - embedding-data/PAQ_pairs
+ - embedding-data/WikiAnswers
+ pipeline_tag: sentence-similarity
  ---

+ # all-MiniLM-L12-v2
+ This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.

+ ## Usage (Sentence-Transformers)
+ Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

+ ```
+ pip install -U sentence-transformers
  ```

+ Then you can use the model like this:
+ ```python
+ from sentence_transformers import SentenceTransformer
+ sentences = ["This is an example sentence", "Each sentence is converted"]

+ model = SentenceTransformer('sentence-transformers/all-MiniLM-L12-v2')
+ embeddings = model.encode(sentences)
+ print(embeddings)
+ ```

+ ## Usage (HuggingFace Transformers)
+ Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

+ ```python
+ from transformers import AutoTokenizer, AutoModel
+ import torch
+ import torch.nn.functional as F

+ # Mean pooling - take the attention mask into account for correct averaging
+ def mean_pooling(model_output, attention_mask):
+     token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
+     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


+ # Sentences we want sentence embeddings for
+ sentences = ['This is an example sentence', 'Each sentence is converted']

+ # Load model from HuggingFace Hub
+ tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L12-v2')
+ model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L12-v2')

+ # Tokenize sentences
+ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

+ # Compute token embeddings
+ with torch.no_grad():
+     model_output = model(**encoded_input)

+ # Perform pooling
+ sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

+ # Normalize embeddings
+ sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)

+ print("Sentence embeddings:")
+ print(sentence_embeddings)
+ ```

+ ------
+
+ ## Background
+
+ The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised
+ contrastive learning objective. We used the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model and fine-tuned it on a
+ 1B sentence pairs dataset. We use a contrastive learning objective: given a sentence from the pair, the model should predict which, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
+
+ We developed this model during the
+ [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104),
+ organized by Hugging Face. We developed this model as part of the project:
+ [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
+
+ ## Intended uses
+
+ Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures
+ the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.
+
+ By default, input text longer than 256 word pieces is truncated.
+
+
+ ## Training procedure
+
+ ### Pre-training
+
+ We use the pretrained [`microsoft/MiniLM-L12-H384-uncased`](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure.
+
+ ### Fine-tuning
+
+ We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between all possible sentence pairs in the batch.
+ We then apply the cross-entropy loss by comparing with the true pairs.
+
+ #### Hyperparameters
+
+ We trained our model on a TPU v3-8. We trained the model for 100k steps using a batch size of 1024 (128 per TPU core).
+ We used a learning-rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with
+ a 2e-5 learning rate. The full training script is accessible in this current repository: `train_script.py`.
+
+ #### Training data
+
+ We used the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion.
+ We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
+
+
+ | Dataset | Paper | Number of training tuples |
+ |--------------------------------------------------------|:----------------------------------------:|:--------------------------:|
+ | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
+ | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 |
+ | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 |
+ | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 |
+ | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 |
+ | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 |
+ | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 |
+ | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 |
+ | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 |
+ | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 |
+ | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 |
+ | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 |
+ | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 |
+ | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395 |
+ | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 |
+ | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 |
+ | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 |
+ | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 |
+ | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 |
+ | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 |
+ | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 |
+ | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/)) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 |
+ | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 |
+ | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 |
+ | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 |
+ | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 |
+ | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 |
+ | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 |
+ | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 |
+ | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 |
+ | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 |
+ | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 |
+ | **Total** | | **1,170,060,424** |
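
The Fine-tuning subsection of the new README describes an in-batch contrastive objective: cosine similarities are computed between all sentence pairs in a batch, and a cross-entropy loss treats the true pairings as the target classes. A minimal PyTorch sketch of that idea follows; the `scale` factor and function name are illustrative assumptions, not taken from `train_script.py`:

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(anchor_emb, positive_emb, scale=20.0):
    # anchor_emb, positive_emb: (batch, dim); row i of each tensor forms a true pair,
    # and every other row in the batch serves as an in-batch negative.
    anchor_emb = F.normalize(anchor_emb, p=2, dim=1)
    positive_emb = F.normalize(positive_emb, p=2, dim=1)

    # Cosine similarity between every anchor and every candidate in the batch.
    scores = anchor_emb @ positive_emb.T * scale  # (batch, batch)

    # Cross-entropy with the diagonal (true pairs) as the target classes.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```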
config.json CHANGED
@@ -1,6 +1,6 @@
  {
    "architectures": [
-     "BertLMHeadModel"
+     "BertModel"
    ],
    "attention_probs_dropout_prob": 0.1,
    "classifier_dropout": null,
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "__version__": {
+         "sentence_transformers": "4.1.0",
+         "transformers": "4.52.3",
+         "pytorch": "2.7.0"
+     },
+     "prompts": {},
+     "default_prompt_name": null,
+     "similarity_fn_name": "cosine"
+ }
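
The `"similarity_fn_name": "cosine"` entry tells recent sentence-transformers releases (the file records version 4.1.0) which scoring function `model.similarity(...)` should use. A small illustrative sketch, assuming that API and using the upstream repo id only as an example:

```python
from sentence_transformers import SentenceTransformer

# Sketch: with similarity_fn_name = "cosine", model.similarity() scores
# embeddings with cosine similarity (sentence-transformers >= 3.x API).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])
print(model.similarity(embeddings, embeddings))  # 2x2 similarity matrix
```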
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5dd57b928f7ca776e3551ad3e8c307cc20d63f0b5d2fc31697594771ac0bc65e
- size 133588624
+ oid sha256:32ed5a30285dd435b59979b997f7d1c337486ad0b53d3ac0bfc78d779368452e
+ size 133462128
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+     {
+         "idx": 0,
+         "name": "0",
+         "path": "",
+         "type": "sentence_transformers.models.Transformer"
+     },
+     {
+         "idx": 1,
+         "name": "1",
+         "path": "1_Pooling",
+         "type": "sentence_transformers.models.Pooling"
+     },
+     {
+         "idx": 2,
+         "name": "2",
+         "path": "2_Normalize",
+         "type": "sentence_transformers.models.Normalize"
+     }
+ ]
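
`modules.json` declares the three-stage encoding pipeline: a Transformer backbone, the mean-pooling module from `1_Pooling`, and a final L2 normalization. A minimal sketch of assembling the same pipeline by hand, assuming the standard sentence-transformers modules API (the repo id is illustrative):

```python
from sentence_transformers import SentenceTransformer, models

# Sketch: rebuild the pipeline declared in modules.json
# (Transformer -> Pooling -> Normalize).
word_embedding_model = models.Transformer("sentence-transformers/all-MiniLM-L12-v2", max_seq_length=128)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)
normalize_model = models.Normalize()

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, normalize_model])
embeddings = model.encode(["This is an example sentence"])
```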
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+     "max_seq_length": 128,
+     "do_lower_case": false
+ }
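
With `max_seq_length` set to 128, inputs longer than 128 word pieces are truncated at encode time. A brief sketch of inspecting or adjusting this limit through the sentence-transformers API (repo id shown for illustration only):

```python
from sentence_transformers import SentenceTransformer

# Sketch: max_seq_length is read from sentence_bert_config.json (128 here);
# tokens beyond this limit are truncated when encoding.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")
print(model.max_seq_length)   # e.g. 128 for this configuration
model.max_seq_length = 128    # can be lowered or raised up to the backbone's limit
```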
tokenizer_config.json CHANGED
@@ -48,7 +48,7 @@
    "extra_special_tokens": {},
    "mask_token": "[MASK]",
    "max_length": 128,
-   "model_max_length": 512,
+   "model_max_length": 128,
    "never_split": null,
    "pad_to_multiple_of": null,
    "pad_token": "[PAD]",