DoDucAnh committed
Commit 8da4c75 · verified · 1 parent: a251374

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ checkpoint-49503/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 1024,
+ "pooling_mode_cls_token": false,
+ "pooling_mode_mean_tokens": true,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
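The config above selects mean-token pooling (`pooling_mode_mean_tokens: true`) over 1024-dimensional word embeddings. As a rough sketch of what that pooling mode computes (plain NumPy, with hypothetical toy dimensions rather than the real 1024), mean pooling averages the token embeddings while masking out padding positions:

```python
import numpy as np

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings, ignoring padding positions.

    token_embeddings: (batch, seq_len, dim)
    attention_mask:   (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask[..., None].astype(token_embeddings.dtype)  # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=1)                   # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                   # avoid divide-by-zero
    return summed / counts

# Toy example: one sequence of three tokens, the last of which is padding
emb = np.array([[[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(emb, mask))  # [[2. 3.]] — the padding token is ignored
```

This mirrors the behavior implied by the flags above; the exact implementation lives in the `sentence_transformers` `Pooling` module.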
README.md ADDED
@@ -0,0 +1,531 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:594028
+ - loss:MultipleNegativesRankingLoss
+ base_model: intfloat/multilingual-e5-large-instruct
+ widget:
+ - source_sentence: '''আমি'' শব্দটি কোন লিঙ্গ?
+
+ A. উভয় লিঙ্গ
+
+ B. ক্লীব লিঙ্গ
+
+ C. পুংলিঙ্গ
+
+ D. স্ত্রী লিঙ্গ'
+ sentences:
+ - F.P. Dobroslavin, tibbin müxtəlif sahələri üzrə tanınmış bir alimdir, ancaq daha
+ çox baş vermiş tədqiqatlara görə seçilir. Onun əməyinin sanitar-gigiyenik sahəyə
+ təsiri əhəmiyyətlidir.
+ - 'বাংলা ভাষায় শব্দগুলোর লিঙ্গ সাধারণত তিনটি মূল শ্রেণিতে ভাগ হয়: পুংলিঙ্গ (পুরুষ),
+ স্ত্রী লিঙ্গ (মহিলা), এবং ক্লীব লিঙ্গ (যার কোনো লিঙ্গ নেই)।'
+ - Waves are disturbances that transfer energy from one place to another without
+ transferring matter. Think of a ripple on a pond – the water molecules don't travel
+ across the pond with the ripple; they mostly move up and down as the energy passes
+ through them.
+ - source_sentence: '企业产品组合中所拥有的产品线数目是
+
+ A. 产品组合的宽度
+
+ B. 产品组合的相关性
+
+ C. 产品组合的深度
+
+ D. 产品组合的长度'
+ sentences:
+ - 产品组合的宽度(Width)是指企业拥有的产品线数目。
+ - This fluid is produced by the walls of the vagina and the Bartholin's glands.
+ - "### Assumption of Risk Defined \nAssumption of risk is a legal doctrine used\
+ \ in tort law that can limit or bar recovery in negligence claims. This doctrine\
+ \ suggests that if a person voluntarily engages in a risky activity, knowing the\
+ \ risks involved, they cannot hold another party responsible for resulting injuries.\
+ \ Common scenarios where this applies include contact sports and recreational\
+ \ activities, where participants understand the inherent hazards. \n\n### Elements\
+ \ of Assumption of Risk \nTo successfully argue assumption of risk, certain elements\
+ \ must be established: \n1. **Knowledge of the Risk**: The individual must have\
+ \ actual or constructive knowledge of the risk involved. \n2. **Voluntary Exposure**:\
+ \ The individual must voluntarily choose to expose themselves to that risk. \n\
+ 3. **Informed Consent**: The individual must have consented to take that risk\
+ \ despite being aware of it. \n\n### Contributory Negligence \nContributory\
+ \ negligence is a legal concept that exists in some jurisdictions where a plaintiff's\
+ \ own negligence contributes to their injury. Under this doctrine, if the plaintiff\
+ \ is found to have played any part in their injury, they may be barred from recovering\
+ \ damages, or the damage award could be reduced. It emphasizes the responsibility\
+ \ of the injured party to exercise reasonable care for their own safety. \n\n\
+ ### Interaction of Assumption of Risk and Contributory Negligence \nIn many jurisdictions,\
+ \ both assumption of risk and contributory negligence can coexist as defenses.\
+ \ However, some legal systems assert that if a plaintiff is found contributorily\
+ \ negligent, they cannot also claim assumed risk for the same incident. This overlap\
+ \ can complicate cases since the determination of the plaintiff's awareness and\
+ \ behavior prior to the accident can alter the outcome. \n\n### Legal Standard\
+ \ for Warnings and Liability \nIn negligence cases, the adequacy of warnings\
+ \ provided is crucial. Courts often assess whether the warnings were sufficient\
+ \ to inform the individual of the specific hazards present. A simple sign may\
+ \ not meet the threshold if it fails to clearly communicate the danger involved,\
+ \ especially if the harm is not immediately obvious or if the context (e.g., a\
+ \ crowded street) suggests additional risks."
+ - source_sentence: "While shopping at a grocery store, a customer tripped over a broken\
+ \ tile, fell, and suffered a concussion. A few months after the accident, the\
+ \ customer's attorney deposed a store employee. In the deposition, the employee\
+ \ testified, \"I'd been telling the store manager for years to get that broken\
+ \ tile fixed, but he wouldn't do it. \" The employee died in an automobile accident\
+ \ after being deposed. At trial, the deposition should be\nA. admitted, as a dying\
+ \ declaration. \nB. admitted, as former testimony. \nC. not admitted, because\
+ \ it is hearsay not within any exception. \nD. not admitted, because the employee\
+ \ is not available for cross-examination. "
+ sentences:
+ - In the context of human evolution, brain size is often compared to body size in
+ a measurement called the encephalization quotient (EQ). This measure assesses
+ the expected brain size for an animal of a given body size compared to actual
+ brain size. An increase in EQ among hominins is often linked to advancements in
+ cognitive abilities, such as problem-solving and social interaction.
+ - Another exception to the hearsay rule, though often with specific requirements
+ related to the declarant's belief of impending death, is the dying declaration.
+ - In assessing moral actions, it is also essential to consider societal norms. In
+ the U.S. context in 2020, moral standards often emphasize community well-being
+ and individual rights. An action like diverting emergency supplies would likely
+ be condemned in most social circles, while stepping out of rhythm during a line
+ dance would not commonly qualify as a serious moral offense. Thus, moral wrongness
+ is often context-dependent and tied closely to consequences for individuals and
+ society.
+ - source_sentence: 'Recent research on hominid species dating from the Middle Pliocene
+ indicates there was (as of 2020):
+
+ A. multiple hominid species but with limited diversity.
+
+ B. a single species with no diversity.
+
+ C. decreased species diversity but increased numbers of hammerstones and flakes,
+ indicating stone tool manufacture.
+
+ D. a single dominant species that outcompeted all others, leading to decreased
+ diversity.
+
+ E. increased species diversity due to a prolonged ice age followed by a severe
+ drought.
+
+ F. decreased species diversity due to a prolonged ice age followed by a severe
+ drought.
+
+ G. a great amount of species diversity, or a single species that exhibited a lot
+ of diversity.
+
+ H. increased species diversity but with decreased population numbers due to harsh
+ climate conditions.
+
+ I. increased species diversity but decreased numbers of hammerstones and flakes,
+ indicating less stone tool manufacture.
+
+ J. very little species diversity during this period and very few hominids.'
+ sentences:
+ - Hammerstones and flakes are artifacts associated with early stone tool technology.
+ Hammerstones are hard rocks used to strike other stones, while flakes are the
+ sharp pieces produced from such strikes, which could be utilized for tasks like
+ cutting or scraping, indicating early cognitive and manual skills in tool-making
+ among certain species.
+ - The Doppler effect is a phenomenon that occurs when the source of a wave and the
+ observer are moving relative to each other. It results in a change in the observed
+ frequency of the wave compared to the source frequency.
+ - Counseling and therapeutic interventions can play a role in addressing student
+ behavioral issues, but they should be considered within a broader context of classroom
+ dynamics and educational strategies. Counseling might help the child develop coping
+ mechanisms, social skills, and emotional regulation strategies. However, the effectiveness
+ of counseling is often maximized when the child is supported in the classroom
+ environment as well, suggesting that changes to the teacher's approach could lead
+ to improved outcomes.
+ - source_sentence: 'Hipotalamusi NUK kontrollon sekretimin e hormoneve:
+
+ A. FSH dhe LH
+
+ B. te rritjes(GH)
+
+ C. ACTH
+
+ D. te pankreasit'
+ sentences:
+ - In the context of estate planning and inheritance law, a will serves as a legal
+ document outlining how a person's property and assets will be distributed after
+ their death. The interpretation of a will often hinges on the intent of the testator,
+ or the person who made the will, which can affect how property interests are determined.
+ - State laws that regulate matters of legitimate local concern but have an incidental
+ effect on interstate commerce are subject to a less strict balancing test. Under
+ this test, a state law will be upheld unless the burden imposed on interstate
+ commerce is clearly excessive in relation to the putative local benefits.
+ - Hipotalamusi është një pjesë e trurit që ndodhet nën talamusin. Ai luan një rol
+ kryesor në lidhjen e sistemit nervor me sistemin endokrin përmes gjëndrës së hipofizës.
+ datasets:
+ - DoDucAnh/mcqa-rag-finetune
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ ---
+
+ # SentenceTransformer based on intfloat/multilingual-e5-large-instruct
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on the [mcqa-rag-finetune](https://huggingface.co/datasets/DoDucAnh/mcqa-rag-finetune) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision 84344a23ee1820ac951bc365f1e91d094a911763 -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+ - [mcqa-rag-finetune](https://huggingface.co/datasets/DoDucAnh/mcqa-rag-finetune)
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
+ (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (2): Normalize()
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+ 'Hipotalamusi NUK kontrollon sekretimin e hormoneve:\nA. FSH dhe LH\nB. te rritjes(GH)\nC. ACTH\nD. te pankreasit',
+ 'Hipotalamusi është një pjesë e trurit që ndodhet nën talamusin. Ai luan një rol kryesor në lidhjen e sistemit nervor me sistemin endokrin përmes gjëndrës së hipofizës.',
+ 'State laws that regulate matters of legitimate local concern but have an incidental effect on interstate commerce are subject to a less strict balancing test. Under this test, a state law will be upheld unless the burden imposed on interstate commerce is clearly excessive in relation to the putative local benefits.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
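Because the model ends with a `Normalize()` module, its embeddings are unit-length, so cosine similarity reduces to a dot product. A minimal sketch of how an MCQA-RAG pipeline might rank candidate passages for a question, using made-up 4-dimensional vectors in place of real 1024-dimensional embeddings:

```python
import numpy as np

def rank_passages(query_emb: np.ndarray, passage_embs: np.ndarray) -> np.ndarray:
    """Return passage indices sorted by descending cosine similarity.

    Assumes all embeddings are already L2-normalized (as this model's
    Normalize() module guarantees), so cosine similarity is a dot product.
    """
    scores = passage_embs @ query_emb  # (num_passages,)
    return np.argsort(-scores)         # best match first

# Made-up unit vectors standing in for model.encode() outputs
query = np.array([1.0, 0.0, 0.0, 0.0])
passages = np.array([
    [0.0, 1.0, 0.0, 0.0],  # unrelated passage
    [0.8, 0.6, 0.0, 0.0],  # closest to the query
    [0.6, 0.0, 0.8, 0.0],  # somewhat related
])
print(rank_passages(query, passages))  # [1 2 0]
```

In practice the same ranking can be obtained directly from `model.similarity(...)` as shown above; this sketch only makes the underlying arithmetic explicit.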
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### mcqa-rag-finetune
+
+ * Dataset: [mcqa-rag-finetune](https://huggingface.co/datasets/DoDucAnh/mcqa-rag-finetune) at [d1f5446](https://huggingface.co/datasets/DoDucAnh/mcqa-rag-finetune/tree/d1f5446a80c070fb8e1abfffef8a9dace426026b)
+ * Size: 594,028 training samples
+ * Columns: <code>anchor</code> and <code>positive</code>
+ * Approximate statistics based on the first 1000 samples:
+ | | anchor | positive |
+ |:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
+ | type | string | string |
+ | details | <ul><li>min: 22 tokens</li><li>mean: 105.96 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 70.95 tokens</li><li>max: 478 tokens</li></ul> |
+ * Samples:
+ | anchor | positive |
+ |:------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+ | <code>Find all c in Z_3 such that Z_3[x]/(x^2 + c) is a field.<br>A. 0<br>B. 1<br>C. 2<br>D. 3</code> | <code>The notation Z_3 refers to the finite field with three elements, often denoted as {0, 1, 2}. This field operates under modular arithmetic, specifically modulo 3. Elements in Z_3 can be added and multiplied according to the rules of modulo 3, where any number can wrap around upon reaching 3.</code> |
+ | <code>Find all c in Z_3 such that Z_3[x]/(x^2 + c) is a field.<br>A. 0<br>B. 1<br>C. 2<br>D. 3</code> | <code>A field is a set equipped with two operations, addition and multiplication, satisfying certain properties: associativity, commutativity, distributivity, the existence of additive and multiplicative identities, and the existence of additive inverses and multiplicative inverses (for all elements except the zero element). In order for Z_3[x]/(f(x)) to be a field, the polynomial f(x) must be irreducible over Z_3.</code> |
+ | <code>Find all c in Z_3 such that Z_3[x]/(x^2 + c) is a field.<br>A. 0<br>B. 1<br>C. 2<br>D. 3</code> | <code>The expression Z_3[x] indicates the set of all polynomials with coefficients in Z_3. A polynomial is said to be irreducible over Z_3 if it cannot be factored into the product of two non-constant polynomials with coefficients in Z_3. In the case of quadratic polynomials like x^2 + c, irreducibility depends on whether it has any roots in the field Z_3.</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+ ```json
+ {
+ "scale": 20.0,
+ "similarity_fct": "cos_sim"
+ }
+ ```
+
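MultipleNegativesRankingLoss treats, for each anchor in a batch, its paired passage as the positive and every other passage in the same batch as a negative: the scaled similarity matrix is scored with softmax cross-entropy against the diagonal. A toy NumPy sketch of that objective (using `scale=20` as in the config above; the unit-vector embeddings are made up, and the real implementation lives in `sentence_transformers.losses`):

```python
import numpy as np

def mnr_loss(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """In-batch-negatives ranking loss over L2-normalized embeddings.

    Row i of the similarity matrix scores anchor i against every positive
    in the batch; the correct match is the diagonal entry (i, i).
    """
    sims = scale * (anchors @ positives.T)  # (batch, batch)
    # Log-softmax per row, then take the diagonal (the true pair)
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

# Two toy anchor/positive pairs (unit vectors)
anchors = np.array([[1.0, 0.0], [0.0, 1.0]])
positives = np.array([[1.0, 0.0], [0.0, 1.0]])
print(mnr_loss(anchors, positives))  # near zero: each anchor matches its own positive
```

Swapping the positives (so each anchor's true passage becomes another anchor's) drives the loss up sharply, which is exactly the pressure that pushes matched pairs together and mismatched pairs apart during training.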
+ ### Evaluation Dataset
+
+ #### mcqa-rag-finetune
+
+ * Dataset: [mcqa-rag-finetune](https://huggingface.co/datasets/DoDucAnh/mcqa-rag-finetune) at [d1f5446](https://huggingface.co/datasets/DoDucAnh/mcqa-rag-finetune/tree/d1f5446a80c070fb8e1abfffef8a9dace426026b)
+ * Size: 1,000 evaluation samples
+ * Columns: <code>anchor</code> and <code>positive</code>
+ * Approximate statistics based on the first 1000 samples:
+ | | anchor | positive |
+ |:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
+ | type | string | string |
+ | details | <ul><li>min: 22 tokens</li><li>mean: 98.74 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 59.88 tokens</li><li>max: 501 tokens</li></ul> |
+ * Samples:
+ | anchor | positive |
+ |:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+ | <code>ക്രൂരകോഷ്ഠം ഉള്ള ഒരാളിൽ കോപിച്ചിരിക്കുന്ന ദോഷം താഴെപ്പറയുന്നവയിൽ ഏതാണ്?<br>A. കഫം<br>B. പിത്തം<br>C. വാതം<br>D. രക്തം</code> | <code>ഓരോ ദോഷത്തിനും അതിന്റേതായ സ്വഭാവങ്ങളും ശരീരത്തിൽ അത് ഉണ്ടാക്കുന്ന ഫലങ്ങളും ഉണ്ട്.</code> |
+ | <code>Melyik tényező nem befolyásolja a fagylalt keresleti függvényét?<br>A. A fagylalt árának változása.<br>B. Mindegyik tényező befolyásolja.<br>C. A jégkrém árának változása.<br>D. A fagylalttölcsér árának változása.</code> | <code>A keresleti függvény negatív meredekségű, ami azt jelenti, hogy az ár növekedésével a keresett mennyiség csökken (csökkenő kereslet törvénye).</code> |
+ | <code>In contrast to _______, _______ aim to reward favourable behaviour by companies. The success of such campaigns have been heightened through the use of ___________, which allow campaigns to facilitate the company in achieving _________ .<br>A. Boycotts, Buyalls, Blockchain technology, Increased Sales<br>B. Buycotts, Boycotts, Digital technology, Decreased Sales<br>C. Boycotts, Buycotts, Digital technology, Decreased Sales<br>D. Buycotts, Boycotts, Blockchain technology, Charitable donations<br>E. Boycotts, Buyalls, Blockchain technology, Charitable donations<br>F. Boycotts, Buycotts, Digital technology, Increased Sales<br>G. Buycotts, Boycotts, Digital technology, Increased Sales<br>H. Boycotts, Buycotts, Physical technology, Increased Sales<br>I. Buycotts, Buyalls, Blockchain technology, Charitable donations<br>J. Boycotts, Buycotts, Blockchain technology, Decreased Sales</code> | <code>**Consumer Activism**: This term refers to the actions taken by consumers to promote social, political, or environmental causes. These actions can include boycotting certain companies or buycotting others, influencing market dynamics based on ethical considerations. The effectiveness of consumer activism can vary but has gained prominence in recent years with increased visibility through social media.</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+ ```json
+ {
+ "scale": 20.0,
+ "similarity_fct": "cos_sim"
+ }
+ ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 12
+ - `per_device_eval_batch_size`: 12
+ - `learning_rate`: 3e-05
+ - `num_train_epochs`: 1
+ - `warmup_steps`: 5000
+ - `fp16`: True
+ - `load_best_model_at_end`: True
+
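With `warmup_steps: 5000` and a linear scheduler over one epoch of 594,028 samples at batch size 12 (≈49,503 optimizer steps, matching the saved `checkpoint-49503`), the learning rate ramps up to 3e-05 and then decays linearly to zero. A sketch of that schedule's shape (the step count is inferred from the dataset size and batch size, and this mirrors the Transformers linear scheduler's behavior rather than its exact code):

```python
def linear_schedule_lr(step: int, peak_lr: float = 3e-5,
                       warmup_steps: int = 5000, total_steps: int = 49503) -> float:
    """Linear warmup to peak_lr over warmup_steps, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # Fraction of the decay phase still remaining
    remaining = (total_steps - step) / (total_steps - warmup_steps)
    return peak_lr * max(0.0, remaining)

print(linear_schedule_lr(0))      # warmup starts from zero
print(linear_schedule_lr(5000))   # peak learning rate after warmup
print(linear_schedule_lr(49503))  # decays back to zero by the final step
```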
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 12
+ - `per_device_eval_batch_size`: 12
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 3e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 5000
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | Training Loss | Validation Loss |
+ |:--------:|:--------:|:-------------:|:---------------:|
+ | **0.05** | **2476** | **0.1209** | **0.0347** |
+ | 0.1000 | 4952 | 0.0737 | 0.0459 |
+ | 0.1501 | 7428 | 0.087 | 0.0732 |
+ | 0.2001 | 9904 | 0.0825 | 0.1209 |
+ | 0.2501 | 12380 | 0.0783 | 0.0934 |
+ | 0.3001 | 14856 | 0.071 | 0.0793 |
+ | 0.3501 | 17332 | 0.0661 | 0.0855 |
+ | 0.4001 | 19808 | 0.0652 | 0.0964 |
+ | 0.4502 | 22284 | 0.063 | 0.0892 |
+ | 0.5002 | 24760 | 0.056 | 0.0923 |
+ | 0.5502 | 27236 | 0.0509 | 0.1016 |
+ | 0.6002 | 29712 | 0.045 | 0.0918 |
+ | 0.6502 | 32188 | 0.0472 | 0.0896 |
+ | 0.7002 | 34664 | 0.0396 | 0.0959 |
+ | 0.7503 | 37140 | 0.0371 | 0.0819 |
+ | 0.8003 | 39616 | 0.0341 | 0.0845 |
+ | 0.8503 | 42092 | 0.0344 | 0.0790 |
+ | 0.9003 | 44568 | 0.0288 | 0.0863 |
+ | 0.9503 | 47044 | 0.03 | 0.0767 |
+
+ * The bold row denotes the saved checkpoint.
+
+ ### Framework Versions
+ - Python: 3.11.9
+ - Sentence Transformers: 4.1.0
+ - Transformers: 4.52.3
+ - PyTorch: 2.7.0+cu126
+ - Accelerate: 1.7.0
+ - Datasets: 3.6.0
+ - Tokenizers: 0.21.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+ author = "Reimers, Nils and Gurevych, Iryna",
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+ month = "11",
+ year = "2019",
+ publisher = "Association for Computational Linguistics",
+ url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+ year={2017},
+ eprint={1705.00652},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
checkpoint-49503/1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 1024,
+ "pooling_mode_cls_token": false,
+ "pooling_mode_mean_tokens": true,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
checkpoint-49503/README.md ADDED
@@ -0,0 +1,530 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:594028
+ - loss:MultipleNegativesRankingLoss
+ base_model: intfloat/multilingual-e5-large-instruct
+ widget:
+ - source_sentence: '''আমি'' শব্দটি কোন লিঙ্গ?
+
+ A. উভয় লিঙ্গ
+
+ B. ক্লীব লিঙ্গ
+
+ C. পুংলিঙ্গ
+
+ D. স্ত্রী লিঙ্গ'
+ sentences:
+ - F.P. Dobroslavin, tibbin müxtəlif sahələri üzrə tanınmış bir alimdir, ancaq daha
+ çox baş vermiş tədqiqatlara görə seçilir. Onun əməyinin sanitar-gigiyenik sahəyə
+ təsiri əhəmiyyətlidir.
+ - 'বাংলা ভাষায় শব্দগুলোর লিঙ্গ সাধারণত তিনটি মূল শ্রেণিতে ভাগ হয়: পুংলিঙ্গ (পুরুষ),
+ স্ত্রী লিঙ্গ (মহিলা), এবং ক্লীব লিঙ্গ (যার কোনো লিঙ্গ নেই)।'
+ - Waves are disturbances that transfer energy from one place to another without
+ transferring matter. Think of a ripple on a pond – the water molecules don't travel
+ across the pond with the ripple; they mostly move up and down as the energy passes
+ through them.
+ - source_sentence: '企业产品组合中所拥有的产品线数目是
+
+ A. 产品组合的宽度
+
+ B. 产品组合的相关性
+
+ C. 产品组合的深度
+
+ D. 产品组合的长度'
+ sentences:
+ - 产品组合的宽度(Width)是指企业拥有的产品线数目。
+ - This fluid is produced by the walls of the vagina and the Bartholin's glands.
+ - "### Assumption of Risk Defined \nAssumption of risk is a legal doctrine used\
+ \ in tort law that can limit or bar recovery in negligence claims. This doctrine\
+ \ suggests that if a person voluntarily engages in a risky activity, knowing the\
+ \ risks involved, they cannot hold another party responsible for resulting injuries.\
+ \ Common scenarios where this applies include contact sports and recreational\
+ \ activities, where participants understand the inherent hazards. \n\n### Elements\
+ \ of Assumption of Risk \nTo successfully argue assumption of risk, certain elements\
+ \ must be established: \n1. **Knowledge of the Risk**: The individual must have\
+ \ actual or constructive knowledge of the risk involved. \n2. **Voluntary Exposure**:\
+ \ The individual must voluntarily choose to expose themselves to that risk. \n\
+ 3. **Informed Consent**: The individual must have consented to take that risk\
+ \ despite being aware of it. \n\n### Contributory Negligence \nContributory\
+ \ negligence is a legal concept that exists in some jurisdictions where a plaintiff's\
+ \ own negligence contributes to their injury. Under this doctrine, if the plaintiff\
+ \ is found to have played any part in their injury, they may be barred from recovering\
+ \ damages, or the damage award could be reduced. It emphasizes the responsibility\
+ \ of the injured party to exercise reasonable care for their own safety. \n\n\
+ ### Interaction of Assumption of Risk and Contributory Negligence \nIn many jurisdictions,\
+ \ both assumption of risk and contributory negligence can coexist as defenses.\
+ \ However, some legal systems assert that if a plaintiff is found contributorily\
+ \ negligent, they cannot also claim assumed risk for the same incident. This overlap\
+ \ can complicate cases since the determination of the plaintiff's awareness and\
+ \ behavior prior to the accident can alter the outcome. \n\n### Legal Standard\
+ \ for Warnings and Liability \nIn negligence cases, the adequacy of warnings\
+ \ provided is crucial. Courts often assess whether the warnings were sufficient\
+ \ to inform the individual of the specific hazards present. A simple sign may\
+ \ not meet the threshold if it fails to clearly communicate the danger involved,\
+ \ especially if the harm is not immediately obvious or if the context (e.g., a\
+ \ crowded street) suggests additional risks."
+ - source_sentence: "While shopping at a grocery store, a customer tripped over a broken\
+ \ tile, fell, and suffered a concussion. A few months after the accident, the\
+ \ customer's attorney deposed a store employee. In the deposition, the employee\
+ \ testified, \"I'd been telling the store manager for years to get that broken\
+ \ tile fixed, but he wouldn't do it. \" The employee died in an automobile accident\
+ \ after being deposed. At trial, the deposition should be\nA. admitted, as a dying\
+ \ declaration. \nB. admitted, as former testimony. \nC. not admitted, because\
+ \ it is hearsay not within any exception. \nD. not admitted, because the employee\
+ \ is not available for cross-examination. "
+ sentences:
+ - In the context of human evolution, brain size is often compared to body size in
+ a measurement called the encephalization quotient (EQ). This measure assesses
+ the expected brain size for an animal of a given body size compared to actual
+ brain size. An increase in EQ among hominins is often linked to advancements in
+ cognitive abilities, such as problem-solving and social interaction.
+ - Another exception to the hearsay rule, though often with specific requirements
+ related to the declarant's belief of impending death, is the dying declaration.
+ - In assessing moral actions, it is also essential to consider societal norms. In
+ the U.S. context in 2020, moral standards often emphasize community well-being
+ and individual rights. An action like diverting emergency supplies would likely
+ be condemned in most social circles, while stepping out of rhythm during a line
+ dancewould not commonly qualify as a serious moral offense. Thus, moral wrongness
+ is often context-dependent and tied closely to consequences for individuals and
+ society.
+ - source_sentence: 'Recent research on hominid species dating from the Middle Pliocene
+ indicates there was (as of 2020):
+
+ A. multiple hominid species but with limited diversity.
+
+ B. a single species with no diversity.
+
+ C. decreased species diversity but increased numbers of hammerstones and flakes,
+ indicating stone tool manufacture.
+
+ D. a single dominant species that outcompeted all others, leading to decreased
+ diversity.
+
+ E. increased species diversity due to a prolonged ice age followed by a severe
+ drought.
+
+ F. decreased species diversity due to a prolonged ice age followed by a severe
+ drought.
+
+ G. a great amount of species diversity, or a single species that exhibited a lot
+ of diversity.
+
+ H. increased species diversity but with decreased population numbers due to harsh
+ climate conditions.
+
+ I. increased species diversity but decreased numbers of hammerstones and flakes,
+ indicating less stone tool manufacture.
+
+ J. very little species diversity during this period and very few hominids.'
+ sentences:
+ - Hammerstones and flakes are artifacts associated with early stone tool technology.
+ Hammerstones are hard rocks used to strike other stones, while flakes are the
+ sharp pieces produced from such strikes, which could be utilized for tasks like
+ cutting or scraping, indicating early cognitive and manual skills in tool-making
+ among certain species.
+ - The Doppler effect is a phenomenon that occurs when the source of a wave and the
+ observer are moving relative to each other. It results in a change in the observed
+ frequency of the wave compared to the source frequency.
+ - Counseling and therapeutic interventions can play a role in addressing student
+ behavioral issues, but they should be considered within a broader context of classroom
+ dynamics and educational strategies. Counseling might help the child develop coping
+ mechanisms, social skills, and emotional regulation strategies. However, the effectiveness
+ of counseling is often maximized when the child is supported in the classroom
+ environment as well, suggesting that changes to the teacher's approach could lead
+ to improved outcomes.
+ - source_sentence: 'Hipotalamusi NUK kontrollon sekretimin e hormoneve:
+
+ A. FSH dhe LH
+
+ B. te rritjes(GH)
+
+ C. ACTH
+
+ D. te pankreasit'
+ sentences:
+ - In the context of estate planning and inheritance law, a will serves as a legal
+ document outlining how a person's property and assets will be distributed after
+ their death. The interpretation of a will often hinges on the intent of the testator,
+ or the person who made the will, which can affect how property interests are determined.
+ - State laws that regulate matters of legitimate local concern but have an incidental
+ effect on interstate commerce are subject to a less strict balancing test. Under
+ this test, a state law will be upheld unless the burden imposed on interstate
+ commerce is clearly excessive in relation to the putative local benefits.
+ - Hipotalamusi është një pjesë e trurit që ndodhet nën talamusin. Ai luan një rol
+ kryesor në lidhjen e sistemit nervor me sistemin endokrin përmes gjëndrës së hipofizës.
+ datasets:
+ - DoDucAnh/mcqa-rag-finetune
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ ---
+
+ # SentenceTransformer based on intfloat/multilingual-e5-large-instruct
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) on the [mcqa-rag-finetune](https://huggingface.co/datasets/DoDucAnh/mcqa-rag-finetune) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision 84344a23ee1820ac951bc365f1e91d094a911763 -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+ - [mcqa-rag-finetune](https://huggingface.co/datasets/DoDucAnh/mcqa-rag-finetune)
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+ (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
+ (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (2): Normalize()
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+ 'Hipotalamusi NUK kontrollon sekretimin e hormoneve:\nA. FSH dhe LH\nB. te rritjes(GH)\nC. ACTH\nD. te pankreasit',
+ 'Hipotalamusi është një pjesë e trurit që ndodhet nën talamusin. Ai luan një rol kryesor në lidhjen e sistemit nervor me sistemin endokrin përmes gjëndrës së hipofizës.',
+ 'State laws that regulate matters of legitimate local concern but have an incidental effect on interstate commerce are subject to a less strict balancing test. Under this test, a state law will be upheld unless the burden imposed on interstate commerce is clearly excessive in relation to the putative local benefits.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### mcqa-rag-finetune
+
+ * Dataset: [mcqa-rag-finetune](https://huggingface.co/datasets/DoDucAnh/mcqa-rag-finetune) at [d1f5446](https://huggingface.co/datasets/DoDucAnh/mcqa-rag-finetune/tree/d1f5446a80c070fb8e1abfffef8a9dace426026b)
+ * Size: 594,028 training samples
+ * Columns: <code>anchor</code> and <code>positive</code>
+ * Approximate statistics based on the first 1000 samples:
+ | | anchor | positive |
+ |:--------|:-------|:---------|
+ | type | string | string |
+ | details | <ul><li>min: 22 tokens</li><li>mean: 105.96 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 70.95 tokens</li><li>max: 478 tokens</li></ul> |
+ * Samples:
+ | anchor | positive |
+ |:-------|:---------|
+ | <code>Find all c in Z_3 such that Z_3[x]/(x^2 + c) is a field.<br>A. 0<br>B. 1<br>C. 2<br>D. 3</code> | <code>The notation Z_3 refers to the finite field with three elements, often denoted as {0, 1, 2}. This field operates under modular arithmetic, specifically modulo 3. Elements in Z_3 can be added and multiplied according to the rules of modulo 3, where any number can wrap around upon reaching 3.</code> |
+ | <code>Find all c in Z_3 such that Z_3[x]/(x^2 + c) is a field.<br>A. 0<br>B. 1<br>C. 2<br>D. 3</code> | <code>A field is a set equipped with two operations, addition and multiplication, satisfying certain properties: associativity, commutativity, distributivity, the existence of additive and multiplicative identities, and the existence of additive inverses and multiplicative inverses (for all elements except the zero element). In order for Z_3[x]/(f(x)) to be a field, the polynomial f(x) must be irreducible over Z_3.</code> |
+ | <code>Find all c in Z_3 such that Z_3[x]/(x^2 + c) is a field.<br>A. 0<br>B. 1<br>C. 2<br>D. 3</code> | <code>The expression Z_3[x] indicates the set of all polynomials with coefficients in Z_3. A polynomial is said to be irreducible over Z_3 if it cannot be factored into the product of two non-constant polynomials with coefficients in Z_3. In the case of quadratic polynomials like x^2 + c, irreducibility depends on whether it has any roots in the field Z_3.</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+ ```json
+ {
+ "scale": 20.0,
+ "similarity_fct": "cos_sim"
+ }
+ ```
+
+ ### Evaluation Dataset
+
+ #### mcqa-rag-finetune
+
+ * Dataset: [mcqa-rag-finetune](https://huggingface.co/datasets/DoDucAnh/mcqa-rag-finetune) at [d1f5446](https://huggingface.co/datasets/DoDucAnh/mcqa-rag-finetune/tree/d1f5446a80c070fb8e1abfffef8a9dace426026b)
+ * Size: 1,000 evaluation samples
+ * Columns: <code>anchor</code> and <code>positive</code>
+ * Approximate statistics based on the first 1000 samples:
+ | | anchor | positive |
+ |:--------|:-------|:---------|
+ | type | string | string |
+ | details | <ul><li>min: 22 tokens</li><li>mean: 98.74 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 59.88 tokens</li><li>max: 501 tokens</li></ul> |
+ * Samples:
+ | anchor | positive |
+ |:-------|:---------|
+ | <code>ക്രൂരകോഷ്ഠം ഉള്ള ഒരാളിൽ കോപിച്ചിരിക്കുന്ന ദോഷം താഴെപ്പറയുന്നവയിൽ ഏതാണ്?<br>A. കഫം<br>B. പിത്തം<br>C. വാതം<br>D. രക്തം</code> | <code>ഓരോ ദോഷത്തിനും അതിന്റേതായ സ്വഭാവങ്ങളും ശരീരത്തിൽ അത് ഉണ്ടാക്കുന്ന ഫലങ്ങളും ഉണ്ട്.</code> |
+ | <code>Melyik tényező nem befolyásolja a fagylalt keresleti függvényét?<br>A. A fagylalt árának változása.<br>B. Mindegyik tényező befolyásolja.<br>C. A jégkrém árának változása.<br>D. A fagylalttölcsér árának változása.</code> | <code>A keresleti függvény negatív meredekségű, ami azt jelenti, hogy az ár növekedésével a keresett mennyiség csökken (csökkenő kereslet törvénye).</code> |
+ | <code>In contrast to _______, _______ aim to reward favourable behaviour by companies. The success of such campaigns have been heightened through the use of ___________, which allow campaigns to facilitate the company in achieving _________ .<br>A. Boycotts, Buyalls, Blockchain technology, Increased Sales<br>B. Buycotts, Boycotts, Digital technology, Decreased Sales<br>C. Boycotts, Buycotts, Digital technology, Decreased Sales<br>D. Buycotts, Boycotts, Blockchain technology, Charitable donations<br>E. Boycotts, Buyalls, Blockchain technology, Charitable donations<br>F. Boycotts, Buycotts, Digital technology, Increased Sales<br>G. Buycotts, Boycotts, Digital technology, Increased Sales<br>H. Boycotts, Buycotts, Physical technology, Increased Sales<br>I. Buycotts, Buyalls, Blockchain technology, Charitable donations<br>J. Boycotts, Buycotts, Blockchain technology, Decreased Sales</code> | <code>**Consumer Activism**: This term refers to the actions taken by consumers to promote social, political, or environmental causes. These actions can include boycotting certain companies or buycotting others, influencing market dynamics based on ethical considerations. The effectiveness of consumer activism can vary but has gained prominence in recent years with increased visibility through social media.</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+ ```json
+ {
+ "scale": 20.0,
+ "similarity_fct": "cos_sim"
+ }
+ ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 12
+ - `per_device_eval_batch_size`: 12
+ - `learning_rate`: 3e-05
+ - `num_train_epochs`: 1
+ - `warmup_steps`: 5000
+ - `fp16`: True
+ - `load_best_model_at_end`: True
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 12
+ - `per_device_eval_batch_size`: 12
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 3e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 5000
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | Training Loss | Validation Loss |
+ |:------:|:-----:|:-------------:|:---------------:|
+ | 0.0500 | 2476 | 0.1209 | 0.0347 |
+ | 0.1000 | 4952 | 0.0737 | 0.0459 |
+ | 0.1501 | 7428 | 0.087 | 0.0732 |
+ | 0.2001 | 9904 | 0.0825 | 0.1209 |
+ | 0.2501 | 12380 | 0.0783 | 0.0934 |
+ | 0.3001 | 14856 | 0.071 | 0.0793 |
+ | 0.3501 | 17332 | 0.0661 | 0.0855 |
+ | 0.4001 | 19808 | 0.0652 | 0.0964 |
+ | 0.4502 | 22284 | 0.063 | 0.0892 |
+ | 0.5002 | 24760 | 0.056 | 0.0923 |
+ | 0.5502 | 27236 | 0.0509 | 0.1016 |
+ | 0.6002 | 29712 | 0.045 | 0.0918 |
+ | 0.6502 | 32188 | 0.0472 | 0.0896 |
+ | 0.7002 | 34664 | 0.0396 | 0.0959 |
+ | 0.7503 | 37140 | 0.0371 | 0.0819 |
+ | 0.8003 | 39616 | 0.0341 | 0.0845 |
+ | 0.8503 | 42092 | 0.0344 | 0.0790 |
+ | 0.9003 | 44568 | 0.0288 | 0.0863 |
+ | 0.9503 | 47044 | 0.03 | 0.0767 |
+
+
+ ### Framework Versions
+ - Python: 3.11.9
+ - Sentence Transformers: 4.1.0
+ - Transformers: 4.52.3
+ - PyTorch: 2.7.0+cu126
+ - Accelerate: 1.7.0
+ - Datasets: 3.6.0
+ - Tokenizers: 0.21.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+ author = "Reimers, Nils and Gurevych, Iryna",
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+ month = "11",
+ year = "2019",
+ publisher = "Association for Computational Linguistics",
+ url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+ year={2017},
+ eprint={1705.00652},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
checkpoint-49503/config.json ADDED
@@ -0,0 +1,27 @@
+ {
+ "architectures": [
+ "XLMRobertaModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "bos_token_id": 0,
+ "classifier_dropout": null,
+ "eos_token_id": 2,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1024,
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "layer_norm_eps": 1e-05,
+ "max_position_embeddings": 514,
+ "model_type": "xlm-roberta",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "output_past": true,
+ "pad_token_id": 1,
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.52.3",
+ "type_vocab_size": 1,
+ "use_cache": true,
+ "vocab_size": 250002
+ }
checkpoint-49503/config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "4.1.0",
+ "transformers": "4.52.3",
+ "pytorch": "2.7.0+cu126"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": "cosine"
+ }
checkpoint-49503/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2e58742f5af6bd1b55773f348ed6c62bf1348e7465473e5642354d8e94be20e8
+ size 2239607176
checkpoint-49503/modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
checkpoint-49503/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13756a43f82471ed146dcd21ec83e3345e1b8e9719d20e7471e4b12b3cd9f09e
+ size 14645
checkpoint-49503/sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": false
+ }
checkpoint-49503/special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "cls_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
checkpoint-49503/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:883b037111086fd4dfebbbc9b7cee11e1517b5e0c0514879478661440f137085
+ size 17082987
checkpoint-49503/tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "3": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "250001": {
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [],
+ "bos_token": "<s>",
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "<s>",
+ "eos_token": "</s>",
+ "extra_special_tokens": {},
+ "mask_token": "<mask>",
+ "model_max_length": 512,
+ "pad_token": "<pad>",
+ "sep_token": "</s>",
+ "tokenizer_class": "XLMRobertaTokenizer",
+ "unk_token": "<unk>"
+ }
checkpoint-49503/trainer_state.json ADDED
@@ -0,0 +1,319 @@
+ {
+ "best_global_step": 2476,
+ "best_metric": 0.03474666550755501,
+ "best_model_checkpoint": "./multilingual-e5-large-instruct-tuned/checkpoint-2476",
+ "epoch": 1.0,
+ "eval_steps": 2476,
+ "global_step": 49503,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.050017170676524655,
+ "grad_norm": 2.024930715560913,
+ "learning_rate": 1.4826e-05,
+ "loss": 0.1209,
+ "step": 2476
+ },
+ {
+ "epoch": 0.050017170676524655,
+ "eval_loss": 0.03474666550755501,
+ "eval_runtime": 4.009,
+ "eval_samples_per_second": 249.438,
+ "eval_steps_per_second": 20.953,
+ "step": 2476
+ },
+ {
+ "epoch": 0.10003434135304931,
+ "grad_norm": 20.096084594726562,
+ "learning_rate": 2.9681999999999998e-05,
+ "loss": 0.0737,
+ "step": 4952
+ },
+ {
+ "epoch": 0.10003434135304931,
+ "eval_loss": 0.04586857557296753,
+ "eval_runtime": 3.9771,
+ "eval_samples_per_second": 251.442,
+ "eval_steps_per_second": 21.121,
+ "step": 4952
+ },
+ {
+ "epoch": 0.15005151202957395,
+ "grad_norm": 28.073986053466797,
+ "learning_rate": 2.836797519268364e-05,
+ "loss": 0.087,
+ "step": 7428
+ },
+ {
+ "epoch": 0.15005151202957395,
+ "eval_loss": 0.07319504022598267,
+ "eval_runtime": 4.0741,
+ "eval_samples_per_second": 245.454,
+ "eval_steps_per_second": 20.618,
+ "step": 7428
+ },
+ {
+ "epoch": 0.20006868270609862,
+ "grad_norm": 17.107769012451172,
+ "learning_rate": 2.6700222456913017e-05,
+ "loss": 0.0825,
+ "step": 9904
+ },
+ {
+ "epoch": 0.20006868270609862,
+ "eval_loss": 0.12089800834655762,
+ "eval_runtime": 3.9723,
+ "eval_samples_per_second": 251.744,
+ "eval_steps_per_second": 21.146,
+ "step": 9904
+ },
+ {
+ "epoch": 0.2500858533826233,
+ "grad_norm": 3.858909845352173,
+ "learning_rate": 2.503112149742714e-05,
+ "loss": 0.0783,
+ "step": 12380
+ },
+ {
+ "epoch": 0.2500858533826233,
+ "eval_loss": 0.0933559387922287,
+ "eval_runtime": 4.0558,
+ "eval_samples_per_second": 246.558,
+ "eval_steps_per_second": 20.711,
+ "step": 12380
+ },
+ {
+ "epoch": 0.3001030240591479,
+ "grad_norm": 14.209930419921875,
+ "learning_rate": 2.3362694649798892e-05,
+ "loss": 0.071,
+ "step": 14856
+ },
+ {
+ "epoch": 0.3001030240591479,
+ "eval_loss": 0.07928071916103363,
+ "eval_runtime": 4.0248,
+ "eval_samples_per_second": 248.461,
+ "eval_steps_per_second": 20.871,
+ "step": 14856
+ },
+ {
+ "epoch": 0.3501201947356726,
+ "grad_norm": 10.779126167297363,
+ "learning_rate": 2.1693593690313013e-05,
+ "loss": 0.0661,
+ "step": 17332
+ },
+ {
+ "epoch": 0.3501201947356726,
+ "eval_loss": 0.08546418696641922,
+ "eval_runtime": 4.043,
+ "eval_samples_per_second": 247.343,
+ "eval_steps_per_second": 20.777,
+ "step": 17332
+ },
+ {
+ "epoch": 0.40013736541219724,
+ "grad_norm": 1.4697766304016113,
+ "learning_rate": 2.002584095454239e-05,
+ "loss": 0.0652,
+ "step": 19808
+ },
+ {
+ "epoch": 0.40013736541219724,
+ "eval_loss": 0.09644335508346558,
+ "eval_runtime": 4.0609,
+ "eval_samples_per_second": 246.249,
+ "eval_steps_per_second": 20.685,
+ "step": 19808
+ },
+ {
+ "epoch": 0.4501545360887219,
+ "grad_norm": 2.2271530628204346,
+ "learning_rate": 1.8356739995056515e-05,
+ "loss": 0.063,
+ "step": 22284
+ },
+ {
+ "epoch": 0.4501545360887219,
+ "eval_loss": 0.08915343880653381,
+ "eval_runtime": 4.0402,
+ "eval_samples_per_second": 247.51,
+ "eval_steps_per_second": 20.791,
+ "step": 22284
+ },
+ {
+ "epoch": 0.5001717067652466,
+ "grad_norm": 1.1830430030822754,
+ "learning_rate": 1.668966137114352e-05,
+ "loss": 0.056,
+ "step": 24760
+ },
+ {
+ "epoch": 0.5001717067652466,
+ "eval_loss": 0.09230654686689377,
+ "eval_runtime": 3.9998,
+ "eval_samples_per_second": 250.015,
+ "eval_steps_per_second": 21.001,
+ "step": 24760
+ },
+ {
+ "epoch": 0.5501888774417713,
+ "grad_norm": 1.6715331077575684,
+ "learning_rate": 1.5020560411657642e-05,
+ "loss": 0.0509,
+ "step": 27236
+ },
+ {
+ "epoch": 0.5501888774417713,
+ "eval_loss": 0.10158851742744446,
+ "eval_runtime": 4.0427,
+ "eval_samples_per_second": 247.359,
+ "eval_steps_per_second": 20.778,
+ "step": 27236
+ },
+ {
+ "epoch": 0.6002060481182958,
+ "grad_norm": 12.792489051818848,
+ "learning_rate": 1.3351459452171763e-05,
+ "loss": 0.045,
+ "step": 29712
+ },
+ {
+ "epoch": 0.6002060481182958,
+ "eval_loss": 0.09177897125482559,
+ "eval_runtime": 4.0062,
+ "eval_samples_per_second": 249.616,
+ "eval_steps_per_second": 20.968,
+ "step": 29712
+ },
+ {
+ "epoch": 0.6502232187948205,
+ "grad_norm": 2.540958881378174,
+ "learning_rate": 1.1683032604543514e-05,
+ "loss": 0.0472,
+ "step": 32188
+ },
+ {
+ "epoch": 0.6502232187948205,
+ "eval_loss": 0.08962409943342209,
+ "eval_runtime": 3.9987,
+ "eval_samples_per_second": 250.082,
+ "eval_steps_per_second": 21.007,
+ "step": 32188
+ },
+ {
+ "epoch": 0.7002403894713451,
+ "grad_norm": 1.4448179006576538,
+ "learning_rate": 1.0015279868772891e-05,
+ "loss": 0.0396,
+ "step": 34664
+ },
+ {
+ "epoch": 0.7002403894713451,
+ "eval_loss": 0.09593009948730469,
+ "eval_runtime": 4.0564,
+ "eval_samples_per_second": 246.525,
+ "eval_steps_per_second": 20.708,
+ "step": 34664
+ },
+ {
+ "epoch": 0.7502575601478698,
+ "grad_norm": 0.11427940428256989,
+ "learning_rate": 8.346178909287014e-06,
+ "loss": 0.0371,
+ "step": 37140
+ },
+ {
+ "epoch": 0.7502575601478698,
+ "eval_loss": 0.08187365531921387,
+ "eval_runtime": 4.0715,
+ "eval_samples_per_second": 245.607,
+ "eval_steps_per_second": 20.631,
+ "step": 37140
+ },
+ {
+ "epoch": 0.8002747308243945,
+ "grad_norm": 0.4649958312511444,
+ "learning_rate": 6.677752061658764e-06,
+ "loss": 0.0341,
+ "step": 39616
+ },
+ {
+ "epoch": 0.8002747308243945,
+ "eval_loss": 0.08447403460741043,
+ "eval_runtime": 3.9953,
+ "eval_samples_per_second": 250.296,
+ "eval_steps_per_second": 21.025,
+ "step": 39616
+ },
+ {
+ "epoch": 0.8502919015009192,
+ "grad_norm": 0.39804375171661377,
+ "learning_rate": 5.009999325888142e-06,
+ "loss": 0.0344,
+ "step": 42092
+ },
+ {
+ "epoch": 0.8502919015009192,
+ "eval_loss": 0.07903166115283966,
+ "eval_runtime": 3.9825,
+ "eval_samples_per_second": 251.1,
+ "eval_steps_per_second": 21.092,
+ "step": 42092
+ },
+ {
+ "epoch": 0.9003090721774438,
+ "grad_norm": 0.6405961513519287,
+ "learning_rate": 3.341572478259893e-06,
+ "loss": 0.0288,
+ "step": 44568
+ },
+ {
+ "epoch": 0.9003090721774438,
+ "eval_loss": 0.08632776886224747,
+ "eval_runtime": 4.0332,
+ "eval_samples_per_second": 247.94,
+ "eval_steps_per_second": 20.827,
+ "step": 44568
+ },
+ {
+ "epoch": 0.9503262428539685,
+ "grad_norm": 0.09592943638563156,
+ "learning_rate": 1.6724715187740154e-06,
+ "loss": 0.03,
+ "step": 47044
+ },
+ {
+ "epoch": 0.9503262428539685,
+ "eval_loss": 0.07667936384677887,
+ "eval_runtime": 4.0126,
+ "eval_samples_per_second": 249.217,
+ "eval_steps_per_second": 20.934,
+ "step": 47044
+ }
+ ],
+ "logging_steps": 2476,
+ "max_steps": 49503,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 1,
+ "save_steps": 2476,
+ "stateful_callbacks": {
+ "TrainerControl": {
+ "args": {
+ "should_epoch_stop": false,
+ "should_evaluate": false,
+ "should_log": false,
+ "should_save": true,
+ "should_training_stop": true
+ },
+ "attributes": {}
+ }
+ },
+ "total_flos": 0.0,
+ "train_batch_size": 12,
+ "trial_name": null,
+ "trial_params": null
+ }
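The trainer state above records an eval entry every 2476 steps, and `best_global_step`/`best_metric` point at the checkpoint with the lowest `eval_loss`. A minimal sketch of that selection logic, using a subset of the eval records copied from the `log_history` above (this is an illustration, not the Trainer's actual code):

```python
# Illustrative sketch: selecting the best checkpoint by minimum eval_loss,
# as reflected in best_global_step / best_metric above. The entries below
# are copied from the eval records in log_history (subset shown).
log_history = [
    {"step": 2476,  "eval_loss": 0.03474666550755501},
    {"step": 4952,  "eval_loss": 0.04586857557296753},
    {"step": 7428,  "eval_loss": 0.07319504022598267},
    {"step": 47044, "eval_loss": 0.07667936384677887},
]

# Lower eval_loss is better, so the best checkpoint is the argmin.
best = min(log_history, key=lambda entry: entry["eval_loss"])
print(best["step"], best["eval_loss"])  # 2476 0.03474666550755501
```

Note that the first eval (step 2476) remains the best for the whole run; later eval losses never drop below 0.034, which is why `best_model_checkpoint` is `checkpoint-2476` even though training ran to step 49503.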
checkpoint-49503/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc67c94763bb3d5ae4066c0d59f3305d8199398cf64391c50b68b21528394025
+ size 5905
config.json ADDED
@@ -0,0 +1,27 @@
+ {
+ "architectures": [
+ "XLMRobertaModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "bos_token_id": 0,
+ "classifier_dropout": null,
+ "eos_token_id": 2,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1024,
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "layer_norm_eps": 1e-05,
+ "max_position_embeddings": 514,
+ "model_type": "xlm-roberta",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "output_past": true,
+ "pad_token_id": 1,
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.52.3",
+ "type_vocab_size": 1,
+ "use_cache": true,
+ "vocab_size": 250002
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "4.1.0",
+ "transformers": "4.52.3",
+ "pytorch": "2.7.0+cu126"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e2f752905c4511972f69bfbb0dc95468f97d108dab1f69093614fc6e79fe1a1
+ size 2239607176
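The LFS pointer records only the hash and byte size of the weights file. As a rough consistency check, dividing the size by 4 bytes per float32 weight (the `torch_dtype` in config.json) approximates the parameter count; the small safetensors JSON header is ignored here:

```python
# Rough sanity check on the recorded weight-file size: float32 weights are
# 4 bytes each, so size / 4 approximates the parameter count. The small
# safetensors header at the start of the file is ignored in this estimate.
size_bytes = 2239607176          # "size" from the LFS pointer above
approx_params = size_bytes // 4
print(f"~{approx_params / 1e6:.0f}M parameters")
```

The result is roughly 560M parameters, which is consistent with an XLM-RoBERTa-large backbone.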
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
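modules.json declares a three-stage pipeline: the Transformer backbone, mean pooling over token embeddings (per `pooling_mode_mean_tokens` in 1_Pooling/config.json), and L2 normalization, which makes cosine similarity (the configured `similarity_fn_name`) a plain dot product. A minimal numpy sketch of the last two stages, using dummy token embeddings and an attention mask rather than real model output:

```python
import numpy as np

# Illustrative sketch of stages 1 and 2 of the modules.json pipeline:
# mask-aware mean pooling followed by L2 normalization.
# The token embeddings and attention mask below are dummy inputs.
rng = np.random.default_rng(0)
token_embeddings = rng.random((2, 5, 1024))            # (batch, seq, hidden)
attention_mask = np.array([[1, 1, 1, 0, 0],            # padded to seq len 5
                           [1, 1, 1, 1, 1]], dtype=np.float64)

mask = attention_mask[:, :, None]                      # (batch, seq, 1)
summed = (token_embeddings * mask).sum(axis=1)         # sum over real tokens only
counts = np.clip(mask.sum(axis=1), 1e-9, None)         # guard against empty rows
mean_pooled = summed / counts                          # (batch, hidden)

# 2_Normalize: scale each sentence embedding to unit L2 norm.
embeddings = mean_pooled / np.linalg.norm(mean_pooled, axis=1, keepdims=True)
print(np.linalg.norm(embeddings, axis=1))              # each row has norm 1.0
```

With unit-norm outputs, `embeddings @ embeddings.T` directly yields the cosine-similarity matrix.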
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 512,
+ "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+ "bos_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "cls_token": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "eos_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:883b037111086fd4dfebbbc9b7cee11e1517b5e0c0514879478661440f137085
+ size 17082987
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "<s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "1": {
+ "content": "<pad>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "2": {
+ "content": "</s>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "3": {
+ "content": "<unk>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "250001": {
+ "content": "<mask>",
+ "lstrip": true,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "additional_special_tokens": [],
+ "bos_token": "<s>",
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "<s>",
+ "eos_token": "</s>",
+ "extra_special_tokens": {},
+ "mask_token": "<mask>",
+ "model_max_length": 512,
+ "pad_token": "<pad>",
+ "sep_token": "</s>",
+ "tokenizer_class": "XLMRobertaTokenizer",
+ "unk_token": "<unk>"
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc67c94763bb3d5ae4066c0d59f3305d8199398cf64391c50b68b21528394025
+ size 5905