SentenceTransformer based on intfloat/multilingual-e5-large-instruct

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-large-instruct on the mcqa-rag-finetune dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-large-instruct
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: ~560M parameters (F32 safetensors)
  • Training Dataset: mcqa-rag-finetune

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
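
The Pooling and Normalize modules are simple enough to reproduce by hand. Below is a minimal sketch (an illustration, not the library's internal code) of what they do: token embeddings are averaged over the unmasked positions, and each sentence vector is then rescaled to unit length, which is why dot product and cosine similarity coincide for this model's embeddings.

import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 1024); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)    # sum embeddings of real tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)         # number of real tokens per sentence
    return summed / counts                           # mean over unmasked tokens

def normalize(embeddings: torch.Tensor) -> torch.Tensor:
    # L2-normalize so that dot product equals cosine similarity
    return torch.nn.functional.normalize(embeddings, p=2, dim=1)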

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("DoDucAnh/MNLP_M2_document_encoder")
# Run inference
sentences = [
    # Albanian MCQ: "The hypothalamus does NOT control the secretion of: A. FSH and LH,
    # B. growth hormone (GH), C. ACTH, D. pancreatic hormones"
    'Hipotalamusi NUK kontrollon sekretimin e hormoneve:\nA. FSH dhe LH\nB. te rritjes(GH)\nC. ACTH\nD. te pankreasit',
    # Albanian: "The hypothalamus is a part of the brain located below the thalamus. It plays a
    # key role in linking the nervous system with the endocrine system through the pituitary gland."
    'Hipotalamusi është një pjesë e trurit që ndodhet nën talamusin. Ai luan një rol kryesor në lidhjen e sistemit nervor me sistemin endokrin përmes gjëndrës së hipofizës.',
    'State laws that regulate matters of legitimate local concern but have an incidental effect on interstate commerce are subject to a less strict balancing test. Under this test, a state law will be upheld unless the burden imposed on interstate commerce is clearly excessive in relation to the putative local benefits.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
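
Because the embeddings are unit-normalized, they can be used directly for retrieval. Here is a small sketch of query-to-document ranking; the query and documents below are placeholders for illustration, not taken from the training data.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("DoDucAnh/MNLP_M2_document_encoder")

# Placeholder query and candidate documents
query = "Which gland links the nervous system to the endocrine system?"
documents = [
    "The hypothalamus connects the nervous and endocrine systems through the pituitary gland.",
    "The demand curve slopes downward: as price rises, quantity demanded falls.",
]

query_emb = model.encode([query])
doc_embs = model.encode(documents)

# Cosine similarity between the query and each document
scores = model.similarity(query_emb, doc_embs)   # shape: (1, 2)
best = scores.argmax().item()
print(documents[best])                           # highest-scoring document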

Training Details

Training Dataset

mcqa-rag-finetune

  • Dataset: mcqa-rag-finetune at d1f5446
  • Size: 594,028 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string · min: 22 tokens · mean: 105.96 tokens · max: 512 tokens
    • positive: string · min: 12 tokens · mean: 70.95 tokens · max: 478 tokens
  • Samples (all three pairs share the same anchor):
    • anchor: Find all c in Z_3 such that Z_3[x]/(x^2 + c) is a field. A. 0 | B. 1 | C. 2 | D. 3
    • positive 1: The notation Z_3 refers to the finite field with three elements, often denoted as {0, 1, 2}. This field operates under modular arithmetic, specifically modulo 3. Elements in Z_3 can be added and multiplied according to the rules of modulo 3, where any number can wrap around upon reaching 3.
    • positive 2: A field is a set equipped with two operations, addition and multiplication, satisfying certain properties: associativity, commutativity, distributivity, the existence of additive and multiplicative identities, and the existence of additive inverses and multiplicative inverses (for all elements except the zero element). In order for Z_3[x]/(f(x)) to be a field, the polynomial f(x) must be irreducible over Z_3.
    • positive 3: The expression Z_3[x] indicates the set of all polynomials with coefficients in Z_3. A polynomial is said to be irreducible over Z_3 if it cannot be factored into the product of two non-constant polynomials with coefficients in Z_3. In the case of quadratic polynomials like x^2 + c, irreducibility depends on whether it has any roots in the field Z_3.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
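
    For reference, MultipleNegativesRankingLoss treats each anchor's paired positive as the correct match and every other positive in the batch as an in-batch negative. A rough sketch of the computation with the parameters above (scale = 20.0, cosine similarity); this is an illustration, not the library's exact implementation:

import torch
import torch.nn.functional as F

def mnr_loss(anchors: torch.Tensor, positives: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    # Pairwise cosine similarity between every anchor and every positive: (batch, batch)
    sims = F.cosine_similarity(anchors.unsqueeze(1), positives.unsqueeze(0), dim=-1)
    logits = sims * scale                    # "scale" sharpens the softmax distribution
    labels = torch.arange(len(anchors))      # anchor i is paired with positive i
    # Cross-entropy over each row: positive i is the target, the rest act as negatives
    return F.cross_entropy(logits, labels)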
    

Evaluation Dataset

mcqa-rag-finetune

  • Dataset: mcqa-rag-finetune at d1f5446
  • Size: 1,000 evaluation samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:
    • anchor: string · min: 22 tokens · mean: 98.74 tokens · max: 512 tokens
    • positive: string · min: 9 tokens · mean: 59.88 tokens · max: 501 tokens
  • Samples:
    • anchor (Malayalam): ക്രൂരകോഷ്ഠം ഉള്ള ഒരാളിൽ കോപിച്ചിരിക്കുന്ന ദോഷം താഴെപ്പറയുന്നവയിൽ ഏതാണ്? A. കഫം | B. പിത്തം | C. വാതം | D. രക്തം
      (English: "In a person with krūra koṣṭha, which of the following doshas is aggravated? A. Kapha, B. Pitta, C. Vata, D. Rakta")
      positive: ഓരോ ദോഷത്തിനും അതിന്റേതായ സ്വഭാവങ്ങളും ശരീരത്തിൽ അത് ഉണ്ടാക്കുന്ന ഫലങ്ങളും ഉണ്ട്.
      (English: "Each dosha has its own characteristics and the effects it produces in the body.")
    • anchor (Hungarian): Melyik tényező nem befolyásolja a fagylalt keresleti függvényét? A. A fagylalt árának változása. | B. Mindegyik tényező befolyásolja. | C. A jégkrém árának változása. | D. A fagylalttölcsér árának változása.
      (English: "Which factor does not affect the demand function for ice cream? A. A change in the price of ice cream. B. Every factor affects it. C. A change in the price of ice cream bars. D. A change in the price of ice cream cones.")
      positive: A keresleti függvény negatív meredekségű, ami azt jelenti, hogy az ár növekedésével a keresett mennyiség csökken (csökkenő kereslet törvénye).
      (English: "The demand function has a negative slope, meaning that as the price rises the quantity demanded falls (the law of demand).")
    • anchor (English): In contrast to _______, _______ aim to reward favourable behaviour by companies. The success of such campaigns have been heightened through the use of ___________, which allow campaigns to facilitate the company in achieving _________ .
      A. Boycotts, Buyalls, Blockchain technology, Increased Sales
      B. Buycotts, Boycotts, Digital technology, Decreased Sales
      C. Boycotts, Buycotts, Digital technology, Decreased Sales
      D. Buycotts, Boycotts, Blockchain technology, Charitable donations
      E. Boycotts, Buyalls, Blockchain technology, Charitable donations
      F. Boycotts, Buycotts, Digital technology, Increased Sales
      G. Buycotts, Boycotts, Digital technology, Increased Sales
      H. Boycotts, Buycotts, Physical technology, Increased Sales
      I. Buycotts, Buyalls, Blockchain technology, Charitable donations
      J. Boycotts, Buycotts, Blockchain technology, Decreased Sales
      positive: Consumer Activism: This term refers to the actions taken by consumers to promote social, political, or environmental causes. These actions can include boycotting certain companies or buycotting others, influencing market dynamics based on ethical considerations. The effectiveness of consumer activism can vary but has gained prominence in recent years with increased visibility through social media.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 12
  • per_device_eval_batch_size: 12
  • learning_rate: 3e-05
  • num_train_epochs: 1
  • warmup_steps: 5000
  • fp16: True
  • load_best_model_at_end: True
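
These hyperparameters map directly onto SentenceTransformerTrainingArguments. The sketch below is an assumption-laden reconstruction of the training setup: the dataset repo id ("mcqa-rag-finetune") and the split names are illustrative, since the card gives only the dataset's short name.

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")
dataset = load_dataset("mcqa-rag-finetune")  # hypothetical repo id; see the dataset card for the real one
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    eval_strategy="steps",
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    learning_rate=3e-5,
    num_train_epochs=1,
    warmup_steps=5000,
    fp16=True,
    load_best_model_at_end=True,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],  # assumed split name
    loss=loss,
)
trainer.train()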

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 12
  • per_device_eval_batch_size: 12
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 3e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 5000
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step     Training Loss    Validation Loss
0.0500   2476     0.1209           0.0347  ← saved checkpoint
0.1000   4952     0.0737           0.0459
0.1501   7428     0.0870           0.0732
0.2001   9904     0.0825           0.1209
0.2501   12380    0.0783           0.0934
0.3001   14856    0.0710           0.0793
0.3501   17332    0.0661           0.0855
0.4001   19808    0.0652           0.0964
0.4502   22284    0.0630           0.0892
0.5002   24760    0.0560           0.0923
0.5502   27236    0.0509           0.1016
0.6002   29712    0.0450           0.0918
0.6502   32188    0.0472           0.0896
0.7002   34664    0.0396           0.0959
0.7503   37140    0.0371           0.0819
0.8003   39616    0.0341           0.0845
0.8503   42092    0.0344           0.0790
0.9003   44568    0.0288           0.0863
0.9503   47044    0.0300           0.0767
  • The marked row (step 2476) denotes the saved checkpoint: with load_best_model_at_end enabled, the checkpoint with the lowest validation loss (0.0347) is restored at the end of training.

Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 4.1.0
  • Transformers: 4.52.3
  • PyTorch: 2.7.0+cu126
  • Accelerate: 1.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1
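
To approximate this environment, the listed library versions can be pinned at install time; the CUDA-specific PyTorch build is typically installed from the PyTorch wheel index:

pip install "sentence-transformers==4.1.0" "transformers==4.52.3" "accelerate==1.7.0" "datasets==3.6.0" "tokenizers==0.21.1"
pip install "torch==2.7.0" --index-url https://download.pytorch.org/whl/cu126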

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}