Lampistero

This is a sentence-transformers model finetuned from jinaai/jina-embeddings-v3 on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: jinaai/jina-embeddings-v3
  • Maximum Sequence Length: 8194 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: es
  • License: apache-2.0

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (transformer): Transformer(
    (auto_model): XLMRobertaLoRA(
      (roberta): XLMRobertaModel(
        (embeddings): XLMRobertaEmbeddings(
          (word_embeddings): ParametrizedEmbedding(
            250002, 1024, padding_idx=1
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
          (token_type_embeddings): ParametrizedEmbedding(
            1, 1024
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
        )
        (emb_drop): Dropout(p=0.1, inplace=False)
        (emb_ln): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        (encoder): XLMRobertaEncoder(
          (layers): ModuleList(
            (0-23): 24 x Block(
              (mixer): MHA(
                (rotary_emb): RotaryEmbedding()
                (Wqkv): ParametrizedLinearResidual(
                  in_features=1024, out_features=3072, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
                (inner_attn): FlashSelfAttention(
                  (drop): Dropout(p=0.1, inplace=False)
                )
                (inner_cross_attn): FlashCrossAttention(
                  (drop): Dropout(p=0.1, inplace=False)
                )
                (out_proj): ParametrizedLinear(
                  in_features=1024, out_features=1024, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
              )
              (dropout1): Dropout(p=0.1, inplace=False)
              (drop_path1): StochasticDepth(p=0.0, mode=row)
              (norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
              (mlp): Mlp(
                (fc1): ParametrizedLinear(
                  in_features=1024, out_features=4096, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
                (fc2): ParametrizedLinear(
                  in_features=4096, out_features=1024, bias=True
                  (parametrizations): ModuleDict(
                    (weight): ParametrizationList(
                      (0): LoRAParametrization()
                    )
                  )
                )
              )
              (dropout2): Dropout(p=0.1, inplace=False)
              (drop_path2): StochasticDepth(p=0.0, mode=row)
              (norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
            )
          )
        )
        (pooler): XLMRobertaPooler(
          (dense): ParametrizedLinear(
            in_features=1024, out_features=1024, bias=True
            (parametrizations): ModuleDict(
              (weight): ParametrizationList(
                (0): LoRAParametrization()
              )
            )
          )
          (activation): Tanh()
        )
      )
    )
  )
  (pooler): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (normalizer): Normalize()
)

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("csanz91/lampistero_rag_embeddings")
# Run inference
sentences = [
    '¿Qué porcentaje de aumento salarial reclamaba el Sindicato Minero en el conflicto de Utrillas que llevó a plantear la huelga del 12 de octubre de 1930?',
    'El Sindicato Minero reclamaba un aumento del 20% los sueldos en el conflicto de Utrillas.',
    'Antonio Gargallo.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
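
Because the model was trained with MatryoshkaLoss (see Training Details), its embeddings can also be truncated to a smaller dimension for faster retrieval with little quality loss. The sketch below is illustrative only: it uses made-up query and document strings, relies on the truncate_dim argument of SentenceTransformer, and passes trust_remote_code=True, which may be required because the base model ships custom modeling code.

from sentence_transformers import SentenceTransformer

# Load the model with embeddings truncated to 256 dimensions (Matryoshka).
# trust_remote_code=True may be needed since the base model uses custom code.
model = SentenceTransformer(
    "csanz91/lampistero_rag_embeddings",
    truncate_dim=256,
    trust_remote_code=True,
)

# Illustrative retrieval: one query against a small document collection.
query = "¿Qué reclamaba el Sindicato Minero en el conflicto de Utrillas?"
documents = [
    "El Sindicato Minero reclamaba un aumento del 20% de los sueldos.",
    "Antonio Gargallo.",
]

query_embedding = model.encode([query])
document_embeddings = model.encode(documents)

# Cosine similarities between the query and each document;
# the first document should score highest.
scores = model.similarity(query_embedding, document_embeddings)
print(scores.shape)
# [1, 2]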

Evaluation

Metrics

Information Retrieval (dim_1024)

Metric Value
cosine_accuracy@1 0.7803
cosine_accuracy@3 0.8884
cosine_accuracy@5 0.904
cosine_accuracy@10 0.9234
cosine_precision@1 0.7803
cosine_precision@3 0.2961
cosine_precision@5 0.1808
cosine_precision@10 0.0923
cosine_recall@1 0.7803
cosine_recall@3 0.8884
cosine_recall@5 0.904
cosine_recall@10 0.9234
cosine_ndcg@10 0.8576
cosine_mrr@10 0.8359
cosine_map@100 0.8374

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.7827
cosine_accuracy@3 0.8877
cosine_accuracy@5 0.9034
cosine_accuracy@10 0.9246
cosine_precision@1 0.7827
cosine_precision@3 0.2959
cosine_precision@5 0.1807
cosine_precision@10 0.0925
cosine_recall@1 0.7827
cosine_recall@3 0.8877
cosine_recall@5 0.9034
cosine_recall@10 0.9246
cosine_ndcg@10 0.8588
cosine_mrr@10 0.8372
cosine_map@100 0.8385

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.7797
cosine_accuracy@3 0.8859
cosine_accuracy@5 0.901
cosine_accuracy@10 0.9228
cosine_precision@1 0.7797
cosine_precision@3 0.2953
cosine_precision@5 0.1802
cosine_precision@10 0.0923
cosine_recall@1 0.7797
cosine_recall@3 0.8859
cosine_recall@5 0.901
cosine_recall@10 0.9228
cosine_ndcg@10 0.8564
cosine_mrr@10 0.8347
cosine_map@100 0.8362

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.7707
cosine_accuracy@3 0.8823
cosine_accuracy@5 0.9016
cosine_accuracy@10 0.9191
cosine_precision@1 0.7707
cosine_precision@3 0.2941
cosine_precision@5 0.1803
cosine_precision@10 0.0919
cosine_recall@1 0.7707
cosine_recall@3 0.8823
cosine_recall@5 0.9016
cosine_recall@10 0.9191
cosine_ndcg@10 0.8512
cosine_mrr@10 0.8287
cosine_map@100 0.8303

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.7604
cosine_accuracy@3 0.869
cosine_accuracy@5 0.8902
cosine_accuracy@10 0.9131
cosine_precision@1 0.7604
cosine_precision@3 0.2897
cosine_precision@5 0.178
cosine_precision@10 0.0913
cosine_recall@1 0.7604
cosine_recall@3 0.869
cosine_recall@5 0.8902
cosine_recall@10 0.9131
cosine_ndcg@10 0.8415
cosine_mrr@10 0.8181
cosine_map@100 0.82

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.7248
cosine_accuracy@3 0.8521
cosine_accuracy@5 0.8751
cosine_accuracy@10 0.8974
cosine_precision@1 0.7248
cosine_precision@3 0.284
cosine_precision@5 0.175
cosine_precision@10 0.0897
cosine_recall@1 0.7248
cosine_recall@3 0.8521
cosine_recall@5 0.8751
cosine_recall@10 0.8974
cosine_ndcg@10 0.8182
cosine_mrr@10 0.792
cosine_map@100 0.7938
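
The six tables above correspond to the Matryoshka dimensions 1024, 768, 512, 256, 128 and 64 (the dim_1024 through dim_64 columns of the Training Logs). A minimal sketch of how such metrics can be reproduced with sentence-transformers' InformationRetrievalEvaluator follows; the queries, corpus and relevance mapping are hypothetical placeholders, and truncate_dim is used to evaluate a truncated embedding size.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("csanz91/lampistero_rag_embeddings", trust_remote_code=True)

# Hypothetical evaluation data: query id -> text, doc id -> text, query id -> relevant doc ids.
queries = {"q1": "En el contexto de la minería, ¿qué implica 'despajar'?"}
corpus = {
    "d1": "'Despajar' se refiere a cribar a mano material y desechos para obtener las partes de carbón.",
    "d2": "Antonio Gargallo.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_256",
    truncate_dim=256,  # evaluate embeddings truncated to 256 dimensions
)
results = evaluator(model)
print(results)  # cosine_accuracy@k, cosine_precision@k, cosine_recall@k, ndcg@10, mrr@10, map@100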

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 14,907 training samples
  • Columns: query and answer
  • Approximate statistics based on the first 1000 samples:
    • query (string): min: 9 tokens, mean: 25.88 tokens, max: 63 tokens
    • answer (string): min: 3 tokens, mean: 34.09 tokens, max: 340 tokens
  • Samples:
    • query: En Valdeconejos, ¿cuál era la sociedad de agricultores en 1952?
      answer: En Valdeconejos, la sociedad de agricultores en 1952 era el Pósito de Agricultores.
    • query: ¿Qué nombres de capataces se registran en el pueblo de Escucha en el año 1952?
      answer: En Escucha, en 1952, los capataces registrados son Peralta (Manuel) y Rodriguez (Gonzalo).
    • query: En el contexto de la minería, ¿qué implica 'despajar'?
      answer: 'Despajar' se refiere a cribar a mano material y desechos para obtener las partes de carbón que hay en ellos.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            1024,
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
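
As a reference, the loss configuration above corresponds roughly to the following construction. This is a sketch, not the exact training script: the data file path is a placeholder, and the dataset is assumed to expose the query and answer columns described above.

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Base model (requires trust_remote_code for its custom modeling code).
model = SentenceTransformer("jinaai/jina-embeddings-v3", trust_remote_code=True)

# Hypothetical path; the JSON file provides "query" and "answer" columns.
train_dataset = load_dataset("json", data_files="train.json", split="train")

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss with the dimensions listed above.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[1024, 768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)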
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 32
  • learning_rate: 2e-05
  • num_train_epochs: 12
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
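
Continuing the dataset/loss sketch from the Training Dataset section, the non-default values above translate roughly into the following SentenceTransformerTrainingArguments. This is a sketch: output_dir, save_strategy, and the evaluation split are assumptions added to make it self-contained, not values reported by the card.

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

# Assumed held-out split (eval_strategy="epoch" needs eval data or an evaluator).
splits = train_dataset.train_test_split(test_size=0.1, seed=42)
train_split, eval_split = splits["train"], splits["test"]

args = SentenceTransformerTrainingArguments(
    output_dir="lampistero_rag_embeddings",    # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",                     # assumed; must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=64,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=32,
    learning_rate=2e-5,
    num_train_epochs=12,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    tf32=True,                                 # requires an NVIDIA Ampere or newer GPU
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,            # base model from the Training Dataset sketch
    args=args,
    train_dataset=train_split,
    eval_dataset=eval_split,
    loss=loss,              # MatryoshkaLoss from the Training Dataset sketch
)
trainer.train()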

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 32
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 12
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_1024_cosine_ndcg@10 dim_768_cosine_ndcg@10 dim_512_cosine_ndcg@10 dim_256_cosine_ndcg@10 dim_128_cosine_ndcg@10 dim_64_cosine_ndcg@10
1.0 8 - 0.7663 0.7676 0.7656 0.7626 0.7393 0.6969
1.2747 10 127.0406 - - - - - -
2.0 16 - 0.8244 0.8240 0.8226 0.8172 0.8060 0.7775
2.5494 20 38.8995 - - - - - -
3.0 24 - 0.8425 0.8426 0.8444 0.8373 0.8252 0.7996
3.8240 30 20.1528 - - - - - -
4.0 32 - 0.8526 0.8520 0.8498 0.8456 0.8289 0.8037
5.0 40 14.0513 0.8550 0.8543 0.8517 0.8490 0.8368 0.8139
6.0 48 - 0.8572 0.8565 0.8557 0.8520 0.8404 0.8170
6.2747 50 13.364 - - - - - -
7.0 56 - 0.8579 0.8576 0.8553 0.8514 0.8422 0.8180
7.5494 60 12.7986 - - - - - -
8.0 64 - 0.8573 0.8580 0.8560 0.8523 0.8414 0.8178
8.8240 70 12.0091 - - - - - -
9.0 72 - 0.8578 0.8586 0.8562 0.8519 0.8423 0.8184
10.0 80 10.9468 0.8583 0.8589 0.8565 0.8530 0.8413 0.8191
10.5494 84 - 0.8576 0.8588 0.8564 0.8512 0.8415 0.8182

Framework Versions

  • Python: 3.12.10
  • Sentence Transformers: 4.1.0
  • Transformers: 4.51.3
  • PyTorch: 2.7.0+cu126
  • Accelerate: 1.7.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}