sentence_transformers_support (#2)
- Add support for Sentence Transformers (1b2cdd0b8c2cddc96166877511643807f276e8ac)
- .gitattributes +2 -0
- README.md +65 -0
- config_sentence_transformers.json +14 -0
- document_0_MLMTransformer/config.json +23 -0
- document_0_MLMTransformer/model.safetensors +3 -0
- document_0_MLMTransformer/sentence_bert_config.json +4 -0
- document_0_MLMTransformer/special_tokens_map.json +37 -0
- document_0_MLMTransformer/tokenizer.json +0 -0
- document_0_MLMTransformer/tokenizer_config.json +58 -0
- document_0_MLMTransformer/vocab.txt +0 -0
- document_1_SpladePooling/config.json +5 -0
- modules.json +8 -0
- query_1_SpladePooling/config.json +5 -0
- router_config.json +22 -0
- sentence_bert_config.json +4 -0
.gitattributes
CHANGED
@@ -25,3 +25,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zstandard filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+query_0_MLMTransformer/model.safetensors filter=lfs diff=lfs merge=lfs -text
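The two added rules route the new safetensors weights through Git LFS. As a rough illustration (a hypothetical helper, not part of this repo), fnmatch-style globbing can approximate which paths the patterns above would catch:

```python
# Hypothetical helper (not part of this repo): approximate which paths the
# .gitattributes LFS rules above would match, using fnmatch-style globbing.
from fnmatch import fnmatch

lfs_patterns = [
    "*.zip",
    "*.zstandard",
    "*tfevents*",
    "*.safetensors",                                # added in this commit
    "query_0_MLMTransformer/model.safetensors",     # added in this commit
]

def is_lfs_tracked(path: str) -> bool:
    # Note: fnmatch's "*" also crosses "/" here, which is close enough for
    # these simple patterns but not a full gitattributes matcher.
    return any(fnmatch(path, pattern) for pattern in lfs_patterns)

print(is_lfs_tracked("document_0_MLMTransformer/model.safetensors"))  # True
print(is_lfs_tracked("README.md"))                                    # False
```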
README.md
CHANGED
@@ -9,6 +9,12 @@ tags:
 - passage-retrieval
 - knowledge-distillation
 - document encoder
+- sentence-transformers
+- sparse-encoder
+- sparse
+- asymmetric
+pipeline_tag: feature-extraction
+library_name: sentence-transformers
 datasets:
 - ms_marco
 ---
@@ -20,6 +26,65 @@ Efficient SPLADE model for passage retrieval. This architecture uses two distinc
 | --- | --- | --- | --- | --- |
 | `naver/efficient-splade-V-large` | 38.8 | 98.0 | 29.0 | 45.3
 | `naver/efficient-splade-VI-BT-large` | 38.0 | 97.8 | 31.1 | 0.7
+
+## Model Details
+
+This is an [Asymmetric SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
+
+### Model Description
+- **Model Type:** SPLADE Sparse Encoder
+- **Maximum Sequence Length:** 512 tokens (256 for evaluation reproduction)
+- **Output Dimensionality:** 30522 dimensions
+- **Similarity Function:** Dot Product
+
+### Full Model Architecture
+
+```
+SparseEncoder(
+  (0): Router(
+    (query_0_MLMTransformer): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False}) with MLMTransformer model: DistilBertForMaskedLM
+    (query_1_SpladePooling): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
+    (document_0_MLMTransformer): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False}) with MLMTransformer model: DistilBertForMaskedLM
+    (document_1_SpladePooling): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
+  )
+)
+```
+
+## Usage
+
+### Direct Usage (Sentence Transformers)
+
+First install the Sentence Transformers library:
+
+```bash
+pip install -U sentence-transformers
+```
+
+Then you can load this model and run inference. Note that with Sentence Transformers you load the entire model, i.e. both the document and query parts.
+```python
+from sentence_transformers import SparseEncoder
+
+# Download from the 🤗 Hub
+model = SparseEncoder("naver/efficient-splade-V-large-query")
+# Run inference
+queries = ["what causes aging fast"]
+documents = [
+    "UV-A light, specifically, is what mainly causes tanning, skin aging, and cataracts, UV-B causes sunburn, skin aging and skin cancer, and UV-C is the strongest, and therefore most effective at killing microorganisms. Again – single words and multiple bullets.",
+    "Answers from Ronald Petersen, M.D. Yes, Alzheimer's disease usually worsens slowly. But its speed of progression varies, depending on a person's genetic makeup, environmental factors, age at diagnosis and other medical conditions. Still, anyone diagnosed with Alzheimer's whose symptoms seem to be progressing quickly — or who experiences a sudden decline — should see his or her doctor.",
+    "Bell's palsy and Extreme tiredness and Extreme fatigue (2 causes) Bell's palsy and Extreme tiredness and Hepatitis (2 causes) Bell's palsy and Extreme tiredness and Liver pain (2 causes) Bell's palsy and Extreme tiredness and Lymph node swelling in children (2 causes)",
+]
+query_embeddings = model.encode_query(queries)
+document_embeddings = model.encode_document(documents)
+print(query_embeddings.shape, document_embeddings.shape)
+# [1, 30522] [3, 30522]
+
+# Get the similarity scores for the embeddings
+similarities = model.similarity(query_embeddings, document_embeddings)
+print(similarities)
+# tensor([[9.0047, 8.1454, 2.5808]])
+
+```
+
 ## Citation
 If you use our checkpoint, please cite our work (need to update):
 ```
config_sentence_transformers.json
ADDED
@@ -0,0 +1,14 @@
+{
+  "model_type": "SparseEncoder",
+  "__version__": {
+    "sentence_transformers": "5.0.0",
+    "transformers": "4.50.3",
+    "pytorch": "2.6.0+cu124"
+  },
+  "prompts": {
+    "query": "",
+    "document": ""
+  },
+  "default_prompt_name": null,
+  "similarity_fn_name": "dot"
+}
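`similarity_fn_name` is set to `dot`, matching the Dot Product similarity in the README. A toy sketch (illustrative token ids and weights, not the library implementation) of what a dot product over sparse `{token_id: weight}` vectors computes:

```python
# Toy sparse vectors: a handful of active token ids out of the 30522 vocab.
query = {2: 1.5, 5: 0.5}
document = {2: 2.0, 5: 1.5, 8: 2.0}

# "similarity_fn_name": "dot" -> plain dot product; only token ids active
# in BOTH vectors contribute to the score.
score = sum(weight * document.get(token, 0.0) for token, weight in query.items())
print(score)  # 1.5*2.0 + 0.5*1.5 = 3.75
```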
document_0_MLMTransformer/config.json
ADDED
@@ -0,0 +1,23 @@
+{
+  "activation": "gelu",
+  "architectures": [
+    "DistilBertForMaskedLM"
+  ],
+  "attention_dropout": 0.1,
+  "dim": 768,
+  "dropout": 0.1,
+  "hidden_dim": 3072,
+  "initializer_range": 0.02,
+  "max_position_embeddings": 512,
+  "model_type": "distilbert",
+  "n_heads": 12,
+  "n_layers": 6,
+  "pad_token_id": 0,
+  "qa_dropout": 0.1,
+  "seq_classif_dropout": 0.2,
+  "sinusoidal_pos_embds": false,
+  "tie_weights_": true,
+  "torch_dtype": "float32",
+  "transformers_version": "4.50.3",
+  "vocab_size": 30522
+}
document_0_MLMTransformer/model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f55c529cbcde8a8665b9628293f4e1a3f4ded37b51dfe221fb5b360f52fde48
+size 267954768
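As a back-of-envelope sanity check (assuming 4 bytes per float32 parameter, per the `torch_dtype` in the config above, and ignoring the small safetensors header), the LFS pointer's size corresponds to roughly 67M parameters, which is consistent with a DistilBERT MLM model:

```python
# Relate the safetensors blob size above to an approximate parameter count.
size_bytes = 267954768        # "size" from the LFS pointer above
n_params = size_bytes // 4    # "torch_dtype": "float32" -> 4 bytes/param
print(n_params)  # 66988692, i.e. ~67M parameters for DistilBERT + MLM head
```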
document_0_MLMTransformer/sentence_bert_config.json
ADDED
@@ -0,0 +1,4 @@
+{
+  "max_seq_length": 512,
+  "do_lower_case": false
+}
document_0_MLMTransformer/special_tokens_map.json
ADDED
@@ -0,0 +1,37 @@
+{
+  "cls_token": {
+    "content": "[CLS]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "mask_token": {
+    "content": "[MASK]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "pad_token": {
+    "content": "[PAD]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "sep_token": {
+    "content": "[SEP]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "unk_token": {
+    "content": "[UNK]",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  }
+}
document_0_MLMTransformer/tokenizer.json
ADDED
The diff for this file is too large to render. See raw diff.
document_0_MLMTransformer/tokenizer_config.json
ADDED
@@ -0,0 +1,58 @@
+{
+  "added_tokens_decoder": {
+    "0": {
+      "content": "[PAD]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "100": {
+      "content": "[UNK]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "101": {
+      "content": "[CLS]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "102": {
+      "content": "[SEP]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    },
+    "103": {
+      "content": "[MASK]",
+      "lstrip": false,
+      "normalized": false,
+      "rstrip": false,
+      "single_word": false,
+      "special": true
+    }
+  },
+  "clean_up_tokenization_spaces": false,
+  "cls_token": "[CLS]",
+  "do_basic_tokenize": true,
+  "do_lower_case": true,
+  "extra_special_tokens": {},
+  "mask_token": "[MASK]",
+  "model_max_length": 512,
+  "never_split": null,
+  "pad_token": "[PAD]",
+  "sep_token": "[SEP]",
+  "strip_accents": null,
+  "tokenize_chinese_chars": true,
+  "tokenizer_class": "DistilBertTokenizer",
+  "unk_token": "[UNK]"
+}
document_0_MLMTransformer/vocab.txt
ADDED
The diff for this file is too large to render. See raw diff.
document_1_SpladePooling/config.json
ADDED
@@ -0,0 +1,5 @@
+{
+  "pooling_strategy": "max",
+  "activation_function": "relu",
+  "word_embedding_dimension": null
+}
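The pooling config above (`max` strategy with `relu` activation) corresponds to the standard SPLADE aggregation: per vocabulary term, the max over tokens of log(1 + relu(logit)). A small pure-Python sketch with toy logits (not the library code):

```python
import math

# Toy MLM logits: 3 tokens x 5 vocabulary terms (the real vocab is 30522).
logits = [
    [ 1.2, -0.3, 0.0,  2.0, -1.5],
    [-0.4,  0.8, 0.1, -0.2,  0.5],
    [ 0.3, -1.0, 0.0,  1.1,  2.2],
]

def splade_pool(token_logits):
    # "max" pooling with "relu" activation: per vocab term v, take
    # max over tokens of log(1 + relu(logit[token][v])).
    vocab = len(token_logits[0])
    return [
        max(math.log1p(max(row[v], 0.0)) for row in token_logits)
        for v in range(vocab)
    ]

sparse_vec = splade_pool(logits)
print(len(sparse_vec))                  # 5
print(all(w >= 0 for w in sparse_vec))  # True (log1p(relu(x)) is never negative)
```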
modules.json
ADDED
@@ -0,0 +1,8 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "",
+    "type": "sentence_transformers.models.Router"
+  }
+]
query_1_SpladePooling/config.json
ADDED
@@ -0,0 +1,5 @@
+{
+  "pooling_strategy": "max",
+  "activation_function": "relu",
+  "word_embedding_dimension": null
+}
router_config.json
ADDED
@@ -0,0 +1,22 @@
+{
+  "types": {
+    "": "sentence_transformers.sparse_encoder.models.MLMTransformer.MLMTransformer",
+    "query_1_SpladePooling": "sentence_transformers.sparse_encoder.models.SpladePooling.SpladePooling",
+    "document_0_MLMTransformer": "sentence_transformers.sparse_encoder.models.MLMTransformer.MLMTransformer",
+    "document_1_SpladePooling": "sentence_transformers.sparse_encoder.models.SpladePooling.SpladePooling"
+  },
+  "structure": {
+    "query": [
+      "",
+      "query_1_SpladePooling"
+    ],
+    "document": [
+      "document_0_MLMTransformer",
+      "document_1_SpladePooling"
+    ]
+  },
+  "parameters": {
+    "default_route": "query",
+    "allow_empty_key": true
+  }
+}
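A simplified sketch (a hypothetical helper, not library code) of what the `structure` in router_config.json expresses: each route runs its own module stack, and `default_route` is used when no route is given (the empty key `""` maps to the shared query MLMTransformer via `allow_empty_key`):

```python
# Route -> ordered module names, mirroring "structure" in router_config.json.
structure = {
    "query": ["", "query_1_SpladePooling"],  # "" is the empty-key MLMTransformer
    "document": ["document_0_MLMTransformer", "document_1_SpladePooling"],
}
default_route = "query"  # mirrors "parameters.default_route"

def modules_for(route=None):
    # Return the module names an input would pass through, in order.
    route = route if route is not None else default_route
    return structure[route]

print(modules_for())            # ['', 'query_1_SpladePooling']
print(modules_for("document"))  # ['document_0_MLMTransformer', 'document_1_SpladePooling']
```

This is why `encode_query` and `encode_document` in the README produce different sparse vectors from the same checkpoint: each call selects a different branch of the Router.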
sentence_bert_config.json
ADDED
@@ -0,0 +1,4 @@
+{
+  "max_seq_length": 512,
+  "do_lower_case": false
+}