Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ A self‑attentive embedding model for premise / proof selection in Rocq‑based
 
 ### Model Description
 
-RocqStar is a
+RocqStar is a 125 M‑parameter Transformer encoder (768‑dim hidden size) with multi‑head self‑attention and a learned self‑attentive pooling head. It is trained with an InfoNCE contrastive objective so that the cosine similarity of two statement embeddings approximates the similarity of their proofs, measured by a hybrid Levenshtein + Jaccard distance. The model takes tokenised Rocq (Gallina) theorem statements as input and outputs a 768‑d embedding.
 
 * **Model type:** Transformer encoder with self‑attentive pooling
 * **Language(s):** Rocq / Coq (Gallina) syntax (tokens)
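The hybrid Levenshtein + Jaccard proof distance in the new description is only named, not specified, in the diff above. The Python sketch below is one plausible reading, not the released training code: a length-normalised token-level Levenshtein distance blended with a Jaccard distance over the sets of proof tokens, where the mixing weight `alpha` and its 0.5 default are assumptions.

```python
# Hypothetical sketch of the hybrid Levenshtein + Jaccard proof distance.
# Token-level granularity and the mixing weight `alpha` are assumptions;
# the actual RocqStar training code may combine the two terms differently.

def levenshtein(a: list[str], b: list[str]) -> int:
    """Edit distance between two token sequences (one-row DP)."""
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        curr = [i]
        for j, tb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ta != tb)))   # substitution
        prev = curr
    return prev[-1]

def hybrid_proof_distance(p1: list[str], p2: list[str], alpha: float = 0.5) -> float:
    """alpha * normalised Levenshtein + (1 - alpha) * Jaccard, in [0, 1]."""
    lev = levenshtein(p1, p2) / max(len(p1), len(p2), 1)
    s1, s2 = set(p1), set(p2)
    jac = 1.0 - len(s1 & s2) / max(len(s1 | s2), 1)
    return alpha * lev + (1.0 - alpha) * jac

print(hybrid_proof_distance("intros x ; apply H .".split(),
                            "intros x y ; exact H .".split()))
```

Blending the two terms covers both failure modes of either metric alone: Levenshtein is order-sensitive but penalises benign reorderings, while Jaccard ignores order but misses repeated-tactic structure.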
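Likewise, here is a minimal PyTorch sketch of the two architectural ingredients the description names: a learned self-attentive pooling head over encoder token states, and an InfoNCE loss on cosine similarities. The card only says that cosine similarity is trained to track proof similarity; the in-batch-negatives formulation, the temperature, and all shapes other than the stated 768 hidden size are illustrative assumptions.

```python
# Minimal sketch (not the released code): self-attentive pooling over the
# encoder's token states and an InfoNCE loss over in-batch cosine similarities.
# Temperature and the scorer architecture are assumed, not from the model card.
import torch
import torch.nn.functional as F

class SelfAttentivePooling(torch.nn.Module):
    """Collapse (batch, seq, hidden) token states into one (batch, hidden)
    vector using a learned attention distribution over positions."""
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.scorer = torch.nn.Sequential(
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, states: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        scores = self.scorer(states).squeeze(-1)            # (batch, seq)
        scores = scores.masked_fill(~mask, float("-inf"))   # ignore padding
        weights = scores.softmax(dim=-1).unsqueeze(-1)      # (batch, seq, 1)
        return (weights * states).sum(dim=1)                # (batch, hidden)

def info_nce(anchors: torch.Tensor, positives: torch.Tensor,
             temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE over a batch: each anchor's positive is the same-index row;
    every other row in the batch serves as a negative."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.T / temperature          # pairwise cosine similarities
    targets = torch.arange(a.size(0))       # diagonal = matching pairs
    return F.cross_entropy(logits, targets)

pool = SelfAttentivePooling()
states = torch.randn(4, 16, 768)                     # fake encoder outputs
mask = torch.ones(4, 16, dtype=torch.bool)           # no padding here
emb = pool(states, mask)
loss = info_nce(emb, emb + 0.01 * torch.randn_like(emb))
print(loss.item())
```

In training, the positive for a statement would presumably be a statement whose proof is close under the hybrid distance above, so that minimising the InfoNCE loss pulls proof-similar statements together in embedding space.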