GATE: General Arabic Text Embedding for Enhanced Semantic Textual Similarity with Matryoshka Representation Learning and Hybrid Loss Training
Abstract
GATE models using Matryoshka Representation Learning and a hybrid loss approach achieve state-of-the-art performance on Arabic Semantic Textual Similarity benchmarks.
Semantic textual similarity (STS) is a critical task in natural language processing (NLP), enabling applications in retrieval, clustering, and understanding semantic relationships between texts. However, research in this area for Arabic remains limited by the lack of high-quality datasets and pre-trained models, a scarcity that has restricted accurate evaluation of, and progress on, semantic similarity for Arabic text. This paper introduces General Arabic Text Embedding (GATE) models that achieve state-of-the-art performance on the Semantic Textual Similarity task within the MTEB benchmark. GATE leverages Matryoshka Representation Learning and a hybrid loss training approach with Arabic triplet datasets for Natural Language Inference, which are essential for enhancing model performance in tasks that demand fine-grained semantic understanding. GATE outperforms larger models, including OpenAI's embeddings, with a 20-25% improvement on STS benchmarks, effectively capturing the unique semantic nuances of Arabic.
Community
Arabic Matryoshka Embedding Models Collection
Welcome to the official Arabic Matryoshka Embedding Models collection!
This collection showcases a series of cutting-edge Arabic text embedding models built using:
- Matryoshka Representation Learning
- Hybrid Loss Multi-task Training
- Arabic Triplet and NLI datasets
These models are designed to capture fine-grained semantic similarity in Arabic while being efficient, scalable, and resource-friendly.
What's Inside?
- State-of-the-art performance on Arabic STS benchmarks (MTEB: STS17, STS22, STS22-v2)
- Multi-dimensional embeddings (768, 512, 256, 128, 64); see the truncation sketch after this list
- Models outperforming much larger LLMs such as OpenAI's embedding models and Mistral-7B on Arabic tasks
- Trained with contrastive triplet learning, softmax classification, and cosine similarity loss
- Adaptations of AraBERT, MARBERT, LaBSE, and E5 within the Matryoshka framework
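To make the multi-dimensional point concrete, here is a minimal sketch of how Matryoshka embeddings shrink gracefully, using the `truncate_dim` option in `sentence-transformers` (available since v2.7). It assumes the collection's Arabic-Triplet-Matryoshka-V2 model is published under the Hugging Face ID shown; swap in the actual repo path from the collection if it differs.

```python
# Minimal sketch: truncating Matryoshka embeddings to smaller dimensions.
# The model ID below is an assumption; replace it with the actual repo
# path from the collection if it differs.
from sentence_transformers import SentenceTransformer

model_id = "Omartificial-Intelligence-Space/Arabic-Triplet-Matryoshka-V2"

sentences = [
    "الطقس جميل اليوم",    # "The weather is nice today"
    "الجو رائع هذا اليوم",  # "The weather is great today"
]

# Full 768-dimensional embeddings.
full_model = SentenceTransformer(model_id)
full_emb = full_model.encode(sentences)
print(full_emb.shape)  # (2, 768)

# Loading with truncate_dim=64 keeps only the first 64 dimensions of
# each embedding, which Matryoshka training makes the most informative.
small_model = SentenceTransformer(model_id, truncate_dim=64)
small_emb = small_model.encode(sentences)
print(small_emb.shape)  # (2, 64)

# Similarity largely survives truncation because the leading dimensions
# carry the coarse semantics under Matryoshka Representation Learning.
print(small_model.similarity(small_emb, small_emb))
```

The same model therefore serves both high-accuracy and low-latency settings: index with 768 dimensions where quality matters, or 64 where storage and speed dominate.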
Highlights from Our Research (GATE Paper)
Paper Title:
GATE: General Arabic Text Embedding for Enhanced Semantic Textual Similarity with Matryoshka Representation Learning and Hybrid Loss Training
Read on arXiv:
https://arxiv.org/abs/2505.24581
Key Achievements:
- Up to +25% improvement over OpenAI embeddings on Arabic STS
- Models with only 135M parameters beating billion-parameter LLMs
- Maintains high performance even at reduced dimensions (64d!)
- First large-scale benchmark of Arabic triplet-based contrastive embeddings
Top Models (So Far)

| Model Name | Base | Type | STS Avg Score |
|---|---|---|---|
| Arabic-Triplet-Matryoshka-V2 | AraBERT | Triplet + MRL | 69.99 |
| GATE-AraBERT-V1 | AraBERT | Hybrid Loss + MRL | 68.54 |
| Arabic-LabSE-Matryoshka | LaBSE | Triplet + MRL | 66.76 |
| Marbert-AllNLI-Triplet-Matryoshka | MARBERT | Dialect-Aware | 67.19 |
| E5-AllNLI-Triplet-Matryoshka | multilingual-E5 | Cross-lingual | 65.45 |
Collection Link
Explore all models: Arabic Matryoshka Embedding Models Collection
Use Cases
- Arabic Semantic Search (see the search sketch after this list)
- Duplicate Question Detection
- Clustering & Retrieval
- Arabic Text Understanding Tasks
- Scalable NLP for low-resource environments
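As a quick illustration of the semantic search use case, here is a minimal sketch built on `sentence_transformers.util.semantic_search`. The model ID and the tiny corpus are illustrative assumptions, not part of the paper's evaluation setup.

```python
# Minimal Arabic semantic search sketch using util.semantic_search.
# Model ID and corpus are placeholders for illustration only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Omartificial-Intelligence-Space/Arabic-Triplet-Matryoshka-V2")

corpus = [
    "القاهرة هي عاصمة مصر",         # "Cairo is the capital of Egypt"
    "كرة القدم رياضة شعبية",         # "Football is a popular sport"
    "الذكاء الاصطناعي يغير العالم",  # "AI is changing the world"
]
query = "ما هي عاصمة مصر؟"          # "What is the capital of Egypt?"

corpus_emb = model.encode(corpus, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)

# Returns the top-k corpus entries ranked by cosine similarity.
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))
```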
Training Details
- Hardware: NVIDIA A100 GPUs
- Framework: `sentence-transformers` with a custom `SentenceTransformerTrainer`
- Datasets: Arabic Triplet-NLI, STS pairs, classification datasets
- Training Losses: `MultipleNegativesRankingLoss`, `CoSentLoss`, `SoftmaxLoss`, `MatryoshkaLoss` (a combined-loss sketch follows this list)
- Dimensions: trained with `[768, 512, 256, 128, 64]`
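Here is a rough sketch of one branch of this setup: wrapping the contrastive triplet loss in `MatryoshkaLoss` so the same objective is applied at every embedding dimension. The base checkpoint, toy triplets, and default hyperparameters are illustrative assumptions, not the paper's exact configuration; the full hybrid recipe additionally applies `CoSentLoss` and `SoftmaxLoss` to the STS and classification datasets.

```python
# Minimal sketch: Matryoshka triplet training with sentence-transformers v3.
# Toy data and hyperparameters are placeholders, not the GATE recipe.
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

# Assumed starting point: an AraBERT base checkpoint (mean pooling is
# added automatically when loading a plain transformer).
model = SentenceTransformer("aubmindlab/bert-base-arabertv02")

# Toy (anchor, positive, negative) triplets in NLI style.
train_dataset = Dataset.from_dict({
    "anchor":   ["الرجل يقرأ كتابا", "القطة تنام على الأريكة"],
    "positive": ["شخص يطالع كتابا", "قطة مستلقية على الكنبة"],
    "negative": ["الرجل يسبح في البحر", "كلب يركض في الحديقة"],
})

# Contrastive triplet loss, wrapped so it is computed at every
# Matryoshka dimension from 768 down to 64.
base_loss = losses.MultipleNegativesRankingLoss(model)
train_loss = losses.MatryoshkaLoss(model, base_loss,
                                   matryoshka_dims=[768, 512, 256, 128, 64])

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=train_loss,
)
trainer.train()
model.save_pretrained("arabic-matryoshka-sketch")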
Contributions & Feedback
We welcome feedback, benchmarks, and contributions!
If you've fine-tuned one of these models or tested them on new Arabic datasets, let us know!
Contact: onajar@psu.edu.sa
Let's make Arabic NLP faster, smarter, and more accessible, one embedding at a time.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Optimized Text Embedding Models and Benchmarks for Amharic Passage Retrieval (2025)
- Hakim: Farsi Text Embedding Model (2025)
- Anveshana: A New Benchmark Dataset for Cross-Lingual Information Retrieval On English Queries and Sanskrit Documents (2025)
- CrosGrpsABS: Cross-Attention over Syntactic and Semantic Graphs for Aspect-Based Sentiment Analysis in a Low-Resource Language (2025)
- Mutarjim: Advancing Bidirectional Arabic-English Translation with a Small Language Model (2025)
- MedEIR: A Specialized Medical Embedding Model for Enhanced Information Retrieval (2025)
- Advancing Arabic Reverse Dictionary Systems: A Transformer-Based Approach with Dataset Construction Guidelines (2025)