XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).
This model was fine-tuned for intent classification on the [Jarbas/ovos_intents_train](https://huggingface.co/datasets/Jarbas/ovos_intents_train) dataset.
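A fine-tuned sequence-classification checkpoint like this one can typically be loaded with the `transformers` `text-classification` pipeline. The sketch below is illustrative only: `model_id` is a placeholder for this model's Hub id (not stated in this card), and the label names come from whatever `id2label` mapping the checkpoint was saved with.

```python
from transformers import pipeline

# Placeholder: substitute this model's actual Hub id.
model_id = "your-namespace/your-intent-model"

# Builds a text-classification pipeline; downloads the model on first use.
classifier = pipeline("text-classification", model=model_id)

# Returns a list of dicts like [{"label": ..., "score": ...}],
# where the label set depends on the checkpoint's config.
result = classifier("set an alarm for seven tomorrow morning")
print(result)
```

Because the base model is multilingual XLM-RoBERTa, the same pipeline call can be tried on non-English utterances, though accuracy will depend on the languages covered by the fine-tuning data.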
## Intended uses & limitations