SSA-COMET
SSA-COMET-STL is a robust, automatic metric for machine translation evaluation (MTE), built on SSA-MTE. It takes a triplet (source sentence, translation, reference translation) and returns a score that reflects the quality of the translation. The model is based on an improved African-enhanced encoder, afro-xlmr-large-76L.
License: Apache-2.0
Using this model requires unbabel-comet to be installed:
pip install --upgrade pip # ensures that pip is current
pip install unbabel-comet
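To confirm the package installed correctly before downloading the checkpoint, the imports used in the Python example below can be smoke-tested from the command line (an optional sanity check, not part of the official instructions):
python -c "from comet import download_model, load_from_checkpoint; print('unbabel-comet OK')"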
Then you can use it through the comet CLI:
comet-score -s {source-inputs}.txt -t {translation-outputs}.txt -r {references}.txt --model McGill-NLP/ssa-comet-stl
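The three input files are plain text with one segment per line, and line i of each file must describe the same segment. A minimal sketch, where the file names are placeholders for illustration:
# srcs.txt:          one source sentence per line
# translations.txt:  one system output per line, aligned with srcs.txt
# refs.txt:          one reference translation per line, aligned with the other two
comet-score -s srcs.txt -t translations.txt -r refs.txt --model McGill-NLP/ssa-comet-stl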
Or using Python:
from comet import download_model, load_from_checkpoint

# Download the checkpoint from the Hugging Face Hub and load it
model_path = download_model("McGill-NLP/ssa-comet-stl")
model = load_from_checkpoint(model_path)

# Each example is a dict with the source ("src"), the machine translation ("mt"),
# and the reference translation ("ref")
data = [
    {
        "src": "Nadal sàkọọ́lẹ̀ ìforígbárí o ní àmì méje sóódo pẹ̀lú ilẹ̀ Canada.",
        "mt": "Nadal's head to head record against the Canadian is 7–2.",
        "ref": "Nadal scored seven unanswered points against Canada."
    },
    {
        "src": "Laipe yi o padanu si Raoniki ni ere Sisi Brisbeni.",
        "mt": "He recently lost against Raonic in the Brisbane Open.",
        "ref": "He recently lost to Raoniki in the game Sisi Brisbeni."
    }
]

# Score all examples
model_output = model.predict(data, batch_size=8, gpus=1)
print(model_output)
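In recent unbabel-comet releases, predict returns a Prediction object exposing per-segment scores and a corpus-level system score; the attribute names in this sketch follow that API and may differ in other versions:
# Inspect per-segment scores alongside the corpus-level system score
for example, score in zip(data, model_output.scores):
    print(example["mt"], "->", round(score, 3))
print("System score:", model_output.system_score)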
Our model is intended to be used for MT evaluation.
Given a triplet (source sentence, translation, reference translation), it outputs a single score between 0 and 1, where 1 represents a perfect translation.
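Because scores are bounded in [0, 1], they can be used directly to rank or filter hypotheses. A minimal sketch reusing the data and model_output objects from the Python example above, with a purely illustrative threshold (not an official recommendation):
# Hypothetical quality gate: keep translations scoring at or above a chosen threshold
THRESHOLD = 0.5  # illustrative value only

def filter_by_quality(examples, scores, threshold=THRESHOLD):
    return [(ex, s) for ex, s in zip(examples, scores) if s >= threshold]

kept = filter_by_quality(data, model_output.scores)
print(f"Kept {len(kept)} of {len(data)} translations")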
There are 76 languages available: