{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:05:43.354378Z"
},
"title": "NUIG at TIAD: Combining Unsupervised NLP and Graph Metrics for Translation Inference",
"authors": [
{
"first": "John",
"middle": [
"P"
],
"last": "Mccrae",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Ireland",
"location": {
"settlement": "Galway"
}
},
"email": ""
},
{
"first": "Mihael",
"middle": [],
"last": "Arcan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Ireland",
"location": {
"settlement": "Galway"
}
},
"email": "mihael.arcan@insight-centre.org"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present the NUIG system at the TIAD shard task. This system includes graph-based metrics calculated using novel algorithms, with an unsupervised document embedding tool called ONETA and an unsupervised multi-way neural machine translation method. The results are an improvement over our previous system and produce the highest precision among all systems in the task as well as very competitive F-Measure results. Incorporating features from other systems should be easy in the framework we describe in this paper, suggesting this could very easily be extended to an even stronger result.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present the NUIG system at the TIAD shard task. This system includes graph-based metrics calculated using novel algorithms, with an unsupervised document embedding tool called ONETA and an unsupervised multi-way neural machine translation method. The results are an improvement over our previous system and produce the highest precision among all systems in the task as well as very competitive F-Measure results. Incorporating features from other systems should be easy in the framework we describe in this paper, suggesting this could very easily be extended to an even stronger result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Translation inference is the task of inferring new translations between a pair of languages, based on existing translations to one or more pivot language. In the particular context of the TIAD task (Gracia et al., 2019) , there is a graph of translations shown in Figure 1 available from the Apertium project (Forcada et al., 2011) and the goal is to use this graph of translations to infer missing links (shown with dotted lines), in particular between English, French and Portuguese. This year, we combined two systems that had participated in a previous task (Arcan et al., 2019; McCrae, 2019) and show that this combination can improve the results. This combination consists of an unsupervised cross-lingual document embeddings system called Orthonormal Explicit Topic Analysis (McCrae et al., 2013, ONETA) and the results of unsupervised machine translation using the multi-way neural machine translation (NMT) approach (Ha et al., 2016) . We also further extended this system by developing a new methodology of analysing the graph to find candidates and we show that most of the candidates (74.5%) that are likely to be correct are at a graph distance of 2, that is they are discoverable using only a single pivot translation, while quite a large amount of translations cannot be inferred using the graph (23.1%). This shows that the use of more sophisticated graph metrics is unlikely to gain more improvement in this task and that attention should instead be directed to unsupervised NLP techniques. We also analyzed the provided reference data and found that the data seems to diverge quite distinctly from the training data, suggesting that there may be a need to look for more robust methods of evaluation for future editions of this task.",
"cite_spans": [
{
"start": 198,
"end": 219,
"text": "(Gracia et al., 2019)",
"ref_id": null
},
{
"start": 309,
"end": 331,
"text": "(Forcada et al., 2011)",
"ref_id": "BIBREF1"
},
{
"start": 562,
"end": 582,
"text": "(Arcan et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 583,
"end": 596,
"text": "McCrae, 2019)",
"ref_id": "BIBREF5"
},
{
"start": 782,
"end": 810,
"text": "(McCrae et al., 2013, ONETA)",
"ref_id": null
},
{
"start": 925,
"end": 942,
"text": "(Ha et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 264,
"end": 272,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "One of the principal challenges of working with the TIAD data is that there are a very large number of entities and it is difficult to predict which ones are likely to be good candidates for translation inference. Following, the intuition that translations should be connected in the graph, we wish to find for a pair of languages l 1 ,l 2 all the entities that are connected. As the graph of all TIAD connections contains 1,053,203 nodes connected with 531,053 edges, calculating all the possible connections between the edges of the graph can be quite challenging when approached naively. We developed the following approach to constructing the set of distances between all nodes in two languages, based on a set of translations T li,lj by language and a lexicon of words W i for language l i as shown in Algorithm 1. The first step of this algorithm is to initialize two distance lists dist 1 and dist 2 that measure the distance between terms in l 1 or l 2 respectively and all terms in languages other than l 1 or l 2 . The next step is then to iterate through all translations between languages other than l 1 and l 2 and connect the distance metrics dist 1 and dist 2 . In this way, the first value of dist 1 contains only terms in l 1 and so they can easily be implemented as an array of associative arrays and hence kept quite sparse. Finally, we iterate through the words of l 1 and l 2 and calculate the distance between each word. This relies on the keys function which returns the list of terms in a third language, which have a noninfinite distance in dist 1 and dist 2 . In practice, this is implemented by taking the smaller of the associative arrays associated with dist 1 or dist 2 and filtering the results according to the presence in the larger associative array. As such, while the worst-case performance of the algorithm is ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Extraction",
"sec_num": "2.1."
},
{
"text": ": dist for l \u2208 L, l = l 1 , l = l 2 do for (s, t) \u2208 T l1,l do dist 1 (s, t) \u2190 1 end for (s, t) \u2208 T l2,l do dist 2 (s, t) \u2190 1 end end for l i \u2208 L, l j \u2208 L, l i = l 1 , l i = l 2 , l i = l 1 , l j = l 2 do for (s, t) \u2208 T li,lj do for u \u2208 W 1 do dist 1 (u, t) \u2190 min(dist 1 (u, t), dist 1 (u, s) + 1) end for u \u2208 W 2 do dist 2 (u, t) \u2190 min(dist 2 (u, t), dist 2 (u, s) + 1) end end end for s \u2208 W 1 do for t \u2208 W 2 do dist(s, t) \u2190 min u\u2208keys(s,t) dist 1 (s, u) + dist 2 (u, t) end end still O(|W 1 | \u00d7 |W 2 | \u00d7 |W 1,2 |) where W 1,2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Extraction",
"sec_num": "2.1."
},
{
"text": "is the words in all languages other than l 1 and l 2 , in fact the calculation of keys is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Extraction",
"sec_num": "2.1."
},
{
"text": "O[min(|X 1 (s)|, |X 2 (t)|) \u00d7 log max(|X 1 (s)|, |X 2 (t)|)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Extraction",
"sec_num": "2.1."
},
{
"text": "Where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Extraction",
"sec_num": "2.1."
},
{
"text": "X i (s) = {u : dist i (s, u) < \u221e}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Extraction",
"sec_num": "2.1."
},
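{
"text": "For illustration, the following is a minimal Python sketch of the distance computation described above (hypothetical code, not the system's actual implementation; it assumes translations is a dictionary mapping a language pair to a list of (source, target) pairs, and it uses Python dictionaries as the sparse associative arrays):\n\nfrom collections import defaultdict\n\ndef graph_distances(translations, l1, l2, languages):\n    INF = float('inf')\n    dist1 = defaultdict(dict)  # l1 term -> {pivot term: distance}\n    dist2 = defaultdict(dict)  # l2 term -> {pivot term: distance}\n    # Step 1: direct translations out of l1 and l2 are at distance 1.\n    for l in languages:\n        if l in (l1, l2):\n            continue\n        for s, t in translations.get((l1, l), []):\n            dist1[s][t] = 1\n        for s, t in translations.get((l2, l), []):\n            dist2[s][t] = 1\n    # Step 2: translations between two pivot languages extend a known distance by 1.\n    for (li, lj), pairs in translations.items():\n        if li in (l1, l2) or lj in (l1, l2):\n            continue\n        for s, t in pairs:\n            for d in (dist1, dist2):\n                for u in d:\n                    if s in d[u]:\n                        d[u][t] = min(d[u].get(t, INF), d[u][s] + 1)\n    # Step 3: combine via keys(s, t), iterating the smaller associative\n    # array and filtering by membership in the larger one.\n    dist = {}\n    for s in dist1:\n        for t in dist2:\n            small, large = sorted((dist1[s], dist2[t]), key=len)\n            best = min((small[u] + large[u] for u in small if u in large), default=INF)\n            if best < INF:\n                dist[(s, t)] = best\n    return dist",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Extraction",
"sec_num": "2.1."
},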
{
"text": "In order to analyze the results of this analysis, we considered the provided Apertium training data holding out the translations for one language pair, namely English-Spanish, and the results are presented in Table 1 . We see that there are 46,004 terms in the English data and 28,615 terms in the Spanish data meaning there are potentially 1.3 billion translations that can be inferred. Our algorithm found that only 496,427 of these term pairs are connected in the Apertium graph, which overlaps quite well with the correct translations in the Apertium data. In fact, 23.1% of translations from the gold standard are not connected whereas 76.9% are connected at graph distance 2, that is inferred by a single pivot translation. For this reason, we used this method as the basis for generating candidate translations, in particular, we only considered translations that were at graph distance 2 or 3, and in addition, we extracted the size of the keys set for each translation as it was a useful and readily available statistic.",
"cite_spans": [],
"ref_spans": [
{
"start": 209,
"end": 216,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Graph Extraction",
"sec_num": "2.1."
},
{
"text": "The OrthoNormal Explicit Topic Analysis (ONETA) methodology used in the system was not much changed from how it was applied previously (McCrae, 2019), only this time instead of just using a single language for finding potential pivots, the results of the graph distance method were used to select all translations at distance 2 or 3. For the purpose of completeness, we will briefly recap the methodology here. ONETA aims to find a vector to represent a term satisfying",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ONETA",
"sec_num": "2.2."
},
{
"text": "\u03c6 ONETA (d i ) T \u03c6 TF-IDF (d j ) = \u03b4 ij",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ONETA",
"sec_num": "2.2."
},
{
"text": "It does this by constructing the TF-IDF vectors for each of the words and organizing them in a matrix X and then the vector for ONETA can be obtained as 1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ONETA",
"sec_num": "2.2."
},
{
"text": "\u03c6 ONETA (d i ) = X + \u03c6 TF-IDF (d j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ONETA",
"sec_num": "2.2."
},
{
"text": "Where:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ONETA",
"sec_num": "2.2."
},
{
"text": "x ij = \u03c6 TF-IDF (d i ) T \u03c6 TF-IDF (d j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ONETA",
"sec_num": "2.2."
},
{
"text": "It was shown (McCrae et al., 2013 ) that this can be efficiently approximated by organizing the matrix X into the form",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "(McCrae et al., 2013",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ONETA",
"sec_num": "2.2."
},
{
"text": "A B 0 C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "X",
"sec_num": null
},
{
"text": "And using the following form of the projection:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "X",
"sec_num": null
},
{
"text": "\u03c6 ONETA (d i ) = A + \u2212A + BC + 0 C + \u03c6 TF-IDF (d j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "X",
"sec_num": null
},
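{
"text": "As a concrete illustration of the block formulation above, the following is a minimal NumPy sketch (hypothetical code; A, B and C are the blocks of X as above, the split point is taken from the shape of A, and the pseudo-inverses are computed directly, which is only feasible at small scale; the block formula is exact when A and C are invertible and otherwise serves as the approximation described in the paper):\n\nimport numpy as np\n\ndef oneta_projection(A, B, C, phi_tfidf):\n    # Apply the block form of X^+ for X = [[A, B], [0, C]]:\n    # [[A^+, -A^+ B C^+], [0, C^+]] applied to the TF-IDF vector.\n    A_pinv = np.linalg.pinv(A)\n    C_pinv = np.linalg.pinv(C)\n    k = A.shape[0]\n    v1, v2 = phi_tfidf[:k], phi_tfidf[k:]\n    top = A_pinv @ v1 - A_pinv @ (B @ (C_pinv @ v2))\n    bottom = C_pinv @ v2\n    return np.concatenate([top, bottom])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ONETA",
"sec_num": "2.2."
},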
{
"text": "To perform experiments on neural machine translation (NMT) models with a minimal set of parallel data, i.e. for less-resourced languages, we trained a multi-source and multi-target NMT model (Ha et al., 2016) with wellresourced language pairs. In our work, we have chosen parallel corpora in the Romance language family, i.e. Spanish, Italian, French, Portuguese, Romanian, as well as English.",
"cite_spans": [
{
"start": 191,
"end": 208,
"text": "(Ha et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-way Neural Machine",
"sec_num": "2.3."
},
{
"text": "To train the multi-way NMT system, we used all possible language combinations within the targeted Romance language family, but excluded the English-Spanish, English-French, English-Portuguese and Portuguese-French language pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-way Neural Machine",
"sec_num": "2.3."
},
{
"text": "We used Open-NMT (Klein et al., 2017) , a generic deep learning framework mainly specialised in sequence-to-sequence models covering a variety of tasks such as machine translation, summarisation, speech processing and question answering as NMT framework. Due to computational complexity, the vocabulary in NMT models had to be limited. To overcome this limitation, we used byte pair encoding (BPE) to generate subword units (Sennrich et al., 2016) . BPE is a data compression technique that iteratively replaces the most frequent pair of bytes in a sequence with a single, unused byte. We used the following default neural network training parameters: two hidden layers, 500 hidden LSTM (long short term memory) units per layer, input feeding enabled, 13 epochs, batch size of 64, 0.3 dropout probability, dynamic learning rate decay, 500 dimension embeddings.",
"cite_spans": [
{
"start": 17,
"end": 37,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 424,
"end": 447,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Setup",
"sec_num": null
},
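{
"text": "To make the BPE step concrete, the following is a toy Python sketch of the merge-learning loop (hypothetical code; following common practice it operates on per-word symbol sequences rather than raw bytes, unlike the original compression formulation):\n\nfrom collections import Counter\n\ndef learn_bpe(words, num_merges):\n    # words: corpus frequencies, each word spelled as a tuple of symbols,\n    # e.g. {('l', 'o', 'w', '</w>'): 5, ...}\n    merges = []\n    vocab = dict(words)\n    for _ in range(num_merges):\n        pairs = Counter()\n        for word, freq in vocab.items():\n            for a, b in zip(word, word[1:]):\n                pairs[(a, b)] += freq\n        if not pairs:\n            break\n        best = max(pairs, key=pairs.get)  # most frequent adjacent pair\n        merges.append(best)\n        new_vocab = {}\n        for word, freq in vocab.items():\n            out, i = [], 0\n            while i < len(word):\n                if i + 1 < len(word) and (word[i], word[i + 1]) == best:\n                    out.append(word[i] + word[i + 1])  # merge the pair\n                    i += 2\n                else:\n                    out.append(word[i])\n                    i += 1\n            key = tuple(out)\n            new_vocab[key] = new_vocab.get(key, 0) + freq\n        vocab = new_vocab\n    return merges",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Setup",
"sec_num": null
},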
{
"text": "1 X + denotes the Moore-Penrose pseudo-inverse Dataset for NMT training To train the multi-way model, we used the DGT (Directorate General for Translation) corpus (Steinberger et al., 2012) , a publicly accessible resource provided by the European Commission to support multilingualism and the re-use of European Commission information available in 24 different European languages. The English, Spanish, French, Romanian, Italian and Portuguese languages were selected to train the multi-way NMT system, from which we extracted 200,000 translated sentences present in all six languages within the DGT corpus (Table 2) .",
"cite_spans": [
{
"start": 163,
"end": 189,
"text": "(Steinberger et al., 2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 608,
"end": 618,
"text": "(Table 2)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Neural Machine Translation Setup",
"sec_num": null
},
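{
"text": "A minimal sketch of this extraction step (hypothetical code and file layout; it assumes the DGT data has been exported as line-aligned plain-text files, one per language, with an empty line wherever a sentence is missing in that language):\n\ndef extract_common_sentences(paths, n=200000):\n    # paths: e.g. {'en': 'dgt.en', 'es': 'dgt.es', ...} (hypothetical layout)\n    corpora = {}\n    for lang, path in paths.items():\n        with open(path, encoding='utf-8') as f:\n            corpora[lang] = [line.strip() for line in f]\n    length = min(len(c) for c in corpora.values())\n    # Keep only positions where every language has a non-empty sentence.\n    kept = [i for i in range(length) if all(c[i] for c in corpora.values())]\n    return {lang: [c[i] for i in kept[:n]] for lang, c in corpora.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation Setup",
"sec_num": null
},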
{
"text": "In order to develop and train our system, we used the available Apertium data as a gold standard. In this case, we held out the English-Spanish translation data and tried to predict the values in this dataset. From our methods, we had the following features",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Apertium",
"sec_num": "3.1."
},
{
"text": "Distance The graph distance, either 2 or 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Apertium",
"sec_num": "3.1."
},
{
"text": "Connections The size of the keys set used in calculating the graph distance. To improve the result, we scaled this logarithmically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Apertium",
"sec_num": "3.1."
},
{
"text": "ONETA The score coming out of ONETA. We scaled this geometrically to obtain a roughly even distribution of values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Apertium",
"sec_num": "3.1."
},
{
"text": "Translation & Inverse Translation The perplexity of the translation. As the translation methodology is not",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Apertium",
"sec_num": "3.1."
},
{
"text": "Multi-Way # Subwords # Uniq. Subwords # Subwords # Uniq . Subwords # Lines train 131,146,463 32,180 121,544,872 32,161 4,400,000 validation 656,154 29,380 608,006 29,408 22,000 Table 2 : Dataset statistics for the DGT corpus the combined multi-way dataset used to train the translation system symmetric we obtained two scores for English \u2192 Spanish and Spanish \u2192 English. As the perplexity naturally decreases for longer outputs, we divided it by the number of tokens in the output score.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 199,
"text": ". Subwords # Lines train 131,146,463 32,180 121,544,872 32,161 4,400,000 validation 656,154 29,380 608,006 29,408 22,000 Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Source Target",
"sec_num": null
},
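{
"text": "To summarise how these features feed the classifier, the following is a minimal sketch of turning one candidate pair into a feature vector (hypothetical code; the exact scaling constants are assumptions, and the scaling choices simply mirror the description above):\n\nimport math\n\ndef candidate_features(distance, connections, oneta_score, ppl_fwd, len_fwd, ppl_bwd, len_bwd):\n    # distance: graph distance (2 or 3); connections: size of keys(s, t);\n    # ppl_*: translation perplexities; len_*: output lengths in tokens.\n    return [\n        float(distance),\n        math.log(1 + connections),   # logarithmic scaling\n        oneta_score ** 0.5,          # a hypothetical geometric rescaling\n        ppl_fwd / max(len_fwd, 1),   # length-normalised perplexity\n        ppl_bwd / max(len_bwd, 1),\n    ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Apertium",
"sec_num": "3.1."
},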
{
"text": "An analysis of these features using 10-fold cross-validation compared is shown in Table 3 . Note that due to the limitation of using only those translations that have a graph distance of 2 or 3, the highest recall we could achieve is 0.76 and the highest F-Measure is 0.870. ",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Source Target",
"sec_num": null
},
{
"text": "The official results from the organizers are reproduced in Table 4 . We can see from this that in all evaluations, the system described in this paper (labelled 'NUIG'), produced the highest precision in its results. However, as we saw in the Apertium analysis we had a significant drop in recall compared to the baselines and these overall meant that the system was 2nd or 3rd in terms of F-Measure. We also note that the systems to beat ours were those based on one-time inverse consultation (Tanaka and Umemura, 1994) , and it should be relatively easy to combine these results into our architecture, suggesting that we could easily obtain a much stronger result.",
"cite_spans": [
{
"start": 493,
"end": 519,
"text": "(Tanaka and Umemura, 1994)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Task Results",
"sec_num": "3.2."
},
{
"text": "The organizers of the TIAD task released a small part of the evaluation dataset, and it appears that this dataset has significant differences to the translations that form Apertium. For example, in Table 5 , the translation for chestnuts are given 2 , and we see that the gold standard gives 'ch\u00e2taigne' as does our system but also gives two more terms 'ch\u00e2taignier' and 'marronnier', which our system does not. These terms refer to chestnut as a tree and our system correctly predicts that this is a translation of 'chestnuttree' and fails to generate a translation for these terms, principally because they only occur in a single translation language pair (French-Esperanto) and so are not connected in any way to the English. More concerningly, the term 'marron' is missed in the gold standard, as well as by our system, even though this is the translation preferred by several online sources. In Figure 3 , we see a relative plot of the correct terms in the released gold standard versus the graph distance calculated according to the training data. The distribution is quite different from the training data, with much less of the data being connected by a single pivot translation (that is at graph distance 1) and much more distant connections. It is especially surprising that some of the translations are at a distance of 4 or 5, which for English-Portuguese and French-Portuguese represents about 9% of the data but in the training set, while the precision of such distant links was less than 1% in the training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 900,
"end": 908,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3.3."
},
{
"text": "We have presented the results of our system for the TIAD task that combined unsupervised document embedding, unsupervised machine translation and graph analysis to produce a very high precision result. We have seen that the graph metrics are a good initial filtering, but that the main improvement can be achieved by incorporating metrics related to unsupervised multilingual NLP and the one-time inverse consultation method. This leads us to some obvious paths that can improve our results for future evaluations. Figure 3 : Distribution of translations relative to distance in training data Gracia, J., Kabashi, B., Kernerman, I., Lanau-Coronas, M., and Lonke, D. (2019) . Results of the translation inference across dictionaries 2019 shared task. pages 1-12.",
"cite_spans": [
{
"start": 605,
"end": 672,
"text": "Kabashi, B., Kernerman, I., Lanau-Coronas, M., and Lonke, D. (2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 515,
"end": 523,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4."
}
],
"back_matter": [
{
"text": "This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289 P2, co-funded by the European Regional Development Fund, as well as by the H2020 project Pr\u00eat-\u00e0-LLOD under Grant Agreement number 825182.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Inferring translation candidates for multilingual dictionary generation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Arcan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Torregrosa",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ahmadi",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Mccrae",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Translation Inference Across Dictionaries (TIAD) Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arcan, M., Torregrosa, D., Ahmadi, S., and McCrae, J. P. (2019). Inferring translation candidates for mul- tilingual dictionary generation. In Proceedings of the 2nd Translation Inference Across Dictionaries (TIAD) Shared Task.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Apertium: a free/open-source platform for rule-based machine translation. Machine translation",
"authors": [
{
"first": "M",
"middle": [
"L"
],
"last": "Forcada",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ginest\u00ed-Rosell",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nordfalk",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "O'regan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ortiz-Rojas",
"suffix": ""
},
{
"first": "J",
"middle": [
"A"
],
"last": "P\u00e9rez-Ortiz",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "S\u00e1nchez-Mart\u00ednez",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ram\u00edrez-S\u00e1nchez",
"suffix": ""
},
{
"first": "F",
"middle": [
"M"
],
"last": "Tyers",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "25",
"issue": "",
"pages": "127--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Forcada, M. L., Ginest\u00ed-Rosell, M., Nordfalk, J., O'Regan, J., Ortiz-Rojas, S., P\u00e9rez-Ortiz, J. A., S\u00e1nchez-Mart\u00ednez, F., Ram\u00edrez-S\u00e1nchez, G., and Tyers, F. M. (2011). Aper- tium: a free/open-source platform for rule-based ma- chine translation. Machine translation, 25(2):127-144.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Toward multilingual neural machine translation with universal encoder and decoder",
"authors": [
{
"first": "T",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "A",
"middle": [
"H"
],
"last": "Waibel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ha, T., Niehues, J., and Waibel, A. H. (2016). Toward mul- tilingual neural machine translation with universal en- coder and decoder. CoRR, abs/1611.04798.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "OpenNMT: Open-Source Toolkit for Neural Machine Translation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klein, G., Kim, Y., Deng, Y., Senellart, J., and Rush, A. M. (2017). OpenNMT: Open-Source Toolkit for Neural Ma- chine Translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguis- tics, pages 67-72.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Orthonormal explicit topic analysis for cross-lingual document matching",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mccrae",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Cimiano",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1732--1742",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCrae, J., Cimiano, P., and Klinger, R. (2013). Or- thonormal explicit topic analysis for cross-lingual doc- ument matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1732-1742.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "TIAD Shared Task 2019: Orthonormal Explicit Topic Analysis for Translation Inference across Dictionaries",
"authors": [
{
"first": "J",
"middle": [
"P"
],
"last": "Mccrae",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Translation Inference Across Dictionaries (TIAD) Shared Task",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCrae, J. P. (2019). TIAD Shared Task 2019: Orthonor- mal Explicit Topic Analysis for Translation Inference across Dictionaries. In Proceedings of the 2nd Transla- tion Inference Across Dictionaries (TIAD) Shared Task.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural Machine Translation of Rare Words with Subword Units",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sennrich, R., Haddow, B., and Birch, A. (2016). Neural Machine Translation of Rare Words with Subword Units. Proceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics, abs/1508.07909.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "DGT-TM: A freely available Translation Memory in 22 languages",
"authors": [
{
"first": "R",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Eisele",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Klocek",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pilos",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Schl\u00fcter",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC12)",
"volume": "",
"issue": "",
"pages": "454--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steinberger, R., Eisele, A., Klocek, S., Pilos, S., and Schl\u00fcter, P. (2012). DGT-TM: A freely available Trans- lation Memory in 22 languages. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC12), pages 454-459, Istanbul, Turkey.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Construction of a bilingual dictionary intermediated by a third language",
"authors": [
{
"first": "K",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Umemura",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "297--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanaka, K. and Umemura, K. (1994). Construction of a bilingual dictionary intermediated by a third language. In Proceedings of the 15th conference on Computational linguistics-Volume 1, pages 297-303. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Results of the translation inference across dictionaries 2019 shared task",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gracia",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Kabashi",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Kernerman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lanau-Coronas",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lonke",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gracia, J., Kabashi, B., Kernerman, I., Lanau-Coronas, M., and Lonke, D. (2019). Results of the translation inference across dictionaries 2019 shared task. pages 1-12.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Languages available in the Apertium training data (solid lines) and language pairs to be inferred in the translation graph (dotted lines)",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Distribution of the features relative to correct (blue) translations and incorrect (red) translations",
"uris": null,
"num": null
},
"TABREF2": {
"html": null,
"text": "",
"content": "<table><tr><td>: Performance of our system on predicting English-</td></tr><tr><td>Spanish Apertium data</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"html": null,
"text": "0.44 0.53 0.64 0.38 0.48 0.74 0.54 0.62 Baseline word2vec 0.37 0.41 0.39 0.23 0.39 0.29 0.27 0.34 0.30 NUIG 0.80 0.35 0.49 0.68 0.31 0.43 0.84 0.40 0.54",
"content": "<table><tr><td>System</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td></tr><tr><td colspan=\"10\">Baseline OTIC 0.67 ACOLI Baseline 0.57 0.30 0.39 0.48 0.24 0.32 0.63 0.27 0.38</td></tr><tr><td>ACOLI WordNet</td><td colspan=\"9\">0.59 0.18 0.28 0.54 0.13 0.21 0.62 0.15 0.24</td></tr><tr><td>CL -Embeddings</td><td>-</td><td>-</td><td>-</td><td colspan=\"6\">0.52 0.35 0.42 0.55 0.34 0.42</td></tr><tr><td>Ciclos -OTIC</td><td>-</td><td>-</td><td>-</td><td colspan=\"6\">0.57 0.44 0.50 0.67 0.55 0.60</td></tr><tr><td>Multi-Strategy</td><td>-</td><td>-</td><td>-</td><td colspan=\"6\">0.52 0.34 0.41 0.58 0.34 0.43</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "The performance of systems in the TIAD-2020 benchmark from the organizers in terms of Precision, Recall and F-Measure",
"content": "<table><tr><td>English</td><td>French</td><td colspan=\"3\">Gold Standard Our System Graph Distance</td></tr><tr><td>chestnut</td><td>ch\u00e2taigne</td><td>Yes</td><td>Yes</td><td>2</td></tr><tr><td>chestnut</td><td>ch\u00e2taignier</td><td>Yes</td><td>No</td><td>\u221e</td></tr><tr><td>chestnut</td><td>marronnier</td><td>Yes</td><td>No</td><td>\u221e</td></tr><tr><td>chestnut</td><td>marron</td><td>No</td><td>No</td><td>2</td></tr><tr><td colspan=\"2\">chestnuttree ch\u00e2taignier</td><td>?</td><td>Yes</td><td>2</td></tr></table>",
"num": null,
"type_str": "table"
},
"TABREF5": {
"html": null,
"text": "Translations in the released gold standard and our system",
"content": "<table><tr><td/><td/><td/><td>EN-FR</td><td colspan=\"2\">EN-PT</td><td>FR-PT</td><td/><td>Apertium</td><td/><td/></tr><tr><td>0.8</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>0.6</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>0.4</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>0.2</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>0</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td><td>1 0</td><td>1 1</td><td>1 2 U n c o n n e c t e d</td></tr></table>",
"num": null,
"type_str": "table"
}
}
}
}