|
{ |
|
"paper_id": "Y13-1040", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:32:00.918943Z" |
|
}, |
|
"title": "Transliteration Systems Across Indian Languages Using Parallel Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Rishabh", |
|
"middle": [], |
|
"last": "Srivastava", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Language Technologies Research Center IIIT-Hyderabad", |
|
"location": { |
|
"country": "India" |
|
} |
|
}, |
|
"email": "rishabh.srivastava@research.iiit.ac.in" |
|
}, |
|
{ |
|
"first": "Riyaz", |
|
"middle": [ |
|
"Ahmad" |
|
], |
|
"last": "Bhat", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Language Technologies Research Center IIIT-Hyderabad", |
|
"location": { |
|
"country": "India" |
|
} |
|
}, |
|
"email": "riyaz.bhat@research.iiit.ac.in" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Hindi is the lingua-franca of India. Although all non-native speakers can communicate well in Hindi, there are only a few who can read and write in it. In this work, we aim to bridge this gap by building transliteration systems that could transliterate Hindi into at-least 7 other Indian languages. The transliteration systems are developed as a reading aid for non-Hindi readers. The systems are trained on the transliteration pairs extracted automatically from a parallel corpora. All the transliteration systems perform satisfactorily for a non-Hindi reader to understand a Hindi text.", |
|
"pdf_parse": { |
|
"paper_id": "Y13-1040", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Hindi is the lingua-franca of India. Although all non-native speakers can communicate well in Hindi, there are only a few who can read and write in it. In this work, we aim to bridge this gap by building transliteration systems that could transliterate Hindi into at-least 7 other Indian languages. The transliteration systems are developed as a reading aid for non-Hindi readers. The systems are trained on the transliteration pairs extracted automatically from a parallel corpora. All the transliteration systems perform satisfactorily for a non-Hindi reader to understand a Hindi text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "India is home to languages from four language families namely Indo-Aryan, Dravidian, Austroasiatic and Tibeto-Burman. There are 22 official languages and more than 1000 dialects, which are written in more than 14 different scripts 1 in this country. Hindi, an Indo-Aryan language, written in Devanagari, is the lingua-franca of India (Masica, 1993, p. 6) . Most Indians are orally proficient in Hindi while they lack a good proficiency in reading and writing it. In this work, we come up with transliteration systems, so that non-native speakers of Hindi don't face a problem in reading Hindi script. We considered 7 Indian languages, including 4 Indo-Aryan (Punjabi, Gujarati, Urdu and Bengali) and 3 Dravidian (Telugu, Tamil and Malayalam) languages, for this task. The quantity of Hindi literature (especially online) is more than twice as in any other Indian language. There are approximately 107 newspapers 2 , 15 online newspapers 3 and 94067 Wikipedia articles 4 (reported 1 http://en.wikipedia.org/wiki/Languages of India 2 http://en.wikipedia.org/wiki/List of newspapers in India 3 http://www.indiapress.org/ 4 http://stats.wikimedia.org/EN India/Sitemap.htm in March 2013), which are published in Hindi.", |
|
"cite_spans": [ |
|
{ |
|
"start": 334, |
|
"end": 354, |
|
"text": "(Masica, 1993, p. 6)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The transliteration systems will be helpful for non-Hindi readers to understand these as well as various other existing Hindi resources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As the transliteration task has to be done for 7 languages, a rule-based system would become very expensive. The cost associated with crafting exhaustive rule-sets for transliteration has already been demostrated in works on Hindi-Punjabi (Goyal and Lehal, 2009) , Hindi-Gujarati (Patel and Pareek, 2009) and Hindi-Urdu (Malik et al., 2009; Lehal and Saini, 2010) . In this work, we have modelled the task of transliteration as a noisy channel model with minimum error rate training (Och, 2003) . However, such a statistical modelling needs an ample amount of data for training and testing. The data is extracted from an Indian language sentence aligned parallel corpora available for 10 Indian languages. These sentences are automatically word aligned across the languages. Since these languages are written in different scripts, we have used an Indian modification of the soundex algorithm (Russell and Odell, 1918 ) (henceforth Indic-Soundex) for a normalized language representation. Extraction of the transliteration pairs (two words having the similar pronunciation) is then followed by Longest Common Subsequence (henceforth LCS) algorithm, a string similarity algorithm. The extracted pairs are evaluated manually by annotators and the accuracies are calculated. We found promising results as far as the accuracies of these extracted pairs are concerned. These transliteration pairs are then used to train the transliteration systems. Various evaluation tests are performed on these transliteration systems which confirm the high accuracy of these transliteration systems. Though the best system was nearly 70% accurate on word-level, the character-level accuracies (greater than 70% for all systems) along with the encouraging results from the human evaluations, clearly show that these transliterations are good enough for a typical Indian reader to easily interpret the text.", |
|
"cite_spans": [ |
|
{ |
|
"start": 239, |
|
"end": 262, |
|
"text": "(Goyal and Lehal, 2009)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 280, |
|
"end": 304, |
|
"text": "(Patel and Pareek, 2009)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 320, |
|
"end": 340, |
|
"text": "(Malik et al., 2009;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 341, |
|
"end": 363, |
|
"text": "Lehal and Saini, 2010)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 483, |
|
"end": 494, |
|
"text": "(Och, 2003)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 892, |
|
"end": 916, |
|
"text": "(Russell and Odell, 1918", |
|
"ref_id": "BIBREF27" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Knight (1998) provides a deep insight on how transliteration can be thought of as translation. Zhang et al.(2010) have proposed 2 approaches, for machine transliteration among English, Chinese, Japanese and Korean language pairs when extraction/creation of parallel data is expensive. Tiedemann (1998) has worked on text-based multi-language transliteration exploiting short aligned units and structural & orthographic similarities in a corpus. Indirect generation of Chinese text from English transliterated counter-part (Kuo and Yang, 2004) discusses the changes that happen in a borrowed word. Matthews (2007) has created statistical model for transliteration of proper names in English-Chinese and English-Arabic.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 113, |
|
"text": "Zhang et al.(2010)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 285, |
|
"end": 301, |
|
"text": "Tiedemann (1998)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 522, |
|
"end": 542, |
|
"text": "(Kuo and Yang, 2004)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 597, |
|
"end": 612, |
|
"text": "Matthews (2007)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "As Indian languages are written in different scripts, they must be converted to some common representation before comparison can be made between them. Grapheme to Phoneme conversion (Pagel et al., 1998) is one of the ways to do this. Gupta et al. (2010) have used WX notation as the common representation to transliterate among various Indian languages including Hindi, Bengali, Punjabi, Telugu, Malayalam and Kannada. Soundex algorithm (Russell and Odell, 1918) converts words into a common representation for comparison. Levenshtein distance (Levenshtein, 1966) between two strings has long been established as a distance function. It calculates the minimum number of insertions, deletions and substitutions needed to convert a string into another. Longest Common Subsequence (LCS) algorithm is similar to Levenshtein distance with the difference being that it does not consider substitution as a distance metric.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 202, |
|
"text": "(Pagel et al., 1998)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 234, |
|
"end": 253, |
|
"text": "Gupta et al. (2010)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 462, |
|
"text": "(Russell and Odell, 1918)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 544, |
|
"end": 563, |
|
"text": "(Levenshtein, 1966)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "1.1" |
|
}, |
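To make the contrast concrete, the following minimal Python sketch (illustrative only, not code from the paper) computes both distances; because the LCS-based distance allows only insertions and deletions, a single substitution costs 2 instead of 1.

```python
# Illustrative sketch: Levenshtein distance vs. LCS-based distance
# (insertions/deletions only). Not taken from the paper's codebase.

def levenshtein(a, b):
    """Minimum insertions, deletions and substitutions to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def lcs_length(a, b):
    """Length of the longest common subsequence of a and b."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            cur.append(prev[j - 1] + 1 if ca == cb else max(prev[j], cur[j - 1]))
        prev = cur
    return prev[-1]

def lcs_distance(a, b):
    """Edit distance when only insertions and deletions are allowed."""
    return len(a) + len(b) - 2 * lcs_length(a, b)

# A substitution counts as 1 edit for Levenshtein but 2 for the LCS distance.
assert levenshtein("kitab", "kitap") == 1
assert lcs_distance("kitab", "kitap") == 2
```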
|
{ |
|
"text": "Zahid et al. (2010) have applied Soundex algorithm for extraction of English-Urdu transliteration pairs. An attempt towards a rule based phonetic matching algorithm for Hindi and Marathi using Soundex algorithm (Chaware and Rao, 2011) has given quite promising results. Soundex has already been used in many Indian language systems including Named entity recognition (Nayan et al., 2008) and cross-language information retrieval (Jagarlamudi and Kumaran, 2008) . Although they applied soundex after translitera-tion from Indian language to English. Namedentity transliteration pairs mining from Tamil and English corpora has been performed earlier using a linear classifier (Saravanan and Kumaran, 2008) . Sajjad et al. (2012) have mined transliteration pairs independent of the language pair using both supervised and unsupervised models. Transliteration pairs have also been mined from online Hindi song lyrics noting the word-by-word transliteration of Hindi songs which maintain the word order (Gupta et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 234, |
|
"text": "(Chaware and Rao, 2011)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 367, |
|
"end": 387, |
|
"text": "(Nayan et al., 2008)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 429, |
|
"end": 460, |
|
"text": "(Jagarlamudi and Kumaran, 2008)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 674, |
|
"end": 703, |
|
"text": "(Saravanan and Kumaran, 2008)", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 706, |
|
"end": 726, |
|
"text": "Sajjad et al. (2012)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 998, |
|
"end": 1018, |
|
"text": "(Gupta et al., 2012)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "In what follows, we present our methodology to extract transliteration pairs in section 2. The next section, Section 3, talks about the details of the creation and evaluation of transliteration systems. We conclude the paper in section 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "1.1" |
|
}, |
|
{ |
|
"text": "We first align the words for all the languages with Hindi in the parallel corpora. Phoneme matching techniques are applied to these pairs and the pairs satisfying the set threshold are selected. Given these pairs, transliteration systems are trained for all the 7 language pairs with Hindi as the source language.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extraction of transliteration pairs", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We have used the ILCI corpora (Jha, 2010) which contains 30000 parallel sentences per language for 11 languages (we have not considered English. Neither are Marathi and Konkani as the latter 2 are written in Devanagari script, which is same for Hindi). The corpora contains sentences from the domain of tourism and health with Hindi as their source language. Table 1 shows the various scripts in which these languages are written. All the sentences are encoded in utf-8 format.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 359, |
|
"end": 366, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpora", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "The first task is to align words from the parallel corpora between Hindi and the other languages. We have used IBM model 1 to 5 and HMM model to align the words using Giza++ (Och and Ney, 2000) . Hindi shows a remarkable similarity with the other 4 Indo-Aryan languages considered for this work (Masica, 1993) . With the other 3 Dravidian languages Hindi shares typological properties like word order, head directionality, parameters, etc (Krishnamurti, 2003) . Being so similar in structure, these language pairs exhibit high alignment accuracies. The extracted translation ", |
|
"cite_spans": [ |
|
{ |
|
"start": 174, |
|
"end": 193, |
|
"text": "(Och and Ney, 2000)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 309, |
|
"text": "(Masica, 1993)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 439, |
|
"end": 459, |
|
"text": "(Krishnamurti, 2003)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Alignment", |
|
"sec_num": "2.2" |
|
}, |
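As a rough illustration of this step, aligned word pairs and relative translation frequencies can be collected from the sentence pairs once the aligner has produced word-alignment links; the Pharaoh-style "i-j" link format and the helper below are our assumptions about the pipeline, not details given in the paper. The resulting table plays the role of the translation probability used in Algorithm 1 below.

```python
# Hedged sketch: collect Hindi-target word pairs and relative translation
# frequencies from sentence pairs plus Pharaoh-style alignment links
# ("srcIdx-tgtIdx"). The input layout is an assumption about the pipeline.
from collections import Counter, defaultdict

def collect_pairs(hin_sents, tgt_sents, alignments):
    counts = defaultdict(Counter)            # Hindi word -> Counter of target words
    for hin, tgt, links in zip(hin_sents, tgt_sents, alignments):
        hw, tw = hin.split(), tgt.split()
        for link in links.split():
            i, j = map(int, link.split("-"))
            counts[hw[i]][tw[j]] += 1
    # Turn raw co-occurrence counts into rough translation probabilities.
    probs = {}
    for h, c in counts.items():
        total = sum(c.values())
        for t, n in c.items():
            probs[(h, t)] = n / total
    return probs
```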
|
{ |
|
"text": "Bengali(Ben) Bengali alphabet Gujarati(Guj) Gujarati alphabet Hindi(Hin) Devanagari Konkani(Kon) Devanagari Malayalam(Mal) Malayalam alphabet Marathi(Mar) Devanagari Punjabi(Pun) Gurmukhi Tamil(Tam) Tamil alphabet Telugu(Tel) Telugu alphabet Urdu(Urd) Arabic English(Eng)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word Alignment", |
|
"sec_num": "2.2" |
|
}, |
|
|
{ |
|
"text": "In the extracted translation pairs, we have to find whether these words are transliteration pairs or just translation pairs. The major issue in finding these pairs is that the languages are in different scripts and no distance matching algorithm can be applied directly. Using Roman as a common representation (Gupta et al., 2010) , however, is not a solution either. A Roman representation will miss out issues like short vowel drop. For example, ktAb (Urdu, book) and kitAb (Hindi, book) ( Figure 1 ), essentially same, are marked as non-transliteration pairs due to short vowel drop in Urdu (Kulkarni et al., 2012) . We opt for a phoneme matching algorithm to bring all the languages into a single representation and then apply a distance matching algorithm to extract the transliteration pairs. Fortunately, such a scheme for Indian languages exists, which will be addressed in the 2.3.1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 310, |
|
"end": 330, |
|
"text": "(Gupta et al., 2010)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 589, |
|
"end": 617, |
|
"text": "Urdu (Kulkarni et al., 2012)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 492, |
|
"end": 500, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Phonetic Matching", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Soundex algorithm (Russell and Odell, 1918 ) developed for English is often used for phoneme matching. Soundex is an optimal algorithm when we just have to compare if two words in English sound same. Swathanthra Indian Language Computing Project (Silpa 5 ) (Silpa, 2010) , written in Hindi and Urdu respectively, with their gloss (Hindi is written in Devanagari script from left to right while Urdu is written in Persio-Arabic script from right to left. The gloss is given from left to right in both). As is clear that if a both are transliterated into a common representation, they wont result into a transliteration pair an Indic-Soundex system to map words phonetically in many Indian languages. Currently, mappings for Hindi, Bengali, Punjabi, Gujarati, Oriya, Tamil, Telugu, Kannada and Malayalam are handled in the Silpa system. Since Urdu is one of the languages we are working on, we introduced the mapping of its character set in the system. The task is convoluted, since with the other Indian languages, mapping direct Unicode is possible, but Urdu script being a derivative from Arabic modification of Persian script, has a completely different Unicode mapping 6 (BIS, 1991). Also there were some minor issues with Silpa system which we corrected. Figure 2 shows the various character mappings of languages to a common representation. Some of the differences from Silpa system include:", |
|
"cite_spans": [ |
|
{ |
|
"start": 18, |
|
"end": 42, |
|
"text": "(Russell and Odell, 1918", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 254, |
|
"text": "(Silpa 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 257, |
|
"end": 270, |
|
"text": "(Silpa, 2010)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 1259, |
|
"end": 1267, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Indic-Soundex", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 mapping for long vowels.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Indic-Soundex", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "-U, o, au, are mapped to v.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Indic-Soundex", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "-E and ae are mapped to y.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Indic-Soundex", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "-A is mapped to a.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Indic-Soundex", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 bindu and chandrabindu in Hindi are mapped to n.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Indic-Soundex", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 ah and halant are mapped to null (as they have no sound).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Indic-Soundex", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Short vowels like a, e, u are mapped to null.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Indic-Soundex", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 h is mapped to null as it does not contribute much to the sound. It is just a emphasis marker.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Indic-Soundex", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 To make Silpa mappings readable every sound is mapped to its correspoding character in Roman. Soundex is also shown as a mapping. This is modified from one given by (Silpa, 2010) In the following, we discuss the computation and extraction of phonetically similar word pairs using LCS algorithm.", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 180, |
|
"text": "(Silpa, 2010)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Indic-Soundex", |
|
"sec_num": "2.3.1" |
|
}, |
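A toy sketch of such a normalization, implementing only the rules listed above over a romanized transcription (the actual Silpa/Indic-Soundex tables map the full Unicode inventory of each script), might look as follows.

```python
# Toy Indic-Soundex-style normalization over a romanized transcription.
# Only the rules listed above are implemented; the real system maps the
# full Unicode character set of each script.
LONG_VOWELS = {"U": "v", "o": "v", "au": "v", "E": "y", "ae": "y", "A": "a"}
# Short vowels (the paper lists "a, e, u" as examples; "i" added analogously)
# and the emphasis marker "h" are dropped.
DROPPED = {"a", "e", "i", "u", "h"}

def indic_soundex(word):
    out = []
    i = 0
    while i < len(word):
        # try two-character symbols (au, ae) before single characters
        for length in (2, 1):
            seg = word[i:i + length]
            if seg in LONG_VOWELS:
                out.append(LONG_VOWELS[seg])
                i += length
                break
            if length == 1:
                if seg not in DROPPED:     # consonants are kept as-is
                    out.append(seg)
                i += 1
    return "".join(out)

# Short vowel drop no longer separates the two forms of 'book':
assert indic_soundex("kitAb") == indic_soundex("ktAb") == "ktab"
```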
|
{ |
|
"text": "Aligned word pairs, with soundex representation, are further checked for character level similarity. Strings with LCS distance 0 or 1 are considered as transliteration pairs. We also consider strings with distance 1 because there is a possibility that some sound sequence is a bit different from the other in two different languages. This window of 1 is permitted to allow extraction of pairs with slight variations. If pairs are not found to be exact match but at a difference of 1 from each other, they are checked if their translation probability (obtained after alignment of the corpora) is more than 50% (empirically chosen). If this condition is satisfied, the words are taken as transliteration pairs (Figure 3) . This increases the recall of transliteration pair extraction reducing precision by a slight percentage. Table 2 presents the statistics of extracted transliteration pairs using LCS. A detailed algorithm using translation probabilities obtained during alignment phase is provided in Algorithm 1.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 708, |
|
"end": 718, |
|
"text": "(Figure 3)", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 825, |
|
"end": 832, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Word-pair Similarity", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "Algorithm 1 match(w1, w2, translationprobability (w1,w2)) 1. Find the language of both the words by using the 1st character of each and checking in the character list. 2. Calculate Soundex equivalent of w1 and w2 using Soundex algorithm. 3. Check if both the soundex codes are equal. 4. If yes, return both as transliteration pairs. 5. else, check the LCS between the soundex codes of w1 and w2. 6. If the distance is found to be 1, 7. check if the translation probability for w1 to w2 is more than 0.5. 8. if Yes, return both are transliteration pairs. 9. else both are not transliteration pairs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word-pair Similarity", |
|
"sec_num": "2.3.2" |
|
}, |
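A direct Python rendering of Algorithm 1 could look like the sketch below; it assumes the indic_soundex() and lcs_distance() helpers sketched earlier, a dictionary of translation probabilities from the alignment phase, and the empirically chosen 0.5 threshold mentioned above (language identification from the first character, step 1, is omitted here).

```python
# Sketch of Algorithm 1: decide whether an aligned word pair is a
# transliteration pair. indic_soundex(), lcs_distance() and trans_prob
# are assumed from the earlier sketches.
def is_transliteration_pair(w1, w2, trans_prob, threshold=0.5):
    s1, s2 = indic_soundex(w1), indic_soundex(w2)   # common phonetic form
    if s1 == s2:                                     # steps 3-4: exact match
        return True
    if lcs_distance(s1, s2) == 1:                    # steps 5-6: allow distance 1
        return trans_prob.get((w1, w2), 0.0) > threshold   # step 7
    return False
```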
|
{ |
|
"text": "To evaluate the phonetic similarity of the extracted aligned word pairs by LCS algorithm, a small subset (10%) from each language pair is given for Human evaluation. Annotators 7 are asked to judge whether the extracted word pairs are in fact transliterations of each other or not, based on the way the word pairs are pronounced in the respective languages. The results for the transliteration pairs for a given language pair extracted by LCS algorithm are reported in Table 2 . Hindi-Urdu and Hindi-Telugu (even though Hindi and Telugu do not belong to the same family of languages) demonstrate a remarkably high accuracy. Hindi-Bengali, Hindi-Punjabi, Hindi-Malayalam and Hindi-Gujarati have mild accuracies while Hindi-Tamil is the least accurate pair. Not only Tamil and Hindi do not belong to the same family, the lexical diffusion between these two languages is very less. For automatic evaluation of alignment quality we calculated the alignment entropy of all the transliteration pairs (Pervouchine et al., 2009) . These have also been listed in Table 2 . Tamil, Telugu and Urdu have a relatively high entropy indicating a low quality alignment. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 994, |
|
"end": 1020, |
|
"text": "(Pervouchine et al., 2009)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 469, |
|
"end": 476, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 1054, |
|
"end": 1061, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation of Transliteration Pairs", |
|
"sec_num": "2.4" |
|
}, |
|
{ |
|
"text": "After the transliteration pairs are extracted and evaluated, we train transliteration systems for 7 languages with Hindi as the source language. In the following section, we explain the training of these transliteration systems in detail.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Transliteration with Hindi as the Source Language", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "All the extracted transliteration word pairs of a particular language pair are split into their corresponding characters to create a parallel data set, for building the transliteration system. The dataset of a given language pair is further split into training and development sets. 90% data is randomly selected for training and the remaining 10% is kept for development. Evaluation set is created separately because of two reasons; firstly we don't want to reduce the size of the training data by splitting the data set into training, testing and devel-opment, secondly evaluating the results on a gold data set would give us a clear picture of the performance of our system. Section 3.3 explans various evaluation methodologies for our transliteration systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creation of data-sets", |
|
"sec_num": "3.1" |
|
}, |
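For concreteness, the character-level data preparation and the random 90/10 split could be sketched as follows; the file naming is illustrative and not prescribed by the paper.

```python
# Sketch: turn extracted transliteration pairs into character-level parallel
# data and split it 90/10 into training and development sets.
import random

def make_datasets(pairs, train_frac=0.9, seed=0):
    # a "sentence" is a word written as space-separated characters
    data = [(" ".join(src), " ".join(tgt)) for src, tgt in pairs]
    random.Random(seed).shuffle(data)
    cut = int(train_frac * len(data))
    return data[:cut], data[cut:]            # (training set, development set)

def write_parallel(dataset, prefix, src_lang="hi", tgt_lang="pa"):
    # Moses-style parallel files, e.g. train.hi / train.pa (names illustrative)
    with open(f"{prefix}.{src_lang}", "w", encoding="utf-8") as fs, \
         open(f"{prefix}.{tgt_lang}", "w", encoding="utf-8") as ft:
        for s, t in dataset:
            fs.write(s + "\n")
            ft.write(t + "\n")
```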
|
{ |
|
"text": "We model transliteration as a translation problem, treating a word as a sentence and a character as a word using the aforementioned datasets (Matthews, 2007, ch. 2,3) (Chinnakotla and Damani, 2009) . We train machine transliteration systems with Hindi as a source language and others as target (all in different models), using Moses (Koehn et al., 2007) . Giza++ (Och and Ney, 2000) is used for character-level alignments (Matthews, 2007, ch. 2,3) . Phrase-based alignment model (Figure 4 ) (Koehn et al., 2003) is used with a trigram language model of the target side to train the transliteration models. Phrase translations probabilities are calculated employing the noisy channel model and Mert is used for minimum error-rate training to tune the model using the development data-set. Top 1 result is considered as the default output (Och, 2003; Bertoldi et al., 2009) . ", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 183, |
|
"text": "(Matthews, 2007, ch. 2,3) (Chinnakotla and", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 184, |
|
"end": 197, |
|
"text": "Damani, 2009)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 333, |
|
"end": 353, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 382, |
|
"text": "(Och and Ney, 2000)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 422, |
|
"end": 447, |
|
"text": "(Matthews, 2007, ch. 2,3)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 491, |
|
"end": 511, |
|
"text": "(Koehn et al., 2003)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 837, |
|
"end": 848, |
|
"text": "(Och, 2003;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 849, |
|
"end": 871, |
|
"text": "Bertoldi et al., 2009)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 479, |
|
"end": 488, |
|
"text": "(Figure 4", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Training of transliteration systems", |
|
"sec_num": "3.2" |
|
}, |
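At decoding time a Hindi word is split into characters, passed through the tuned model, and the 1-best output joined back into a word; a minimal sketch is given below, assuming a trained Moses configuration at a hypothetical path such as tuned/moses.ini and the standard moses decoder binary on the PATH.

```python
# Hedged sketch: transliterate words with a trained Moses model by treating
# characters as tokens. The moses.ini path and binary name are assumptions.
import subprocess

def transliterate(words, moses_ini="tuned/moses.ini"):
    inp = "\n".join(" ".join(w) for w in words) + "\n"
    result = subprocess.run(["moses", "-f", moses_ini],
                            input=inp, capture_output=True,
                            text=True, check=True)
    # join the character tokens of each 1-best output back into a word
    return ["".join(line.split()) for line in result.stdout.splitlines()]
```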
|
{ |
|
"text": "In this section we will do an in-depth evaluation of all the transliteration systems that we reported in this work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "We used two data-sets for the evaluation. The creation of these data-sets is discussed below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creation of Evaluation Sets", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Gold test-set: Nearly 25 sentences in Hindi, containing an approximate of 500 words (unique 260 words) were randomly extracted from a text and given to human annotators for preparing gold data. The annotators 7 were PACLIC-27", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creation of Evaluation Sets", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "given full sentences rather than individual words, so that they could decide the correct transliteration according to the context. We were not able to create gold test-set for Tamil.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creation of Evaluation Sets", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 WordNet based test-set: For automatic evaluation, the evaluation set is created from the synsets of Indian languages present in Hindi WordNet (Sinha et al., 2006) . A Hindi word and its corresponding synsets in other languages (except Gujarati) are extracted and represented in a common format using Indic-Soundex and then among the synsets only exact match(s), if any, with the corresponding Hindi word, are picked. In this way, we ensure that the evaluation set is perfect. The set mainly contains cognate words (words of similar origin) and named entities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 164, |
|
"text": "(Sinha et al., 2006)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Creation of Evaluation Sets", |
|
"sec_num": "3.3.1" |
|
}, |
|
{ |
|
"text": "We evaluated the transliteration systems on the above-discussed test-sets following the metrics discussed below:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "\u2022 We used the evaluation metrics like ACC, Mean Fscore, MRR and MAP ref , which refer to Word-accuracy in Top1, Fuzziness in Top1, Mean-reciprocal rank and precision in the n-best candidates repectively (Zhang et al., 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 203, |
|
"end": 223, |
|
"text": "(Zhang et al., 2012)", |
|
"ref_id": "BIBREF34" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.3.2" |
|
}, |
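For reference, the two simplest of these metrics can be computed as in the sketch below, which assumes a single gold transliteration per word (the shared-task definitions also handle multiple references).

```python
# Sketch of two of the metrics: ACC (word accuracy in top-1) and MRR
# (mean reciprocal rank of the first correct candidate).
def acc(references, nbest_lists):
    return sum(ref == cands[0]
               for ref, cands in zip(references, nbest_lists)) / len(references)

def mrr(references, nbest_lists):
    total = 0.0
    for ref, cands in zip(references, nbest_lists):
        if ref in cands:
            total += 1.0 / (cands.index(ref) + 1)
    return total / len(references)
```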
|
{ |
|
"text": "\u2022 Keeping in view the actual goal of the task, we also evaluated the systems based on the readability of their top output (1-best) based on the transliteration of consonants. Consonants have a higher role in lexical access than vowels (New et al., 2008) , if the consonants of a word are transliterated correctly, the word is most likely to be accessed and thus maintaining readability of the text. So, we evaluated the systems based on the transliteration of consonants.", |
|
"cite_spans": [ |
|
{ |
|
"start": 235, |
|
"end": 253, |
|
"text": "(New et al., 2008)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.3.2" |
|
}, |
|
{ |
|
"text": "We present consolidated results in Table 3 and Table 4 . Apart from standard metrics i.e., metrics 1, metrics 2 captures character-level and wordlevel accuracies considering the complete word and only the consonants of that word with the number of testing pairs for all the transliteration systems. The character-level accuracies are calculated according to the percentage of insertions and deletions required to convert a transliterated word to a gold word. Accuracy of all the transliteration systems is greater than 70%, i.e. even a worst transliteration system would return a string with 7 correct characters, out of 10, on an average. The accuracies at the character-level of only the consonants ranges from 75-95% which clearly proves our systems to be of good quality. It is clear from the results that these systems can be used as a reading aid by a non-Hindi reader.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 55, |
|
"text": "Table 3 and Table 4", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.3.3" |
|
}, |
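One plausible reading of this character-level measure, reusing the lcs_distance() sketch from earlier (this normalization is our interpretation, not spelled out in the paper), is:

```python
# Sketch: character-level accuracy as the fraction of characters not affected
# by the insertions and deletions needed to turn the hypothesis into the gold
# word. This is our reading of the metric described above.
def char_accuracy(hypothesis, gold):
    edits = lcs_distance(hypothesis, gold)   # insertions + deletions
    return max(0.0, 1.0 - edits / (len(hypothesis) + len(gold)))
```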
|
{ |
|
"text": "As the table shows, all the transliteration systems have shown similar results on both the testsets. These results clearly show that all the systems except Malayalam, Tamil and Telugu perform rather well. This can be attributed to the fact that these languages belong to the Dravidian family while Hindi is an Indo-Aryan language. Although, as per Metrics 1, the results are not promising for these languages, the consonantbased evaluation, i.e. Metrics 2, shows that the performance is not that bad.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "Perfect match of the transliterated and gold word is considered for word-level accuracy. Bengali, Gujarati, Punjabi and Urdu yield the very high transliteration accuracy. The best system (Hindi-Pujabi) gives an accuracy of nearly 70% on word-level whereas Hindi-Urdu gives the highest accuracy on character-level. Urdu transliteration accuracy being so high is strengthened from the fact that linguistically the division between Hindi and Urdu is not well-founded (Masica, 1993, p. 27-30) (Rai, 2001) . We can infer from the results of the word-level accuracies of the whole word that these transliteration systems cannot be directly used by a system for further processing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 464, |
|
"end": 488, |
|
"text": "(Masica, 1993, p. 27-30)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 489, |
|
"end": 500, |
|
"text": "(Rai, 2001)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "3.3.3" |
|
}, |
|
{ |
|
"text": "In order to re-confirm the validity of the output in practical scenarios, we also performed humanbased evaluation. For human evaluations 10 short Hindi sentences, with an average length of 10 words, were randomly selected. All these sentences were transliterated by all the 7 transliteration systems and the results of each were given to several evaluators 8 to rate the sentences on the scale of 0 to 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "3.3.4" |
|
}, |
|
{ |
|
"text": "\u2022 Score 0: Non-Sense. If the sentence makes no sense to one at all. \u2022 Score 1: Some parts make sense but is not comprehensible over all.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "3.3.4" |
|
}, |
|
{ |
|
"text": "\u2022 Score 2: Comprehensible but has quite few errors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "3.3.4" |
|
}, |
|
{ |
|
"text": "\u2022 Score 3: Comprehensible, containing an error or two.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "3.3.4" |
|
}, |
|
{ |
|
"text": "\u2022 Score 4: Perfect. Contains minute errors, if any. Table 5 contains the average scores given by evaluators for the outputs of various transliteration systems. The results clearly depict the ease that a reader faced while evaluating the sentences. According to these scores, Gujarati, Bengali and Telugu transliteration system gives nearly perfect outputs, followed by the transliteration systems of Urdu and Malayalam which can be directly used as a reading aid. Tamil and Punjabi transliterations were comprehensible but contained a considerable number of errors. 9 ACC stands for Word level accuracy; Char(all) stands for Character level accuracy; Char(consonant) stands for Character level accuracy considering only the consonants; Word(consonant) stands for Word level accuracy considering only the consonants ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 59, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Human Evaluation", |
|
"sec_num": "3.3.4" |
|
}, |
|
{ |
|
"text": "We have proposed a method for transliteration of Hindi into various other Indian languages as a reading aid for non-Hindi readers. We have chosen a complete statistical approach for the same and extracted training data automatically from parallel corpora. An adaptation of Soundex algorithm for a normalized language representation has been integrated with LCS algorithm to extract training transliteration pairs from the aligned language-pairs. All the transliteration systems return transliterations, good enough to understand the text, which is strengthened from the evaluators' score as well as from the character-level ac-curacies. However, word-level accuracies of these transliteration systems prompt them not to be used as a tool for text processing applications. Further, we are training transliteration models between all these 8 Indian languages.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "http://thottingal.in/soundex/soundex.html 6 Source: http://en.wikipedia.org/wiki/Indian Script Code for Information Interchange PACLIC-27", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Annotators were bi-literate graduates or undergraduate students, in the age of 20-24 with either Hindi or the transliterated language as their mother tongue", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Annotators were bi-literate, some of who did not know how to read Hindi, graduates or undergraduate students, in the age of 20-24 with the transliterated language as their mother tongue", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Improved minimum error rate training in moses", |
|
"authors": [ |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barry", |
|
"middle": [], |
|
"last": "Haddow", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean-Baptiste", |
|
"middle": [], |
|
"last": "Fouet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "The Prague Bulletin of Mathematical Linguistics", |
|
"volume": "91", |
|
"issue": "1", |
|
"pages": "7--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicola Bertoldi, Barry Haddow, and Jean-Baptiste Fouet. 2009. Improved minimum error rate train- ing in moses. The Prague Bulletin of Mathematical Linguistics, 91(1):7-16.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Indian Script Code for Information Interchange", |
|
"authors": [], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bureau of Indian Standards BIS. 1991. Indian Script Code for Information Interchange, ISCII. IS 13194.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Rule-Based Phonetic Matching Approach for Hindi and Marathi", |
|
"authors": [ |
|
{ |
|
"first": "Sandeep", |
|
"middle": [], |
|
"last": "Chaware", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Srikantha", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Computer Science & Engineering", |
|
"volume": "", |
|
"issue": "3", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sandeep Chaware and Srikantha Rao. 2011. Rule- Based Phonetic Matching Approach for Hindi and Marathi. Computer Science & Engineering, 1(3).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Experiences with english-hindi, english-tamil and english-kannada transliteration tasks at news", |
|
"authors": [ |
|
{ |
|
"first": "Manoj", |
|
"middle": [], |
|
"last": "Kumar Chinnakotla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Om", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Damani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Named Entities Workshop Shared Task on Transliteration", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "44--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manoj Kumar Chinnakotla and Om P Damani. 2009. Experiences with english-hindi, english-tamil and english-kannada transliteration tasks at news 2009. In Proceedings of the 2009 Named Entities Work- shop Shared Task on Transliteration, pages 44-47. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Hindipunjabi machine transliteration system (for machine translation system)", |
|
"authors": [ |
|
{ |
|
"first": "Vishal", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gurpreet", |
|
"middle": [], |
|
"last": "Singh Lehal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "George Ronchi Foundation Journal", |
|
"volume": "64", |
|
"issue": "1", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vishal Goyal and Gurpreet Singh Lehal. 2009. Hindi- punjabi machine transliteration system (for machine translation system). George Ronchi Foundation Journal, Italy, 64(1):2009.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Transliteration among indian languages using wx notation. g Semantic Approaches in Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "Rohit", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pulkit", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iiit", |
|
"middle": [], |
|
"last": "Allahabad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sapan", |
|
"middle": [], |
|
"last": "Diwakar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rohit Gupta, Pulkit Goyal, Allahabad IIIT, and Sapan Diwakar. 2010. Transliteration among indian lan- guages using wx notation. g Semantic Approaches in Natural Language Processing, page 147.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Mining hindi-english transliteration pairs from online hindi lyrics", |
|
"authors": [ |
|
{ |
|
"first": "Kanika", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Monojit", |
|
"middle": [], |
|
"last": "Choudhury", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kalika", |
|
"middle": [], |
|
"last": "Bali", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "23--25", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kanika Gupta, Monojit Choudhury, and Kalika Bali. 2012. Mining hindi-english transliteration pairs from online hindi lyrics. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012), pages 23-25.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Girish Nath Jha. 2010. The tdil program and the indian language corpora initiative (ilci)", |
|
"authors": [ |
|
{ |
|
"first": "Jagadeesh", |
|
"middle": [], |
|
"last": "Jagarlamudi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kumaran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the Seventh Conference on International Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "80--87", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jagadeesh Jagarlamudi and A Kumaran. 2008. Cross- Lingual Information Retrieval System for Indian Languages. In Advances in Multilingual and Multi- modal Information Retrieval, pages 80-87. Springer. Girish Nath Jha. 2010. The tdil program and the indian language corpora initiative (ilci). In Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC 2010). European Language Resources Association (ELRA).", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Machine transliteration", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Graehl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Computational Linguistics", |
|
"volume": "24", |
|
"issue": "4", |
|
"pages": "599--612", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Knight and Jonathan Graehl. 1998. Ma- chine transliteration. Computational Linguistics, 24(4):599-612.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Statistical phrase-based translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Franz", |
|
"middle": [ |
|
"Josef" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Marcu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "48--54", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology- Volume 1, pages 48-54. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Moses: Open source toolkit for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hieu", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Birch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marcello", |
|
"middle": [], |
|
"last": "Federico", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nicola", |
|
"middle": [], |
|
"last": "Bertoldi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brooke", |
|
"middle": [], |
|
"last": "Cowan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wade", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Moran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Zens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--180", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Pro- ceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177-180. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "The Dravidian Languages", |
|
"authors": [ |
|
{ |
|
"first": "Bhadriraju", |
|
"middle": [], |
|
"last": "Krishnamurti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bhadriraju Krishnamurti. 2003. The Dravidian Lan- guages. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Urdu-hindi-urdu machine translation: Some problems", |
|
"authors": [], |
|
"year": 2012, |
|
"venue": "Health", |
|
"volume": "666", |
|
"issue": "", |
|
"pages": "99--100", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amba Kulkarni, Rahmat Yousufzai, and Pervez Ahmed Azmi. 2012. Urdu-hindi-urdu machine translation: Some problems. Health, 666:99-1.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Generating paired transliterated-cognates using multiple pronunciation characteristics from Web Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Jin-Shea", |
|
"middle": [], |
|
"last": "Kuo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ying-Kuei", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "PACLIC", |
|
"volume": "18", |
|
"issue": "", |
|
"pages": "275--282", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jin-Shea Kuo and Ying-Kuei Yang. 2004. Gener- ating paired transliterated-cognates using multiple pronunciation characteristics from Web Corpora. In PACLIC, volume 18, pages 275-282.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A hindi to urdu transliteration system", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Gurpreet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lehal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Tejinder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Saini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of ICON-2010: 8th International Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gurpreet S Lehal and Tejinder S Saini. 2010. A hindi to urdu transliteration system. In Proceedings of ICON-2010: 8th International Conference on Nat- ural Language Processing, Kharagpur.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Binary codes capable of correcting deletions, insertions and reversals", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Vladimir I Levenshtein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1966, |
|
"venue": "Soviet physics doklady", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. In Soviet physics doklady, volume 10, page 707.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "A hybrid model for urdu hindi transliteration", |
|
"authors": [ |
|
{ |
|
"first": "Abbas", |
|
"middle": [], |
|
"last": "Malik", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Laurent", |
|
"middle": [], |
|
"last": "Besacier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christian", |
|
"middle": [], |
|
"last": "Boitet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "177--185", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abbas Malik, Laurent Besacier, Christian Boitet, and Pushpak Bhattacharyya. 2009. A hybrid model for urdu hindi transliteration. In Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration, pages 177-185. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The Indo-Aryan Languages", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Colin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Masica", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin P Masica. 1993. The Indo-Aryan Languages. Cambridge University Press.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Machine transliteration of proper names", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Matthews", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David Matthews. 2007. Machine transliteration of proper names. Master's Thesis, University of Ed- inburgh, Edinburgh, United Kingdom.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Named entity recognition for Indian languages. NER for South and South East Asian Languages", |
|
"authors": [ |
|
{ |
|
"first": "Animesh", |
|
"middle": [], |
|
"last": "Nayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ravi Kiran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pawandeep", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Animesh Nayan, B Ravi Kiran Rao, Pawandeep Singh, Sudip Sanyal, and Ratna Sanyal. 2008. Named en- tity recognition for Indian languages. NER for South and South East Asian Languages, page 97.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Differential processing of consonants and vowels in lexical access through reading", |
|
"authors": [ |
|
{ |
|
"first": "Boris", |
|
"middle": [], |
|
"last": "New", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ara\u00fajo", |
|
"middle": [], |
|
"last": "Ver\u00f3nica", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thierry", |
|
"middle": [], |
|
"last": "Nazzi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Psychological Science", |
|
"volume": "19", |
|
"issue": "12", |
|
"pages": "1223--1227", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Boris New, Ver\u00f3nica Ara\u00fajo, and Thierry Nazzi. 2008. Differential processing of consonants and vowels in lexical access through reading. Psychological Sci- ence, 19(12):1223-1227.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Improved Statistical Alignment Models", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "440--447", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. J. Och and H. Ney. 2000. Improved Statisti- cal Alignment Models. pages 440-447, Hongkong, China, October.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Minimum error rate training in statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Compu- tational Linguistics-Volume 1, pages 160-167. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Letter to sound rules for accented lexicon compression", |
|
"authors": [ |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Pagel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Lenzo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Black", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vincent Pagel, Kevin Lenzo, and Alan Black. 1998. Letter to sound rules for accented lexicon compres- sion. arXiv preprint cmp-lg/9808010.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Gh-map-rule based token mapping for translation between sibling language pair: Gujarati-hindi", |
|
"authors": [ |
|
{ |
|
"first": "Kalyani", |
|
"middle": [], |
|
"last": "Patel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jyoti", |
|
"middle": [], |
|
"last": "Pareek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of International Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kalyani Patel and Jyoti Pareek. 2009. Gh-map-rule based token mapping for translation between sibling language pair: Gujarati-hindi. In Proceedings of In- ternational Conference on Natural Language Pro- cessing.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Transliteration alignment", |
|
"authors": [ |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Pervouchine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haizhou", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "136--144", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vladimir Pervouchine, Haizhou Li, and Bo Lin. 2009. Transliteration alignment. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Vol- ume 1-Volume 1, pages 136-144. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Hindi nationalism", |
|
"authors": [ |
|
{ |
|
"first": "Alok", |
|
"middle": [], |
|
"last": "Rai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Orient Blackswan", |
|
"volume": "13", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alok Rai. 2001. Hindi nationalism, volume 13. Orient Blackswan.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "A statistical model for unsupervised and semi-supervised transliteration mining", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Russell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Odell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1918, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "469--477", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "R Russell and M Odell. 1918. Soundex. US Patent, 1. Hassan Sajjad, Alexander Fraser, and Helmut Schmid. 2012. A statistical model for unsupervised and semi-supervised transliteration mining. In Proceed- ings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 469-477. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Some experiments in mining named entity transliteration pairs from comparable corpora", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Saravanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kumaran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "CLIA", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "K Saravanan and A Kumaran. 2008. Some experi- ments in mining named entity transliteration pairs from comparable corpora. CLIA 2008, page 26.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Swathanthra Indian Language Computing Project", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Silpa", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Silpa. 2010. Swathanthra Indian Language Computing Project. [Online].", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "An approach towards construction and application of multilingual indo-wordnet", |
|
"authors": [ |
|
{ |
|
"first": "Manish", |
|
"middle": [], |
|
"last": "Sinha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mahesh", |
|
"middle": [], |
|
"last": "Reddy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "3rd Global Wordnet Conference (GWC 06)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manish Sinha, Mahesh Reddy, and Pushpak Bhat- tacharyya. 2006. An approach towards construction and application of multilingual indo-wordnet. In 3rd Global Wordnet Conference (GWC 06), Jeju Island, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Extraction of translation equivalents from parallel corpora", |
|
"authors": [ |
|
{ |
|
"first": "J\u00f6rg", |
|
"middle": [], |
|
"last": "Tiedemann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of the 11th Nordic conference on computational linguistics", |
|
"volume": "80", |
|
"issue": "", |
|
"pages": "120--128", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00f6rg Tiedemann. 1998. Extraction of translation equiv- alents from parallel corpora. In Proceedings of the 11th Nordic conference on computational lin- guistics, pages 120-128. Center f\u00f6r Sprogteknologi and Department of Genral and Applied Lingusitcs (IAAS), University of Copenhagen, Njalsgade 80, DK-2300 Copenhagen S, Denmark.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "English to Urdu transliteration: An application of Soundex algorithm", |
|
"authors": [ |
|
{ |
|
"first": "Muhammad Adeel", |
|
"middle": [], |
|
"last": "Zahid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naveed Iqbal", |
|
"middle": [], |
|
"last": "Rao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adil Masood", |
|
"middle": [], |
|
"last": "Siddiqui", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Information and Emerging Technologies (ICIET), 2010 International Conference on", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--5", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Muhammad Adeel Zahid, Naveed Iqbal Rao, and Adil Masood Siddiqui. 2010. English to Urdu transliteration: An application of Soundex algo- rithm. In Information and Emerging Technologies (ICIET), 2010 International Conference on, pages 1-5. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Machine transliteration: Leveraging on third languages", |
|
"authors": [ |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiangyu", |
|
"middle": [], |
|
"last": "Duan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Pervouchine", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haizhou", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1444--1452", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Min Zhang, Xiangyu Duan, Vladimir Pervouchine, and Haizhou Li. 2010. Machine transliteration: Lever- aging on third languages. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1444-1452. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Whitepaper of news 2012 shared task on machine transliteration", |
|
"authors": [ |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haizhou", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kumaran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 4th Named Entity Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Min Zhang, Haizhou Li, Ming Liu, and A Kumaran. 2012. Whitepaper of news 2012 shared task on machine transliteration. In Proceedings of the 4th Named Entity Workshop, pages 1-9. Association for Computational Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Figure shows kitAb (book)", |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "A part of Indic-Soundex, mappings of various Indian characters to a common representation, English", |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Figure showsV ikram, a person name, written in Hindi and Bengali respectively, with their gloss and soundex code. The two forms are considered transliteration pairs if they have a high translation probability and don't differ by more than 1 character.", |
|
"uris": null |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"num": null, |
|
"text": "Figure depictsan example of phrase-based alignment on kitAb (book), written in Hindi (top) and Urdu (bottom).", |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table/>", |
|
"num": null, |
|
"text": "Written scripts of various Indian languages Language Script" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>language-pairs</td><td/><td/><td/></tr><tr><td>pair</td><td>#pairs</td><td colspan=\"2\">accu. entropy</td></tr><tr><td colspan=\"3\">Hin-Ben 103706 0.84</td><td>0.44</td></tr><tr><td colspan=\"3\">Hin-Guj 107677 0.89</td><td>0.28</td></tr><tr><td colspan=\"2\">Hin-Mal 20143</td><td>0.86</td><td>0.55</td></tr><tr><td colspan=\"2\">Hin-Pun 23098</td><td>0.84</td><td>0.39</td></tr><tr><td colspan=\"2\">Hin-Tam 10741</td><td>0.68</td><td>0.73</td></tr><tr><td>Hin-Tel</td><td>45890</td><td>0.95</td><td>0.76</td></tr><tr><td colspan=\"3\">Hin-Urd 284932 0.91</td><td>0.79</td></tr></table>", |
|
"num": null, |
|
"text": "This table shows various language pairs with the number of word-pairs, accuracies of manually annotated pairs and the alignment entropy of all the langauges. Here accu. represents average accuracy of the" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td>Metrics 1</td><td/><td/><td/><td>Metrics 2</td><td/><td/></tr><tr><td colspan=\"2\">Lang ACC 9</td><td>Mean</td><td colspan=\"3\">MRR Map ref Char</td><td>Char</td><td>Word</td><td>#Pairs</td></tr><tr><td/><td/><td>F-score</td><td/><td/><td colspan=\"3\">(all) (consonant) (consonant)</td><td/></tr><tr><td>Ben</td><td>0.50</td><td>0.89</td><td>0.57</td><td>0.73</td><td>0.89</td><td>0.94</td><td>0.72</td><td>260</td></tr><tr><td>Guj</td><td>0.59</td><td>0.89</td><td>0.67</td><td>0.84</td><td>0.91</td><td>0.97</td><td>0.86</td><td>260</td></tr><tr><td>Mal</td><td>0.11</td><td>0.69</td><td>0.26</td><td>0.55</td><td>0.73</td><td>0.94</td><td>0.40</td><td>260</td></tr><tr><td>Pun</td><td>0.60</td><td>0.90</td><td>0.69</td><td>0.83</td><td>0.89</td><td>0.93</td><td>0.81</td><td>260</td></tr><tr><td>Tel</td><td>0.27</td><td>0.75</td><td>0.32</td><td>0.49</td><td>0.78</td><td>0.93</td><td>0.71</td><td>260</td></tr><tr><td>Urd</td><td>0.58</td><td>0.89</td><td>0.67</td><td>0.81</td><td>0.88</td><td>0.89</td><td>0.70</td><td>260</td></tr></table>", |
|
"num": null, |
|
"text": "Evaluation Metrics on Gold data." |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td>Metrics 1</td><td/><td/><td/><td>Metrics 2</td><td/><td/></tr><tr><td colspan=\"2\">Lang ACC</td><td>Mean</td><td colspan=\"3\">MRR Map ref Char</td><td>Char</td><td>Word</td><td>#Pairs</td></tr><tr><td/><td/><td>F-score</td><td/><td/><td colspan=\"3\">(all) (consonant) (consonant)</td><td/></tr><tr><td>Ben</td><td>0.60</td><td>0.91</td><td>0.70</td><td>0.87</td><td>0.93</td><td>0.95</td><td>0.87</td><td>1263</td></tr><tr><td>Mal</td><td>0.15</td><td>0.78</td><td>0.31</td><td>0.61</td><td>0.83</td><td>0.88</td><td>0.71</td><td>198</td></tr><tr><td>Pun</td><td>0.69</td><td>0.92</td><td>0.76</td><td>0.88</td><td>0.70</td><td>0.73</td><td>0.47</td><td>1475</td></tr><tr><td>Tam</td><td>0.31</td><td>0.82</td><td>0.38</td><td>0.57</td><td>0.82</td><td>0.86</td><td>0.58</td><td>58</td></tr><tr><td>Tel</td><td>0.34</td><td>0.87</td><td>0.49</td><td>0.76</td><td>0.87</td><td>0.93</td><td>0.82</td><td>528</td></tr><tr><td>Urd</td><td>0.67</td><td>0.92</td><td>0.73</td><td>0.84</td><td>0.92</td><td>0.94</td><td>0.83</td><td>720</td></tr></table>", |
|
"num": null, |
|
"text": "Evaluation Metrics on Indo-WordNet data." |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>ious transliteration systems</td><td/></tr><tr><td>language</td><td>avg. score</td></tr><tr><td>Bengali</td><td>3.6</td></tr><tr><td>Gujarati</td><td>3.8</td></tr><tr><td>Malayalam</td><td>3.3</td></tr><tr><td>Punjabi</td><td>1.9</td></tr><tr><td>Tamil</td><td>2.5</td></tr><tr><td>Telugu</td><td>3.6</td></tr><tr><td>Urdu</td><td>3.2</td></tr></table>", |
|
"num": null, |
|
"text": "Average score (out of 4) by evaluators for var-" |
|
} |
|
} |
|
} |
|
} |