|
{ |
|
"paper_id": "Y04-1020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:35:19.844638Z" |
|
}, |
|
"title": "Tiny Corpus Applications with Transformation-Based Error-Driven Learning: Evaluations of Automatic Grammar Induction and Partial Parsing of SaiSiyat", |
|
"authors": [ |
|
{ |
|
"first": "Zhemin", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "lin.zhemin@gmail.com" |
|
}, |
|
{ |
|
"first": "Li-May", |
|
"middle": [], |
|
"last": "Sung", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper reports a preliminary result on automatic grammar induction based on the framework of Brill and Markus (1992) and binary-branching syntactic parsing of Esperanto and SaiSiyat (a Formosan language). Automatic grammar induction requires large corpus and is found implausible to process endangered minor languages. Syntactic parsing, on the contrary, needs merely tiny corpus and works along with corpora segmented by intonation-unit which results in high accuracy.", |
|
"pdf_parse": { |
|
"paper_id": "Y04-1020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper reports a preliminary result on automatic grammar induction based on the framework of Brill and Markus (1992) and binary-branching syntactic parsing of Esperanto and SaiSiyat (a Formosan language). Automatic grammar induction requires large corpus and is found implausible to process endangered minor languages. Syntactic parsing, on the contrary, needs merely tiny corpus and works along with corpora segmented by intonation-unit which results in high accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "SaiSiyat is a Formosan Austronesian language with less than 4,677 speakers (1995 census data). It is an SOV language with four verbal voices, six case markers, but without declensions (Yeh (2000) ). As other Austronesian languages in Taiwan, SaiSiyat writing system is just officially standardised. 1 Few written materials are published in this language and the main source of its corpora is linguistic fieldwork in form of transcription of oral narration and conversation. The tiny scale of corpora makes it hard to do probabilistic natural language processing. Other affordable methods to build a syntactically tagged treebank are thus subjects to our work.", |
|
"cite_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 195, |
|
"text": "(Yeh (2000)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "SaiSiyat parallels to ancient Egyptian in terms of the description of Rosmorduc (published on Internet). Part of its grammar is still unsure. Grammatical errors are found in texts. The absence of punctuation makes the corpus impossible to be proceeded at sentence level. In order to partially parse this language, the applications of Kullback-Leibler divergence and transformation-based error-driven learning (TBL) are evaluated in the paper.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "NTU SaiSiyat corpus (?))contains 27 texts, 3702 intonation units (IUs), 12065 words. Its notation follows the convention of Du Bois (1993) . Sixteen narrations are composed in the corpus, including 4 Pear Stories (a colour mute film), 8 Frog Stories (a sketchbook by Mayer (1980) ) and 4 indigenous legends. The corpus is tagged with a TBL tagger in reduced Penn Treebank Tagset. The overall accuracy is 88.11%. (Lin (2004) ) Additional collected texts are added in our experiment to enlarge the corpus. An example of original and tagged data segment follows:", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 138, |
|
"text": "Bois (1993)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 267, |
|
"end": 279, |
|
"text": "Mayer (1980)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 412, |
|
"end": 423, |
|
"text": "(Lin (2004)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "1. kor-koring min-a'rem korkoring/NN mina'rem/VB Red-discipline MIN-rest \"A child was asleep.\" Esperanto is planned as an international help language in 1887 by L. Zamenhof. 2 Large corpora of authentic journals, translated works and archives of Yahoo!Groups are available online for free. Its declension and conjugation are regular, permitting us to tag the texts easily, quickly and correctly. We choose \"Monato\" archive, a periodic written in the language, as a sentence-based contrast. See table 1 for corpora statistics. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
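Because Esperanto inflection is fully regular, tagging can be sketched as pure suffix matching. The sketch below is our own illustration, not the tagger used for the "Monato" archive; the closed-class list is truncated, and the accusative/plural tag names (NNA, NNAS, JJA, JJAS) are borrowed from the extended tagset discussed in §3.3.

```python
# Sketch of a suffix-driven Esperanto tagger (illustrative only).
# Esperanto endings are fully regular: -o noun, -a adjective, -e adverb,
# -as/-is/-os finite verb tenses, -i infinitive; -j plural, -n accusative.
CLOSED_CLASS = {"la": "DT", "kaj": "CC", "en": "IN", "de": "IN"}  # truncated

SUFFIX_TAGS = [                       # longest suffixes first
    ("ojn", "NNAS"), ("oj", "NNS"), ("on", "NNA"), ("o", "NN"),
    ("ajn", "JJAS"), ("aj", "JJ"), ("an", "JJA"), ("a", "JJ"),
    ("as", "VB"), ("is", "VBD"), ("os", "VB"), ("i", "VB"), ("e", "RB"),
]

def tag_word(word):
    w = word.lower().strip(".,!?")
    if w in CLOSED_CLASS:
        return CLOSED_CLASS[w]
    for suffix, tag in SUFFIX_TAGS:
        if w.endswith(suffix):
            return tag
    return "UNK"

print([(w, tag_word(w)) for w in "La hundo bojis".split()])
# [('La', 'DT'), ('hundo', 'NN'), ('bojis', 'VBD')]
```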
|
{ |
|
"text": "Kullback-Leibler divergence as reported in Brill and Markus (1992) measures the distributional similarity of two sets of tags. For each set of tags in similar environment, a binary-branching rule tag x \u2192 tag y tag z is built and their similarity measured. A context-free grammar is hence reduced to finding the nearest path to collapse a sentence (or a segment of words, in case of SaiSiyat) into a single tag. The relative entropy (1) of one set of tags is first calculated to estimate the amount of extra information necessary to describe another set. The divergence (2) between two sets is the sum of the amount of necessary data for describing each other, serving as a measure of the difference of their distributions.", |
|
"cite_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 66, |
|
"text": "Brill and Markus (1992)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrase Structure Grammar with K-L Divergence", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "D(P 1 ||P 2 ) = x\u2208Env P 1 (x) * log P 1 (x) P 2 (x) (1) D 1,2 = D(P 1 ||P 2 ) + D(P 2 ||P 1 )", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Phrase Structure Grammar with K-L Divergence", |
|
"sec_num": "2" |
|
}, |
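Equations (1) and (2) translate directly into code. A minimal sketch, assuming the two distributions are dictionaries over a shared, smoothed set of environments (zero probabilities in P2 must be smoothed away before calling, since the ratio is otherwise unbounded):

```python
import math

def relative_entropy(p1, p2):
    # Equation (1): D(P1 || P2) over the environments seen for P1.
    # p1 and p2 map environments to probabilities; p2 must be non-zero
    # wherever p1 is (additive smoothing is assumed to be done upstream).
    return sum(p * math.log(p / p2[x]) for x, p in p1.items() if p > 0)

def divergence(p1, p2):
    # Equation (2): symmetrise by summing both directions.
    return relative_entropy(p1, p2) + relative_entropy(p2, p1)

# Toy check: environment distributions for PRP versus the pair DT NN.
p_prp = {"is _ .": 0.7, "saw _ .": 0.3}
p_dtnn = {"is _ .": 0.6, "saw _ .": 0.4}
print(divergence(p_prp, p_dtnn))  # small value: similar distributions
```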
|
{ |
|
"text": "The \"environment\" may be surrounding words or tags. For example, This is John/NNP . and This is a/DT chair/NN . , we find NNP and DT-NN occurring between \"is\" and \".\". The environment is schematically written as word word. However, we may not have enough environments in case of a tiny corpus. word word can be replaced by tag tag, but lexical information is discarded if we made this change. As a result, a transitive verb is confounded with an intransitive verb with a preposition (VBD \u2192 VBD IN in Brill & Marcus' example) for their high frequency in the same environment. They try to multiply the divergence with the mutual information (3) of tag y and tag z in order to exclude grammatically unbounded set of tags, such as VBD IN (4).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Phrase Structure Grammar with K-L Divergence", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "i ) = \u2212 tag j \u2208T agset p(tag j |tag i ) * log 2 p(tag j |tag i ) (3) D(P 1 ||P 2 ) * (H(tag y , tag z , ) \u2212 H(tag y , ))", |
|
"eq_num": "(4)" |
|
} |
|
], |
|
"section": "H(tag", |
|
"sec_num": null |
|
}, |
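Equations (3) and (4) can be estimated directly from a tagged corpus. A minimal sketch, under our reading that H(·) in (4) is the entropy (3) of the tags immediately following the given context:

```python
import math
from collections import Counter

def entropy_after(context, tags):
    # Equation (3): H(context) = -sum_j p(tag_j|context) log2 p(tag_j|context),
    # estimated from the tags immediately following `context` in `tags`.
    n = len(context)
    followers = Counter(
        tags[i + n] for i in range(len(tags) - n)
        if tuple(tags[i:i + n]) == context
    )
    total = sum(followers.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in followers.values())

def adjusted(div, tag_y, tag_z, tags):
    # Equation (4): scale the divergence by H(tag_y tag_z) - H(tag_y),
    # penalising loosely bound pairs such as VBD IN.
    return div * (entropy_after((tag_y, tag_z), tags)
                  - entropy_after((tag_y,), tags))

corpus_tags = ["PRP", "VB", "DT", "NN", "VB", "DT", "NN", "ASP"]
print(adjusted(1.06991078978, "DT", "NN", corpus_tags))
```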
|
{ |
|
"text": "The example is found in our experiment, showing that the rule of pronoun \u2192 noun phrase (PRP \u2192 DT NN) gets adjusted to a lower (better) score, Divergence: PRP DT NN 1.06991078978 Adjusted:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "H(tag", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "PRP DT NN 0.183881076959", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "H(tag", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Since the system may find a large sum of rules, only the 15 best scored rules of each possible set of tag y tag z are applied in path finding. For each path which permits to reduce a string into a single tag, the sum of divergences along the path is calculated. The path with the lowest (thus the best) score is considered the correct one. For example, korkoring/NN min'itol/VB ila/ASP (lit. \"child rest PER-FECT MARKER\") may be reduced by the following rules:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "H(tag", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(begin state) NN VB ASP 1 VB VB ASP 0.0864877817911 NN VB 2 NN NN VB 0.235779052654 NN (end)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "H(tag", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The path is NN VB ASP \u2192 NN VB \u2192 NN, scored (1) + (2) = 0.32226683444510001. We adopt the Brill & Marcus model, and filtered each corpus by the following criteria:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "H(tag", |
|
"sec_num": null |
|
}, |
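Path finding over the induced binary rules can be written as an exhaustive recursion, which is exactly what makes the search space explode for longer sentences (see §2.3). A minimal sketch reproducing the NN VB ASP example above:

```python
def best_reduction(tags, rules):
    # Exhaustively collapse a tag sequence into a single tag with binary
    # rules. `rules` maps a pair (tag_y, tag_z) to a list of
    # (tag_x, divergence) candidates (at most the 15 best, as in the text).
    # Returns the cheapest (total_score, applied_rules), or None.
    if len(tags) == 1:
        return (0.0, [])
    best = None
    for i in range(len(tags) - 1):
        pair = (tags[i], tags[i + 1])
        for tag_x, score in rules.get(pair, []):
            sub = best_reduction(tags[:i] + (tag_x,) + tags[i + 2:], rules)
            if sub is not None and (best is None or score + sub[0] < best[0]):
                best = (score + sub[0], [(tag_x, pair)] + sub[1])
    return best

rules = {("VB", "ASP"): [("VB", 0.0864877817911)],
         ("NN", "VB"): [("NN", 0.235779052654)]}
print(best_reduction(("NN", "VB", "ASP"), rules))
# ~ (0.3222668344451, [('VB', ('VB', 'ASP')), ('NN', ('NN', 'VB'))])
```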
|
{ |
|
"text": "\u2022 No UNK (unknown) tag.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "H(tag", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Sentence length: 2 to 20 words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "H(tag", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 The correctness is judged by maximum match to government-binding theory.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "H(tag", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In case of AT JJ NN, for example, the one bracketed as (AT (JJ NN)) is considered correct.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "H(tag", |
|
"sec_num": null |
|
}, |
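The first two filtering criteria are mechanical; a minimal sketch over sentences represented as lists of (word, tag) pairs (the token "foo" is a made-up placeholder):

```python
def passes_filter(sentence):
    # Keep a sentence only if it is 2-20 words long and contains no
    # UNK (unknown) tag, per the criteria above.
    return (2 <= len(sentence) <= 20
            and all(tag != "UNK" for _, tag in sentence))

corpus = [[("korkoring", "NN"), ("mina'rem", "VB")],          # kept
          [("hayza'", "EX")],                                  # too short
          [("foo", "UNK"), ("mwa:i'", "VB"), ("ila", "ASP")]]  # has UNK
print([s for s in corpus if passes_filter(s)])
```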
|
{ |
|
"text": "Since there are more than 1 million words in the Esperanto corpus, we apply the word word schema in order to get a more precise measurement. 10,384 rules are generated at the first stage. After the 15 best rules are chosen, 6,980 rules remain. The method is found to require a huge search space, consuming far more computation time than we could afford. Therefore only sentences with 3 to 5 words are reported for their correctness (see table 2). Average offset and average search space of 10 sentences of each length are reported in the table. There is a good reason not to give each of them a score. Since merely the path with lowest/best score are considered right, and we have no external data to decide if some higher scored rule should be the right one, we can just demonstrate the distance between our result and the ideal. Among 30 test sentences, only 1 sentence is parsed correct.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Esperanto Corpus", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Below is an example of one test sentence of each length, the offset of the correct path, the search space, path score and how they are reduced into one tag. Non-terminal labels are ignored in the model. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Esperanto Corpus", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Phenomena such as repair, repetition and recover occur frequently in oral data. The constituent of [spec,CP] (e.g. complementiser that) tends to stay at the final position of the main IU. Case marker (CM) and its marked noun (NN) are often separated in two conjoint IUs whenever the speaker needs time to recall a word. However, a IU-based corpus seems to provide more information to the extent of frequent collocating constituents. For example, for the following input, we induce easily the right rule of VB \u2192 VB ASP:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SaiSiyat Corpus", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "korkoring/NN mina'rem/VB (lit. \"Child sleeps.\") ahoe'/NN mwa:i'/VB ila/ASP (lit. \"Dog came ASP.\") PACLIC 18, December 8th-10th, 2004, Waseda University, Tokyo", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SaiSiyat Corpus", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "This observation is further proven by our experiment. 3,473 rules are first generated and 2,980 rules remain for finding paths. The correctness of 10 IUs of lengths 3, 4 and 5 are reported in table 3. The result is surprisingly good. Among 30 test sentences, 13 are correct (i.e. with the lowest score). It is observed, however, the result declines quickly as IU length increases.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SaiSiyat Corpus", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "The correct path to parse a sentence is not always found to be the lowest scored one. We observed two major problems preventing this model to be useful for our task. First, some constituents tend to conjoin firmly, causing correct path to be scored either the best or far from the best. It is clear that the low divergence of (VB VB) causes other paths incompetent. Second, the huge search space enforces Brill & Marcus to implement the beam search. Without implementing this, we suffer for the computation time. The search space is calculated in the following formula:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "space = t (n\u22121) * f (n) f (n) = 2, n = 2 f (n) = 5, n = 3 f (n) = n\u22121 i=3 i * (i+1) 2 \u2212 (n \u2212 3) = n 3 \u22127n\u22126 6 , n > 3", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "2.3" |
|
}, |
|
{ |
|
"text": "Let n be sentence length, t be the average amount of the left value of each right hand rule (in our case, 15), the possible paths for a 15-word sentence is approximately 1.6 * 10 1 9. We once tried to parse a simple sentence like \"Alie vi povas ser\u0109i nur en la Anta\u0217parolo kaj Ekzercaro de la Fundamento de Esperanto.\" and the computer hanged. This is definitely uncomputable. Brill (1993) offers another way to produce parsing tree. Each sentence is parsed in a naive manner before being fed into a learner. The learner holds the result of a iterative learning process of comparing the input tree and the golden corpus (\"truth\"). The highest scored rule in each iteration is acquired. The TBL model is shown in figure 1.", |
|
"cite_spans": [ |
|
{ |
|
"start": 377, |
|
"end": 389, |
|
"text": "Brill (1993)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "2.3" |
|
}, |
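The formula is easy to check numerically. A small sketch that reproduces the figure of roughly 1.6 * 10^19 paths for a 15-word sentence with t = 15:

```python
def f(n):
    # Number of distinct reduction shapes for an n-word string,
    # following the piecewise definition above.
    if n == 2:
        return 2
    if n == 3:
        return 5
    return (n ** 3 - 7 * n - 6) // 6  # closed form, exact integer for n > 3

def search_space(n, t=15):
    # space = t^(n-1) * f(n): one of t candidate left-hand sides at each
    # of the n-1 binary reductions, times the number of shapes.
    return t ** (n - 1) * f(n)

print(f"{search_space(15):.2e}")  # 1.59e+19, matching the text
```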
|
{ |
|
"text": "For example, the sentence ray babaw hayza' ka 'ilaS. ray/IN babaw/RB hayza'/EX ka/CM 'ilaS/NN Loc above Ex Nom moon The dog barked ) ( ( The dog barked ) ) ( ( The dog ) barked ) ) If a rule fails to apply, nothing happens to the input sentence. The rule is scored by the number of non-crossing constituents in comparison to the golden corpus, e.g.,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Syntactic Tree Acquisition", |
|
"sec_num": "3" |
|
}, |
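The learning loop of figure 1 is short once the transformation templates and the scoring function are abstracted away. A minimal sketch; apply_rule and score are passed in as helpers (the scoring function being, for instance, the non-crossing measure described below):

```python
def tbl_learn(trees, gold, candidate_rules, apply_rule, score):
    # Transformation-based error-driven learning (figure 1): repeatedly
    # adopt the rule that most improves agreement with the golden corpus,
    # apply it to the whole working corpus, and record it; stop when no
    # rule yields a positive gain.
    learned = []
    while True:
        base = sum(score(t, g) for t, g in zip(trees, gold))
        best_rule, best_gain = None, 0.0
        for rule in candidate_rules:
            trial = [apply_rule(rule, t) for t in trees]
            gain = sum(score(t, g) for t, g in zip(trial, gold)) - base
            if gain > best_gain:
                best_rule, best_gain = rule, gain
        if best_rule is None:
            return learned
        trees = [apply_rule(best_rule, t) for t in trees]
        learned.append(best_rule)
```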
|
{ |
|
"text": "( ( The big ) ( dog ate ) ) Golden: ( ( The ( big dog ) ) ate ) There is one non-crossing constituent (\"the\") and 3 are crossing. The transformation is then scored 1 / 4.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test:", |
|
"sec_num": null |
|
}, |
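One reasonable reading of this score counts only bracketed spans over word positions; the paper's 1/4 also counts the single word "The" as a constituent, while this span-only sketch gives 1/3 for the same pair:

```python
def crosses(span, gold_spans):
    # A test span crosses a golden span when they overlap without
    # either containing the other.
    s, e = span
    return any(gs < s < ge < e or s < gs < e < ge for gs, ge in gold_spans)

def transformation_score(test_spans, gold_spans):
    # Fraction of test constituents that do not cross the golden tree.
    ok = sum(1 for sp in test_spans if not crosses(sp, gold_spans))
    return ok / len(test_spans)

# Test:   ( ( The big ) ( dog ate ) )   over word positions 0..3
# Golden: ( ( The ( big dog ) ) ate )
test = [(0, 2), (2, 4), (0, 4)]
gold = [(1, 3), (0, 3), (0, 4)]
print(transformation_score(test, gold))  # 0.333...: only (0, 4) survives
```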
|
{ |
|
"text": "Since we have to make a golden corpus of each language, 200 sentences/IUs are manually annotated. 150 sentences are randomly selected to train the learner and the remaining 50 sentences are randomly put in the test corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Test:", |
|
"sec_num": null |
|
}, |
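The 150/50 split is a seeded shuffle; a minimal sketch (the fixed seed is our own addition, for reproducibility):

```python
import random

def split_golden(annotated, n_train=150, seed=42):
    # Shuffle the 200 hand-annotated sentences/IUs and split them into
    # a training set and a held-out test set, as described above.
    items = list(annotated)
    random.Random(seed).shuffle(items)
    return items[:n_train], items[n_train:]

train, test = split_golden(range(200))
print(len(train), len(test))  # 150 50
```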
|
{ |
|
"text": "The accuracy of the naive parser is 29.17%. The size of training corpus, the number of acquired rules and accuracy in terms of Brill's scoring system are shown in table 4, the error rate shown in table 5. The accuracy is astonishingly low. In fact, we find the learner unable to seize good generalisation even inside the training corpus (see table 6). This may be caused by the complexity of constituent composition in Esperanto grammar. We will return to this issue in \u00a73.3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Esperanto Corpus", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Merely complete IUs are evaluated in the experiment since incomplete IUs do not form a close bracket. The naive accuracy of SaiSiyat corpus is 34.52%. The statistics is reported in table 7, the error rate in table 8.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SaiSiyat Corpus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The result is good (about 68%) but not good enough. The accuracy should be at least 80% for really practical task. The learner over-generalise too easily, preventing accurate parsing of complete IUs.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "SaiSiyat Corpus", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Esperanto written sentences are often long and complex. Its word order is more free then the English one. Even more, subject is often post-posed and object or adverb is often moved to [spec,CP] position. This implies that a reduced tagset may not be distinguishing enough to catch the language fact. The The tables show that a larger tagset works better then the reduced one. However, a larger tagset implies even larger training corpus. A training corpus with 500 Esperanto sentences is likely to result in high accuracy. This would be a dilemma if our purpose was to make the work faster and easier done.", |
|
"cite_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 193, |
|
"text": "[spec,CP]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "The applications of automatic grammar induction using Kullback-Leibler divergence and syntactic parser based on transformation-based error-driven learning are evaluated in this paper. Since most Formosan Austronesian corpora contain less than 20,000 words, we have to deal with every possibility to process the languages with a computer in the critical task of language preservation. K-L divergence profits from a large corpus and is helpful only when a segment of text contains less than 3 words. This would not be very practical. Although a TBL parser is not as appealing as demonstrated in Brill (1993) , the accuracy may be enhanced by a complex tagset or affordable (less than 1000) training corpus. It helps us at least to segment short phrases from continuous constituents and eases the work of building a human-polished treebank. 3 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 593, |
|
"end": 605, |
|
"text": "Brill (1993)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "A standardised spelling system of Formosan Austronesian languages is published by the Council of Indigenous Peoples, Executive Yuan. Diversity still exists among their users.2 Cf. http://eo.wikipedia.org/wiki/Esperanto", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Chinese version of this paper and more information regarding this topic is accessible at http://ljm.idv.tw/mywiki/DraftTinyCorpusApplication. Esperanto tagger used here can be downloaded at http://ljm.idv.tw/download/esptag-0.2.tar.gz.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Our thanks go to SaiSiyat informants. We appreciate Michael Tanangkingsing for his brilliant fieldwork reports, Guido van Rossum for Python programming language and Linus Torvalds for Linux operating system. Michael Tanangkingsing also reviewed this article and checked the grammar.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Automatically acquiring phrase structure using distributional analysis", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Markus", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "DARPA Speech and Natural Language Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "155--159", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brill, E. and M. Markus. 1992. Automatically acquiring phrase structure using distributional analysis. In DARPA Speech and Natural Language Workshop, pages 155-159.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A simple rule-based part of speech tagger", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Proceedings of ANLP-92, 3rd Conference on Applied Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "152--155", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brill, Eric. 1992. A simple rule-based part of speech tagger. In Proceedings of ANLP-92, 3rd Conference on Applied Natural Language Processing, pages 152-155, Trento, IT.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Automatic grammar induction and parsing free text: a transformation-based approach", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "Meeting of the ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "259--265", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brill, Eric. 1993. Automatic grammar induction and parsing free text: a transformation-based approach. In Meeting of the ACL, pages 259-265.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Outline of Discourse Transcription", |
|
"authors": [ |
|
{ |
|
"first": "Du", |
|
"middle": [], |
|
"last": "Bois", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "45--89", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Du Bois, J. W., 1993. Outline of Discourse Transcription, pages 45-89. Hillsdale: Lawrence Erlbaum Associates, NJ.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "POS-tagger for SaiSiyat: using fieldwork notations and TBL", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "ROCLING XVI Student Workshop II", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "25--33", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, Z. 2004. POS-tagger for SaiSiyat: using fieldwork notations and TBL. In ROCLING XVI Student Workshop II, pages 25-33.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Automata-guided context-free parsing for punctuationless languages", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Rosmorduc", |
|
"suffix": "" |
|
}
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rosmorduc, S. published on Internet. Automata-guided context-free parsing for punctuationless languages. URL: http://citeseer.ist.psu.edu/363381.html.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "S\u00e0ixi\u00e0y\u01d4 C\u0101nk\u01ceu Y\u01d4f\u01ce", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Yeh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yeh, M. 2000. S\u00e0ixi\u00e0y\u01d4 C\u0101nk\u01ceu Y\u01d4f\u01ce. Yu\u01cenli\u00fa, Taipei.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "5000] 0.0669057743547 #(WP,RP(NN,RB(IN,NN))) NNP VB JJ JJ NNS [ 2513/5000] 0.0446834608909 UH(NNP,WRB(VB,PRP$(JJ,.(JJ,NNS))))", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF1": { |
|
"text": "Below are first 5 paths in parsing (NN (VB (VB ASP))): #1 0.107512915137 ,(NN,,(UNK(VB,VB),ASP)) #2 0.108052449446 .(NN,.(UNK(VB,VB),ASP)) #3 0.111216713424 NNP(NN,.(UNK(VB,VB),ASP)) #4 0.111984096398 NNP(UNK(EX(NN,VB),VB),ASP) #5 0.112173131487 .(NN,.(VB,PRP$(VB,ASP)))", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"FIGREF2": { |
|
"text": "Transformation-based error-driven learning (adopted from Brill (1992)) (ray (babaw (hayza' (ka 'ilaS)))) ((ray babaw) (hayza' (ka 'ilaS)Tree mutation after correct transformation is first right-bracketed as (ray (babaw (hayza' (ka 'ilaS)))) and then transformed into ((ray babaw) (hayza' (ka 'ilaS))), resulting in the mutation of syntactic trees as shown in figure 2.Twelve template rules are generated for each tag:\u2022 (Add-delete) a (left-right) parenthesis to the (left-right) of POS tag X\u2022 (Add-delete) a (left-right) parenthesis between tag X and Y For example, the rule of -LL NN (delete left parenthesis to the left of NN) works as,", |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td>Language</td><td>Size</td><td>Words</td><td colspan=\"3\">Vocab. Tags Sentence Length</td></tr><tr><td>SaiSiyat</td><td>3,888 IU</td><td>13,970</td><td>1,697</td><td>20</td><td>4.15</td></tr><tr><td colspan=\"4\">Esperanto 84,496 sent. 1,556,566 125,663</td><td>27</td><td>18.49</td></tr></table>", |
|
"num": null, |
|
"text": "Corpora data" |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"3\">Sentence Length Offset Search Space</td></tr><tr><td>3</td><td>15</td><td>379</td></tr><tr><td>4</td><td>146</td><td>> 5000</td></tr><tr><td>5</td><td>3365</td><td>> 5000</td></tr></table>", |
|
"num": null, |
|
"text": "Result of K-L divergence (Esperanto)" |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"3\">Sentence Length Offset Search Space</td></tr><tr><td>3</td><td>1</td><td>385</td></tr><tr><td>4</td><td>1 7</td><td>> 5000</td></tr><tr><td>5</td><td>5 5</td><td>> 5000</td></tr></table>", |
|
"num": null, |
|
"text": "Result of K-L divergence (SaiSiyat)" |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"3\">: Result of TBL parser (Esperanto)</td></tr><tr><td colspan=\"3\"># training # rules Accuracy</td></tr><tr><td>50</td><td>12</td><td>35.04%</td></tr><tr><td>100</td><td>24</td><td>25.64%</td></tr><tr><td>150</td><td>33</td><td>28.77%</td></tr></table>", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"4\">: Error rate of TBL parser (Esperanto)</td></tr><tr><td colspan=\"4\"># training 0-error \u22641-error \u22642-error 50 32.0% 34.0% 36.0%</td></tr><tr><td>100</td><td>24.0%</td><td>24.0%</td><td>24.0%</td></tr><tr><td>150</td><td>26.0%</td><td>26.0%</td><td>26.0%</td></tr></table>", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF5": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"3\">: Accuracy of parsing a training corpus</td></tr><tr><td colspan=\"3\"># training # rules Accuracy</td></tr><tr><td>50</td><td>12</td><td>48.34%</td></tr><tr><td>100</td><td>24</td><td>49.86%</td></tr><tr><td>150</td><td>33</td><td>46.67%</td></tr></table>", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"3\"># training # rules Accuracy</td></tr><tr><td>50</td><td>16</td><td>70.64%</td></tr><tr><td>100</td><td>15</td><td>67.43%</td></tr><tr><td>150</td><td>17</td><td>69.27%</td></tr></table>", |
|
"num": null, |
|
"text": "Result of TBL parser (SaiSiyat)" |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"4\">: Error rate of TBL parser (SaiSiyat)</td></tr><tr><td colspan=\"4\"># training 0-error \u22641-error \u22642-error 50 68.0% 68.0% 74.0%</td></tr><tr><td>100</td><td>62.0%</td><td>64.0%</td><td>70.0%</td></tr><tr><td>150</td><td>64.0%</td><td>66.0%</td><td>72.0%</td></tr><tr><td>quick over-generalisation in parsing</td><td/><td/><td/></tr></table>", |
|
"num": null, |
|
"text": "SaiSiyat implies a larger tagset as well. Yet we are unable to refine SaiSiyat tags since its lack of declension. For examine this assumption, we can tag Esperanto corpus with complete Penn Treebank tagset and redo the process. In fact, additional tag NNA, NNAS, JJA, JJAS, PRPA, PRP$A, PRP$AS are implemented to reflect Esperanto declension. The result is shown in table 9 and 10." |
|
}, |
|
"TABREF8": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td/><td/><td colspan=\"2\">: Result of larger tagset</td><td/></tr><tr><td colspan=\"5\"># training # Simple S-Accu # Compl C-Accu</td></tr><tr><td>50</td><td>12</td><td>35.04</td><td>14</td><td>35.04%</td></tr><tr><td>100</td><td>24</td><td>25.64</td><td>16</td><td>30.20%</td></tr><tr><td>150</td><td>33</td><td>28.77</td><td>28</td><td>38.46%</td></tr></table>", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF9": { |
|
"type_str": "table", |
|
"html": null, |
|
"content": "<table><tr><td># training 50</td><td>0-error 36.0% (+ 4%) 38.0% (+ 4%) 38.0% (+ 2%) \u22641-error \u22642-error</td></tr><tr><td>100</td><td>32.0% (+ 8%) 32.0% (+ 8%) 32.0% (+ 8%)</td></tr><tr><td>150</td><td>36.0% (+10%) 40.0% (+14%) 40.0% (+14%)</td></tr></table>", |
|
"num": null, |
|
"text": "Error rate of larger tagset compared with reduced tagset" |
|
} |
|
} |
|
} |
|
} |