|
{ |
|
"paper_id": "Y11-1026", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:39:43.067606Z" |
|
}, |
|
"title": "Developing a Chunk-based Grammar Checker for Translated English Sentences", |
|
"authors": [ |
|
{ |
|
"first": "Yee", |
|
"middle": [], |
|
"last": "Nay", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Khin", |
|
"middle": [], |
|
"last": "Mar", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Natural Language Processing Laboratory", |
|
"institution": "University of Computer Studies", |
|
"location": { |
|
"settlement": "Yangon", |
|
"country": "Myanmar" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Ni", |
|
"middle": [ |
|
"Lar" |
|
], |
|
"last": "Thein", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "nilarthein@gmail.com" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Machine translation systems expect target language output to be grammatically correct. In the Myanmar-English statistical machine translation system, the target language output (English) is often ungrammatical. To address this issue, we propose a chunk-based grammar checker that uses a trigram language model and a rule-based model. It resolves distortion and deficiency errors and smooths the translated English sentences. We identify sentences at the chunk level and generate context-free grammar (CFG) rules for recognizing the grammatical relations of chunks. There are three main processes in building the grammar checker: checking the sentence patterns at the chunk level, analyzing the chunk errors and correcting the errors. According to the experimental results, this checker can handle simple, compound and complex sentence types for declarative and interrogative sentences. This system is useful for reducing grammar errors in the target language of the Myanmar-English machine translation system.",
|
"pdf_parse": { |
|
"paper_id": "Y11-1026", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Machine translation systems expect target language output to be grammatically correct. In the Myanmar-English statistical machine translation system, the target language output (English) is often ungrammatical. To address this issue, we propose a chunk-based grammar checker that uses a trigram language model and a rule-based model. It resolves distortion and deficiency errors and smooths the translated English sentences. We identify sentences at the chunk level and generate context-free grammar (CFG) rules for recognizing the grammatical relations of chunks. There are three main processes in building the grammar checker: checking the sentence patterns at the chunk level, analyzing the chunk errors and correcting the errors. According to the experimental results, this checker can handle simple, compound and complex sentence types for declarative and interrogative sentences. This system is useful for reducing grammar errors in the target language of the Myanmar-English machine translation system.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "A language checker typically has two basic components: a spell-checker and a grammar checker. Whereas the spell-checker usually limits its operation to the inspection and correction of individual words, the grammar checker has to cope with errors that can only be detected in contexts larger than a single word (Anna). Grammar is the set of structural rules that governs the composition of clauses, phrases, chunks and words in any given natural language. Grammar checking is one of the most widely used tools among natural language processing (NLP) applications. Grammar checkers check the grammatical structure of sentences based on morphological processing and syntactic processing, two steps that natural language processing uses to understand natural languages. In morphological processing, individual words are analyzed into their components, and non-word tokens such as punctuation are separated out. In syntactic processing, linear sequences of words are transformed into structures that show the grammatical relationships between the words in the sentence (Rich and Knight 1991) .",
|
"cite_spans": [ |
|
{ |
|
"start": 1099, |
|
"end": 1121, |
|
"text": "(Rich and Knight 1991)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Three main approaches are widely used for grammar checking in a language: syntax-based checking, statistics-based checking and rule-based checking. In syntax-based grammar checking, each sentence is completely parsed to check its grammatical correctness. The text is considered incorrect if the syntactic parsing fails. In the statistics-based approach, POS tag sequences are built from an annotated corpus, and the frequency, and thus the probability, of these sequences is noted. The text is considered incorrect if the POS-tagged text contains POS sequences with frequencies lower than some threshold. The statistics-based approach essentially learns the rules from the tagged training corpus. The rule-based approach is very similar to the statistics-based one, except that the rules must be handcrafted (Naber, 2003) .",
|
"cite_spans": [ |
|
{ |
|
"start": 909, |
|
"end": 922, |
|
"text": "(Naber, 2003)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Grammar checkers are most often implemented as a feature of a larger program, such as a word processor. However, such a feature is not available as a separate free program for machine translation. Therefore, we propose a grammar checker as a complement to Myanmar-English machine translation, using a trigram language model and a rule-based model. In this approach, the translated English sentence is used as input. Firstly, the input sentence is tokenized and each word is POS-tagged. The tagged words are then grouped into chunks by parsing the sentence into a chunk-based sentence structure. After chunking, the relationships between the chunks of the input sentence are checked against trained sentence patterns. If the sentence pattern is incorrect, we analyze the chunk errors and then correct them using English grammar rules.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of the paper is organized as follows. Section 2 presents work related to this paper. Section 3 gives an overview of the Myanmar-English statistical machine translation system. Section 4 explains the proposed chunk-based grammar checker. Section 5 reports the experimental results of our proposed system, and finally Section 6 concludes the paper.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "This section presents work related to grammar checking in natural language processing for many languages. Alam et al. (2006) proposed an n-gram statistical grammar checker for both Bangla and English. It uses n-gram analysis of words and POS tags to decide whether a sentence is grammatically correct or not. Sharma and Jaiswal (2010) developed a model for reducing errors in translation using a pre-editor for Indian English sentences. They used a large corpus in the tourism and health domains. This was incorporated in the AnglaBharti engine and gave significant improvement in the machine translation output.",
|
"cite_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 133, |
|
"text": "Alam et al. (2006)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 360, |
|
"end": 385, |
|
"text": "Sharma and Jaiswal (2010)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A user model can be tailored to different types of users to identify and correct English language errors; it has been presented in the context of a written-English tutoring system for deaf people. The model consists of a static model of the expected language and a dynamic model that represents how language might be acquired over time. Together these models affect scores on a set of grammar rules which are used to produce a \"best interpretation\" of the user's input (McCoy et al., 1996) . Stymne and Ahrenberg (2010) used a Swedish grammar checker as an evaluation tool and post-processing tool for statistical machine translation. They performed experiments on English-Swedish translation using a factored phrase-based statistical machine translation (PBSMT) system based on Moses (Koehn et al., 2007) and the mainly rule-based Swedish grammar checker Granska (Domeij et al., 2000; Knutsson, 2001) .",
|
"cite_spans": [ |
|
{ |
|
"start": 465, |
|
"end": 485, |
|
"text": "(McCoy et al., 1996)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 488, |
|
"end": 515, |
|
"text": "Stymne and Ahrenberg (2010)", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 781, |
|
"end": 801, |
|
"text": "(Koehn et al., 2007)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 860, |
|
"end": 881, |
|
"text": "(Domeij et al., 2000;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 882, |
|
"end": 897, |
|
"text": "Knutsson, 2001)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The ongoing developments in the LRE-2 project SECC (A Simplified English Grammar and Style Checker/Corrector) check whether documents comply with its syntactic and lexical rules; if not, error messages are given, and automatic correction is attempted wherever possible to reduce the amount of human correction needed (Adriaens, 1993) .",
|
"cite_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 331, |
|
"text": "(Adriaens,1993)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "A hybrid approach to grammar and style checking has also been implemented, combining an industrial pattern-based grammar and style checker with bidirectional, large-scale HPSG grammars for German and English (Crysmann et al., 2008) . Buscail and Dizier (2009) presented an analysis of the style and text structure errors most frequently produced by various types of authors when writing texts. They showed that an argumentation system can be used to give the user arguments for or against a certain correction.",
|
"cite_spans": [ |
|
{ |
|
"start": 249, |
|
"end": 272, |
|
"text": "(Crysmann et al., 2008)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 275, |
|
"end": 300, |
|
"text": "Buscail and Dizier (2009)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The input to the Myanmar-English statistical machine translation (SMT) system is a Myanmar sentence, and the target output is an English sentence. The Myanmar-English SMT system comprises a source language model, an alignment model, a translation model and a target language model to complete the translation.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Myanmar-English Statistical Machine Translation System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "• The source language model assigns Part-of-Speech (POS) tags and function tags to each Myanmar word and searches for the grammatical relations of the Myanmar sentence. • The translation model covers phrase extraction and translation from Myanmar sentences to English sentences using a Myanmar-English bilingual corpus. This model also interacts with a Word Sense Disambiguation (WSD) system to resolve ambiguities when a phrase of a Myanmar sentence has more than one sense.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Myanmar-English Statistical Machine Translation System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "• The alignment model works in parallel with the other models. Its main task is to build the word- and phrase-aligned Myanmar-English bilingual corpus.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Myanmar-English Statistical Machine Translation System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "• The target language model includes two parts: reordering the translated English sentences and smoothing them by using the English grammar checker to reduce grammar errors.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Myanmar-English Statistical Machine Translation System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our proposed system is concerned with the target language model, where it checks the grammar errors of translated English sentences. After the input sentence has been processed by the other three models (source language model, alignment model and translation model), the translated English sentence is obtained in the target language model. This sentence might be grammatically incomplete because the syntactic structures of Myanmar and English are totally different. For example, after translating the Myanmar sentence \"\u1015\u1014\u1039 \u1038\u103b\u1001\u1036 \u1011\u1032 \u1019\u103d \u102c \u101e\u1005\u1039 \u1015\u1004\u1039 \u1019\u103a\u102c\u1038 \u101b\u103d \u102d \u107e\u1000\u101e\u100a\u1039 \u104b\", \"pan chan htae hmar thet pin myar shi kya thi\", the translated English sentence might be \"are trees in park.\". This sentence is missing the words \"There\" and \"the\" needed for the correct English sentence \"There are trees in the park.\". For another input, \"\u101e\u1030 \u101e\u100a\u1039 \u101c\u1000\u1039 \u1016\u1000\u1039 \u101b\u100a\u1039 \u1010\u1005\u1039 \u1001\u103c \u1000\u1039 \u1031\u101e\u102c\u1000\u1039 \u1031\u1014\u101e\u100a\u1039 \u104b\", \"thu thi laphet yae ta khwit thauk nay thi\", the translated output is \"He is drinking a cup tea.\". In this sentence, the preposition \"of\" is omitted from \"a cup of tea\". These examples are just simple sentence errors. When the sentence types are more complex, grammar error detection and correction are needed even more. There are many English grammar errors to correct in ungrammatical sentences. The grammar checker currently detects and corrects the following errors:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Myanmar-English Statistical Machine Translation System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "• If the sentence has missing words such as a preposition (PPC), conjunction (COC), determiner (DT) or existential (EX), the system suggests the required words according to the chunk types. • Under the subject-verb agreement rule, if the subject is plural, the verb has to be plural; verbs vary in form according to the person and number of the subject. • A sentence can contain an inappropriate determiner; the grammatical rules therefore identify the suitable kinds of determiner for each noun. • Translated English sentences can have an incorrect verb form. The system stores all of the commonly used tenses and suggests the possible verb form.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Myanmar-English Statistical Machine Translation System", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "In the SMT system, there are very few spelling errors in the translation output, because all words come from the corpus. We therefore propose a target-dominant grammar checker for the Myanmar-English statistical machine translation system, as shown in Figure 1 .",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 258, |
|
"end": 266, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Chunk-based Grammar Checker", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "POS tagging is the main process in making up the chunks of a sentence, since each word corresponds to a particular part of speech. POS tagging is the process of assigning a part-of-speech tag such as noun, verb, pronoun, preposition, adverb, adjective or another tag to each word in a sentence. Nouns can be further divided into singular and plural nouns, verbs can be divided into past tense and present tense verbs, and so on. There are many approaches to automated part-of-speech tagging. In this system, each word is tagged using Tree Tagger, a Java-based open-source tagger. However, Tree Tagger often fails to tag some words correctly when a word has more than one POS tag. For example, the POS tags of the word \"sweet\" are \"JJ\" and \"NN\". In this case, the POS tags of such words are refined using rules based on the POS tags of the neighboring words.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Part-of-Speech (POS) Tagging", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "An example of tag refinement is shown in Table 1 : if the previous tag is \"PP\" and the current word is \"bit\" tagged [RB], then bit[RB] is changed to bit[VBD].",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 44, |
|
"end": 51, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Part-of-Speech (POS) Tagging", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Making chunks is the process of parsing the sentence into a chunk-based sentence structure. A chunk is a textual unit of adjacent POS tags which displays the relations between its internal words. The input English sentence is put into chunk structure by using hand-written rules. This structure represents how the chunks fit together to form the constituents of the sentence.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Making Chunk-based Sentence Patterns", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "CFGs constitute an important class of grammars, with a broad range of applications including programming languages, natural language processing, bioinformatics and so on. CFG rules, which present a single symbol on the left-hand side, are a sufficiently powerful formalism to describe most of the structure in natural language.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context Free Grammar (CFG):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A context-free grammar G = (V, T, S, P) is given by: • A finite set V of variables or nonterminal symbols. • A finite set T of terminal symbols. We assume that the sets V and T are disjoint.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context Free Grammar (CFG):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "• A start symbol S ∈ V. • A finite set P ⊆ V × (V ∪ T)* of productions.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context Free Grammar (CFG):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "A production (A, α), where A ∈ V and α ∈ (V ∪ T)* is a sequence of terminals and variables, is written as A → α. CFGs are powerful enough to express sophisticated relations among the words in a sentence, yet tractable enough to be computed using parsing algorithms (Thurimella, 2005) . NLP applications like grammar checkers need a parser with a suitable parsing model. Parsing is the process of analyzing text automatically by assigning syntactic structure according to the grammar of the language. A parser is used to understand the syntax and semantics of natural language sentences confined to the grammar.",
|
"cite_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 283, |
|
"text": "(Thurimella, 2005)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context Free Grammar (CFG):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are two methods for parsing: top-down parsing and bottom-up parsing. Top-down parsing begins with the start symbol and attempts to derive the input sentence by substituting the right-hand side of productions for nonterminals. Bottom-up (shift-reduce) parsing begins with the input sentence and combines words into higher-level chunks until the unit finally becomes a sentence. Bottom-up parsers handle a large class of grammars (Cooper et al., 2003) . In this system, bottom-up parsing is used to parse the sentences.",
|
"cite_spans": [ |
|
{ |
|
"start": 440, |
|
"end": 461, |
|
"text": "(Cooper et al., 2003)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context Free Grammar (CFG):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Parsing chunks by using CFG: Chunking or shallow parsing segments a sentence into a sequence of syntactic constituents or chunks, i.e. sequences of adjacent words grouped on the basis of linguistic properties (Abney, 1996) . The syntactic chunk structure of a sentence is necessary to determine its grammatical correctness. In the proposed system, ten general chunk types are used to make the chunk structure, as shown in Table 2 . The proposed grammar checker identifies the chunks using CFG-based bottom-up parsing, assembling POS tags into higher-level chunks until a complete sentence has been found. For example, the simple sentence \"The students are playing football in the playground.\" is chunked as follows: Chunk-based sentence patterns are widely used in this system for detecting sentence patterns. The more sentence patterns the system is trained on, the better it detects errors. The system has currently been trained on about 6,000 sentence patterns for simple, compound and complex sentence types. Some sample sentence rules are shown in Table 3 .",
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 222, |
|
"text": "(Abney, 1996)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 417, |
|
"end": 424, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1046, |
|
"end": 1053, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Context Free Grammar (CFG):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "NC_VC_NC_PPC_NC_END",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Context Free Grammar (CFG):", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "After chunking, the relationships between the chunks of the input sentence are detected, and chunk errors are analyzed using the trigram language model and the rule-based model.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Detecting and Analyzing Chunk Errors", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "The simplest models of natural language are n-gram Markov models. The Markov models for any n-gram are called Markov chains. In a Markov chain, there is at most one path through the model for any given input (Saul and Pereira, 1997) . N-gram models are examples of statistical models. N-grams are traditionally presented as an approximation to a distribution of strings of fixed length.",
|
"cite_spans": [ |
|
{ |
|
"start": 198, |
|
"end": 222, |
|
"text": "(Saul and Pereira, 1997)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "N-grams of words or POS tags are widely used but are not the only type of pattern used in previous work. Sun et al. (2007) extended n-grams to non-continuous sequential patterns, allowing arbitrary gaps between words. Sjöbergh (2006) used sequences of chunk types, for example \"NP_VC_PP\". The parse trees returned by a statistical parser are used by Lee and Seneff (2008) to detect verb form errors.",
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 119, |
|
"text": "Sun et al. (2007)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 214, |
|
"end": 229, |
|
"text": "Sj\u00f6bergh (2006)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 368, |
|
"text": "Lee and Seneff (2008)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "According to the n-gram language model, a sentence has a fixed set of chunks,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "{c_0, c_1, c_2, …, c_n}.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This is the set of chunks in our training sentences, e.g., {NC, VC, AC, …, END}. In an n-gram language model, each chunk depends probabilistically on the n-1 preceding chunks. This is expressed in equation 1.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "p(c_{0,n}) = ∏_{i=0}^{n-1} p(c_i | c_{i-1}, …, c_{i-n+1}) (1) where c_i",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "is the current chunk of the input sentence, and it depends on the previous chunks. In the trigram language model, each chunk c_i depends probabilistically on the previous two chunks",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(c_{i-1}, c_{i-2})",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "and is shown in equation 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "p(c_{0,n}) = ∏_{i=0}^{n-1} p(c_i | c_{i-1}, c_{i-2})",
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Given a sentence, a trigram is a sequence of three chunks (c_i,",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "c_{i+1}, c_{i+2})",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where a generic chunk c_i is the i-th chunk of the sentence.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The trigram language model is the most suitable due to its capacity, coverage and computational power (Roark and Charniak, 2000) . The trigram model is used in a number of advanced and optimizing techniques such as smoothing, caching, skipping, clustering, sentence mixing, structuring and text normalization. This model makes use of the history of events when assigning the current event a probability value, and it therefore suits our approach.",
|
"cite_spans": [ |
|
{ |
|
"start": 97, |
|
"end": 121, |
|
"text": "Roark and Charniak, 2000)",
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Rule-Based Model: A rule-based model has been successfully used to develop natural language processing tools and applications. English grammatical rules are developed to define precisely how and where to place the various words in a sentence. A rule-based system is more transparent, and its errors are easier to diagnose and debug.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "It relies on hand-constructed rules acquired from language specialists; it requires only a small amount of training data, although development can be very time-consuming. It can be used with both well-formed and ill-formed input, and it is extensible and maintainable. Rules play a major role in various stages of translation: syntactic processing, semantic interpretation, and contextual processing of language (Charoenpornsawat et al., 2002) . Therefore, the accuracy of the translation system can be increased by rule-based correction of ungrammatical sentences.",
|
"cite_spans": [ |
|
{ |
|
"start": 412, |
|
"end": 443, |
|
"text": "(Charoenpornsawat et al., 2002)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Trigram Language Model:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The final step of our proposed system is controlled by grammar rules that determine the proper corrections. These rules can determine syntactic structure and ensure the agreement relations between the various chunks in the sentence. The POS tags of each chunk type are used to correct grammar errors. There are about 1,800 sentence patterns and 1,300 English grammar rules for correction at present. As the sentence patterns increase, the grammar rules will be improved. Some rules for correcting subject-verb agreement are presented in Table 4 .",
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 525, |
|
"end": 532, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Grammar Error Correction", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "For the incorrectly translated sentence \"A man a woman went to their house\", the following sentence pattern and probability values are obtained. The product for the whole sentence is 0.0 by equation (2). In this case, we search for the sequence of chunks with zero probability, P(NC | none, NC). We obtain the probability values for possible chunks given the previous chunks (none, NC) as follows: P(VC | none, NC) = 0.54, P(RC | none, NC) = 0.01, P(COC | none, NC) = 0.01. According to these probabilities, VC, RC and COC can appear in the second place. Firstly, VC (verb chunk) is substituted, as it has the maximum probability. The sentence pattern NC_VC_NC_VC_INFC_NC_END is then obtained. However, this pattern is incorrect when compared with the trained sentence patterns. Therefore, RC and COC are also substituted. When COC is substituted, the correct sentence pattern NC_COC_NC_VC_INFC_NC_END results. This example shows that the proposed system can find the correct chunk type (COC) by using the trigram language model and the rule-based model. Thereafter, the proposed system fills in a word at the missing position, depending on grammar rules, to correct the error. The missing chunk (COC) represents the POS tag CC, which corresponds to the English words ('and', 'or', ',') according to the chunk rules. The corrected sentence pattern includes 'and' between the two noun chunks ([NC_COC_NC] [A man and a woman]) according to the English grammar rules.",
|
"cite_spans": [ |
|
{ |
|
"start": 1216, |
|
"end": 1234, |
|
"text": "('and', 'or', ',')", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Example", |
|
"sec_num": "4.5" |
|
}, |
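The repair procedure in this example can be sketched as a best-first substitution search: when a chunk trigram has zero probability, candidate chunk types are tried in order of decreasing trigram probability until the resulting pattern matches a trained one. The probability values come from the example above; the trained-pattern set here is a tiny illustrative subset and the function name is hypothetical.

```python
# Sketch of the chunk-repair search from the example: substitute
# candidate chunk types at the zero-probability position, best-first,
# until a trained sentence pattern is matched.
TRAINED_PATTERNS = {"NC_COC_NC_VC_INFC_NC_END"}  # illustrative subset

# P(chunk | none, NC): candidates for the zero-probability position
CANDIDATES = {"VC": 0.54, "RC": 0.01, "COC": 0.01}

def repair(chunks, bad_index):
    """Try candidate chunk types at bad_index in order of probability."""
    for cand, _p in sorted(CANDIDATES.items(), key=lambda kv: -kv[1]):
        trial = chunks[:bad_index] + [cand] + chunks[bad_index:]
        pattern = "_".join(trial)
        if pattern in TRAINED_PATTERNS:
            return cand, pattern
    return None, None

# "A man a woman went to their house", chunked without the conjunction:
chunks = ["NC", "NC", "VC", "INFC", "NC", "END"]
missing, pattern = repair(chunks, 1)
assert missing == "COC"
assert pattern == "NC_COC_NC_VC_INFC_NC_END"
```

VC is tried first (probability 0.54) and rejected because the resulting pattern is untrained; COC then yields the trained pattern, mirroring the walkthrough in the text.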
|
{ |
|
"text": "The proposed system is tested on about 1800 number of sentences. For each input sentence, the system has classified the kinds of sentence such as simple, compound and complex and then described whether the sentence type is interrogative or declarative. The grammar errors mainly found in the tested sentences are subject verb agreement, missing chunks and incorrect verb form. The performance of this approach is measured with precision, recall and F-score according to equation 3, 4 and 5. The resulting precision, recall and F-score of chunk-based grammar checker on different sentence types are shown in Table 5 . ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 607, |
|
"end": 614, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experimental Results", |
|
"sec_num": "5" |
|
}, |
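The scores in Table 5 follow the standard precision/recall/F-score definitions. As a quick sanity check against the simple-sentence row (actual=650, reduced=570, correct=512), the following sketch assumes precision = correct/reduced and recall = correct/actual, an interpretation inferred from the table's numbers rather than stated in the text.

```python
# Precision, recall and F-score as in equations (3)-(5), checked against
# the simple-sentence row of Table 5 (actual=650, reduced=570, correct=512).
# The column interpretation (precision = correct/reduced,
# recall = correct/actual) is an assumption inferred from the numbers.
def prf(actual, reduced, correct):
    precision = correct / reduced
    recall = correct / actual
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

p, r, f = prf(650, 570, 512)
assert round(p * 100, 1) == 89.8   # Table 5 reports 89.83 %
assert round(r * 100, 2) == 78.77  # Table 5 reports 78.77 %
assert round(f * 100, 1) == 83.9   # Table 5 reports 83.94 %
```

The computed values agree with the reported row to rounding, which supports the assumed column interpretation.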
|
{ |
|
"text": "A chunk-based grammar checker for translated English sentences which makes use of trigram language model and rule based model. Context Free Grammar rules are also used for identifying the sentence patterns and to divide a text into chunk types which correspond to certain syntactic units. We use our own training sentence patterns. We expect this grammar checker will get the benefits for Myanmar-English machine translation system. Moreover, we plan to improve the accuracies of detection, analyzing and correction grammar errors. In the future, we will expand the sentence rules to fully assess all sentence types and detect the semantic errors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Tagging and Partial Parsing", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Abney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Corpus-Based Methods in Language and Speech", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abney, S. 1996. Tagging and Partial Parsing, In: Ken Church, Steve Young, and Gerrit Bloothooft (eds.), Corpus-Based Methods in Language and Speech. Kluwer Academic Publishers, Dordrecht.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Simplified English Grammar and Style Correction in an MT Framework, Translation and the Computer 15 conference", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Adriaens", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "8--19", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adriaens, G. November, 1993. Simplified English Grammar and Style Correction in an MT Framework, Translation and the Computer 15 conference, pp. 8-19, ( London:Aslib).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "N-gram based Statistical Grammar Checker for Bangla and English", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Alam", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Uzzaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Khan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of 9th International Conference on Computer and Information Technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alam, J., N. UzZaman and M. Khan. 2006. N-gram based Statistical Grammar Checker for Bangla and English, Proceedings of 9th International Conference on Computer and Information Technology (ICCIT 2006), Dhaka, Bangladesh.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Textual and Stylistic Error Detection and Correction: Categorization, Annotation and Correction Strategies", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Buscail", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Saint-Dizie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "IEEE English International Symposium on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Buscail, L. and P. Saint-Dizie. 2009. Textual and Stylistic Error Detection and Correction: Categorization, Annotation and Correction Strategies, IEEE English International Symposium on Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Improving Translation Quality of Rule-based Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Charoenpornsawat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Sornlertlamvanich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Charoenporn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "19th International Conference on Computational Linguistics (Coling2002)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Charoenpornsawat, P., V. Sornlertlamvanich and T. Charoenporn. 2002. Improving Translation Quality of Rule-based Machine Translation, In 19th International Conference on Computational Linguistics (Coling2002), Workshop on Machine Translation in Asia. Taipei, Taiwan.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Bottom-up Parsing", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Cooper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Kennedy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Torczon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cooper, K.D., K. Kennedy and L. Torczon. 2003. \"Bottom-up Parsing\".", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Hybrid processing for grammar and style checking", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Crysmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Bertomeu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Adolphs", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Flickinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kluwer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "153--160", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Crysmann, B., N. Bertomeu, P. Adolphs, D. Flickinger and T. Kluwer. August 2008. Hybrid processing for grammar and style checking, Proceedings of the 22nd International Conference on Computational Linguistics, pp. 153-160, Manchester.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Granska-an efficient hybrid system for Swedish grammar checking", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Domeij", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "O. Knutsson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Carlberger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the 12th Nordic Conference on Computational Linguistics (Nodalida'99)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "49--56", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Domeij, R., O. Knutsson, J. Carlberger, and V. Kann. 2000. Granska-an efficient hybrid system for Swedish grammar checking, In Proceedings of the 12th Nordic Conference on Computational Linguistics (Nodalida'99), pp.49-56, Trondheim, Norway.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A Chart-Based Framework for Grammar Checking Initial Studies", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Hein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hein, A.S. \"A Chart-Based Framework for Grammar Checking Initial Studies\", Linguistics Department, Uppsala University, http://guagua.echo.lu/langeng/en/le3/scarrie/scarrie.html.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Factored translation models", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Hoang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural language Processing and Computational Natural Language Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "868--876", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koehn, P. and H. Hoang. 2007. Factored translation models, In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural language Processing and Computational Natural Language Learning, pp. 868-876, Prague, Czech Republic.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "English Error Correction: A Syntactic User Model Based on Principled Mal-Rule Scoring", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Mccoy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [ |
|
"Z" |
|
], |
|
"last": "Suri", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the Fifth International Conference on User Modeling", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "McCoy, K.F., C. Pennington and L.Z. Suri. January, 1996. English Error Correction: A Syntactic User Model Based on Principled Mal-Rule Scoring, In Proceedings of the Fifth International Conference on User Modeling, Kailua-Kona, Hawaii.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A Rule-Based Style and Grammar Checker", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Naber", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Naber, D. 2003. \"A Rule-Based Style and Grammar Checker\" Faculty of Engineering, University of Bielefeld.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Artificial Intelligent", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Rich", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1991, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rich, E. and K. Knight. 1991. Artificial Intelligent. Second edition. New York: McGraw Hill, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Measuring Efficiency in High-Accuracy, Broad-Coverage Statistical Parsing", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Roark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of the International Conference on Computational Linguistics (COLING) 2000 Workshop on Efficiency in Large-Scale Parsing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "29--36", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roark, B. and E. Charniak. 2000. Measuring Efficiency in High-Accuracy, Broad-Coverage Statistical Parsing, In Proceedings of the International Conference on Computational Linguistics (COLING) 2000 Workshop on Efficiency in Large-Scale Parsing Systems, pp. 29- 36.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Aggregate and mixed order Markov models for statistical language processing", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Saul", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the Second Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "81--89", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saul, L. and F. Pereira. 1997. Aggregate and mixed order Markov models for statistical language processing, Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, pp. 81-89. ACM Press, New York.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Reducing Errors in Translation using Pre-editor for Indian English Sentences", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Jaiswal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of Annual Seminar of CDAC-Noida Technologies (ASCNT)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "70--76", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sharma, A. and N. Jaiswal. 2010. Reducing Errors in Translation using Pre-editor for Indian English Sentences, Proceedings of Annual Seminar of CDAC-Noida Technologies (ASCNT), Noida, pp.70-76, India.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Chunking: An unsupervised method to find errors in text", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Sj\u00f6bergh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "University of Joensuu electronic publications in linguistics and language technology", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "180--185", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sj\u00f6bergh, J. 2006. Chunking: An unsupervised method to find errors in text, In Proceedings of the 15th Nodalida Conference, pp.180-185. Joensuu, Finland: University of Joensuu electronic publications in linguistics and language technology.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Using a Grammar Checker for Evaluation and Postprocessing of Statistical Machine Translation", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Stymne", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Ahrenberg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the International Conference on Language Resources and Evaluation (LREC) 2010", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stymne, S. and L. Ahrenberg. 2010. Using a Grammar Checker for Evaluation and Postprocessing of Statistical Machine Translation, In Proceedings of the International Conference on Language Resources and Evaluation (LREC) 2010. Valetta, Malta.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Detecting erroneous sentences using automatically mined sequential patterns", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Cong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Xiong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "81--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sun, G., X. Liu, G. Cong, M. Zhou, Z. Xiong, J. Lee and C. Lin. 2007. Detecting erroneous sentences using automatically mined sequential patterns, In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pp. 81-88. Prague, Crech Republic: Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Context Free Grammars", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Thurimella", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Thurimella, R. 2005. \"Context Free Grammars\".", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Overview of Proposed System.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "=0.586 * 0.0 * 0.0 * 0.483 * 0.364 *0.675 =0.0", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"text": "Example of refinement tags", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Sentence</td><td>POS Tagging</td><td>Refine Tag by rules</td></tr><tr><td>He eats a sweet.</td><td>He[PP] eats[VBZ] a[DT] sweet[JJ] . [SENT]</td><td>If previous tag is \"DT\", current tag is \"JJ\" And current word is \"sweet\", then change sweet[JJ] to sweet[NN].</td></tr></table>", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"text": "Chunk Types", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Chunk Types</td><td>Description</td><td>Example</td></tr><tr><td>NC</td><td>Noun Chunk</td><td>a young boy, the girls</td></tr><tr><td>VC</td><td>Verb Chunk</td><td>is playing, goes, went</td></tr><tr><td>AC</td><td>Adjective Chunk</td><td>more beautiful, younger, old</td></tr><tr><td>RC</td><td>Adverb Chunk</td><td>usually, quickly</td></tr><tr><td>PTC</td><td>Particle Chunk</td><td>up, down</td></tr><tr><td>PPC</td><td>Prepositional Chunk</td><td>at, on, in, under</td></tr><tr><td>COC</td><td>Conjunction Chunk</td><td>and, or, but</td></tr><tr><td>QC</td><td>Question Chunk</td><td>Where, Who, When</td></tr><tr><td>INFC</td><td>Infinitive Chunk</td><td>to</td></tr><tr><td>TC</td><td>Time Chunk</td><td>tomorrow, yesterday</td></tr></table>", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"text": "Chunk-based Sentence Patterns", |
|
"type_str": "table", |
|
"content": "<table><tr><td>NC_VC_END=S</td></tr><tr><td>NC_VC_NC_END=S</td></tr><tr><td>NC_VC_AC_PPC_NC_END=S</td></tr><tr><td>NC_RC_VC_PTC_RC_END=S</td></tr><tr><td>NCS_PRV2_RC_NCB2_END_END=S</td></tr><tr><td>VC_NC_PPC_NC_END=S</td></tr><tr><td>VC_NC_VC_TC_END=S</td></tr><tr><td>VC_NC_VC_TO_NC_END=S</td></tr><tr><td>QC_VC_NC_VC_IEND=S</td></tr><tr><td>QC_VC_NC _IEND=S</td></tr><tr><td>QC_VC_NC_AC_PPC _IEND=S</td></tr><tr><td>QC_VC_NC_VC_PPC _IEND=S</td></tr><tr><td>QC_VC_NC_PPC_TC _IEND=S</td></tr></table>", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"text": "Some Rules for Subject Verb Agreement", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Rules</td><td>(NC_VC)</td><td>Example</td></tr><tr><td colspan=\"2\">NNS +VBP</td><td>We go</td></tr><tr><td colspan=\"2\">NNS +VBD</td><td>We went</td></tr><tr><td colspan=\"2\">NNS +VBP_VBG</td><td>We are going</td></tr><tr><td colspan=\"2\">NNS +VBD_VBG</td><td>They were going</td></tr><tr><td colspan=\"2\">NNS +VBP_VBD</td><td>They have worked</td></tr><tr><td colspan=\"2\">NNS +MD_VB</td><td>They will come</td></tr><tr><td colspan=\"2\">NN +VBZ</td><td>She goes</td></tr><tr><td colspan=\"2\">NN +VBD</td><td>She went</td></tr><tr><td colspan=\"2\">NN +VBZ_VBG</td><td>She is going</td></tr><tr><td colspan=\"2\">NN +VBD_VBG</td><td>She was going</td></tr><tr><td colspan=\"2\">NN +VBZ_VBD</td><td>He has walked</td></tr><tr><td colspan=\"2\">NN +MD_VB</td><td>He will come</td></tr></table>", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF7": { |
|
"text": "Experimental Results", |
|
"type_str": "table", |
|
"content": "<table><tr><td>Sentence Type</td><td colspan=\"5\">Actual Reduce Correct Precision Recall</td><td>F-score</td></tr><tr><td>Simple</td><td>650</td><td>570</td><td>512</td><td colspan=\"3\">89.83 % 78.77 % 83.94%</td></tr><tr><td>Compound</td><td>530</td><td>480</td><td>402</td><td colspan=\"3\">83.75 % 75.85 % 79.61%</td></tr><tr><td>Complex</td><td>560</td><td>530</td><td>440</td><td>83.02%</td><td colspan=\"2\">78.57% 80.73%</td></tr></table>", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |