{
"paper_id": "Y09-1008",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:43:07.342251Z"
},
"title": "Dependency Grammar Based English Subject-Verb Agreement Evaluation1",
"authors": [
{
"first": "Dongfeng",
"middle": [],
"last": "Cai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shenyang Institute of Aeronautical Engineering No",
"location": {
"addrLine": "37 Daoyi South Avenue",
"postCode": "110136",
"settlement": "Shenyang",
"country": "China"
}
},
"email": ""
},
{
"first": "Yonghua",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shenyang Institute of Aeronautical Engineering No",
"location": {
"addrLine": "37 Daoyi South Avenue",
"postCode": "110136",
"settlement": "Shenyang",
"country": "China"
}
},
"email": ""
},
{
"first": "Xuelei",
"middle": [],
"last": "Miao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shenyang Institute of Aeronautical Engineering No",
"location": {
"addrLine": "37 Daoyi South Avenue",
"postCode": "110136",
"settlement": "Shenyang",
"country": "China"
}
},
"email": "miaoxl@ge-soft.com"
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Hong",
"location": {
"addrLine": "Kong 83 Tat Chee Ave",
"settlement": "Kowloon, Hong Kong"
}
},
"email": "yansong@student.cityu.edu.hk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "As a key factor in English grammar checking, subject-verb agreement evaluation plays an important part in assessing translated English texts. In this paper, we propose a hybrid method for subject-verb agreement evaluation on dependency grammars with the processing of phrase syntactic parsing and sentence simplification for subject-verb discovery. Experimental results on patent text show that we achieve an F-score of 91.98% for subject-verb pair recognition, and a precision rate of 97.93% for subject-verb agreement evaluation on correctly recognized pairs in the previous stage.",
"pdf_parse": {
"paper_id": "Y09-1008",
"_pdf_hash": "",
"abstract": [
{
"text": "As a key factor in English grammar checking, subject-verb agreement evaluation plays an important part in assessing translated English texts. In this paper, we propose a hybrid method for subject-verb agreement evaluation on dependency grammars with the processing of phrase syntactic parsing and sentence simplification for subject-verb discovery. Experimental results on patent text show that we achieve an F-score of 91.98% for subject-verb pair recognition, and a precision rate of 97.93% for subject-verb agreement evaluation on correctly recognized pairs in the previous stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Subject-verb agreement error is the most common type of mistakes made in translating other languages to English text, and affects the quality of the generated text considerably. By making a detailed analysis on 300,000 error-noted English patent texts, we found that the subject-verb agreement errors comprise 21.7% of all the translation errors. It is obviously indicated that subject-verb agreement is one of the common problems translators would encounter. Due to the complicate grammar and flexible usage of sentence types, especially the complicated relationship between subjects and predicate verbs, the subject-verb agreement evaluation is a difficult mission to tackle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Currently, manual proofreading is still the main approach widely applied in detecting subject-verb agreement errors made by translators. However, it costs too much while in low efficiency, and manual work is not capable of reuse. To solve this problem, a computational approach is proposed in this paper to automatically recognize the subject-verb pairs and evaluate their agreement by obtaining the dependency relationship between the subjects and its predicate verbs. Phrase syntactic parsing and sentence simplification are used and proved to be effective in our routine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows: a concise survey of related works is presented in the next section; section 3 is the description of our method; section 4 illustrates the procedure of our experiments; and the experimental results with analysis are presented in section 5; section 6 is the conclusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Copyright 2009 by Dongfeng Cai, Yonghua Hu, Xuelei Miao, and Yan Song For there are limited researches exclusively focus on subject-verb agreement, many related works are reported on dealing with grammatical errors, some of which includes subject-verb agreement case. Atwell (1987) , Bigert and Knutsson (2002) , Chodorow and Leacock (2000) proposed the n-gram error checking for finding grammatical errors. Hand-crafted error production rules (or \"mal-rules\"), with context-free grammar, are designed for a writing tutor for deaf students (Michaud et al., 2000) . Similar strategies with parse trees are pursued in (Bender et al., 2004) , and error templates are utilized in (Heidorn, 2000) for a word processor. An approach combining a hand-crafted context free grammar and stochastic probabilities is proposed in (Lee and Seneff, 2006) for correcting verb form errors, but it is designed for restricted domain. A maximum entropy model using lexical and part of speech(POS) features, is trained in (Izumi et al., 2003) to recognize a variety of errors, and achieves 55% precision and 23% recall on evaluation data. John Lee and Stephanie Seneff (2008) proposed a method based on irregularities in parsing tree and n-gram, to correct English verb form errors made by non-native speakers, and achieved a precision around 83.93%. However, on subject-verb agreement processing, it mainly aimed at those sentences which are relatively simple, and proved some wh-subject problems to be difficult for its approach.",
"cite_spans": [
{
"start": 268,
"end": 281,
"text": "Atwell (1987)",
"ref_id": "BIBREF0"
},
{
"start": 284,
"end": 310,
"text": "Bigert and Knutsson (2002)",
"ref_id": "BIBREF1"
},
{
"start": 313,
"end": 340,
"text": "Chodorow and Leacock (2000)",
"ref_id": "BIBREF4"
},
{
"start": 540,
"end": 562,
"text": "(Michaud et al., 2000)",
"ref_id": "BIBREF10"
},
{
"start": 616,
"end": 637,
"text": "(Bender et al., 2004)",
"ref_id": "BIBREF2"
},
{
"start": 676,
"end": 691,
"text": "(Heidorn, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 816,
"end": 838,
"text": "(Lee and Seneff, 2006)",
"ref_id": "BIBREF9"
},
{
"start": 1000,
"end": 1020,
"text": "(Izumi et al., 2003)",
"ref_id": "BIBREF7"
},
{
"start": 1122,
"end": 1153,
"text": "Lee and Stephanie Seneff (2008)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "During the translation process, the subject-verb disagreement phenomenon is common, especially the confusion between the base form and the third person singular form. E.g. the sentence: the utility model disclose a mosaic thrust bearing shell. The subject 'model' and the predicate verb 'disclose' do not agree with each other. This aparts the sentence from good quality and should be checked in the proofreading process. Sentences that regard subject-verb disagreement errors as the main target are considered here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Issues",
"sec_num": "3.1"
},
{
"text": "There are many factors involved that can disturb the recognition and agreement evaluation of subject-verb, mainly on semantic level and syntactic level. In detail as follows: Semantic level It is concerned with inappropriate choices of tense, aspect, voice or mood. E.g., the subject-verb pair recognition is correct, but the verb form does not agree with the context on the semantic level. Such as, He *ate some bread for his breakfast. The predicate verb 'ate' is in past tense, it agrees with the subject on sentence level. But if its context features need it to be in future tense, the verb form will have to be modified. Here, the checking is only done on syntactic level without considering the context. Syntactic level As the second type, it can be subdivided into two sub-classes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Issues",
"sec_num": "3.1"
},
{
"text": "(1) Too many modifiers in the sentence may disturb the dependency parsing and phrase syntactic parsing. E.g., The under *frame, the tension *spring, the swing *arm and the tensile force constant *device are all equipped in the protecting cover. Parsed as follows in Figure 1 In this sentence, 'frame-3', 'spring-7', 'arm-11' and 'device-17' actually share the same verb 'are-18'. But as a result of the modifiers such as 'JJ' and 'NN' (Santorini, 1990) , the subject is only recognized as 'device-17' from 'nsubjpass(equipped-20, device-17)' (de Marneffe et al., 2008.) , with other four omitted. As regard to this, sentence simplification is introduced to compress the sentence structure and avoid the disturbance of too many modifiers and some other elements.",
"cite_spans": [
{
"start": 435,
"end": 452,
"text": "(Santorini, 1990)",
"ref_id": "BIBREF3"
},
{
"start": 542,
"end": 569,
"text": "(de Marneffe et al., 2008.)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 266,
"end": 274,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Research Issues",
"sec_num": "3.1"
},
{
"text": "(2) The subject-verb pairs have been recognized, but the information that the subject and the predicate verb offer is not enough to evaluate if they are in agreement. E.g., The opening of existing hook *which is hanged on a straight rod is unclosed. The sentence contains a wh-subordinate clause. The phrase syntactic parsing and dependency parsing are: In Figure 2 , subject-verb pair '(opening-2 is-13)' can be concluded from dependency parsing 'nsubjpass(unclosed-14, opening-2)' and 'auxpass(unclosed-14, is-13)'. In the same way, the other pair '(which-6 is-7)' is obtained, too. However, the problem is that 'which-6' is not the true subject capable to evaluate if the subject-verb is in agreement, the true one should be 'hook-5'. But no links between 'which-6'and 'hook-5' is served in the parsing above in Figure 2 . As regards to this kind outcome as '(which-6 is-7)', we re-recognize the subject-verb after reverting the wh-word back to the most possible sentence element that wh-word points to.",
"cite_spans": [],
"ref_spans": [
{
"start": 357,
"end": 365,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 815,
"end": 823,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Research Issues",
"sec_num": "3.1"
},
{
"text": "Sentence simplification is an interesting point in this paper. Grefenstette (1998) applies shallow parsing and simplification rules to the problem of telegraphic text reduction, with as goal the development of an audio scanner for the blind or for people using their sight for other tasks like driving. Another related application area is the shorting of text to fit the screen of mobile devices (Corston-Oliver, 2001; Euler 2002) .",
"cite_spans": [
{
"start": 396,
"end": 418,
"text": "(Corston-Oliver, 2001;",
"ref_id": null
},
{
"start": 419,
"end": 430,
"text": "Euler 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Simplification",
"sec_num": "3.2"
},
{
"text": "We employ the sentence simplification as a pre-processing operation by deleting some kinds of adjective, adverb, modified noun and some kind prepositional phrase, so that the sentence becomes more simple with the trunk elements, such as the subject, the verbs and the object, left. By analyzing the training data, a positive simplification categories set is picked out and shown as follows: Table 1 , the 'Original' POS sequence can be regarded as triggering environment, 'Delete' points to the sequence that should be deleted. And the signal '!' is not a punctuation, but as a logic operator. 'NN*' means NN or NNS. In addition, the simplification operation of 'JJ', 'NN', 'VB*', 'RB' or their POS sequence is done based on POS, while the operation of 'PP' chunk is done based on Phrase Structure Parsing. The best target of sentence simplification are sentences that are totally correctly tagged (POS) and parsed (Phrase Structure Parsing). For those incorrectly done, inappropriate simplification outcome appear. But since incorrectly done, no matter whether the simplification operation is correct, it will not decline the system performance. So, we make each sentence in the corpus simplified.",
"cite_spans": [],
"ref_spans": [
{
"start": 391,
"end": 398,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Sentence Simplification",
"sec_num": "3.2"
},
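{
"text": "To make the rule mechanics concrete, the following is a minimal sketch of how one such POS-pattern rule could be applied, assuming sentences are represented as (word, tag) pairs from any Penn Treebank style POS tagger; the function name and rule encoding are illustrative, not the system's actual implementation. The rule mirrors category 12 of Table 1 ('!VB JJ1 NN*|JJ2', delete JJ1): an adjective directly before a noun or another adjective is deleted unless a verb precedes it.

def simplify_jj_before_nn(tagged):
    '''Drop JJ tokens that modify a following NN/NNS/JJ, keeping the trunk.'''
    out = []
    for i, (word, tag) in enumerate(tagged):
        prev_tag = tagged[i - 1][1] if i > 0 else ''
        next_tag = tagged[i + 1][1] if i + 1 < len(tagged) else ''
        if (tag == 'JJ' and next_tag in ('NN', 'NNS', 'JJ')
                and not prev_tag.startswith('VB')):
            continue  # triggering environment matched: delete the JJ
        out.append((word, tag))
    return out

print(simplify_jj_before_nn([('a', 'DT'), ('straight', 'JJ'), ('rod', 'NN')]))
# [('a', 'DT'), ('rod', 'NN')], i.e. 'a straight rod' simplifies to 'a rod'

A full implementation would apply all nineteen categories of Table 1 in the same pattern-matching style, with the PP rules operating on phrase-structure chunks rather than on the flat tag sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Simplification",
"sec_num": "3.2"
},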
{
"text": "The wh-words, such as \"which\", \"who\", \"what\" and \"that\", usually exist in a sentence as the subject, and if the sentence is a subordinate clause, a more detailed sentence subject should be found. In order to obtain a much exacter subject, we do a reverting operation to the wh-word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "wh-type Word Reverting",
"sec_num": "3.3"
},
{
"text": "Firstly, retrieve the most possible subject element in the sentence that wh-word may point to. Secondly, replace wh-word with the subject element and extract the subordinate clauses to be independent, so that a complicate and long sentence becomes several relative simple ones. Then, discover the subject-verb pairs of all the new generated sentences by making dependency grammar analysis. Terminally, we combine the subject-verb pairs back into the outcome of the original sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "wh-type Word Reverting",
"sec_num": "3.3"
},
{
"text": "The algorithm for reverting is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "wh-type Word Reverting",
"sec_num": "3.3"
},
{
"text": "Input\uff1aPhrase syntactic parsing file; Output\uff1aThe element that wh-word most possibly points to, only NP is considered here; int Distan(WDT, NP i ) // the distance between WDT and NP i ; // WDT is the Part Of Speech of wh-word; begin // weight of each branch w = 1; // Value_Distance(node1,node2) = w \u00d7 the number of branches connect node1 and node2; Definition\uff1aint distance = 0; if node P as the nearest and common ancestor of WDT and NP i ; distance = Value_Distance(P,WDT) + Value_Distance(P,NP i ); return distance; else return +\u221e; end if end string Revert() Definition: int dis; int DIS; // the distance between the wh-word and the NP; string SUBJ; // the most possible NP wh-word points to; SUBJ = Null\uff0cDIS = +\u221e; begin for each NP i before the wh-word in Parsed-Tree // NP i must before the // wh-word in the sentence; dis = Distan(WDT, NP i ); // calculate the distance of NP i and WDT; If dis < DIS // search the nearest NP i ; DIS = dis; SUBJ = NP i ; else continue; end if end for return SUBJ; end",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "wh-type Word Reverting",
"sec_num": "3.3"
},
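{
"text": "The search above can be sketched with an NLTK constituency tree built from the bracketed parser output of the kind shown in Figure 2; this is a minimal sketch under that assumption, with illustrative helper names rather than the original implementation. Tree distance counts the branches on the path through the lowest common ancestor (each branch has weight w = 1), and candidate NPs are those whose first word precedes the wh-word.

from nltk import Tree

def tree_distance(pos_a, pos_b):
    # branches between two tree positions: each node's depth below
    # their lowest common ancestor (every branch has weight w = 1)
    common = 0
    for a, b in zip(pos_a, pos_b):
        if a != b:
            break
        common += 1
    return (len(pos_a) - common) + (len(pos_b) - common)

def revert_wh_word(bracketed_parse):
    '''Return the NP subtree nearest in tree distance to the WDT node,
    among NPs whose first word precedes the wh-word in the sentence.'''
    tree = Tree.fromstring(bracketed_parse)
    nodes = [p for p in tree.treepositions() if isinstance(tree[p], Tree)]
    wdt = next(p for p in nodes if tree[p].label() == 'WDT')
    leaves = tree.treepositions('leaves')

    def first_leaf(p):  # index of the leftmost word the node dominates
        return next(i for i, lp in enumerate(leaves) if lp[:len(p)] == p)

    wh_index = first_leaf(wdt)
    best, best_dist = None, float('inf')
    for p in nodes:
        if tree[p].label() == 'NP' and first_leaf(p) < wh_index:
            d = tree_distance(wdt, p)
            if d < best_dist:
                best, best_dist = p, d
    return tree[best] if best is not None else None

On the parse in Figure 2, the nearest qualifying NP is the one headed by 'hook' (tree distance 3, versus 5 for the whole subject NP), so 'which-6' is reverted to 'hook-5', as intended.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "wh-type Word Reverting",
"sec_num": "3.3"
},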
{
"text": "How the subject and the predicate verb link up with each other in a sentence is rather flexible, especially for the science and technology literature sentences, such as patent corpus, which are too long and with too many modifiers in. This makes the subject-verb agreement evaluation more difficult. In this paper, we utilize the patent corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "It is mainly used for learning the sentence simplification categories. By analyzing the tagging and parsing outcome of the sentences given, we choose categories that positively function to simplifying a sentence to be a set, as in Table 1 . Totally, 600 manually proofread English patent sentences are used to develop the categories set.",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 238,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Development Data",
"sec_num": "4.1"
},
{
"text": "For the evaluation, we experiment on 1000 English patent sentences translated by non-native speakers. In order to make a general comparison, the corpus is separated into four different parts as follows: In order to compute the precision of the system outcome, we annotate the correct subject-verb pairs and their agreement of the 1000 sentences manually as the reference. E.g., for the sentence in Figure 2 , it is 'opening-2 is-13 1|hook-5 is-7 1|', where '1|' means the subject-verb is in agreement, '0|' means disagreement in contrast.",
"cite_spans": [],
"ref_spans": [
{
"start": 398,
"end": 406,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluation Data",
"sec_num": "4.2"
},
{
"text": "According to the common three evaluation guidelines, the following statistics are computed as the criterion to evaluate the performance of the system: Precision The proportion of the system subject-verb pairs which are correct. Calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.3"
},
{
"text": "100% N P M = \u00d7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.3"
},
{
"text": "Note: N is the number of the correct subject-verb pairs in system outcome. M is the total number of the subject-verb pairs in system outcome.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.3"
},
{
"text": "Recall Out of all the subject-verb pairs in the reference, the proportion that appear in the system outcome. Calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.3"
},
{
"text": "100% N R T = \u00d7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.3"
},
{
"text": "(2) Note: T is the total number of the subject-verb pairs in the reference. F-Score Which is a combination of P and R, and is a more general evaluation score. The formula is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "2 2 ( 1) 100% P R F R P \u03b2 \u03b2 \u00d7 \u00d7 + = \u00d7 + \u00d7",
"eq_num": "(3)"
}
],
"section": "Evaluation Metric",
"sec_num": "4.3"
},
{
"text": "Note: \u03b2 is an important weight parameter between P and R, it is regarded as 1 in this paper, i.e. P and R share the same weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.3"
},
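{
"text": "A minimal sketch of the three metrics, assuming the system and reference outputs are represented as sets of 'subject-index verb-index' strings in the pair format of \u00a74.2; the function name is illustrative.

def prf(system_pairs, reference_pairs, beta=1.0):
    '''Precision, recall and F-score over subject-verb pairs,
    following equations (1)-(3).'''
    n = len(system_pairs & reference_pairs)  # correct pairs in the system output
    m = len(system_pairs)                    # all pairs in the system output
    t = len(reference_pairs)                 # all pairs in the reference
    p = n / m if m else 0.0
    r = n / t if t else 0.0
    f = (beta ** 2 + 1) * p * r / (r + beta ** 2 * p) if (p + r) else 0.0
    return p * 100, r * 100, f * 100

# e.g. on the sentence of Figure 2, before wh-word reverting:
print(prf({'opening-2 is-13', 'which-6 is-7'},
          {'opening-2 is-13', 'hook-5 is-7'}))  # (50.0, 50.0, 50.0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.3"
},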
{
"text": "The experiment is implemented as following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setting",
"sec_num": "4.4"
},
{
"text": "Step 1 Pre-Processing Tokenize the patent corpus in \u00a74.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setting",
"sec_num": "4.4"
},
{
"text": "Step 2 Phrase Syntactic Parsing e.g. The opening of existing hook which is hanged on a straight rod is unclosed, and the under frame, the tension spring, the swing arm and the tensile force constant device are all equipped in the protecting cover. (2) Step 3 Sentence Simplification. Simplify the sentences by deleting some elements, such as some kind JJ or NN or RB or PP chunk that listed in Table 1 . As is simplified, (2) becomes into (3): The opening of existing hook which is hanged on a rod is unclosed, and the frame, the spring, the arm and the force device are equipped in the cover.",
"cite_spans": [],
"ref_spans": [
{
"start": 394,
"end": 401,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiment Setting",
"sec_num": "4.4"
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setting",
"sec_num": "4.4"
},
{
"text": "Step 4 Do Dependency Parsing to sentence (3), the subjects and their predicate verbs are linked up, and subject-verb pairs: 'opening-2 is-12 |which-6 is-7 |frame-17 are-28 |spring-20 are-28 |arm-23 are-28 |device-27 are-28 |' (4) can be recognized.",
"cite_spans": [
{
"start": 165,
"end": 225,
"text": "are-28 |spring-20 are-28 |arm-23 are-28 |device-27 are-28 |'",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setting",
"sec_num": "4.4"
},
{
"text": "Step 5 Revert the wh-subject For the pairs such as 'which-6 is-7 |' in which wh-type subject is recognized, the sentence will be rechecked by reverting the wh-word back into the word or chunk (usually as NP chunk before the wh-word) that the wh-word most possibly points to. Once the wh-word is reverted, retrieve the subordinate clauses to be independent. Go to step 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setting",
"sec_num": "4.4"
},
{
"text": "For the outcome of (4), 'which-6' is replaced as 'hook-5', and the original sentence becomes: The opening of existing hook is unclosed, and the frame , the spring , the arm and the force device are equipped in the cover.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setting",
"sec_num": "4.4"
},
{
"text": "(5) and Existing hook is hanged on a rod.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setting",
"sec_num": "4.4"
},
{
"text": "(6) Since both are rechecked, combine the subject-verb pairs of (5) and (6) to be: opening-2 is-12 |hook-5 is-7 |frame-17 are-28 |spring-20 are-28 |arm-23 are-28 |device-27 are-28 | (7) Step 6 Terminal outcome Evaluate if the subject-verb pairs are in agreement according to their POS (Part Of Speech). According to (2), the POS of (7) is:",
"cite_spans": [
{
"start": 122,
"end": 185,
"text": "are-28 |spring-20 are-28 |arm-23 are-28 |device-27 are-28 | (7)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setting",
"sec_num": "4.4"
},
{
"text": "So, the agreement outcome is: opening-2 is-12 1|hook-5 is-7 1|frame-17 are-28 0|spring-20 are-28 0|arm-23 are-28 0|device-27 are-28 0| (8) Note: '0|' stands for disagreement; '1|' stands for agreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NN VBZ |NN VBZ | NN VBP |NN VBP | NN VBP | NN VBP |",
"sec_num": null
},
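{
"text": "The check in Step 6 can be sketched as follows, under the simplifying assumption that only the Penn Treebank number marking of the subject head (NN/NNP singular, NNS/NNPS plural) and the present-tense verb tag (VBZ/VBP) are consulted; the sketch also includes the coordination override described in the next paragraph, whereby several subjects sharing one verb are treated as a single plural subject. All names are illustrative, not the original system's.

from collections import Counter

def pair_agrees(subj_tag, verb_tag):
    '''1 if the subject head and the predicate verb agree in number, else 0.'''
    if verb_tag in ('VBD', 'MD'):  # past tense / modal: no number marking
        return 1
    singular = subj_tag in ('NN', 'NNP')
    if verb_tag == 'VBZ':
        return 1 if singular else 0
    if verb_tag == 'VBP':
        return 0 if singular else 1
    return 1  # other verb forms are not checked here

def evaluate(pairs):
    '''pairs: list of (subject, subj_tag, verb, verb_tag) tuples.
    Subjects coordinated on one shared verb are re-labelled as plural.'''
    shared = Counter(verb for _, _, verb, _ in pairs)
    labels = []
    for _, subj_tag, verb, verb_tag in pairs:
        if shared[verb] > 1:  # coordination: the combined subject is plural
            labels.append(1 if verb_tag == 'VBP' else 0)
        else:
            labels.append(pair_agrees(subj_tag, verb_tag))
    return labels

pairs = [('opening-2', 'NN', 'is-12', 'VBZ'), ('hook-5', 'NN', 'is-7', 'VBZ'),
         ('frame-17', 'NN', 'are-28', 'VBP'), ('spring-20', 'NN', 'are-28', 'VBP'),
         ('arm-23', 'NN', 'are-28', 'VBP'), ('device-27', 'NN', 'are-28', 'VBP')]
print(evaluate(pairs))  # [1, 1, 1, 1, 1, 1], matching outcome (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setting",
"sec_num": "4.4"
},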
{
"text": "In addition, four different subjects in (8) share the same verb 'are-28', it is a plural case. So, their agreement labels should be modified to '1|'. Then, the terminal result comes to be: opening-2 is-12 1|hook-5 is-7 1|frame-17 are-28 1|spring-20 are-28 1|arm-23 are-28 1|device-27 are-28 1| (9) 5 The Experimental Results and Analysis Table 3 compares the outcomes of different phases of the subject-verb discovery: the first one is merely based on dependency grammar; sentence simplification is added to be the second one; and the third one adds wh-type word reverting operation to the second. Outcome of the first is present as the baseline. The comparison of the subject-verb agreement evaluation on the pairs that correctly recognized is as follows in Table 4 : In Table 3 , the subject-verb discovery outcomes of the three methods are presented, including the Precision(P), Recall(R) and F-score(F) on each subset of the corpus, as well as the total Fscore on the whole corpus. In Table 4 , it is the precision of the subject-verb agreement evaluation based on the subject-verb pairs that have been recognized correctly in Table 3 . By comparison, the figures show that both the SSIM and WH-operations function positively that the final F total of the recognition improves 1.67%. And from the percentage it improves step by step, SSIM is shown to get a more remarkable F total . This is because every sentence can be simplified while not all of them contain a wh-subordinate clause, actually there are only 269 wh-words in the corpus. Moreover, the categories for SSIM must be selected carefully, or else it may result in negative effect. But WH-is always positive, since it only aims at the incorrect subject-verb recognition. However, maybe there could be more appropriate categories for SSIM or more perfect method for WH-, on that the system will perform better.",
"cite_spans": [
{
"start": 230,
"end": 297,
"text": "are-28 1|spring-20 are-28 1|arm-23 are-28 1|device-27 are-28 1| (9)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 338,
"end": 345,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 759,
"end": 766,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 772,
"end": 779,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 989,
"end": 996,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 1131,
"end": 1138,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "NN VBZ |NN VBZ | NN VBP |NN VBP | NN VBP | NN VBP |",
"sec_num": null
},
{
"text": "As to the subject-verb pairs that is discovered correctly, for the reason of the precision of Part Of Speech tagging, the agreement evaluation is impossible to be whole correct. The Precision(P) on the subsets of the corpus and the whole corpus are as Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 259,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "NN VBZ |NN VBZ | NN VBP |NN VBP | NN VBP | NN VBP |",
"sec_num": null
},
{
"text": "Subject-verb agreement is a complicated and difficult problem in Machine Translation Evaluation, it is involved with complicated grammar, long dependency relationship, and subordinate clause factors, and so on. Especially for the science and technology literature sentences, such as patent corpus, which are too long or with too many modifiers in, it gets worse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We have proposed a hybrid method for subject-verb agreement evaluation on dependency grammars with the processing of phrase syntactic parsing and sentence simplification for subject-verb discovery. It is completely automatically done, and the results show its efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "By the way, the categories we use for sentence simplification and wh-type subject reverting operation may be not much appropriate, the better categories are made, the better the system performs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "How to detect grammatical errors in a text without parsing it",
"authors": [
{
"first": "E",
"middle": [
"S"
],
"last": "Atwell",
"suffix": ""
}
],
"year": 1987,
"venue": "Proceeding of the 3 rd EACL",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Atwell, E. S. 1987. How to detect grammatical errors in a text without parsing it. Proceeding of the 3 rd EACL. 38-45.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Robust error detection: A Hybrid Approach Combining unsupervised error detection and linguistic knowledge. Proceeding of Robust Method in Analysis of Natural Language Data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bigert",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Knutsson",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "10--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bigert, J. and O. Knutsson. 2002. Robust error detection: A Hybrid Approach Combining unsupervised error detection and linguistic knowledge. Proceeding of Robust Method in Analysis of Natural Language Data. 10-19.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Arboretum: Using a Precision Grammar for Grammar Checking in CALL",
"authors": [
{
"first": "E",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Walsh",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. In-STIL/ICALL Symposium on Computer Assisted Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bender, E., D. Flickinger, S. Oepen, A. Walsh, and T. Baldwin. 2004. Arboretum: Using a Precision Grammar for Grammar Checking in CALL. Proc. In-STIL/ICALL Symposium on Computer Assisted Learning.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Part Of Speech Tagging Guidelines for the Penn Treebank Project (3 rd Version, 2 nd Printing)",
"authors": [
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Santorini, B. 1990. Part Of Speech Tagging Guidelines for the Penn Treebank Project (3 rd Version, 2 nd Printing). http://bulba.sdsu.edu/jeanette/thesis/PennTags.html.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An Unsupervised Method for detecting Grammatical Errors. Proceeding of NAACL'00",
"authors": [
{
"first": "M",
"middle": [],
"last": "Chodorow",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Leacock",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "140--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chodorow, M. and C. Leacock. 2000. An Unsupervised Method for detecting Grammatical Errors. Proceeding of NAACL'00. 140-147.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Stanford typed dependencies manual",
"authors": [
{
"first": ", M.-C",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "de Marneffe, M.-C. and C.D. Manning. 2008. Stanford typed dependencies manual-[EB].",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Intelligent Writing Assistance. Handbook of Natural Language Processing",
"authors": [
{
"first": "G",
"middle": [],
"last": "Heidorn",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heidorn, G. 2000. Intelligent Writing Assistance. Handbook of Natural Language Processing. Obert Dale, Hermann Moisi and Harold Somers (ed.). Marcel Dekker, Inc.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic Error Detection in the Japanese Learner's English Spoken Data. Companion Volume to",
"authors": [
{
"first": "E",
"middle": [],
"last": "Izumi",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Saiga",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Supnithi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Izumi, E., K. Uchimoto, T. Saiga, T. Supnithi, and H. Isahara. 2003. Automatic Error Detection in the Japanese Learner's English Spoken Data. Companion Volume to Proc. ACL. Sapporo, Japan.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Correcting Misuse of Verb Forms. 22nd International Conference on Computational Linguistics",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, J. and S. Seneff. 2008. Correcting Misuse of Verb Forms. 22nd International Conference on Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic Grammar Correction for Second-Language Learners",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Seneff",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, J. and S. Seneff. 2006. Automatic Grammar Correction for Second-Language Learners. Proc. Interspeech. Pittsburgh, PA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An Intelligent Tutoring System for Deaf Learners of Written English",
"authors": [
{
"first": "L",
"middle": [],
"last": "Michaud",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Mccoy",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Pennington",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. 4th International ACM Conference on Assistive Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michaud, L., K. McCoy, and C. Pennington. 2000. An Intelligent Tutoring System for Deaf Learners of Written English. Proc. 4th International ACM Conference on Assistive Technologies.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Elements de Syntaxe Structurale",
"authors": [
{
"first": "L",
"middle": [],
"last": "Tesniere",
"suffix": ""
}
],
"year": 1959,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tesniere, L. 1959. Elements de Syntaxe Structurale. Paris: Klincksieck.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Example for class 1 on Syntactic level."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Example for class 2 on Syntactic level."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Parsed with Stanford-parser: (ROOT (S (S (NP (NP (DT The) (NN opening)) (PP (IN of) (NP (VBG existing) (NN hook) (SBAR (WHNP (WDT which)) (S (VP (VBZ is) (VP (VBN hanged) (PP (IN on) (NP (DT a) (JJ straight) (NN rod)))))))))) (VP (VBZ is) (VP (VBN unclosed)))) (, ,) (CC and) (S (NP (DT the) (ADJP (JJ under) (NP (NP (NN frame)) (, ,) (NP (DT the) (NN tension) (NN spring)) (, ,) (NP (DT the) (NN swing) (NN arm)) (CC and) (NP (DT the) (JJ tensile) (NN force)))) (JJ constant) (NN device)) (VP (VBP are) (RB all) (ADJP (VBN equipped) (PP (IN in) (NP (DT the) (JJ protecting) (NN cover)))))) (. .)))"
},
"TABREF0": {
"text": "Categories to simplify a sentence.",
"type_str": "table",
"num": null,
"content": "<table><tr><td>#</td><td>Original</td><td>Delete</td><td># Original</td><td>Delete</td></tr><tr><td colspan=\"2\">1 RB1 CC RB2 JJ</td><td>RB1 CC RB2</td><td>11 DT JJ CC VBG NN*</td><td>JJ CC VBG</td></tr><tr><td colspan=\"2\">2 RB1 JJ|RB2|MD</td><td>RB1</td><td>12 !VB JJ1 NN*|JJ2</td><td>JJ1</td></tr><tr><td colspan=\"2\">3 DT NN1 CC NN2 NN*</td><td>CC NN2</td><td>13 , JJ ,</td><td>JJ,</td></tr><tr><td colspan=\"2\">4 !IN&amp;&amp;!TO NN|CD NN (!%)</td><td>NN|CD</td><td>14 JJ1 VBG NN*|JJ2</td><td>JJ1 VBG</td></tr><tr><td colspan=\"3\">5 NN VBP|VBZ|VBG VBP|VBZ NN</td><td>15 DT VBG1 CC VBG2 NN*|JJ</td><td>VBG1 CC VBG2</td></tr><tr><td colspan=\"2\">6 NN VBP|VBZ JJ IN</td><td>NN</td><td>16 DT VBD VBG NN</td><td>VBD VBG</td></tr><tr><td colspan=\"2\">7 DT NN* CC NN VBN NN*</td><td>CC NN1 VBN</td><td>17 DT VBG|VBN JJ|NN*</td><td>VBG|VBN</td></tr><tr><td colspan=\"2\">8 DT NN1 VBG1 CC VBG2 NN2</td><td>NN1 VBG1 CC VBG2</td><td>18 only VB*(is|are|am|was|were)</td><td>\"Only\" TO \"there\"</td></tr><tr><td colspan=\"2\">9 DT NN1 VBG|VBN NN2</td><td>NN1 VBG|VBN</td><td>19 NN* PP (not with VB* in)</td><td>PP ( not with VB* in)</td></tr><tr><td colspan=\"2\">10 JJ1 CC JJ2 JJ3|NN</td><td>JJ1 CC JJ2</td><td>#</td><td/></tr><tr><td>In</td><td/><td/><td/><td/></tr></table>",
"html": null
},
"TABREF1": {
"text": "Analysis of evaluation corpus.",
"type_str": "table",
"num": null,
"content": "<table><tr><td>corpus</td><td colspan=\"2\">Short sentences</td><td colspan=\"2\">Long sentences</td></tr><tr><td>Number(sen.)</td><td colspan=\"2\">332 sen.</td><td colspan=\"2\">668 sen.</td></tr><tr><td>Percentage(%)</td><td colspan=\"2\">33.2%</td><td/><td>66.8%</td></tr><tr><td/><td>subject-verb</td><td>subject-verb</td><td>subject-verb</td><td>subject-verb pairs</td></tr><tr><td/><td>pairs agreed</td><td>pairs disagreed</td><td>pairs agreed</td><td>disagreed</td></tr><tr><td>Number(sen.)</td><td>172 sen.</td><td>160 sen.</td><td>328 sen.</td><td>340 sen.</td></tr><tr><td>Percentage(%)</td><td>17.2%</td><td>16%</td><td>32.8%</td><td>34%</td></tr></table>",
"html": null
},
"TABREF2": {
"text": "The outcome of the subject-verb discovery.",
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">Dep.</td><td/><td/><td colspan=\"2\">SSIM+Dep.</td><td/><td colspan=\"4\">SSIM+ Dep.+WH-.</td></tr><tr><td/><td colspan=\"2\">Short</td><td colspan=\"2\">Long</td><td colspan=\"2\">Short</td><td colspan=\"2\">Long</td><td colspan=\"2\">Short</td><td colspan=\"2\">Long</td></tr><tr><td/><td colspan=\"2\">sentences</td><td colspan=\"2\">sentences</td><td colspan=\"2\">sentences</td><td colspan=\"2\">sentences</td><td colspan=\"2\">sentences</td><td colspan=\"2\">sentences</td></tr><tr><td>Subject-verb agreed(Y/N)</td><td>Y</td><td>N</td><td>Y</td><td>N</td><td>Y</td><td>N</td><td>Y</td><td>N</td><td>Y</td><td>N</td><td>Y</td><td>N</td></tr><tr><td>R(%)</td><td>96.89</td><td>96.33</td><td>91.05</td><td>91.12</td><td>96.89</td><td>96.33</td><td>93.68</td><td>90.89</td><td>96.89</td><td>97.91</td><td colspan=\"2\">93.82 91.13</td></tr><tr><td>P(%)</td><td>93.96</td><td>92.93</td><td>89.29</td><td>85.68</td><td>94.92</td><td>94.85</td><td>92.35</td><td>86.14</td><td>94.92</td><td>96.89</td><td colspan=\"2\">92.48 86.66</td></tr><tr><td>F(%)</td><td>95.41</td><td>94.60</td><td>90.16</td><td>88.32</td><td>95.90</td><td>95.58</td><td>93.01</td><td>88.45</td><td>95.90</td><td>97.40</td><td colspan=\"2\">93.14 88.84</td></tr><tr><td>R total (%)</td><td/><td colspan=\"2\">92.16</td><td/><td/><td colspan=\"2\">93.07</td><td/><td/><td colspan=\"2\">93.38</td><td/></tr><tr><td>P total (%)</td><td/><td colspan=\"2\">88.53</td><td/><td/><td colspan=\"2\">90.16</td><td/><td/><td colspan=\"2\">90.63</td><td/></tr><tr><td>F total (%)</td><td/><td colspan=\"2\">90.31</td><td/><td/><td colspan=\"2\">91.59</td><td/><td/><td colspan=\"2\">91.98</td><td/></tr></table>",
"html": null
},
"TABREF3": {
"text": "Precision of agreement evaluation on the subject-verb pairs that correctly recognized.",
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td/><td colspan=\"2\">Dep.</td><td/><td/><td colspan=\"2\">SSIM+ Dep.</td><td/><td colspan=\"4\">SSIM+ Dep.+WH-.</td></tr><tr><td/><td colspan=\"2\">Short</td><td colspan=\"2\">Long</td><td colspan=\"2\">Short</td><td colspan=\"2\">Long</td><td colspan=\"2\">Short</td><td colspan=\"2\">Long</td></tr><tr><td/><td colspan=\"2\">sentences</td><td colspan=\"2\">sentences</td><td colspan=\"2\">sentences</td><td colspan=\"2\">sentences</td><td colspan=\"2\">sentences</td><td colspan=\"2\">sentences</td></tr><tr><td>Subject-verb agreed(Y/N)</td><td>Y</td><td>N</td><td>Y</td><td>N</td><td>Y</td><td>N</td><td>Y</td><td>N</td><td>Y</td><td>N</td><td>Y</td><td>N</td></tr><tr><td>P(%)</td><td>99.47</td><td>97.27</td><td>97.53</td><td>97.88</td><td>99.47</td><td>97.81</td><td>97.04</td><td>98.28</td><td>99.47</td><td>97.86</td><td>97.04</td><td>98.41</td></tr><tr><td>P total (%)</td><td/><td colspan=\"2\">97.86</td><td/><td/><td colspan=\"2\">97.88</td><td/><td/><td colspan=\"2\">97.93</td><td/></tr></table>",
"html": null
}
}
}
}