|
{ |
|
"paper_id": "Y06-1027", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T13:34:08.935811Z" |
|
}, |
|
"title": "Knowledge-Rich Approach to Automatic Grammatical Information Acquisition: Enriching Chinese Sketch Engine with a Lexical Grammar", |
|
"authors": [ |
|
{ |
|
"first": "Chu-Ren", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Wei-Yun", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "ma@iis.sinica.edu.tw" |
|
}, |
|
{ |
|
"first": "Yi-Ching", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Chih-Ming", |
|
"middle": [], |
|
"last": "Chiu", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper discusses the implementation of a knowledge-rich approach to automatic acquisition of grammatical information. Our study is based on Word Sketch Engine (Kilgarriff and Tudgell 2002). The original claims of WSE are two folded: that linguistic generalizations can be automatically extracted from a corpus with simple collocation information provided that the corpus is large enough; and that such a methodology is easily adaptable for a new language. Our work on Chinese Sketch Engine attests to the claim the WSE is adaptable for a new language. More critically, we show that the quality of grammatical information provided has a directly bearing on the result of grammatical information acquisition. We show that when provided with a knowledge rich lexical grammar, both the quantity and quality of the extracted knowledge improves substantially over the results with simple PS rules.", |
|
"pdf_parse": { |
|
"paper_id": "Y06-1027", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper discusses the implementation of a knowledge-rich approach to automatic acquisition of grammatical information. Our study is based on Word Sketch Engine (Kilgarriff and Tudgell 2002). The original claims of WSE are two folded: that linguistic generalizations can be automatically extracted from a corpus with simple collocation information provided that the corpus is large enough; and that such a methodology is easily adaptable for a new language. Our work on Chinese Sketch Engine attests to the claim the WSE is adaptable for a new language. More critically, we show that the quality of grammatical information provided has a directly bearing on the result of grammatical information acquisition. We show that when provided with a knowledge rich lexical grammar, both the quantity and quality of the extracted knowledge improves substantially over the results with simple PS rules.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The original goal of corpus-based studies was to provide 'a body of evidence' for more theoretical linguistic studies (Francis and Kucera 1965) . However, corpus-based studies evolved with the improvements made in electronic data manipulation, making of automatic acquisition of grammatical information a goal of computational linguistics, computational lexicography, as well as theoretical corpus linguistics. Previous works that made significant contribution to the study of automatic extraction of grammatical relation includes Sinclair's (1987) work on KWIC, Church and Hanks' (1989) introduction of Mutual Information, and Lin's (1998) introduction of relevance measurement. Kilgarriff and colleagues' work on Word Sktech Engine (WSE) makes a bold step forwards in automatic linguistic knowledge acquisition Tudgell 2002, Kilgarriff et al. 2004) . The main claim is that a 'gargantuan' corpus 1 contains enough distributional information about most grammatical dependencies in a language such that the set of simple collocational patterns will allow automatic extraction of grammatical relations and other grammatical information. Crucially, the validity of the extracted information does not rely on the preciseness of the rules or the perfect grammaticality of the data. Instead, WSE allows the presence of ungrammatical examples in the corpus and the possibility for collocational patterns to occasionally identify the wrong lexical pairs. WSE assumes that these anomalies will be statistically insignificant, especially when there are enough examples instantiating the intended grammatical information. In addition, WSE relies on Salience measurement to rank the significance of all attested relations. Salience is calculated by MI of a relation multiplied with the frequency of the relation, in order to correct MI's bias towards low frequency items. WSE follows Lin's (1998) formulation of MI of relations, where ||w 1 , R, w 2 || stands for the frequency of the relation R between w 1 and, w 2 . A wild card * can occurs in place of w 1 , R, or w 2 to represent the all cases. Hence MI between w 1 , and w 2 given a relation R is given below (Kilgarriff and Tudgell 2002) : *, || x || * R, w1, || || w2 R, w1, || x || * R, *, || log( ) w2 R, w1, ( = I With Salience ranking, WSE gives a one page summary of the most significant grammatical behaviors of any given word. The report includes SUBJ, OBJ, modifier, coordination, etc. WSE is also able to calculate Sketch differences between two sketches, and create automatic thesauri that underline the comparisons between the synonym pairs based on sketch similarity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 143, |
|
"text": "(Francis and Kucera 1965)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 531, |
|
"end": 548, |
|
"text": "Sinclair's (1987)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 563, |
|
"end": 587, |
|
"text": "Church and Hanks' (1989)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 628, |
|
"end": 640, |
|
"text": "Lin's (1998)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 813, |
|
"end": 850, |
|
"text": "Tudgell 2002, Kilgarriff et al. 2004)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1873, |
|
"end": 1885, |
|
"text": "Lin's (1998)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 2154, |
|
"end": 2183, |
|
"text": "(Kilgarriff and Tudgell 2002)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 2186, |
|
"end": 2284, |
|
"text": "*, || x || * R, w1, || || w2 R, w1, || x || * R, *, || log( ) w2 R, w1, (", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Background: Word Sketch Engine and Automatic Acquisition of Grammatical Information", |
|
"sec_num": "1" |
|
}, |
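As a concrete illustration of formula (1) and the salience ranking described in the preceding entry, the Python sketch below (our own illustration, not the WSE implementation; the triples, relation name, and counts are hypothetical) computes the MI of a relation instance from raw tuple counts and multiplies it by the instance frequency to obtain salience.

```python
# A minimal sketch (ours, not the WSE code): computing the MI of formula (1)
# and the salience used for ranking, from hypothetical (w1, R, w2) tuple counts.
import math
from collections import Counter

# Hypothetical triples; in WSE these come from matching collocational patterns
# against a POS-tagged corpus.
triples = [
    ("chi", "object", "fan"), ("chi", "object", "fan"),
    ("chi", "object", "kui"), ("da", "object", "qiu"),
]
count = Counter(triples)

def mi(w1, rel, w2):
    """I(w1,R,w2) = log( ||w1,R,w2|| * ||*,R,*|| / (||w1,R,*|| * ||*,R,w2||) )."""
    n_w1_r_w2 = count[(w1, rel, w2)]
    n_r = sum(c for (a, r, b), c in count.items() if r == rel)                 # ||*,R,*||
    n_w1_r = sum(c for (a, r, b), c in count.items() if a == w1 and r == rel)  # ||w1,R,*||
    n_r_w2 = sum(c for (a, r, b), c in count.items() if b == w2 and r == rel)  # ||*,R,w2||
    return math.log(n_w1_r_w2 * n_r / (n_w1_r * n_r_w2))

def salience(w1, rel, w2):
    # Salience = MI multiplied by the frequency of the relation instance,
    # correcting MI's bias towards low-frequency items.
    return mi(w1, rel, w2) * count[(w1, rel, w2)]

print(round(salience("chi", "object", "fan"), 3))
```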
|
|
{ |
|
"text": "A crucial claim of the WSE is that this methodology can be easily adapted to new languages. That is, each language would require a different set of collocational patterns for relation extraction. WSE has been successfully ported to Czech and Irish (Kilgarriff et al. 2004) . And work has done to produce a prototype of Chinese Sketch Engine (called CSE I hereafter for easy reference, Kilgarriff et al. 2005 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 248, |
|
"end": 272, |
|
"text": "(Kilgarriff et al. 2004)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 385, |
|
"end": 407, |
|
"text": "Kilgarriff et al. 2005", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work: The Preliminary Implementation of Chinese Word Sketch", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "One issue not addressed in previous literature on WSE or similar work on automatic extraction of grammatical information is how much can existent grammatical knowledge help. While WSE requires only simple collocational information, it was not clear if more sophisticated grammatical information will help or hurt the result of the WSE. Three previous adaptation of the WSE, including Kilgarriff et al.'s (2005) adaptation of CSE I, replies heavily on transferring the original BNC-based templates to a different language and achieved reasonable results. However, there have been observations that they seem to miss some language-specific grammatical behaviors.", |
|
"cite_spans": [ |
|
{ |
|
"start": 384, |
|
"end": 410, |
|
"text": "Kilgarriff et al.'s (2005)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work: The Preliminary Implementation of Chinese Word Sketch", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Word Sketch uses regular expressions over POS-tags to formalize rules of collocation patterns. CSE I utilizes 11 collocating patterns to extract all grammatical relations and only one pattern for the simplest verb-object relation as shown as 2(2) Collocating Pattern for Object from CSE I 1:\"V[BCJ]\" \"Di\"? \"N[abc]\"? \"DE\"? \"N[abc]\"? 2: \"Na\" [tag!= \"Na\"] (\"XXX\" represents XXX is a regular expression, \"XXX\"? represents XXX appears zero or one time, \"XXX\"{a,b} represents XXX appears a~b times.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work: The Preliminary Implementation of Chinese Word Sketch", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In (2), the 1: and 2: identify the two collocated components. Between the components, zero or one particle may appear (denoted by \"Di\"?), zero or one processor may appears (denoted by any_noun? \"DE\"?), and zero or one noun-modifier may appears (denoted by \"N[abc]\"?) Huang et al. (2005) pointed out that the prototype version of CSE I did not deal with the prevalent non-canonical word orders in Chinese (3). In addition, we also noticed that it fails to identify grammatical relations when an argument lies some distance away from a verb because of internal modification (4). Chinese objects often occur in pre-verbal positions in various pre-posing constructions, such as topicalization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 267, |
|
"end": 286, |
|
"text": "Huang et al. (2005)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work: The Preliminary Implementation of Chinese Word Sketch", |
|
"sec_num": "2" |
|
}, |
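To make the mechanics of a pattern like (2) concrete, the following Python sketch (ours; the tags and sentences are simplified illustrations, not the full CKIP tagset or the actual WSE matcher) applies a CSE I-style collocating pattern as a regular expression over word/POS tokens. It finds the object when only a noun modifier intervenes, but, as discussed for (4), misses it when a classifier phrase intervenes.

```python
# A rough sketch (ours) of applying a CSE I-style collocating pattern such as (2)
# as a regular expression over "word/TAG" tokens. Tags are illustrative; the
# final [tag!="Na"] constraint of (2) is omitted for brevity.
import re

def tag_string(sentence):
    return " ".join(f"{w}/{t}" for w, t in sentence) + " "

# Mirrors  1:"V[BCJ]" "Di"? "N[abc]"? "DE"? "N[abc]"? 2:"Na"
verb_object = re.compile(
    r"(?P<verb>\S+)/V[BCJ]\S* "   # component 1: the verb
    r"(?:\S+/Di )?"               # optional particle
    r"(?:\S+/N[abc]\S* )?"        # optional possessor / noun modifier
    r"(?:\S+/DE )?"               # optional DE
    r"(?:\S+/N[abc]\S* )?"        # optional noun modifier
    r"(?P<obj>\S+)/Na\b"          # component 2: the object noun
)

matched = [("破壞", "VC"), ("公園", "Nc"), ("景觀", "Na")]                      # found
missed = [("吃", "VC"), ("了", "Di"), ("一", "Neu"), ("口", "Nf"), ("飯", "Na")]  # classifier phrase intervenes: missed

for sent in (matched, missed):
    m = verb_object.search(tag_string(sent))
    print("verb-object:", (m.group("verb"), m.group("obj")) if m else None)
```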
|
{ |
|
"text": "( Such examples led to the question of whether the simple collocation rules adapted in Kilgarriff et al. (2005) was sufficient and if a knowledge-rich approach would yield better results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 87, |
|
"end": 111, |
|
"text": "Kilgarriff et al. (2005)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work: The Preliminary Implementation of Chinese Word Sketch", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "3 Porting ICG lexical grammar as collocation patterns", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work: The Preliminary Implementation of Chinese Word Sketch", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The important design criteria of WSE is that salience statistics is compiled based on relational tuples such as {w 1 , R, w 2 }. This is a crucial decision since word-based lexical statistics itself does not offer enough grammatical information, while it is hard to obtain enough information-rich parsed trees for statistic studies. It is interesting to observe that Kilgarriff et al. (2002) obtained only 70 million tuples (types) based on the 100 million words BNC. In terms of elements that need to be traced, this is indeed comparable to a general bi-gram model and definitely less complex than models that allows any lexical bi-gram without adjacency conditions. The reason for the reduction in complexity is because the collocational patterns serve as filters that disregard non-significant relations. Based on this model, a set of collocational patterns that contains richer grammatical information will enable the sketch engine to better identify grammatical relation tuples and render more precise grammatical information. Ideally, the most effective collocational patterns are those with explicit annotations of the targeted grammatical relations. Hence we propose to port a lexical grammar with argument annotation as WSE collocational patterns.", |
|
"cite_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 391, |
|
"text": "Kilgarriff et al. (2002)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivating a knowledge-rich approach", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The Information-based Case Grammar (Chen and Huang 1992 ) is a unification-based formalism proposed specifically for Chinese language processing. ICG is a head-driven lexical grammar in the sense that all grammatical information is encoded on the verb. Each verb is encoded with a set of basic patterns (BP) which stipulate the possible structural instantiations of that verb as well as the positions of participant roles (called Case) for each verb. There are over 100 templates of patterns corresponding to each verb sub-class. In the Academia Sinica CKIP lexicon, over 40,000 verbs are annotated with ICG information. Each verb starts with a default assignment according to its verbal sub-class, with the template information manually corrected based on corpus data and linguistic analysis. Obviously, not unlike the Levin classes for English (Levin, 1993) , each BP is repeated and shared by a number of verb sub-classes. Both the BP information and the Verb sub-classes information will be utilized in our adaptation of Chinese Sketch Engine (referred to as CSE II hereafter). ", |
|
"cite_spans": [ |
|
{ |
|
"start": 35, |
|
"end": 55, |
|
"text": "(Chen and Huang 1992", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 846, |
|
"end": 859, |
|
"text": "(Levin, 1993)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introducing ICG", |
|
"sec_num": "3.2" |
|
}, |
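As a purely hypothetical illustration of how BP information keyed on verb sub-classes could be represented for lookup, the sketch below uses sub-class labels that follow the paper's examples, but the pattern strings and data structure are invented for exposition and are not the actual CKIP/ICG encoding.

```python
# Hypothetical sketch only: a lookup table from verb sub-class tags to basic
# patterns (BPs). The pattern strings here are invented placeholders, not the
# real ICG templates.
from typing import Dict, List

BASIC_PATTERNS: Dict[str, List[str]] = {
    # sub-class tag -> structural instantiations with participant-role positions
    "VC2":  ["agent V goal", "goal bei V (agent)"],
    "VB11": ["agent prep-goal V", "goal bei V"],
}

def patterns_for(subclass_tag: str) -> List[str]:
    """Return the BPs shared by a verb sub-class (empty list if unknown)."""
    return BASIC_PATTERNS.get(subclass_tag, [])

print(patterns_for("VB11"))
```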
|
{ |
|
"text": "There are two steps in the implementation of CSE II: the first step is corpus preparation, and the second is grammar adaptation. For corpus, we follow CSE I and use the LDC Chinese Gigaword Corpus because of its size (over 11 billion characters) and its coverage of both traditional and simplified characters. The Gigaword Corpus was fully automatically segmented and tagged using the Academia Sinica tagset and tagtool (Ma and Huang 2006) . Our work in adaptation for CSE II includes resolution of categorical ambiguity for nominalization and improvement of unknown word resolution.", |
|
"cite_spans": [ |
|
{ |
|
"start": 420, |
|
"end": 439, |
|
"text": "(Ma and Huang 2006)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation: Preparation of Corpus and Grammar", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "For grammar adaptation, we concentrate on exploiting lexically encoded ICG grammatical knowledge. Since the corpus was tagged with Academia Sinica tagset, the verb-subclass information for each verb is specified. Hence we can utilize the structural information from ICG BP. Since the tagged corpus has identified the verb-subclasses, we are able to correctly identify different grammatical relations, even though two verbs may share the same local structure. For instance, many verbs share the [..PP V NP] structure. However, for pseudo-transitive verbs (VB), contrary to na\u00efve structural assignment, it is the object of PP that has the Object role for the matrix sentences, as illustrated in 6. Such structural mismatches are easily resolved when the sub-class tag information is unified with ICG BP information. A further crucial step that we take in grammar adaptation is to allow a dependency relation that is separated by several constituents. Recall that a crucial motivation for the design of WSE is because parsing would be too time-and labor-consuming and would not yield highly reliable results. However, without parsing, it would be difficult to identify a head of a complex object, or a preposed object. Based on ICG grammar, we observe that such behaviors are often dependent on the verb sub-classes and can be captured. An illustrating example is the identification of a preposed object of a pseudotransitive verb (VB). 7a. \u6751\u838a(object) \u660e\u5929\u5c07 \u88ab \u5937\u70ba\u5e73\u5730(VB11) cunzhuang mingtian jiang bei yiweipingdi village tomorrow will BY level-to-the-ground 'The village will be leveled to the ground tomorrow.' b. begin time1 location time1 adv? passive_prep adv_string 1:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation: Preparation of Corpus and Grammar", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "\"V[BCJ].*\" [tag!=\"DE\"]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation: Preparation of Corpus and Grammar", |
|
"sec_num": "3.3" |
|
}, |
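The idea behind rule (7b) can be sketched as follows (our own simplified rendering; the token tags, the word list for the passive marker, and the bounds on intervening material are illustrative, not the exact CSE II definitions given in the appendix): the clause-initial NP is paired with the verb as its Object even when a temporal NP and adverbs intervene before the passive marker.

```python
# Simplified sketch (ours) of the preposed-object rule in (7b): under a passive
# marker such as 被, pair the clause-initial NP with the V[BCJ] verb as Object,
# skipping an optional temporal NP and adverbs. Tags and macros are illustrative.
import re

sentence = [("村莊", "Nc"), ("明天", "Nd"), ("將", "D"), ("被", "P"), ("夷為平地", "VB")]
tokens = " ".join(f"{w}/{t}" for w, t in sentence) + " "

preposed_object = re.compile(
    r"(?P<obj>\S+)/Nc\S* "          # clause-initial NP (component 2, 'location')
    r"(?:\S+/Nd\S* )?"              # optional temporal NP (time1), ignored
    r"(?:\S+/D ){0,2}"              # optional adverbs (adv_string)
    r"(?:被|給|遭|由)/\S+ "          # passive_prep, matched by word form
    r"(?:\S+/D ){0,2}"              # more optional adverbs
    r"(?P<verb>\S+)/V[BCJ]\S*"      # component 1: the verb
)

m = preposed_object.search(tokens)
if m:
    print("verb-object:", m.group("verb"), m.group("obj"))
```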
|
{ |
|
"text": "Note that the rule allows CSE II to ignore the temporal NP closer to the verb and pick the initial NP as the Object (denoted by 'location' in (7b), which is a noun phrase describing a location. Complete definition is given in appendix). Another set of rules utilizes the fact that successive Chinese nouns of a NP are head final. Hence, in order to determine which noun of a NP is the head Object, we stipulate that it has to be the Object which must not precede another noun. A relation and the rule that it accounts for are given in (8). Note that the NP stands for a noun-head following zero, one, or two noun-modifiers. The rule correctly pick jingguan 'sight-and-view' and not the noun-modifier preceding it (i.e. gongyuan 'park') as the Object of pohuai 'to damage'. (NP is defined as \"\u2026noun_modifier{0,2} 2:noun\u2026\". Complete definition is given in appendix.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation: Preparation of Corpus and Grammar", |
|
"sec_num": "3.3" |
|
}, |
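A small sketch of the head-final heuristic just described (our own illustration; the noun-tag prefixes are simplified stand-ins for the noun_modifier and noun definitions in the appendix): within a run of consecutive nouns, only the noun that does not precede another noun is taken as the head, and hence as the Object candidate.

```python
# Sketch (ours) of the head-final rule: in 破壞 公園 景觀, the noun 公園 precedes
# another noun, so 景觀 is selected as the head noun / Object of 破壞.
def object_head(tagged, noun_prefixes=("Na", "Nb", "Nc", "Nd")):
    """Return the first noun that is not immediately followed by another noun."""
    for i, (word, tag) in enumerate(tagged):
        is_noun = tag.startswith(noun_prefixes)
        next_is_noun = i + 1 < len(tagged) and tagged[i + 1][1].startswith(noun_prefixes)
        if is_noun and not next_is_noun:
            return word
    return None

sent = [("公園", "Nc"), ("景觀", "Na")]  # the NP following the verb 破壞
print("Object of 破壞:", object_head(sent))
```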
|
{ |
|
"text": "The 32 definitions and 80 collocating patterns are designed for all Chinese grammatical relations according to their sub-classes. Note that the English grammar has 39 definitions but only 40 collocating patterns. We can safely say the CSE II grammar contains richer structural information. Of the 80 patterns, 20 of them are for verb-object relations. The complete list is given in Appendix I for reference. Please note that the number of rules are greater than the number (11 collocating patterns in all, one of them for verb-object relation) for CSE I. The new grammar took the Word Sketch Engine over 7 hours to compile. But once complied, the composition of word sketch for each word could be done in real time. Please note that, based on the compile log, each of the 20 object rule are useful and applied to at least 2, 7515415,56153,713,[103] times. This clearly shows that all new rules are basic and necessary.", |
|
"cite_spans": [ |
|
{ |
|
"start": 825, |
|
"end": 848, |
|
"text": "7515415,56153,713,[103]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation: Preparation of Corpus and Grammar", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "At the time of submission, only spot-checks of the results have been performed. Overall evaluation is still being conducted and results will be available in the final paper. The spot-checking so far does show clear and evident improvements over CSE I. The recall data comparing in (9) underlines the drastic improvement of CSE II over CSE I. For simple transitive verbs (the state verb kan4 and the activity verb da3), CSE II recall almost twice as many objects as CSE I. For more complex verb (ditransitive song4, as well as all types of clause taking verbs xiang1xin4, xiang1xin4, and quan4), CSE I fails to identify any of their objects, while CSE II. does correctly extract their objects. On the other hand, for intransitive verbs, CSE I and CSE II both correctly extract no object relations for the state verb hong2. The fact that CSE II extracted some object relations for the activity verb pao3, although with relatively low frequency, is worth noting. Upon further examination, we found that many of the objects extracted have habitual readings, such as pao2 ma3la1song1'runs marathon' or idiomatic reading pao3 bai2tie3 '(of a politician) runs from one funeral to another'. These are additional senses of the lemma pao3 that to take objects. In sum, the recall comparison data shows improvement of both quality and quantity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In order to contrast the quality of the extracted grammatical knowledge, we take the verb chi1 'to eat' for a more in-depth analysis. For chi1, only 23,421 objects were identified by CSE I, while we identified 33,038 objects with the richer grammar patterns in CSE II. This is an improvement of over 42% in terms of recall and a substantial quantitative gain. In terms of quality improvement, we observed that the following three objects are among the top 20 collocates identified by CSE II, but no by CSE I. Note that the three numbers following each object is its frequency (as object of chi), its saliency in this relation, and its saliency ranking (in parentheses). Note that both chi-gui 'to be taken advantage of' and chi-kutou 'to suffer' are both idiom chunks, and expected to be among the most salient collocating objects of chi. However, since they both allow frequent internal modification (e.g. chi zhangsan de an kui, 'been taken advantage of in the dark by Zhangsan'), a simple collocation pattern such as adopted by CSE I fails to identify them. Our adaptation in CSE II took internal modification into consideration and successfully identified them. The case with fan is even more general and potentially more interesting in terms of extracting basic collocation. Rice is undoubtedly the most typical conceptual object of chi 'to eat' and it occurs frequently in the corpus. However, CSE I only identified 266 instances of fan as object of chi, even less than the 427 instances of binglang 'beetlenut'. This is because fan represents a basic and generic concept and is rarely used along without modification. Since it often does not occur in concatenation with the verb, the simple collocation pattern of CSE I cannot identify it. We can see in (10) that CSE II identifies 802 instances of fan as object of chi, a recall improvement of over 200%. In addition, CSE II shows that fan as object of chi is almost twice as frequent as binglang (450). This fact is more consistent with our knowledge of the Chinese language and a clear indication that our adaptation successfully corrected the biased introduced by the incomplete grammatical knowledge of in CSE I.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Nevertheless, a recall of instance fan as an object improves over 200% in terms of its identification., misplace of instance fan as a subject still remain. As CSE II shown, 718 instances of fan as a subject require us to modify our grammar adaptation. In fact, instance fan will never serve as a grammatical relation of subject, hence collocation patterns of object/object_of ought to be adapted according to its sub-classes. In view of 718 instances of fan as a subject, we found that both of mei and you that precede a POS \"Na\" play a significant role in marking a object and identifying topicalization. The examples above reveal that an object is likely to be identified between mei / you and \"VC.*\". In that case, collocating pattern for object in CSE II can be altered and added to extract the very collocation of verb_object like this, [word=\"\u6c92\"|word=\"\u6c92\u6709\"|word=\"\u6709\"]NP adv_string 1:\"VC.*\" [tag!=\"DE\"] Although this collocating pattern cannot capture all the topicalized objects (e.g. \u6211\u98ef\u5403\u5b8c\u5c31\u8d70 \uf9ba\u3002), it seems to help identify instance fan as an object as illustrated in CSE II, or rather, it helps to mark the object in another collocation of verb_object indeed. In addition to the collocating pattern illustrated above, there exists a sentence pattern that helps to point out the topicalized objects, (13) \u4ed6 \u7d93\u5e38 \u662f \u4e00\u982d \u624e\u9032 \u5be6\u9a57\u5ba4 \u5c31 \uf99a \ufa2a \ufa26 \u9867\uf967\u4e0a \u5403 \u3002 ta jingchang shi yitou zhajin shi yan shi jiu lian fan dou gubushang chi often SHI completely invest laboratory jiu LIAN rice DOU unconcernedly eat 'He often invest such much time in the laboratory that he forgets to eat.'", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Evaluation", |
|
"sec_num": "4" |
|
}, |
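The added pattern just described can be sketched in the same regex-over-tags style (ours; the tag assigned to 有 and the simplification of NP to a single head noun are illustrative assumptions): the NP between 沒/沒有/有 and a VC verb is extracted as that verb's Object, as in 有 飯 吃 'have rice to eat' from (11).

```python
# Sketch (ours, simplified) of the added collocating pattern
#   [word="沒"|word="沒有"|word="有"] NP adv_string 1:"VC.*" [tag!="DE"]
# The NP between the existential/negative marker and the VC verb is its Object.
import re

sentence = [("災民", "Na"), ("有", "V_2"), ("飯", "Na"), ("吃", "VC")]
tokens = " ".join(f"{w}/{t}" for w, t in sentence) + " "

topicalized_object = re.compile(
    r"(?:沒有|沒|有)/\S+ "        # marker matched by word form
    r"(?P<obj>\S+)/Na\S* "        # the topicalized object (head noun only here)
    r"(?:\S+/D ){0,2}"            # optional adverbs (adv_string)
    r"(?P<verb>\S+)/VC\S*"        # the verb
)

m = topicalized_object.search(tokens)
if m:
    print("verb-object:", m.group("verb"), m.group("obj"))
```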
|
{ |
|
"text": "In example (15), it represents a predication of lian-dou pattern and the topicalized object fan is inbetween. Therefore, we may extract the collocation of verb_object stated as below, [word=\"\uf99a\"] NP [word=\"\ufa26\"| adv_string] 1:\"VC.*\" [tag!=\"DE\"] Hereby, we still are confronted with one problems as below, though lian-dou construction seems to help extract all the topicalized objects: (14) \u9019\u7a2e \ufa2a \u5c31 \uf99a \u4e5e\u4e10 \ufa26 \uf967 \u5403\u3002 zhezhong fan jiu lian qigai dou bu chi This sort rice jiu LIAN beggar DOU not eat 'Even a beggar won't eat this sort of rice. '", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In the light of the sentence (14), we are certain to come up with more refined grammar adaptation to capture the real topicalized object that instantiates in the natural language realization. Identifying an object to be a topicalization is really a thorny problem in terms of grammatical knowledge; even though the above suggested collocating patterns advance the identification of object as a sub-class, the goal is aimed to extract all sorts of topicalized objects in CSE II.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results and Evaluation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In this paper, we showed that rich grammatical knowledge can be utilized to improve results of automatic grammatical information acquisition. In particular, we applied the ICG lexical grammar to the WSE to support the claim that its methodology can be easily adapted in different languages. We also introduced richer grammatical information to the finite state specifications of WSE, such as verb subclasses and displaced arguments. We showed that a knowledge-rich approach substantially improves the quality and quantity of extracted grammatical information with our adaptation of verb sub-class and argument Basic Pattern information from ICG. We believe that this is an encouraging result for future developments of automatic grammatical acquisition research since acquired grammatical knowledge can provide feedback to the system in order to improve future results. In addition, our work also points to a potentially productive way for theoretical and computational linguists to collaborate.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "define(`adj',`(\"A\"|\"VH11\"|\"VH13\"|\"VH21\"|\"V.*\" \"DE\")') define(`dm',`(\"Neqa\"|\"Neu\"|\"Neqa\" \"Nf.*\"|\"Neu\" \"Nf.*\"|\"Nf.*\")') define(`dm1',`(\"Neqa\"|\"Neqa\" \"Nf.*\"|\"Neu\" \"Nf.*\"|\"Nf.*\")') define (`adj_string',`dm? adj{0,2}') define(`prep',`\"P.?.?\"') define(`rel_clause',`(\"P.?.?\" [tag!=\".*Y\"]{1,5}|[tag!=\".*Y\"]{1,3}) (\"Ng\"|[word=\"\u7684\"] [word=\"\u6642\u5019\"])') define(`particle',`\"Di\"|\"T\"|\"I\"') define(`passive_prep',`[word=\"\u88ab\"|word=\"\u7d66\"|word=\"\u906d\"|word=\"\u7531\"|word=\"\u6328\"|word=\"\u53d7\"|word=\"\u5099\u53d7 \"|word=\"\u906d\u53d7\"]') define(`exclude_prep',`[word!=\"\u88ab\" & word !=\"\u7d66\" & word!= \"\u906d\" & word!=\"\u7531\" & word!=\"\u6328\" & word!=\"\u53d7\" & word!=\"\u5099\u53d7\" & word!=\"\u906d\u53d7\" & word!=\"\u628a\" & word!=\"\u5c07\" & word!=\"\u5411\"]') define(`adv',`([tag=\"D.*\" & tag!=\"DE\"]|\"V.*\" \"DE\")') define (`adv_string',`adv{0,2}') define (`noun',`[tag=\"N[abcdhf] .*\" & tag!=\"Nbc.*\" & tag!=\"Ncd.*\" & word!=\"\u8005\" & word!=\"\u5011\"]') define (`not_noun',`[tag!=\"N[abcdhef] .*\"|tag=\"Nbc.*\"|tag=\"Ncd.*\"]') define (`not_nounandDE',`([tag!=\"N[abcdhef] .*\" & tag!=\"DE\"]|[tag=\"Nbc.*\"]|[tag=\"Ncd.*\"])') define (`noun_without_NcNd',`[tag=\"N[abh] .*\" & tag!=\"Nbc.*\" & word!=\"\u8005\" & word!=\"\u5011\" & word!=\"\uf98e\" & word!=\"\u6708\" & word!=\"\u65e5\"]') define (`noun_modifier ',`[tag=\"N[abcd] .*\" & tag!=\"Ncd\"]') define(`time',`adv_string adj_string \"Nd.*\"{0,5} 2:\"Nd\" \"Ng\"?') define(`location',`adv_string adj_string \"Nc.*\"{0,5} 2:\"Nc\" \"Ncd\"? \"Ng\"?') define(`time1',`(prep? adv_string adj_string \"Nd.*\"{1,6} \"Ng\"?)?') define(`location1',`(prep? adv_string adj_string \"Nc.*\"{1,6} \"Ncd\"? \"Ng\"?)?') define(`NP',`([tag!=\".*Y\" & tag!=\"DE\"]{1,3} \"DE\")? adv_string adj_string noun_modifier{0,2} 2:noun \"Ncd\"?') define(`NP_without_NcNd',`([tag!=\".*Y\" & tag!=\"DE\"]{1,3} \"DE\")? adv_string adj_string noun_modifier{0,2} 2:noun_without_NcNd') define(`NP1',`([tag!=\".*Y\" & tag!=\"DE\"]{1,3} \"DE\")? adv_string adj_string noun_modifier{0,2} noun \"Ncd\"?') define(`NP1_without_NcNd',`([tag!=\".*Y\" & tag!=\"DE\"]{1,3} \"DE\")? adv_string adj_string noun_modifier{0,2} noun_without_NcNd') define(`begin',`(\"COMMACATEGORY\"|\"PERIODCATEGORY\"|\"VE.*\"|\"VK.*\")') define(`end',`(\"COMMACATEGORY\"|\"PERIODCATEGORY\"|\"Ca.*\"|\"Cb.*\")')", |
|
"cite_spans": [ |
|
{ |
|
"start": 184, |
|
"end": 213, |
|
"text": "(`adj_string',`dm? adj{0,2}')", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 695, |
|
"end": 720, |
|
"text": "(`adv_string',`adv{0,2}')", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 728, |
|
"end": 752, |
|
"text": "(`noun',`[tag=\"N[abcdhf]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 821, |
|
"end": 851, |
|
"text": "(`not_noun',`[tag!=\"N[abcdhef]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 890, |
|
"end": 926, |
|
"text": "(`not_nounandDE',`([tag!=\"N[abcdhef]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 982, |
|
"end": 1016, |
|
"text": "(`noun_without_NcNd',`[tag=\"N[abh]", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1106, |
|
"end": 1138, |
|
"text": "(`noun_modifier ',`[tag=\"N[abcd]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Definition of Symbols", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The required corpus size was not specified in WSE literature. However, we estimate from existing work that for WSE to be efficient, corpus scale must be 100 millions words or above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Information-based Case Grammar", |
|
"authors": [ |
|
{ |
|
"first": "Keh-Jiann", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chu-Ren", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "Proceedings of the 13th International Conference on Computational Linguistics (COLING '90). Vol.ii", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "54--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chen, Keh-jiann and Chu-Ren Huang. 1990. Information-based Case Grammar. Proceedings of the 13th International Conference on Computational Linguistics (COLING '90). Vol.ii.54-59. Helsinki, Finland. August 20-25th.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Word association norms, mutual information and lexicography", |
|
"authors": [ |
|
{ |
|
"first": "Ken", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Church", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Hanks", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1989, |
|
"venue": "Proceedings of the 27th Annual Meeting of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "76--83", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Church, Ken. W. and Hanks, Patrick. 1989. Word association norms, mutual information and lexicography. Proceedings of the 27th Annual Meeting of ACL. Pp. 76-83. Vancouver.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Chinese Sketch Engine and the Extraction of Collocations", |
|
"authors": [ |
|
{ |
|
"first": "Chu-Ren", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kilgarriff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yiching", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chih-Min", |
|
"middle": [], |
|
"last": "Chiu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Rychly", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Hong", |
|
"middle": [], |
|
"last": "Bai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Keh-Jiann", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of t the Fourth SigHan Workshop on Chinese Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Huang, Chu-Ren, Adam Kilgarriff, Yiching Wu, Chih-Min Chiu, Simon Smith, Pavel Rychly, Ming-Hong Bai, and Keh-jiann Chen. 2005. Chinese Sketch Engine and the Extraction of Collocations. Proceedings of t the Fourth SigHan Workshop on Chinese Language Processing. Jeju, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Sketching Words", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kilgarriff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Tugwell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Lexicography and Natural Language Processing. A Festschrift in Honour of B", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kilgarriff, Adam and Tugwell, David. Sketching Words. 2002. In Marie-H\u00e9l\u00e8ne Corr\u00e9ard (ed.): Lexicography and Natural Language Processing. A Festschrift in Honour of B.T.S. Atkins. Euralex.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Chinese Word Sketches. ASIALEX 2005: Words in Asian Cultural Context", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Kilgarriff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chu-Ren", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Rychl\u00fd", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Tugwell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kilgarriff, Adam, Chu-Ren Huang, Pavel Rychl\u00fd, Simon Smith, and David Tugwell. 2005. Chinese Word Sketches. ASIALEX 2005: Words in Asian Cultural Context. Singapore.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "English Verb Classes and Alternations: A Preliminary Investigation", |
|
"authors": [ |
|
{ |
|
"first": "Beth", |
|
"middle": [], |
|
"last": "Levin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Levin, Beth. 1993. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Automatic retrieval; and clustering of similar words", |
|
"authors": [ |
|
{ |
|
"first": "Dekang", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Proceedings of COLING-ACL. Montreal", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "768--774", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lin, Dekang. 1998. Automatic retrieval; and clustering of similar words. Proceedings of COLING-ACL. Montreal. 768-774.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Uniform and Effective Tagging of a Heterogeneous Giga-word Corpus", |
|
"authors": [ |
|
{ |
|
"first": "Wei-Yun", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chu-Ren", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of LREC 2006", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ma, Wei-yun and Chu-Ren Huang. 2006. Uniform and Effective Tagging of a Heterogeneous Giga-word Corpus. Proceedings of LREC 2006. Genoa.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Looking Up: an account of the COBUILD project in lexical Object_of 1", |
|
"authors": [], |
|
"year": 1987, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sinclair, John. M. (editor). 1987. Looking Up: an account of the COBUILD project in lexical Object_of 1:\"V[ACFJKL].", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "NP not_noun 1:[tag=\"VH12\"|tag=\"VH14\"|tag=\"VH16\"|tag=\"VH17\"|tag=\"VH22\"] (particle|prep)? NP not_noun", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "*\" (particle|prep)? NP not_noun 1:[tag=\"VH12\"|tag=\"VH14\"|tag=\"VH16\"|tag=\"VH17\"|tag=\"VH22\"] (particle|prep)? NP not_noun [word=\"\u628a\"|word=\"\u5c07\"|word=\"\u5411\"] NP adv_string 1:\"VB.*\" [tag!=\"DE\"] [word=\"\u628a\"|word=\"\u5c07\"] NP adv_string 1:\"VC.*\" [tag!=\"DE\"]", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "NP_without_NcNd time1 location1 time1 adv? passive_prep NP1 adv_string 1", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "NP_without_NcNd time1 location1 time1 adv? passive_prep NP1 adv_string 1:\"V[BCJ].", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "end NP_without_NcNd time1 location1 time1 adv? passive_prep NP1 adv_string 1", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "*\" (particle|\"Ng\"|\"Ncd.*\")? end NP_without_NcNd time1 location1 time1 adv? passive_prep NP1 adv_string 1:\"V[DE].", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "VE.*\" (particle|prep)? NP (particle|\"Ng\"|\"Ncd.*\")? end 1:\"VE.*\" (particle|prep)? NP1 NP", |
|
"authors": [], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "*\" (particle|\"Ng\"|\"Ncd.*\")? end 1:\"VE.*\" (particle|prep)? NP (particle|\"Ng\"|\"Ncd.*\")? end 1:\"VE.*\" (particle|prep)? NP1 NP (particle|\"Ng\"|\"Ncd.*\")? end [word=\"\u5411\"] NP adv_string 1:\"VE.*\" (particle|\"Ng\"|\"Ncd.*\")? End", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"text": "\u7684 \u904a\u5ba2 \u7834\u58de(VC2) \u516c\u5712 \u666f\u89c0(object) daliang de youke pohuai gongyuan jingguan large-number DE tourists damage park sight-and-view 'Large number of tourists damaged the sight-and-view of the park.' b. 1:\"VC.*\" (particle|prep)? NP not_noun", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"text": "10). a. \ufa2a fan4 rice 802 70.96 (4), b. \u8667 kui disadvantage 329 59.24 (12) c. \u82e6\u982d ku3tou2 suffering 194 58.71 (14)", |
|
"uris": null, |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "", |
|
"content": "<table><tr><td>(4). \u4ed6 \u53ea \u5403\uf9ba \u4e00 \u53e3</td><td>\ufa2a \u2026</td></tr><tr><td>Ta zhi chi let yi kou fan</td><td/></tr><tr><td>s/he</td><td/></tr><tr><td>3)</td><td/></tr><tr><td>a. \u5168\u7a40\u9eb5\u5305\uff0c\u5403\uf9ba\u5f88\u5065\u5eb7\u3002</td><td/></tr><tr><td colspan=\"2\">quan.gu mian.bao, chi le hen jian.kang</td></tr><tr><td colspan=\"2\">whole-grain bread, eat LE very healthy</td></tr><tr><td colspan=\"2\">'Eating whole-grain bread is very healthy.'</td></tr><tr><td colspan=\"2\">b. \u6709\u4eba\u5617\u8a66\u8981\u5c07\u9019\u8377\u82b1\u5206\uf9d0\uff0c\u537b\u8d8a\u5206\u8d8a\uf94f\u3002</td></tr><tr><td colspan=\"2\">you ren chang.shi yao jiang zhe he.hua fen.lei, que yue fen yue lei</td></tr><tr><td colspan=\"2\">someone try to JIANG the lotus classify, but more classify more tired</td></tr><tr><td colspan=\"2\">'People have tried to decide what category the lotus belongs in, but have found the effort taxing.'</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"html": null, |
|
"text": "\u4fdd\u8b49 \u707d\u6c11 \u6709 \ufa2a \u5403\u3001\u6709 \u8863 \u7a7f\u3001\u6709 \u4f4f\u8655\u3002 baozheng zaimin you fan chi \u3001yao yi chuan \u3001yao zhuchu ensure victims YOU rice eat \u3001YOU clothes wear \u3001have dwelling place 'We ensure that the victims will have rice to eat, clothes to wear and have dwelling places.' (12) \u4ed6 \u76f8\u4fe1 \u6c34\uf9dd\u8655 \u5de5\u4f5c \u4eba\u54e1 \uf967\u6703 \u6c92\u6709 \ufa2a \u5403\u3002 ta xiang xin shuilichu gongzuo renyuan buhui meiyou fan chi he believe department of irrigation and engineering staff won't MEI rice eat 'He believes that the staff in department of irrigation and engineering will have rice to eat.'", |
|
"content": "<table/>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |