{
"paper_id": "2016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:03:51.395818Z"
},
"title": "Mapping and Generating Classifiers using an Open Chinese Ontology",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Morgado Da Costa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nanyang Technological University",
"location": {
"country": "Singapore"
}
},
"email": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nanyang Technological University",
"location": {
"country": "Singapore"
}
},
"email": "bond@ieee.org"
},
{
"first": "Helena",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nanyang Technological University",
"location": {
"country": "Singapore"
}
},
"email": "helenagao@ntu.edu.sg"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In languages such as Chinese, classifiers (CLs) play a central role in the quantification of noun-phrases. This can be a problem when generating text from input that does not specify the classifier, as in machine translation (MT) from English to Chinese. Many solutions to this problem rely on dictionaries of noun-CL pairs. However, there is no open large-scale machine-tractable dictionary of noun-CL associations. Many published resources exist, but they tend to focus on how a CL is used (e.g. what kinds of nouns can be used with it, or what features seem to be selected by each CL). In fact, since nouns are open class words, producing an exhaustive, definitive list of noun-CL associations is not possible, as it would quickly get out of date. Our work addresses this problem by providing an algorithm for automatically building a frequency-based dictionary of noun-CL pairs, mapped to concepts in the Chinese Open Wordnet (Wang and Bond, 2013), an open machine-tractable dictionary for Chinese. All results will be released under an open license.",
"pdf_parse": {
"paper_id": "2016",
"_pdf_hash": "",
"abstract": [
{
"text": "In languages such as Chinese, classifiers (CLs) play a central role in the quantification of noun-phrases. This can be a problem when generating text from input that does not specify the classifier, as in machine translation (MT) from English to Chinese. Many solutions to this problem rely on dictionaries of noun-CL pairs. However, there is no open large-scale machine-tractable dictionary of noun-CL associations. Many published resources exist, but they tend to focus on how a CL is used (e.g. what kinds of nouns can be used with it, or what features seem to be selected by each CL). In fact, since nouns are open class words, producing an exhaustive, definitive list of noun-CL associations is not possible, as it would quickly get out of date. Our work addresses this problem by providing an algorithm for automatically building a frequency-based dictionary of noun-CL pairs, mapped to concepts in the Chinese Open Wordnet (Wang and Bond, 2013), an open machine-tractable dictionary for Chinese. All results will be released under an open license.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Classifiers (CLs) are an important part of the Chinese language. Different scholars treat this class of words very differently. Chao (1965) , the traditional and authoritative native Chinese grammar, splits CLs into nine different classes. Cheng and Sybesma (1998) draw a binary distinction between count-classifiers and massifiers. Erbaugh (2002) splits CLs into three categories (measure, collective and sortal classifiers). Measure classifiers describe quantities (e.g. 'a bottle of', 'a mouthful of'), collective classifiers describe arrangement of objects ('a row of', 'a bunch of'), and sortal classifiers refer to a particular noun category (which can be defined, for example, by shape). Huang et al. (1997) identify four main classes, individual classifiers, mass classifiers, kind classifiers, and event classifiers. And Bond and Paik (2000) define five major types of CLs: sortal (which classify the kind of the noun phrase they quantify); event (which are used to quantify events); mensural (which are used to measure the amount of some property); group (which refer to a collection of members); and taxonomic (which force the noun phrase to be interpreted as a generic kind). This enumeration is far from complete, and Lai (2011) provides a detailed literature review on the most prominent views on Chinese classifiers.",
"cite_spans": [
{
"start": 128,
"end": 139,
"text": "Chao (1965)",
"ref_id": "BIBREF4"
},
{
"start": 240,
"end": 264,
"text": "Cheng and Sybesma (1998)",
"ref_id": null
},
{
"start": 333,
"end": 347,
"text": "Erbaugh (2002)",
"ref_id": "BIBREF6"
},
{
"start": 695,
"end": 714,
"text": "Huang et al. (1997)",
"ref_id": "BIBREF11"
},
{
"start": 830,
"end": 850,
"text": "Bond and Paik (2000)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most languages make use of some of these classes (e.g. most languages have measure CLs, as in a kilo of coffee, or group CLs, as in a school of fish). What appears to be specific to some languages (e.g. Chinese, Japanese, Thai, etc.) is a class of CLs (sortal classifiers: S-CL) that depicts a selective association between quantifying morphemes and specific nouns. This association is licensed by a number of features (e.g. physical, functional, etc.) that are shared between CLs and the nouns they can quantify, and these morphemes add little (beyond redundancy) to the semantics of the noun phrase they are quantifying.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Consider the following examples of S-CL usage in Mandarin Chinese:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Examples (1) through (4) show how the simple act of counting in Mandarin Chinese involves pairing up nouns with specific classifiers; if incompatible nouns and classifiers are put together, the noun phrase is infelicitous, see (5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Different S-CLs can be used to quantify the same noun, see (1) and (2), and the same type of S-CL can be used with many different nouns, so long as the semantic features are compatible between the S-CL and the noun, see (2) and (3). Extensive work on these features is provided by Gao (2010), where more than 800 classifiers (both sortal and non-sortal) are linked in a database according to the nominal features they select, but with only a few example nouns that can be quantified by each CL. These many-to-one selective associations are hard to keep track of, especially since they depend greatly on context, which often restricts or coerces the sense in which the noun is being used (Huang et al., 1998) .",
"cite_spans": [
{
"start": 686,
"end": 706,
"text": "(Huang et al., 1998)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(6) \u4e00 y\u012b 1 \u4e2a ge CL \u6728\u5934 m\u00f9tou log (of wood) / blockhead \"a log / blockhead\" (7) \u4e00 y\u012b 1 \u4f4d w\u00e8i CL \u6728\u5934 m\u00f9tou blockhead \"a blockhead\" (8) \u4e00 y\u012b 1 \u6839 g\u0113n CL \u6728\u5934 m\u00f9tou log (of wood) \"a log\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Examples (6-8) show how the choice of CL can help resolve the ambiguity of a noun with multiple senses. In (6), we can see that with the use of \u4e2a ge, the most general S-CL in Mandarin Chinese, \u6728\u5934 m\u00f9tou is ambiguous because the CL does not restrict the noun's semantic features. With the use of \u4f4d w\u00e8i (7), an honorific S-CL used almost exclusively with people, it can only be interpreted as \"blockhead\". And the reverse happens when using \u6839 g\u0113n (8), an S-CL for long, slender, inanimate objects: the sense of log (of wood) of \u6728\u5934 m\u00f9tou is selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Even though written resources concerning CLs are abundant, they are not machine-tractable, and their usage is limited by copyright. Natural Language Processing (NLP) tasks depend heavily on open, machine-tractable resources. Wordnets (WN) are a good example of the joint efforts to develop machine-tractable dictionaries, linked in rich hierarchies. Resources like WNs play a central role in many NLP tasks (e.g. Word Sense Disambiguation, Question Answering, etc.). Huang et al. (1998) argue that the integration between corpora and knowledge-rich resources, like dictionaries, can offer good insights and generalizations on linguistic knowledge. In this paper, we follow the same line of thought by integrating both a large collection of Chinese corpora and a knowledge-rich resource (the Chinese Open Wordnet: COW (Wang and Bond, 2013) ). COW is a large, open, machine-tractable Chinese semantic ontology, but it lacks information on noun-CL associations. We believe that enriching this resource with concept-CL links will increase the domain of its applicability. Information about CLs could be used to generate CLs in MT tasks, or even to improve on Chinese Word Sense Disambiguation.",
"cite_spans": [
{
"start": 467,
"end": 486,
"text": "Huang et al. (1998)",
"ref_id": "BIBREF12"
},
{
"start": 817,
"end": 838,
"text": "(Wang and Bond, 2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is structured as follows: Section 2 presents related work, followed by a description of the resources used in Section 3; Section 4 describes the algorithms applied, and Section 5 presents and discusses our results; Section 6 describes ongoing and future work; and Section 7 presents our conclusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Mapping CLs to semantic ontologies has been attempted in the past (Sornlertlamvanich et al., 1994; Bond and Paik, 2000; Paik and Bond, 2001; Mok et al., 2012) . Sornlertlamvanich et al. (1994) is the first description of leveraging hierarchical semantic classes to generalize noun-CL pairs (in Thai). Still, their contribution was mainly theoretical, as it did not report on the performance of their algorithm. Bond and Paik (2000) and Paik and Bond (2001) extend these ideas to Japanese and Korean. In their work, CLs are assigned to semantic classes by hand and propagated down the semantic classes of Goi-Taikei (Ikehara et al., 1997) , achieving up to 81% generation accuracy. Mok et al. (2012) develop a similar approach using the Japanese Wordnet (Isahara et al., 2008) and the Chinese Bilingual Wordnet (Huang et al., 2004) , and report generation scores of 78.8% and 89.8% for Chinese and Japanese, respectively, on a small news corpus.",
"cite_spans": [
{
"start": 66,
"end": 98,
"text": "(Sornlertlamvanich et al., 1994;",
"ref_id": "BIBREF21"
},
{
"start": 99,
"end": 119,
"text": "Bond and Paik, 2000;",
"ref_id": "BIBREF2"
},
{
"start": 120,
"end": 140,
"text": "Paik and Bond, 2001;",
"ref_id": "BIBREF20"
},
{
"start": 141,
"end": 158,
"text": "Mok et al., 2012)",
"ref_id": "BIBREF18"
},
{
"start": 161,
"end": 192,
"text": "Sornlertlamvanich et al. (1994)",
"ref_id": "BIBREF21"
},
{
"start": 413,
"end": 433,
"text": "Bond and Paik (2000)",
"ref_id": "BIBREF2"
},
{
"start": 438,
"end": 458,
"text": "Paik and Bond (2001)",
"ref_id": "BIBREF20"
},
{
"start": 696,
"end": 718,
"text": "(Ikehara et al., 1997)",
"ref_id": "BIBREF14"
},
{
"start": 721,
"end": 738,
"text": "Mok et al. (2012)",
"ref_id": "BIBREF18"
},
{
"start": 793,
"end": 815,
"text": "(Isahara et al., 2008)",
"ref_id": "BIBREF15"
},
{
"start": 850,
"end": 870,
"text": "(Huang et al., 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "As is common in dictionary building, all the works mentioned made use of corpora to identify and extract CLs. Nevertheless, extracting noun-CL associations from corpora is not a straightforward task. Quantifier phrases are often used without a noun, resorting to anaphoric or deictic references to what is being quantified (Bond and Paik, 2000) . Similarly, synecdoches also generate noise when pattern matching (Mok et al., 2012) .",
"cite_spans": [
{
"start": 322,
"end": 343,
"text": "(Bond and Paik, 2000)",
"ref_id": "BIBREF2"
},
{
"start": 411,
"end": 429,
"text": "(Mok et al., 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our corpus joins data from three sources: the latest dump of the Chinese Wikipedia, the second version of Chinese Gigaword (Graff et al., 2005) and the UM-Corpus (Tian et al., 2014) . This data was cleaned, sentence-delimited and converted to simplified Chinese script. It was further preprocessed using the Stanford Segmenter and POS tagger (Chang et al., 2008; Tseng et al., 2005; Toutanova et al., 2003) . The final version of this corpus has over 30 million sentences (950 million words). For comparison, the largest reported corpus from previous studies contained 38,000 sentences (Mok et al., 2012) . In addition, we also used the latest version (2012) of the Google Ngram corpus for Chinese (Michel et al., 2011) .",
"cite_spans": [
{
"start": 123,
"end": 143,
"text": "(Graff et al., 2005)",
"ref_id": "BIBREF10"
},
{
"start": 152,
"end": 181,
"text": "UM-Corpus (Tian et al., 2014)",
"ref_id": null
},
{
"start": 342,
"end": 362,
"text": "(Chang et al., 2008;",
"ref_id": "BIBREF3"
},
{
"start": 363,
"end": 382,
"text": "Tseng et al., 2005;",
"ref_id": "BIBREF24"
},
{
"start": 383,
"end": 406,
"text": "Toutanova et al., 2003)",
"ref_id": "BIBREF23"
},
{
"start": 587,
"end": 605,
"text": "(Mok et al., 2012)",
"ref_id": "BIBREF18"
},
{
"start": 699,
"end": 720,
"text": "(Michel et al., 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "3"
},
{
"text": "There are some differences in classifier usage across the Chinese dialects and variants represented in these corpora, but our current goal is to collect generalizations. Future work could single out differences across dialects and variants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "3"
},
{
"text": "We used COW (Wang and Bond, 2013) as our lexical ontology, which shares the structure of the Princeton Wordnet (PWN) (Fellbaum, 1998) . To minimize coverage issues, we enriched it with data from the Bilingual Ontological Wordnet (BOW) (Huang et al., 2004) , the Southeast University Wordnet (SEW) (Xu et al., 2008) , and automatically collected data from Wiktionary and CLDR, made available by the Extended OMW (Bond and Foster, 2013) . The final version of this resource had information for over 261k nominal lemmas, of which over 184k were unambiguous (i.e. have only a single sense).",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "(Wang and Bond, 2013)",
"ref_id": "BIBREF25"
},
{
"start": 117,
"end": 133,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 235,
"end": 255,
"text": "(Huang et al., 2004)",
"ref_id": "BIBREF13"
},
{
"start": 297,
"end": 314,
"text": "(Xu et al., 2008)",
"ref_id": "BIBREF26"
},
{
"start": 411,
"end": 434,
"text": "(Bond and Foster, 2013)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "3"
},
{
"text": "We filtered all CLs against a list of 204 S-CLs provided by Huang et al. (1997) . Following Lai (2011), we treated both Huang's individual classifiers and event classifiers as S-CLs.",
"cite_spans": [
{
"start": 60,
"end": 79,
"text": "Huang et al. (1997)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resources",
"sec_num": "3"
},
{
"text": "Our algorithm produces two CL dictionaries with frequency information: a lemma based dictionary and a concept based dictionary, the latter using COW's extended ontology. We tested both dictionaries with a generation task, automatically validated against a held-out portion of the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Algorithm",
"sec_num": "4"
},
{
"text": "Extracting CL-noun pairs is done by matching POS patterns against the training section of our corpus. To keep noise in the extracted data to a minimum, we chose to take advantage of our large corpus and apply restrictive variations of the basic pattern: (determiner or numeral) + (CL) + (noun) + (end of sentence punctuation/select conjunctions). Our patterns ensure that no long dependencies exist after the CL, and try to maximally reduce the noise introduced by anaphoric, deictic or synecdochic uses of classifiers (Mok et al., 2012) . Variations of this pattern were also included to cover different segmentations produced by the preprocessing tools.",
"cite_spans": [
{
"start": 534,
"end": 552,
"text": "(Mok et al., 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Classifier-Noun Pairs",
"sec_num": "4.1"
},
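The pattern-matching step above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the POS tags, the miniature S-CL list, the end-token list, and the toy corpus are all assumptions.

```python
# Sketch of noun-CL pair extraction over a POS-tagged corpus (Section 4.1).
# Tag set, S-CL list, and corpus contents are illustrative assumptions.
from collections import Counter

SORTAL_CLS = {"个", "位", "根", "只", "座", "家"}   # stand-in for the 204 S-CLs
NUM_TAGS = {"CD", "DT"}                             # numeral / determiner tags
END_TOKENS = {"。", "！", "？", "，", "和"}          # end punctuation / select conjunctions

def extract_pairs(tagged_sentence):
    """Match (det/num) + (CL) + (noun) + (end token) in one tagged sentence."""
    pairs = []
    for i in range(len(tagged_sentence) - 3):
        (w1, t1), (w2, _), (w3, t3), (w4, _) = tagged_sentence[i:i + 4]
        if t1 in NUM_TAGS and w2 in SORTAL_CLS and t3 == "NN" and w4 in END_TOKENS:
            pairs.append((w3, w2))
    return pairs

def build_lemma_dict(corpus):
    """Frequency dictionary: noun lemma -> Counter of classifiers."""
    freq = {}
    for sent in corpus:
        for noun, cl in extract_pairs(sent):
            freq.setdefault(noun, Counter())[cl] += 1
    return freq

corpus = [
    [("一", "CD"), ("只", "M"), ("猫", "NN"), ("。", "PU")],
    [("三", "CD"), ("个", "M"), ("类别", "NN"), ("。", "PU")],
    [("一", "CD"), ("个", "M"), ("类别", "NN"), ("，", "PU")],
]
lemma_dict = build_lemma_dict(corpus)
```

The restrictive right context (sentence-final punctuation or a conjunction immediately after the noun) is what keeps anaphoric and deictic quantifier phrases out of the extracted pairs.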
{
"text": "If an extracted CL matches the list of S-CLs, we include this noun-CL pair in the lemma based dictionary. The frequency with which a specific noun-CL pair is seen in the corpus is also stored, showing the strength of the association.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Classifier-Noun Pairs",
"sec_num": "4.1"
},
{
"text": "Extracting noun-CL pairs from the Chinese Google NGram corpus required special treatment. We used the 4-gram version of this corpus to match a pattern (and variations) similar to the one mentioned above: (determiner or numeral) + (CL) + (X) + (end of sentence punctuation/select conjunctions). Since no POS information was available for the NGram corpus, we used regular expression matching, listing common determiners, numerals, punctuation, and our list of 204 S-CLs. We did not restrict the third gram. We also transferred the frequency information provided for matched ngrams to our lemma based dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Classifier-Noun Pairs",
"sec_num": "4.1"
},
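Without POS tags, the 4-gram matching reduces to pure regular expressions over listed numerals/determiners, S-CLs, and end tokens. A minimal sketch, in which the character lists and ngram counts are invented for illustration:

```python
# Regex-only sketch of the 4-gram pattern used on the Google Ngram data:
# (determiner or numeral) + (CL) + (X) + (end punctuation / conjunction).
# Character lists and ngram counts are illustrative assumptions.
import re
from collections import Counter

NUMS = "一二三四五六七八九十两几这那"   # common numerals / determiners
CLS = "个位根只座家间所"               # stand-in for the 204 S-CLs
ENDS = "。！？，和"                    # end punctuation / select conjunctions

PATTERN = re.compile(rf"^[{NUMS}] ([{CLS}]) (\S+) [{ENDS}]$")

def add_ngram(ngram, count, freq):
    """Transfer the ngram's frequency to the lemma-based dictionary."""
    m = PATTERN.match(ngram)
    if m:
        cl, noun = m.group(1), m.group(2)   # the third gram is unrestricted
        freq.setdefault(noun, Counter())[cl] += count

freq = {}
add_ngram("一 根 木头 。", 42, freq)
add_ngram("一 把 木头 。", 7, freq)   # 把 is not in the toy S-CL list: ignored
```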
{
"text": "Our training set included 80% of the text portion of the corpus, from which we extracted over 435k tokens of noun-CL associations, along with the full Chinese Google NGram corpus, from which we extracted 13.5 million tokens of noun-CL associations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Classifier-Noun Pairs",
"sec_num": "4.1"
},
{
"text": "This lemma based dictionary contained, for example, 59 noun-CL pair tokens for the lemma \u7c7b\u522b l\u00e8ibi\u00e9 \"category\": it occurred 58 times with the CL \u4e2a ge, and once with the CL \u9879 xi\u00e0ng. Despite the large difference in frequencies, both CLs can be used with this lemma. Another example, where the relevance of the frequency becomes evident, is the word \u517b\u9e21\u573a y\u01cengj\u012bch\u01ceng \"chicken farm\", which was seen in our corpus 12 times: 6 times with the CL \u4e2a ge, 3 times with the CL \u5bb6 ji\u0101, twice with the CL \u53ea zh\u01d0, and once with the CL \u5ea7 zu\u00f2. Chinese native speaker judgments identified three of the four extracted CLs as correct ( \u4e2a ge, \u5bb6 ji\u0101 and \u5ea7 zu\u00f2). In addition, two other classifiers would also be possible: \u95f4 ji\u0101n and \u6240 su\u01d2. This second example shows that while the automatic matching process is still somewhat noisy and incomplete, the frequency information can help to filter out ungrammatical examples. When generating a classifier, our lemma based dictionary can use the frequency information stored for each CL identified for a particular lemma and choose the most frequent one, making it more likely that a valid CL is produced. Also, by setting a minimum frequency threshold that noun-CL pairs would have to meet before being added to the dictionary, we can trade coverage for precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extracting Classifier-Noun Pairs",
"sec_num": "4.1"
},
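Generation from the lemma-based dictionary then amounts to picking the most frequent stored CL, with 个 ge as the fallback and an optional minimum-frequency threshold τ. A sketch: the 类别 counts come from the text, while the 木头 counts and the function name are assumptions.

```python
# Sketch: generate the most frequent classifier for a lemma, with the
# default CL 个 ge as fallback and a minimum-frequency threshold tau.
from collections import Counter

def generate_cl(lemma, lemma_dict, tau=1, default="个"):
    counts = {cl: n for cl, n in lemma_dict.get(lemma, {}).items() if n >= tau}
    if not counts:
        return default                      # back off to 个 ge
    return max(counts, key=counts.get)      # most frequent surviving CL

lemma_dict = {
    "类别": Counter({"个": 58, "项": 1}),   # counts reported in the text
    "木头": Counter({"根": 5, "个": 2}),    # invented for illustration
}
```

Raising `tau` prunes rare (often noisy) pairs, so more lemmas fall back to the default; this is the coverage/precision trade-off the text describes.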
{
"text": "The concept based dictionary is created by mapping and expanding the lemma based dictionary onto COW's expanded concept hierarchy. Since ambiguous lemmas can, in principle, use different CLs depending on their sense, we map only unambiguous lemmas (i.e. those that belong to a single concept). This way, each unambiguous entry from the lemma based dictionary matching COW contributes information to a single concept. Frequency information and possible CLs are collected for each matched sense. The resulting concept-based mapping is, for each concept, the union of the CLs of its unambiguous lemmas along with the sum of their frequencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept Based Dictionary",
"sec_num": "4.2"
},
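The construction of the concept-based dictionary can be sketched as below. The `senses` table is a toy stand-in for COW, but the aggregate of 132 for 个 ge and 2 for 项 xiàng on concept 05838765-n mirrors the example given in the text.

```python
# Sketch: only unambiguous lemmas (single synset) contribute, and each
# synset takes the union of its lemmas' CLs with summed frequencies.
from collections import Counter

senses = {                                  # lemma -> synset IDs (toy wordnet)
    "类别": ["05838765-n"],
    "范畴": ["05838765-n"],
    "木头": ["log-n", "blockhead-n"],       # ambiguous: skipped
}
lemma_dict = {
    "类别": Counter({"个": 58, "项": 1}),
    "范畴": Counter({"个": 74, "项": 1}),
    "木头": Counter({"根": 10}),
}

def build_concept_dict(lemma_dict, senses):
    concept = {}
    for lemma, cls in lemma_dict.items():
        syns = senses.get(lemma, [])
        if len(syns) != 1:                  # map only unambiguous lemmas
            continue
        concept.setdefault(syns[0], Counter()).update(cls)
    return concept

concept_dict = build_concept_dict(lemma_dict, senses)
```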
{
"text": "Following one of the examples above, the lemma \u7c7b\u522b l\u00e8ibi\u00e9 was unambiguously mapped to the concept ID 05838765-n, defined as \"a general concept that marks divisions or coordinations in a conceptual scheme\". This concept provides two other synonyms: \u8303\u7574 f\u00e0nch\u00f3u and \u79cd\u7c7b zh\u01d2ngl\u00e8i. In the concept based dictionary, the concept ID 05838765-n aggregates the information provided by all its unambiguous senses. This results in a frequency count of 132 for the CL \u4e2a ge, and of 2 for \u9879 xi\u00e0ng (both valid uses).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept Based Dictionary",
"sec_num": "4.2"
},
{
"text": "As has been shown in previous work, semantic ontologies should, in principle, be able to simulate the taxonomic feature hierarchy that links nouns and CLs. We use this to further expand the concept based dictionary of CLs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concept Based Dictionary",
"sec_num": "4.2"
},
{
"text": "For each concept that didn't receive a classifier, we collect information concerning ten levels of hypernymy and hyponymy around it. If any hypernym-hyponym pair is associated with the same CL, we assign this CL to the current concept. Since we're interested in the task of generating the best (or most common) CL, we rank CLs inside these expanded concepts by summing the frequencies of all hypernyms and hyponyms that share the same CL. If more than one CL can be assigned this way, we assign all of them. Figure 1 exemplifies this expansion. While concepts A, B and C did not get classifiers directly assigned to them, they are still assigned one or more classifiers based on their place in the concept hierarchy. For every concept that didn't receive any CL information, if it has at least one hypernym and one hyponym sharing a CL (within a distance of 10 jumps), it inherits this CL and the sum of their frequencies. Assuming the full concept hierarchy is represented in Figure 1 , concept A would inherit two classifiers, and concepts B and C would inherit one each.",
"cite_spans": [],
"ref_spans": [
{
"start": 500,
"end": 508,
"text": "Figure 1",
"ref_id": null
},
{
"start": 972,
"end": 980,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Concept Based Dictionary",
"sec_num": "4.2"
},
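The inheritance rule can be sketched as follows on a toy hierarchy: a concept with no CL information of its own inherits a classifier only if some hypernym and some hyponym (within ten steps) share it, with their frequencies summed. The hierarchy and counts below are assumptions, not the paper's data.

```python
# Sketch of the expansion step over a toy concept hierarchy.
from collections import Counter

hypernyms = {"A": ["top"], "B": ["A"], "C": ["A"]}
hyponyms = {"top": ["A"], "A": ["B", "C"]}
cl_info = {                                 # directly extracted CL counters
    "top": Counter({"个": 5}),
    "B": Counter({"只": 3, "个": 2}),
    "C": Counter({"个": 4}),
}

def chain(concept, rel, depth=10):
    """All concepts reachable via rel within `depth` steps."""
    out, frontier = [], [concept]
    for _ in range(depth):
        frontier = [n for c in frontier for n in rel.get(c, [])]
        out.extend(frontier)
    return out

def expand(concept):
    """Inherit any CL shared by a hypernym and a hyponym, summing frequencies."""
    if concept in cl_info:
        return cl_info[concept]
    ups, downs = chain(concept, hypernyms), chain(concept, hyponyms)
    inherited = Counter()
    for cl in {c for u in ups for c in cl_info.get(u, {})}:
        if any(cl in cl_info.get(d, {}) for d in downs):
            for node in ups + downs:        # sum over hypernyms and hyponyms
                inherited[cl] += cl_info.get(node, {}).get(cl, 0)
    return inherited
```

Here 只 is not inherited by the middle concept because it is attested only below it, never above: this is the "stricter approach" that avoids blindly propagating CLs down the hierarchy.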
{
"text": "This expansion provides extra coverage to the concept based dictionary. But we differ from previous works in the sense that we do not blindly assign CLs down the concept hierarchy, making inheritance depend on previously extracted information for both hypernyms and hyponyms (Figure 1 : Classifier Expansion). By following a stricter approach, we hope to provide results of better quality.",
"cite_spans": [],
"ref_spans": [
{
"start": 282,
"end": 290,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Concept Based Dictionary",
"sec_num": "4.2"
},
{
"text": "We evaluated both the lemma and the concept based dictionaries with two tasks: predicting the validity of CLs, and generating CLs. We used roughly 10% of the corpus as held-out data (dev-set), from which we extracted about 37.4k tokens of noun-CL pairs, as described in 4.1. We used this data to evaluate the prediction and generation capabilities of both dictionaries in the following ways: predicting the validity of a CL was measured by comparing every noun-CL pair extracted from the dev-set to the data contained in the dictionary for that particular lemma (i.e. whether that particular classifier was already predicted by the dictionary); generation was measured by selecting the most likely classifier, based on the cumulative frequencies of noun-CL pairs in the dictionary (i.e. whether the classifier seen in the example matched the most frequent classifier). This was done separately for both dictionaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.3"
},
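The two automatic measures can be sketched as follows; the dictionary contents and dev pairs are invented, and the back-off behaviour (default 个 ge applied in generation only) is our assumption of how the fallback interacts with each task.

```python
# Sketch: "prediction" checks membership of the observed CL among the
# stored CLs; "generation" checks it against the single most frequent CL.
from collections import Counter

lemma_dict = {"猫": Counter({"只": 9, "个": 1})}   # toy dictionary

def predicted(lemma, cl):
    """Validity prediction: is cl already listed for this lemma?"""
    return cl in lemma_dict.get(lemma, {})

def generated(lemma, cl, default="个"):
    """Generation: does cl match the most frequent stored CL (or the default)?"""
    counts = lemma_dict.get(lemma)
    best = counts.most_common(1)[0][0] if counts else default
    return best == cl

dev = [("猫", "只"), ("猫", "个"), ("狗", "个")]   # toy dev-set pairs
pred_acc = sum(predicted(l, c) for l, c in dev) / len(dev)
gen_acc = sum(generated(l, c) for l, c in dev) / len(dev)
```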
{
"text": "When no other classifier had been assigned, we used \u4e2a ge, the most frequent CL in the corpus, as the default classifier. A baseline was established by assigning \u4e2a ge as the only CL for every entry.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.3"
},
{
"text": "The dev-set was used to experiment with different thresholds (\u03c4 ) of the minimum frequency, from one to five, with which noun-CL pairs would have to be seen in the train-set in order to be included in the dictionaries. These different minimum frequency thresholds were compared. The best performing \u03c4 was then tested on a second held-out set of data (test-set), also roughly 10% of the size of the text corpus, containing roughly 39.9k tokens of noun-CL pairs. The test-set was used to report our final results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.3"
},
{
"text": "The results are presented in Table 1 , and are discussed in the following section.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.3"
},
{
"text": "In Table 1 we can see that the baseline of consistently assigning \u4e2a ge to every entry in the dictionary is fairly high, at roughly 40%.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "To allow a fair comparison, since we decided that the concept based dictionary would contain only unambiguous lemmas, we use only unambiguous lemmas to compare performance across dictionaries. All results can be compared across the different thresholds discussed in 4.3. \u03c4 = 1, 3 and 5 present the results obtained in the automatic evaluation, using minimum frequencies of one, three and five, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "The first three rows report exclusively on the lemma dictionary (including both ambiguous and unambiguous lemmas). lem-all reports the results of the prediction task, lem-all-mfcl reports the results of the generation task, and lem-all-no-info reports the relative frequency of lemmas for which there was no previous information in the dictionary, and for which both tasks' performance could have been boosted by falling back on the default CL \u4e2a ge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "These initial results show that it was easy to perform better than baseline, and that \u03c4 = 1 achieved the best results on both predicting noun-CL pairs, and generating CLs that matched the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "Comparing different \u03c4 s shows that, even considering the over-generation reduction that imposing minimum frequencies brings (validated but not presented here), the best generation performance is achieved by not filtering the training data. And this will be consistent across the remainder of the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "When comparing the two dictionaries, we look only at unambiguous lemmas. Similar to what was explained above, lem-unamb and wn-unamb report the results of the prediction task for the lemma based and concept based dictionaries, respectively. The labels lem-unamb-mfcl and wn-unamb-mfcl report the results for the generation task. And lem-unamb-no-info and wn-unamb-no-info report the lack of informed coverage (where backing off to the default CL might have helped performance).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "Between the lemma and the concept based dictionaries, this automatic evaluation shows that while the concept based dictionary is better at predicting if a noun-CL pair was valid, the lemma based dictionary outperforms the former in the generation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "The final results of this automatic evaluation are shown in column Test, where we re-evaluated the dictionary produced by \u03c4 = 1 on the test-set. Test shows slightly better results, perhaps because the random sample was easier than the dev-set, but the same tendencies as reported above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "Considering that the concept based dictionary should be able to provide CL information to some lemmas that have not been seen in the training data (either by expansion or by leveraging on a single lemma to provide information about synonyms), we expected the concept based dictionary to present the best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "Many different reasons could be influencing these results, such as errors in the ontology, the fact that Chinese CLs relate better to specific senses than to concepts (i.e. different lemmas inside a concept prefer different CLs), or noise introduced by the test and dev-sets (since we do not have a hand-curated gold test-set). For this reason, we decided to hand-validate a sample of each dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "Based on a random sample of 100 concepts and 100 lemmas extracted from each dictionary, a Chinese native speaker checked whether the top ranked CL (i.e. the one with the highest frequency), which would be used to generate a CL for each of the randomly selected entries, was in fact a valid CL for that lemma or concept. This human validation showed the concept based dictionary outperforming the lemma based dictionary by a wide margin: 87% versus 76% valid guesses. This inversion of performance, when compared to the automatic evaluation, was confirmed to be mainly due to noisy data in the test-set caused by the automatic segmentation and POS tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "We then looked at a bigger sample of 200 lemmas and found roughly 7.5% of invalid lemmas in the lemma based dictionary. Conversely, the concept based dictionary assigns CLs by 'bags of lemmas' (i.e. synsets). This allows the noise introduced by a few senses to be attenuated by the 'bag' nature of the concept. More importantly, most of the nominal lemmas included in the extended version of COW are human validated, so the quality of the concept based dictionary was confirmed to be better -since most lemmas included in it are attested to be valid.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "Comparing the size of both dictionaries in Table 1, even though the \u03c4 1 lemma based dictionary is considerably larger (32.4k compared to 22.5k entries of the concept based dictionary), we have shown that noise is a problem for the lemma based approach. Also, since the extended COW has, on average, 2.25 senses per concept, the concept based dictionary provides CL information for over 50.6k lemmas. When comparing the size of both dictionaries across \u03c4 s, we can also effectively verify the potential of the expansion step possible only for the concept based dictionary. As \u03c4 increases, the size of the concept based dictionary increases relatively to the lemma based. When applied to other tasks, where noise reduction would play a more important role (which can be done by raising \u03c4 ), the concept based dictionary is able to produce more informed decisions with less data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "Lastly, coverage was also tested against data from a human curated database of noun-CL associations (Gao, 2014) , by replicating the automatic evaluation generation task described in 4.3. This dictionary contains information about more than 800 CLs and provides a few hand-selected examples for each CL -and hence it is not designed with the same mindset. Testing the best performing dictionaries (\u03c4 1) against the data provided for S-CLs, we achieved only 43.9% and 28.3% for prediction and generation, respectively, using the lemma based dictionary; compared to 49.8% and 22.4% using the concept based dictionary.",
"cite_spans": [
{
"start": 100,
"end": 111,
"text": "(Gao, 2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "The same trends in prediction and generation are observed, where the concept based dictionary is able to predict better than the lemma base, but it is outperformed by the later in the generation task. Ultimately, these weak results show that even though we used a very large quantity of data, our restrictive matching patterns in conjunction with infrequent noun-CLs pairs still leaves a long tail of difficult predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Results",
"sec_num": "5"
},
{
"text": "Since our method is mostly language independent, we would like to replicate it with other classifier languages for which there are open linked WN resources (such as Japanese, Indonesian and Thai). This would require access to large amounts of text segmented, POS tagged text, and adapting the matching expressions for extracting noun-CL pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ongoing and Future Work",
"sec_num": "6"
},
{
"text": "More training data would not only help improving overall performance on open data, by minimizing unseen data, but would also allow us to make better use of frequency threshold filters for noise reduction. Lack of training data as our biggest drawback on performance, we would like to repeat this experiment with more data -including, for example, a very large web-crawled corpus in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ongoing and Future Work",
"sec_num": "6"
},
{
"text": "In addition, we would also like to perform WSD on the training set, using UKB (Agirre and Soroa, 2009) for example. This would allow an informed mapping of ambiguous senses onto the semantic ontology and, arguably, comparable performance on generating CLs for ambiguous lemmas. We will also investigate further how to deal with words not in COW: first looking them up in the lemma dictionary, and then associating CLs to the head (character / noun) of unseen noun-phrases, as proposed in Bond and Paik (2000) .",
"cite_spans": [
{
"start": 78,
"end": 102,
"text": "(Agirre and Soroa, 2009)",
"ref_id": "BIBREF0"
},
{
"start": 488,
"end": 508,
"text": "Bond and Paik (2000)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ongoing and Future Work",
"sec_num": "6"
},
{
"text": "Even though this work was mainly focused on producing an external resource linked to COW, we are also working on adding a new set of sortal classifiers concepts to COW (Morgado da Costa and Bond, 2016) . The absence of this class of words in COW currently prevents us from using the internal ontology structure to link nouns and classifiers. Once they are represented, we will make use of this work to link nominal concepts and corresponding valid classifiers.",
"cite_spans": [
{
"start": 190,
"end": 201,
"text": "Bond, 2016)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ongoing and Future Work",
"sec_num": "6"
},
{
"text": "Our work shows that it is possible to create a high quality dictionary of noun-CLs, with generation capabilities, by extracting frequency information from large corpora. We compared both a lemma based approach and a concept based approach, and our best results report a human validated performance of 87% on generation of classifiers using a concept based dictionary. This is roughly a 9% improvement against the only other known work done on Chinese CL generation using wordnet (Mok et al., 2012) .",
"cite_spans": [
{
"start": 479,
"end": 497,
"text": "(Mok et al., 2012)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Finally, we will merge all three data sets and, from them, produce a release of this data. We commit to make both lemma and WN mappings available under an open license, release along with the Chinese Open Wordnet at http:// compling.hss.ntu.edu.sg/cow/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "This research was supported in part by the MOE Tier 2 grant That's what you meant: a Rich Representation for Manipulation of Meaning (MOE ARC41/13).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Personalizing pagerank for word sense disambiguation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "33--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre and Aitor Soroa. 2009. Personaliz- ing pagerank for word sense disambiguation. In Proceedings of the 12th Conference of the Euro- pean Chapter of the Association for Computational Linguistics, pages 33-41. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Linking and extending an open multilingual wordnet",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1352--1362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francis Bond and Ryan Foster. 2013. Linking and ex- tending an open multilingual wordnet. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1352-1362, Sofia, Bulgaria. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Reusing an ontology to generate numeral classifiers",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Kyonghee",
"middle": [],
"last": "Paik",
"suffix": ""
}
],
"year": 2000,
"venue": "The 18th International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "90--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francis Bond and Kyonghee Paik. 2000. Reusing an ontology to generate numeral classifiers. In COL- ING 2000 Volume 1: The 18th International Confer- ence on Computational Linguistics, pages 90-96.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Optimizing Chinese word segmentation for machine translation performance",
"authors": [
{
"first": "Pi-Chuan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Third Workshop on Statistical Machine Translation, StatMT '08",
"volume": "",
"issue": "",
"pages": "224--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing Chinese word seg- mentation for machine translation performance. In Proceedings of the Third Workshop on Statistical Machine Translation, StatMT '08, pages 224-232, Stroudsburg, PA, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Grammar of Spoken Chinese",
"authors": [
{
"first": "Y",
"middle": [
"R"
],
"last": "Chao",
"suffix": ""
}
],
"year": 1965,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y.R. Chao. 1965. A Grammar of Spoken Chinese. University of California Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Yi-wan tang, yi-ge tang: Classifiers and massifiers",
"authors": [],
"year": 1998,
"venue": "Tsing Hua journal of Chinese studies",
"volume": "28",
"issue": "3",
"pages": "385--412",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lisa Lai-Shen Cheng and Rint Sybesma. 1998. Yi-wan tang, yi-ge tang: Classifiers and massifiers. Tsing Hua journal of Chinese studies, 28(3):385-412.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Classifiers are for specification: Complementary functions for sortal and general classifiers in Cantonese and Mandarin. Cahiers de linguistique-Asie orientale",
"authors": [
{
"first": "S",
"middle": [],
"last": "Mary",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Erbaugh",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "31",
"issue": "",
"pages": "33--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mary S Erbaugh. 2002. Classifiers are for specifica- tion: Complementary functions for sortal and gen- eral classifiers in Cantonese and Mandarin. Cahiers de linguistique-Asie orientale, 31(1):33-69.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "WordNet: An Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christine Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. MIT Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Computational lexicography: A feature-based approach in designing an e-dictionary of Chinese classifiers",
"authors": [
{
"first": "Helena",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2nd Workshop on Cognitive Aspects of the Lexicon",
"volume": "",
"issue": "",
"pages": "56--65",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helena Gao. 2010. Computational lexicography: A feature-based approach in designing an e-dictionary of Chinese classifiers. In Proceedings of the 2nd Workshop on Cognitive Aspects of the Lexicon, pages 56-65. Coling 2010.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Database design of an online elearning tool of Chinese classifiers",
"authors": [
{
"first": "Helena",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 4th Workshop on Cognitive Aspects of the Lexicon (CogALex)",
"volume": "",
"issue": "",
"pages": "126--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helena Gao. 2014. Database design of an online e- learning tool of Chinese classifiers. In Proceedings of the 4th Workshop on Cognitive Aspects of the Lex- icon (CogALex), pages 126-137.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Chinese Gigaword Second Edition LDC2005T14. Web Download. Linguistic Data Consortium",
"authors": [
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Graff, Ke Chen, Junbo Kong, and Kazuaki Maeda. 2005. Chinese Gigaword Second Edi- tion LDC2005T14. Web Download. Linguistic Data Consortium.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mandarin Daily Dictionary of Chinese Classifiers",
"authors": [
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chu-Ren Huang, Keh-Jiann Chen, and Ching-Hsiung Lai, editors. 1997. Mandarin Daily Dictionary of Chinese Classifiers. Mandarin Daily Press, Taipei.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Noun class extraction from a corpus-based collocation dictionary: An integration of computational and qualitative approaches. Quantitative and Computational Studies of Chinese Linguistics",
"authors": [
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Keh-Jiann",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhao-Ming",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "339--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chu-Ren Huang, Keh-jiann Chen, and Zhao-ming Gao. 1998. Noun class extraction from a corpus-based collocation dictionary: An integration of computa- tional and qualitative approaches. Quantitative and Computational Studies of Chinese Linguistics, pages 339-352.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Sinica BOW (Bilingual Ontological Wordnet): Integration of bilingual wordnet and sumo",
"authors": [
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Ru-Yng",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Hshiang-Pin",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)",
"volume": "",
"issue": "",
"pages": "825--826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chu-Ren Huang, Ru-Yng Chang, and Hshiang-Pin Lee. 2004. Sinica BOW (Bilingual Ontologi- cal Wordnet): Integration of bilingual wordnet and sumo. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04), pages 825-826. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Goi-Taikei -A Japanese Lexicon. Iwanami Shoten, Tokyo. 5 volumes/CDROM",
"authors": [
{
"first": "Satoru",
"middle": [],
"last": "Ikehara",
"suffix": ""
},
{
"first": "Masahiro",
"middle": [],
"last": "Miyazaki",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Shirai",
"suffix": ""
},
{
"first": "Akio",
"middle": [],
"last": "Yokoo",
"suffix": ""
},
{
"first": "Hiromi",
"middle": [],
"last": "Nakaiwa",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Ogura",
"suffix": ""
},
{
"first": "Yoshifumi",
"middle": [],
"last": "Ooyama",
"suffix": ""
},
{
"first": "Yoshihiko",
"middle": [],
"last": "Hayashi",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satoru Ikehara, Masahiro Miyazaki, Satoshi Shirai, Akio Yokoo, Hiromi Nakaiwa, Kentaro Ogura, Yoshifumi Ooyama, and Yoshihiko Hayashi. 1997. Goi-Taikei -A Japanese Lexicon. Iwanami Shoten, Tokyo. 5 volumes/CDROM.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Development of the Japanese WordNet",
"authors": [
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Kyoko",
"middle": [],
"last": "Kanzaki",
"suffix": ""
}
],
"year": 2008,
"venue": "Sixth International conference on Language Resources and Evaluation (LREC 2008)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hitoshi Isahara, Francis Bond, Kiyotaka Uchimoto, Masao Utiyama, and Kyoko Kanzaki. 2008. De- velopment of the Japanese WordNet. In Sixth In- ternational conference on Language Resources and Evaluation (LREC 2008), Marrakech.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Identifying True Classifiers in Mandarin Chinese",
"authors": [
{
"first": "Wan-Chun",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wan-chun Lai. 2011. Identifying True Classifiers in Mandarin Chinese. Master's thesis, National Chengchi University, Taiwan.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Quantitative analysis of culture using millions of digitized books",
"authors": [
{
"first": "Jean-Baptiste",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Yuan Kui",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Aviva",
"middle": [],
"last": "Presser Aiden",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Veres",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"K"
],
"last": "Gray",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"P"
],
"last": "Pickett",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Hoiberg",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Clancy",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Norvig",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Orwant",
"suffix": ""
}
],
"year": 2011,
"venue": "Science",
"volume": "331",
"issue": "6014",
"pages": "176--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K Gray, Joseph P Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, et al. 2011. Quantitative analysis of culture using millions of digitized books. Science 14 January 2011, 331(6014):176-182.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Generating numeral classifiers in Chinese and Japanese",
"authors": [
{
"first": "Hazel",
"middle": [],
"last": "Mok",
"suffix": ""
},
{
"first": "Eshley",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 6th Global Word-Net Conference (GWC 2012)",
"volume": "",
"issue": "",
"pages": "211--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hazel Mok, Eshley Gao, and Francis Bond. 2012. Generating numeral classifiers in Chinese and Japanese. In Proceedings of the 6th Global Word- Net Conference (GWC 2012), Matsue. 211-218.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Wow! What a useful extension to wordnet!",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Morgado Da Costa",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Morgado da Costa and Francis Bond. 2016. Wow! What a useful extension to wordnet! In Pro- ceedings of the International Conference on Lan- guage Resources and Evaluation, Slovenia.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Multilingual generation of numeral classifiers using a common ontology",
"authors": [
{
"first": "Kyonghee",
"middle": [],
"last": "Paik",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
}
],
"year": 2001,
"venue": "19th International Conference on Computer Processing of Oriental Languages: ICCPOL-2001",
"volume": "",
"issue": "",
"pages": "141--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyonghee Paik and Francis Bond. 2001. Multilingual generation of numeral classifiers using a common ontology. In 19th International Conference on Com- puter Processing of Oriental Languages: ICCPOL- 2001, Seoul. 141-147.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Classifier assignment by corpus-based approach",
"authors": [
{
"first": "Virach",
"middle": [],
"last": "Sornlertlamvanich",
"suffix": ""
},
{
"first": "Wantanee",
"middle": [],
"last": "Pantachat",
"suffix": ""
},
{
"first": "Surapant",
"middle": [],
"last": "Meknavin",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 15th conference on Computational linguistics",
"volume": "1",
"issue": "",
"pages": "556--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Virach Sornlertlamvanich, Wantanee Pantachat, and Surapant Meknavin. 1994. Classifier assignment by corpus-based approach. In Proceedings of the 15th conference on Computational linguistics-Volume 1, pages 556-561. ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "UM-Corpus: A large English-Chinese parallel corpus for statistical machine translation",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"F"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Lidia",
"middle": [
"S"
],
"last": "Chao",
"suffix": ""
},
{
"first": "Paulo",
"middle": [],
"last": "Quaresma",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Oliveira",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Yi",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Tian, Derek F. Wong, Lidia S. Chao, Paulo Quaresma, Francisco Oliveira, and Lu Yi. 2014. UM-Corpus: A large English-Chinese parallel cor- pus for statistical machine translation. In Proceed- ings of the Ninth International Conference on Lan- guage Resources and Evaluation.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Feature-rich part-ofspeech tagging with a cyclic dependency network",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the NAACL HLT",
"volume": "",
"issue": "",
"pages": "173--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Man- ning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. In Proceedings of the NAACL HLT 2003 2003 -Vol- ume 1, NAACL '03, pages 173-180, Stroudsburg, PA, USA. ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A conditional random field word segmenter for sighan bakeoff",
"authors": [
{
"first": "Huihsin",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "Pichuan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Galen",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "168--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A condi- tional random field word segmenter for sighan bake- off 2005. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 168-171.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Building the Chinese Open Wordnet (COW): Starting from core synsets",
"authors": [
{
"first": "Shan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 11th Workshop on Asian Language Resources, a Workshop at IJCNLP-2013",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shan Wang and Francis Bond. 2013. Building the Chinese Open Wordnet (COW): Starting from core synsets. In Proceedings of the 11th Workshop on Asian Language Resources, a Workshop at IJCNLP- 2013, pages 10-18, Nagoya.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "An integrated approach for automatic construction of bilingual Chinese-English wordnet",
"authors": [
{
"first": "Renjie",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhiqiang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Yingji",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Yuzhong",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Zhisheng",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2008,
"venue": "The Semantic Web",
"volume": "5367",
"issue": "",
"pages": "302--314",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Renjie Xu, Zhiqiang Gao, Yingji Pan, Yuzhong Qu, and Zhisheng Huang. 2008. An integrated approach for automatic construction of bilingual Chinese-English wordnet. In John Domingue and Chutiporn Anu- tariya, editors, The Semantic Web, volume 5367 of Lecture Notes in Computer Science, pages 302-314. Springer Berlin Heidelberg.",
"links": null
}
},
"ref_entries": {}
}
}