{
"paper_id": "N15-1030",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:35:18.431400Z"
},
"title": "Empty Category Detection With Joint Context-Label Embeddings",
"authors": [
{
"first": "Xun",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTT Communication Science Laboratories",
"location": {
"postCode": "619-0237",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "wang.xun@lab.ntt.co.jp"
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTT Communication Science Laboratories",
"location": {
"postCode": "619-0237",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "sudoh.katsuhito@lab.ntt.co.jp"
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NTT Communication Science Laboratories",
"location": {
"postCode": "619-0237",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "nagata.masaaki@lab.ntt.co.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a novel technique for empty category (EC) detection using distributed word representations. A joint model is learned from the labeled data to map both the distributed representations of the contexts of ECs and EC types to a low dimensional space. In the testing phase, the context of possible EC positions will be projected into the same space for empty category detection. Experiments on Chinese Treebank prove the effectiveness of the proposed method. We improve the precision by about 6 points on a subset of Chinese Treebank, which is a new state-ofthe-art performance on CTB.",
"pdf_parse": {
"paper_id": "N15-1030",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a novel technique for empty category (EC) detection using distributed word representations. A joint model is learned from the labeled data to map both the distributed representations of the contexts of ECs and EC types to a low dimensional space. In the testing phase, the context of possible EC positions will be projected into the same space for empty category detection. Experiments on Chinese Treebank prove the effectiveness of the proposed method. We improve the precision by about 6 points on a subset of Chinese Treebank, which is a new state-ofthe-art performance on CTB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The empty category (EC) is an important concept in linguistic theories. It is used to describe nominal words that do not have explicit phonological forms (they are also called \"covert nouns\"). This kind of grammatical phenomenons is usually caused by the omission or dislocation of nouns or pronouns. Empty categories are the \"hidden\" parts of text and are essential for syntactic parsing (Gabbard et al., 2006; Yang and Xue, 2010) . As a basic problem in NLP, the resolution of ECs also has a huge impact on lots of downstream tasks, such as co-reference resolution (Ponzetto and Strube, 2006; Kong and Ng, 2013) , long distance dependency relation analysis (Marcus et al., 1993; Xue et al., 2005) . Research also uncovers the important role of ECs in machine translation. Some recent work (Chung and Gildea, 2010; Xiang et al., 2013) demonstrates the improvements they manage to obtain through EC detection in Chinese-English translation.",
"cite_spans": [
{
"start": 389,
"end": 411,
"text": "(Gabbard et al., 2006;",
"ref_id": "BIBREF5"
},
{
"start": 412,
"end": 431,
"text": "Yang and Xue, 2010)",
"ref_id": "BIBREF25"
},
{
"start": 567,
"end": 594,
"text": "(Ponzetto and Strube, 2006;",
"ref_id": "BIBREF17"
},
{
"start": 595,
"end": 613,
"text": "Kong and Ng, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 659,
"end": 680,
"text": "(Marcus et al., 1993;",
"ref_id": "BIBREF15"
},
{
"start": 681,
"end": 698,
"text": "Xue et al., 2005)",
"ref_id": "BIBREF23"
},
{
"start": 791,
"end": 815,
"text": "(Chung and Gildea, 2010;",
"ref_id": "BIBREF2"
},
{
"start": 816,
"end": 835,
"text": "Xiang et al., 2013)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To resolve ECs, we need to decide 1) the position and type of the EC and 2) the content of the EC (to which element the EC is linked to if plausible). Existing research mainly focuses on the first problem which is referred to as EC detection (Cai et al., 2011; Yang and Xue, 2010) , and so is this paper. As ECs are words or phrases inferable from their context, previous work mainly designs features mining the contexts of ECs and then trains classification models or parsers using these features (Xue and Yang, 2013; Johnson, 2002; Gabbard et al., 2006; Kong and Zhou, 2010) . One problem with these human-developed features are that they are not fully capable of representing the semantics and syntax of contexts. Besides, the feature engineering is also time consuming and labor intensive.",
"cite_spans": [
{
"start": 242,
"end": 260,
"text": "(Cai et al., 2011;",
"ref_id": "BIBREF1"
},
{
"start": 261,
"end": 280,
"text": "Yang and Xue, 2010)",
"ref_id": "BIBREF25"
},
{
"start": 498,
"end": 518,
"text": "(Xue and Yang, 2013;",
"ref_id": "BIBREF22"
},
{
"start": 519,
"end": 533,
"text": "Johnson, 2002;",
"ref_id": "BIBREF8"
},
{
"start": 534,
"end": 555,
"text": "Gabbard et al., 2006;",
"ref_id": "BIBREF5"
},
{
"start": 556,
"end": 576,
"text": "Kong and Zhou, 2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently neural network models have proven their superiority in capturing features using low dense vector compared with traditional manually designed features in dozens of NLP tasks (Bengio et al., 2006; Collobert and Weston, 2008; Socher et al., 2010; Collobert et al., 2011; .",
"cite_spans": [
{
"start": 182,
"end": 203,
"text": "(Bengio et al., 2006;",
"ref_id": "BIBREF0"
},
{
"start": 204,
"end": 231,
"text": "Collobert and Weston, 2008;",
"ref_id": "BIBREF3"
},
{
"start": 232,
"end": 252,
"text": "Socher et al., 2010;",
"ref_id": "BIBREF18"
},
{
"start": 253,
"end": 276,
"text": "Collobert et al., 2011;",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper demonstrates the advantages of distributed representations and neural networks in predicting the locations and types of ECs. We formulate the EC detection as an annotation task, to assign predefined labels (EC types) to given contexts. Recently, proposed a system taking advantages of the hidden representations of neural networks for image annotation which is to annotate images with a set of textual words. Following the work, we design a novel method for EC detection. We represent possible EC positions using the word embeddings of their contexts and then map them to a low dimension space for EC detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experiments on Chinese Treebank show that the proposed model obtains significant improvements over the previous state-of-the-art methods based on strict evaluation metrics. We also identify the dependency relations between ECs and their heads, which is not reported in previous work. The dependency relations can help us with the resolution of ECs and benefit other tasks, such as full parsing and machine translation in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We represent each EC as a vector by concatenating the word embeddings of its contexts. As is shown in Fig. 1 , we learn a map M AP A from the annotated data, to project the ECs' feature vectors to a low dimension space K. Meanwhile, we also obtain the distributed representations of EC types in the same low dimension space K. In the testing phase, for each possible EC position, we use M AP A to project its context feature to the same space and further compare it with the representations of EC types for EC detection. Distributed representations are good at capturing the semantics and syntax of contexts. For example, with word embeddings we are able to tell that \"\u5403/eat\" and \"\u559d/drink\" have a closer relationship than \"\u5403/eat\" and \"\u8d70/walk\" or \"\u559d/drink\" and \"\u8d70/walk\". Thus the knowledge we learn from: \"EC(\u4f60/You)-\u5403/have-EC(\u665a \u996d/supper)-\u4e86/past tense marker-\u4e48/question marker\" could help us to detect ECs in sentences such as \"EC(\u4f60/You)-\u996e\u6599/beverage-\u559d/drink-\u4e86/past tense marker-\u4e48/question marker\", which are similar, though different from the original sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 108,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "Below is a list of EC types contained in the Chinese Treebank, which are also the types of EC we are to identity in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "\u2022 pro: small pro, refer to dropped pronouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "\u2022 PRO: big PRO, refer to shared elements in control structures or elements that have generic references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "\u2022 OP: null operator, refer to empty relative pronouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "\u2022 T: trace left by A'-movement, e.g., topicalization, relativization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "\u2022 RNR: used in right nodes rising.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "\u2022 *: trace left by passivization, raising.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "\u2022 Others: other ECs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "According to the reason that one EC is caused, we are able to assign it one of the above categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "We can formulate EC detection as a combination of a two-class classification problem (is there an EC or not) and a seven-class classification problem (what type the EC is if there is one) following the two-pass method. For onepass method, EC detection can be formulated as an eight-class (seven EC types listed above plus a dummy \"No\" type) classification problem. Previous research shows there is no significant differences between their performances (Xue and Yang, 2013) . Here we adopt the onepass method for simplicity.",
"cite_spans": [
{
"start": 452,
"end": 472,
"text": "(Xue and Yang, 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Method",
"sec_num": "2"
},
{
"text": "The proposed system consists of two maps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
{
"text": "M AP A is from the feature vector of an EC position to a low dimensional space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M AP A : R n \u2192 R k , k \u226a n f A (X) \u2192 W A X",
"eq_num": "(1)"
}
],
"section": "System Overview",
"sec_num": "2.1"
},
{
"text": "M AP A is a linear transformation, and W A is a k * n matrix. The other one is from labels to the same low dimensional space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
{
"text": "M AP B : {Label 1 , Label 2 , ...} \u2208 R \u2192 R k f B (Label i ) \u2192 W i B (2) M AP B is also a linear transformation. W i B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
{
"text": "is a k dimensional vector and it is also the distributed representation of Label i in the low dimensional space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
{
"text": "The two maps are learned from the training data simultaneously. In the testing phase, for any possible EC position to be classified, we extract the corresponding feature vector X, and then map it to the low dimensional space using f A (X) = W A X. Then we have g i (X) for each Label i as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
{
"text": "g i (X) = (f A (X)) T W i B (3) For each possible label Label i , g i (X)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
{
"text": "is the score that the example having a Label i and the label predicted for the example is the i that maximizes g i (X).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
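{
"text": "As a concrete illustration, here is a minimal sketch in Python/NumPy of how the two learned maps score a candidate position (the variable names and dimensions are our own illustrative assumptions, not the authors' implementation):\n\nimport numpy as np\n\n# Illustrative dimensions: n-dimensional context features, k-dimensional hidden space,\n# 8 labels (the seven EC types plus the dummy 'No' type).\nn, k, n_labels = 400, 50, 8\nrng = np.random.default_rng(0)\nW_A = rng.normal(scale=0.01, size=(k, n))         # MAP_A: R^n -> R^k, Eq. (1)\nW_B = rng.normal(scale=0.01, size=(n_labels, k))  # MAP_B: row i is the embedding of Label_i, Eq. (2)\n\ndef predict(X):\n    # Project the context feature vector into the hidden space and score every label.\n    f_A = W_A @ X                  # f_A(X) = W_A X\n    scores = W_B @ f_A             # g_i(X) = f_A(X)^T W_B^i for all i, Eq. (3)\n    return int(np.argmax(scores)), scores\n\nlabel, scores = predict(rng.normal(size=n))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},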
{
"text": "Following the method of Weston et al. 2011, we try to minimize a weighted pairwise loss, learned using stochastic gradient descent:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
{
"text": "\u2211 X \u2211 i\u0338 =c L(rank c (X)) max(0, (g i (X) \u2212 g c (X))) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
{
"text": "Here c is the correct label for example X, and rank c (X) is the rank of Label c among all possible labels for X. L is a function which reflects our attitude towards errors. A constant function L = C implies we aim to optimize the full ranking list. Here we adopt L(\u03b1) = \u2211 \u03b1 i=1 1/i, which aims to optimize the top 1 in the ranking list, as stated in (Usunier et al., 2009) . The learning rate and some other parameters of the stochastic gradient descent algorithm are to be optimized using the development set.",
"cite_spans": [
{
"start": 351,
"end": 373,
"text": "(Usunier et al., 2009)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
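{
"text": "For clarity, a minimal sketch of one stochastic gradient step on this weighted pairwise loss (our own illustrative code under the WARP-style weighting L(\u03b1) = \u2211_{i=1}^{\u03b1} 1/i of Usunier et al. (2009); it is not the authors' released implementation):\n\nimport numpy as np\n\ndef L(alpha):\n    # Rank weighting emphasizing the top of the list: L(a) = sum_{i=1}^{a} 1/i.\n    return sum(1.0 / i for i in range(1, alpha + 1))\n\ndef sgd_step(W_A, W_B, X, c, lr=0.1):\n    # One update for example X with correct label c on the loss of Eq. (4).\n    f_A = W_A @ X\n    scores = W_B @ f_A\n    rank_c = int(np.sum(scores > scores[c]))   # rank of the correct label\n    if rank_c == 0:\n        return W_A, W_B                        # no violating label, loss is zero\n    w = L(rank_c)\n    for i in np.flatnonzero(scores > scores[c]):\n        diff = W_B[i] - W_B[c]                 # gradient of g_i - g_c w.r.t. f_A\n        W_A -= lr * w * np.outer(diff, X)      # chain rule through f_A(X) = W_A X\n        W_B[i] -= lr * w * f_A\n        W_B[c] += lr * w * f_A\n    return W_A, W_B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},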
{
"text": "An alternative method is to train a neural network model for multi-class classification directly. It is plausible when the number of classes is not large. One of the advantages of representing ECs and labels in a hidden space is that EC detection usually serves as an intermediate task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
{
"text": "Usually we want to know more about the ECs such as their roles and explicit content. Representing labels and ECs as dense vectors will greatly benefit other work such as EC resolution or full parsing. Besides, such a joint embedding framework can scale up to the large set of labels as is shown in the image annotation task , which makes the identification of dependency types of ECs (which is a large set) possible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Overview",
"sec_num": "2.1"
},
{
"text": "In a piece of text, possible EC positions can be described with references to tokens, e.g., before the n th token (Yang and Xue, 2010) . One problem with such methods is that if there are more than one ECs preceding the n th token, they will occupy the same position and can not be distinguished. One solution is to decide the number of ECs for each position, which complicates the problem. But if we do nothing, some ECs will be ignored.",
"cite_spans": [
{
"start": 114,
"end": 134,
"text": "(Yang and Xue, 2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Locations",
"sec_num": "2.2.1"
},
{
"text": "A compromised solution is to describe positions using parse trees (Xue and Yang, 2013) . Adjacent ECs before a certain token usually have different head words, which means they are attached to different nodes (head words) in a parse tree. Therefore it is possible to define positions using \"head word, following word\" pairs. Thus the problem of EC detection can be formulated as a classification problem: for each \"head word, following word\" pair, what is the type of the EC? An example is shown in figure 2, in which there are 2 possible EC positions, (\u5403, \u4e86) and (\u5403, \u3002) 1 . Besides, we keep punctuations in the parse tree so that we can describe all the possible positions using the \"head node, following word\" pairs, as no elements will appear after a full stop in a sentence.",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "(Xue and Yang, 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Locations",
"sec_num": "2.2.1"
},
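{
"text": "A minimal sketch of enumerating candidate positions as \"head word, following word\" pairs (our own illustration; we take candidate heads among the following token's ancestors in the dependency tree, which reproduces the two positions of Figure 2):\n\n# Each token: (index, word, head_index); index 0 is the dummy root, punctuation kept.\nsentence = [(1, '\u5403', 0), (2, '\u4e86', 1), (3, '\u3002', 1)]\n\ndef candidate_positions(tokens):\n    head = {i: h for i, _, h in tokens}\n    word = {i: w for i, w, _ in tokens}\n    for idx, w, h in tokens:\n        while h != 0:                      # walk up to the dummy root\n            yield (word[h], w)             # a possible EC before token w, attached to this head\n            h = head[h]\n\nprint(list(candidate_positions(sentence)))  # [('\u5403', '\u4e86'), ('\u5403', '\u3002')]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Defining Locations",
"sec_num": "2.2.1"
},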
{
"text": "\u5403 \u3002 Position-2 \u4e86 Position-1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ROOT",
"sec_num": null
},
{
"text": "The feature vector is constructed by concatenating the word embeddings of context words that are expected to contribute to the detection of ECs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "1. The head word (except the dummy root node). Suppose words are represented using d dimension vectors, we need d elements to represent this feature. The distributed representations of the head word would be placed at the corresponding positions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "2. The following word in the text. This feature is extracted using the same method with head words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "3. \"Nephews\", the sons of the following word. We choose the leftmost two.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "4. Words in dependency paths. ECs usually have long distance dependencies with words which cannot be fully captured by the above categories. We need a new feature to describe such long distance semantic relations: Dependency Paths. From the training data, we collect all the paths from root nodes to ECs (ECs excluded) together with dependency types. Below we give an example to illustrate the extraction of this kind of features using a complex sentence following word (\u5fb7\u56fd). But such phenomenas are rare, so here we still adopt the tree based method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "with a multi-layer hierarchical dependency tree as in Fig. 3 . If we have m kinds of such paths with different path types or dependency types, we need md elements to represent this kind of features. The distributed representations of the words would be placed at the corresponding positions in the feature vector and the remaining are set to 0.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 60,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "Previous work usually involves lots of syntactic and semantic features. In the work of (Xue and Yang, 2013), 6 kinds of features are used, including those derived from constituency parse trees, dependency parse trees, semantic roles and others. Here we use only the dependency parse trees for the feature extraction. The words in dependency paths we use have proven their potential in representing the meanings of text in frame identification (Hermann et al., 2014) .",
"cite_spans": [
{
"start": 443,
"end": 465,
"text": "(Hermann et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "Take the OP in the sentence shown in Fig. 3 for example. For the OP, its head word is \"\u7684\", its following word is \"\u544a\u522b\" and its nephews are \"NULL\" and \"NULL\" (ECs are invisible).",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 43,
"text": "Fig. 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "The dependency path from root to OP is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "Root ROOT \u2212 \u2212\u2212\u2212 \u2192 \u4e3e\u884c/hold COM P \u2212 \u2212\u2212\u2212\u2212 \u2192 \u4eea\u5f0f/ceremony RELC \u2212 \u2212\u2212\u2212 \u2192 \u7684/DE COM P \u2212 \u2212\u2212\u2212\u2212 \u2192 OP",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "For such a path, we have the following subpaths:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "Root ROOT \u2212 \u2212\u2212\u2212 \u2192 . COM P \u2212 \u2212\u2212\u2212\u2212 \u2192 . RELC \u2212 \u2212\u2212\u2212 \u2192 X Root ROOT \u2212 \u2212\u2212\u2212 \u2192 . COM P \u2212 \u2212\u2212\u2212\u2212 \u2192 X Root ROOT \u2212 \u2212\u2212\u2212 \u2192 X",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "For the position of the OP in the given example, the words with corresponding dependency paths are \"\u7684\", \"\u4eea\u5f0f\" and \"\u4e3e\u884c\". Similarly, we collects all the paths from other ECs in the training examples to build the feature template.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
{
"text": "In the testing phase, for each possible EC position, we place the distributed representations of the right words at the corresponding positions of its feature vector. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},
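{
"text": "A minimal sketch of assembling the concatenated feature vector from the template (illustrative only; 'word2vec' is a hypothetical word-to-vector lookup and 'path_types' is a toy version of the sub-path template collected from training data):\n\nimport numpy as np\n\nd = 80                      # word vector dimension used in the paper\nword2vec = {}               # hypothetical: word -> np.ndarray of size d\n\ndef emb(w):\n    # Zero vector for absent words or empty slots, as described above.\n    return word2vec.get(w, np.zeros(d))\n\npath_types = ['ROOT', 'ROOT/COMP', 'ROOT/COMP/RELC']   # toy sub-path template\n\ndef feature_vector(head, following, nephews, path_words):\n    # Slots: head word, following word, two leftmost nephews, one slot per sub-path type.\n    slots = [emb(head), emb(following), emb(nephews[0]), emb(nephews[1])]\n    slots += [emb(path_words.get(p)) for p in path_types]\n    return np.concatenate(slots)\n\nX = feature_vector('\u7684', '\u544a\u522b', ['NULL', 'NULL'], {'ROOT': '\u4e3e\u884c', 'ROOT/COMP': '\u4eea\u5f0f', 'ROOT/COMP/RELC': '\u7684'})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction",
"sec_num": "2.2.2"
},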
{
"text": "The proposed method can be applied to various kinds of languages as long as annotated corpus are available. In our experiments, we use a subset of Chinese Treebank V7.0. We split the data set into three parts, training, development and test data. Following the previous research, we use File 1-40 and 901-931 as the test data, File 41-80 as the development data. The training data includes File {81-325, 400-454, 500-554, 590-596, 6000-885, 900}. The development data is used to tune parameters and the final results are reported on the test data. CTB trees are transferred to dependency trees for feature extraction with ECs preserved (Xue, 2007) .",
"cite_spans": [
{
"start": 636,
"end": 647,
"text": "(Xue, 2007)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "The distributed word representation we use is learned using the word2vec toolkit (Mikolov et al., 2013) . We train the model on a large Chinese news copora provided by Sogou 2 , which contains about 1 billion words after necessary preprocessing. The text is segmented into words using ICTCLAS (Zhang et al., 2003) 3 .",
"cite_spans": [
{
"start": 81,
"end": 103,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF16"
},
{
"start": 293,
"end": 313,
"text": "(Zhang et al., 2003)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "Initialization W A is initialized according to unif orm[\u2212 Parameter Tuning To optimize the parameters, firstly, we set the dimension of word vectors to be 80, the dimension of hidden space to be 50. We search for the suitable learning rate in {10 \u22121 , 10 \u22122 , 10 \u22124 }. Then we deal with the dimension of word vectors {80, 100, 200}. Finally we tune the dimension of hidden space in {50, 200, 500} against the F-1 scores. . Those underlined figures are the value of the parameters after optimization. We use the stochastic gradient descent algorithm to optimize the model. The details can be checked here . The maximum iteration number we used is 10K. In the following experiments, we set the parameters to be learning rate=10 \u22121 , word vector dimension=80 and hidden layer di-mension=500.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "3.2"
},
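{
"text": "A minimal sketch of the initialization described above (the uniform bounds follow our reconstruction of the formulas in the paper, so treat them as an assumption; the dimensions shown are the tuned values):\n\nimport numpy as np\n\nd_in, d_hidden, d_out = 560, 500, 8    # d_in is illustrative; the paper tunes d_hidden = 500\nrng = np.random.default_rng(0)\n\ndef uniform_init(fan_sum, shape):\n    bound = 24.0 / fan_sum             # uniform[-24/(fan_in + fan_out), +24/(fan_in + fan_out)]\n    return rng.uniform(-bound, bound, size=shape)\n\nW_A = uniform_init(d_in + d_hidden, (d_hidden, d_in))\nW_B = uniform_init(d_hidden + d_out, (d_out, d_hidden))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "3.2"
},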
{
"text": "From the experiments for parameter tuning, we find that for the word embeddings in the proposed model, low dimension vectors are better than high dimensions one for low dimension vectors are better in sharing meanings. For the hidden space which represents inputs as uninterpreted vectors, high dimensional vectors are better than low dimensional vectors. The learning rates also have an impact on the performance. If the learning rate is too small, we need more iterations to achieve convergence. If we stop iterations too early, we will suffer under-fitting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "3.2"
},
{
"text": "Previous work reports results based on different evaluation metrics. Some work uses linear positions to describe ECs. ECs are judged on a \"whether there is an EC of type A before a certain token in the text\" basis (Cai et al., 2011) . Collapsing ECs before the same token to one, Cai et al. (2011) has 1352 ECs in the test data. Xue and Yang (2013) has stated that some ECs that share adjacent positions have different heads in the parse tree. They judge ECs on a \"whether there is an EC of type A with a certain head word and a certain following token in the text\" basis. Using this kind of metric, they gets 1765 ECs.",
"cite_spans": [
{
"start": 214,
"end": 232,
"text": "(Cai et al., 2011)",
"ref_id": "BIBREF1"
},
{
"start": 280,
"end": 297,
"text": "Cai et al. (2011)",
"ref_id": "BIBREF1"
},
{
"start": 329,
"end": 348,
"text": "Xue and Yang (2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Evaluation",
"sec_num": "3.3.1"
},
{
"text": "Here we use the same evaluation metric with Xue and Yang (2013) . Note that we still cannot describe all the 1838 ECs in the corpora, for on some occasions ECs preceding the same token share the same head word. We also omit some ECs which cause cycles in dependency trees as described in the previous sections. We have 1748 ECs, 95% of all the ECs in the test data, very close to 1765 used by Xue and Yang (2013) . The total number of ECs has an impact on the recall. In Table 3 , we include results based on each method's own EC count (1748, 1765, 1352 for Ours, Xue's and Cai's respectively) and the real total EC count 1838 (figures in brackets). Yang and Xue (2010) report an experiment result based on a classification model in a unified Table 2 .",
"cite_spans": [
{
"start": 44,
"end": 63,
"text": "Xue and Yang (2013)",
"ref_id": "BIBREF22"
},
{
"start": 393,
"end": 412,
"text": "Xue and Yang (2013)",
"ref_id": "BIBREF22"
},
{
"start": 536,
"end": 542,
"text": "(1748,",
"ref_id": null
},
{
"start": 543,
"end": 548,
"text": "1765,",
"ref_id": null
},
{
"start": 549,
"end": 553,
"text": "1352",
"ref_id": null
},
{
"start": 650,
"end": 669,
"text": "Yang and Xue (2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 471,
"end": 478,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 743,
"end": 750,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Metrics and Evaluation",
"sec_num": "3.3.1"
},
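{
"text": "For clarity, a small sketch of how precision, recall and F1 are computed under the head-word-based metric, including the alternative recall denominator used for the bracketed figures in Table 3 (our own illustration, not the authors' scorer):\n\ndef prf(pred, gold, total_gold=None):\n    # pred/gold: sets of (head word, following word, EC type) triples.\n    # total_gold overrides the recall denominator, e.g. 1838 for the full EC count.\n    correct = len(pred & gold)\n    p = correct / len(pred) if pred else 0.0\n    r = correct / (total_gold if total_gold else len(gold)) if gold else 0.0\n    f = 2 * p * r / (p + r) if (p + r) else 0.0\n    return p, r, f",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Evaluation",
"sec_num": "3.3.1"
},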
{
"text": "The results are shown in Table 3 . We present the results for each kind of EC and compare our results with two previous state-of-the-art methods (Cai et al., 2011; Xue and Yang, 2013) .",
"cite_spans": [
{
"start": 145,
"end": 163,
"text": "(Cai et al., 2011;",
"ref_id": "BIBREF1"
},
{
"start": 164,
"end": 183,
"text": "Xue and Yang, 2013)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Metrics and Evaluation",
"sec_num": "3.3.1"
},
{
"text": "The proposed method yields the newest stateof-the-art performances on CTB as far as we know. We also identify the dependency types between ECs and their heads. Some ECs, such as pro and PRO, are latent subjects of sentences. They usually serve as SBJ with very few exceptions. While the others may play various roles. There are 31 possible (EC, Dep) pairs. Using the same model, the overall result is p = 0.701, r = 0.703, f = 0.702.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metrics and Evaluation",
"sec_num": "3.3.1"
},
{
"text": "We compare the effectiveness of different features by eliminating each kind of features described in the previous section. As Table 4 shows, the most important kind is the dependency paths, which cause a huge drop in performance if eliminated. Dependency paths encode words and path pattern information which is proved essential for the detection of ECs. Besides, headwords are also useful. others, we cannot easily make the conclusion that they are of little usage in the identification of ECs. They are not fully explored in the proposed model, but may be vital for EC detection in reality. Worth to mention is that of the several kinds of ECs, the proposed method shows the best performance on ECs of type T, which represents ECs that are the trace of \"A'\"-movement, which moves a word to a position where no fixed grammatical function is assigned. Here we give an example:",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.3.2"
},
{
"text": "\"[ ] \u770b\u8d77\u6765/seem A \u559c\u6b22/like B.\" \"A \u770b\u8d77\u6765/seem (EC) \u559c\u6b22/like B.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.3.2"
},
{
"text": "A is moved to the head of the sentence as the topic (topicalization) and left a trace which is the EC. To detect this EC, we need information about the action \"\u559c\u6b22/like\", the link verb \"\u770b\u8d77 \u6765/seem\" and the arguments \"A\" and \"B\". ECs of type T are very common in Chinese, since Chinese is a topic-prominent language. Using distributed representations, it is easy to encode the context information in our feature vectors for EC detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.3.2"
},
{
"text": "We also have satisfying results and significant improvements for the other types except * (trace of A-movement), which make up about 1% of all the ECs in the test data. Partly because there are too few * examples in the training data. We need to further improve our models to detect such ECs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "3.3.2"
},
{
"text": "The proposed method is capable of handling large set of labels. Hence it is possible to detect EC types and dependency types simultaneously. Besides, some other NLP tasks can also be formulated as annotation tasks, and therefore can be resolved using the same scheme, such as the frame identification for verbs (Hermann et al., 2014) .",
"cite_spans": [
{
"start": 311,
"end": 333,
"text": "(Hermann et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "This work together with some previous work that uses classification methods (Cai et al., 2011; Xue and Yang, 2013; Xue, 2007) , regards ECs in a sentence as independent to each other and even independent to words that do not appear in the feature vectors. Such an assumption makes it easier to design models and features but does not reflect the grammatic constraints of languages. For example, simple sentences in Chinese contain one and only one subject, whether it is an EC or not. If it is decided there is an EC as a subject in a certain place, there should be no more ECs as subjects in the same sentence. But such an important property is not reflected in these classification models. Methods that adopt parsing techniques take the whole parse tree as input and output a parse tree with EC anchored. So we can view the sentence as a whole and deal with ECs with regarding to all the words in the sentence. Iida and Poesio (2011) also take the grammar constraints into consideration by formulating EC detection as an ILP problem. But they usually yield poor performances compared with classification methods partly because the methods they use can not fully explore the syntactic and semantic features.",
"cite_spans": [
{
"start": 76,
"end": 94,
"text": "(Cai et al., 2011;",
"ref_id": "BIBREF1"
},
{
"start": 95,
"end": 114,
"text": "Xue and Yang, 2013;",
"ref_id": "BIBREF22"
},
{
"start": 115,
"end": 125,
"text": "Xue, 2007)",
"ref_id": "BIBREF24"
},
{
"start": 913,
"end": 935,
"text": "Iida and Poesio (2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Empty category is a complex problem . Existing methods for EC detection mainly explores syntactic and semantic features using classification models or parsing techniques. Johnson (2002) proposes a simple pattern based algorithm to recover ECs, both the positions and their antecedents in phrase structure trees. Gabbard et al. (2006) presents a two stage parser that uses syntactical features to recover Penn Treebank style syntactic analyses, including the ECs. The first stage, sentences are parse as usual without ECs, and in the second stage, ECs are detected using a learned model with rich text features in the tree structures. Kong and Zhou (2010) reports a tree kernel-based model which takes as input parse trees for EC detection. They also deal with EC resolution, to link ECs to text pieces if possible. They reports their results on Chinese Treebank. Yang and Xue (2010) try to restore ECs from parse trees using a Maximum Entropy model. Iida and Poesio (2011) propose an cross-lingual ILPbased model for zero anaphora detection. Cai et al. (2011) reports a classification model for EC detection. Their method is based on \"is there an EC before a certain token\".",
"cite_spans": [
{
"start": 171,
"end": 185,
"text": "Johnson (2002)",
"ref_id": "BIBREF8"
},
{
"start": 312,
"end": 333,
"text": "Gabbard et al. (2006)",
"ref_id": "BIBREF5"
},
{
"start": 634,
"end": 654,
"text": "Kong and Zhou (2010)",
"ref_id": "BIBREF10"
},
{
"start": 863,
"end": 882,
"text": "Yang and Xue (2010)",
"ref_id": "BIBREF25"
},
{
"start": 950,
"end": 972,
"text": "Iida and Poesio (2011)",
"ref_id": "BIBREF7"
},
{
"start": 1042,
"end": 1059,
"text": "Cai et al. (2011)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Recently Xue and Yang (2013) further develop the method of Yang and Xue (2010) and explore rich syntactical and semantical features, including paths in parse trees and semantic roles, to train an ME classification model for EC detection and yield the best performance reported using a strict evaluation metric on Chinese Treebank as far as we know.",
"cite_spans": [
{
"start": 9,
"end": 28,
"text": "Xue and Yang (2013)",
"ref_id": "BIBREF22"
},
{
"start": 59,
"end": 78,
"text": "Yang and Xue (2010)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "As we have stated, the traditional features used by above methods are not good at capturing the meanings of contexts. Currently the distributed representations together with deep neural networks have proven their ability not only in representing meaning of words, inferring words from the context, but also in representing structures of text (Socher et al., 2010; . Deep neural networks are capable of learning features from corpus, therefore saves the labor of feature engineering and have proven their ability in lots of NLP task (Collobert et al., 2011; Bengio et al., 2006) .",
"cite_spans": [
{
"start": 342,
"end": 363,
"text": "(Socher et al., 2010;",
"ref_id": "BIBREF18"
},
{
"start": 532,
"end": 556,
"text": "(Collobert et al., 2011;",
"ref_id": "BIBREF4"
},
{
"start": 557,
"end": 577,
"text": "Bengio et al., 2006)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The most relevant work to this paper are that of and that of Hermann et al. (2014). propose a deep neural network scheme exploring the hidden space for image annotation. They map both the images and labels to the same hidden space and annotate new images according to their representations in the hidden space. Hermann et al. (2014) extend the scheme to frame identification, for which they obtain satisfying results. This paper further uses it for empty category detection with features designed for EC detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Compared with previous research, the proposed model simplifies the feature engineering greatly and produces distributed representations for both ECs and EC types which will benefit other tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we propose a new empty category detection method using distributed word representations. Using the word embeddings of the contexts of ECs as features enables us to employ rich information in the context without much feature engineering. Experiments on CTB have verified the advantages of the proposed method. We successfully beat the existing state-of-theart methods based on a strict evaluation metric. The proposed method can be further applied to other languages such as Japanese. We will further explore the feasibility of using neural networks to resolve empty categories: to link ECs to their antecedents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Note that there are still problems with the tree based method. As is shown inFig. 3, the pro and T are attached to the same head word (\u544a\u522b) and share the same",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.sogou.com/labs/dl/cs.html 3 The word segment standards used by CTB and ICT-CLAS are roughly the same with minor differences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural probabilistic language models",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Jean-S\u00e9bastien",
"middle": [],
"last": "Sen\u00e9cal",
"suffix": ""
},
{
"first": "Fr\u00e9deric",
"middle": [],
"last": "Morin",
"suffix": ""
},
{
"first": "Jean-Luc",
"middle": [],
"last": "Gauvain",
"suffix": ""
}
],
"year": 2006,
"venue": "Innovations in Machine Learning",
"volume": "",
"issue": "",
"pages": "137--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, Holger Schwenk, Jean-S\u00e9bastien Sen\u00e9cal, Fr\u00e9deric Morin, and Jean-Luc Gauvain. 2006. Neural probabilistic language models. In Innovations in Machine Learning, pages 137-186. Springer.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Language-independent parsing with empty elements",
"authors": [
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "212--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shu Cai, David Chiang, and Yoav Goldberg. 2011. Language-independent parsing with empty ele- ments. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Lin- guistics: Human Language Technologies: short papers-Volume 2, pages 212-216. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Effects of empty categories on machine translation",
"authors": [
{
"first": "Tagyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "636--645",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tagyoung Chung and Daniel Gildea. 2010. Effects of empty categories on machine translation. In Proceedings of EMNLP, pages 636-645. ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A unified architecture for natural language processing: Deep neural networks with multitask learning",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert and Jason Weston. 2008. A uni- fied architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160-167. ACM.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "The Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (al- most) from scratch. The Journal of Machine Learning Research, 12:2493-2537.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fully parsing the penn treebank",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Gabbard",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Kulick",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the main conference on human language technology conference of the North American chapter of the association of computational linguistics",
"volume": "",
"issue": "",
"pages": "184--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Gabbard, Mitchell Marcus, and Seth Kulick. 2006. Fully parsing the penn treebank. In Pro- ceedings of the main conference on human lan- guage technology conference of the North Amer- ican chapter of the association of computational linguistics, pages 184-191. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semantic frame identification with distributed word representations",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame identification with distributed word representa- tions. In Proceedings of ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A cross-lingual ilp solution to zero anaphora resolution",
"authors": [
{
"first": "Ryu",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "804--813",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryu Iida and Massimo Poesio. 2011. A cross-lingual ilp solution to zero anaphora resolution. In Pro- ceedings of the 49th Annual Meeting of the As- sociation for Computational Linguistics: Human Language Technologies-Volume 1, pages 804-813. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A simple pattern-matching algorithm for recovering empty nodes and their antecedents",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "136--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of the 40th Annual Meeting on Association for Computational Lin- guistics, pages 136-143. ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Exploiting zero pronouns to improve chinese coreference resolution",
"authors": [
{
"first": "Fang",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "278--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fang Kong and Hwee Tou Ng. 2013. Exploiting zero pronouns to improve chinese coreference res- olution. In EMNLP, pages 278-288.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A tree kernelbased unified framework for chinese zero anaphora resolution",
"authors": [
{
"first": "Fang",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "882--891",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fang Kong and Guodong Zhou. 2010. A tree kernel- based unified framework for chinese zero anaphora resolution. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Pro- cessing, pages 882-891. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A model of coherence based on distributed sentence representation",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2061--2069",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Eduard Hovy. 2014. A model of coher- ence based on distributed sentence representation. In Proceedings of the 2014 Conference on Empiri- cal Methods in Natural Language Processing, pages 2061-2069.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The nlp engine: A universal turing machine for nlp",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.00168"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li and Eduard Hovy. 2015. The nlp engine: A universal turing machine for nlp. arXiv preprint arXiv:1503.00168.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Recursive deep models for discourse parsing",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Rumeng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2061--2069",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Rumeng Li, and Eduard Hovy. 2014. Re- cursive deep models for discourse parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 2061-2069.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "When are tree structures necessary for deep learning of representations? arXiv preprint",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Eudard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.00185"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Li, Dan Jurafsky, and Eudard Hovy. 2015. When are tree structures necessary for deep learning of representations? arXiv preprint arXiv:1503.00185.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Building a large annotated corpus of english: The penn treebank",
"authors": [
{
"first": "P",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large anno- tated corpus of english: The penn treebank. Com- putational linguistics, 19(2):313-330.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NIPS2013",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed rep- resentations of words and phrases and their com- positionality. In Proceedings of NIPS2013, pages 3111-3119.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Exploiting semantic role labeling, wordnet and wikipedia for coreference resolution",
"authors": [
{
"first": "Paolo",
"middle": [],
"last": "Simone",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ponzetto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Strube",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "192--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, wordnet and wikipedia for coreference resolution. In Proceed- ings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Lin- guistics, pages 192-199. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning continuous phrase representations and syntactic parsing with recursive neural networks",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "Andrew Y",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Christopher D Manning, and An- drew Y Ng. 2010. Learning continuous phrase representations and syntactic parsing with recur- sive neural networks. In Proceedings of the NIPS- 2010 Deep Learning and Unsupervised Feature Learning Workshop, pages 1-9.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Ranking with ordered weighted pairwise classification",
"authors": [
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Buffoni",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Gallinari",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ICML2009",
"volume": "",
"issue": "",
"pages": "1057--1064",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicolas Usunier, David Buffoni, and Patrick Galli- nari. 2009. Ranking with ordered weighted pair- wise classification. In Proceedings of ICML2009, pages 1057-1064. ACM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Wsabie: Scaling up to large vocabulary image annotation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IJCAI2011",
"volume": "",
"issue": "",
"pages": "2764--2770",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. Wsabie: Scaling up to large vocabulary im- age annotation. In Proceedings of IJCAI2011, vol- ume 11, pages 2764-2770.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Enlisting the ghost: Modeling empty categories for machine translation",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2013,
"venue": "ACL (1)",
"volume": "",
"issue": "",
"pages": "822--831",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Xiang, Xiaoqiang Luo, and Bowen Zhou. 2013. Enlisting the ghost: Modeling empty categories for machine translation. In ACL (1), pages 822- 831. Citeseer.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dependencybased empty category detection via phrase structure trees",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Yaqin",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2013,
"venue": "HLT-NAACL",
"volume": "",
"issue": "",
"pages": "1051--1060",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue and Yaqin Yang. 2013. Dependency- based empty category detection via phrase struc- ture trees. In HLT-NAACL, pages 1051-1060.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The penn chinese treebank: Phrase structure annotation of a large corpus",
"authors": [
{
"first": "Naiwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Fu-Dong",
"middle": [],
"last": "Chiou",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2005,
"venue": "Natural language engineering",
"volume": "11",
"issue": "02",
"pages": "207--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. Natural language engineering, 11(02):207-238.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Tapping the implicit information for the ps to ds conversion of the chinese treebank",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Sixth International Workshop on Treebanks and Linguistics Theories",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue. 2007. Tapping the implicit informa- tion for the ps to ds conversion of the chinese tree- bank. In Proceedings of the Sixth International Workshop on Treebanks and Linguistics Theories.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Chasing the ghost: recovering empty categories in the chinese treebank",
"authors": [
{
"first": "Yaqin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters",
"volume": "",
"issue": "",
"pages": "1382--1390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaqin Yang and Nianwen Xue. 2010. Chasing the ghost: recovering empty categories in the chinese treebank. In Proceedings of the 23rd Interna- tional Conference on Computational Linguistics: Posters, pages 1382-1390. ACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Hhmm-based chinese lexical analyzer ictclas",
"authors": [
{
"first": "Hua-Ping",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hong-Kui",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "De-Yi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the second SIGHAN workshop on Chinese language processing",
"volume": "17",
"issue": "",
"pages": "184--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hua-Ping Zhang, Hong-Kui Yu, De-Yi Xiong, and Qun Liu. 2003. Hhmm-based chinese lexi- cal analyzer ictclas. In Proceedings of the sec- ond SIGHAN workshop on Chinese language processing-Volume 17, pages 184-187. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "System Architecture",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Possible EC Positions in a Dependency Tree",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "in +d hidden ]. And W B is initialized using unif orm[\u2212 24 d hidden +dout , 24 d hidden +dout ]. Here d in , d hidden and d out are the dimensions of the input layer, the hidden space and the label space.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "\u4fc4\u7f57\u65af \u519b\u961f 31 \u65e5 \u4e3e\u884c \u4e86 OP pro T \u544a\u522b \u5fb7\u56fd \u7684 \u6700\u540e \u4eea\u5f0f \u3002",
"html": null,
"content": "<table><tr><td/><td/><td/><td/><td/><td colspan=\"2\">UNK</td><td/></tr><tr><td/><td/><td/><td/><td/><td>COMP</td><td/><td/></tr><tr><td/><td/><td>ROOT</td><td/><td/><td>COMP</td><td/><td/></tr><tr><td>NMOD</td><td>SUBJ</td><td>TMP</td><td>PRT</td><td>SBJ</td><td>ADV</td><td>COMP COMP</td><td>RELC AMOD</td></tr><tr><td colspan=\"8\">\u4fc4\u7f57\u65af/Russian \u519b\u961f/troops 31 \u65e5/31rd \u4e3e\u884c/hold \u4e86/past-tense-marker \u544a\u522b/farewell \u5fb7</td></tr><tr><td/><td/><td colspan=\"5\">\u56fd/Germany \u7684/DE \u6700\u540e/final \u4eea\u5f0f/ceremony \u3002</td><td/></tr><tr><td/><td/><td colspan=\"5\">Figure 3: ECs in a Dependency Tree</td><td/></tr><tr><td/><td>Train</td><td colspan=\"2\">Dev</td><td>Test</td><td/><td/><td/></tr><tr><td>File</td><td colspan=\"3\">81-325, 400-454 41-80</td><td>1-40</td><td/><td/><td/></tr><tr><td/><td colspan=\"2\">500-554, 590-596</td><td/><td>901-931</td><td/><td/><td/></tr><tr><td/><td colspan=\"2\">600-885, 900</td><td/><td/><td/><td/><td/></tr><tr><td>#pro</td><td>1023</td><td colspan=\"2\">166</td><td>297</td><td/><td/><td/></tr><tr><td>#PRO</td><td>1089</td><td colspan=\"2\">210</td><td>298</td><td/><td/><td/></tr><tr><td>#OP</td><td>2099</td><td colspan=\"2\">301</td><td>575</td><td/><td/><td/></tr><tr><td>#T</td><td>1981</td><td colspan=\"2\">287</td><td>527</td><td/><td/><td/></tr><tr><td>#RNR</td><td>91</td><td colspan=\"2\">15</td><td>32</td><td/><td/><td/></tr><tr><td>#*</td><td>22</td><td>0</td><td/><td>19</td><td/><td/><td/></tr><tr><td>#Others</td><td>0</td><td>0</td><td/><td>0</td><td/><td/><td/></tr><tr><td>Total</td><td>6305</td><td colspan=\"2\">979</td><td>1748</td><td/><td/><td/></tr><tr><td colspan=\"5\">Table 1: Data Division and EC Distribution</td><td/><td/><td/></tr><tr><td colspan=\"3\">3 Experiments on CTB</td><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null
},
"TABREF2": {
"text": "EC Distribution in the Test Data",
"html": null,
"content": "<table><tr><td>class</td><td>correct</td><td>p</td><td>r</td><td>F1</td></tr><tr><td>PRO</td><td>162</td><td>.479</td><td>.545</td><td>.510</td></tr><tr><td>pro</td><td>161</td><td>.564</td><td>.540</td><td>.552</td></tr><tr><td>OP</td><td>409</td><td>.707</td><td>.776</td><td>.740</td></tr><tr><td>T</td><td>506</td><td>.939</td><td>.88</td><td>.908</td></tr><tr><td>RNR</td><td>23</td><td>.767</td><td>.719</td><td>.742</td></tr><tr><td>*</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Overall</td><td>1261</td><td>.712</td><td>.721</td><td>.717</td></tr><tr><td/><td/><td/><td colspan=\"2\">(.686) (.699)</td></tr><tr><td>(Xue)</td><td>903</td><td>.653</td><td>.512</td><td>.574</td></tr><tr><td/><td/><td/><td colspan=\"2\">(.491) (.561)</td></tr><tr><td>(Cai)</td><td>737</td><td>.660</td><td>.545</td><td>.586</td></tr><tr><td/><td/><td/><td colspan=\"2\">(.401) (.499)</td></tr></table>",
"type_str": "table",
"num": null
},
"TABREF3": {
"text": "Performance on the CTB Test Data parsing frame. We do not include it for it uses different and relativelyThe distributions of ECs in the test data are shown in",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
},
"TABREF5": {
"text": "Effectiveness of Features",
"html": null,
"content": "<table/>",
"type_str": "table",
"num": null
}
}
}
}