{ "paper_id": "P12-1028", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:29:05.574961Z" }, "title": "Verb Classification using Distributional Similarity in Syntactic and Semantic Structures", "authors": [ { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tor Vergata", "location": { "postCode": "00133", "settlement": "Roma", "country": "Italy" } }, "email": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Trento", "location": { "postCode": "38123", "settlement": "Povo", "region": "TN", "country": "Italy" } }, "email": "moschitti@disi.unitn.it" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Tor", "location": { "postCode": "00133", "settlement": "Vergata, Roma", "country": "Italy" } }, "email": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Colorado at Boulder Boulder", "location": { "postCode": "80302", "region": "CO", "country": "USA" } }, "email": "mpalmer@colorado.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we propose innovative representations for automatic classification of verbs according to mainstream linguistic theories, namely VerbNet and FrameNet. First, syntactic and semantic structures capturing essential lexical and syntactic properties of verbs are defined. Then, we design advanced similarity functions between such structures, i.e., semantic tree kernel functions, for exploiting distributional and grammatical information in Support Vector Machines. 
The extensive empirical analysis on VerbNet class and frame detection shows that our models capture meaningful syntactic/semantic structures, which allows for improving the state-of-the-art.", "pdf_parse": { "paper_id": "P12-1028", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we propose innovative representations for automatic classification of verbs according to mainstream linguistic theories, namely VerbNet and FrameNet. First, syntactic and semantic structures capturing essential lexical and syntactic properties of verbs are defined. Then, we design advanced similarity functions between such structures, i.e., semantic tree kernel functions, for exploiting distributional and grammatical information in Support Vector Machines. The extensive empirical analysis on VerbNet class and frame detection shows that our models capture meaningful syntactic/semantic structures, which allows for improving the state-of-the-art.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Verb classification is a fundamental topic of computational linguistics research given its importance for understanding the role of verbs in conveying semantics of natural language (NL). Additionally, generalization based on verb classification is central to many NL applications, ranging from shallow semantic parsing to semantic search or information extraction. 
Currently, a lot of interest has been paid to two verb categorization schemes: VerbNet (Schuler, 2005) and FrameNet (Baker et al., 1998) , which has also fostered the production of many automatic approaches to predicate argument extraction.", "cite_spans": [ { "start": 452, "end": 467, "text": "(Schuler, 2005)", "ref_id": "BIBREF38" }, { "start": 481, "end": 501, "text": "(Baker et al., 1998)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Such work has shown that syntax is necessary for helping to predict the roles of verb arguments and consequently their verb sense (Gildea and Jurafsky, 2002; Pradhan et al., 2005; Gildea and Palmer, 2002) . However, the definition of models for optimally combining lexical and syntactic constraints is still far from being accomplished. In particular, the exhaustive design and experimentation of lexical and syntactic features for learning verb classification appears to be computationally problematic. For example, the verb order can belong to two VerbNet classes:", "cite_spans": [ { "start": 130, "end": 157, "text": "(Gildea and Jurafsky, 2002;", "ref_id": "BIBREF16" }, { "start": 158, "end": 179, "text": "Pradhan et al., 2005;", "ref_id": "BIBREF35" }, { "start": 180, "end": 204, "text": "Gildea and Palmer, 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "-The class 60.1, i.e., order someone to do something, as shown in: The Illinois Supreme Court ordered the commission to audit Commonwealth Edison 's construction expenses and refund any unreasonable expenses .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "-The class 13.5.1: order or request something, as in: ... 
Michelle blabs about it to a sandwich man while ordering lunch over the phone .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Clearly, the syntactic realization can be used to discern the two cases above, but it would not be enough to correctly classify the following verb occurrence: .. ordered the lunch to be delivered .. in verb class 13.5.1. For such a case, selectional restrictions are needed. These have also been shown to be useful for semantic role classification (Zapirain et al., 2010) . Note that their coding in learning algorithms is rather complex: we need to take into account syntactic structures, which may require an exponential number of syntactic features (i.e., all their possible substructures). Moreover, these have to be enriched with lexical information to trigger lexical preferences.", "cite_spans": [ { "start": 343, "end": 366, "text": "(Zapirain et al., 2010)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we tackle the problem above by studying innovative representations for automatic verb classification according to VerbNet and FrameNet. We define syntactic and semantic structures capturing essential lexical and syntactic properties of verbs. Then, we apply similarity functions between such structures, i.e., kernel functions, which can also exploit distributional lexical semantics, to train automatic classifiers. The basic idea of such functions is to compute the similarity between two verbs in terms of all the possible substructures of their syntactic frames. We define and automatically extract a lexicalized approximation of the latter. Then, we apply kernel functions that jointly model structural and lexical similarity so that syntactic properties are combined with generalized lexemes. 
A useful property of kernel functions is that they can be used in place of the scalar product of feature vectors to train algorithms such as Support Vector Machines (SVMs). This way, SVMs can learn the association between target verb classes and syntactic (sub)structures whose lexical arguments are generalized, i.e., they can also learn selectional restrictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We carried out extensive experiments on verb class and frame detection which showed that our models greatly improve on the state-of-the-art (up to about 13% relative error reduction). These results are corroborated by manual inspection of the most important substructures used by the classifiers, which largely correlate with syntactic frames defined in VerbNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the rest of the paper, Sec. 2 reports on related work, Sec. 3 and Sec. 4 describe previous and our models for syntactic and semantic similarity, respectively, Sec. 5 illustrates our experiments, Sec. 6 discusses the output of the models in terms of error analysis and important structures, and finally Sec. 7 draws conclusions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our target task is verb classification, but our models also exploit distributional models as well as structural kernels. The next three subsections report related work in such areas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Verb Classification. The introductory verb classification example has intuitively shown the complexity of defining a comprehensive feature representation. 
Hereafter, we report on analyses carried out in previous work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "It has often been observed that verb senses tend to show different selectional constraints in a specific argument position, and the above verb order is a clear example. In the direct object position of the example sentence for the first sense 60.1 of order, we found commission in the role PATIENT of the predicate. It clearly satisfies the +ANIMATE/+ORGANIZATION restriction on the PATIENT role. This is not true for the direct object dependency of the alternative sense 13.5.1, which usually expresses the THEME role, with unrestricted type selection. When properly generalized, the direct object information has thus been shown to be highly predictive of verb sense distinctions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In (Brown et al., 2011) , the so-called dynamic dependency neighborhoods (DDN), i.e., the set of verbs that are typically collocated with a direct object, are shown to be more helpful than lexical information (e.g., WordNet). The set of typical verbs taking a noun n as a direct object is in fact a strong characterization of semantic similarity, as all the nouns m similar to n tend to collocate with the same verbs. 
This is true also for other syntactic dependencies, among which the direct object dependency is possibly the strongest cue (as shown for example in (Dligach and Palmer, 2008) ).", "cite_spans": [ { "start": 3, "end": 23, "text": "(Brown et al., 2011)", "ref_id": "BIBREF4" }, { "start": 567, "end": 593, "text": "(Dligach and Palmer, 2008)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In order to generalize the above DDN feature, distributional models are ideal, as they are designed to model all the collocations of a given noun, according to large scale corpus analysis. Their ability to capture lexical similarity is well established in WSD tasks (e.g. (Schutze, 1998) ), thesauri harvesting (Lin, 1998) , semantic role labeling (Croce et al., 2010) ) as well as information retrieval (e.g. (Furnas et al., 1988) ).", "cite_spans": [ { "start": 272, "end": 287, "text": "(Schutze, 1998)", "ref_id": "BIBREF39" }, { "start": 311, "end": 322, "text": "(Lin, 1998)", "ref_id": "BIBREF26" }, { "start": 348, "end": 368, "text": "(Croce et al., 2010)", "ref_id": "BIBREF9" }, { "start": 410, "end": 431, "text": "(Furnas et al., 1988)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Distributional Models (DMs). These models follow the distributional hypothesis (Firth, 1957) and characterize lexical meanings in terms of context of use, (Wittgenstein, 1953) . By inducing geometrical notions of vectors and norms through corpus analysis, they provide a topological definition of semantic similarity, i.e., distance in a space. DMs can capture the similarity between words such as delegation, deputation or company and commission. 
In the case of sense 60.1 of the verb order, DMs can be used to suggest that the role PATIENT can be inherited by all these words, as suitable Organisations.", "cite_spans": [ { "start": 79, "end": 92, "text": "(Firth, 1957)", "ref_id": "BIBREF14" }, { "start": 155, "end": 175, "text": "(Wittgenstein, 1953)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In supervised language learning, when few examples are available, DMs support cost-effective lexical generalizations, often outperforming knowledge-based resources (such as WordNet, as in (Pantel et al., 2007) ). Obviously, the choice of the context type determines the type of targeted semantic properties. Wider contexts (e.g., entire documents) are shown to suggest topical relations. Smaller contexts tend to capture more specific semantic aspects, e.g. the syntactic behavior, and better capture paradigmatic relations, such as synonymy. In particular, word space models, as described in (Sahlgren, 2006) , define contexts as the words appearing in an n-sized window, centered around a target word. Co-occurrence counts are thus collected in a words-by-words matrix, where each element records the number of times two words co-occur within a single window of word tokens. Moreover, robust weighting schemas are used to smooth counts against too frequent co-occurrence pairs: Pointwise Mutual Information (PMI) scores (Turney and Pantel, 2010) are commonly adopted.", "cite_spans": [ { "start": 188, "end": 209, "text": "(Pantel et al., 2007)", "ref_id": "BIBREF31" }, { "start": 593, "end": 609, "text": "(Sahlgren, 2006)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Structural Kernels. 
Tree and sequence kernels have been successfully used in many NLP applications, e.g., parse reranking and adaptation, (Collins and Duffy, 2002; Shen et al., 2003; Toutanova et al., 2004; Kudo et al., 2005; Titov and Henderson, 2006) , chunking and dependency parsing, e.g., (Kudo and Matsumoto, 2003; Daum\u00e9 III and Marcu, 2004) , named entity recognition, (Cumby and Roth, 2003) , text categorization, e.g., (Cancedda et al., 2003; Gliozzo et al., 2005) , and relation extraction, e.g., (Zelenko et al., 2002; Bunescu and Mooney, 2005; Zhang et al., 2006) .", "cite_spans": [ { "start": 138, "end": 163, "text": "(Collins and Duffy, 2002;", "ref_id": "BIBREF7" }, { "start": 164, "end": 182, "text": "Shen et al., 2003;", "ref_id": "BIBREF41" }, { "start": 183, "end": 206, "text": "Toutanova et al., 2004;", "ref_id": "BIBREF43" }, { "start": 207, "end": 225, "text": "Kudo et al., 2005;", "ref_id": "BIBREF24" }, { "start": 226, "end": 252, "text": "Titov and Henderson, 2006)", "ref_id": "BIBREF42" }, { "start": 294, "end": 320, "text": "(Kudo and Matsumoto, 2003;", "ref_id": "BIBREF23" }, { "start": 321, "end": 347, "text": "Daum\u00e9 III and Marcu, 2004)", "ref_id": "BIBREF12" }, { "start": 376, "end": 398, "text": "(Cumby and Roth, 2003)", "ref_id": "BIBREF11" }, { "start": 428, "end": 451, "text": "(Cancedda et al., 2003;", "ref_id": null }, { "start": 452, "end": 473, "text": "Gliozzo et al., 2005)", "ref_id": "BIBREF19" }, { "start": 507, "end": 529, "text": "(Zelenko et al., 2002;", "ref_id": "BIBREF48" }, { "start": 530, "end": 555, "text": "Bunescu and Mooney, 2005;", "ref_id": "BIBREF5" }, { "start": 556, "end": 575, "text": "Zhang et al., 2006)", "ref_id": "BIBREF49" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Recently, DMs have been also proposed in integrated syntactic-semantic structures that feed advanced learning functions, such as the semantic tree kernels discussed in (Bloehdorn and Moschitti, 2007a; 
Bloehdorn and Moschitti, 2007b; Mehdad et al., 2010; Croce et al., 2011) .", "cite_spans": [ { "start": 168, "end": 200, "text": "(Bloehdorn and Moschitti, 2007a;", "ref_id": "BIBREF2" }, { "start": 201, "end": 232, "text": "Bloehdorn and Moschitti, 2007b;", "ref_id": "BIBREF3" }, { "start": 233, "end": 253, "text": "Mehdad et al., 2010;", "ref_id": "BIBREF28" }, { "start": 254, "end": 273, "text": "Croce et al., 2011)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In this paper, we model verb classifiers by exploiting existing technology for kernel methods. In particular, we design new models for verb classification by adopting algorithms for structural similarity, known as Smoothed Partial Tree Kernels (SPTKs) (Croce et al., 2011) . We define novel structures and similarity functions based on LSA.", "cite_spans": [ { "start": 251, "end": 271, "text": "(Croce et al., 2011)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Structural Similarity Functions", "sec_num": "3" }, { "text": "The main idea of SPTK is rather simple: (i) the similarity between two trees is measured in terms of the number of shared subtrees; and (ii) this count also includes similar fragments whose lexical nodes are just related (so they can be different). 
The contribution of (ii) is proportional to the lexical similarity of the lexical nodes of the trees, where the latter can be evaluated according to distributional models or lexical resources, e.g., WordNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structural Similarity Functions", "sec_num": "3" }, { "text": "In the following, we define our models based on previous work on LSA and SPTKs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Structural Similarity Functions", "sec_num": "3" }, { "text": "3.1 LSA as lexical similarity model Robust representations can be obtained through intelligent dimensionality reduction methods. In LSA the original word-by-context matrix M is decomposed through Singular Value Decomposition (SVD) (Landauer and Dumais, 1997; Golub and Kahan, 1965) into the product of three new matrices: U, S, and V, so that S is diagonal and", "cite_spans": [ { "start": 231, "end": 258, "text": "(Landauer and Dumais, 1997;", "ref_id": "BIBREF25" }, { "start": 259, "end": 281, "text": "Golub and Kahan, 1965)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Structural Similarity Functions", "sec_num": "3" }, { "text": "M = USV^T. M is then approximated by M_k = U_k S_k V_k^T, where only the first k columns of U and V are used, corresponding to the k greatest singular values. This approximation supplies a way to project a generic term w_i into the k-dimensional space using W = U_k S_k^{1/2}, where each row corresponds to the representation vector w_i. 
The original statistical information about M is captured by the new k-dimensional space, which preserves the global structure while removing low-variance dimensions, i.e., distribution noise. Given two words w_1 and w_2, the term similarity function \u03c3 is estimated as the cosine similarity between the corresponding projections w_1, w_2 in the LSA space, i.e., \u03c3(w_1, w_2) = (w_1 \u2022 w_2)/(||w_1|| ||w_2||). This is known as the Latent Semantic Kernel (LSK), proposed in (Cristianini et al., 2001) , as it defines a positive semi-definite Gram matrix G = \u03c3(w_1, w_2) \u2200w_1, w_2 (Shawe-Taylor and Cristianini, 2004) . \u03c3 is thus a valid kernel and can be combined with other kernels, as discussed in the next section.", "cite_spans": [ { "start": 521, "end": 547, "text": "(Cristianini et al., 2001)", "ref_id": "BIBREF8" }, { "start": 648, "end": 666, "text": "Cristianini, 2004)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Structural Similarity Functions", "sec_num": "3" }, { "text": "To our knowledge, two main types of tree kernels exploit lexical similarity: the syntactic semantic tree kernel defined in (Bloehdorn and Moschitti, 2007a) , applied to constituency trees, and the smoothed partial tree kernels (SPTKs) defined in (Croce et al., 2011) , which generalize the former. We report the definition of the latter, as we modified it for our purposes. SPTK computes the number of common substructures between two trees T_1 and T_2 without explicitly considering the whole fragment space. 
Its general equations are reported hereafter:", "cite_spans": [ { "start": 123, "end": 155, "text": "(Bloehdorn and Moschitti, 2007a)", "ref_id": "BIBREF2" }, { "start": 243, "end": 263, "text": "(Croce et al., 2011)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Tree Kernels driven by Semantic Similarity", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "TK(T_1, T_2) = \\sum_{n_1 \\in N_{T_1}} \\sum_{n_2 \\in N_{T_2}} \\Delta(n_1, n_2),", "eq_num": "(1)" } ], "section": "Tree Kernels driven by Semantic Similarity", "sec_num": "3.2" }, { "text": "where N_{T_1} and N_{T_2} are the sets of nodes of T_1 and T_2, respectively, and \u2206(n_1, n_2) is equal to the number of common fragments rooted in the nodes n_1 and n_2 1 . The \u2206 function determines the richness of the kernel space and thus induces different tree kernels, for example, the syntactic tree kernel (STK) (Collins and Duffy, 2002) or the partial tree kernel (PTK) (Moschitti, 2006) . 
The algorithm for SPTK's \u2206 is the following: if n_1 and n_2 are leaves, then \u2206_\u03c3(n_1, n_2) = \u00b5\u03bb\u03c3(n_1, n_2); else", "cite_spans": [ { "start": 322, "end": 347, "text": "(Collins and Duffy, 2002)", "ref_id": "BIBREF7" }, { "start": 381, "end": 398, "text": "(Moschitti, 2006)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Tree Kernels driven by Semantic Similarity", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\Delta_{\\sigma}(n_1, n_2) = \\mu\\sigma(n_1, n_2) \\times \\Big( \\lambda^2 + \\sum_{\\vec{I}_1, \\vec{I}_2 : l(\\vec{I}_1) = l(\\vec{I}_2)} \\lambda^{d(\\vec{I}_1) + d(\\vec{I}_2)} \\prod_{j=1}^{l(\\vec{I}_1)} \\Delta_{\\sigma}(c_{n_1}(\\vec{I}_{1j}), c_{n_2}(\\vec{I}_{2j})) \\Big),", "eq_num": "(2)" } ], "section": "Tree Kernels driven by Semantic Similarity", "sec_num": "3.2" }, { "text": "where (1) \u03c3 is any similarity between nodes, e.g., between their lexical labels;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Kernels driven by Semantic Similarity", "sec_num": "3.2" }, { "text": "(2) \u03bb, \u00b5 \u2208 [0, 1] are decay factors;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Kernels driven by Semantic Similarity", "sec_num": "3.2" }, { "text": "(3) c_{n_1}(h) is the h-th child of the node n_1;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Kernels driven by Semantic Similarity", "sec_num": "3.2" }, { "text": "(4) I_1 and I_2 are two sequences of indexes, i.e., I = (i_1, i_2, .., i_{l(I)}), with 1 \u2264 i_1 < i_2 < .. < i_{l(I)}; and (5) d(I_1) = I_{1,l(I_1)} \u2212 I_{1,1} + 1 and d(I_2) = I_{2,l(I_2)} \u2212 I_{2,1} + 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Kernels driven by Semantic Similarity", "sec_num": "3.2" }, { "text": "Note that, as shown in (Croce et al., 2011) , the average running time of SPTK is sub-quadratic in the number of tree nodes. In the next section we show how we exploit the class of SPTKs for verb classification.", "cite_spans": [ { "start": 23, "end": 43, "text": "(Croce et al., 2011)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Tree Kernels driven by Semantic Similarity", "sec_num": "3.2" }, { "text": "1 To have a similarity score between 0 and 1, a normalization in the kernel space, i.e., TK(T_1, T_2)/\u221a(TK(T_1, T_1) \u00d7 TK(T_2, T_2)), is applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tree Kernels driven by Semantic Similarity", "sec_num": "3.2" }, { "text": "The design of SPTK-based algorithms for our verb classification requires the modeling of two different aspects: (i) a tree representation for the verbs; and (ii) a lexical similarity suitable for the task. We also modified SPTK to apply different similarity functions to different nodes to introduce flexibility.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb Classification Models", "sec_num": "4" }, { "text": "The implicit feature space generated by structural kernels and the corresponding notion of similarity between verbs obviously depend on the input structures. 
For STK, PTK and SPTK, different tree representations lead to more or less expressive linguistic feature spaces.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Verb Structural Representation", "sec_num": "4.1" }, { "text": "With the aim of capturing syntactic features, we started from two different parsing paradigms: phrase and dependency structures. For example, for representing the first example of the introduction, we can use the constituency tree (CT) in Figure 1 , where the target verb node is enriched with the TARGET label. Here, we apply tree pruning to reduce the computational complexity of tree kernels, as the latter is proportional to the number of nodes in the input trees. Accordingly, we only keep the subtree dominated by the target VP by pruning from it all the S-nodes along with their subtrees (i.e., all nested sentences are removed). To further improve generalization, we lemmatize lexical nodes and add generalized POS-Tags, i.e., noun (::n), verb (::v), adjective (::a), determiner (::d) and so on, to them. This is useful for constraining similarity to be only contributed by lexical pairs of the same grammatical category. To encode dependency structure information in a tree (so that we can use it in tree kernels), we use (i) lexemes as nodes of our tree, (ii) their dependencies as edges between the nodes, and (iii) the dependency labels, e.g., grammatical functions (GR), and POS-Tags, again as tree nodes. We designed two different tree types: (i) in the first type, GRs are central nodes from which dependencies are drawn, and all the other features of the central node, i.e., the lexical surface form and its POS-Tag, are added as additional children. An example of the GR Centered Tree (GRCT) is shown in Figure 2 , where the POS-Tags and lexemes are children of GR nodes. (ii) The second type of tree uses lexicals as central nodes, on which both GR and POS-Tag are added as the rightmost children. 
Figure 3 shows an example of a Lexical Centered Tree (LCT). For both trees, the pruning strategy only preserves the verb node, its direct ancestors (parent and siblings) and its descendants up to two levels (i.e., direct children and grandchildren of the verb node). Note that our dependency tree can capture the semantic head of the verbal argument along with the main syntactic construct, e.g., to audit.", "cite_spans": [], "ref_spans": [ { "start": 239, "end": 247, "text": "Figure 1", "ref_id": null }, { "start": 1504, "end": 1512, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 1711, "end": 1719, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Verb Structural Representation", "sec_num": "4.1" }, { "text": "We have defined a new similarity \u03c3_\u03c4 to be used in Eq. 2, which makes SPTK more effective, as shown by Alg. 1. \u03c3_\u03c4 takes two nodes n_1 and n_2 and applies a different similarity for each node type. The latter is determined by \u03c4 and can be: a GR (i.e., SYNT), a POS-Tag (i.e., POS) or a lexical (i.e., LEX) type. In our experiments, we assign 0/1 similarity to SYNT and POS nodes according to string matching. For the LEX type, we apply a lexical similarity learned with LSA only to pairs of lexicals associated with the same POS-Tag. It should be noted that the type-based similarity allows for potentially applying a different similarity for each node. 
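The type-based matching just described can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dict-based node encoding and the toy LSA lookup table are assumptions made for the example.

```python
# Sketch of a type-based node similarity in the spirit of Alg. 1.
# Node encoding and the toy LSA table are illustrative assumptions.

def make_sigma_tau(lsa_sim, lw=1.0):
    # lsa_sim maps a frozenset of two lemmas to a similarity in [0, 1]
    def sigma_tau(n1, n2):
        # a node is a dict with: type (one of SYNT, POS, LEX),
        # label, pos (POS-Tag, LEX nodes only), leaf (bool)
        s = 0.0
        if n1['type'] == n2['type'] == 'SYNT' and n1['label'] == n2['label']:
            s = 1.0  # 0/1 match on grammatical functions
        if n1['type'] == n2['type'] == 'POS' and n1['label'] == n2['label']:
            s = 1.0  # 0/1 match on POS-Tags
        if n1['type'] == n2['type'] == 'LEX' and n1['pos'] == n2['pos']:
            # distributional similarity, same grammatical category only
            s = lsa_sim.get(frozenset((n1['label'], n2['label'])), 0.0)
        if n1.get('leaf') and n2.get('leaf'):
            s *= lw  # leaf weight amplifies leaf matches
        return s
    return sigma_tau

# toy distributional similarities between lemmas
toy_lsa = {frozenset(('commission', 'committee')): 0.8,
           frozenset(('commission', 'lunch')): 0.1}
sim = make_sigma_tau(toy_lsa, lw=2.0)
n_a = {'type': 'LEX', 'label': 'commission', 'pos': 'NN', 'leaf': True}
n_b = {'type': 'LEX', 'label': 'committee', 'pos': 'NN', 'leaf': True}
print(sim(n_a, n_b))  # 0.8 amplified by lw = 2.0, i.e., 1.6
```

Mismatched node types score 0, mirroring the fact that similarity is only contributed by nodes of the same type (and, for lexicals, the same grammatical category).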
Indeed, we also tested an amplification factor, namely the leaf weight (lw), which amplifies the matching values of the leaf nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalized node similarity for SPTK", "sec_num": "4.2" }, { "text": "Algorithm 1 \u03c3_\u03c4(n_1, n_2, lw)
\u03c3_\u03c4 \u2190 0
if \u03c4(n_1) = \u03c4(n_2) = SYNT \u2227 label(n_1) = label(n_2) then \u03c3_\u03c4 \u2190 1 end if
if \u03c4(n_1) = \u03c4(n_2) = POS \u2227 label(n_1) = label(n_2) then \u03c3_\u03c4 \u2190 1 end if
if \u03c4(n_1) = \u03c4(n_2) = LEX \u2227 pos(n_1) = pos(n_2) then \u03c3_\u03c4 \u2190 \u03c3_LEX(n_1, n_2) end if
if leaf(n_1) \u2227 leaf(n_2) then \u03c3_\u03c4 \u2190 \u03c3_\u03c4 \u00d7 lw end if
return \u03c3_\u03c4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generalized node similarity for SPTK", "sec_num": "4.2" }, { "text": "In these experiments, we tested the impact of our different verb representations using different kernels, similarities and parameters. We also compared with simple bag-of-words (BOW) models and the state-of-the-art.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We consider two different corpora: one for VerbNet and the other for FrameNet. For the former, we used the same verb classification setting as (Brown et al., 2011). Sentences are drawn from the Semlink corpus (Loper et al., 2007) , which consists of the PropBanked Penn Treebank portions of the Wall Street Journal. It contains 113K verb instances, 97K of which are verbs represented in at least one VerbNet class. Semlink includes 495 verbs, whose instances are labeled with more than one class (including one single VerbNet class or none). We used all instances of the corpus for a total of 45,584 instances for 180 verb classes. 
When instances labeled with the none class are not included, the number of examples becomes 23,719.", "cite_spans": [ { "start": 209, "end": 229, "text": "(Loper et al., 2007)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "General experimental setup", "sec_num": "5.1" }, { "text": "The second corpus refers to FrameNet frame classification. The training and test data are drawn from the FrameNet 1.5 corpus 2 , which consists of 135K sentences annotated according to frame semantics (Baker et al., 1998) . We selected the subset of frames containing more than 100 sentences annotated with a verbal predicate, for a total of 62,813 sentences in 187 frames (i.e., very close to the VerbNet datasets). For both datasets, we used 70% of instances for training and 30% for testing.", "cite_spans": [ { "start": 202, "end": 222, "text": "(Baker et al., 1998)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "General experimental setup", "sec_num": "5.1" }, { "text": "Our verb (multi) classifier is designed with the one-vs-all (Rifkin and Klautau, 2004) multiclassification schema. This uses a set of binary SVM classifiers, one for each verb class (frame) i. The sentences whose verb is labeled with the class i are positive examples for the classifier i. The sentences whose verbs are compatible with the class i but evoke a different class, or are labeled with none (no current verb class applies), are added as negative examples. In the classification phase the binary classifiers are applied by (i) only considering classes that are compatible with the target verbs; and (ii) selecting the class associated with the maximum positive SVM margin. 
If all classifiers provide a negative score, the example is labeled with none.", "cite_spans": [ { "start": 60, "end": 86, "text": "(Rifkin and Klautau, 2004)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "General experimental setup", "sec_num": "5.1" }, { "text": "To learn the binary classifiers of the schema above, we coded our modified SPTK in SVM-Light-TK 3 (structural kernels in SVMLight (Joachims, 2000)) (Moschitti, 2006). The parameterization of each classifier is carried out on a held-out set (30% of the training data) and concerns the setting of the trade-off parameter (option -c) and the leaf weight (lw) (see Alg. 1), which is used to linearly scale the contribution of the leaf nodes. In contrast, the cost-factor parameter of SVM-Light-TK is set as the ratio between the number of negative and positive examples, in an attempt to obtain balanced Precision/Recall.", "cite_spans": [ { "start": 98, "end": 115, "text": "(Moschitti, 2006)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "General experimental setup", "sec_num": "5.1" }, { "text": "Regarding the SPTK setting, we used the lexical similarity \u03c3 defined in Sec. 3.1. In more detail, LSA was applied to ukWaC (Baroni et al., 2009), which is a large-scale document collection made up of 2 billion tokens. M is constructed by applying POS tagging to build rows with lemma/POS pairs (lemma::POS in brief). The contexts of such items are the columns of M and are short windows of size [\u22123, +3], centered on the items. This allows for better capturing the syntactic properties of words. The most frequent 20,000 items are selected along with their 20k contexts. The entries of M are the point-wise mutual information between them. SVD reduction is then applied to M, with a dimensionality cut of l = 250. 
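The construction of the space just described, PMI weighting of a co-occurrence matrix followed by an SVD cut, can be outlined as in this sketch; the toy interface and the common LSA choice of U_l * sqrt(S_l) as final item vectors are assumptions:

```python
import numpy as np

def lsa_vectors(counts, l):
    """Sketch of the distributional space: PMI weighting + SVD cut.

    counts: raw co-occurrence matrix (items x contexts);
    l: dimensionality cut. Returns one l-dimensional vector per item.
    """
    total = counts.sum()
    p_ij = counts / total
    p_i = p_ij.sum(axis=1, keepdims=True)   # item marginals
    p_j = p_ij.sum(axis=0, keepdims=True)   # context marginals
    with np.errstate(divide='ignore', invalid='ignore'):
        pmi = np.log(p_ij / (p_i * p_j))
    pmi[~np.isfinite(pmi)] = 0.0            # zero counts contribute 0
    U, S, Vt = np.linalg.svd(pmi, full_matrices=False)
    # scale the left singular vectors by sqrt(S) (standard LSA practice)
    return U[:, :l] * np.sqrt(S[:l])
```

In the paper's setting, the matrix would be 20,000 items by 20k contexts with l = 250; the sketch works on any counts matrix.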
For generating the CT, GRCT and LCT structures, we used the constituency trees generated by the Charniak parser (Charniak, 2000) and the dependency structures generated by the LTH syntactic parser (described in (Johansson and Nugues, 2008)).", "cite_spans": [ { "start": 119, "end": 140, "text": "(Baroni et al., 2009)", "ref_id": "BIBREF1" }, { "start": 644, "end": 660, "text": "(Joachims, 2000)", "ref_id": "BIBREF21" }, { "start": 873, "end": 889, "text": "(Charniak, 2000)", "ref_id": "BIBREF6" }, { "start": 972, "end": 1000, "text": "(Johansson and Nugues, 2008)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "General experimental setup", "sec_num": "5.1" }, { "text": "The classification performance is measured with accuracy (i.e., the percentage of correct classifications). We also derive the statistical significance of the results by using the model described in (Yeh, 2000) and implemented in (Pad\u00f3, 2006).", "cite_spans": [ { "start": 194, "end": 205, "text": "(Yeh, 2000)", "ref_id": "BIBREF46" }, { "start": 225, "end": 237, "text": "(Pad\u00f3, 2006)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "General experimental setup", "sec_num": "5.1" }, { "text": "Results. To assess the performance of our settings, we also derive a simple baseline based on the bag-of-words (BOW) model. For it, we represent an instance of a verb in a sentence using all the words of the sentence (by creating a special feature for the predicate word).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VerbNet and FrameNet Classification", "sec_num": "5.2" }, { "text": "We also used sequence kernels (SK), i.e., PTK applied to a tree composed of a fake root and only one level of sentence words. For efficiency reasons 4, we only consider the 10 words before and after the predicate, with subsequence features of length up to 5. Table 1 reports the accuracy of the different models for VerbNet classification. 
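The flat structure used by SK, a fake root dominating a single level of words in a window around the predicate, can be built as in this sketch; the bracketed-tree output format is an assumption modeled on the usual SVM-Light-TK input style:

```python
def sk_tree(tokens, pred_index, window=10):
    """Flat tree for the sequence kernel (sketch): a fake root with
    one level of leaves, covering up to `window` words on each side
    of the predicate at position `pred_index`."""
    left = max(0, pred_index - window)
    right = min(len(tokens), pred_index + window + 1)
    leaves = ''.join('(%s)' % w for w in tokens[left:right])
    return '(ROOT%s)' % leaves
```

Applying PTK to such one-level trees yields exactly the subsequence features over the word window, which is why PTK on this representation behaves as a sequence kernel.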
It should be noted that, first, SK produces a much higher accuracy than BOW, i.e., 82.08 vs. 79.08. [Table 3: VerbNet accuracy without the none class] On one hand, this is generally in contrast with standard text categorization tasks, for which n-gram models show accuracy comparable to the simpler BOW. On the other hand, it simply confirms that verb classification requires the dependency information between words (i.e., at least the sequential structure information provided by SK). Second, SK is 2.56 percent points below the state-of-the-art achieved in (Brown et al., 2011) (BR), i.e., 82.08 vs. 84.64. In contrast, STK applied to our representations (CT, GRCT and LCT) produces comparable accuracy, e.g., 84.83, confirming that syntactic representation is needed to reach the state-of-the-art.", "cite_spans": [], "ref_spans": [ { "start": 259, "end": 266, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 457, "end": 464, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "VerbNet and FrameNet Classification", "sec_num": "5.2" }, { "text": "Third, PTK, which produces more general structures, improves over BR by almost 1.5 points (a statistically significant result) when using our dependency structures GRCT and LCT. CT does not produce the same improvement since it does not allow PTK to directly compare the lexical structure (in CT, all lexemes are leaf nodes, and very large trees are needed to connect some pairs of them).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VerbNet and FrameNet Classification", "sec_num": "5.2" }, { "text": "Finally, the best model of SPTK (i.e., using LCT) improves over the best PTK (i.e., using LCT) by almost 1 point (a statistically significant result): this difference is due only to lexical similarity. 
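The comparisons in this section are often phrased as relative error reduction; the computation is simply the accuracy gain divided by the baseline's residual error. The 86.72 figure below is not stated as such in the text, but follows from BR's 84.64 plus the reported 2.08-point gain:

```python
def relative_error_reduction(baseline_acc, new_acc):
    # share of the baseline's residual error removed by the new model
    return (new_acc - baseline_acc) / (100.0 - baseline_acc)

# VerbNet: BR at 84.64 vs. best SPTK at 84.64 + 2.08 = 86.72
print(round(100 * relative_error_reduction(84.64, 86.72), 1))  # -> 13.5
```

The same formula applied to the FrameNet figures (92.63 vs. 93.78) gives roughly the 16% error reduction quoted in the conclusion.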
SPTK improves on the state-of-the-art by about 2.08 absolute percent points, which, given the high accuracy of the baseline, corresponds to 13.5% of relative error reduction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VerbNet and FrameNet Classification", "sec_num": "5.2" }, { "text": "We carried out similar experiments for frame classification. One interesting difference is that SK improves over BOW by only 0.70, i.e., 4 times less than in the VerbNet setting. This suggests that word order around the predicate is more important for deriving the VerbNet class than the FrameNet frame. Additionally, the choice between LCT and GRCT seems to make no difference for either PTK or SPTK, whereas the lexical similarity still produces a relevant improvement on PTK, i.e., 13% of relative error reduction, for an absolute accuracy of 93.78%. The latter improves over the state-of-the-art, i.e., the 92.63% derived in (Giuglea and Moschitti, 2006) by using STK on CT on 133 frames. We also carried out experiments to understand the role of the none class. Table 3 reports on the VerbNet classification without its instances. This is of course an unrealistic setting, as it would assume that the current VerbNet release already includes all senses for English verbs. In the table, we note that the overall accuracy increases considerably and the differences between models shrink. The similarities play no role anymore. This may suggest that SPTK can help in complex settings, where verb class characterization is more difficult. Another important property of SPTK models is their ability to generalize. To test this aspect, Figure 4 illustrates the learning curves of SPTK with respect to BOW and the accuracy achieved by BR (as a constant line). 
It is impressive to note that, with only 40% of the data, SPTK can reach the state-of-the-art.", "cite_spans": [ { "start": 593, "end": 622, "text": "(Giuglea and Moschitti, 2006)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 733, "end": 740, "text": "Table 3", "ref_id": null }, { "start": 1290, "end": 1298, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "VerbNet and FrameNet Classification", "sec_num": "5.2" }, { "text": "We carried out an analysis of system errors and of the induced features. These can be examined by applying the reverse engineering tool 5 proposed in (Pighin and Moschitti, 2010; Pighin and Moschitti, 2009a; Pighin and Moschitti, 2009b), which extracts the most important features for the classification model. Many mistakes are related to false positives and negatives of the none class (about 72% of the errors). This class also causes data imbalance. Most errors are also due to lack of lexical information available to the SPTK kernel: (i) in 30% of the errors, the argument heads were proper nouns for which the lexical generalization provided by the DMs was not available; and (ii) in 76% of the errors, only two or fewer argument heads are included in the extracted tree, therefore tree kernels cannot exploit enough lexical information to disambiguate verb senses. [Table 4 fragments, VerbNet class 13.5.1: (IM(VB(target))(OBJ)) (VC(VB(target))(OBJ)) (VC(VBG(target))(OBJ)) (OPRD(TO)(IM(VB(target))(OBJ))) (PMOD(VBG(target))(OBJ)) (VB(target)) (VC (VBN(target)] Additionally, ambiguity characterizes errors where the system is linguistically consistent but the learned selectional preferences are not sufficient to separate verb senses. These errors are mainly due to the lack of contextual information. While error analysis suggests that further improvement is possible (e.g. by exploiting proper nouns), the generalizations currently achieved by SPTK are rather effective. 
Tables 4 and 5 report the tree structures characterizing the most informative training examples of the two senses of the verb order, i.e., the VerbNet classes 13.5.1 (make a request for something) and 60 (give instructions to or direct somebody to do something with authority).", "cite_spans": [ { "start": 144, "end": 172, "text": "(Pighin and Moschitti, 2010;", "ref_id": "BIBREF34" }, { "start": 173, "end": 201, "text": "Pighin and Moschitti, 2009a;", "ref_id": "BIBREF32" }, { "start": 202, "end": 230, "text": "Pighin and Moschitti, 2009b)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 825, "end": 837, "text": "(VBN(target)", "ref_id": null } ], "eq_spans": [], "section": "Model Analysis and Discussion", "sec_num": "6" }, { "text": "In line with the method discussed in (Pighin and Moschitti, 2009b), these fragments are extracted as they appear in most of the support vectors selected during SVM training. As easily seen, the two classes are captured by rather different patterns. The typical accusative form with an explicit direct object emerges as characterizing the sense 13.5.1, denoting the THEME role. All fragments of the sense 60 emphasize instead the sentential complement of the verb, which in fact expresses the standard PROPOSITION role in VerbNet. Notice that tree fragments correspond to syntactic patterns. [Table 5 (CT fragments), VerbNet class 13.5.1: (VP(VB(target))(NP)) (VP(VBG(target))(NP)) (VP(VBD(target))(NP)) (VP(TO)(VP(VB(target))(NP))) (S(NP-SBJ)(VP(VBP(target))(NP))); VerbNet class 60: (VBN(target)) (VP(VBD(target))(S)) (VP(VBZ(target))(S)) (VBP(target)) (VP(VBD(target))(NP-1)(S(NP-SBJ)(VP)))] The a posteriori analysis of the learned models (i.e., the underlying support vectors) confirms very interesting grammatical generalizations, i.e., the capability of tree kernels to implicitly trigger useful linguistic inductions for complex semantic tasks. 
When SPTK is adopted, verb arguments can be lexically generalized into word classes, i.e., clusters of argument heads (e.g. commission vs. delegation, or gift vs. present). Automatic generation of such classes is an interesting direction for future research.", "cite_spans": [ { "start": 37, "end": 66, "text": "(Pighin and Moschitti, 2009b)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 882, "end": 889, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Model Analysis and Discussion", "sec_num": "6" }, { "text": "We have proposed new approaches to characterize verb classes in learning algorithms. The key idea is the use of structural representations of verbs based on syntactic dependencies and the use of structural kernels to measure the similarity between such representations. The advantage of kernel methods is that they can be directly used in some learning algorithms, e.g., SVMs, to train verb classifiers. Very interestingly, we can encode distributional lexical similarity in the similarity function acting over syntactic structures, and this allows for generalizing selectional restrictions through a sort of (supervised) syntactic and semantic co-clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "The verb classification results show a large improvement over the state-of-the-art for both VerbNet and FrameNet, with a relative error reduction of about 13.5% and 16.0%, respectively. In the future, we plan to exploit the models learned from FrameNet and VerbNet to carry out automatic mapping of verbs from one theory to the other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "http://framenet.icsi.berkeley.edu", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The average running time of the SK is much higher than that of PTK. 
When a tree is composed of only one level, PTK collapses to SK.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://danielepighin.net/cms/software/flink", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Berkeley FrameNet project", "authors": [ { "first": "Collin", "middle": [ "F" ], "last": "Baker", "suffix": "" }, { "first": "Charles", "middle": [ "J" ], "last": "Fillmore", "suffix": "" }, { "first": "John", "middle": [ "B" ], "last": "Lowe", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The wacky wide web: a collection of very large linguistically processed web-crawled corpora", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Bernardini", "suffix": "" }, { "first": "Adriano", "middle": [], "last": "Ferraresi", "suffix": "" }, { "first": "Eros", "middle": [], "last": "Zanchetta", "suffix": "" } ], "year": 2009, "venue": "LRE", "volume": "43", "issue": "3", "pages": "209--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The wacky wide web: a collection of very large linguistically processed web-crawled corpora. 
LRE, 43(3):209-226.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Combined syntactic and semantic kernels for text classification", "authors": [ { "first": "Stephan", "middle": [], "last": "Bloehdorn", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ECIR", "volume": "", "issue": "", "pages": "307--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Bloehdorn and Alessandro Moschitti. 2007a. Combined syntactic and semantic kernels for text classification. In Gianni Amati, Claudio Carpineto, and Gianni Romano, editors, Proceedings of ECIR, volume 4425 of Lecture Notes in Computer Science, pages 307-318. Springer, APR.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Structure and semantics for expressive text kernels", "authors": [ { "first": "Stephan", "middle": [], "last": "Bloehdorn", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2007, "venue": "CIKM'07: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management", "volume": "", "issue": "", "pages": "861--864", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephan Bloehdorn and Alessandro Moschitti. 2007b. Structure and semantics for expressive text kernels. In CIKM'07: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 861-864, New York, NY, USA. 
ACM.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Verbnet class assignment as a wsd task", "authors": [ { "first": "Susan Windisch", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Dmitriy", "middle": [], "last": "Dligach", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Ninth International Conference on Computational Semantics", "volume": "", "issue": "", "pages": "85--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Susan Windisch Brown, Dmitriy Dligach, and Martha Palmer. 2011. Verbnet class assignment as a wsd task. In Proceedings of the Ninth International Conference on Computational Semantics, IWCS '11, pages 85-94, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A shortest path dependency kernel for relation extraction", "authors": [ { "first": "Razvan", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT and EMNLP", "volume": "3", "issue": "", "pages": "1059--1082", "other_ids": {}, "num": null, "urls": [], "raw_text": "Razvan Bunescu and Raymond Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of HLT and EMNLP, pages 724-731, Vancouver, British Columbia, Canada, October. Nicola Cancedda, Eric Gaussier, Cyril Goutte, and Jean Michel Renders. 2003. Word sequence kernels. Journal of Machine Learning Research, 3:1059-1082.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A maximum-entropy-inspired parser", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2000, "venue": "Proceedings of NAACL'00", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. 
In Proceedings of NAACL'00.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Duffy", "suffix": "" } ], "year": 2002, "venue": "Proceedings of ACL'02", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins and Nigel Duffy. 2002. New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron. In Proceedings of ACL'02.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Latent semantic kernels", "authors": [ { "first": "Nello", "middle": [], "last": "Cristianini", "suffix": "" }, { "first": "John", "middle": [], "last": "Shawe-Taylor", "suffix": "" }, { "first": "Huma", "middle": [], "last": "Lodhi", "suffix": "" } ], "year": 2001, "venue": "Proceedings of ICML-01, 18th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "66--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nello Cristianini, John Shawe-Taylor, and Huma Lodhi. 2001. Latent semantic kernels. In Carla Brodley and Andrea Danyluk, editors, Proceedings of ICML-01, 18th International Conference on Machine Learning, pages 66-73, Williams College, US. 
Morgan Kaufmann Publishers, San Francisco, US.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Towards open-domain semantic role labeling", "authors": [ { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "" }, { "first": "Cristina", "middle": [], "last": "Giannone", "suffix": "" }, { "first": "Paolo", "middle": [], "last": "Annesi", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "237--246", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Croce, Cristina Giannone, Paolo Annesi, and Roberto Basili. 2010. Towards open-domain semantic role labeling. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 237-246, Uppsala, Sweden, July. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Structured Lexical Similarity via Convolution Kernels on Dependency Trees", "authors": [ { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2011, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Structured Lexical Similarity via Convolution Kernels on Dependency Trees. 
In Proceedings of EMNLP 2011.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Kernel Methods for Relational Learning", "authors": [ { "first": "Chad", "middle": [], "last": "Cumby", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chad Cumby and Dan Roth. 2003. Kernel Methods for Relational Learning. In Proceedings of ICML 2003.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Np bracketing by maximum entropy tagging and SVM reranking", "authors": [ { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP'04", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2004. Np bracketing by maximum entropy tagging and SVM reranking. In Proceedings of EMNLP'04.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Novel semantic features for verb sense disambiguation", "authors": [ { "first": "Dmitriy", "middle": [], "last": "Dligach", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2008, "venue": "The Association for Computer Linguistics", "volume": "", "issue": "", "pages": "29--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitriy Dligach and Martha Palmer. 2008. Novel semantic features for verb sense disambiguation. In ACL (Short Papers), pages 29-32. The Association for Computer Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A synopsis of linguistic theory 1930-1955", "authors": [ { "first": "J", "middle": [], "last": "Firth", "suffix": "" } ], "year": 1957, "venue": "Studies in Linguistic Analysis. 
Philological Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Firth. 1957. A synopsis of linguistic theory 1930-1955. In Studies in Linguistic Analysis. Philological Society, Oxford. Reprinted in Palmer, F. (ed. 1968) Selected Papers of J. R. Firth, Longman, Harlow.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Information retrieval using a singular value decomposition model of latent semantic structure", "authors": [ { "first": "G", "middle": [ "W" ], "last": "Furnas", "suffix": "" }, { "first": "S", "middle": [], "last": "Deerwester", "suffix": "" }, { "first": "S", "middle": [ "T" ], "last": "Dumais", "suffix": "" }, { "first": "T", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "R", "middle": [ "A" ], "last": "Harshman", "suffix": "" }, { "first": "L", "middle": [ "A" ], "last": "Streeter", "suffix": "" }, { "first": "K", "middle": [ "E" ], "last": "Lochbaum", "suffix": "" } ], "year": 1988, "venue": "Proc. of SIGIR '88", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. W. Furnas, S. Deerwester, S. T. Dumais, T. K. Landauer, R. A. Harshman, L. A. Streeter, and K. E. Lochbaum. 1988. Information retrieval using a singular value decomposition model of latent semantic structure. In Proc. of SIGIR '88, New York, USA.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Automatic labeling of semantic roles", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "3", "pages": "496--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. 
Computational Linguistics, 28(3):496-530.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The necessity of parsing for predicate argument recognition", "authors": [ { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Conference of the Association for Computational Linguistics (ACL-02)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Gildea and Martha Palmer. 2002. The necessity of parsing for predicate argument recognition. In Proceedings of the 40th Annual Conference of the Association for Computational Linguistics (ACL-02), Philadelphia, PA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Semantic role labeling via framenet, verbnet and propbank", "authors": [ { "first": "Ana-Maria", "middle": [], "last": "Giuglea", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "929--936", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ana-Maria Giuglea and Alessandro Moschitti. 2006. Semantic role labeling via framenet, verbnet and propbank. In Proceedings of ACL, pages 929-936, Sydney, Australia, July.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Domain kernels for word sense disambiguation", "authors": [ { "first": "Alfio", "middle": [], "last": "Gliozzo", "suffix": "" }, { "first": "Claudio", "middle": [], "last": "Giuliano", "suffix": "" }, { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL'05", "volume": "", "issue": "", "pages": "403--410", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alfio Gliozzo, Claudio Giuliano, and Carlo Strapparava. 2005. Domain kernels for word sense disambiguation. 
In Proceedings of ACL'05, pages 403-410.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Calculating the singular values and pseudo-inverse of a matrix", "authors": [ { "first": "G", "middle": [], "last": "Golub", "suffix": "" }, { "first": "W", "middle": [], "last": "Kahan", "suffix": "" } ], "year": 1965, "venue": "Journal of the Society for Industrial and Applied Mathematics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Golub and W. Kahan. 1965. Calculating the singular values and pseudo-inverse of a matrix. Journal of the Society for Industrial and Applied Mathematics: Series B, Numerical Analysis.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Estimating the generalization performance of a SVM efficiently", "authors": [ { "first": "T", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2000, "venue": "Proceedings of ICML'00", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Joachims. 2000. Estimating the generalization performance of a SVM efficiently. In Proceedings of ICML'00.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Dependency-based syntactic-semantic analysis with PropBank and NomBank", "authors": [ { "first": "Richard", "middle": [], "last": "Johansson", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Nugues", "suffix": "" } ], "year": 2008, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "183--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Johansson and Pierre Nugues. 2008. Dependency-based syntactic-semantic analysis with PropBank and NomBank. 
In Proceedings of CoNLL 2008, pages 183-187.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Fast methods for kernel-based text analysis", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL'03", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taku Kudo and Yuji Matsumoto. 2003. Fast methods for kernel-based text analysis. In Proceedings of ACL'03.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Boosting-based parse reranking with subtree features", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Hideki", "middle": [], "last": "Isozaki", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL'05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taku Kudo, Jun Suzuki, and Hideki Isozaki. 2005. Boosting-based parse reranking with subtree features. In Proceedings of ACL'05.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A solution to plato's problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge", "authors": [ { "first": "Tom", "middle": [], "last": "Landauer", "suffix": "" }, { "first": "Sue", "middle": [], "last": "Dumais", "suffix": "" } ], "year": 1997, "venue": "Psychological Review", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Landauer and Sue Dumais. 1997. A solution to plato's problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge. 
Psychological Review, 104.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Automatic retrieval and clustering of similar words", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL, Montreal, Canada.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Combining lexical resources: Mapping between propbank and verbnet", "authors": [ { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" }, { "first": "Szu-Ting", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 7th International Workshop on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Loper, Szu-ting Yi, and Martha Palmer. 2007. Combining lexical resources: Mapping between propbank and verbnet. In Proceedings of the 7th International Workshop on Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Syntactic/semantic structures for textual entailment recognition", "authors": [ { "first": "Yashar", "middle": [], "last": "Mehdad", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Fabio", "middle": [ "Massimo" ], "last": "Zanzotto", "suffix": "" } ], "year": 2010, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "1020--1028", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yashar Mehdad, Alessandro Moschitti, and Fabio Massimo Zanzotto. 2010. Syntactic/semantic structures for textual entailment recognition. 
In HLT-NAACL, pages 1020-1028.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Efficient convolution kernels for dependency and constituent syntactic trees", "authors": [ { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2006, "venue": "Proceedings of ECML'06", "volume": "", "issue": "", "pages": "318--329", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Moschitti. 2006. Efficient convolution kernels for dependency and constituent syntactic trees. In Proceedings of ECML'06, pages 318-329.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "User's guide to sigf: Significance testing by approximate randomisation", "authors": [ { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Pad\u00f3. 2006. User's guide to sigf: Significance testing by approximate randomisation.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "ISP: Learning inferential selectional preferences", "authors": [ { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Bhagat", "suffix": "" }, { "first": "Bonaventura", "middle": [], "last": "Coppola", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Chklovski", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2007, "venue": "Proceedings of HLT/NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Pantel, Rahul Bhagat, Bonaventura Coppola, Timothy Chklovski, and Eduard Hovy. 2007. ISP: Learning inferential selectional preferences.
In Proceedings of HLT/NAACL 2007.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Efficient linearization of tree kernel functions", "authors": [ { "first": "Daniele", "middle": [], "last": "Pighin", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2009, "venue": "Proceedings of CoNLL'09", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniele Pighin and Alessandro Moschitti. 2009a. Efficient linearization of tree kernel functions. In Proceedings of CoNLL'09.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Reverse engineering of tree kernel feature spaces", "authors": [ { "first": "Daniele", "middle": [], "last": "Pighin", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "111--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniele Pighin and Alessandro Moschitti. 2009b. Reverse engineering of tree kernel feature spaces. In Proceedings of EMNLP, pages 111-120, Singapore, August. Association for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "On reverse feature engineering of syntactic tree kernels", "authors": [ { "first": "Daniele", "middle": [], "last": "Pighin", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning, CoNLL '10", "volume": "", "issue": "", "pages": "223--233", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniele Pighin and Alessandro Moschitti. 2010. On reverse feature engineering of syntactic tree kernels. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, CoNLL '10, pages 223-233, Stroudsburg, PA, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Support vector learning for semantic argument classification", "authors": [ { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Kadri", "middle": [], "last": "Hacioglu", "suffix": "" }, { "first": "Valeri", "middle": [], "last": "Krugler", "suffix": "" }, { "first": "Wayne", "middle": [], "last": "Ward", "suffix": "" }, { "first": "James", "middle": [ "H" ], "last": "Martin", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2005, "venue": "Machine Learning Journal", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Kadri Hacioglu, Valeri Krugler, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2005. Support vector learning for semantic argument classification. Machine Learning Journal.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "In defense of one-vs-all classification", "authors": [ { "first": "Ryan", "middle": [], "last": "Rifkin", "suffix": "" }, { "first": "Aldebaro", "middle": [], "last": "Klautau", "suffix": "" } ], "year": 2004, "venue": "Journal of Machine Learning Research", "volume": "5", "issue": "", "pages": "101--141", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Rifkin and Aldebaro Klautau. 2004. In defense of one-vs-all classification. Journal of Machine Learning Research, 5:101-141.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "The Word-Space Model", "authors": [ { "first": "Magnus", "middle": [], "last": "Sahlgren", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Magnus Sahlgren. 2006. The Word-Space Model. Ph.D.
thesis, Stockholm University.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "VerbNet: A broad-coverage, comprehensive verb lexicon", "authors": [ { "first": "Karin Kipper", "middle": [], "last": "Schuler", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karin Kipper Schuler. 2005. VerbNet: A broad-coverage, comprehensive verb lexicon. Ph.D. thesis, University of Pennsylvania.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Automatic word sense discrimination", "authors": [ { "first": "Hinrich", "middle": [], "last": "Schutze", "suffix": "" } ], "year": 1998, "venue": "Journal of Computational Linguistics", "volume": "24", "issue": "", "pages": "97--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hinrich Schutze. 1998. Automatic word sense discrimination. Journal of Computational Linguistics, 24:97-123.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Kernel Methods for Pattern Analysis", "authors": [ { "first": "John", "middle": [], "last": "Shawe-Taylor", "suffix": "" }, { "first": "Nello", "middle": [], "last": "Cristianini", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Shawe-Taylor and Nello Cristianini. 2004. Kernel Methods for Pattern Analysis.
Cambridge University Press.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Using LTAG Based Features in Parse Reranking", "authors": [ { "first": "Libin", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Anoop", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" } ], "year": 2003, "venue": "Empirical Methods for Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "89--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "Libin Shen, Anoop Sarkar, and Aravind K. Joshi. 2003. Using LTAG Based Features in Parse Reranking. In Empirical Methods for Natural Language Processing (EMNLP), pages 89-96, Sapporo, Japan.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Porting statistical parsers with data-defined kernels", "authors": [ { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" }, { "first": "James", "middle": [], "last": "Henderson", "suffix": "" } ], "year": 2006, "venue": "Proceedings of CoNLL-X", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Titov and James Henderson. 2006. Porting statistical parsers with data-defined kernels. In Proceedings of CoNLL-X.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "The Leaf Path Projection View of Parse Trees: Exploring String Kernels for HPSG Parse Selection", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Penka", "middle": [], "last": "Markova", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2004, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Penka Markova, and Christopher Manning. 2004. The Leaf Path Projection View of Parse Trees: Exploring String Kernels for HPSG Parse Selection.
In Proceedings of EMNLP 2004.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "From frequency to meaning: Vector space models of semantics", "authors": [ { "first": "Peter", "middle": [ "D" ], "last": "Turney", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" } ], "year": 2010, "venue": "Journal of Artificial Intelligence Research", "volume": "37", "issue": "", "pages": "141--188", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141-188.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Philosophical Investigations", "authors": [ { "first": "Ludwig", "middle": [], "last": "Wittgenstein", "suffix": "" } ], "year": 1953, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ludwig Wittgenstein. 1953. Philosophical Investigations. Blackwells, Oxford.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "More accurate tests for the statistical significance of result differences", "authors": [ { "first": "Alexander", "middle": [ "S" ], "last": "Yeh", "suffix": "" } ], "year": 2000, "venue": "COLING", "volume": "", "issue": "", "pages": "947--953", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander S. Yeh. 2000. More accurate tests for the statistical significance of result differences.
In COLING, pages 947-953.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Improving semantic role classification with selectional preferences", "authors": [ { "first": "Be\u00f1at", "middle": [], "last": "Zapirain", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10", "volume": "", "issue": "", "pages": "373--376", "other_ids": {}, "num": null, "urls": [], "raw_text": "Be\u00f1at Zapirain, Eneko Agirre, Llu\u00eds M\u00e0rquez, and Mihai Surdeanu. 2010. Improving semantic role classification with selectional preferences. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 373-376, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Kernel methods for relation extraction", "authors": [ { "first": "Dmitry", "middle": [], "last": "Zelenko", "suffix": "" }, { "first": "Chinatsu", "middle": [], "last": "Aone", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Richardella", "suffix": "" } ], "year": 2002, "venue": "Proceedings of EMNLP-ACL", "volume": "", "issue": "", "pages": "181--201", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2002. Kernel methods for relation extraction.
In Proceedings of EMNLP-ACL, pages 181-201.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Exploring Syntactic Features for Relation Extraction using a Convolution tree kernel", "authors": [ { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Su", "suffix": "" } ], "year": 2006, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Min Zhang, Jie Zhang, and Jian Su. 2006. Exploring Syntactic Features for Relation Extraction using a Convolution tree kernel. In Proceedings of NAACL.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Representation of verbs according to the Grammatical Relation Centered Tree (GRCT)" }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Representation of verbs according to the Lexical Centered Tree (LCT)" }, "FIGREF3": { "num": null, "uris": null, "type_str": "figure", "text": "Learning curves: VerbNet accuracy with the none Class" }, "FIGREF4": { "num": null, "uris": null, "type_str": "figure", "text": ")) (PRP(TO)(IM(VB(target))(OBJ))) (IM(VB(target))(OBJ)(ADV(IN)(PMOD))) (OPRD(TO)(IM(VB(target))(OBJ)(ADV(IN)(PMOD)))) VerbNet class 60 (VC(VB(target))(OBJ)) (NMOD(VBG(target))(OPRD)) (VC(VBN(target))(OPRD)) (NMOD(VBN(target))(OPRD)) (PMOD(VBG(target))(OBJ)) (ROOT(SBJ)(VBD(target))(OBJ)(P(,))) (VC(VB(target))(OPRD)) (ROOT(SBJ)(VBZ(target))(OBJ)(P(,))) (NMOD(SBJ(WDT))(VBZ(target))(OPRD)) (NMOD(SBJ)(VBZ(target))(OPRD(SBJ)(TO)(IM)))" }, "TABREF1": { "type_str": "table", "content": "
       STK          PTK          SPTK
       lw  Acc.     lw  Acc.     lw   Acc.
GRCT   -   92.67%   6   92.97%   0.4  93.54%
LCT    -   90.28%   6   92.99%   0.3  93.78%
BOW        91.13%
SK         91.84%
", "text": "VerbNet accuracy with the none class", "num": null, "html": null }, "TABREF2": { "type_str": "table", "content": "", "text": "FrameNet accuracy without the none class", "num": null, "html": null }, "TABREF4": { "type_str": "table", "content": "
", "text": "GRCT fragments", "num": null, "html": null } } } }