{ "paper_id": "D10-1030", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:52:50.190735Z" }, "title": "Joint Inference for Bilingual Semantic Role Labeling", "authors": [ { "first": "Tao", "middle": [], "last": "Zhuang", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": {} }, "email": "tzhuang@nlpr.ia.ac.cn" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "", "affiliation": { "laboratory": "National Laboratory of Pattern Recognition", "institution": "Chinese Academy of Sciences", "location": {} }, "email": "cqzong@nlpr.ia.ac.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We show that jointly performing semantic role labeling (SRL) on bitext can improve SRL results on both sides. In our approach, we first use monolingual SRL systems to produce argument candidates for predicates in bitext. Then, we simultaneously generate SRL results for the two sides of bitext using our joint inference model. Our model prefers the bilingual SRL result that is not only reasonable on each side of the bitext, but also has more consistent argument structures between the two sides. To evaluate the consistency between two argument structures, we also formulate a log-linear model to compute the probability of aligning two arguments. We have experimented with our model on Chinese-English parallel PropBank data. Using our joint inference model, the F1 scores of SRL results on Chinese and English text reach 79.53% and 77.87%, respectively, which are 1.52 and 1.74 points higher than the results of the baseline monolingual SRL combination systems.", "pdf_parse": { "paper_id": "D10-1030", "_pdf_hash": "", "abstract": [ { "text": "We show that jointly performing semantic role labeling (SRL) on bitext can improve SRL results on both sides. 
In our approach, we first use monolingual SRL systems to produce argument candidates for predicates in bitext. Then, we simultaneously generate SRL results for the two sides of bitext using our joint inference model. Our model prefers the bilingual SRL result that is not only reasonable on each side of the bitext, but also has more consistent argument structures between the two sides. To evaluate the consistency between two argument structures, we also formulate a log-linear model to compute the probability of aligning two arguments. We have experimented with our model on Chinese-English parallel PropBank data. Using our joint inference model, the F1 scores of SRL results on Chinese and English text reach 79.53% and 77.87%, respectively, which are 1.52 and 1.74 points higher than the results of the baseline monolingual SRL combination systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In recent years, there has been increasing interest in SRL in several languages. However, little research has been done on how to effectively perform SRL on bitext, which has important applications including machine translation (Wu and Fung, 2009) . A conventional way to perform SRL on bitext is to perform SRL on each side of the bitext separately, as has been done by Fung et al. (2007) on Chinese-English bitext. However, it is very difficult to obtain good SRL results on both sides of bitext in this way. The reason is that even state-of-the-art SRL systems do not have very high accuracy on either English text (M\u00e0rquez et al., 2008; Pradhan et al., 2008; Punyakanok et al., 2008; Toutanova et al., 2008) or Chinese text (Che et al., 2008; Xue, 2008; Li et al., 2009; Sun et al., 2009) .", "cite_spans": [ { "start": 231, "end": 250, "text": "(Wu and Fung, 2009)", "ref_id": "BIBREF23" }, { "start": 370, "end": 388, "text": "Fung et al. 
(2007)", "ref_id": "BIBREF5" }, { "start": 618, "end": 640, "text": "(M\u00e0rquez et al., 2008;", "ref_id": "BIBREF10" }, { "start": 641, "end": 662, "text": "Pradhan et al., 2008;", "ref_id": null }, { "start": 663, "end": 687, "text": "Punyakanok et al., 2008;", "ref_id": "BIBREF18" }, { "start": 688, "end": 711, "text": "Toutanova et al., 2008)", "ref_id": "BIBREF22" }, { "start": 731, "end": 749, "text": "(Che et al., 2008;", "ref_id": "BIBREF4" }, { "start": 750, "end": 760, "text": "Xue, 2008;", "ref_id": "BIBREF24" }, { "start": 761, "end": 777, "text": "Li et al., 2009;", "ref_id": "BIBREF9" }, { "start": 778, "end": 795, "text": "Sun et al., 2009)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "On the other hand, the semantic equivalence between the two sides of bitext means that they should have consistent predicate-argument structures. This bilingual argument structure consistency can guide us to find better SRL results. For example, in Figure 1(a) , the argument structure consistency can guide us to choose the correct SRL result on the Chinese side. Consistency between two argument structures is reflected by sound argument alignments between them, as shown in Figure 1 (b) . Previous research has shown that bilingual constraints can be very helpful for parsing (Burkett and Klein, 2008; Huang et al., 2008) . 
In this paper, we show that the bilingual argument structure consistency can be leveraged to substantially improve SRL results on both sides of bitext.", "cite_spans": [ { "start": 476, "end": 479, "text": "(b)", "ref_id": null }, { "start": 569, "end": 594, "text": "(Burkett and Klein, 2008;", "ref_id": "BIBREF2" }, { "start": 595, "end": 614, "text": "Huang et al., 2008)", "ref_id": null } ], "ref_spans": [ { "start": 245, "end": 256, "text": "Figure 1(a)", "ref_id": "FIGREF0" }, { "start": 467, "end": 475, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Formally, we present a joint inference model to perform bilingual SRL. Using automatic word alignment on bitext, we first identify a pair of predicates that align with each other. We then use monolingual SRL systems to produce argument candidates for each predicate. Then, our model jointly generates SRL results for both predicates from their argument candidates, using an integer linear programming (ILP) technique. An overview of our approach is shown in Figure 2 .", "cite_spans": [ { "start": 396, "end": 401, "text": "(ILP)", "ref_id": null } ], "ref_spans": [ { "start": 453, "end": 461, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In (a), the SRL results are generated by state-of-the-art monolingual SRL systems. The English SRL result is correct, but it is more difficult to get a correct SRL result on the Chinese side, because the AM-TMP argument embeds into a discontinuous A1 argument. The Chinese SRL result in the row marked by 'R1' is correct and consistent with the result on the English side, whereas the result in the row marked by 'R2' is incorrect and inconsistent with the result on the English side, with the circles showing their inconsistency. The argument structure consistency can guide us to choose the correct Chinese SRL result. Our joint inference model consists of three components: the source side, the target side, and the argument alignment between the two sides. These three components correspond to three interrelated factors: the quality of the SRL result on the source side, the quality of the SRL result on the target side, and the argument structure consistency between the SRL results on the two sides. To evaluate the consistency between the two argument structures in our joint inference model, we formulate a log-linear model to compute the probability of aligning two arguments. Experiments on the Chinese-English parallel PropBank show that our model significantly outperforms monolingual SRL combination systems on both the Chinese and English sides. The rest of this paper is organized as follows: Section 2 introduces related work. Section 3 describes how we generate SRL candidates on each side of bitext. Section 4 presents our joint inference model. Section 5 presents our experiments. Section 6 concludes our work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Some existing work on monolingual SRL combination is related to our work. Punyakanok et al. (2004; 2008 ) formulated an ILP model for SRL. Koomen et al. (2005) combined several SRL outputs using an ILP method, and Pradhan et al. (2005) proposed combination strategies that are not based on ILP. Surdeanu et al. (2007) conducted a comprehensive study of a variety of combination strategies. Zhuang and Zong (2010) proposed a minimum error weighting combination strategy for Chinese SRL combination.", "cite_spans": [ { "start": 74, "end": 98, "text": "Punyakanok et al. (2004;", "ref_id": "BIBREF19" }, { "start": 99, "end": 103, "text": "2008", "ref_id": "BIBREF24" }, { "start": 139, "end": 159, "text": "Koomen et al. (2005)", "ref_id": "BIBREF8" }, { "start": 211, "end": 232, "text": "Pradhan et al. (2005)", "ref_id": "BIBREF16" }, { "start": 299, "end": 321, "text": "Surdeanu et al. 
(2007)", "ref_id": "BIBREF21" }, { "start": 386, "end": 408, "text": "Zhuang and Zong (2010)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Research on SRL utilizing parallel corpora is also related to our work. Pad\u00f3 and Lapata (2009) did research on cross-lingual annotation projection on an English-German parallel corpus. They performed SRL only on the English side, and then mapped the English SRL result to the German side. Fung et al. (2007) did pioneering work on studying argument alignment on the Chinese-English parallel PropBank. They performed SRL on the Chinese and English sides separately. Then, given the SRL results on both sides, they automatically induced the argument alignment between the two sides.", "cite_spans": [ { "start": 71, "end": 93, "text": "Pad\u00f3 and Lapata (2009)", "ref_id": "BIBREF13" }, { "start": 281, "end": 299, "text": "Fung et al. (2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The major difference between our work and all existing research is that our model performs SRL inference on the two sides of bitext simultaneously. In our model, we jointly consider three interrelated factors: the SRL result on the source side, the SRL result on the target side, and the argument alignment between them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As shown in Figure 2 , we need to use a monolingual SRL system to generate candidates for our joint inference model. We have implemented a monolingual SRL system that utilizes full phrase-structure parse trees to perform SRL. In this system, the whole SRL process comprises three stages: pruning, argument identification, and argument classification. In the pruning stage, the heuristic pruning method in (Xue, 2008) is employed. 
In the argument identification stage, a number of argument locations are identified in a sentence. In the argument classification stage, each location identified in the previous stage is assigned a semantic role label. A maximum entropy classifier is employed for both the argument identification and classification tasks, and Zhang Le's MaxEnt toolkit 1 is used for the implementation.", "cite_spans": [ { "start": 411, "end": 422, "text": "(Xue, 2008)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Monolingual SRL System", "sec_num": "3.1" }, { "text": "We use the monolingual SRL system described above for both the Chinese and English SRL tasks. For the Chinese SRL task, the features used in this paper are the same as those used in (Xue, 2008) . For the English SRL task, the features used are the same as those used in (Pradhan et al., 2008) .", "cite_spans": [ { "start": 180, "end": 191, "text": "(Xue, 2008)", "ref_id": "BIBREF24" }, { "start": 270, "end": 292, "text": "(Pradhan et al., 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Monolingual SRL System", "sec_num": "3.1" }, { "text": "The maximum entropy classifier in our monolingual SRL system can output classification probabilities. We use the classification probability from the argument classification stage as an argument's probability. As illustrated in Figure 3 , in an individual system's output, each argument has three attributes: its location in the sentence, loc, represented by the numbers of its first word and last word; its semantic role label l; and its probability p.", "cite_spans": [], "ref_spans": [ { "start": 225, "end": 233, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Output of the Monolingual SRL System", "sec_num": "3.2" }, { "text": "So each argument output by a system is a triple (loc, l, p). For example, the A0 argument in Figure 3 is ((0, 2), A0, 0.94). 
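The triple representation and the merging-by-averaging step of Section 3.3 can be sketched as follows; this is a minimal illustration, and the function name and sample numbers are ours, not from the paper's implementation:

```python
from collections import defaultdict

# A candidate is a (loc, label, prob) triple, e.g. ((0, 2), 'A0', 0.94).
# Candidates from different system outputs that share the same loc and
# label are merged into one by averaging their probabilities.
def merge_candidates(outputs):
    # outputs: one candidate list per SRL output
    groups = defaultdict(list)
    for candidates in outputs:
        for loc, label, prob in candidates:
            groups[(loc, label)].append(prob)
    return [(loc, label, sum(ps) / len(ps))
            for (loc, label), ps in groups.items()]

merged = merge_candidates([
    [((0, 2), 'A0', 0.94), ((4, 6), 'A1', 0.80)],
    [((0, 2), 'A0', 0.90)],
])
# ((0, 2), 'A0') occurs twice, so its probability is the average of 0.94 and 0.90
```

For a merged candidate, the averaged value then serves as the probability of assigning the label to the location, as described in Section 3.3.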
Because these outputs are to be combined, we call such a triple a candidate. ", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 104, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Output of the Monolingual SRL System", "sec_num": "3.2" }, { "text": "To generate candidates for joint inference, we need to have multiple SRL results on each side of bitext. Therefore, for both the Chinese and English SRL systems, we use the 3-best parse trees of the Berkeley parser (Petrov and Klein, 2007) and the 1-best parse trees of the Bikel parser (Bikel, 2004) and the Stanford parser (Klein and Manning, 2003) as inputs. All three parsers are multilingual parsers. The second and third best parse trees of the Berkeley parser are used because of their good quality. Therefore, each monolingual SRL system produces 5 different outputs.", "cite_spans": [ { "start": 207, "end": 231, "text": "(Petrov and Klein, 2007)", "ref_id": "BIBREF15" }, { "start": 271, "end": 284, "text": "(Bikel, 2004)", "ref_id": "BIBREF0" }, { "start": 305, "end": 330, "text": "(Klein and Manning, 2003)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Generating and Merging Candidates", "sec_num": "3.3" }, { "text": "Candidates from different outputs may have the same loc and l but different p. So we merge all candidates with the same loc and l into one by averaging their probabilities. For a merged candidate (loc, l, p), we say that p is the probability of assigning l to loc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generating and Merging Candidates", "sec_num": "3.3" }, { "text": "Our model can be conceptually decomposed into three components: the source side, the target side, and the argument alignment. 
The objective function of our joint inference model is the weighted sum of three sub-objectives:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Inference Model", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\max\\; O_s + \\lambda_1 O_t + \\lambda_2 O_a", "eq_num": "(1)" } ], "section": "Joint Inference Model", "sec_num": "4" }, { "text": "where $O_s$ and $O_t$ represent the quality of the SRL results on the source and target sides, $O_a$ represents the soundness of the argument alignment between the SRL results on the two sides, and $\\lambda_1, \\lambda_2$ are positive weights corresponding to the importance of $O_t$ and $O_a$ respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Inference Model", "sec_num": "4" }, { "text": "The source side component aims to improve the SRL result on the source side. This is equivalent to a monolingual SRL combination problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "For convenience, we denote the whole semantic role label set for the source language as $\\{l^s_1, l^s_2, \\ldots, l^s_{L_s}\\}$, in which $l^s_1 \\sim l^s_6$ stand for the key argument labels A0 $\\sim$ A5 respectively. Suppose there are $N_s$ different locations, denoted as $loc^s_1, \\ldots, loc^s_{N_s}$, among all candidates on the source side. The probability of assigning $l^s_j$ to $loc^s_i$ is $p^s_{ij}$. 
An indicator variable $x_{ij}$ is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "$x_{ij} = [loc^s_i \\text{ is assigned label } l^s_j]$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "Then the source side sub-objective $O_s$ in equation (1) is the sum of the arguments' probabilities on the source side:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "$O_s = \\sum_{i=1}^{N_s} \\sum_{j=1}^{L_s} (p^s_{ij} - T_s)\\, x_{ij} \\quad (2)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "where $T_s$ is a bias to prevent including too many candidates in the solution (Surdeanu et al., 2007) . We consider the following two linguistically motivated constraints:", "cite_spans": [ { "start": 73, "end": 96, "text": "(Surdeanu et al., 2007)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "1. No duplication: There is no duplication for the key arguments A0 $\\sim$ A5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "2. No overlapping: Arguments cannot overlap with each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "In (Punyakanok et al., 2004) , several more constraints are considered. According to (Surdeanu et al., 2007) , however, no significant performance improvement can be obtained by considering more constraints than the two above. 
So we do not consider other constraints.", "cite_spans": [ { "start": 3, "end": 28, "text": "(Punyakanok et al., 2004)", "ref_id": "BIBREF19" }, { "start": 85, "end": 108, "text": "(Surdeanu et al., 2007)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "The inequalities in (3) ensure that each $loc^s_i$ is assigned at most one label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\forall 1 \\leq i \\leq N_s : \\sum_{j=1}^{L_s} x_{ij} \\leq 1", "eq_num": "(3)" } ], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "The inequalities in (4) enforce the No duplication constraint.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\forall 1 \\leq j \\leq 6 : \\sum_{i=1}^{N_s} x_{ij} \\leq 1", "eq_num": "(4)" } ], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "For any source side location $loc^s_i$, let $C_i$ denote the index set of the locations that overlap with it. Then the No overlapping constraint means that if $loc^s_i$ is assigned a label, i.e., $\\sum_{j=1}^{L_s} x_{ij} = 1$, then for any $u \\in C_i$, $loc^s_u$ cannot be assigned any label, i.e., $\\sum_{j=1}^{L_s} x_{uj} = 0$. A common technique in ILP modeling to form such a constraint is to use a sufficiently large auxiliary constant $M$. 
The constraint is then formulated as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "$\\forall 1 \\leq i \\leq N_s : \\sum_{u \\in C_i} \\sum_{j=1}^{L_s} x_{uj} \\leq (1 - \\sum_{j=1}^{L_s} x_{ij})\\, M \\quad (5)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "In this case, $M$ only needs to be larger than the number of candidates to be combined. In this paper, $M = 500$ is large enough.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Source Side Component", "sec_num": "4.1.1" }, { "text": "In principle, the target side component of our joint inference model is the same as the source side component.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Side Component", "sec_num": "4.1.2" }, { "text": "The whole semantic role label set for the target language is denoted by $\\{l^t_1, l^t_2, \\ldots, l^t_{L_t}\\}$, in which $l^t_1 \\sim l^t_6$ stand for the key argument labels A0 $\\sim$ A5 respectively. There are $N_t$ different locations, denoted as $loc^t_1, \\ldots, loc^t_{N_t}$, among all candidates on the target side. The probability of assigning $l^t_j$ to $loc^t_k$ is $p^t_{kj}$. 
An indicator variable $y_{kj}$ is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Side Component", "sec_num": "4.1.2" }, { "text": "$y_{kj} = [loc^t_k \\text{ is assigned label } l^t_j]$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Side Component", "sec_num": "4.1.2" }, { "text": "Then the target side sub-objective $O_t$ in equation (1) is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Side Component", "sec_num": "4.1.2" }, { "text": "$O_t = \\sum_{k=1}^{N_t} \\sum_{j=1}^{L_t} (p^t_{kj} - T_t)\\, y_{kj} \\quad (6)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Side Component", "sec_num": "4.1.2" }, { "text": "The constraints on the target side are as follows: Each $loc^t_k$ is assigned at most one label:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Side Component", "sec_num": "4.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\forall 1 \\leq k \\leq N_t : \\sum_{j=1}^{L_t} y_{kj} \\leq 1", "eq_num": "(7)" } ], "section": "Target Side Component", "sec_num": "4.1.2" }, { "text": "The No duplication constraint:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Side Component", "sec_num": "4.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\forall 1 \\leq j \\leq 6 : \\sum_{k=1}^{N_t} y_{kj} \\leq 1", "eq_num": "(8)" } ], "section": "Target Side Component", "sec_num": "4.1.2" }, { "text": "The No overlapping constraint:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Side Component", "sec_num": "4.1.2" }, { "text": "$\\forall 1 \\leq k \\leq N_t : \\sum_{v \\in C_k} \\sum_{j=1}^{L_t} y_{vj} \\leq (1 - \\sum_{j=1}^{L_t} y_{kj})\\, M \\quad (9)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Side Component", "sec_num": "4.1.2" }, { "text": "In (9), $C_k$ denotes the index set of 
the locations that overlap with $loc^t_k$, and the constant $M$ is set to 500 in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Target Side Component", "sec_num": "4.1.2" }, { "text": "The argument alignment component is the core of our joint inference model. It gives preference to bilingual SRL results that have more consistent argument structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment", "sec_num": "4.2" }, { "text": "For a source side argument $arg^s_i = (loc^s_i, l^s)$ and a target side argument $arg^t_k = (loc^t_k, l^t)$, let $z_{ik}$ be the following indicator variable:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment", "sec_num": "4.2" }, { "text": "$z_{ik} = [arg^s_i \\text{ aligns with } arg^t_k]$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment", "sec_num": "4.2" }, { "text": "We use $p^a_{ik}$ to represent the probability that $arg^s_i$ and $arg^t_k$ align with each other, i.e., $p^a_{ik} = P(z_{ik} = 1)$. We call $p^a_{ik}$ the argument alignment probability between $arg^s_i$ and $arg^t_k$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment", "sec_num": "4.2" }, { "text": "We use a log-linear model to compute the argument alignment probability $p^a_{ik}$ between $arg^s_i$ and $arg^t_k$. Let $(s, t)$ denote a bilingual sentence pair and $wa$ denote the word alignment on $(s, t)$. Our log-linear model defines a distribution on $z_{ik}$ given the tuple $tup = (arg^s_i, arg^t_k, wa, s, t)$:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment Probability Model", "sec_num": "4.2.1" }, { "text": "$P(z_{ik} | tup) \\propto \\exp(w^T \\phi(tup))$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment Probability Model", "sec_num": "4.2.1" }, { "text": "where $\\phi(tup)$ is the feature vector. With this model, $p^a_{ik}$ can be computed as $p^a_{ik} = P(z_{ik} = 1 | tup)$. 
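For a binary variable, this log-linear model reduces to logistic regression over the feature vector: the alignment probability is the sigmoid of the weighted feature score. A minimal sketch follows; the weights and feature values are made up for illustration, not taken from the trained model:

```python
import math

# P(z_ik = 1 | tup) = sigmoid(w . phi(tup)) for a binary log-linear model.
def alignment_probability(w, phi):
    score = sum(wi * fi for wi, fi in zip(w, phi))
    return 1.0 / (1.0 + math.exp(-score))

# phi(tup), illustrative ordering: [word-alignment overlap (real-valued),
# head-word alignment (binary), label-pair indicator, predicate-pair indicator]
phi = [0.8, 1.0, 1.0, 0.0]
w = [2.0, 1.5, 0.5, 1.0]
p = alignment_probability(w, phi)   # strong overlap -> probability near 1
```

With a zero feature vector the score is 0 and the probability is exactly 0.5, which is the expected behavior of a logistic model with no evidence.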
In order to study the argument alignment in the corpus and to provide training data for our log-linear model, we have manually aligned the arguments in 60 files (chtb 0121.fid to chtb 0180.fid) of the Chinese-English parallel PropBank. On this data set, we get the argument alignment matrix shown in Table 1 . Each entry in Table 1 is the number of times one type of Chinese argument aligns with one type of English argument. AM * stands for all adjunct types, like AM-TMP, AM-LOC, etc., and NUL means that an argument on one side cannot be aligned with any argument on the other side. For example, the number 46 in the A0 row and NUL column means that a Chinese A0 argument could not be aligned with any argument on the English side 46 times in our manually aligned corpus.", "cite_spans": [], "ref_spans": [ { "start": 406, "end": 413, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Argument Alignment Probability Model", "sec_num": "4.2.1" }, { "text": "We use the following features in our model. Word alignment feature: If there are many word-to-word alignments between $arg^s_i$ and $arg^t_k$, then it is very probable that $arg^s_i$ and $arg^t_k$ align with each other. We adopt the method used in (Pad\u00f3 and Lapata, 2009) to measure the word-to-word alignments between $arg^s_i$ and $arg^t_k$. The word alignment feature is defined in the same way as the word alignment-based word overlap in (Pad\u00f3 and Lapata, 2009) . Note that this is a real-valued feature. Head word alignment feature: The head word of an argument is usually more representative than the other words. So we use whether the head words of $arg^s_i$ and $arg^t_k$ align with each other as a binary feature. The use of this feature is inspired by the work in (Burkett and Klein, 2008) . Semantic role labels of the two arguments: From Table 1, we can see that the semantic role labels of two arguments are a good indicator of whether they should align with each other. 
For example, a Chinese A0 argument aligns with an English A0 argument most of the time, and never aligns with an English AM * argument in Table 1 . Therefore, the semantic role labels of $arg^s_i$ and $arg^t_k$ are used as a feature. Predicate verb pair: Different predicate pairs have different argument alignment patterns. Let us take the Chinese predicate zengzhang and the English predicate grow as an example. The argument alignment matrix for all instances of the Chinese-English predicate pair (zengzhang, grow) in our manually aligned corpus is shown in Table 2. The rows are Chinese labels and the columns are English labels (A0, A1, A2, AM *, NUL): A0: 0, 16, 0, 0, 0; A1: 0, 0, 12, 0, 0; AM *: 0, 0, 4, 7, 10; NUL: 0, 0, 0, 2, 0. From Table 2 we can see that all A0 arguments of zengzhang align with A1 arguments of grow. This is very different from the results in Table 1 , where a Chinese A0 argument tends to align with an English A0 argument. This phenomenon shows that a predicate pair can determine which types of arguments should align with each other. Therefore, we use the predicate pair as a feature.", "cite_spans": [ { "start": 246, "end": 269, "text": "(Pad\u00f3 and Lapata, 2009)", "ref_id": "BIBREF13" }, { "start": 433, "end": 456, "text": "(Pad\u00f3 and Lapata, 2009)", "ref_id": "BIBREF13" }, { "start": 757, "end": 782, "text": "(Burkett and Klein, 2008)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 1098, "end": 1105, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 1518, "end": 1526, "text": "Table 2.", "ref_id": "TABREF3" }, { "start": 1615, "end": 1622, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 1745, "end": 1752, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Argument Alignment Probability Model", "sec_num": "4.2.1" }, { "text": "The argument alignment sub-objective $O_a$ in equation (1) is the sum of the argument alignment probabilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment Component", "sec_num": "4.2.2" }, { "text": "$O_a = \\sum_{i=1}^{N_s} \\sum_{k=1}^{N_t} (p^a_{ik} - T_a)\\, z_{ik} \\quad (10)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment Component", "sec_num": "4.2.2" }, { "text": "where $T_a$ is a bias to prevent including too many alignments in the final solution, and $p^a_{ik}$ is computed using the log-linear model described in subsection 4.2.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment Component", "sec_num": "4.2.2" }, { "text": "$O_a$ reflects the consistency between the argument structures on the two sides of bitext. A larger $O_a$ means a better argument alignment between the two sides, and thus indicates more consistency between the argument structures on the two sides.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment Component", "sec_num": "4.2.2" }, { "text": "The following constraints are considered: 1. Conformity with bilingual SRL result. For all candidates on both the source and target sides, only those that are chosen to be arguments on each side can be aligned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment Component", "sec_num": "4.2.2" }, { "text": "2. One-to-many alignment limit. Each argument cannot be aligned with more than 3 arguments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment Component", "sec_num": "4.2.2" }, { "text": "3. Complete argument alignment. Each argument on the source side must be aligned with at least one argument on the target side, and vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment Component", "sec_num": "4.2.2" }, { "text": "The Conformity with bilingual SRL result constraint is necessary to validly integrate the bilingual SRL result with the argument alignment. 
This constraint means that if $arg^s_i$ and $arg^t_k$ align with each other, i.e., $z_{ik} = 1$, then $loc^s_i$ must be assigned a label on the source side, i.e., $\\sum_{j=1}^{L_s} x_{ij} = 1$, and $loc^t_k$ must be assigned a label on the target side, i.e., $\\sum_{j=1}^{L_t} y_{kj} = 1$. So this constraint can be represented as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment Component", "sec_num": "4.2.2" }, { "text": "$\\forall 1 \\leq i \\leq N_s, 1 \\leq k \\leq N_t : \\sum_{j=1}^{L_s} x_{ij} \\geq z_{ik} \\quad (11)$ $\\forall 1 \\leq k \\leq N_t, 1 \\leq i \\leq N_s : \\sum_{j=1}^{L_t} y_{kj} \\geq z_{ik} \\quad (12)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment Component", "sec_num": "4.2.2" }, { "text": "The One-to-many alignment limit constraint comes from our observation of the manually aligned corpus. We have found that no argument aligns with more than 3 arguments in our manually aligned corpus. This constraint can be represented as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Argument Alignment Component", "sec_num": "4.2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\forall 1 \\leq i \\leq N_s : \\sum_{k=1}^{N_t} z_{ik} \\leq 3 \\quad (13) \\qquad \\forall 1 \\leq k \\leq N_t : \\sum_{i=1}^{N_s} z_{ik} \\leq 3", "eq_num": "(14)" } ], "section": "Argument Alignment Component", "sec_num": "4.2.2" }, { "text": "The Complete argument alignment constraint comes from the semantic equivalence between the two sides of bitext. For each source side location $loc^s_i$, if it is assigned a label, i.e., $\\sum_{j=1}^{L_s} x_{ij} = 1$, then it must be aligned with some argument on the target side, i.e., $\\sum_{k=1}^{N_t} z_{ik} \\geq 1$. 
This can be represented as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ls j=1", "sec_num": null }, { "text": "\u2200 1 \u2264 i \u2264 N_s : \u2211_{k=1}^{N_t} z_{ik} \u2265 \u2211_{j=1}^{L_s} x_{ij} (15)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ls j=1", "sec_num": null }, { "text": "Similarly, each target side argument must be aligned to at least one source side argument. This can be represented as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ls j=1", "sec_num": null }, { "text": "\u2200 1 \u2264 k \u2264 N_t : \u2211_{i=1}^{N_s} z_{ik} \u2265 \u2211_{j=1}^{L_t} y_{kj} (16)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ls j=1", "sec_num": null }, { "text": "Although the hard Complete argument alignment constraint is ideally reasonable, in real situations it does not always hold. The manual argument alignment result shown in Table 1 indicates that in some cases an argument cannot be aligned with any argument on the other side (see the NUL row and column in Table 1). Therefore, it is reasonable to change the hard Complete argument alignment constraint to a soft one. To do so, we remove the hard constraint and add a penalty for each violation of it. If an argument does not align with any argument on the other side, we say it aligns with NUL, and we define the following indicator variables:", "cite_spans": [], "ref_spans": [ { "start": 185, "end": 192, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 319, "end": 326, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Complete Argument Alignment as a Soft Constraint", "sec_num": "4.3" }, { "text": "z_{i,NUL} = [arg^s_i aligns with NUL], 1 \u2264 i \u2264 N_s. z_{NUL,k} = [arg^t_k aligns with NUL], 1 \u2264 k \u2264 N_t. 
Then \u2211_{i=1}^{N_s} z_{i,NUL}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complete Argument Alignment as a Soft Constraint", "sec_num": "4.3" }, { "text": "is the number of source side arguments that align with NUL, and \u2211_{k=1}^{N_t} z_{NUL,k} is the number of target side arguments that align with NUL. For each argument that aligns with NUL, we add a penalty \u03bb_3 to the argument alignment sub-objective O_a. Therefore, the sub-objective O_a in equation (10) is changed to:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complete Argument Alignment as a Soft Constraint", "sec_num": "4.3" }, { "text": "O_a = \u2211_{i=1}^{N_s} \u2211_{k=1}^{N_t} (p^a_{ik} \u2212 T_a) z_{ik} \u2212 \u03bb_3 (\u2211_{i=1}^{N_s} z_{i,NUL} + \u2211_{k=1}^{N_t} z_{NUL,k}) (17) From the definition of z_{i,NUL}, it is obvious that, for any 1 \u2264 i \u2264 N_s, z_{i,NUL} and z_{ik} (1 \u2264 k \u2264 N_t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complete Argument Alignment as a Soft Constraint", "sec_num": "4.3" }, { "text": "have the following relationship: if \u2211_{k=1}^{N_t} z_{ik} \u2265 1, i.e., arg^s_i aligns with some argument on the target side, then z_{i,NUL} = 0; otherwise, z_{i,NUL} = 1. 
These relationships can be captured by the following constraints:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complete Argument Alignment as a Soft Constraint", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2200 1 \u2264 i \u2264 N_s, 1 \u2264 k \u2264 N_t : z_{i,NUL} \u2264 1 \u2212 z_{ik} (18) \u2200 1 \u2264 i \u2264 N_s : \u2211_{k=1}^{N_t} z_{ik} + z_{i,NUL} \u2265 1", "eq_num": "(19)" } ], "section": "Complete Argument Alignment as a Soft Constraint", "sec_num": "4.3" }, { "text": "Similarly, for any 1 \u2264 k \u2264 N_t, z_{NUL,k} and z_{ik} (1 \u2264 i \u2264 N_s) observe the following constraints:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complete Argument Alignment as a Soft Constraint", "sec_num": "4.3" }, { "text": "\u2200 1 \u2264 k \u2264 N_t, 1 \u2264 i \u2264 N_s : z_{NUL,k} \u2264 1 \u2212 z_{ik} (20) \u2200 1 \u2264 k \u2264 N_t : \u2211_{i=1}^{N_s} z_{ik} + z_{NUL,k} \u2265 1 (21)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complete Argument Alignment as a Soft Constraint", "sec_num": "4.3" }, { "text": "So far, we have presented two versions of our joint inference model. The first version treats Complete argument alignment as a hard constraint. We will refer to this version as Joint1. The objective function of Joint1 is defined by equations (1, 2, 6, 10), and the constraints of Joint1 are defined by equations (3-5, 7-9, 11-16).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Summary", "sec_num": "4.4" }, { "text": "The second version treats Complete argument alignment as a soft constraint. We will refer to this version as Joint2. The objective function of Joint2 is defined by equations (1, 2, 6, 17). 
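To make the effect of the soft constraint in equation (17) concrete, the following toy sketch (our illustration, not the paper's lpsolve-based ILP) enumerates all alignments of a hypothetical 2x2 argument pair and scores them with the penalized objective. The probabilities p^a_{ik} are invented for illustration; T_a = 0.42 and \u03bb_3 = 0.15 are the Joint2 values from Table 4.

```python
# Toy brute-force illustration of the soft Complete-argument-alignment
# objective in equation (17): each unaligned (NUL) argument pays lambda_3
# instead of being forced into a low-probability link.
from itertools import product

p = [[0.9, 0.1],   # p^a_{ik}: alignment probabilities, 2 source x 2 target args
     [0.2, 0.05]]  # hypothetical values, for illustration only
T_a, lam3 = 0.42, 0.15  # bias and NUL penalty (Joint2 values from Table 4)

def objective(z):
    score = sum((p[i][k] - T_a) * z[i][k] for i in range(2) for k in range(2))
    # z_{i,NUL} = 1 iff source argument i has no link; likewise per target arg.
    nul = sum(1 - max(z[i]) for i in range(2))
    nul += sum(1 - max(z[0][k], z[1][k]) for k in range(2))
    return score - lam3 * nul

best = max(product([0, 1], repeat=4),
           key=lambda bits: objective([[bits[0], bits[1]], [bits[2], bits[3]]]))
print(best)  # -> (1, 0, 0, 0): only the strong link is kept
```

Here the weak candidates stay NUL-aligned at a cost of \u03bb_3 each (objective 0.18), which outscores the best fully-aligned solution (0.11) that the hard version of the constraint would force.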
And the constraints of Joint2 are defined by equations (3-5, 7-9, 11-14, 18-21).", "cite_spans": [ { "start": 248, "end": 272, "text": "(3-5, 7-9, 11-14, 18-21)", "ref_id": null } ], "ref_spans": [ { "start": 177, "end": 190, "text": "(1, 2, 6, 17)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Models Summary", "sec_num": "4.4" }, { "text": "Our baseline models are monolingual SRL combination models. We will refer to the source side combination model as SrcCmb. The objective of SrcCmb is to maximize O_s, which is defined in equation (2), and its constraints are defined by equations (3-5). Similarly, we will refer to the target side combination model as TrgCmb. The objective of TrgCmb is to maximize O_t, defined in equation (6), and its constraints are defined by equations (7-9). In this paper, we employ lpsolve 2 to solve all ILP models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models Summary", "sec_num": "4.4" }, { "text": "In our experiments, we use the Xinhua News portion of the Chinese and English data in LDC OntoNotes Release 3.0. This data is a Chinese-English parallel proposition bank described in (Palmer et al., 2005). It contains parallel proposition annotations for 325 files (chtb 0001.fid to chtb 0325.fid) from the Chinese-English parallel Treebank. The English part of this data contains proposition annotations only for verbal predicates. Therefore, we only consider verbal predicates in this paper.", "cite_spans": [ { "start": 179, "end": 200, "text": "(Palmer et al., 2005)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "We employ the GIZA++ toolkit (Och and Ney, 2003) to perform automatic word alignment. Besides the parallel PropBank data, we use an additional 4,500K Chinese-English sentence pairs 3 to induce word alignments for both directions, with the default GIZA++ settings. 
The alignments are symmetrized using the intersection heuristic (Och and Ney, 2003), which is known to produce high-precision alignments.", "cite_spans": [ { "start": 29, "end": 48, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF12" }, { "start": 325, "end": 344, "text": "(Och and Ney, 2003)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "We use 80 files (chtb 0001.fid to chtb 0080.fid) as test data, and 40 files (chtb 0081.fid to chtb 0120.fid) as development data. Although our joint inference model needs no training, we still need to train the log-linear argument alignment probability model that it uses. As specified in subsection 4.2.1, the training set for the argument alignment probability model consists of 60 files (chtb 0121.fid to chtb 0180.fid) with manual argument alignment. Unfortunately, the quality of automatic word alignment on one-to-many Chinese-English sentence pairs is usually very poor, so we only include one-to-one Chinese-English sentence pairs in all data sets. Moreover, not all predicates in a sentence pair are included; only bilingual predicate pairs are. A bilingual predicate pair is defined as a pair of predicates in bitext that align with each other in the automatic word alignment. Table 3 shows how many sentences and predicates are included in each data set. Our monolingual SRL systems are trained separately. Our Chinese SRL system is trained on 640 files (chtb 0121.fid to chtb 0931.fid) in Chinese Propbank 1.0. Because Xinhua News is quite a different domain from WSJ, the training set for our English SRL system includes not only Sections 02\u223c21 of the WSJ data in English Propbank, but also 205 files (chtb 0121.fid to chtb 0325.fid) in the English part of the parallel PropBank. For Chinese, the syntactic parsers are trained on 640 files (chtb 0121.fid to chtb 0931.fid) plus the broadcast news portion of Chinese Treebank 6.0. 
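The intersection heuristic mentioned above can be sketched as follows (a minimal illustration of the symmetrization step, not GIZA++ itself): keep only the links that appear in both directional alignments.

```python
# Intersection symmetrization: a link (i, j) survives only if the
# source-to-target alignment proposes it AND the target-to-source
# alignment proposes the reverse link (j, i).

def symmetrize_intersection(src2trg, trg2src):
    """src2trg: set of (src_index, trg_index) links;
    trg2src: set of (trg_index, src_index) links.
    Returns the high-precision intersection as (src_index, trg_index) links."""
    return src2trg & {(i, j) for (j, i) in trg2src}

# Hypothetical directional alignments over a short sentence pair.
forward = {(0, 0), (1, 1), (1, 2), (2, 2)}   # source -> target
backward = {(0, 0), (1, 1), (2, 1)}          # target -> source
print(sorted(symmetrize_intersection(forward, backward)))
# -> [(0, 0), (1, 1), (1, 2)]
```

Links proposed in only one direction (such as (2, 2) above) are dropped, which is why the intersection trades recall for the high precision noted in the text.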
For English, the syntactic parsers are trained on the following data: Sections 02\u223c21 of the WSJ data in English Treebank, 205 files (chtb 0121.fid to chtb 0325.fid) of Xinhua News data in OntoNotes 3.0, and the Sinorama data in OntoNotes 3.0. We treat discontinuous and coreferential arguments in accordance with the CoNLL-2005 shared task (Carreras and M\u00e0rquez, 2005). The first part of a discontinuous argument is labeled as it is, and the second part is labeled with the prefix \"C-\". All coreferential arguments are labeled with the prefix \"R-\".", "cite_spans": [ { "start": 1899, "end": 1927, "text": "(Carreras and M\u00e0rquez, 2005)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 917, "end": 924, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "5.1" }, { "text": "The models Joint1, Joint2, SrcCmb, and TrgCmb have different parameters. For each model, we have automatically tuned its parameters on the development set using Powell's Method (Brent, 1973). Powell's Method is a heuristic optimization algorithm that does not require the objective function to have an explicit analytical formula. For a monolingual model like SrcCmb or TrgCmb, our objective is to maximize the F1 score of the model's result on the development set. A joint model, like Joint1 or Joint2, generates SRL results on both sides of bitext, so our objective is to maximize the sum of the two F1 scores of the model's results for Chinese and English on the development set. For all models, we regard the parameters to be tuned as variables and optimize our objective using Powell's Method; the solution of this optimization gives the parameter values. To avoid poor local optima, we perform the optimization 30 times with different initial parameter values, and choose the best solution found. 
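The tuning loop described here can be sketched as follows. This is a simplified, stdlib-only illustration of derivative-free search in the spirit of Powell's Method (coordinate-wise line searches with random restarts), not the authors' implementation; the dev-set F1 objective, which would require running the full ILP model, is replaced by a toy stand-in function.

```python
# Derivative-free parameter tuning sketch: repeated line searches along
# coordinate directions, restarted from 30 random starting points.
import random

def dev_score(params):
    # Stand-in for the dev-set F1 of a model run with these parameters;
    # a toy concave function peaking at T_s = 0.2, T_t = 0.3.
    t_s, t_t = params
    return 80.0 - 10 * (t_s - 0.2) ** 2 - 10 * (t_t - 0.3) ** 2

def line_search(params, dim, lo=0.0, hi=1.0, steps=101):
    # Grid line search along one coordinate, keeping the others fixed.
    best = max(range(steps), key=lambda s: dev_score(
        params[:dim] + [lo + (hi - lo) * s / (steps - 1)] + params[dim + 1:]))
    params[dim] = lo + (hi - lo) * best / (steps - 1)
    return params

random.seed(0)
best_params, best_score = None, float("-inf")
for _ in range(30):  # 30 restarts with random initial values, as in the paper
    p = [random.random(), random.random()]
    for _ in range(5):  # a few sweeps over all coordinates
        for d in range(len(p)):
            p = line_search(p, d)
    if dev_score(p) > best_score:
        best_params, best_score = p, dev_score(p)
print([round(v, 2) for v in best_params])  # -> [0.2, 0.3]
```

Because the objective is only available as a black box (run the model, score the output), a derivative-free method like this is the natural fit, which is why the paper uses Powell's Method.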
The final parameter values are listed in Table 4.", "cite_spans": [ { "start": 173, "end": 186, "text": "(Brent, 1973)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Tuning Parameters in Models", "sec_num": "5.2" }, { "text": "Model | T_s | T_t | T_a | \u03bb_1 | \u03bb_2 | \u03bb_3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning Parameters in Models", "sec_num": "5.2" }, { "text": "SrcCmb | 0.21 | - | - | - | - | - ; TrgCmb | - | 0.32 | - | - | - | - ; Joint1 | 0.17 | 0.22 | 0.36 | 0.96 | 1.04 | -", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tuning Parameters in Models", "sec_num": "5.2" }, { "text": "Joint2 | 0.15 | 0.26 | 0.42 | 1.02 | 1.21 | 0.15. Table 4: Parameter values in models.", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 70, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Tuning Parameters in Models", "sec_num": "5.2" }, { "text": "As specified in subsection 3.3, the monolingual SRL system uses different parse trees to generate multiple SRL outputs. The performance of these outputs on the test set is shown in Table 5. In Table 5, O1\u223cO3 are the outputs using the 3-best parse trees of the Berkeley parser, and O4 and O5 are the outputs using the best parse trees of the Stanford parser and the Bikel parser, respectively. As specified in subsection 5.1, only a small part of the English SRL training data is in the same domain as the test data. Therefore, the English SRL result in Table 5 is not very impressive, but the Chinese SRL result is quite good. ", "cite_spans": [], "ref_spans": [ { "start": 176, "end": 183, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 189, "end": 196, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 536, "end": 543, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Individual SRL Outputs' Performance", "sec_num": "5.3" }, { "text": "The One-to-many limit and Complete argument alignment constraints in subsection 4.2.2 come from our empirical knowledge. 
To investigate the effect of these two constraints, we remove them from our joint inference models one by one, and observe the performance variations on the test set. The results are shown in Table 6. In Table 6, 'c2' refers to the One-to-many limit constraint, 'c3' refers to the Complete argument alignment constraint, and '-' means removing. For example, 'Joint1 -c2' means removing the constraint 'c2' from the model Joint1. Recall that the only difference between Joint1 and Joint2 is that 'c3' is a hard constraint in Joint1, but a soft constraint in Joint2. Therefore, 'Joint2 -c3' and 'Joint2 -c2 -c3' do not appear in Table 6, because they are the same as 'Joint1 -c3' and 'Joint1 -c2 -c3' respectively. From Table 6, we can see that the constraints 'c2' and 'c3' both have a positive effect in our joint inference model, because removing either of them causes performance degradation. Removing 'c3' from Joint1 causes more performance degradation than removing 'c2', which means that 'c3' plays a more important role than 'c2' in our joint inference model. Indeed, by treating 'c3' as a soft constraint, the model Joint2 achieves the best performance on both sides of bitext.", "cite_spans": [], "ref_spans": [ { "start": 309, "end": 316, "text": "Table 6", "ref_id": "TABREF9" }, { "start": 322, "end": 329, "text": "Table 6", "ref_id": "TABREF9" }, { "start": 746, "end": 753, "text": "Table 6", "ref_id": "TABREF9" }, { "start": 840, "end": 847, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Effects of Different Constraints", "sec_num": "5.4" }, { "text": "We use Joint2 as our final joint inference model. As specified in subsection 4.4, our baselines are monolingual SRL combination models: SrcCmb for Chinese, and TrgCmb for English. Note that SrcCmb and TrgCmb are basically the same as the state-of-the-art combination model in (Surdeanu et al., 2007) with No overlapping and No duplication constraints. 
The final results on the test set are shown in Table 7. From Table 5 and Table 7, we can see that SrcCmb and TrgCmb improve F1 scores over the best individual SRL outputs by 2.32 points and 2.51 points on Chinese and English respectively. Thus they form strong baselines for our joint inference model. Even so, our joint inference model still improves the F1 score over SrcCmb by 1.52 points, and over TrgCmb by 1.74 points.", "cite_spans": [ { "start": 279, "end": 302, "text": "(Surdeanu et al., 2007)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 398, "end": 405, "text": "Table 7", "ref_id": "TABREF11" }, { "start": 413, "end": 420, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 425, "end": 432, "text": "Table 7", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Final Results", "sec_num": "5.5" }, { "text": "From Table 7, we can see that, although only part of the training data for the English SRL system is in-domain, our joint inference model still produces good English SRL results. And the F1 score of the Chinese SRL result reaches 79.53%, which represents the state-of-the-art Chinese SRL performance to date.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 7", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Final Results", "sec_num": "5.5" }, { "text": "In this paper, we propose a joint inference model to perform bilingual SRL. Our joint inference model incorporates not only linguistic constraints on the source and target sides of bitext, but also the bilingual argument structure consistency requirement on bitext. Experiments on the Chinese-English parallel PropBank show that our joint inference model is very effective for bilingual SRL. Compared to state-of-the-art monolingual SRL combination baselines, our joint inference model substantially improves SRL results on both sides of bitext. 
In fact, the solution of our joint inference model contains not only the SRL results on bitext, but also the optimal argument alignment between the two sides of bitext. This makes our model especially suitable for applications in machine translation, which need to obtain the argument alignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "http://homepages.inf.ed.ac.uk/lzhang10/maxent_toolkit.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://lpsolve.sourceforge.net/ 3 These data include the following LDC corpora: LDC2002E18, LDC2003E07, LDC2003E14, LDC2005T06, LDC2004T07, LDC2000T50.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The research work has been partially funded by the Natural Science Foundation of China under Grant Nos. 60975053 and 60736014, and the National Key Technology R&D Program under Grant No. 2006BAH03B02. We would like to thank Jiajun Zhang for helpful discussions and the anonymous reviewers for their valuable comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Intricacies of Collins Parsing Model", "authors": [ { "first": "Daniel", "middle": [], "last": "Bikel", "suffix": "" } ], "year": 2004, "venue": "Computational Linguistics", "volume": "30", "issue": "4", "pages": "480--511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Bikel. 2004. Intricacies of Collins Parsing Model. 
Computational Linguistics, 30(4):480-511.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Algorithms for Minimization without Derivatives", "authors": [ { "first": "Richard", "middle": [ "P" ], "last": "Brent", "suffix": "" } ], "year": 1973, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard P. Brent. 1973. Algorithms for Minimization without Derivatives. Prentice-Hall, Englewood Cliffs, NJ.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Two Languages are Better than One (for Syntactic Parsing)", "authors": [ { "first": "David", "middle": [], "last": "Burkett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP-2008", "volume": "", "issue": "", "pages": "877--886", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Burkett and Dan Klein. 2008. Two Languages are Better than One (for Syntactic Parsing). In Proceedings of EMNLP-2008, pages 877-886.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Introduction to the CoNLL-2005 shared task: semantic role labeling", "authors": [ { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" } ], "year": 2005, "venue": "Proceedings of CoNLL-2005", "volume": "", "issue": "", "pages": "152--164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Carreras and Llu\u00eds M\u00e0rquez. 2005. Introduction to the CoNLL-2005 shared task: semantic role labeling. 
In Proceedings of CoNLL-2005, pages 152-164.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Using a Hybrid Convolution Tree Kernel for Semantic Role Labeling", "authors": [ { "first": "Wanxiang", "middle": [], "last": "Che", "suffix": "" }, { "first": "Min", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ai", "middle": [ "Ti" ], "last": "Aw", "suffix": "" }, { "first": "Chew", "middle": [], "last": "Lim Tan", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2008, "venue": "ACM Transactions on Asian Language Information Processing", "volume": "7", "issue": "4", "pages": "1--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wanxiang Che, Min Zhang, Ai Ti Aw, Chew Lim Tan, Ting Liu, and Sheng Li. 2008. Using a Hybrid Convolution Tree Kernel for Semantic Role Labeling. ACM Transactions on Asian Language Information Processing, 7(4):1-23.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning Bilingual Semantic Frames: Shallow Semantic Parsing vs. Semantic Role Projection", "authors": [ { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" }, { "first": "Zhaojun", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yongsheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 11th Conference on Theoretical and Methodological Issues in Machine Translation", "volume": "", "issue": "", "pages": "75--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascale Fung, Zhaojun Wu, Yongsheng Yang, and Dekai Wu. 2007. Learning Bilingual Semantic Frames: Shallow Semantic Parsing vs. Semantic Role Projection. 
In Proceedings of the 11th Conference on Theoretical and Methodological Issues in Machine Translation, pages 75-84.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bilingually-Constrained (Monolingual) Shift-Reduce Parsing", "authors": [ { "first": "Liang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wenbin", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP-2009", "volume": "", "issue": "", "pages": "1222--1231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liang Huang, Wenbin Jiang, and Qun Liu. 2009. Bilingually-Constrained (Monolingual) Shift-Reduce Parsing. In Proceedings of EMNLP-2009, pages 1222-1231.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Accurate unlexicalized parsing", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceedings of ACL-2003", "volume": "", "issue": "", "pages": "423--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of ACL-2003, pages 423-430.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Generalized Inference with Multiple Semantic Role Labeling Systems", "authors": [ { "first": "Peter", "middle": [], "last": "Koomen", "suffix": "" }, { "first": "Vasin", "middle": [], "last": "Punyakanok", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Wentau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2005, "venue": "Proceedings of CoNLL-2005 shared task", "volume": "", "issue": "", "pages": "181--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Koomen, Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2005. Generalized Inference with Multiple Semantic Role Labeling Systems. 
In Proceedings of CoNLL-2005 shared task, pages 181-184.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Improving Nominal SRL in Chinese Language with Verbal SRL Information and Automatic Predicate Recognition", "authors": [ { "first": "Junhui", "middle": [], "last": "Li", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Hai", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Qiaoming", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Peide", "middle": [], "last": "Qian", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP-2009", "volume": "", "issue": "", "pages": "1280--1288", "other_ids": {}, "num": null, "urls": [], "raw_text": "Junhui Li, Guodong Zhou, Hai Zhao, Qiaoming Zhu, and Peide Qian. 2009. Improving Nominal SRL in Chinese Language with Verbal SRL Information and Automatic Predicate Recognition. In Proceedings of EMNLP-2009, pages 1280-1288.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Semantic Role Labeling: An Introduction to the Special Issue", "authors": [ { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Kenneth", "middle": [ "C" ], "last": "Litkowski", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "2", "pages": "145--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Llu\u00eds M\u00e0rquez, Xavier Carreras, Kenneth C. Litkowski, Suzanne Stevenson. 2008. Semantic Role Labeling: An Introduction to the Special Issue. 
Computational Linguistics, 34(2):145-159.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A Robust Combination Strategy for Semantic Role Labeling", "authors": [ { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Pere", "middle": [], "last": "Comas", "suffix": "" }, { "first": "Jordi", "middle": [], "last": "Turmo", "suffix": "" } ], "year": 2005, "venue": "Proceedings of EMNLP-2005", "volume": "", "issue": "", "pages": "644--651", "other_ids": {}, "num": null, "urls": [], "raw_text": "Llu\u00eds M\u00e0rquez, Mihai Surdeanu, Pere Comas, and Jordi Turmo. 2005. A Robust Combination Strategy for Semantic Role Labeling. In Proceedings of EMNLP-2005, pages 644-651.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A systematic comparison of various statistical alignment models", "authors": [ { "first": "J", "middle": [], "last": "Frans", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Och", "suffix": "" }, { "first": "", "middle": [], "last": "Ney", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "", "pages": "19--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frans J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29:19-51.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Cross-lingual Annotation Projection of Semantic Roles", "authors": [ { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2009, "venue": "Journal of Artificial Intelligence Research (JAIR)", "volume": "36", "issue": "", "pages": "307--340", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Pad\u00f3 and Mirella Lapata. 2009. Cross-lingual Annotation Projection of Semantic Roles. 
Journal of Artificial Intelligence Research (JAIR), 36:307-340.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A Parallel Proposition Bank II for Chinese and English", "authors": [ { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Babko-Malaya", "suffix": "" }, { "first": "Jinying", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Snyder", "suffix": "" } ], "year": 2005, "venue": "Frontiers in Corpus Annotation, Workshop in conjunction with ACL-05", "volume": "", "issue": "", "pages": "61--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Palmer, Nianwen Xue, Olga Babko-Malaya, Jinying Chen, and Benjamin Snyder. 2005. A Parallel Proposition Bank II for Chinese and English. In Frontiers in Corpus Annotation, Workshop in conjunction with ACL-05, pages 61-67.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Improved Inference for Unlexicalized parsing", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL-2007", "volume": "", "issue": "", "pages": "46--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov and Dan Klein. 2007. Improved Inference for Unlexicalized parsing. 
In Proceedings of ACL-2007, pages 46-54.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Semantic Role Labeling Using Different Syntactic Views", "authors": [ { "first": "S", "middle": [], "last": "Sameer", "suffix": "" }, { "first": "Wayne", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Kadri", "middle": [], "last": "Ward", "suffix": "" }, { "first": "James", "middle": [ "H" ], "last": "Hacioglu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Martin", "suffix": "" }, { "first": "", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL-2005", "volume": "", "issue": "", "pages": "581--588", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer S. Pradhan, Wayne Ward, Kadri Hacioglu, James H. Martin, and Daniel Jurafsky. 2005. Semantic Role Labeling Using Different Syntactic Views. In Proceedings of ACL-2005, pages 581-588.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The Importance of Syntactic Parsing and Inference in Semantic Role Labeling", "authors": [ { "first": "Vasin", "middle": [], "last": "Punyakanok", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "", "middle": [], "last": "Wen-Tauyih", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "2", "pages": "257--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The Importance of Syntactic Parsing and Inference in Semantic Role Labeling. 
Computational Linguistics, 34(2):257-287.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Semantic Role Labeling via Integer Linear Programming Inference", "authors": [ { "first": "Vasin", "middle": [], "last": "Punyakanok", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" }, { "first": "Dav", "middle": [], "last": "Zimak", "suffix": "" } ], "year": 2004, "venue": "Proceedings of COLING-2004", "volume": "", "issue": "", "pages": "1346--1352", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vasin Punyakanok, Dan Roth, Wen-tau Yih, and Dav Zimak. 2004. Semantic Role Labeling via Integer Linear Programming Inference. In Proceedings of COLING-2004, pages 1346-1352.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Chinese Semantic Role Labeling with Shallow Parsing", "authors": [ { "first": "Weiwei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhifang", "middle": [], "last": "Sui", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2009, "venue": "Proceedings of EMNLP-2009", "volume": "", "issue": "", "pages": "1475--1483", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weiwei Sun, Zhifang Sui, Meng Wang, and Xin Wang. 2009. Chinese Semantic Role Labeling with Shallow Parsing. 
In Proceedings of EMNLP-2009, pages 1475-1483.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Combination Strategies for Semantic Role Labeling", "authors": [ { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Pere", "middle": [ "R" ], "last": "Comas", "suffix": "" } ], "year": 2007, "venue": "Journal of Artificial Intelligence Research (JAIR)", "volume": "29", "issue": "", "pages": "105--151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihai Surdeanu, Llu\u00eds M\u00e0rquez, Xavier Carreras, and Pere R. Comas. 2007. Combination Strategies for Semantic Role Labeling. Journal of Artificial Intelligence Research (JAIR), 29:105-151.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A Global Joint Model for Semantic Role Labeling", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "2", "pages": "145--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Aria Haghighi, and Christopher D. Manning. 2008. A Global Joint Model for Semantic Role Labeling. Computational Linguistics, 34(2):145-159.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semantic Roles for SMT: A Hybrid Two-Pass Model", "authors": [ { "first": "Dekai", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" } ], "year": 2009, "venue": "Proceedings of NAACL-2009", "volume": "", "issue": "", "pages": "13--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekai Wu and Pascale Fung. 2009. 
Semantic Roles for SMT: A Hybrid Two-Pass Model. In Proceedings of NAACL-2009, pages 13-16.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Labeling Chinese Predicates with Semantic Roles", "authors": [ { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "2", "pages": "225--255", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nianwen Xue. 2008. Labeling Chinese Predicates with Semantic Roles. Computational Linguistics, 34(2):225-255.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A Minimum Error Weighting Combination Strategy for Chinese Semantic Role Labeling", "authors": [ { "first": "Tao", "middle": [], "last": "Zhuang", "suffix": "" }, { "first": "Chengqing", "middle": [], "last": "Zong", "suffix": "" } ], "year": 2010, "venue": "Proceedings of COLING-2010", "volume": "", "issue": "", "pages": "1362--1370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Zhuang and Chengqing Zong. 2010. A Minimum Error Weighting Combination Strategy for Chinese Semantic Role Labeling. In Proceedings of COLING-2010, pages 1362-1370.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "An example from Chinese-English parallel PropBank." }, "FIGREF1": { "num": null, "type_str": "figure", "uris": null, "text": "Overview of our approach." }, "FIGREF2": { "num": null, "type_str": "figure", "uris": null, "text": "Three attributes of an output argument: location loc, label l, and probability p." }, "TABREF0": { "type_str": "table", "content": "
R1: [A1] [ AM-TMP ] [C-A1] [ AM-ADV ] [Pred]
R2: [A1] [ AM-ADV ] [Pred]
\u4e2d\u56fd \u5efa\u7b51 \u5e02\u573a \u8fd1\u5e74 \u6765 \u5bf9 \u5916 \u5f00\u653e \u6b65\u4f10 \u8fdb\u4e00\u6b65 \u52a0\u5feb
zhongguo jianzhu shichang jinnian lai dui wai kaifang bufa jinyibu jiakuai
[ AM-TMP ] [A1][ A2 ] [ Pred ]
(a) Word alignment and SRL results for a Chinese-English predicate pair.
\u4e2d\u56fd \u5efa\u7b51 \u5e02\u573a \u8fd1\u5e74 \u6765 \u5bf9 \u5916 \u5f00\u653e \u6b65\u4f10 \u8fdb\u4e00\u6b65 \u52a0\u5feb
[A1] [ AM-TMP ] [C-A1] [AM-ADV] [Pred]
[ AM-TMP ] [A1][ A2 ] [ Pred ]
(b) Argument alignments for a Chinese-English predicate pair.
", "num": null, "text": "In recent years the pace of opening up to the outside of China `s construction market has further accelerated In recent years the pace of opening up to the outside of China `s construction market has further accelerated", "html": null }, "TABREF2": { "type_str": "table", "content": "
Ch\\En A0 A1 A2 A3 A4 AM* NUL
A0 492 30400046
A1 98 853 432008
A2 9575110470
A3 1026000
A4 0020300
AM* 023900895221
NUL 53142700450
Table 1: The argument alignment matrix on manually aligned corpus.
", "num": null, "text": "", "html": null }, "TABREF3": { "type_str": "table", "content": "", "num": null, "text": "The argument alignment matrix for the predicate pair (zengzhang, grow).", "html": null }, "TABREF5": { "type_str": "table", "content": "
", "num": null, "text": "Sentence and predicate counts.", "html": null }, "TABREF7": { "type_str": "table", "content": "
", "num": null, "text": "The results of individual monolingual SRL outputs on test set.", "html": null }, "TABREF9": { "type_str": "table", "content": "
", "num": null, "text": "Results of different joint models on test set.", "html": null }, "TABREF11": { "type_str": "table", "content": "
", "num": null, "text": "Comparison between monolingual combination model and our joint inference model on test set.", "html": null } } } }