{ "paper_id": "D10-1001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:51:55.187933Z" }, "title": "On Dual Decomposition and Linear Programming Relaxations for Natural Language Processing", "authors": [ { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "", "affiliation": { "laboratory": "", "institution": "MIT CSAIL", "location": { "postCode": "02139", "settlement": "Cambridge", "region": "MA", "country": "USA" } }, "email": "srush@csail.mit.edu" }, { "first": "David", "middle": [], "last": "Sontag", "suffix": "", "affiliation": { "laboratory": "", "institution": "MIT CSAIL", "location": { "postCode": "02139", "settlement": "Cambridge", "region": "MA", "country": "USA" } }, "email": "dsontag@csail.mit.edu" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "", "affiliation": { "laboratory": "", "institution": "MIT CSAIL", "location": { "postCode": "02139", "settlement": "Cambridge", "region": "MA", "country": "USA" } }, "email": "mcollins@csail.mit.edu" }, { "first": "Tommi", "middle": [], "last": "Jaakkola", "suffix": "", "affiliation": { "laboratory": "", "institution": "MIT CSAIL", "location": { "postCode": "02139", "settlement": "Cambridge", "region": "MA", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper introduces dual decomposition as a framework for deriving inference algorithms for NLP problems. The approach relies on standard dynamic-programming algorithms as oracle solvers for sub-problems, together with a simple method for forcing agreement between the different oracles. The approach provably solves a linear programming (LP) relaxation of the global inference problem. It leads to algorithms that are simple, in that they use existing decoding algorithms; efficient, in that they avoid exact algorithms for the full model; and often exact, in that empirically they often recover the correct solution in spite of using an LP relaxation. We give experimental results on two problems: 1) the combination of two lexicalized parsing models; and 2) the combination of a lexicalized parsing model and a trigram part-of-speech tagger.", "pdf_parse": { "paper_id": "D10-1001", "_pdf_hash": "", "abstract": [ { "text": "This paper introduces dual decomposition as a framework for deriving inference algorithms for NLP problems. The approach relies on standard dynamic-programming algorithms as oracle solvers for sub-problems, together with a simple method for forcing agreement between the different oracles. The approach provably solves a linear programming (LP) relaxation of the global inference problem. It leads to algorithms that are simple, in that they use existing decoding algorithms; efficient, in that they avoid exact algorithms for the full model; and often exact, in that empirically they often recover the correct solution in spite of using an LP relaxation. We give experimental results on two problems: 1) the combination of two lexicalized parsing models; and 2) the combination of a lexicalized parsing model and a trigram part-of-speech tagger.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Dynamic programming algorithms have been remarkably useful for inference in many NLP problems. 
Unfortunately, as models become more complex, for example through the addition of new features or components, dynamic programming algorithms can quickly explode in terms of computational or implementational complexity. 1 As a result, efficiency of inference is a critical bottleneck for many problems in statistical NLP.", "cite_spans": [ { "start": 314, "end": 315, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper introduces dual decomposition (Dantzig and Wolfe, 1960; Komodakis et al., 2007) as a framework for deriving inference algorithms in NLP. Dual decomposition leverages the observation that complex inference problems can often be decomposed into efficiently solvable sub-problems. The approach leads to inference algorithms with the following properties:", "cite_spans": [ { "start": 41, "end": 66, "text": "(Dantzig and Wolfe, 1960;", "ref_id": "BIBREF5" }, { "start": 67, "end": 90, "text": "Komodakis et al., 2007)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The resulting algorithms are simple and efficient, building on standard dynamic-programming algorithms as oracle solvers for sub-problems, 2 together with a method for forcing agreement between the oracles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The algorithms provably solve a linear programming (LP) relaxation of the original inference problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Empirically, the LP relaxation often leads to an exact solution to the original problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The approach is very general, and should be applicable to a wide range of problems in NLP. The connection to linear programming ensures that the algorithms provide a certificate of optimality when they recover the exact solution, and also opens up the possibility of methods that incrementally tighten the LP relaxation until it is exact (Sherali and Adams, 1994; Sontag et al., 2008) . The structure of this paper is as follows. We first give two examples as an illustration of the approach: 1) integrated parsing and trigram part-ofspeech (POS) tagging; and 2) combined phrasestructure and dependency parsing. In both settings, it is possible to solve the integrated problem through an \"intersected\" dynamic program (e.g., for integration of parsing and tagging, the construction from Bar-Hillel et al. (1964) can be used). However, these methods, although polynomial time, are substantially less efficient than our algorithms, and are considerably more complex to implement.", "cite_spans": [ { "start": 338, "end": 363, "text": "(Sherali and Adams, 1994;", "ref_id": "BIBREF21" }, { "start": 364, "end": 384, "text": "Sontag et al., 2008)", "ref_id": "BIBREF23" }, { "start": 787, "end": 811, "text": "Bar-Hillel et al. (1964)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Next, we describe exact polyhedral formulations for the two problems, building on connections between dynamic programming algorithms and marginal polytopes, as described in Martin et al. (1990) . These allow us to precisely characterize the relationship between the exact formulations and the LP relaxations that we solve. 
We then give guarantees of convergence for our algorithms by showing that they are instantiations of Lagrangian relaxation, a general method for solving linear programs of a particular form.", "cite_spans": [ { "start": 173, "end": 193, "text": "Martin et al. (1990)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finally, we describe experiments that demonstrate the effectiveness of our approach. First, we consider the integration of the generative model for phrase-structure parsing of Collins (2003) , with the second-order discriminative dependency parser of . This is an interesting problem in its own right: the goal is to inject the high performance of discriminative dependency models into phrase-structure parsing. The method uses off-theshelf decoders for the two models. We find three main results: 1) in spite of solving an LP relaxation, empirically the method finds an exact solution on over 99% of the examples; 2) the method converges quickly, typically requiring fewer than 10 iterations of decoding; 3) the method gives gains over a baseline method that forces the phrase-structure parser to produce the same dependencies as the firstbest output from the dependency parser (the Collins (2003) model has an F1 score of 88.1%; the baseline method has an F1 score of 89.7%; and the dual decomposition method has an F1 score of 90.7%).", "cite_spans": [ { "start": 176, "end": 190, "text": "Collins (2003)", "ref_id": "BIBREF4" }, { "start": 884, "end": 898, "text": "Collins (2003)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In a second set of experiments, we use dual decomposition to integrate the trigram POS tagger of Toutanova and Manning (2000) with the parser of Collins (2003) . We again find that the method finds an exact solution in almost all cases, with convergence in just a few iterations of decoding.", "cite_spans": [ { "start": 97, "end": 125, "text": "Toutanova and Manning (2000)", "ref_id": "BIBREF25" }, { "start": 145, "end": 159, "text": "Collins (2003)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Although the focus of this paper is on dynamic programming algorithms-both in the experiments, and also in the formal results concerning marginal polytopes-it is straightforward to use other combinatorial algorithms within the approach. For example, Koo et al. (2010) describe a dual decomposition approach for non-projective dependency parsing, which makes use of both dynamic programming and spanning tree inference algorithms.", "cite_spans": [ { "start": 250, "end": 267, "text": "Koo et al. (2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Dual decomposition is a classical method for solving optimization problems that can be decomposed into efficiently solvable sub-problems. Our work is inspired by dual decomposition methods for inference in Markov random fields (MRFs) (Wainwright et al., 2005a; Komodakis et al., 2007; Globerson and Jaakkola, 2007) . In this approach, the MRF is decomposed into sub-problems corresponding to treestructured subgraphs that together cover all edges of the original graph. 
The resulting inference algorithms provably solve an LP relaxation of the MRF inference problem, often significantly faster than commercial LP solvers (Yanover et al., 2006) .", "cite_spans": [ { "start": 234, "end": 260, "text": "(Wainwright et al., 2005a;", "ref_id": "BIBREF27" }, { "start": 261, "end": 284, "text": "Komodakis et al., 2007;", "ref_id": "BIBREF9" }, { "start": 285, "end": 314, "text": "Globerson and Jaakkola, 2007)", "ref_id": "BIBREF8" }, { "start": 621, "end": 643, "text": "(Yanover et al., 2006)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our work is also related to methods that incorporate combinatorial solvers within loopy belief propagation (LBP), either for MAP inference (Duchi et al., 2007) or for computing marginals (Smith and Eisner, 2008) . Our approach similarly makes use of combinatorial algorithms to efficiently solve subproblems of the global inference problem. However, unlike LBP, our algorithms have strong theoretical guarantees, such as guaranteed convergence and the possibility of a certificate of optimality. These guarantees are possible because our algorithms directly solve an LP relaxation.", "cite_spans": [ { "start": 139, "end": 159, "text": "(Duchi et al., 2007)", "ref_id": "BIBREF6" }, { "start": 187, "end": 211, "text": "(Smith and Eisner, 2008)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Other work has considered LP or integer linear programming (ILP) formulations of inference in NLP (Martins et al., 2009; Riedel and Clarke, 2006; Roth and Yih, 2005) . These approaches typically use general-purpose LP or ILP solvers. Our method has the advantage that it leverages underlying structure arising in LP formulations of NLP problems. We will see that dynamic programming algorithms such as CKY can be considered to be very efficient solvers for particular LPs. In dual decomposition, these LPs-and their efficient solvers-can be embedded within larger LPs corresponding to more complex inference problems.", "cite_spans": [ { "start": 98, "end": 120, "text": "(Martins et al., 2009;", "ref_id": "BIBREF15" }, { "start": 121, "end": 145, "text": "Riedel and Clarke, 2006;", "ref_id": "BIBREF19" }, { "start": 146, "end": 165, "text": "Roth and Yih, 2005)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We now describe the type of models used throughout the paper. We take some care to set up notation that will allow us to make a clear connection between inference problems and linear programming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "Our first example is weighted CFG parsing. We assume a context-free grammar, in Chomsky normal form, with a set of non-terminals N . The grammar contains all rules of the form A \u2192 B C and A \u2192 w where A, B, C \u2208 N and w \u2208 V (it is simple to relax this assumption to give a more constrained grammar). For rules of the form A \u2192 w we refer to A as the part-of-speech tag for w. We allow any non-terminal to be at the root of the tree.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "Given a sentence with n words, w 1 , w 2 , . . . 
w n , a parse tree is a set of rule productions of the form A \u2192 B C, i, k, j where A, B, C \u2208 N , and 1 \u2264 i \u2264 k < j \u2264 n. Each rule production represents the use of CFG rule A \u2192 B C where nonterminal A spans words w i . . . w j , non-terminal B spans words w i . . . w k , and non-terminal C spans words w k+1 . . . w j . There are O(|N | 3 n 3 ) such rule productions. Each parse tree corresponds to a subset of these rule productions, of size n \u2212 1, that forms a well-formed parse tree. 3 We now define the index set for CFG parsing as", "cite_spans": [ { "start": 536, "end": 537, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "I = { A \u2192 B C, i, k, j : A, B, C \u2208 N , 1 \u2264 i \u2264 k < j \u2264 n}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "Each parse tree is a vector y = {y r : r \u2208 I}, with y r = 1 if rule r is in the parse tree, and y r = 0 otherwise. Hence each parse tree is represented as a vector in {0, 1} m , where m = |I|. We use Y to denote the set of all valid parse-tree vectors; the set Y is a subset of {0, 1} m (not all binary vectors correspond to valid parse trees).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "In addition, we assume a vector \u03b8 = {\u03b8 r : r \u2208 I} that specifies a weight for each rule production. 4 Each \u03b8 r can take any value in the reals. The optimal parse tree is y * = arg max y\u2208Y y \u2022 \u03b8 where y \u2022 \u03b8 = r y r \u03b8 r is the inner product between y and \u03b8. We use y r and y(r) interchangeably (similarly for \u03b8 r and \u03b8(r)) to refer to the r'th component of the vector y. For example \u03b8(A \u2192 B C, i, k, j) is a weight for the rule A \u2192 B C, i, k, j .", "cite_spans": [ { "start": 100, "end": 101, "text": "4", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "We will use similar notation for other problems. As a second example, in POS tagging the task is to map a sentence of n words w 1 . . . w n to a tag sequence t 1 . . . t n , where each t i is chosen from a set T of possible tags. We assume a trigram tagger, where a tag sequence is represented through decisions (A, B) \u2192 C, i where A, B, C \u2208 T , and i \u2208 {3 . . . n}. Each production represents a transition where C is the tag of word w i , and (A, B) are the previous two tags. The index set for tagging is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "I tag = { (A, B) \u2192 C, i : A, B, C \u2208 T , 3 \u2264 i \u2264 n}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "Note that we do not need transitions for i = 1 or i = 2, because the transition (A, B) \u2192 C, 3 specifies the first three tags in the sentence. 5 Each tag sequence is represented as a vector z = {z r : r \u2208 I tag }, and we denote the set of valid tag sequences, a subset of {0, 1} |Itag| , as Z. 
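Since a parse uses only n − 1 of the O(|N|^3 n^3) possible rule productions, these vectors are extremely sparse, and an implementation would typically store a structure as the set of its active index-set entries and θ as a sparse map from entries to weights. The following is a minimal sketch of this representation; the tuple encoding and the toy weights are illustrative assumptions, not part of the paper.

```python
# A structure y (a parse tree or tag sequence) is stored as the set of index-set
# entries whose coordinate equals 1; theta is a sparse map entry -> weight.
def score(structure, theta):
    """Inner product y . theta for a 0/1 vector stored as its set of active entries."""
    return sum(theta.get(r, 0.0) for r in structure)

# Toy example with hypothetical weights; the rule production <S -> NP VP, 1, 1, 3>
# is written as the tuple ("S", "NP", "VP", 1, 1, 3).
theta = {
    ("S", "NP", "VP", 1, 1, 3): 1.2,
    ("VP", "V", "NP", 2, 2, 3): 0.7,
}
y = {("S", "NP", "VP", 1, 1, 3), ("VP", "V", "NP", 2, 2, 3)}   # n - 1 = 2 productions for n = 3
print(score(y, theta))   # 1.9
```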
Given a parameter vector \u03b8 = {\u03b8 r : r \u2208 I tag }, the optimal tag sequence is arg max z\u2208Z z \u2022 \u03b8.", "cite_spans": [ { "start": 142, "end": 143, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "As a modification to the above approach, we will find it convenient to introduce extended index sets for both the CFG and POS tagging examples. For the CFG case we define the extended index set to be I = I \u222a I uni where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "I uni = {(i, t) : i \u2208 {1 . . . n}, t \u2208 T }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "Here each pair (i, t) represents word w i being assigned the tag t. Thus each parse-tree vector y will have additional (binary) components y(i, t) specifying whether or not word i is assigned tag t. (Throughout this paper we will assume that the tagset used by the tagger, T , is a subset of the set of nonterminals considered by the parser, N .) Note that this representation is over-complete, since a parse tree determines a unique tagging for a sentence: more explicitly, for any i \u2208 {1 . . . n}, Y \u2208 T , the following linear constraint holds:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "y(i, Y ) = n k=i+1 X,Z\u2208N y(X \u2192 Y Z, i, i, k) + i\u22121 k=1 X,Z\u2208N y(X \u2192 Z Y, k, i \u2212 1, i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "We apply the same extension to the tagging index set, effectively mapping trigrams down to unigram assignments, again giving an over-complete representation. The extended index set for tagging is referred to as I tag .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "From here on we will make exclusive use of extended index sets for CFG parsing and trigram tagging. We use the set Y to refer to the set of valid parse structures under the extended representation;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "5 As one example, in an HMM, the parameter \u03b8((A, B) \u2192 C, 3) would be log P (A| * * )+log P (B| * A)+log P (C|AB)+ log P (w1|A) + log P (w2|B) + log P (w3|C), where * is the start symbol. each y \u2208 Y is a binary vector of length |I |. We similarly use Z to refer to the set of valid tag structures under the extended representation. We assume parameter vectors for the two problems, \u03b8 cfg \u2208 R |I | and \u03b8 tag \u2208 R |I tag | .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background: Structured Models for NLP", "sec_num": "3" }, { "text": "This section describes the dual decomposition approach for two inference problems in NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Two Examples", "sec_num": "4" }, { "text": "We now describe the dual decomposition approach for integrated parsing and trigram tagging. 
First, define the set Q as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrated Parsing and Trigram Tagging", "sec_num": "4.1" }, { "text": "Q = {(y, z) : y \u2208 Y, z \u2208 Z, y(i, t) = z(i, t) for all (i, t) \u2208 I uni } (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrated Parsing and Trigram Tagging", "sec_num": "4.1" }, { "text": "Hence Q is the set of all (y, z) pairs that agree on their part-of-speech assignments. The integrated parsing and trigram tagging problem is then to solve", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrated Parsing and Trigram Tagging", "sec_num": "4.1" }, { "text": "max (y,z)\u2208Q y \u2022 \u03b8 cfg + z \u2022 \u03b8 tag (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrated Parsing and Trigram Tagging", "sec_num": "4.1" }, { "text": "This problem is equivalent to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrated Parsing and Trigram Tagging", "sec_num": "4.1" }, { "text": "max y\u2208Y y \u2022 \u03b8 cfg + g(y) \u2022 \u03b8 tag", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrated Parsing and Trigram Tagging", "sec_num": "4.1" }, { "text": "where g : Y \u2192 Z is a function that maps a parse tree y to its set of trigrams z = g(y). The benefit of the formulation in Eq. 2 is that it makes explicit the idea of maximizing over all pairs (y, z) under a set of agreement constraints y(i, t) = z(i, t)-this concept will be central to the algorithms in this paper. With this in mind, we note that we have efficient methods for the inference problems of tagging and parsing alone, and that our combined objective almost separates into these two independent problems. In fact, if we drop the y(i, t) = z(i, t) constraints from the optimization problem, the problem splits into two parts, each of which can be efficiently solved using dynamic programming:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrated Parsing and Trigram Tagging", "sec_num": "4.1" }, { "text": "(y * , z * ) = (arg max y\u2208Y y \u2022 \u03b8 cfg , arg max z\u2208Z z \u2022 \u03b8 tag )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrated Parsing and Trigram Tagging", "sec_num": "4.1" }, { "text": "Dual decomposition exploits this idea; it results in the algorithm given in figure 1. The algorithm optimizes the combined objective by repeatedly solving the two sub-problems separately-that is, it directly Set u (1) (i, t) \u2190 0 for all (i, t) \u2208 I uni for k = 1 to K do", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrated Parsing and Trigram Tagging", "sec_num": "4.1" }, { "text": "y (k) \u2190 arg max y\u2208Y (y \u2022 \u03b8 cfg \u2212 (i,t)\u2208Iuni u (k) (i, t)y(i, t)) z (k) \u2190 arg max z\u2208Z (z \u2022 \u03b8 tag + (i,t)\u2208Iuni u (k) (i, t)z(i, t)) if y (k) (i, t) = z (k) (i, t) for all (i, t) \u2208 I uni then return (y (k) , z (k) ) for all (i, t) \u2208 I uni , u (k+1) (i, t) \u2190 u (k) (i, t) + \u03b1 k (y (k) (i, t) \u2212 z (k) (i, t))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrated Parsing and Trigram Tagging", "sec_num": "4.1" }, { "text": "return (y (K) , z (K) ) Figure 1 : The algorithm for integrated parsing and tagging. The parameters \u03b1 k > 0 for k = 1 . . . K specify step sizes for each iteration, and are discussed further in the Appendix. 
The two arg max problems can be solved using dynamic programming.", "cite_spans": [ { "start": 10, "end": 13, "text": "(K)", "ref_id": null } ], "ref_spans": [ { "start": 24, "end": 32, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Integrated Parsing and Trigram Tagging", "sec_num": "4.1" }, { "text": "solves the harder optimization problem using an existing CFG parser and trigram tagger. After each iteration the algorithm adjusts the weights u(i, t); these updates modify the objective functions for the two models, encouraging them to agree on the same POS sequence. In section 6.1 we will show that the variables u(i, t) are Lagrange multipliers enforcing agreement constraints, and that the algorithm corresponds to a (sub)gradient method for optimization of a dual function. The algorithm is easy to implement: all that is required is a decoding algorithm for each of the two models, and simple additive updates to the Lagrange multipliers enforcing agreement between the two models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrated Parsing and Trigram Tagging", "sec_num": "4.1" }, { "text": "Our second example problem is the integration of a phrase-structure parser with a higher-order dependency parser. The goal is to add higher-order features to phrase-structure parsing without greatly increasing the complexity of inference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Two Lexicalized Parsers", "sec_num": "4.2" }, { "text": "First, we define an index set for second-order unlabeled projective dependency parsing. The secondorder parser considers first-order dependencies, as well as grandparent and sibling second-order dependencies (e.g., see Carreras (2007) ). We assume that I dep is an index set containing all such dependencies (for brevity we omit the details of this index set). For convenience we define an extended index set that makes explicit use of first-order dependen-cies, I dep = I dep \u222a I first , where", "cite_spans": [ { "start": 219, "end": 234, "text": "Carreras (2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Integrating Two Lexicalized Parsers", "sec_num": "4.2" }, { "text": "I first = {(i, j) : i \u2208 {0 . . . n}, j \u2208 {1 . . . n}, i = j}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Two Lexicalized Parsers", "sec_num": "4.2" }, { "text": "Here (i, j) represents a dependency with head w i and modifier w j (i = 0 corresponds to the root symbol in the parse). We use D \u2286 {0, 1} |I dep | to denote the set of valid projective dependency parses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Two Lexicalized Parsers", "sec_num": "4.2" }, { "text": "The second model we use is a lexicalized CFG. Each symbol in the grammar takes the form A(h) where A \u2208 N is a non-terminal, and h \u2208 {1 . . . n} is an index specifying that w h is the head of the constituent. Rule productions take the form A(a) \u2192 B(b) C(c), i, k, j where b \u2208 {i . . . k}, c \u2208 {(k + 1) . . . j}, and a is equal to b or c, depending on whether A receives its head-word from its left or right child. Each such rule implies a dependency (a, b) if a = c, or (a, c) if a = b. We take I head to be the index set of all such rules, and I head = I head \u222a I first to be the extended index set. 
We define H \u2286 {0, 1} |I head | to be the set of valid parse trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Two Lexicalized Parsers", "sec_num": "4.2" }, { "text": "The integrated parsing problem is then to find", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Two Lexicalized Parsers", "sec_num": "4.2" }, { "text": "(y * , d * ) = arg max (y,d)\u2208R y \u2022 \u03b8 head + d \u2022 \u03b8 dep (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Two Lexicalized Parsers", "sec_num": "4.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Two Lexicalized Parsers", "sec_num": "4.2" }, { "text": "R = {(y, d) : y \u2208 H, d \u2208 D, y(i, j) = d(i, j) for all (i, j) \u2208 I first }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Two Lexicalized Parsers", "sec_num": "4.2" }, { "text": "This problem has a very similar structure to the problem of integrated parsing and tagging, and we can derive a similar dual decomposition algorithm. The Lagrange multipliers u are a vector in R |I first | enforcing agreement between dependency assignments. The algorithm (omitted for brevity) is identical to the algorithm in figure 1, but with I uni , Y, Z, \u03b8 cfg , and \u03b8 tag replaced with I first , H, D, \u03b8 head , and \u03b8 dep respectively. The algorithm only requires decoding algorithms for the two models, together with simple updates to the Lagrange multipliers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Integrating Two Lexicalized Parsers", "sec_num": "4.2" }, { "text": "We now give formal guarantees for the algorithms in the previous section, showing that they solve LP relaxations of the problems in Eqs. 2 and 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes and LP Relaxations", "sec_num": "5" }, { "text": "To make the connection to linear programming, we first introduce the idea of marginal polytopes in section 5.1. In section 5.2, we give a precise statement of the LP relaxations that are being solved by the example algorithms, making direct use of marginal polytopes. In section 6 we will prove that the example algorithms solve these LP relaxations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes and LP Relaxations", "sec_num": "5" }, { "text": "For a finite set Y, define the set of all distributions over elements in Y as \u2206 = {\u03b1 \u2208 R |Y| : \u03b1 y \u2265 0, y\u2208Y \u03b1 y = 1}. Each \u03b1 \u2208 \u2206 gives a vector of marginals, \u00b5 = y\u2208Y \u03b1 y y, where \u00b5 r can be interpreted as the probability that y r = 1 for a y selected at random from the distribution \u03b1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "The set of all possible marginal vectors, known as the marginal polytope, is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "M = {\u00b5 \u2208 R m : \u2203\u03b1 \u2208 \u2206 such that \u00b5 = y\u2208Y \u03b1 y y}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "M is also frequently referred to as the convex hull of Y, written as conv(Y). 
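As a small numerical illustration of the map from a distribution α to its marginal vector µ = Σ_y α_y y (with made-up binary vectors in place of real parse trees):

```python
# Three toy binary "structures" of dimension m = 4, and a distribution alpha over them.
Y = [
    (1, 0, 1, 0),
    (1, 1, 0, 0),
    (0, 1, 0, 1),
]
alpha = [0.5, 0.25, 0.25]          # non-negative and sums to 1, so alpha is in the simplex

# mu_r is the probability that y_r = 1 under alpha.
mu = [sum(a * y[r] for a, y in zip(alpha, Y)) for r in range(4)]
print(mu)   # [0.75, 0.5, 0.5, 0.25]
```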
We use the notation conv(Y) in the remainder of this paper, instead of M.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "For an arbitrary set Y, the marginal polytope conv(Y) can be complex to describe. 6 However, Martin et al. (1990) show that for a very general class of dynamic programming problems, the corresponding marginal polytope can be expressed as", "cite_spans": [ { "start": 93, "end": 113, "text": "Martin et al. (1990)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "conv(Y) = {\u00b5 \u2208 R m : A\u00b5 = b, \u00b5 \u2265 0} (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "where A is a p \u00d7 m matrix, b is vector in R p , and the value p is linear in the size of a hypergraph representation of the dynamic program. Note that A and b specify a set of p linear constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "We now give an explicit description of the resulting constraints for CFG parsing: 7 similar constraints arise for other dynamic programming algorithms for parsing, for example the algorithms of Eisner (2000) . The exact form of the constraints, and the fact that they are polynomial in number, is not essential for the formal results in this paper. However, a description of the constraints gives valuable intuition for the structure of the marginal polytope.", "cite_spans": [ { "start": 194, "end": 207, "text": "Eisner (2000)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "The constraints are given in figure 2. To develop some intuition, consider the case where the variables \u00b5 r are restricted to be binary: hence each binary vector \u00b5 specifies a parse tree. The second constraint in Eq. 5 specifies that exactly one rule must be used at the top of the tree. The set of constraints in Eq. 6 specify that for each production of the form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "\u2200r \u2208 I , \u00b5r \u2265 0 ; X X,Y,Z\u2208N k=1...(n\u22121) \u00b5(X \u2192 Y Z, 1, k, n) = 1 (5) \u2200X \u2208 N , \u2200(i, j) such that 1 \u2264 i < j \u2264 n and (i, j) = (1, n): X Y,Z\u2208N k=i...(j\u22121) \u00b5(X \u2192 Y Z, i, k, j) = X Y,Z\u2208N k=1...(i\u22121) \u00b5(Y \u2192 Z X, k, i \u2212 1, j) + X Y,Z\u2208N k=(j+1)...n \u00b5(Y \u2192 X Z, i, j, k) (6) \u2200Y \u2208 T, \u2200i \u2208 {1 . . . n} : \u00b5(i, Y ) = X X,Z\u2208N k=(i+1)...n \u00b5(X \u2192 Y Z, i, i, k) + X X,Z\u2208N k=1...(i\u22121) \u00b5(X \u2192 Z Y, k, i \u2212 1, i) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "Figure 2: The linear constraints defining the marginal polytope for CFG parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "X \u2192 Y Z, i, k, j in a parse tree, there must be exactly one production higher in the tree that generates (X, i, j) as one of its children. The constraints in Eq. 7 enforce consistency between the \u00b5(i, Y ) variables and rule variables higher in the tree. Note that the constraints in Eqs.(5-7) can be written in the form A\u00b5 = b, \u00b5 \u2265 0, as in Eq. 4. 
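For additional intuition, the constraints can be checked directly on the 0/1 vector of a single parse. The sketch below does this for a toy three-word parse, with the parse given as a set of rule-production tuples plus tag assignments; the data structures are illustrative assumptions, and for simplicity the tag set is taken to be the full set of non-terminals.

```python
def check_parse(n, rules, tags, nonterminals):
    """rules: set of (A, B, C, i, k, j) for productions <A -> B C, i, k, j>;
       tags:  set of (i, t) meaning word i is assigned tag t.
       Returns True iff the 0/1 vector of this parse satisfies Eqs. (5)-(7)."""
    # Eq. (5): exactly one production spans the whole sentence.
    if sum(1 for (A, B, C, i, k, j) in rules if (i, j) == (1, n)) != 1:
        return False
    # Eq. (6): every non-root constituent (X, i, j) built by some rule must be
    # used exactly as often as a child of a production higher in the tree.
    for X in nonterminals:
        for i in range(1, n + 1):
            for j in range(i + 1, n + 1):
                if (i, j) == (1, n):
                    continue
                built = sum(1 for (A, B, C, a, k, b) in rules
                            if A == X and (a, b) == (i, j))
                used = sum(1 for (A, B, C, a, k, b) in rules
                           if (C == X and k == i - 1 and b == j)    # X is a right child
                           or (B == X and a == i and k == j))       # X is a left child
                if built != used:
                    return False
    # Eq. (7): the tag variable (i, t) is on iff some rule dominates word i with label t.
    for t in nonterminals:
        for i in range(1, n + 1):
            dominated = sum(1 for (A, B, C, a, k, b) in rules
                            if (B == t and a == i and k == i)        # t spans w_i as left child
                            or (C == t and k == i - 1 and b == i))   # t spans w_i as right child
            if dominated != (1 if (i, t) in tags else 0):
                return False
    return True

# Toy three-word parse (X (A w1) (X (A w2) (B w3))):
rules = {("X", "A", "X", 1, 1, 3), ("X", "A", "B", 2, 2, 3)}
tags = {(1, "A"), (2, "A"), (3, "B")}
print(check_parse(3, rules, tags, {"X", "A", "B"}))   # True
```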
Under these definitions, we have the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "Theorem 5.1 Define Y to be the set of all CFG parses, as defined in section 4. Then conv(Y) = {\u00b5 \u2208 R m : \u00b5 satisifies Eqs.(5-7)}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "Proof: This theorem is a special case of Martin et al. (1990) , theorem 2. The marginal polytope for tagging, conv(Z), can also be expressed using linear constraints as in Eq. 4; see figure 3. These constraints follow from results for graphical models (Wainwright and Jordan, 2008) , or from the Martin et al. (1990) construction.", "cite_spans": [ { "start": 41, "end": 61, "text": "Martin et al. (1990)", "ref_id": "BIBREF14" }, { "start": 252, "end": 281, "text": "(Wainwright and Jordan, 2008)", "ref_id": "BIBREF26" }, { "start": 296, "end": 316, "text": "Martin et al. (1990)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "As a final point, the following theorem gives an important property of marginal polytopes, which we will use at several points in this paper:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "Theorem 5.2 (Korte and Vygen (2008) , page 66.) For any set Y \u2286 {0, 1} k , and for any vector", "cite_spans": [ { "start": 12, "end": 35, "text": "(Korte and Vygen (2008)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 \u2208 R k , max y\u2208Y y \u2022 \u03b8 = max \u00b5\u2208conv(Y) \u00b5 \u2022 \u03b8", "eq_num": "(8)" } ], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "The theorem states that for a linear objective function, maximization over a discrete set Y can be replaced by maximization over the convex hull", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "\u2200r \u2208 I tag , \u03bdr \u2265 0 ; X X,Y,Z\u2208T \u03bd((X, Y ) \u2192 Z, 3) = 1 \u2200X \u2208 T , \u2200i \u2208 {3 . . . n \u2212 1}: X Y,Z\u2208T \u03bd((Y, Z) \u2192 X, i) = X Y,Z\u2208T \u03bd((Y, X) \u2192 Z, i + 1) \u2200X \u2208 T , \u2200i \u2208 {3 . . . n \u2212 2}: X Y,Z\u2208T \u03bd((Y, Z) \u2192 X, i) = X Y,Z\u2208T \u03bd((X, Y ) \u2192 Z, i + 2) \u2200X \u2208 T, \u2200i \u2208 {3 . . . n} : \u03bd(i, X) = X Y,Z\u2208T \u03bd((Y, Z) \u2192 X, i) \u2200X \u2208 T : \u03bd(1, X) = X Y,Z\u2208T \u03bd((X, Y ) \u2192 Z, 3) \u2200X \u2208 T : \u03bd(2, X) = X Y,Z\u2208T \u03bd((Y, X) \u2192 Z, 3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "Figure 3: The linear constraints defining the marginal polytope for trigram POS tagging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "conv(Y). The problem max \u00b5\u2208conv(Y) \u00b5\u2022\u03b8 is a linear programming problem. For parsing, this theorem implies that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "1. 
Weighted CFG parsing can be framed as a linear programming problem, of the form max \u00b5\u2208conv(Y) \u00b5\u2022 \u03b8, where conv(Y) is specified by a polynomial number of linear constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "2. Conversely, dynamic programming algorithms such as the CKY algorithm can be considered to be oracles that efficiently solve LPs of the form max \u00b5\u2208conv(Y) \u00b5 \u2022 \u03b8. Similar results apply for the POS tagging case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Marginal Polytopes", "sec_num": "5.1" }, { "text": "We now describe the LP relaxations that are solved by the example algorithms in section 4. We begin with the algorithm in Figure 1 . The original optimization problem was to find max (y,z)\u2208Q y \u2022 \u03b8 cfg + z \u2022 \u03b8 tag (see Eq. 2). By theorem 5.2, this is equivalent to solving", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 130, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Linear Programming Relaxations", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "max (\u00b5,\u03bd)\u2208conv(Q) \u00b5 \u2022 \u03b8 cfg + \u03bd \u2022 \u03b8 tag", "eq_num": "(9)" } ], "section": "Linear Programming Relaxations", "sec_num": "5.2" }, { "text": "To formulate our approximation, we first define:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Programming Relaxations", "sec_num": "5.2" }, { "text": "Q = {(\u00b5, \u03bd) : \u00b5 \u2208 conv(Y), \u03bd \u2208 conv(Z), \u00b5(i, t) = \u03bd(i, t) for all (i, t) \u2208 I uni }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Programming Relaxations", "sec_num": "5.2" }, { "text": "The definition of Q is very similar to the definition of Q (see Eq. 1), the only difference being that Y and Z are replaced by conv(Y) and conv(Z) respectively. Hence any point in Q is also in Q . It follows that any point in conv(Q) is also in Q , because Q is a convex set defined by linear constraints. The LP relaxation then corresponds to the following optimization problem: max", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Programming Relaxations", "sec_num": "5.2" }, { "text": "(\u00b5,\u03bd)\u2208Q \u00b5 \u2022 \u03b8 cfg + \u03bd \u2022 \u03b8 tag (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Programming Relaxations", "sec_num": "5.2" }, { "text": "Q is defined by linear constraints, making this a linear program. Since Q is an outer bound on conv(Q), i.e. conv(Q) \u2286 Q , we obtain the guarantee that the value of Eq. 10 always upper bounds the value of Eq. 9. In Appendix A we give an example showing that in general Q includes points that are not in conv(Q). These points exist because the agreement between the two parts is now enforced in expectation (\u00b5(i, t) = \u03bd(i, t) for (i, t) \u2208 I uni ) rather than based on actual assignments. This agreement constraint is weaker since different distributions over assignments can still result in the same first order expectations. Thus, the solution to Eq. 10 may be in Q but not in conv(Q). It can be shown that all such solutions will be fractional, making them easy to distinguish from Q. In many applications of LP relaxations-including the examples discussed in this paper-the relaxation in Eq. 
10 turns out to be tight, in that the solution is often integral (i.e., it is in Q). In these cases, solving the LP relaxation exactly solves the original problem of interest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Programming Relaxations", "sec_num": "5.2" }, { "text": "In the next section we prove that the algorithm in Figure 1 solves the problem in Eq 10. A similar result holds for the algorithm in section 4.2: it solves a relaxation of Eq. 3, where R is replaced by", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 59, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Linear Programming Relaxations", "sec_num": "5.2" }, { "text": "R = {(\u00b5, \u03bd) : \u00b5 \u2208 conv(H), \u03bd \u2208 conv(D), \u00b5(i, j) = \u03bd(i, j) for all (i, j) \u2208 I first } 6 Convergence Guarantees", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Linear Programming Relaxations", "sec_num": "5.2" }, { "text": "We now show that the example algorithms solve their respective LP relaxations given in the previous section. We do this by first introducing a general class of linear programs, together with an optimization method, Lagrangian relaxation, for solving these LPs. We then show that the algorithms in section 4 are special cases of the general algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "The linear programs we consider take the form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "max x1\u2208X1,x2\u2208X2 (\u03b8 1 \u2022 x 1 + \u03b8 2 \u2022 x 2 ) such that Ex 1 = F x 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "The matrices E \u2208 R q\u00d7m and F \u2208 R q\u00d7l specify q linear \"agreement\" constraints between x 1 \u2208 R m and x 2 \u2208 R l . The sets X 1 , X 2 are also specified by linear constraints, X 1 = {x 1 \u2208 R m : Ax 1 = b, x 1 \u2265 0} and X 2 = x 2 \u2208 R l : Cx 2 = d, x 2 \u2265 0 , hence the problem is an LP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "Note that if we set X 1 = conv(Y), X 2 = conv(Z), and define E and F to specify the agreement constraints \u00b5(i, t) = \u03bd(i, t), then we have the LP relaxation in Eq. 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "It is natural to apply Lagrangian relaxation in cases where the sub-problems max x 1 \u2208X 1 \u03b8 1 \u2022 x 1 and max x 2 \u2208X 2 \u03b8 2 \u2022 x 2 can be efficiently solved by combinatorial algorithms for any values of \u03b8 1 , \u03b8 2 , but where the constraints Ex 1 = F x 2 \"complicate\" the problem. 
We introduce Lagrange multipliers u \u2208 R q that enforce the latter set of constraints, giving the Lagrangian:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "L(u, x 1 , x 2 ) = \u03b8 1 \u2022 x 1 + \u03b8 2 \u2022 x 2 + u \u2022 (Ex 1 \u2212 F x 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "The dual objective function is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "L(u) = max x1\u2208X1,x2\u2208X2 L(u, x 1 , x 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "and the dual problem is to find min u\u2208R q L(u). Because X 1 and X 2 are defined by linear constraints, by strong duality we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "min u\u2208R q L(u) = max x 1 \u2208X 1 ,x 2 \u2208X 2 :Ex 1 =F x 2 (\u03b8 1 \u2022 x 1 + \u03b8 2 \u2022 x 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "Hence minimizing L(u) will recover the maximum value of the original problem. This leaves open the question of how to recover the LP solution (i.e., the pair (x * 1 , x * 2 ) that achieves this maximum); we discuss this point in section 6.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "The dual L(u) is convex. However, L(u) is not differentiable, so we cannot use gradient-based methods to optimize it. Instead, a standard approach is to use a subgradient method. Subgradients are tangent lines that lower bound a function even at points of non-differentiability: formally, a subgradient of a convex function L :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "R n \u2192 R at a point u is a vector g u such that for all v, L(v) \u2265 L(u) + g u \u2022 (v \u2212 u). 
u (1) \u2190 0 for k = 1 to K do x (k) 1 \u2190 arg max x 1 \u2208X 1 (\u03b8 1 + (u (k) ) T E) \u2022 x 1 x (k) 2 \u2190 arg max x 2 \u2208X 2 (\u03b8 2 \u2212 (u (k) ) T F ) \u2022 x 2 if Ex (k) 1 = F x (k) 2 return u (k) u (k+1) \u2190 u (k) \u2212 \u03b1 k (Ex (k) 1 \u2212 F x (k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "2 ) return u (K) Figure 4 : The Lagrangian relaxation algorithm.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 25, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "By standard results, the subgradient for L at a point u takes a simple form,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "g u = Ex * 1 \u2212 F x * 2 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "x * 1 = arg max x 1 \u2208X 1 (\u03b8 1 + (u (k) ) T E) \u2022 x 1 x * 2 = arg max x 2 \u2208X 2 (\u03b8 2 \u2212 (u (k) ) T F ) \u2022 x 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "The beauty of this result is that the values of x * 1 and x * 2 , and by implication the value of the subgradient, can be computed using oracles for the two arg max sub-problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "Subgradient algorithms perform updates that are similar to gradient descent:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "u (k+1) \u2190 u (k) \u2212 \u03b1 k g (k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "where g (k) is the subgradient of L at u (k) and \u03b1 k > 0 is the step size of the update. The complete subgradient algorithm is given in figure 4 . 
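The loop in figure 4 is compact enough to state directly in code. Below is a minimal sketch, assuming dense numpy vectors and two black-box oracles solve1 and solve2 for the arg max sub-problems; following the convention of figure 1 (see footnote 8), it returns the primal pair rather than u. The step-size schedule shown is just one choice satisfying the conditions of the convergence theorem that follows; the appendix describes the schedule used in the experiments. The algorithm of figure 1 is the special case in which E and F pick out the µ(i, t) and ν(i, t) coordinates.

```python
import numpy as np

def lagrangian_relaxation(solve1, solve2, theta1, theta2, E, F, K=50, alpha0=0.5):
    """Subgradient method of figure 4.

    solve1(theta) must return argmax_{x1 in X1} theta . x1 as a 0/1 numpy vector,
    and similarly solve2 for X2; E, F encode the agreement constraints E x1 = F x2."""
    u = np.zeros(E.shape[0])
    for k in range(1, K + 1):
        x1 = solve1(theta1 + E.T.dot(u))   # oracle for the first sub-problem
        x2 = solve2(theta2 - F.T.dot(u))   # oracle for the second sub-problem
        g = E.dot(x1) - F.dot(x2)          # subgradient of L at u
        if not g.any():                    # agreement reached: exact solution (case 1)
            return x1, x2, True
        u = u - (alpha0 / k) * g           # one schedule with alpha_k -> 0, sum alpha_k = inf
    return x1, x2, False                   # no certificate; (x1, x2) is the heuristic output
```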
The following convergence theorem is well-known (e.g., see page 120 of Korte and Vygen (2008) ", "cite_spans": [ { "start": 8, "end": 11, "text": "(k)", "ref_id": null }, { "start": 41, "end": 44, "text": "(k)", "ref_id": null }, { "start": 218, "end": 240, "text": "Korte and Vygen (2008)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 136, "end": 144, "text": "figure 4", "ref_id": null } ], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "): Theorem 6.1 If lim k\u2192\u221e \u03b1 k = 0 and \u221e k=1 \u03b1 k = \u221e, then lim k\u2192\u221e L(u (k) ) = min u L(u).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "The following proposition is easily verified:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "Proposition 6.1 The algorithm in figure 1 is an instantiation of the algorithm in figure 4, 8 with X 1 = conv(Y), X 2 = conv(Z), and the matrices E and F defined to be binary matrices specifying the constraints \u00b5(i, t) = \u03bd(i, t) for all (i, t) \u2208 I uni .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "Under an appropriate definition of the step sizes \u03b1 k , it follows that the algorithm in figure 1 defines a sequence of Lagrange multiplers u (k) minimizing a dual of the LP relaxation in Eq. 10. A similar result holds for the algorithm in section 4.2. 8 with the caveat that it returns (x", "cite_spans": [ { "start": 253, "end": 254, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "(k) 1 , x (k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "2 ) rather than u (k) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lagrangian Relaxation", "sec_num": "6.1" }, { "text": "The previous section described how the method in figure 4 can be used to minimize the dual L(u) of the original linear program. We now turn to the problem of recovering a primal solution (x * 1 , x * 2 ) of the LP. The method we propose considers two cases:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "(Case 1) If Ex (k) 1 = F x (k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "2 at any stage during the algorithm, then simply take (x", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "(k) 1 , x (k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "2 ) to be the primal solution. In this case the pair (x", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "(k) 1 , x (k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "2 ) exactly solves the original LP. 
9 If this case arises in the algorithm in figure 1, then the resulting solution is binary (i.e., it is a member of Q), and the solution exactly solves the original inference problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "(Case 2) If case 1 does not arise, then a couple of strategies are possible. (This situation could arise in cases where the LP is not tight-i.e., it has a fractional solution-or where K is not large enough for convergence.) The first is to define the primal solution to be the average of the solutions encountered during the algorithm: Nedi\u0107 and Ozdaglar (2009) show that as K \u2192 \u221e, these averaged solutions converge to the optimal primal solution. 10 A second strategy (as given in figure 1) is to simply take (x", "cite_spans": [ { "start": 336, "end": 361, "text": "Nedi\u0107 and Ozdaglar (2009)", "ref_id": "BIBREF17" }, { "start": 448, "end": 450, "text": "10", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "x 1 = k x (k) 1 /K, x 2 = k x (k) 2 /K. Results from", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "(K) 1 , x (K)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "2 ) as an approximation to the primal solution. This method is a heuristic, but previous work (e.g., Komodakis et al. (2007) ) has shown that it is effective in practice; we use it in this paper.", "cite_spans": [ { "start": 101, "end": 124, "text": "Komodakis et al. (2007)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "In our experiments we found that in the vast majority of cases, case 1 applies, after a small number of iterations; see the next section for more details. We have that \u03b81", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2022 x (k) 1 + \u03b82 \u2022 x (k) 2 = L(u (k) , x (k) 1 , x", "eq_num": "(k)" } ], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "2 ) = L(u (k) ), where the last equality is because x 1), 11 and the 2nd order discriminative dependency parser of . The inference problem for a sentence x is to find", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y * = arg max y\u2208Y (f 1 (y) + \u03b3f 2 (y))", "eq_num": "(11)" } ], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "where Y is the set of all lexicalized phrase-structure trees for the sentence x; f 1 (y) is the score (log probability) under Model 1; f 2 (y) is the score under for the dependency structure implied by y; and \u03b3 > 0 is a parameter dictating the relative weight of the two models. 12 This problem is similar to the second example in section 4; a very similar dual decomposition algorithm to that described in section 4.2 can be derived. 
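Concretely, γ can simply be folded into the dependency model's weight vector before running the same subgradient loop; a short sketch under the assumptions of the lagrangian_relaxation sketch given in section 6.1 (the oracle names are hypothetical):

```python
def combine_parsers(theta_head, theta_dep, gamma, solve_cfg, solve_dep, E, F):
    # Eq. 11: argmax_y f1(y) + gamma * f2(y), decomposed as in section 4.2,
    # with the dependency weights scaled by gamma before decoding.
    return lagrangian_relaxation(solve_cfg, solve_dep,
                                 theta_head, gamma * theta_dep, E, F)
```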
We used the Penn Wall Street Treebank (Marcus et al., 1994) for the experiments, with sections 2-21 for training, section 22 for development, and section 23 for testing. The parameter \u03b3 was chosen to optimize performance on the development set.", "cite_spans": [ { "start": 473, "end": 494, "text": "(Marcus et al., 1994)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "We ran the dual decomposition algorithm with a limit of K = 50 iterations. The dual decomposition algorithm returns an exact solution if case 1 occurs as defined in section 6.2; we found that of 2416 sentences in section 23, case 1 occurred for 2407 (99.6%) sentences. Table 1 gives statistics showing the number of iterations required for convergence. Over 80% of the examples converge in 5 iterations or fewer; over 90% converge in 10 iterations or fewer.", "cite_spans": [], "ref_spans": [ { "start": 269, "end": 276, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "We compare the accuracy of the dual decomposition approach to two baselines: first, Model 1; and second, a naive integration method that enforces the hard constraint that Model 1 must only consider de- 11 We use a reimplementation that is a slight modification of Collins Model 1, with very similar performance, and which uses the TAG formalism of .", "cite_spans": [ { "start": 202, "end": 204, "text": "11", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "12 Note that the models f1 and f2 were trained separately, using the methods described by Collins (2003) and (Collins, 2002) . Koo08 Baseline: Model 1 with a hard restriction to dependencies predicted by the discriminative dependency parser of . DD Combination: a model that maximizes the joint score of the two parsers. Dep shows the unlabeled dependency accuracy of each system. Figure 5 : Performance on the parsing task assuming a fixed number of iterations K. f-score: accuracy of the method. % certificates: percentage of examples for which a certificate of optimality is provided. % match: percentage of cases where the output from the method is identical to the output when using K = 50.", "cite_spans": [ { "start": 90, "end": 104, "text": "Collins (2003)", "ref_id": "BIBREF4" }, { "start": 109, "end": 124, "text": "(Collins, 2002)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 381, "end": 389, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" }, { "text": "pendencies seen in the first-best output from the dependency parser. Table 2 shows all three results. The dual decomposition method gives a significant gain in precision and recall over the naive combination method, and boosts the performance of Model 1 to a level that is close to some of the best single-pass parsers on the Penn treebank test set. Dependency accuracy is also improved over the model, in spite of the relatively low dependency accuracy of Model 1 alone. Figure 5 shows performance of the approach as a function of K, the maximum number of iterations of dual decomposition. For this experiment, for cases where the method has not converged for k \u2264 K, the output from the algorithm is chosen to be the y (k) for k \u2264 K that maximizes the objective function in Eq. 11. 
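A sketch of this selection rule, assuming the per-iteration outputs y^(k) were stored during the run and that objective computes the combined score of Eq. 11 (both names are illustrative):

```python
def best_iterate(y_history, objective):
    """Return the y^(k), k <= K, with the highest combined score f1(y) + gamma * f2(y)."""
    return max(y_history, key=objective)
```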
{ "text": "The graphs show that values of K less than 50 produce almost identical performance to K = 50, but with fewer cases giving certificates of optimality (with K = 10, the f-score of the method is 90.69%; with K = 5 it is 90.63%). Table 3: Performance results for Section 23 of the WSJ. Model 1 (Fixed Tags): a baseline parser initialized to the first-best tag sequence from the tagger of Toutanova and Manning (2000). DD Combination: a model that maximizes the joint score of parse and tag selection.", "cite_spans": [ { "start": 1165, "end": 1193, "text": "Toutanova and Manning (2000)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 1009, "end": 1016, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Recovering the LP Solution", "sec_num": "6.2" },
{ "text": "In a second experiment, we used dual decomposition to integrate the Model 1 parser with the Stanford max-ent trigram POS tagger (Toutanova and Manning, 2000), using a very similar algorithm to that described in section 4.1. We use the same training/dev/test split as in section 7.1. The two models were again trained separately. We ran the algorithm with a limit of K = 50 iterations. Out of 2416 test examples, the algorithm found an exact solution in 98.9% of the cases. Table 1 gives statistics showing the speed of convergence for different examples: over 94% of the examples converge to an exact solution in 10 iterations or fewer. In terms of accuracy, we compare to a baseline approach of using the first-best tag sequence as input to the parser. The dual decomposition approach gives 88.3 F1 measure in recovering parse-tree constituents, compared to 87.9 for the baseline.", "cite_spans": [ { "start": 128, "end": 157, "text": "(Toutanova and Manning, 2000)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 474, "end": 481, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Integrated Phrase-Structure Parsing and Trigram POS tagging", "sec_num": "7.2" },
{ "text": "We have introduced dual-decomposition algorithms for inference in NLP, given formal properties of the algorithms in terms of LP relaxations, and demonstrated their effectiveness on problems that would traditionally be solved using intersections of dynamic programs (Bar-Hillel et al., 1964). Given the widespread use of dynamic programming in NLP, there should be many applications for the approach.", "cite_spans": [ { "start": 265, "end": 290, "text": "(Bar-Hillel et al., 1964)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" },
{ "text": "There are several possible extensions of the method we have described. We have focused on cases where two models are being combined; the extension to more than two models is straightforward (e.g., see Komodakis et al. (2007)). This paper has considered approaches for MAP inference; for closely related methods that compute approximate marginals, see Wainwright et al. (2005b).", "cite_spans": [ { "start": 201, "end": 224, "text": "Komodakis et al. (2007)", "ref_id": "BIBREF9" }, { "start": 352, "end": 377, "text": "Wainwright et al. (2005b)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "8" },
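Both the part-of-speech integration described in section 7.2 and the fractional example in Appendix A below turn on agreement between single-tag values mu(i, t) (from the parse side) and nu(i, t) (from the tag side) over the index set I_uni. The following is a small, hypothetical sketch of that bookkeeping; the function names and the input format are assumptions, not code from the paper.

```python
from collections import defaultdict

def tag_marginals(weighted_structures):
    """Given [(prob, {(i, tag): 0 or 1}), ...], return the implied single-tag marginals.

    For an integral solution there is one structure with probability 1.0 and the result
    is its indicator vector; for a convex combination (as in Appendix A) it is fractional.
    """
    marginals = defaultdict(float)
    for prob, indicators in weighted_structures:
        for key, value in indicators.items():
            marginals[key] += prob * value
    return dict(marginals)

def agree(mu, nu, I_uni, tol=1e-9):
    """Case-1 style check: do the parse-side and tag-side values match on all of I_uni?"""
    return all(abs(mu.get(v, 0.0) - nu.get(v, 0.0)) <= tol for v in I_uni)
```

In the dual decomposition runs the inputs are integral (one parse and one tag sequence), and agree returning True is exactly the case-1 stopping condition; applied to the two 0.5/0.5 combinations constructed in Appendix A, the same computation reproduces the matching fractional marginals.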
{ "text": "We now give an example of a point (\u00b5, \u03bd) \u2208 Q' \\ conv(Q) that demonstrates that the relaxation Q' is strictly larger than conv(Q). Fractional points such as this one can arise as solutions of the LP relaxation for worst-case instances, preventing us from finding an exact solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Fractional Solutions", "sec_num": null },
{ "text": "Recall that the constraints for Q' specify that \u00b5 \u2208 conv(Y), \u03bd \u2208 conv(Z), and \u00b5(i, t) = \u03bd(i, t) for all (i, t) \u2208 I_uni. Since \u00b5 \u2208 conv(Y), \u00b5 must be a convex combination of one or more members of Y; a similar property holds for \u03bd. The example is as follows. There are two possible parts of speech, A and B, and an additional non-terminal symbol X. The sentence is of length 3, w_1 w_2 w_3. Let \u03bd be the convex combination of the following two tag sequences, each with probability 0.5: w_1/A w_2/A w_3/A and w_1/A w_2/B w_3/B. Let \u00b5 be the convex combination of the following two parses, each with probability 0.5: (X(A w_1)(X(A w_2)(B w_3))) and (X(A w_1)(X(B w_2)(A w_3))). It can be verified that \u00b5(i, t) = \u03bd(i, t) for all (i, t), i.e., the marginals for single tags for \u00b5 and \u03bd agree. Thus, (\u00b5, \u03bd) \u2208 Q'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Fractional Solutions", "sec_num": null },
{ "text": "To demonstrate that this fractional point is not in conv(Q), we give parameter values such that this fractional point is optimal and all integral points (i.e., actual parses) are suboptimal. For the tagging model, set \u03b8(AA \u2192 A, 3) = \u03b8(AB \u2192 B, 3) = 0, with all other parameters having a negative value. For the parsing model, set \u03b8(X \u2192 A X, 1, 1, 3) = \u03b8(X \u2192 A B, 2, 2, 3) = \u03b8(X \u2192 B A, 2, 2, 3) = 0, with all other rule parameters being negative. For this objective, the fractional solution has value 0, while all integral points (i.e., all points in Q) have a negative value. By Theorem 5.2, the maximum of any linear objective over conv(Q) is equal to the maximum over Q; since that maximum is negative while (\u00b5, \u03bd) has value 0, it follows that (\u00b5, \u03bd) \u2209 conv(Q).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Fractional Solutions", "sec_num": null },
{ "text": "We used the following step size in our experiments. First, we initialized \u03b1_0 to 0.5, a relatively large value. Then we defined \u03b1_k = \u03b1_0 * 2^(\u2212\u03b7_k), where \u03b7_k is the number of iterations k' \u2264 k for which L(u^(k')) > L(u^(k'\u22121)). This step size thus decays as 1/2^t, where t is the number of times that the dual has increased from one iteration to the next. See Koo et al. (2010) for a similar, but less aggressive step size used to solve a different task.", "cite_spans": [ { "start": 367, "end": 384, "text": "Koo et al. 
(2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "B Step Size", "sec_num": null }, { "text": "The same is true for NLP inference algorithms based on other exact combinatorial methods, for example methods based on minimum-weight spanning trees(McDonald et al., 2005), or graph cuts(Pang and Lee, 2004).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "More generally, other exact inference methods can be used as oracles, for example spanning tree algorithms for nonprojective dependency structures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We do not require rules of the form A \u2192 wi in this representation, as they are redundant: specifically, a rule production A \u2192 B C, i, k, j implies a rule B \u2192 wi iff i = k, and C \u2192 wj iff j = k + 1.4 We do not require parameters for rules of the form A \u2192 w, as they can be folded into rule production parameters. E.g., under a PCFG we define \u03b8(A \u2192 B C, i, k, j) = log P (A \u2192 B C | A) + \u03b4 i,k log P (B \u2192 wi|B) + \u03b4 k+1,j log P (C \u2192 wj|C) where \u03b4x,y = 1 if x = y, 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For any finite set Y, conv(Y) can be expressed as {\u00b5 \u2208 R m : A\u00b5 \u2264 b} where A is a matrix of dimension p \u00d7 m, and b \u2208 R p (see, e.g.,Korte and Vygen (2008), pg. 65). The value for p depends on the set Y, and can be exponential in size.7Taskar et al. (2004) describe the same set of constraints, but without proof of correctness or reference toMartin et al. (1990).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": ") and u(k) are primal and dual optimal.10 The resulting fractional solution can be projected back to the set Q, see(Smith and Eisner, 2008;Martins et al., 2009).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgments MIT gratefully acknowledges the support of Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the DARPA, AFRL, or the US government. Alexander Rush was supported under the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-C-0022. David Sontag was supported by a Google PhD Fellowship.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On formal properties of simple phrase structure grammars", "authors": [ { "first": "Y", "middle": [], "last": "Bar-Hillel", "suffix": "" }, { "first": "M", "middle": [], "last": "Perles", "suffix": "" }, { "first": "E", "middle": [], "last": "Shamir", "suffix": "" } ], "year": 1964, "venue": "Language and Information: Selected Essays on their Theory and Application", "volume": "", "issue": "", "pages": "116--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Bar-Hillel, M. Perles, and E. Shamir. 1964. On formal properties of simple phrase structure grammars. 
In Language and Information: Selected Essays on their Theory and Application, pages 116-150.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "TAG, dynamic programming, and the perceptron for efficient, feature-rich parsing", "authors": [ { "first": "X", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "T", "middle": [], "last": "Koo", "suffix": "" } ], "year": 2008, "venue": "Proc CONLL", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Carreras, M. Collins, and T. Koo. 2008. TAG, dy- namic programming, and the perceptron for efficient, feature-rich parsing. In Proc CONLL, pages 9-16.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Experiments with a higher-order projective dependency parser", "authors": [ { "first": "X", "middle": [], "last": "Carreras", "suffix": "" } ], "year": 2007, "venue": "Proc. CoNLL", "volume": "", "issue": "", "pages": "957--961", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Carreras. 2007. Experiments with a higher-order projective dependency parser. In Proc. CoNLL, pages 957-961.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2002, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proc. EMNLP, page 8.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Head-driven statistical models for natural language parsing", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2003, "venue": "Computational linguistics", "volume": "29", "issue": "", "pages": "589--637", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins. 2003. Head-driven statistical models for nat- ural language parsing. In Computational linguistics, volume 29, pages 589-637.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Decomposition principle for linear programs", "authors": [ { "first": "G", "middle": [ "B" ], "last": "Dantzig", "suffix": "" }, { "first": "P", "middle": [], "last": "Wolfe", "suffix": "" } ], "year": 1960, "venue": "Operations research", "volume": "8", "issue": "", "pages": "101--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "G.B. Dantzig and P. Wolfe. 1960. Decomposition princi- ple for linear programs. In Operations research, vol- ume 8, pages 101-111.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Using combinatorial optimization within max-product belief propagation", "authors": [ { "first": "J", "middle": [], "last": "Duchi", "suffix": "" }, { "first": "D", "middle": [], "last": "Tarlow", "suffix": "" }, { "first": "G", "middle": [], "last": "Elidan", "suffix": "" }, { "first": "D", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2007, "venue": "NIPS", "volume": "19", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Duchi, D. Tarlow, G. Elidan, and D. Koller. 2007. Using combinatorial optimization within max-product belief propagation. 
In NIPS, volume 19.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Bilexical grammars and their cubic-time parsing algorithms", "authors": [ { "first": "J", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2000, "venue": "Advances in Probabilistic and Other Parsing Technologies", "volume": "", "issue": "", "pages": "29--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Eisner. 2000. Bilexical grammars and their cubic-time parsing algorithms. In Advances in Probabilistic and Other Parsing Technologies, pages 29-62.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Fixing maxproduct: Convergent message passing algorithms for MAP LP-relaxations", "authors": [ { "first": "A", "middle": [], "last": "Globerson", "suffix": "" }, { "first": "T", "middle": [], "last": "Jaakkola", "suffix": "" } ], "year": 2007, "venue": "NIPS", "volume": "21", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Globerson and T. Jaakkola. 2007. Fixing max- product: Convergent message passing algorithms for MAP LP-relaxations. In NIPS, volume 21.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "MRF optimization via dual decomposition: Messagepassing revisited", "authors": [ { "first": "N", "middle": [], "last": "Komodakis", "suffix": "" }, { "first": "N", "middle": [], "last": "Paragios", "suffix": "" }, { "first": "G", "middle": [], "last": "Tziritas", "suffix": "" } ], "year": 2007, "venue": "International Conference on Computer Vision", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Komodakis, N. Paragios, and G. Tziritas. 2007. MRF optimization via dual decomposition: Message- passing revisited. In International Conference on Computer Vision.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Simple semisupervised dependency parsing", "authors": [ { "first": "T", "middle": [], "last": "Koo", "suffix": "" }, { "first": "X", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2008, "venue": "Proc. ACL/HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Koo, X. Carreras, and M. Collins. 2008. Simple semi- supervised dependency parsing. In Proc. ACL/HLT.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Dual Decomposition for Parsing with Non-Projective Head Automata", "authors": [ { "first": "T", "middle": [], "last": "Koo", "suffix": "" }, { "first": "A", "middle": [ "M" ], "last": "Rush", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "T", "middle": [], "last": "Jaakkola", "suffix": "" }, { "first": "D", "middle": [], "last": "Sontag", "suffix": "" } ], "year": 2010, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "63--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Koo, A.M. Rush, M. Collins, T. Jaakkola, and D. Son- tag. 2010. Dual Decomposition for Parsing with Non- Projective Head Automata. In Proc. EMNLP, pages 63-70.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Combinatorial optimization: theory and algorithms", "authors": [ { "first": "B", "middle": [ "H" ], "last": "Korte", "suffix": "" }, { "first": "J", "middle": [], "last": "Vygen", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B.H. Korte and J. Vygen. 2008. 
Combinatorial optimiza- tion: theory and algorithms. Springer Verlag.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "M", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "B", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1994, "venue": "Computational linguistics", "volume": "19", "issue": "", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "M.P. Marcus, B. Santorini, and M.A. Marcinkiewicz. 1994. Building a large annotated corpus of English: The Penn Treebank. In Computational linguistics, vol- ume 19, pages 313-330.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Polyhedral characterization of discrete dynamic programming", "authors": [ { "first": "R", "middle": [ "K" ], "last": "Martin", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Rardin", "suffix": "" }, { "first": "B", "middle": [ "A" ], "last": "Campbell", "suffix": "" } ], "year": 1990, "venue": "Operations research", "volume": "38", "issue": "1", "pages": "127--138", "other_ids": {}, "num": null, "urls": [], "raw_text": "R.K. Martin, R.L. Rardin, and B.A. Campbell. 1990. Polyhedral characterization of discrete dynamic pro- gramming. Operations research, 38(1):127-138.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Concise integer linear programming formulations for dependency parsing", "authors": [ { "first": "A", "middle": [ "F T" ], "last": "Martins", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "E", "middle": [ "P" ], "last": "Xing", "suffix": "" } ], "year": 2009, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A.F.T. Martins, N.A. Smith, and E.P. Xing. 2009. Con- cise integer linear programming formulations for de- pendency parsing. In Proc. ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Non-projective dependency parsing using spanning tree algorithms", "authors": [ { "first": "R", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "K", "middle": [], "last": "Ribarov", "suffix": "" }, { "first": "J", "middle": [], "last": "Hajic", "suffix": "" } ], "year": 2005, "venue": "Proc. HLT/EMNLP", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. McDonald, F. Pereira, K. Ribarov, and J. Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proc. HLT/EMNLP, pages 523- 530.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Approximate primal solutions and rate analysis for dual subgradient methods", "authors": [ { "first": "Angelia", "middle": [], "last": "Nedi\u0107", "suffix": "" }, { "first": "Asuman", "middle": [], "last": "Ozdaglar", "suffix": "" } ], "year": 2009, "venue": "SIAM Journal on Optimization", "volume": "19", "issue": "4", "pages": "1757--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angelia Nedi\u0107 and Asuman Ozdaglar. 2009. Approxi- mate primal solutions and rate analysis for dual sub- gradient methods. 
SIAM Journal on Optimization, 19(4):1757-1780.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2004, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Pang and L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proc. ACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Incremental integer linear programming for non-projective dependency parsing", "authors": [ { "first": "S", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "J", "middle": [], "last": "Clarke", "suffix": "" } ], "year": 2006, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "129--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Riedel and J. Clarke. 2006. Incremental integer linear programming for non-projective dependency parsing. In Proc. EMNLP, pages 129-137.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Integer linear programming inference for conditional random fields", "authors": [ { "first": "D", "middle": [], "last": "Roth", "suffix": "" }, { "first": "W", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2005, "venue": "Proc. ICML", "volume": "", "issue": "", "pages": "737--744", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Roth and W. Yih. 2005. Integer linear program- ming inference for conditional random fields. In Proc. ICML, pages 737-744.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "A hierarchy of relaxations and convex hull characterizations for mixed-integer zero-one programming problems", "authors": [ { "first": "D", "middle": [], "last": "Hanif", "suffix": "" }, { "first": "Warren", "middle": [ "P" ], "last": "Sherali", "suffix": "" }, { "first": "", "middle": [], "last": "Adams", "suffix": "" } ], "year": 1994, "venue": "Discrete Applied Mathematics", "volume": "52", "issue": "1", "pages": "83--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hanif D. Sherali and Warren P. Adams. 1994. A hi- erarchy of relaxations and convex hull characteriza- tions for mixed-integer zero-one programming prob- lems. Discrete Applied Mathematics, 52(1):83 -106.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Dependency parsing by belief propagation", "authors": [ { "first": "D", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "J", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2008, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "145--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "D.A. Smith and J. Eisner. 2008. Dependency parsing by belief propagation. In Proc. EMNLP, pages 145-156.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Tightening LP relaxations for MAP using message passing", "authors": [ { "first": "D", "middle": [], "last": "Sontag", "suffix": "" }, { "first": "T", "middle": [], "last": "Meltzer", "suffix": "" }, { "first": "A", "middle": [], "last": "Globerson", "suffix": "" }, { "first": "T", "middle": [], "last": "Jaakkola", "suffix": "" }, { "first": "Y", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2008, "venue": "Proc. UAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. 
Sontag, T. Meltzer, A. Globerson, T. Jaakkola, and Y. Weiss. 2008. Tightening LP relaxations for MAP using message passing. In Proc. UAI.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Max-margin parsing", "authors": [ { "first": "B", "middle": [], "last": "Taskar", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "D", "middle": [], "last": "Koller", "suffix": "" }, { "first": "C", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2004, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Taskar, D. Klein, M. Collins, D. Koller, and C. Man- ning. 2004. Max-margin parsing. In Proc. EMNLP, pages 1-8.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Enriching the knowledge sources used in a maximum entropy partof-speech tagger", "authors": [ { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2000, "venue": "Proc. EMNLP", "volume": "", "issue": "", "pages": "63--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Toutanova and C.D. Manning. 2000. Enriching the knowledge sources used in a maximum entropy part- of-speech tagger. In Proc. EMNLP, pages 63-70.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Graphical Models, Exponential Families, and Variational Inference", "authors": [ { "first": "M", "middle": [], "last": "Wainwright", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Wainwright and M. I. Jordan. 2008. Graphical Mod- els, Exponential Families, and Variational Inference. Now Publishers Inc., Hanover, MA, USA.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "MAP estimation via agreement on trees: messagepassing and linear programming", "authors": [ { "first": "M", "middle": [], "last": "Wainwright", "suffix": "" }, { "first": "T", "middle": [], "last": "Jaakkola", "suffix": "" }, { "first": "A", "middle": [], "last": "Willsky", "suffix": "" } ], "year": 2005, "venue": "IEEE Transactions on Information Theory", "volume": "51", "issue": "", "pages": "3697--3717", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Wainwright, T. Jaakkola, and A. Willsky. 2005a. MAP estimation via agreement on trees: message- passing and linear programming. In IEEE Transac- tions on Information Theory, volume 51, pages 3697- 3717.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A new class of upper bounds on the log partition function", "authors": [ { "first": "M", "middle": [], "last": "Wainwright", "suffix": "" }, { "first": "T", "middle": [], "last": "Jaakkola", "suffix": "" }, { "first": "A", "middle": [], "last": "Willsky", "suffix": "" } ], "year": 2005, "venue": "IEEE Transactions on Information Theory", "volume": "51", "issue": "", "pages": "2313--2335", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Wainwright, T. Jaakkola, and A. Willsky. 2005b. A new class of upper bounds on the log partition func- tion. 
In IEEE Transactions on Information Theory, volume 51, pages 2313-2335.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Linear Programming Relaxations and Belief Propagation-An Empirical Study", "authors": [ { "first": "C", "middle": [], "last": "Yanover", "suffix": "" }, { "first": "T", "middle": [], "last": "Meltzer", "suffix": "" }, { "first": "Y", "middle": [], "last": "Weiss", "suffix": "" } ], "year": 2006, "venue": "The Journal of Machine Learning Research", "volume": "7", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Yanover, T. Meltzer, and Y. Weiss. 2006. Linear Programming Relaxations and Belief Propagation-An Empirical Study. In The Journal of Machine Learning Research, volume 7, page 1907. MIT Press.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "fined by the respective arg max's. Thus, (x", "type_str": "figure", "uris": null }, "TABREF2": { "text": "Convergence results for Section 23 of the WSJ Treebank for the dependency parsing and POS experiments. Each column gives the percentage of sentences whose exact solutions were found in a given range of subgradient iterations. ** is the percentage of sentences that did not converge by the iteration limit (K=50).", "content": "", "html": null, "num": null, "type_str": "table" }, "TABREF4": { "text": "Performance results for Section 23 of the WSJ Treebank. Model 1: a reimplementation of the generative parser of", "content": "
", "html": null, "num": null, "type_str": "table" } } } }