{ "paper_id": "P13-1044", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:33:44.208347Z" }, "title": "Nonconvex Global Optimization for Latent-Variable Models *", "authors": [ { "first": "Matthew", "middle": [ "R" ], "last": "Gormley", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": { "settlement": "Baltimore", "region": "MD" } }, "email": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": { "settlement": "Baltimore", "region": "MD" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Many models in NLP involve latent variables, such as unknown parses, tags, or alignments. Finding the optimal model parameters is then usually a difficult nonconvex optimization problem. The usual practice is to settle for local optimization methods such as EM or gradient ascent. We explore how one might instead search for a global optimum in parameter space, using branch-and-bound. Our method would eventually find the global maximum (up to a user-specified) if run for long enough, but at any point can return a suboptimal solution together with an upper bound on the global maximum. As an illustrative case, we study a generative model for dependency parsing. We search for the maximum-likelihood model parameters and corpus parse, subject to posterior constraints. We show how to formulate this as a mixed integer quadratic programming problem with nonlinear constraints. We use the Reformulation Linearization Technique to produce convex relaxations during branch-and-bound. Although these techniques do not yet provide a practical solution to our instance of this NP-hard problem, they sometimes find better solutions than Viterbi EM with random restarts, in the same time.", "pdf_parse": { "paper_id": "P13-1044", "_pdf_hash": "", "abstract": [ { "text": "Many models in NLP involve latent variables, such as unknown parses, tags, or alignments. Finding the optimal model parameters is then usually a difficult nonconvex optimization problem. The usual practice is to settle for local optimization methods such as EM or gradient ascent. We explore how one might instead search for a global optimum in parameter space, using branch-and-bound. Our method would eventually find the global maximum (up to a user-specified) if run for long enough, but at any point can return a suboptimal solution together with an upper bound on the global maximum. As an illustrative case, we study a generative model for dependency parsing. We search for the maximum-likelihood model parameters and corpus parse, subject to posterior constraints. We show how to formulate this as a mixed integer quadratic programming problem with nonlinear constraints. We use the Reformulation Linearization Technique to produce convex relaxations during branch-and-bound. Although these techniques do not yet provide a practical solution to our instance of this NP-hard problem, they sometimes find better solutions than Viterbi EM with random restarts, in the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Rich models with latent linguistic variables are popular in computational linguistics, but in general it is not known how to find their optimal parameters. 
In this paper, we present some \"new\" attacks for this common optimization setting, drawn from the mathematical programming toolbox.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We focus on the well-studied but unsolved task of unsupervised dependency parsing (i.e., depen--180.2 -231.0 -254.3 -387.1 -287.3 -311.1 -467.5 -298 -342 ! 5 \" -0.6 -0.6 \" ! 5 ! 5 \" -2 -2 \" ! 5 ! 3 \" -0.6 -0.6 \" ! 3 The node branches on a single model parameter \u03b8 m to partition its subspace. The lower bound, -400, is given by the best solution seen so far, the incumbent. The upper bound, -298, is the min of all remaining leaf nodes. The node with a local bound of -467.5 can be pruned because no solution within its subspace could be better than the incumbent. dency grammar induction). This may be a particularly hard case, but its structure is typical. Many parameter estimation techniques have been attempted, including expectation-maximization (EM) (Klein and Manning, 2004; Spitkovsky et al., 2010a) , contrastive estimation (Smith and Eisner, 2006; Smith, 2006) , Viterbi EM (Spitkovsky et al., 2010b) , and variational EM (Naseem et al., 2010; . These are all local search techniques, which improve the parameters by hill-climbing.", "cite_spans": [ { "start": 82, "end": 153, "text": "(i.e., depen--180.2 -231.0 -254.3 -387.1 -287.3 -311.1 -467.5 -298 -342", "ref_id": null }, { "start": 757, "end": 782, "text": "(Klein and Manning, 2004;", "ref_id": "BIBREF19" }, { "start": 783, "end": 808, "text": "Spitkovsky et al., 2010a)", "ref_id": "BIBREF38" }, { "start": 834, "end": 858, "text": "(Smith and Eisner, 2006;", "ref_id": "BIBREF36" }, { "start": 859, "end": 871, "text": "Smith, 2006)", "ref_id": "BIBREF37" }, { "start": 885, "end": 911, "text": "(Spitkovsky et al., 2010b)", "ref_id": "BIBREF39" }, { "start": 933, "end": 954, "text": "(Naseem et al., 2010;", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The problem with local search is that it gets stuck in local optima. This is evident for grammar induction. An algorithm such as EM will find numerous different solutions when randomly initialized to different points (Charniak, 1993; Smith, 2006) . A variety of ways to find better local optima have been explored, including heuristic initialization of the model parameters (Spitkovsky et al., 2010a) , random restarts (Smith, 2006) , and annealing (Smith and Eisner, 2006; Smith, 2006) . 
Others have achieved accuracy improvements by enforcing linguistically motivated posterior constraints on the parameters (Gillenwater et al., 2010; Naseem et al., 2010) , such as requiring most sentences to have verbs or encouraging nouns to be children of verbs or prepositions.", "cite_spans": [ { "start": 217, "end": 233, "text": "(Charniak, 1993;", "ref_id": "BIBREF6" }, { "start": 234, "end": 246, "text": "Smith, 2006)", "ref_id": "BIBREF37" }, { "start": 374, "end": 400, "text": "(Spitkovsky et al., 2010a)", "ref_id": "BIBREF38" }, { "start": 419, "end": 432, "text": "(Smith, 2006)", "ref_id": "BIBREF37" }, { "start": 449, "end": 473, "text": "(Smith and Eisner, 2006;", "ref_id": "BIBREF36" }, { "start": 474, "end": 486, "text": "Smith, 2006)", "ref_id": "BIBREF37" }, { "start": 610, "end": 636, "text": "(Gillenwater et al., 2010;", "ref_id": "BIBREF15" }, { "start": 637, "end": 657, "text": "Naseem et al., 2010)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We introduce a method that performs global search with certificates of -optimality for both the corpus parse and the model parameters. Our search objective is log-likelihood. We can also impose posterior constraints on the latent structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As we show, maximizing the joint loglikelihood of the parses and the parameters can be formulated as a mathematical program (MP) with a nonconvex quadratic objective and with integer linear and nonlinear constraints. Note that this objective is that of hard (Viterbi) EM-we do not marginalize over the parses as in classical EM. 1 To globally optimize the objective function, we employ a branch-and-bound algorithm that searches the continuous space of the model parameters by branching on individual parameters (see Figure 1 ). Thus, our branch-and-bound tree serves to recursively subdivide the global parameter hypercube. Each node represents a search problem over one of the resulting boxes (i.e., orthotopes).", "cite_spans": [ { "start": 329, "end": 330, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 517, "end": 525, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The crucial step is to prune nodes high in the tree by determining that their boxes cannot contain the global maximum. We compute an upper bound at each node by solving a relaxed maximization problem tailored to its box. If this upper bound is worse than our current best solution, we can prune the node. If not, we split the box again via another branching decision and retry on the two halves.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "At each node, our relaxation derives a linear programming problem (LP) that can be efficiently solved by the dual simplex method. First, we linearly relax the constraints that grammar rule probabilities sum to 1-these constraints are nonlinear in our parameters, which are log-probabilities. 
Second, we linearize the quadratic objective by applying the Reformulation Linearization Technique (RLT) (Sherali and Adams, 1990) , a method of forming tight linear relaxations of various types of MPs: the reformulation step multiplies together pairs of the original linear constraints to generate new quadratic constraints, and then the linearization step replaces quadratic terms in the new constraints with auxiliary variables.", "cite_spans": [ { "start": 397, "end": 422, "text": "(Sherali and Adams, 1990)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Finally, if the node is not pruned, we search for a better incumbent solution under that node by projecting the solution of the RLT relaxation back onto the feasible region. In the relaxation, the model parameters might sum to slightly more than one and the parses can consist of fractional dependency edges. We project in order to compute the true objective and compare with other solutions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our results demonstrate that our method can obtain higher likelihoods than Viterbi EM with random restarts. Furthermore, we show how posterior constraints inspired by Gillenwater et al. (2010) and Naseem et al. (2010) can easily be applied in our framework to obtain competitive accuracies using a simple model, the Dependency Model with Valence (Klein and Manning, 2004) . We also obtain an -optimal solution on a toy dataset.", "cite_spans": [ { "start": 167, "end": 192, "text": "Gillenwater et al. (2010)", "ref_id": "BIBREF15" }, { "start": 197, "end": 217, "text": "Naseem et al. (2010)", "ref_id": "BIBREF27" }, { "start": 346, "end": 371, "text": "(Klein and Manning, 2004)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We caution that the linear relaxations are very loose on larger boxes. Since we have many dimensions, the binary branch-and-bound tree may have to grow quite deep before the boxes become small enough to prune. This is why nonconvex quadratic optimization by LP-based branch-and-bound usually fails with more than 80 variables (Burer and Vandenbussche, 2009) . Even our smallest (toy) problems have hundreds of variables, so our experimental results mainly just illuminate the method's behavior. Nonetheless, we offer the method as a new tool which, just as for local search, might be combined with other forms of problem-specific guidance to produce more practical results.", "cite_spans": [ { "start": 326, "end": 357, "text": "(Burer and Vandenbussche, 2009)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We begin by describing how for our typical model, the Viterbi EM objective can be formulated as a mixed integer quadratic programming (MIQP) problem with nonlinear constraints (Figure 2 ).", "cite_spans": [], "ref_spans": [ { "start": 176, "end": 185, "text": "(Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "Other locally normalized log-linear generative models (Berg-Kirkpatrick et al., 2010) would have a similar formulation. In such models, the loglikelihood objective is simply a linear function of the feature counts. 
However, the objective becomes quadratic in unsupervised learning, because the feature counts are themselves unknown variables to be optimized. The feature counts are constrained to be derived from the latent variables (e.g., parses), which are unknown discrete structures that must be encoded with integer variables. The nonlinear constraints ensure that the model parameters are true log-probabilities.", "cite_spans": [ { "start": 54, "end": 85, "text": "(Berg-Kirkpatrick et al., 2010)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "Concretely, (1) specifies the Viterbi EM objective: the total log-probability of the best parse trees under the parameters \u03b8, given by a sum of log-probabilities \u03b8 m of the individual steps needed to generate the tree, as encoded by the features f m . The (nonlinear) sum-to-one constraints on the (3) will ensure that the arc variables for each sentence e s encode a valid latent dependency tree, and that the f variables count up the features of these trees. The final constraints (4) simply specify the range of possible values for the model parameters and their integer count variables. Our experiments use the dependency model with valence (DMV) (Klein and Manning, 2004 ). This generative model defines a joint distribution over the sentences and their dependency trees.", "cite_spans": [ { "start": 651, "end": 675, "text": "(Klein and Manning, 2004", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "max m \u03b8 m f m (1) s.t. m\u2208Mc exp(\u03b8 m ) = 1, \u2200c (2) A f e \u2264 b (Model constraints) (3) \u03b8 m \u2264 0, f m , e sij \u2208 Z, \u2200m, s, i, j (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "We encode the DMV using integer linear constraints on the arc variables e and feature counts f . These will constitute the model constraints in (3). The constraints must declaratively specify that the arcs form a valid dependency tree and that the resulting feature values are as defined by the DMV.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "Tree Constraints To ensure that our arc variables, e s , form a dependency tree, we employ the same single-commodity flow constraints of Magnanti and Wolsey (1994) as adapted by Martins et al. (2009) for parsing. We also use the projectivity constraints of Martins et al. (2009) .", "cite_spans": [ { "start": 178, "end": 199, "text": "Martins et al. (2009)", "ref_id": "BIBREF25" }, { "start": 257, "end": 278, "text": "Martins et al. (2009)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "The single-commodity flow constraints simultaneously enforce that each node has exactly one parent, the special root node (position 0) has no in-coming arcs, and the arcs form a connected graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "For each sentence, s, the variable \u03c6 sij indicates the amount of flow traversing the arc from i to j in sentence s. 
The constraints below specify that the root node emits N s units of flow (5), that one unit of flow is consumed by each each node (6), that the flow is zero on each disabled arc 7, and that the arcs are binary variables (8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Single-commodity flow (Magnanti & Wolsey, 1994) Ns j=1 \u03c6 s0j = N s , \u2200j (5) Ns i=0 \u03c6 sij \u2212 Ns k=1 \u03c6 sjk = 1, \u2200j (6) \u03c6 sij \u2264 N s e sij , \u2200i, j", "eq_num": "(7)" } ], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "e sij \u2208 {0, 1}, \u2200i, j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "Projectivity is enforced by adding a constraint (9) for each arc ensuring that no edges will cross that arc if it is enabled. X ij is the set of arcs (k, l) that cross the arc (i, j).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "Projectivity (Martins et al., 2009 )", "cite_spans": [ { "start": 13, "end": 34, "text": "(Martins et al., 2009", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(k,l)\u2208X ij e skl \u2264 N s (1 \u2212 e sij ), \u2200s, i, j", "eq_num": "(9)" } ], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "DMV Feature Counts The DMV generates a dependency tree recursively as follows. First the head word of the sentence is generated, t \u223c Discrete(\u03b8 root ), where \u03b8 root is a subvector of \u03b8. To generate its children on the left side, we flip a coin to decide whether an adjacent child is generated, d \u223c Bernoulli(\u03b8 dec.L.0,t ). If the coin flip d comes up continue, we sample the word of that child as t \u223c Discrete(\u03b8 child.L,t ). We continue generating non-adjacent children in this way, using coin weights \u03b8 dec.L.\u2265 1,t until the coin comes up stop. We repeat this procedure to generate children on the right side, using the model parameters \u03b8 dec.R.0,t , \u03b8 child.R,t , and \u03b8 dec.R.\u2265 1,t . For each new child, we apply this process recursively to generate its descendants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "The feature count variables for the DMV are encoded in our MP as various sums over the edge variables. We begin with the root/child feature counts. The constraint (10) defines the feature count for model parameter \u03b8 root,t as the number of all enabled arcs connecting the root node to a word of type t, summing over all sentences s. The constraint in (11) similarly defines f child.L,t,t to be the number of enabled arcs connecting a parent of type t to a left child of type t . 
W st is the index set of tokens in sentences s with word type t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "DMV root/child feature counts", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "f root,t = Ns s=1 j\u2208Wst e s0j , \u2200t (10) f child.L,t,t = Ns s=1 j 0. This ensures that its entire subregion will not yield a -better solution than the current incumbent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "The overall optimistic bound is given by the worst optimistic bound of all current leaf nodes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "The projecting step, if the node is not pruned, projects the solution of the relaxation back to the feasible region, replacing the current incumbent if this projection provides a better lower bound.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "In the branching step, we choose a variable \u03b8 m on which to divide. Each of the child nodes receives a lower \u03b8 min m and upper \u03b8 max m bound for \u03b8 m . The child subspaces partition the parent subspace.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "The search tree is defined by a variable ordering and the splitting procedure. We do binary branching on the variable \u03b8 m with the highest regret, defined as z m \u2212 \u03b8 m f m , where z m is the auxiliary objective variable we will introduce in \u00a7 4.2. Since \u03b8 m is a log-probability, we split its current range at the midpoint in probability space, log((exp \u03b8 min m + exp \u03b8 max m )/2). We perform best-first search, ordering the nodes by the the optimistic bound of their parent. We also use the LP-guided rule (Martin, 2000; Achterberg, 2007, section 6 .1) to perform depth-first plunges in search of better incumbents.", "cite_spans": [ { "start": 507, "end": 521, "text": "(Martin, 2000;", "ref_id": "BIBREF24" }, { "start": 522, "end": 549, "text": "Achterberg, 2007, section 6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The Constrained Optimization Task", "sec_num": "2" }, { "text": "The relaxation in the bounding step computes an optimistic bound for a subspace of the model parameters. This upper bound would ideally be not much greater than the true maximum achievable on that region, but looser upper bounds are generally faster to compute.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relaxations", "sec_num": "4" }, { "text": "We present successive relaxations to the original nonconvex mixed integer quadratic program with nonlinear constraints from (1)-(4). First, we show how the nonlinear sum-to-one constraints can be relaxed into linear constraints and tightened. Second, we apply a classic approach to bound the nonconvex quadratic objective by a linear concave envelope. Finally, we present our full relaxation based on the Reformulation Linearization Technique (RLT) (Sherali and Adams, 1990) . 
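Each of these relaxations is used inside the branch-and-bound loop described above. As a minimal illustrative sketch (not our actual implementation), the control flow is roughly the following, where solve_relaxation stands in for the LP relaxations developed in this section and project stands in for the projections of Section 5; all function and variable names here are ours, chosen only for exposition:

```python
import heapq
import itertools
import math

def branch_and_bound(root_box, solve_relaxation, project, epsilon=0.01):
    """Best-first branch-and-bound over boxes of log-probabilities.
    A box maps each model parameter m to bounds (lo, hi) on theta_m.
    solve_relaxation(box) -> (upper_bound, theta, feats, z) is a placeholder
    for an LP relaxation; project(theta, feats) -> (log_likelihood, solution)
    is a placeholder for the projection step."""
    counter = itertools.count()                    # tie-breaker for the heap
    incumbent_value, incumbent = float("-inf"), None
    ub, theta, feats, z = solve_relaxation(root_box)
    frontier = [(-ub, next(counter), root_box, theta, feats, z)]
    while frontier:
        neg_ub, _, box, theta, feats, z = heapq.heappop(frontier)
        if -neg_ub <= incumbent_value + epsilon:
            break                                  # epsilon-optimal: no remaining box can do better
        # Projecting step: repair the relaxed solution; keep it if it is better.
        value, solution = project(theta, feats)
        if value > incumbent_value:
            incumbent_value, incumbent = value, solution
        # Branching step: split on the parameter with the highest regret,
        # at the midpoint of its range in probability space.
        m = max(box, key=lambda m: z[m] - theta[m] * feats[m])
        lo, hi = box[m]
        mid = math.log((math.exp(lo) + math.exp(hi)) / 2.0)
        for child_range in ((lo, mid), (mid, hi)):
            child_box = dict(box)
            child_box[m] = child_range
            child_ub, c_theta, c_feats, c_z = solve_relaxation(child_box)
            if child_ub > incumbent_value + epsilon:   # bounding step: prune otherwise
                heapq.heappush(frontier,
                               (-child_ub, next(counter), child_box, c_theta, c_feats, c_z))
    return incumbent_value, incumbent
```

The sketch orders nodes by their own bounds rather than their parents' and omits the LP-guided depth-first plunges, but the prune/project/branch structure is the same.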
We solve these LPs by the dual simplex algorithm.", "cite_spans": [ { "start": 449, "end": 474, "text": "(Sherali and Adams, 1990)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Relaxations", "sec_num": "4" }, { "text": "In this section, we use cutting planes to create a linear relaxation for the sum-to-one constraint (2). When relaxing a constraint, we must ensure that any assignment of the variables that was feasible (i.e. respected the constraints) in the original problem must also be feasible in the relaxation. In most cases, the relaxation is not perfectly tight and so will have an enlarged space of feasible solutions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relaxing the sum-to-one constraint", "sec_num": "4.1" }, { "text": "We begin by weakening constraint (2) to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relaxing the sum-to-one constraint", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "m\u2208Mc exp(\u03b8 m ) \u2264 1", "eq_num": "(17)" } ], "section": "Relaxing the sum-to-one constraint", "sec_num": "4.1" }, { "text": "The optimal solution under (17) still satisfies the original equality constraint (2) because of the maximization. We now relax (17) by approximating the surface z = m\u2208Mc exp(\u03b8 m ) by the max of N lower-bounding linear functions on R |Mc| . Instead of requiring z \u2264 1, we only require each of these lower bounds to be \u2264 1, slightly enlarging the feasible space into a convex polytope. Figure 3a shows the feasible region constructed from N =3 linear functions on two logprobabilities \u03b8 1 , \u03b8 2 .", "cite_spans": [], "ref_spans": [ { "start": 384, "end": 393, "text": "Figure 3a", "ref_id": null } ], "eq_spans": [], "section": "Relaxing the sum-to-one constraint", "sec_num": "4.1" }, { "text": "Formally, for each c, we define the i th linear lower bound (i = 1, . . . , N ) to be the tangent hyperplane at some point\u03b8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relaxing the sum-to-one constraint", "sec_num": "4.1" }, { "text": "(i) c = [\u03b8 (i) c,1 , . . . ,\u03b8 (i) c,|Mc| ] \u2208 R |Mc| , where each coordinate is a log-probabilit\u0177 \u03b8 (i) c,m < 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relaxing the sum-to-one constraint", "sec_num": "4.1" }, { "text": "We require each of these linear functions to be \u2264 1:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relaxing the sum-to-one constraint", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Sum-to-one Relaxation m\u2208Mc \u03b8 m + 1 \u2212\u03b8 (i) c,m exp \u03b8 (i) c,m \u2264 1, \u2200i, \u2200c", "eq_num": "(18)" } ], "section": "Relaxing the sum-to-one constraint", "sec_num": "4.1" }, { "text": "4.2 \"Relaxing\" the objective Figure 3 : In (a), the area under the curve corresponds to those points (\u03b8 1 , \u03b8 2 ) that satisfy (17) (z \u2264 1), with equality (2) achieved along the curve (z = 1). The shaded area shows the enlarged feasible region under the linear relaxation. In (b), the curved lower surface represents a single product term in the objective. 
The piecewise-linear upper surface is its concave envelope (raised by 20 for illustration; in reality they touch).", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 37, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Relaxing the sum-to-one constraint", "sec_num": "4.1" }, { "text": "were fixed, the objective would become linear in the latent features. Although the parameters are not fixed, the branch-and-bound algorithm does box them into a small region, where the quadratic objective is \"more linear.\" Since it is easy to maximize a concave function, we will maximize the concave envelope-the concave function that most tightly upper-bounds our objective over the region. This turns out to be piecewise linear and can be maximized with an LP solver. Smaller regions yield tighter bounds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relaxing the sum-to-one constraint", "sec_num": "4.1" }, { "text": "Each node of the branch-and-bound tree specifies a region via bounds constraints \u03b8 min m < \u03b8 m < \u03b8 max m , \u2200m. In addition, we have known bounds f min m \u2264 f m \u2264 f max m , \u2200m for the count variables.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relaxing the sum-to-one constraint", "sec_num": "4.1" }, { "text": "The Reformulation Linearization Technique (RLT) 2 (Sherali and Adams, 1990 ) is a method of forming tighter relaxations of various types of MPs. The basic method reformulates the problem by adding products of existing constraints. The quadratic terms in the objective and in these new constraints are redefined as auxiliary variables, thereby linearizing the program. In this section, we will show how the RLT can be applied to our grammar induction problem and contrast it with the concave envelope relaxation presented in section 4.2.", "cite_spans": [ { "start": 50, "end": 74, "text": "(Sherali and Adams, 1990", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "Consider the original MP in equations (1) -(4), with the nonlinear sum-to-one constraints in (2) replaced by our linear constraints proposed in (18). If we remove the integer constraints in (4), the result is a quadratic program with purely linear constraints. Such problems have the form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "max x T Qx (22) s.t. Ax \u2264 b (23) \u2212 \u221e < L i \u2264 x i \u2264 U i < \u221e, \u2200i", "eq_num": "(24)" } ], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "where the variables are x \u2208 R n , A is an m \u00d7 n matrix, and b \u2208 R m , and Q is an n \u00d7 n indefinite 3 matrix. Without loss of generality we assume Q is symmetric. The application of the RLT here was first considered by Sherali and Tuncbilek (1995) . 
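Before giving the general construction, a single pair of variables illustrates both RLT steps. Suppose x_1 ∈ [L_1, U_1] and x_2 ∈ [L_2, U_2] (notation local to this example). Multiplying two of the nonnegative bound factors, and then substituting an auxiliary variable for the product x_1 x_2, gives for instance:

```latex
(U_1 - x_1)(x_2 - L_2) \ge 0
  \;\Longrightarrow\; U_1 x_2 - U_1 L_2 - x_1 x_2 + L_2 x_1 \ge 0
  \;\Longrightarrow\; w_{12} \le L_2 x_1 + U_1 x_2 - U_1 L_2 ,
  \qquad \text{where } w_{12} \equiv x_1 x_2 .
```

The three remaining pairings of bound factors give the second upper bound w_{12} ≤ U_2 x_1 + L_1 x_2 − L_1 U_2 and two analogous lower bounds; together these four linear inequalities are exactly the convex and concave envelopes of the product over the box, which foreshadows the observation below that the RLT constraints imply the envelope constraints (20)-(21).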
For convenience of presentation, we represent both the linear inequality constraints and the bounds constraints, under a different parameterization using the matrix G and vector g.", "cite_spans": [ { "start": 218, "end": 246, "text": "Sherali and Tuncbilek (1995)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "(bi \u2212 Aix) \u2265 0, 1 \u2264 i \u2264 m (U k \u2212 x k ) \u2265 0, 1 \u2264 k \u2264 n (\u2212L k + x k ) \u2265 0, 1 \u2264 k \u2264 n \u2261 (gi \u2212 Gix) \u2265 0, 1 \u2264 i \u2264 m + 2n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "The reformulation step forms all possible products of these linear constraints and then adds them to the original quadratic program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "(g i \u2212 G i x)(g j \u2212 G j x) \u2265 0, \u22001 \u2264 i \u2264 j \u2264 m + 2n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "In the linearization step, we replace all quadratic terms in the quadratic objective and new quadratic constraints with auxiliary variables:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "w ij \u2261 x i x j , \u22001 \u2264 i \u2264 j \u2264 n", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "This yields the following RLT relaxation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "RLT Relaxation max 1\u2264i\u2264j\u2264n Q ij w ij (25) s.t. g i g j \u2212 n k=1 g j G ik x k \u2212 n k=1 g i G jk x k + n k=1 n l=1 G ik G jl w kl \u2265 0, \u22001 \u2264 i \u2264 j \u2264 m + 2n", "eq_num": "(26)" } ], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "Notice above that we have omitted the original inequality constraints (23) and bounds (24), because they are fully enforced by the new RLT constraints (26) from the reformulation step (Sherali and Tuncbilek, 1995) . In our experiments, we keep the original constraints and instead explore subsets of the RLT constraints. If the original QP contains equality constraints of the form G e x = g e , then we can form constraints by multiplying this one by each variable x i . This gives us the following new set of constraints, for each equality constraint e:", "cite_spans": [ { "start": 184, "end": 213, "text": "(Sherali and Tuncbilek, 1995)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "g e x i + n j=1 \u2212G ej w ij = 0, \u22001 \u2264 i \u2264 n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "Theoretical Properties The new constraints in eq. 
(26) will impose the concave envelope constraints (20)-(21) (Anstreicher, 2009) .", "cite_spans": [ { "start": 110, "end": 129, "text": "(Anstreicher, 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "The constraints presented above are considered to be first-level constraints corresponding to the first-level variables w ij . However, the same technique can be applied repeatedly to produce polynomial constraints of higher degree. These higher level constraints/variables have been shown to provide increasingly tighter relaxations (Sherali and Adams, 1990) at the cost of a large number of variables and constraints. In the case where x \u2208 {0, 1} n the degree-n RLT constraints will restrict to the convex hull of the feasible solutions (Sherali and Adams, 1990) . This is in direct contrast to the concave envelope relaxation presented in section 4.2 which relaxes to the convex hull of each quadratic term independently. This demonstrates the key intuition of the RLT relaxation: The products of constraints are implied (and unnecessary) in the original variable space. Yet when we project to a higherdimentional space by including the auxiliary variables, the linearized constraints cut off portions of the feasible region given by only the concave envelope relaxation in eqs. (20)-(21) .", "cite_spans": [ { "start": 334, "end": 359, "text": "(Sherali and Adams, 1990)", "ref_id": "BIBREF33" }, { "start": 539, "end": 564, "text": "(Sherali and Adams, 1990)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Reformulation Linearization Technique", "sec_num": "4.3" }, { "text": "It is a simple extension to impose posterior constraints within our framework. Here we emphasize constraints that are analogous to the universal linguistic constraints from Naseem et al. (2010) . Since we optimize the Viterbi EM objective, we directly constrain the counts in the single corpus parse rather than expected counts from a distribution over parses. Let E be the index set of model parameters corresponding to edge types from Table 1 of Naseem et al. (2010) , and N s be the number of words in the sth sentence. We impose the constraint that 75% of edges come from E:", "cite_spans": [ { "start": 173, "end": 193, "text": "Naseem et al. (2010)", "ref_id": "BIBREF27" }, { "start": 448, "end": 468, "text": "Naseem et al. (2010)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Adding Posterior Constraints", "sec_num": "4.4" }, { "text": "m\u2208E f m \u2265 0.75 S s=1 N s .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding Posterior Constraints", "sec_num": "4.4" }, { "text": "A pessimistic bound, from the projecting step, will correspond to a feasible but not necessarily optimal solution to the original problem. We propose several methods for obtaining pessimistic bounds during the branch-and-bound search, by projecting and improving the solutions found by the relaxation. 
A solution to the relaxation may be infeasible in the original problem for two reasons: the model parameters might not sum to one, and/or the parse may contain fractional edges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Projections", "sec_num": "5" }, { "text": "For each set of model parameters M c that should sum-to-one, we project the model parameters onto the M c \u2212 1 simplex by one of two methods: (1) normalize the infeasible parameters or (2) find the point on the simplex that has minimum Euclidean distance to the infeasible parameters using the algorithm of Chen and Ye (2011) . For both methods, we can optionally apply add-\u03bb smoothing before projecting.", "cite_spans": [ { "start": 306, "end": 324, "text": "Chen and Ye (2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Model Parameters", "sec_num": null }, { "text": "Parses Since we are interested in projecting the fractional parse onto the space of projective spanning trees, we can simply employ a dynamic programming parsing algorithm (Eisner and Satta, 1999) where the weight of each edge is given as the fraction of the edge variable.", "cite_spans": [ { "start": 172, "end": 196, "text": "(Eisner and Satta, 1999)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Model Parameters", "sec_num": null }, { "text": "Only one of these projection techniques is needed. We then either use parsing to fill in the optimal parse trees given the projected model parameters, or use supervised parameter estimation to fill in the optimal model parameters given the projected parses. These correspond to the Viterbi E step and M step, respectively. We can locally improve the projected solution by continuing with a few additional iterations of Viterbi EM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Parameters", "sec_num": null }, { "text": "Related models could use very similar projection techniques. Given a relaxed joint solution to the parameters and the latent variables, one must be able to project it to a nearby feasible one, by projecting either the fractional parameters or the fractional latent variables into the feasible space and then solving exactly for the other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Parameters", "sec_num": null }, { "text": "The goal of this work was to better understand and address the non-convexity of maximum-likelihood training with latent variables, especially parses. Gimpel and Smith (2012) proposed a concave model for unsupervised dependency parsing using IBM Model 1. This model did not include a tree constraint, but instead initialized EM on the DMV. By contrast, our approach incorporates the tree constraints directly into our convex relaxation and embeds the relaxation in a branch-and-bound algorithm capable of solving the original DMV maximum-likelihood estimation problem.", "cite_spans": [ { "start": 150, "end": 173, "text": "Gimpel and Smith (2012)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Spectral learning constitutes a wholly different family of consistent estimators, which achieve efficiency because they sidestep maximizing the nonconvex likelihood function. Hsu et al. (2009) introduced a spectral learner for a large class of HMMs. For supervised parsing, spectral learning has been used to learn latent variable PCFGs (Cohen et al., 2012) and hidden-state dependency grammars (Luque et al., 2012) . 
Alas, there are not yet any spectral learning methods that recover latent tree structure, as in grammar induction.", "cite_spans": [ { "start": 175, "end": 192, "text": "Hsu et al. (2009)", "ref_id": "BIBREF18" }, { "start": 337, "end": 357, "text": "(Cohen et al., 2012)", "ref_id": "BIBREF12" }, { "start": 395, "end": 415, "text": "(Luque et al., 2012)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Several integer linear programming (ILP) formulations of dependency parsing (Riedel and Clarke, 2006; Martins et al., 2009; Riedel et al., 2012) inspired our definition of grammar induction as a MP. Recent work uses branch-and-bound for decoding with non-local features (Qian and Liu, 2013) . These differ from our work by treating the model parameters as constants, thereby yielding a linear objective.", "cite_spans": [ { "start": 76, "end": 101, "text": "(Riedel and Clarke, 2006;", "ref_id": "BIBREF31" }, { "start": 102, "end": 123, "text": "Martins et al., 2009;", "ref_id": "BIBREF25" }, { "start": 124, "end": 144, "text": "Riedel et al., 2012)", "ref_id": "BIBREF32" }, { "start": 270, "end": 290, "text": "(Qian and Liu, 2013)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "For semi-supervised dependency parsing, Wang et al. (2008) used a convex objective, combining unsupervised least squares loss and a supervised large margin loss, This does not apply to our unsupervised setting. Branch-and-bound has also been applied to semi-supervised SVM training, a nonconvex search problem (Chapelle et al., 2007) , with a relaxation derived from the dual.", "cite_spans": [ { "start": 40, "end": 58, "text": "Wang et al. (2008)", "ref_id": "BIBREF41" }, { "start": 310, "end": 333, "text": "(Chapelle et al., 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "We first analyze the behavior of our method on a toy synthetic dataset. Next, we compare various parameter settings for branch-and-bound by estimating the total solution time. Finally, we compare our search method to Viterbi EM on a small subset of the Penn Treebank.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "7" }, { "text": "All our experiments use the DMV for unsupervised dependency parsing of part-of-speech (POS) tag sequences. For Viterbi EM we initialize the parameters of the model uniformly, breaking parser ties randomly in the first E-step (Spitkovsky et al., 2010b) . This initializer is state-of-the-art for Viterbi EM. We also apply add-one smoothing during each M-step. We use random restarts, and select the model with the highest likelihood.", "cite_spans": [ { "start": 225, "end": 251, "text": "(Spitkovsky et al., 2010b)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "7" }, { "text": "We add posterior constraints to Viterbi EM's Estep. First, we run a relaxed linear programming (LP) parser, then project the (possibly fractional) parses back to the feasible region. If the resulting parse does not respect the posterior constraints, we discard it. The posterior constraint in the LP parser is tighter 4 than the one used in the true optimization problem, so the projections tends to be feasible under the true (looser) posterior constraints. In our experiments, all but one projection respected the constraints. 
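When we project relaxed model parameters rather than parses, the simpler of the two methods from Section 5 is just add-λ smoothing followed by renormalization; a minimal sketch (names are ours):

```python
import math

def renormalize_log_params(log_probs, smoothing=0.0):
    """Project one block of relaxed log-parameters {theta_m : m in M_c} back
    onto the simplex: optional add-lambda smoothing in probability space,
    then renormalization.  (The alternative in Section 5 is a Euclidean
    projection onto the simplex.)"""
    probs = [math.exp(theta) + smoothing for theta in log_probs]
    total = sum(probs)
    return [math.log(p / total) for p in probs]

# e.g., a relaxed block whose probabilities sum to slightly more than one:
# renormalize_log_params([math.log(0.7), math.log(0.4)])
```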
We solve all LPs with CPLEX.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "7" }, { "text": "For our toy example, we generate sentences from a synthetic DMV over three POS tags (Verb, Noun, Adjective) with parameters chosen to favor short sentences with English word order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synthetic Data", "sec_num": "7.1" }, { "text": "In Figure 4 we show that the quality of the root relaxation increases as we approach the full set of RLT constraints. That the number of possible RLT constraints increases quadratically with the length of the corpus poses a serious challenge. For just 20 sentences from this synthetic model, the RLT generates 4,056,498 constraints.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Synthetic Data", "sec_num": "7.1" }, { "text": "For a single run of branch-and-bound, Figure 5 shows the global upper and lower bounds over time. 5 We consider five relaxations, each using only a subset of the RLT constraints. Max.0k uses only the concave envelope (20)-(21). Max.1k uses the concave envelope and also randomly samples 1,000 other RLT constraints, and so on for Max.10k and Max.100k. Obj.Filter includes all constraints with a nonzero coefficient for one of the RLT variables z m from the linearized objective. The rightmost lines correspond to RLT Max.10k: despite providing the tightest (local) bound at each node, it processed only 110 nodes in the time it took RLT Max.1k to process 1164. RLT Max.0k achieves the best balance of tight bounds and speed per node.", "cite_spans": [ { "start": 98, "end": 99, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 38, "end": 46, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Synthetic Data", "sec_num": "7.1" }, { "text": "It is prohibitively expensive to repeatedly run our algorithm to completion with a variety of parameter settings. Instead, we estimate the size of the branch-and-bound tree and the solution time using a high-variance estimate that is effective for comparisons (Lobjois and Lema\u00eetre, 1998) .", "cite_spans": [ { "start": 260, "end": 288, "text": "(Lobjois and Lema\u00eetre, 1998)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Comparing branch-and-bound strategies", "sec_num": "7.2" }, { "text": "Given a fixed set of parameters for our algorithm and an -optimality stopping criterion, we Table 1 : Branch-and-bound node count and completion time estimates. Each standard deviation was close in magnitude to the estimate itself. We ran for 8 hours, stopping at 10,000 samples on 8 synthetic sentences.", "cite_spans": [], "ref_spans": [ { "start": 92, "end": 99, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Comparing branch-and-bound strategies", "sec_num": "7.2" }, { "text": "can view the branch-and-bound tree T as fixed and finite in size. We wish to estimate some cost associated with the tree C(T ) = \u03b1\u2208nodes(T ) f (\u03b1). Letting f (\u03b1) = 1 estimates the number of nodes; if f (\u03b1) is the time to solve a node, then we estimate the total solution time using the Monte Carlo method of Knuth (1975) . Table 1 gives these estimates, for the same five RLT relaxations. 
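Concretely, a single sample of this estimator follows one random root-to-leaf probe of the tree, multiplying its running weight by the number of children at each branching node; the average over many such probes is an unbiased, if high-variance, estimate of C(T). A minimal sketch (function names are ours):

```python
import random

def knuth_sample(root, children, cost):
    """One sample of Knuth's (1975) estimator of C(T) = sum of cost(node) over
    all nodes of a fixed tree T.  `children(node)` expands a branch-and-bound
    node (its child boxes, or none if it would be pruned); `cost(node)` is 1
    for node counts, or the node's solve time for time estimates."""
    node, weight, estimate = root, 1.0, 0.0
    while True:
        estimate += weight * cost(node)
        kids = children(node)
        if not kids:
            return estimate
        weight *= len(kids)          # the probe reaches each child with probability 1/len(kids)
        node = random.choice(kids)
```

The variance of this estimator is what the large standard deviations reported with Table 1 reflect.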
Obj.Filter yields the smallest estimated tree size.", "cite_spans": [ { "start": 308, "end": 320, "text": "Knuth (1975)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 323, "end": 330, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Comparing branch-and-bound strategies", "sec_num": "7.2" }, { "text": "In this section, we compare our global search method to Viterbi EM with random restarts each with or without posterior constraints. We use 200 sentences of no more than 10 tokens from the WSJ portion of the Penn Treebank. We reduce the treebank's gold part-of-speech (POS) tags to a universal set of 12 tags (Petrov et al., 2012) plus a tag for auxiliaries, ignoring punctuation. Each search method is run for 8 hours. We obtain the initial incumbent solution for branch-and-bound by running Viterbi EM for 45 minutes. The average time to solve a node's relaxation ranges from 3 seconds for RLT Max.0k to 42 seconds for RLT Max.100k. Figure 6a shows the log-likelihood of the incumbent solution over time. In our global search method, like Viterbi EM, the posterior constraints lead to lower log-likelihoods. RLT Max.0k finds the highest log-likelihood solution. Figure 6b compares the unlabeled directed dependency accuracy of the incumbent solution. In both global and local search, the posterior constraints lead to higher accuracies. Viterbi EM with posterior constraints demonstrates the oscillation of incumbent accuracy: starting at 58.02% accuracy, it finds several high accuracy solutions early on (61.02%), but quickly abandons them to increase likelihood, yielding a final accuracy of 60.65%. RLT Max.0k with posterior constraints obtains the highest overall accuracy of 61.09% at ", "cite_spans": [ { "start": 308, "end": 329, "text": "(Petrov et al., 2012)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 634, "end": 643, "text": "Figure 6a", "ref_id": "FIGREF6" }, { "start": 863, "end": 872, "text": "Figure 6b", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Real Data", "sec_num": "7.3" }, { "text": "In principle, our branch-and-bound method can approach -optimal solutions to Viterbi training of locally normalized generative models, including the NP-hard case of grammar induction with the DMV. The method can also be used with posterior constraints or a regularized objective. Future work includes algorithmic improvements for solving the relaxation and the development of tighter relaxations. The Dantzig-Wolfe decomposition (Dantzig and Wolfe, 1960) or Lagrangian Relaxation (Held and Karp, 1970) might satisfy both of these goals by pushing the integer tree constraints into a subproblem solved by a dynamic programming parser. 
Recent work on semidefinite relaxations (Anstreicher, 2009) suggests they may provide tighter bounds at the expense of greater computation time.", "cite_spans": [ { "start": 429, "end": 454, "text": "(Dantzig and Wolfe, 1960)", "ref_id": "BIBREF13" }, { "start": 480, "end": 501, "text": "(Held and Karp, 1970)", "ref_id": "BIBREF17" }, { "start": 674, "end": 693, "text": "(Anstreicher, 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "Perhaps even more important than tightening the bounds at each node are search heuristics (e.g., surface cues) and priors (e.g., universal grammar) that guide our global search by deciding which node to expand next (Chomsky and Lasnik, 1993) .", "cite_spans": [ { "start": 215, "end": 241, "text": "(Chomsky and Lasnik, 1993)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "8" }, { "text": "This objective might not be a great sacrifice: Spitkovsky et al. (2010b) present evidence that hard EM can outperform soft EM for grammar induction in a hill-climbing setting. We use it because it is a quadratic objective. However, maximizing it remains NP-hard(Cohen and Smith, 2010).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The key idea underlying the RLT was originally introduced in Adams andSherali (1986) for 0-1 quadratic programming. It has since been extended to various other settings; seeSherali and Liberti (2008) for a complete survey.3 In the general case, that Q is indefinite causes this program to be nonconvex, making this problem NP-hard to solve(Vavasis, 1991;Pardalos, 1991).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "80% of edges must come from E as opposed to 75%. 5 The initial incumbent solution for branch-and-bound is obtained by running Viterbi EM with 10 random restarts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Constraint integer programming", "authors": [ { "first": "Tobias", "middle": [], "last": "Achterberg", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tobias Achterberg. 2007. Constraint integer program- ming. Ph.D. thesis, TU Berlin.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A tight linearization and an algorithm for zero-one quadratic programming problems", "authors": [ { "first": "P", "middle": [], "last": "Warren", "suffix": "" }, { "first": "", "middle": [], "last": "Adams", "suffix": "" }, { "first": "D", "middle": [], "last": "Hanif", "suffix": "" }, { "first": "", "middle": [], "last": "Sherali", "suffix": "" } ], "year": 1986, "venue": "October. ArticleType: research-article / Full publication date: Oct", "volume": "32", "issue": "", "pages": "1274--1290", "other_ids": {}, "num": null, "urls": [], "raw_text": "Warren P. Adams and Hanif D. Sherali. 1986. A tight linearization and an algorithm for zero-one quadratic programming problems. Management Science, 32(10):1274-1290, October. 
ArticleType: research-article / Full publication date: Oct., 1986 / Copyright 1986 INFORMS.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Semidefinite programming versus the reformulation-linearization technique for nonconvex quadratically constrained quadratic programming", "authors": [ { "first": "Kurt", "middle": [], "last": "Anstreicher", "suffix": "" } ], "year": 2009, "venue": "Journal of Global Optimization", "volume": "43", "issue": "2", "pages": "471--484", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kurt Anstreicher. 2009. Semidefinite programming versus the reformulation-linearization technique for nonconvex quadratically constrained quadratic pro- gramming. Journal of Global Optimization, 43(2):471-484.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Painless unsupervised learning with features", "authors": [ { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Bouchard-C\u00f4t\u00e9", "suffix": "" }, { "first": "John", "middle": [], "last": "Denero", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Denero", "suffix": "" }, { "first": "", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taylor Berg-Kirkpatrick, Alexandre Bouchard-C\u00f4t\u00e9, DeNero, John DeNero, and Dan Klein. 2010. Pain- less unsupervised learning with features. In Proc. of NAACL, June.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Globally solving box-constrained nonconvex quadratic programs with semidefinite-based finite branch-andbound", "authors": [ { "first": "Samuel", "middle": [], "last": "Burer", "suffix": "" }, { "first": "Dieter", "middle": [], "last": "Vandenbussche", "suffix": "" } ], "year": 2009, "venue": "Computational Optimization and Applications", "volume": "43", "issue": "2", "pages": "181--195", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samuel Burer and Dieter Vandenbussche. 2009. Glob- ally solving box-constrained nonconvex quadratic programs with semidefinite-based finite branch-and- bound. Computational Optimization and Applica- tions, 43(2):181-195.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Branch and bound for semisupervised support vector machines", "authors": [ { "first": "Olivier", "middle": [], "last": "Chapelle", "suffix": "" }, { "first": "Vikas", "middle": [], "last": "Sindhwani", "suffix": "" }, { "first": "S", "middle": [], "last": "Sathiya Keerthi", "suffix": "" } ], "year": 2007, "venue": "Proc. of NIPS 19", "volume": "", "issue": "", "pages": "217--224", "other_ids": {}, "num": null, "urls": [], "raw_text": "Olivier Chapelle, Vikas Sindhwani, and S. Sathiya Keerthi. 2007. Branch and bound for semi- supervised support vector machines. In Proc. of NIPS 19, pages 217-224. MIT Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Statistical language learning", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak. 1993. Statistical language learning. 
MIT press.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Principles and parameters theory", "authors": [ { "first": "Noam", "middle": [], "last": "Chomsky", "suffix": "" }, { "first": "Howard", "middle": [], "last": "Lasnik", "suffix": "" } ], "year": 1993, "venue": "Syntax: An International Handbook of Contemporary Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Chomsky and Howard Lasnik. 1993. Princi- ples and parameters theory. In Syntax: An Interna- tional Handbook of Contemporary Research. Berlin: de Gruyter.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction", "authors": [ { "first": "Shay", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2009, "venue": "Proc. of HLT-NAACL", "volume": "", "issue": "", "pages": "74--82", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shay Cohen and Noah A. Smith. 2009. Shared logis- tic normal distributions for soft parameter tying in unsupervised grammar induction. In Proc. of HLT- NAACL, pages 74-82, June.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Viterbi training for PCFGs: Hardness results and competitiveness of uniform initialization", "authors": [ { "first": "Shay", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2010, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "1502--1511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shay Cohen and Noah A. Smith. 2010. Viterbi training for PCFGs: Hardness results and competitiveness of uniform initialization. In Proc. of ACL, pages 1502- 1511, July.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Logistic normal priors for unsupervised probabilistic grammar induction", "authors": [ { "first": "S", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "K", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2009, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. B. Cohen, K. Gimpel, and N. A. Smith. 2009. Lo- gistic normal priors for unsupervised probabilistic grammar induction. In Proceedings of NIPS.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Spectral learning of latent-variable PCFGs", "authors": [ { "first": "B", "middle": [], "last": "Shay", "suffix": "" }, { "first": "Karl", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Stratos", "suffix": "" }, { "first": "Dean", "middle": [ "P" ], "last": "Collins", "suffix": "" }, { "first": "Lyle", "middle": [], "last": "Foster", "suffix": "" }, { "first": "", "middle": [], "last": "Ungar", "suffix": "" } ], "year": 2012, "venue": "Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "223--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shay B. Cohen, Karl Stratos, Michael Collins, Dean P. Foster, and Lyle Ungar. 2012. Spectral learning of latent-variable PCFGs. In Proc. of ACL (Volume 1: Long Papers), pages 223-231. 
Association for Com- putational Linguistics, July.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Decomposition principle for linear programs", "authors": [ { "first": "B", "middle": [], "last": "George", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Dantzig", "suffix": "" }, { "first": "", "middle": [], "last": "Wolfe", "suffix": "" } ], "year": 1960, "venue": "Operations Research", "volume": "8", "issue": "1", "pages": "101--111", "other_ids": {}, "num": null, "urls": [], "raw_text": "George B. Dantzig and Philip Wolfe. 1960. Decom- position principle for linear programs. Operations Research, 8(1):101-111, January.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Efficient parsing for bilexical context-free grammars and head automaton grammars", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" }, { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" } ], "year": 1999, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "457--464", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner and Giorgio Satta. 1999. Efficient pars- ing for bilexical context-free grammars and head au- tomaton grammars. In Proc. of ACL, pages 457- 464, June.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Sparsity in dependency grammar induction", "authors": [ { "first": "Jennifer", "middle": [], "last": "Gillenwater", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Joo", "middle": [], "last": "Graa", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL 2010 Conference Short Papers", "volume": "", "issue": "", "pages": "194--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jennifer Gillenwater, Kuzman Ganchev, Joo Graa, Fer- nando Pereira, and Ben Taskar. 2010. Sparsity in dependency grammar induction. In Proceedings of the ACL 2010 Conference Short Papers, pages 194-199. Association for Computational Linguis- tics, July.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Concavity and initialization for unsupervised dependency parsing", "authors": [ { "first": "K", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2012, "venue": "Proc. of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Gimpel and N. A. Smith. 2012. Concavity and ini- tialization for unsupervised dependency parsing. In Proc. of NAACL.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The travelingsalesman problem and minimum spanning trees", "authors": [ { "first": "M", "middle": [], "last": "Held", "suffix": "" }, { "first": "R", "middle": [ "M" ], "last": "Karp", "suffix": "" } ], "year": 1970, "venue": "Operations Research", "volume": "18", "issue": "6", "pages": "1138--1162", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Held and R. M. Karp. 1970. The traveling- salesman problem and minimum spanning trees. 
Operations Research, 18(6):1138-1162.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A spectral algorithm for learning hidden markov models", "authors": [ { "first": "D", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "S", "middle": [], "last": "Kakade", "suffix": "" }, { "first": "T", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2009, "venue": "COLT 2009 -The 22nd Conference on Learning Theory", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Hsu, S. M Kakade, and T. Zhang. 2009. A spec- tral algorithm for learning hidden markov models. In COLT 2009 -The 22nd Conference on Learning Theory.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Corpusbased induction of syntactic structure: Models of dependency and constituency", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2004, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "478--485", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher Manning. 2004. Corpus- based induction of syntactic structure: Models of de- pendency and constituency. In Proc. of ACL, pages 478-485, July.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Estimating the efficiency of backtrack programs", "authors": [ { "first": "D", "middle": [ "E" ], "last": "Knuth", "suffix": "" } ], "year": 1975, "venue": "Mathematics of computation", "volume": "29", "issue": "129", "pages": "121--136", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. E. Knuth. 1975. Estimating the efficiency of backtrack programs. Mathematics of computation, 29(129):121-136.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Branch and bound algorithm selection by performance prediction", "authors": [ { "first": "L", "middle": [], "last": "Lobjois", "suffix": "" }, { "first": "M", "middle": [], "last": "Lema\u00eetre", "suffix": "" } ], "year": 1998, "venue": "Proc. of the National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "353--358", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Lobjois and M. Lema\u00eetre. 1998. Branch and bound algorithm selection by performance prediction. In Proc. of the National Conference on Artificial Intel- ligence, pages 353-358.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Spectral learning for non-deterministic dependency parsing", "authors": [ { "first": "M", "middle": [], "last": "Franco", "suffix": "" }, { "first": "Ariadna", "middle": [], "last": "Luque", "suffix": "" }, { "first": "Borja", "middle": [], "last": "Quattoni", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Balle", "suffix": "" }, { "first": "", "middle": [], "last": "Carreras", "suffix": "" } ], "year": 2012, "venue": "Proc. of EACL", "volume": "", "issue": "", "pages": "409--419", "other_ids": {}, "num": null, "urls": [], "raw_text": "Franco M. Luque, Ariadna Quattoni, Borja Balle, and Xavier Carreras. 2012. Spectral learning for non-deterministic dependency parsing. In Proc. 
of EACL, pages 409-419, April.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Optimal Trees", "authors": [ { "first": "L", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Laurence", "middle": [ "A" ], "last": "Magnanti", "suffix": "" }, { "first": "", "middle": [], "last": "Wolsey", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas L. Magnanti and Laurence A. Wolsey. 1994. Optimal Trees. Center for Operations Research and Econometrics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Integer programs with block structure", "authors": [ { "first": "Alexander", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Martin. 2000. Integer programs with block structure. Technical Report SC-99-03, ZIB.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Concise integer linear programming formulations for dependency parsing", "authors": [ { "first": "Andr\u00e9", "middle": [], "last": "Martins", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Xing", "suffix": "" } ], "year": 2009, "venue": "Proc. of ACL-IJCNLP", "volume": "", "issue": "", "pages": "342--350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andr\u00e9 Martins, Noah A. Smith, and Eric Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proc. of ACL-IJCNLP, pages 342-350, August.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Computability of global solutions to factorable nonconvex programs: Part I-Convex underestimating problems", "authors": [ { "first": "Garth", "middle": [ "P" ], "last": "Mccormick", "suffix": "" } ], "year": 1976, "venue": "Mathematical Programming", "volume": "10", "issue": "1", "pages": "147--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Garth P. McCormick. 1976. Computability of global solutions to factorable nonconvex programs: Part I-Convex underestimating problems. Mathemati- cal Programming, 10(1):147-175.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Using universal linguistic knowledge to guide grammar induction", "authors": [ { "first": "Tahira", "middle": [], "last": "Naseem", "suffix": "" }, { "first": "Harr", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2010, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "1234--1244", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguistic knowl- edge to guide grammar induction. In Proc. of EMNLP, pages 1234-1244, October.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Global optimization algorithms for linearly constrained indefinite quadratic problems", "authors": [ { "first": "P", "middle": [ "M" ], "last": "Pardalos", "suffix": "" } ], "year": 1991, "venue": "Computers & Mathematics with Applications", "volume": "21", "issue": "6", "pages": "87--97", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. M. Pardalos. 1991. Global optimization algorithms for linearly constrained indefinite quadratic prob- lems. 
Computers & Mathematics with Applications, 21(6):87-97.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A universal part-of-speech tagset", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2012, "venue": "Proc. of LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In Proc. of LREC.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Branch and bound algorithm for dependency parsing with non-local features", "authors": [ { "first": "Xian", "middle": [], "last": "Qian", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2013, "venue": "TACL", "volume": "1", "issue": "", "pages": "37--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xian Qian and Yang Liu. 2013. Branch and bound al- gorithm for dependency parsing with non-local fea- tures. TACL, 1:37-48.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Incremental integer linear programming for non-projective dependency parsing", "authors": [ { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "James", "middle": [], "last": "Clarke", "suffix": "" } ], "year": 2006, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "129--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Riedel and James Clarke. 2006. Incremental integer linear programming for non-projective de- pendency parsing. In Proc. of EMNLP, pages 129- 137, July.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Parse, price and cut-Delayed column and row generation for graph based parsers", "authors": [ { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "David", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2012, "venue": "Proc. of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "732--743", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Riedel, David Smith, and Andrew McCal- lum. 2012. Parse, price and cut-Delayed column and row generation for graph based parsers. In Proc. of EMNLP-CoNLL, pages 732-743, July.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A hierarchy of relaxations between the continuous and convex hull representations for zero-one programming problems", "authors": [ { "first": "D", "middle": [], "last": "Hanif", "suffix": "" }, { "first": "Warren", "middle": [ "P" ], "last": "Sherali", "suffix": "" }, { "first": "", "middle": [], "last": "Adams", "suffix": "" } ], "year": 1990, "venue": "SIAM Journal on Discrete Mathematics", "volume": "3", "issue": "3", "pages": "411--430", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hanif D. Sherali and Warren P. Adams. 1990. A hi- erarchy of relaxations between the continuous and convex hull representations for zero-one program- ming problems. 
SIAM Journal on Discrete Math- ematics, 3(3):411-430, August.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Reformulationlinearization technique for global optimization", "authors": [ { "first": "H", "middle": [], "last": "Sherali", "suffix": "" }, { "first": "L", "middle": [], "last": "Liberti", "suffix": "" } ], "year": 2008, "venue": "Encyclopedia of Optimization", "volume": "2", "issue": "", "pages": "3263--3268", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Sherali and L. Liberti. 2008. Reformulation- linearization technique for global optimization. En- cyclopedia of Optimization, 2:3263-3268.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A reformulation-convexification approach for solving nonconvex quadratic programming problems", "authors": [ { "first": "D", "middle": [], "last": "Hanif", "suffix": "" }, { "first": "Cihan", "middle": [ "H" ], "last": "Sherali", "suffix": "" }, { "first": "", "middle": [], "last": "Tuncbilek", "suffix": "" } ], "year": 1995, "venue": "Journal of Global Optimization", "volume": "7", "issue": "1", "pages": "1--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hanif D. Sherali and Cihan H. Tuncbilek. 1995. A reformulation-convexification approach for solving nonconvex quadratic programming problems. Jour- nal of Global Optimization, 7(1):1-31.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Annealing structural bias in multilingual weighted grammar induction", "authors": [ { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2006, "venue": "Proc. of COLING-ACL", "volume": "", "issue": "", "pages": "569--576", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noah A. Smith and Jason Eisner. 2006. Annealing structural bias in multilingual weighted grammar in- duction. In Proc. of COLING-ACL, pages 569-576, July.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Novel estimation methods for unsupervised discovery of latent structure in natural language text", "authors": [ { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N.A. Smith. 2006. Novel estimation methods for unsu- pervised discovery of latent structure in natural lan- guage text. Ph.D. thesis, Johns Hopkins University, Baltimore, MD.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "From baby steps to leapfrog: How Less is more in unsupervised dependency parsing", "authors": [ { "first": "Hiyan", "middle": [], "last": "Valentin I Spitkovsky", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Alshawi", "suffix": "" }, { "first": "", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2010, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "751--759", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valentin I Spitkovsky, Hiyan Alshawi, and Daniel Ju- rafsky. 2010a. From baby steps to leapfrog: How Less is more in unsupervised dependency parsing. In Proc. of HLT-NAACL, pages 751-759. 
Associa- tion for Computational Linguistics, June.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Viterbi training improves unsupervised dependency parsing", "authors": [ { "first": "Hiyan", "middle": [], "last": "Valentin I Spitkovsky", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Alshawi", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2010, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "9--17", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valentin I Spitkovsky, Hiyan Alshawi, Daniel Jurafsky, and Christopher D Manning. 2010b. Viterbi train- ing improves unsupervised dependency parsing. In Proc. of CoNLL, pages 9-17. Association for Com- putational Linguistics, July.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Nonlinear optimization: complexity issues", "authors": [ { "first": "S", "middle": [ "A" ], "last": "Vavasis", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. A. Vavasis. 1991. Nonlinear optimization: com- plexity issues. Oxford University Press, Inc.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Semi-supervised convex training for dependency parsing", "authors": [ { "first": "Qin Iris", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Dale", "middle": [], "last": "Schuurmans", "suffix": "" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2008, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "532--540", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qin Iris Wang, Dale Schuurmans, and Dekang Lin. 2008. Semi-supervised convex training for de- pendency parsing. In Proc of ACL-HLT, pages 532-540. Association for Computational Linguis- tics, June.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Each node contains a local upper bound for its subspace, computed by a relaxation." }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Viterbi EM as a mathematical program probabilities are in (2). The linear constraints in" }, "FIGREF2": { "type_str": "figure", "uris": null, "num": null, "text": "Our true maximization objective m \u03b8 m f m in (1) is a sum of quadratic terms. If the parameters \u03b8" }, "FIGREF4": { "type_str": "figure", "uris": null, "num": null, "text": "The bound quality at the root improves as the proportion of RLT constraints increases, on 5 synthetic sentences. A random subset of 70% of the 320,126 possible RLT constraints matches the relaxation quality of the full set. This bound is very tight: the relaxations inFigure 5solve hundreds of nodes before such a bound is achieved." }, "FIGREF5": { "type_str": "figure", "uris": null, "num": null, "text": "The global upper and lower bounds improve over time for branch-and-bound using different subsets of RLT constraints on 5 synthetic sentences. Each solves the problem tooptimality for = 0.01. A point marks every 200 nodes processed. (The time axis is log-scaled.)" }, "FIGREF6": { "type_str": "figure", "uris": null, "num": null, "text": "Likelihood (a) and accuracy (b) of incumbent solution so far, on a small real dataset. 306 min and the highest final accuracy 60.73%." }, "TABREF0": { "num": null, "type_str": "table", "content": "
Variables:
\u03b8 m    Log-probability for feature m
f m      Corpus-wide feature count for m
e sij    Indicator of an arc from i to j in tree s
Indices and constants:
m        Feature / model parameter index
s        Sentence index
c        Conditional distribution index
M        Number of model parameters
C        Number of conditional distributions
M c      c-th set of feature indices that sum to 1.0
S        Number of sentences
N s      Number of words in the s-th sentence
Objective and constraints:
", "text": "", "html": null } } } }