{ "paper_id": "P05-1044", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:37:52.669342Z" }, "title": "Contrastive Estimation: Training Log-Linear Models on Unlabeled Data *", "authors": [ { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": { "postCode": "21218", "settlement": "Baltimore", "region": "MD", "country": "USA" } }, "email": "nasmith@cs.jhu.edu" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": { "postCode": "21218", "settlement": "Baltimore", "region": "MD", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Conditional random fields (Lafferty et al., 2001) are quite effective at sequence labeling tasks like shallow parsing (Sha and Pereira, 2003) and namedentity extraction (McCallum and Li, 2003). CRFs are log-linear, allowing the incorporation of arbitrary features into the model. To train on unlabeled data, we require unsupervised estimation methods for log-linear models; few exist. We describe a novel approach, contrastive estimation. We show that the new technique can be intuitively understood as exploiting implicit negative evidence and is computationally efficient. Applied to a sequence labeling problem-POS tagging given a tagging dictionary and unlabeled text-contrastive estimation outperforms EM (with the same feature set), is more robust to degradations of the dictionary, and can largely recover by modeling additional features.", "pdf_parse": { "paper_id": "P05-1044", "_pdf_hash": "", "abstract": [ { "text": "Conditional random fields (Lafferty et al., 2001) are quite effective at sequence labeling tasks like shallow parsing (Sha and Pereira, 2003) and namedentity extraction (McCallum and Li, 2003). CRFs are log-linear, allowing the incorporation of arbitrary features into the model. To train on unlabeled data, we require unsupervised estimation methods for log-linear models; few exist. We describe a novel approach, contrastive estimation. We show that the new technique can be intuitively understood as exploiting implicit negative evidence and is computationally efficient. Applied to a sequence labeling problem-POS tagging given a tagging dictionary and unlabeled text-contrastive estimation outperforms EM (with the same feature set), is more robust to degradations of the dictionary, and can largely recover by modeling additional features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Finding linguistic structure in raw text is not easy. The classical forward-backward and inside-outside algorithms try to guide probabilistic models to discover structure in text, but they tend to get stuck in local maxima (Charniak, 1993) . Even when they avoid local maxima (e.g., through clever initialization) they typically deviate from human ideas of what the \"right\" structure is (Merialdo, 1994) .", "cite_spans": [ { "start": 223, "end": 239, "text": "(Charniak, 1993)", "ref_id": "BIBREF2" }, { "start": 387, "end": 403, "text": "(Merialdo, 1994)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One strategy is to incorporate domain knowledge into the model's structure. 
Instead of blind HMMs or PCFGs, one could use models whose features are crafted to pay attention to a range of domainspecific linguistic cues. Log-linear models can be so crafted and have already achieved excellent performance when trained on annotated data, where they are known as \"maximum entropy\" models (Ratnaparkhi et al., 1994; Rosenfeld, 1994) .", "cite_spans": [ { "start": 384, "end": 410, "text": "(Ratnaparkhi et al., 1994;", "ref_id": "BIBREF19" }, { "start": 411, "end": 427, "text": "Rosenfeld, 1994)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goal is to learn log-linear models from unannotated data. Since the forward-backward and inside-outside algorithms are instances of Expectation-Maximization (EM) (Dempster et al., 1977) , a natural approach is to construct EM algorithms that handle log-linear models. Riezler (1999) did so, then resorted to an approximation because the true objective function was hard to normalize.", "cite_spans": [ { "start": 166, "end": 189, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF6" }, { "start": 272, "end": 286, "text": "Riezler (1999)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Stepping back from EM, we may generally envision parameter estimation for probabilistic modeling as pushing probability mass toward the training examples. We must consider not only where the learner pushes the mass, but also from where the mass is taken. In this paper, we describe an alternative to EM: contrastive estimation (CE), which (unlike EM) explicitly states the source of the probability mass that is to be given to an example. 1 One reason is to make normalization efficient. Indeed, CE generalizes EM and other practical techniques used to train log-linear models, including conditional estimation (for the supervised case) and Riezler's approximation (for the unsupervised case).", "cite_spans": [ { "start": 439, "end": 440, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The other reason to use CE is to improve accuracy. CE offers an additional way to inject domain knowledge into unsupervised learning (Smith and Eisner, 2005) . CE hypothesizes that each positive example in training implies a domain-specific set of examples which are (for the most part) degraded ( \u00a72). This class of implicit negative evidence provides the source of probability mass for the observed example. We discuss the application of CE to loglinear models in \u00a73.", "cite_spans": [ { "start": 133, "end": 157, "text": "(Smith and Eisner, 2005)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We are particularly interested in log-linear models over sequences, like the conditional random fields (CRFs) of Lafferty et al. (2001) and weighted CFGs (Miyao and Tsujii, 2002) . For a given sequence, implicit negative evidence can be represented as a lattice derived by finite-state operations ( \u00a74). Effectiveness of the approach on POS tagging using unlabeled data is demonstrated ( \u00a75). We discuss future work ( \u00a76) and conclude ( \u00a77).", "cite_spans": [ { "start": 113, "end": 135, "text": "Lafferty et al. 
(2001)", "ref_id": "BIBREF14" }, { "start": 154, "end": 178, "text": "(Miyao and Tsujii, 2002)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Natural language is a delicate thing. For any plausible sentence, there are many slight perturbations of it that will make it implausible. Consider, for example, the first sentence of this section. Suppose we choose one of its six words at random and remove it; on this example, odds are two to one that the resulting sentence will be ungrammatical. Or, we could randomly choose two adjacent words and transpose them; none of the results are valid conversational English. The learner we describe here takes into account not only the observed positive example, but also a set of similar but deprecated negative examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implicit Negative Evidence", "sec_num": "2" }, { "text": "Let x = x 1 , x 2 , ... , be our observed example sentences, where each x i \u2208 X, and let y * i \u2208 Y be the unobserved correct hidden structure for x i (e.g., a POS sequence). We seek a model, parameterized by \u03b8, such that the (unknown) correct analysis y * i is the best analysis for x i (under the model). If y * i were observed, a variety of training criteria would be available (see Tab. 1), but y * i is unknown, so none apply. Typically one turns to the EM algorithm (Dempster et al., 1977) , which locally maximizes", "cite_spans": [ { "start": 471, "end": 494, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "i p X = xi | \u03b8 = i y\u2208Y p X = xi, Y = y | \u03b8 (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "where X is a random variable over sentences and Y a random variable over analyses (notation is often abbreviated, eliminating the random variables). An often-used alternative to EM is a class of socalled Viterbi approximations, which iteratively find the probabilistically-best\u0177 and then, on each iteration, solve a supervised problem (see Tab. 
1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "joint likelihood (JL) i p xi, y * i | \u03b8 conditional likelihood (CL) i p y * i | xi, \u03b8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "classification accuracy (Juang and Katagiri, 1992) ", "cite_spans": [ { "start": 24, "end": 50, "text": "(Juang and Katagiri, 1992)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "i \u03b4(y * i ,\u0177(xi))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "expected classification accuracy (Klein and Manning, 2002) i", "cite_spans": [ { "start": 33, "end": 58, "text": "(Klein and Manning, 2002)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "p y * i | xi, \u03b8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "negated boosting loss (Collins, 2000 )", "cite_spans": [ { "start": 22, "end": 36, "text": "(Collins, 2000", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "\u2212 i p y * i | xi, \u03b8 \u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "margin (Crammer and Singer, 2001 ) (Altun et al., 2003) i j ", "cite_spans": [ { "start": 7, "end": 32, "text": "(Crammer and Singer, 2001", "ref_id": "BIBREF5" }, { "start": 35, "end": 55, "text": "(Altun et al., 2003)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "\u03b3 s.t. \u03b8 \u2264 1; \u2200i, \u2200y = y * i , \u03b8 \u2022 ( f (xi, y * i ) \u2212 f (xi, y)) \u2265 \u03b3 expected local accuracy", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "p j (Y ) = j (y * i ) | xi, \u03b8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning setting", "sec_num": "2.1" }, { "text": "Our approach instead maximizes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A new approach: contrastive estimation", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i p Xi = xi | Xi \u2208 N(xi), \u03b8", "eq_num": "(2)" } ], "section": "A new approach: contrastive estimation", "sec_num": "2.2" }, { "text": "where the \"neighborhood\" N(x i ) \u2286 X is a set of implicit negative examples plus the example", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A new approach: contrastive estimation", "sec_num": "2.2" }, { "text": "x i it- self. As in EM, p(x i | ..., \u03b8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A new approach: contrastive estimation", "sec_num": "2.2" }, { "text": "is found by marginalizing over hidden variables (Eq. 1). Note that the x \u2208 N(x i ) are not treated as hard negative examples; we merely seek to move probability mass from them to the observed x. The neighborhood of x, N(x), contains examples that are perturbations of x. We refer to the mapping N : X \u2192 2 X as the neighborhood function, and the optimization of Eq. 2 as contrastive estimation (CE). 
CE seeks to move probability mass from the neighborhood of an observed x i to x i itself. The learner hypothesizes that good models are those which discriminate an observed example from its neighborhood. Put another way, the learner assumes not only that x i is good, but that x i is locally optimal in example space (X), and that alternative, similar examples (from the neighborhood) are inferior. Rather than explain all of the data, the model must only explain (using hidden variables) why the observed sentence is better than its neighbors. Of course, the validity of this hypothesis will depend on the form of the neighborhood function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A new approach: contrastive estimation", "sec_num": "2.2" }, { "text": "Consider, as a concrete example, learning natural language syntax. In Smith and Eisner (2005) , we define a sentence's neighborhood to be a set of slightly-altered sentences that use the same lexemes, as suggested at the start of this section. While their syntax is degraded, the inferred meaning of any of these altered sentences is typically close to the intended meaning, yet the speaker chose x and not one of the other x \u2208 N(x). Why? Deletions are likely to violate subcategorization requirements, and transpositions are likely to violate word order requirements-both of which have something to do with syntax. x was the most grammatical option that conveyed the speaker's meaning, hence (we hope) roughly the most grammatical option in the neighborhood N(x), and the syntactic model should make it so.", "cite_spans": [ { "start": 70, "end": 93, "text": "Smith and Eisner (2005)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "A new approach: contrastive estimation", "sec_num": "2.2" }, { "text": "We have not yet specified the form of our probabilistic model, only that it is parameterized by \u03b8 \u2208 R n . Log-linear models, which we will show are a natural fit for CE, assign probability to an (example, label) pair (x, y) according to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p x, y | \u03b8 def = 1 Z \u03b8 u x, y | \u03b8", "eq_num": "(3)" } ], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "where the \"unnormalized score\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "u(x, y | \u03b8) is u x, y | \u03b8 def = exp \u03b8 \u2022 f (x, y)", "eq_num": "(4)" } ], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "The notation above is defined as follows. f : X \u00d7 Y \u2192 R n \u22650 is a nonnegative vector feature function, and \u03b8 \u2208 R n are the corresponding feature weights (the model's parameters). Because the features can take any form and need not be orthogonal, log-linear models can capture arbitrary dependencies in the data and cleanly incorporate them into a model. 
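As a minimal illustration of Eq. 4 (the feature names and values here are invented for exposition, not taken from the paper's models):

import math

def u_score(features, weights):
    # Eq. 4: u(x, y | theta) = exp(theta . f(x, y)).
    # features maps feature names to nonnegative values f_j(x, y);
    # weights maps the same names to theta_j (absent names contribute 0).
    return math.exp(sum(weights.get(name, 0.0) * value
                        for name, value in features.items()))

features = {"tag_bigram:DT_NN": 1.0, "tag_emits:NN=language": 1.0}
weights = {"tag_bigram:DT_NN": 0.7, "tag_emits:NN=language": -0.2}
print(u_score(features, weights))  # dividing by Z(theta) would give Eq. 3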
Z( \u03b8) (the partition function) is chosen so that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "(x,y) p(x, y | \u03b8) = 1; i.e., Z( \u03b8) = (x,y) u(x, y | \u03b8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Models", "sec_num": "3" }, { "text": ". u is typically easy to compute for a given (x, y), but Z may be much harder to compute. All the objective functions in this paper take the form where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "i (x,y)\u2208A i p x, y | \u03b8 (x,y)\u2208B i p x, y | \u03b8 (5) likelihood criterion Ai Bi joint {(xi, y * i )} X \u00d7 Y conditional {(xi, y * i )} {xi} \u00d7 Y marginal (a l\u00e0 EM) {xi} \u00d7 Y X \u00d7 Y contrastive {xi} \u00d7 Y N(xi) \u00d7 Y", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "A i \u2282 B i (for each i). For log-linear models this is simply i (x,y)\u2208A i u x, y | \u03b8 (x,y)\u2208B i u x, y | \u03b8", "eq_num": "(6)" } ], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "So there is no need to compute Z( \u03b8), but we do need to compute sums over A and B. Tab. 2 summarizes some concrete examples; see also \u00a73.1-3.2. We would prefer to choose an objective function such that these sums are easy. CE focuses on choosing appropriate small contrast sets B i , both for efficiency and to guide the learner. The natural choice for A i (which is usually easier to sum over) is the set of (x, y) that are consistent with what was observed (partially or completely) about the ith training example, i.e., the", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "numerator (x,y)\u2208A i p(x, y | \u03b8) is designed to find p(observation i | \u03b8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "The idea is to focus the probability mass within B i on the subset A i where the i the training example is known to be.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "It is possible to build log-linear models where each x i is a sequence. 2 In this paper, each model is a weighted finite-state automaton (WFSA) where states correspond to POS tags. The parameter vector \u03b8 \u2208 R n specifies a weight for each of the n transitions in the automaton. y is a hidden path through the automaton (determining a POS sequence), and x is the string it emits. u(x, y | \u03b8) is defined by applying exp to the total weight of all transitions in y. This is an example of Eqs. 4 and 6 where f j (x, y) is the number of times the path y takes the jth transition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "The partition function Z( \u03b8) of the WFSA is found by adding up the u-scores of all paths through the WFSA. For a k-state WFSA, this equates to solving a linear system of k equations in k variables (Tarjan, 1981) . But if the WFSA contains cycles this infinite sum may diverge. 
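To make the linear-system view concrete, here is a sketch of ours (not code from the paper): collect the total u-score of the transitions from state i to state j into a matrix M; the sum over paths of every length is then a matrix geometric series, which converges only when M's spectral radius is below one.

import numpy as np

def wfsa_path_mass(M, start, stop):
    # Total u-score of all paths = start . (I + M + M^2 + ...) . stop
    #                            = start . (I - M)^{-1} . stop,
    # i.e., the solution of a k x k linear system -- provided the series
    # converges; with heavily weighted cycles the infinite sum diverges.
    if np.max(np.abs(np.linalg.eigvals(M))) >= 1.0:
        raise ValueError("spectral radius >= 1: the sum over paths diverges")
    k = M.shape[0]
    return start @ np.linalg.solve(np.eye(k) - M, stop)

# Two-state toy automaton whose cycle 0 -> 1 -> 0 carries total u-score 0.25.
M = np.array([[0.0, 0.5], [0.5, 0.0]])   # M[i, j] = u-score of transitions i -> j
start = np.array([1.0, 0.0])             # u-score of starting in each state
stop = np.array([0.0, 1.0])              # u-score of stopping in each state
print(wfsa_path_mass(M, start, stop))    # 0.5 / (1 - 0.25) = 2/3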
Alternatives to exact com-putation, like random sampling (see, e.g., Abney, 1997) , will not help to avoid this difficulty; in addition, convergence rates are in general unknown and bounds difficult to prove. We would prefer to sum over finitely many paths in B i .", "cite_spans": [ { "start": 197, "end": 211, "text": "(Tarjan, 1981)", "ref_id": "BIBREF26" }, { "start": 346, "end": 358, "text": "Abney, 1997)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Log-Linear Models", "sec_num": "3" }, { "text": "For log-linear models, both CL and JL estimation (Tab. 1) are available. In terms of Eq. 5, both set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (supervised)", "sec_num": "3.1" }, { "text": "A i = {(x i , y * i )}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (supervised)", "sec_num": "3.1" }, { "text": "The difference is in B: for JL, B i = X \u00d7 Y, so summing over B i is equivalent to computing the partition function Z( \u03b8). Because that sum is typically difficult, CL is preferred; B i = {x i } \u00d7 Y for x i , which is often tractable. For sequence models like WFSAs it is computed using a dynamic programming algorithm (the forward algorithm for WFSAs). Klein and Manning (2002) argue for CL on grounds of accuracy, but see also Johnson (2001) . See Tab. 2; other contrast sets B i are also possible.", "cite_spans": [ { "start": 352, "end": 376, "text": "Klein and Manning (2002)", "ref_id": "BIBREF13" }, { "start": 427, "end": 441, "text": "Johnson (2001)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (supervised)", "sec_num": "3.1" }, { "text": "When B i contains only x i paired with the current best competitor (\u0177) to y * i , we have a technique that resembles maximum margin training (Crammer and Singer, 2001) . Note that\u0177 will then change across training iterations, making B i dynamic.", "cite_spans": [ { "start": 141, "end": 167, "text": "(Crammer and Singer, 2001)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (supervised)", "sec_num": "3.1" }, { "text": "The difference between supervised and unsupervised learning is that in the latter case, A i is forced to sum over label sequences y because they weren't observed. In the unsupervised case, CE maximizes", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (unsupervised)", "sec_num": "3.2" }, { "text": "L N \u03b8 = log i y\u2208Y u xi, y | \u03b8 (x,y)\u2208N(x i )\u00d7Y u x, y | \u03b8 (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (unsupervised)", "sec_num": "3.2" }, { "text": "In terms of Eq. 5, A = {x i }\u00d7Y and B = N(x i )\u00d7Y. EM's objective function (Eq. 1) is a special case where N(x i ) = X, for all i, and the denominator becomes Z( \u03b8). 
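In log space, Eq. 7 can be sketched as follows (our schematic, assuming a hypothetical routine log_u_scores(x, theta) that returns log u(x, y | \u03b8) for every labeling y of x; note that Z( \u03b8) is never needed):

import math

def logsumexp(values):
    # Stable log of a sum of exponentiated values.
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

def neighborhood_log_likelihood(examples, neighborhood, log_u_scores, theta):
    # Eq. 7: sum_i [ log sum_y u(x_i, y) - log sum_{x in N(x_i)} sum_y u(x, y) ].
    total = 0.0
    for x_i in examples:
        log_numer = logsumexp(log_u_scores(x_i, theta))
        log_denom = logsumexp([logsumexp(log_u_scores(x, theta))
                               for x in neighborhood(x_i)])
        total += log_numer - log_denom
    return total

In practice the inner sums are not enumerated explicitly; they are computed by dynamic programming over lattices, as described below.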
An alternative is to restrict the neighborhood to the set of observed training examples rather than all possible examples (Riezler, 1999; Johnson et al., 1999; Riezler et al., 2000) :", "cite_spans": [ { "start": 288, "end": 303, "text": "(Riezler, 1999;", "ref_id": "BIBREF21" }, { "start": 304, "end": 325, "text": "Johnson et al., 1999;", "ref_id": "BIBREF10" }, { "start": 326, "end": 347, "text": "Riezler et al., 2000)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (unsupervised)", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "i u xi | \u03b8 j u xj | \u03b8", "eq_num": "(8)" } ], "section": "Parameter estimation (unsupervised)", "sec_num": "3.2" }, { "text": "Viewed as a CE method, this approach (though effective when there are few hypotheses) seems misguided; the objective says to move mass to each example at the expense of all other training examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (unsupervised)", "sec_num": "3.2" }, { "text": "Another variant is conditional EM. Let x i be a pair (x i,1 , x i,2 ) and define the neighborhood to be N(x i ) = {x = (x 1 , x i,2 )}. This approach has been applied to conditional densities (Jebara and Pentland, 1998) and conditional training of acoustic models with hidden variables (Valtchev et al., 1997) .", "cite_spans": [ { "start": 192, "end": 219, "text": "(Jebara and Pentland, 1998)", "ref_id": "BIBREF9" }, { "start": 286, "end": 309, "text": "(Valtchev et al., 1997)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (unsupervised)", "sec_num": "3.2" }, { "text": "Generally speaking, CE is equivalent to some kind of EM when N(\u2022) is an equivalence relation on examples, so that the neighborhoods partition X. Then if q is any fixed (untrained) distribution over neighborhoods, CE equates to running EM on the model defined by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (unsupervised)", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p x, y | \u03b8 def = q (N(x)) \u2022 p x, y | N(x), \u03b8", "eq_num": "(9)" } ], "section": "Parameter estimation (unsupervised)", "sec_num": "3.2" }, { "text": "CE may also be viewed as an importance sampling approximation to EM, where the sample space X is replaced by N(x i ). We will demonstrate experimentally that CE is not just an approximation to EM; it makes sense from a modeling perspective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (unsupervised)", "sec_num": "3.2" }, { "text": "In \u00a74, we will describe neighborhoods of sequences that can be represented as acyclic lattices built directly from an observed sequence. The sum over B i is then the total u-score in our model of all paths in the neighborhood lattice. To compute this, intersect the WFSA and the lattice, obtaining a new acyclic WFSA, and sum the u-scores of all its paths (Eisner, 2002) using a simple dynamic programming algorithm akin to the forward algorithm. 
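That summation step can be sketched as follows (our own illustration, assuming the intersected acyclic WFSA is given as arcs with log u-score weights, topologically ordered by source state):

import math
from collections import defaultdict

def log_add(a, b):
    # Stable log(exp(a) + exp(b)); -inf acts as "no mass yet."
    if a == float("-inf"):
        return b
    if b == float("-inf"):
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def forward_log_sum(arcs, start_state, final_state):
    # arcs: (source, target, log_u_weight) triples of the acyclic intersection
    # of the tagging WFSA with a neighborhood lattice, topologically ordered.
    # Returns the log of the total u-score of all start -> final paths.
    alpha = defaultdict(lambda: float("-inf"))
    alpha[start_state] = 0.0
    for source, target, log_w in arcs:
        alpha[target] = log_add(alpha[target], alpha[source] + log_w)
    return alpha[final_state]

# Toy lattice with two paths from state 0 to state 2.
arcs = [(0, 1, 0.2), (0, 2, 1.0), (1, 2, 0.5)]
print(forward_log_sum(arcs, 0, 2))   # log(e**1.0 + e**0.7)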
The sum over A i may be computed similarly.", "cite_spans": [ { "start": 356, "end": 370, "text": "(Eisner, 2002)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (unsupervised)", "sec_num": "3.2" }, { "text": "CE with lattice neighborhoods is not confined to the WFSAs of this paper; when estimating weighted CFGs, the key algorithm is the inside algorithm for lattice parsing (Smith and Eisner, 2005) .", "cite_spans": [ { "start": 167, "end": 191, "text": "(Smith and Eisner, 2005)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Parameter estimation (unsupervised)", "sec_num": "3.2" }, { "text": "To maximize the neighborhood likelihood (Eq. 7), we apply a standard numerical optimization method (L-BFGS) that iteratively climbs the function using knowledge of its value and gradient (Liu and Nocedal, 1989 ). The partial derivative of L N with respect to the jth feature weight \u03b8 j is", "cite_spans": [ { "start": 187, "end": 209, "text": "(Liu and Nocedal, 1989", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Numerical optimization", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2202L N \u2202\u03b8j = i E \u03b8 [fj | xi] \u2212 E \u03b8 [fj | N(xi)]", "eq_num": "(10)" } ], "section": "Numerical optimization", "sec_num": "3.3" }, { "text": "This looks similar to the gradient of log-linear likelihood functions on complete data, though the expectation on the left is in those cases replaced by an observed feature value f j (x i , y * i ). In this paper, the l a n g u a g e l a n g u a g e d e l i c a t e t h i n g", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Numerical optimization", "sec_num": "3.3" }, { "text": ":", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Numerical optimization", "sec_num": "3.3" }, { "text": "x x 2 1 x 2 x 1 : : x x 2 3 : x x 3 2 : x x m m\u22121 x m\u22121 :x m ? ?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Numerical optimization", "sec_num": "3.3" }, { "text": "... Figure 1 : A sentence and three lattices representing some of its neighborhoods. The transducer used to generate each neighborhood lattice (via composition with the sentence, followed by determinization and minimization) is shown to its right. expectations in Eq. 10 are computed by the forwardbackward algorithm generalized to lattices.", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 12, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Numerical optimization", "sec_num": "3.3" }, { "text": "We emphasize that the function L N is not globally concave; our search will lead only to a local optimum. 3 Therefore, as with all unsupervised statistical learning, the bias in the initialization of \u03b8 will affect the quality of the estimate and the performance of the method. In future we might wish to apply techniques for avoiding local optima, such as deterministic annealing (Smith and Eisner, 2004) .", "cite_spans": [ { "start": 380, "end": 404, "text": "(Smith and Eisner, 2004)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Numerical optimization", "sec_num": "3.3" }, { "text": "We next consider some non-classical neighborhood functions for sequences. When X = \u03a3 + for some symbol alphabet \u03a3, certain kinds of neighborhoods have natural, compact representations. 
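Before the formal definitions and lattice encodings below, a naive enumeration of the two perturbation families from \u00a72 (one-word deletion and adjacent transposition) makes the sets concrete; this is our illustration only, since the paper never materializes the neighborhoods as explicit string sets.

def del1word(x):
    # Delete each of the m tokens in turn, plus x itself.
    x = tuple(x)
    return {x[:i] + x[i + 1:] for i in range(len(x))} | {x}

def trans1(x):
    # Transpose each adjacent pair of tokens in turn, plus x itself.
    x = tuple(x)
    out = {x}
    for i in range(len(x) - 1):
        out.add(x[:i] + (x[i + 1], x[i]) + x[i + 2:])
    return out

sentence = ("natural", "language", "is", "a", "delicate", "thing")
print(len(del1word(sentence)), len(trans1(sentence)))   # 7 and 6

The lattices of Fig. 1 encode exactly these sets compactly, rather than as explicit lists of strings.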
Given an input string x = x 1 , x 2 , ..., x m , we write x j i for the substring x i , x i+1 , ..., x j and x m 1 for the whole string. Consider first the neighborhood consisting of all sequences generated by deleting a single symbol from the m-length sequence x m 1 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Neighborhoods", "sec_num": "4" }, { "text": "DEL1WORD(x m 1 ) = x \u22121 1 x m +1 | 1 \u2264 \u2264 m \u222a {x m 1 }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Neighborhoods", "sec_num": "4" }, { "text": "This set consists of m + 1 strings and can be compactly represented as a lattice (see Fig. 1a ). Another 3 Without any hidden variables, L N is globally concave.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 93, "text": "Fig. 1a", "ref_id": null } ], "eq_spans": [], "section": "Lattice Neighborhoods", "sec_num": "4" }, { "text": "neighborhood involves transposing any pair of adjacent words:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Neighborhoods", "sec_num": "4" }, { "text": "TRANS1(x m 1 ) = x \u22121 1 x +1 x x m +2 | 1 \u2264 < m \u222a {x m 1 }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Neighborhoods", "sec_num": "4" }, { "text": "This set can also be compactly represented as a lattice ( Fig. 1b) . We can combine DEL1WORD and TRANS1 by taking their union; this gives a larger neighborhood, DELORTRANS1. 4 The DEL1SUBSEQ neighborhood allows the deletion of any contiguous subsequence of words that is strictly smaller than the whole sequence. This lattice is similar to that of DEL1WORD, but adds some arcs (Fig. 1c) ; the size of this neighborhood is O(m 2 ).", "cite_spans": [ { "start": 174, "end": 175, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 58, "end": 66, "text": "Fig. 1b)", "ref_id": null }, { "start": 377, "end": 386, "text": "(Fig. 1c)", "ref_id": null } ], "eq_spans": [], "section": "Lattice Neighborhoods", "sec_num": "4" }, { "text": "A final neighborhood we will consider is LENGTH, which consists of \u03a3 m . CE with the LENGTH neighborhood is very similar to EM; it is equivalent to using EM to estimate the parameters of a model defined by Eq. 9 where q is any fixed (untrained) distribution over lengths.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Neighborhoods", "sec_num": "4" }, { "text": "When the vocabulary \u03a3 is the set of words in a natural language, it is never fully known; approximations for defining LENGTH = \u03a3 m include using observed \u03a3 from the training set (as we do) or adding a special OOV symbol. Figure 2: Percent ambiguous words tagged correctly in the 96K dataset, as the smoothing parameter (\u03bb in the case of EM, \u03c3 2 in the CE cases) varies. The model selected from each criterion using unlabeled development data is circled in the plot. Dataset size is varied in the table at right, which shows models selected using unlabeled development data (\"sel.\") and using an oracle (\"oracle,\" the highest point on a curve). Across conditions, some neighborhood roughly splits the difference between supervised models and EM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Neighborhoods", "sec_num": "4" }, { "text": "24K 48K 96K sel. oracle sel. oracle sel. oracle sel. 
oracle CRF", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lattice Neighborhoods", "sec_num": "4" }, { "text": "We compare CE (using neighborhoods from \u00a74) with EM on POS tagging using unlabeled data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Our experiments are inspired by those in Merialdo (1994) ; we train a trigram tagger using only unlabeled data, assuming complete knowledge of the tagging dictionary. 5 In our experiments, we varied the amount of data available (12K-96K words of WSJ), the heaviness of smoothing, and the estimation criterion. In all cases, training stopped when the relative change in the criterion fell below 10 \u22124 between steps (typically \u2264 100 steps). For this corpus and tag set, on average, a tagger must decide between 2.3 tags for a given token. The generative model trained by EM was identical to Merialdo's: a second-order HMM. We smoothed using a flat Dirichlet prior with single parameter \u03bb for all distributions (\u03bb-values from 0 to 10 were tested). 6 The model was initialized uniformly.", "cite_spans": [ { "start": 41, "end": 56, "text": "Merialdo (1994)", "ref_id": "BIBREF17" }, { "start": 167, "end": 168, "text": "5", "ref_id": null }, { "start": 745, "end": 746, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with EM", "sec_num": "5.1" }, { "text": "The log-linear models trained by CE used the same feature set, though the feature weights are no longer log-probabilities and there are no sum-to-one constraints. In addition to an unsmoothed trial, we tried diagonal Gaussian priors (quadratic penalty) with \u03c3 2 ranging from 0.1 to 10. The models were initialized with all \u03b8 j = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with EM", "sec_num": "5.1" }, { "text": "Unsupervised model selection. For each (crite-5 Without a tagging dictionary, tag names are interchangeable and cannot be evaluated on gold-standard accuracy. We address the tagging dictionary assumption in \u00a75.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with EM", "sec_num": "5.1" }, { "text": "6 This is equivalent to add-\u03bb smoothing within every M step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with EM", "sec_num": "5.1" }, { "text": "rion, dataset) pair, we selected the smoothing trial that gave the highest estimation criterion score on a 5K-word development set (also unlabeled).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with EM", "sec_num": "5.1" }, { "text": "Results. The plot in Fig. 2 shows the Viterbi accuracy of each criterion trained on the 96K-word dataset as smoothing was varied; the table shows, for each (criterion, dataset) pair the performance of the selected \u03bb or \u03c3 2 and the one chosen by an oracle. LENGTH, TRANS1, and DELORTRANS1 are consistently the best, far out-stripping EM. These gains dwarf the performance of EM on over 1.1M words (66.6% as reported by Smith and Eisner (2004) ), even when the latter uses improved search (70.0%). DEL1WORD and DEL1SUBSEQ, on the other hand, are poor, even worse than EM on larger datasets.", "cite_spans": [ { "start": 418, "end": 441, "text": "Smith and Eisner (2004)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 21, "end": 27, "text": "Fig. 
2", "ref_id": null } ], "eq_spans": [], "section": "Comparison with EM", "sec_num": "5.1" }, { "text": "An important result is that neighborhoods do not succeed by virtue of approximating log-linear EM; if that were so, we would expect larger neighborhoods (like DEL1SUBSEQ) to out-perform smaller ones (like TRANS1)-this is not so. DEL1SUBSEQ and DEL1WORD are poor because they do not give helpful classes of negative evidence: deleting a word or a short subsequence often does very little damage. Put another way, models that do a good job of explaining why no word or subsequence should be deleted do not do so using the familiar POS categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with EM", "sec_num": "5.1" }, { "text": "The LENGTH neighborhood is as close to loglinear EM as it is practical to get. The inconsistencies in the LENGTH curve (Fig. 2) are notable and also appeared at the other training set sizes. Believing this might be indicative of brittleness in Viterbi label selection, we computed the expected Table 3 : Percent of all words correctly tagged in the 24K dataset, as the tagging dictionary is diluted. Unsupervised model selection (\"sel.\") and oracle model selection (\"oracle\") across smoothing parameters are shown. Note that we evaluated on all words (unlike Fig. 3 ) and used 17 coarse tags, giving higher scores than in Fig. 2. accuracy of the LENGTH models; the same \"dips\" were present. This could indicate that the learner was trapped in a local maximum, suggesting that, since other criteria did not exhibit this behavior, LENGTH might be a bumpier objective surface. It would be interesting to measure the bumpiness (sensitivity to initial conditions) of different contrastive objectives. 7", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 127, "text": "(Fig. 2)", "ref_id": null }, { "start": 294, "end": 301, "text": "Table 3", "ref_id": null }, { "start": 559, "end": 565, "text": "Fig. 3", "ref_id": "FIGREF2" }, { "start": 622, "end": 629, "text": "Fig. 2.", "ref_id": null } ], "eq_spans": [], "section": "Comparison with EM", "sec_num": "5.1" }, { "text": "The assumption that the tagging dictionary is completely known is difficult to justify. While a POS lexicon might be available for a new language, certainly it will not give exhaustive information about all word types in a corpus. We experimented with removing knowledge from the tagging dictionary, thereby increasing the difficulty of the task, to see how well various objective functions could recover. One means to recovery is the addition of features to the model-this is easy with log-linear models but not with classical generative models. We compared the performance of the best neighborhoods (LENGTH, DELORTRANS1, and TRANS1) from the first experiment, plus EM, using three diluted dictionaries and the original one, on the 24K dataset. A diluted dictionary adds (tag, word) entries so that rare words are allowed with any tag, simulating zero prior knowledge about the word. \"Rare\" might be defined in different ways; we used three definitions: words unseen in the first 500 sentences (about half of the 24K training corpus); singletons (words with count \u2264 1); and words with count \u2264 2. To allow more trials, we projected the original 45 tags onto a coarser set of 17 (e.g., 7 A reviewer suggested including a table comparing different criterion values for each learned model (i.e., each neighborhood evaluated on each other neighborhood). 
This table contained no big surprises; we note only that most models were the best on their own criterion, and among unsupervised models, LENGTH performed best on the CL criterion.", "cite_spans": [ { "start": 1185, "end": 1186, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Removing knowledge, adding features", "sec_num": "5.2" }, { "text": "To take better advantage of the power of loglinear models-specifically, their ability to incorporate novel features-we also ran trials augmenting the model with spelling features, allowing exploitation of correlations between parts of the word and a possible tag. Our spelling features included all observed 1-, 2-, and 3-character suffixes, initial capitalization, containing a hyphen, and containing a digit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RB * \u2192ADV).", "sec_num": null }, { "text": "Results. Fig. 3 plots tagging accuracy (on ambiguous words) for each dictionary on the 24K dataset. The x-axis is the smoothing parameter (\u03bb for EM, \u03c3 2 for CE). Note that the different plots are not comparable, because their y-axes are based on different sets of ambiguous words.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 15, "text": "Fig. 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "RB * \u2192ADV).", "sec_num": null }, { "text": "So that models under different dilution conditions could be compared, we computed accuracy on all words; these are shown in Tab. 3. The reader will notice that there is often a large gap between unsupervised and oracle model selection; this draws attention to a need for better unsupervised regularization and model selection techniques.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RB * \u2192ADV).", "sec_num": null }, { "text": "Without spelling features, all models perform worse as knowledge is removed. But LENGTH suffers most substantially, relative to its initial performance. Why is this? LENGTH (like EM) requires the model to explain why a given sentence was seen instead of some other sentence of the same length. One way to make this explanation is to manipulate emission weights (i.e., for (tag, word) features): the learner can construct a good class-based unigram model of the text (where classes are tags). This is good for the LENGTH objective, but not for learning good POS tag sequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RB * \u2192ADV).", "sec_num": null }, { "text": "In contrast, DELORTRANS1 and TRANS1 do not allow the learner to manipulate emission weights for words not in the sentence. The sentence's goodness must be explained in a way other than by the words it contains: namely through the POS tags. To check this intuition, we built local normalized models p(word | tag) from the parameters learned by TRANS1 and LENGTH. For each tag, these were compared by KL divergence to the empirical lexical distributions (from labeled data). For the ten tags accounting for 95.6% of the data, LENGTH more closely matched the empirical lexical distributions. LENGTH is learning a correct distribution, but that distribution is not helpful for the task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RB * \u2192ADV).", "sec_num": null }, { "text": "The improvement from adding spelling features is striking: DELORTRANS1 and TRANS1 recover nearly completely (modulo the model selection problem) from the diluted dictionaries. LENGTH sees far less recovery. 
Hence even our improved feature sets cannot compensate for the choice of neighborhood. This highlights our argument that a neighborhood is not an approximation to log-linear EM; LENGTH tries very hard to approximate log-linear EM but requires a good dictionary to be on par with the other criteria. Good neighborhoods, rather, perform well in their own right.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RB * \u2192ADV).", "sec_num": null }, { "text": "Foremost for future work is the \"minimally supervised\" paradigm in which a small amount of labeled data is available (see, e.g., Clark et al. (2003) ). Unlike well-known \"bootstrapping\" approaches (Yarowsky, 1995) , EM and CE have the possible advantage of maintaining posteriors over hidden labels (or structure) throughout learning; bootstrapping either chooses, for each example, a single label, or remains completely agnostic. One can envision a mixed objective function that tries to fit the labeled examples while discriminating unlabeled examples from their neighborhoods. 8 Regardless of how much (if any) data are labeled, the question of good smoothing techniques requires more attention. Here we used a single zero-mean, constant-variance Gaussian prior for all parameters. Better performance might be achieved by allowing different variances for different feature types. This 8 Zhu and Ghahramani (2002) explored the semi-supervised classification problem for spatially-distributed data, where some data are labeled, using a Boltzmann machine to model the dataset. For them, the Markov random field is over labeling configurations for all examples, not, as in our case, complex structured labels for a particular example. Hence their B (Eq. 5), though very large, was finite and could be sampled. leads to a need for more efficient tuning of the prior parameters on development data. The effectiveness of CE (and different neighborhoods) for dependency grammar induction is explored in Smith and Eisner (2005) with considerable success. We introduce there the notion of designing neighborhoods to guide learning for particular tasks. Instead of guiding an unsupervised learner to match linguists' annotations, the choice of neighborhood might be made to direct the learner toward hidden structure that is helpful for error-correction tasks like spelling correction and punctuation restoration that may benefit from a grammatical model. Wang et al. (2002) discuss the latent maximum entropy principle. They advocate running EM many times and selecting the local maximum that maximizes entropy. One might do the same for the local maxima of any CE objective, though theoretical and experimental support for this idea remain for future work.", "cite_spans": [ { "start": 129, "end": 148, "text": "Clark et al. (2003)", "ref_id": "BIBREF3" }, { "start": 197, "end": 213, "text": "(Yarowsky, 1995)", "ref_id": "BIBREF29" }, { "start": 580, "end": 581, "text": "8", "ref_id": null }, { "start": 888, "end": 889, "text": "8", "ref_id": null }, { "start": 890, "end": 915, "text": "Zhu and Ghahramani (2002)", "ref_id": "BIBREF30" }, { "start": 1498, "end": 1521, "text": "Smith and Eisner (2005)", "ref_id": "BIBREF25" }, { "start": 1948, "end": 1966, "text": "Wang et al. 
(2002)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "6" }, { "text": "We have presented contrastive estimation, a new probabilistic estimation criterion that forces a model to explain why the given training data were better than bad data implied by the positive examples. We have shown that for unsupervised sequence modeling, this technique is efficient and drastically outperforms EM; for POS tagging, the gain in accuracy over EM is twice what we would get from ten times as much data and improved search, sticking with EM's criterion (Smith and Eisner, 2004) . On this task, with certain neighborhoods, contrastive estimation suffers less than EM does from diminished prior knowledge and is able to exploit new features-that EM can't-to largely recover from the loss of knowledge.", "cite_spans": [ { "start": 468, "end": 492, "text": "(Smith and Eisner, 2004)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Not to be confused with contrastive divergence minimization(Hinton, 2003), a technique for training products of experts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "These are exemplified by CRFs(Lafferty et al., 2001), which can be viewed alternately as undirected dynamic graphical models with a chain topology, as log-linear models over entire sequences with local features, or as WFSAs. Because \"CRF\" implies CL estimation, we use the term \"WFSA.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In general, the lattices are obtained by composing the observed sequence with a small FST and determinizing and minimizing the result; the relevant transducers are shown inFig. 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Stochastic attribute-value grammars", "authors": [ { "first": "S", "middle": [ "P" ], "last": "Abney", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "4", "pages": "597--617", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. P. Abney. 1997. Stochastic attribute-value grammars. Com- putational Linguistics, 23(4):597-617.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Investigating loss functions and optimization methods for discriminative learning of label sequences", "authors": [ { "first": "Y", "middle": [], "last": "Altun", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2003, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Altun, M. Johnson, and T. Hofmann. 2003. Investigating loss functions and optimization methods for discriminative learning of label sequences. In Proc. of EMNLP.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Statistical Language Learning", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak. 1993. Statistical Language Learning. 
MIT Press.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Bootstrapping POS taggers using unlabelled data", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Clark", "suffix": "" }, { "first": "M", "middle": [], "last": "Curran", "suffix": "" }, { "first": "", "middle": [], "last": "Osborne", "suffix": "" } ], "year": 2003, "venue": "Proc. of CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, J. R. Curran, and M. Osborne. 2003. Bootstrapping POS taggers using unlabelled data. In Proc. of CoNLL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Discriminative reranking for natural language parsing", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2000, "venue": "Proc. of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins. 2000. Discriminative reranking for natural lan- guage parsing. In Proc. of ICML.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "On the algorithmic implementation of multiclass kernel-based vector machines", "authors": [ { "first": "K", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Y", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2001, "venue": "Machine Learning Research", "volume": "2", "issue": "5", "pages": "265--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Crammer and Y. Singer. 2001. On the algorithmic imple- mentation of multiclass kernel-based vector machines. Jour- nal of Machine Learning Research, 2(5):265-92.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Maximum likelihood estimation from incomplete data via the EM algorithm", "authors": [ { "first": "A", "middle": [], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society B", "volume": "39", "issue": "", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likeli- hood estimation from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1-38.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Parameter estimation for probabilistic finitestate transducers", "authors": [ { "first": "J", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2002, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Eisner. 2002. Parameter estimation for probabilistic finite- state transducers. In Proc. of ACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Training products of experts by minimizing contrastive divergence", "authors": [ { "first": "G", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. E. Hinton. 2003. Training products of experts by mini- mizing contrastive divergence. Technical Report GCNU TR 2000-004, University College London.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Maximum conditional likelihood via bound maximization and the CEM algorithm", "authors": [ { "first": "T", "middle": [], "last": "Jebara", "suffix": "" }, { "first": "A", "middle": [], "last": "Pentland", "suffix": "" } ], "year": 1998, "venue": "Proc. 
of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Jebara and A. Pentland. 1998. Maximum conditional like- lihood via bound maximization and the CEM algorithm. In Proc. of NIPS.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Estimators for stochastic \"unification-based\" grammars", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "S", "middle": [], "last": "Geman", "suffix": "" }, { "first": "S", "middle": [], "last": "Canon", "suffix": "" }, { "first": "Z", "middle": [], "last": "Chi", "suffix": "" }, { "first": "S", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 1999, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Johnson, S. Geman, S. Canon, Z. Chi, and S. Riezler. 1999. Estimators for stochastic \"unification-based\" grammars. In Proc. of ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Joint and conditional estimation of tagging and parsing models", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2001, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Johnson. 2001. Joint and conditional estimation of tagging and parsing models. In Proc. of ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Discriminative learning for minimum error classification", "authors": [ { "first": "B.-H", "middle": [], "last": "Juang", "suffix": "" }, { "first": "S", "middle": [], "last": "Katagiri", "suffix": "" } ], "year": 1992, "venue": "IEEE Trans. Signal Processing", "volume": "40", "issue": "", "pages": "3043--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "B.-H. Juang and S. Katagiri. 1992. Discriminative learning for minimum error classification. IEEE Trans. Signal Process- ing, 40:3043-54.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Conditional structure vs. conditional estimation in NLP models", "authors": [ { "first": "D", "middle": [], "last": "Klein", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2002, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Klein and C. D. Manning. 2002. Conditional structure vs. conditional estimation in NLP models. In Proc. of EMNLP.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "J", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "Proc. of ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and la- beling sequence data. In Proc. 
of ICML.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "On the limited memory method for large scale optimization", "authors": [ { "first": "D", "middle": [ "C" ], "last": "Liu", "suffix": "" }, { "first": "J", "middle": [], "last": "", "suffix": "" } ], "year": 1989, "venue": "Mathematical Programming B", "volume": "45", "issue": "3", "pages": "503--531", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. C. Liu and J. Nocedal. 1989. On the limited memory method for large scale optimization. Mathematical Programming B, 45(3):503-28.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Early results for namedentity extraction with conditional random fields", "authors": [ { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "W", "middle": [], "last": "Li", "suffix": "" } ], "year": 2003, "venue": "Proc. of CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. McCallum and W. Li. 2003. Early results for named- entity extraction with conditional random fields. In Proc. of CoNLL.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Tagging English text with a probabilistic model", "authors": [ { "first": "B", "middle": [], "last": "Merialdo", "suffix": "" } ], "year": 1994, "venue": "Computational Linguistics", "volume": "20", "issue": "2", "pages": "155--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155-72.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Maximum entropy estimation for feature forests", "authors": [ { "first": "Y", "middle": [], "last": "Miyao", "suffix": "" }, { "first": "J", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2002, "venue": "Proc. of HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Miyao and J. Tsujii. 2002. Maximum entropy estimation for feature forests. In Proc. of HLT.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A maximum entropy model for parsing", "authors": [ { "first": "A", "middle": [], "last": "Ratnaparkhi", "suffix": "" }, { "first": "S", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "R", "middle": [ "T" ], "last": "Ward", "suffix": "" } ], "year": 1994, "venue": "Proc. of ICSLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Ratnaparkhi, S. Roukos, and R. T. Ward. 1994. A maximum entropy model for parsing. In Proc. of ICSLP.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Lexicalized stochastic modeling of constraint-based grammars using log-linear measures and EM training", "authors": [ { "first": "S", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "D", "middle": [], "last": "Prescher", "suffix": "" }, { "first": "J", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2000, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Riezler, D. Prescher, J. Kuhn, and M. Johnson. 2000. Lex- icalized stochastic modeling of constraint-based grammars using log-linear measures and EM training. In Proc. 
of ACL.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Probabilistic Constraint Logic Programming", "authors": [ { "first": "S", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Riezler. 1999. Probabilistic Constraint Logic Programming. Ph.D. thesis, Universit\u00e4t T\u00fcbingen.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Adaptive Statistical Language Modeling: A Maximum Entropy Approach", "authors": [ { "first": "R", "middle": [], "last": "Rosenfeld", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Rosenfeld. 1994. Adaptive Statistical Language Modeling: A Maximum Entropy Approach. Ph.D. thesis, CMU.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Shallow parsing with conditional random fields", "authors": [ { "first": "F", "middle": [], "last": "Sha", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2003, "venue": "Proc. of HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. In Proc. of HLT-NAACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Annealing techniques for unsupervised statistical language learning", "authors": [ { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "J", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2004, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. A. Smith and J. Eisner. 2004. Annealing techniques for unsupervised statistical language learning. In Proc. of ACL.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Guiding unsupervised grammar induction using contrastive estimation", "authors": [ { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "J", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2005, "venue": "Proc. of IJ-CAI Workshop on Grammatical Inference Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. A. Smith and J. Eisner. 2005. Guiding unsupervised gram- mar induction using contrastive estimation. In Proc. of IJ- CAI Workshop on Grammatical Inference Applications.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A unified approach to path problems", "authors": [ { "first": "R", "middle": [ "E" ], "last": "Tarjan", "suffix": "" } ], "year": 1981, "venue": "Journal of the ACM", "volume": "28", "issue": "3", "pages": "577--93", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. E. Tarjan. 1981. A unified approach to path problems. Jour- nal of the ACM, 28(3):577-93.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "MMIE training of large vocabulary speech recognition systems", "authors": [ { "first": "V", "middle": [], "last": "Valtchev", "suffix": "" }, { "first": "J", "middle": [ "J" ], "last": "Odell", "suffix": "" }, { "first": "P", "middle": [ "C" ], "last": "Woodland", "suffix": "" }, { "first": "S", "middle": [ "J" ], "last": "Young", "suffix": "" } ], "year": 1997, "venue": "Speech Communication", "volume": "22", "issue": "4", "pages": "303--317", "other_ids": {}, "num": null, "urls": [], "raw_text": "V. Valtchev, J. J. Odell, P. C. Woodland, and S. 
J. Young. 1997. MMIE training of large vocabulary speech recognition systems. Speech Communication, 22(4):303-14.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "The latent maximum entropy principle", "authors": [ { "first": "S", "middle": [], "last": "Wang", "suffix": "" }, { "first": "R", "middle": [], "last": "Rosenfeld", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "D", "middle": [], "last": "Schuurmans", "suffix": "" } ], "year": 2002, "venue": "Proc. of ISIT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Wang, R. Rosenfeld, Y. Zhao, and D. Schuurmans. 2002. The latent maximum entropy principle. In Proc. of ISIT.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Unsupervised word sense disambiguation rivaling supervised methods", "authors": [ { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proc. of ACL.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Towards semi-supervised classification with Markov random fields", "authors": [ { "first": "X", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Ghahramani", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Zhu and Z. Ghahramani. 2002. Towards semi-supervised classification with Markov random fields. Technical Report CMU-CALD-02-106, Carnegie Mellon University.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Each bigram x_i^{i+1} in the sentence has an arc pair (x_i : x_{i+1}, x_{i+1} : x_i)." }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "All train & development words are in the tagging dictionary: Percent ambiguous words tagged correctly (with coarse tags) on the 24K dataset, as the dictionary is diluted and with spelling features. Each graph corresponds to a different level of dilution. Models selected using unlabeled development data are circled. These plots (unlike Tab. 3) are not comparable to each other because each is measured on a different set of ambiguous words." }, "TABREF0": { "num": null, "html": null, "content": "", "text": "", "type_str": "table" }, "TABREF1": { "num": null, "html": null, "content": "
", "text": "Supervised (upper box) and unsupervised (lower box) estimation with log-linear models in terms of Eq. 5.", "type_str": "table" }, "TABREF3": { "num": null, "html": null, "content": "
model | 12K sel. oracle | 24K sel. oracle | 48K sel. oracle | 96K sel. oracle
CRF (supervised) | 100.0 | 99.8 | 99.8 | 99.5
HMM (supervised) | 99.3 | 98.5 | 97.9 | 97.2
LENGTH | 74.9 77.4 | 78.7 81.5 | 78.3 81.3 | 78.9 79.3
DELORTRANS1 | 70.8 70.8 | 78.6 78.6 | 78.3 79.1 | 75.2 78.8
TRANS1 | 72.7 72.7 | 77.2 77.2 | 78.1 79.4 | 74.7 79.0
EM | 49.5 52.9 | 55.5 58.0 | 59.4 60.9 | 60.9 62.1
DEL1WORD | 55.4 55.6 | 58.6 60.3 | 59.9 60.2 | 59.9 60.4
DEL1SUBSEQ | 53.0 53.3 | 55.0 56.7 | 55.3 55.4 | 57.3 58.7
random expected | 35.2 | 35.1 | 35.1 | 35.1
ambiguous words | 6,244 | 12,923 | 25,879 | 51,521
", "text": "", "type_str": "table" }, "TABREF4": { "num": null, "html": null, "content": "
words in tagging dict. | DELORTRANS1: trigram (sel., oracle), trigram + spelling (sel., oracle) | TRANS1: trigram (sel., oracle), trigram + spelling (sel., oracle) | LENGTH: trigram (sel., oracle), trigram + spelling (sel., oracle) | EM: trigram (sel., oracle) | random expected | ambiguous words | ave. tags/token
all train & dev. | 78.3 90.1, 80.9 91.1 | 90.4 90.4, 88.7 90.9 | 87.8 90.4, 87.1 91.9 | 78.0 84.4 | 69.5 | 13,150 | 2.3
1st 500 sents. | 72.3 84.8, 80.2 90.8 | 80.8 82.9, 88.1 90.1 | 68.1 78.3, 76.9 83.2 | 77.2 80.5 | 60.5 | 13,841 | 3.7
count \u2265 2 | 69.5 81.3, 79.5 90.3 | 77.0 78.6, 78.7 90.1 | 65.3 75.2, 73.3 73.8 | 70.1 70.9 | 56.6 | 14,780 | 4.4
count \u2265 3 | 65.0 77.2, 78.3 89.8 | 71.7 73.4, 78.4 89.5 | 62.8 72.3, 73.2 73.6 | 66.5 66.5 | 51.0 | 15,996 | 5.5
", "text": "", "type_str": "table" } } } }