{ "paper_id": "D12-1032", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:22:40.232955Z" }, "title": "Name Phylogeny: A Generative Model of String Variation", "authors": [ { "first": "Nicholas", "middle": [], "last": "Andrews", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": { "addrLine": "3400 N. Charles St", "postCode": "21218", "settlement": "Baltimore", "region": "MD", "country": "USA" } }, "email": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": { "addrLine": "3400 N. Charles St", "postCode": "21218", "settlement": "Baltimore", "region": "MD", "country": "USA" } }, "email": "eisner@jhu.edu" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": { "addrLine": "3400 N. Charles St", "postCode": "21218", "settlement": "Baltimore", "region": "MD", "country": "USA" } }, "email": "mdredze@jhu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Many linguistic and textual processes involve transduction of strings. We show how to learn a stochastic transducer from an unorganized collection of strings (rather than string pairs). The role of the transducer is to organize the collection. Our generative model explains similarities among the strings by supposing that some strings in the collection were not generated ab initio, but were instead derived by transduction from other, \"similar\" strings in the collection. Our variational EM learning algorithm alternately reestimates this phylogeny and the transducer parameters. The final learned transducer can quickly link any test name into the final phylogeny, thereby locating variants of the test name. We find that our method can effectively find name variants in a corpus of web strings used to refer to persons in Wikipedia, improving over standard untrained distances such as Jaro-Winkler and Levenshtein distance.", "pdf_parse": { "paper_id": "D12-1032", "_pdf_hash": "", "abstract": [ { "text": "Many linguistic and textual processes involve transduction of strings. We show how to learn a stochastic transducer from an unorganized collection of strings (rather than string pairs). The role of the transducer is to organize the collection. Our generative model explains similarities among the strings by supposing that some strings in the collection were not generated ab initio, but were instead derived by transduction from other, \"similar\" strings in the collection. Our variational EM learning algorithm alternately reestimates this phylogeny and the transducer parameters. The final learned transducer can quickly link any test name into the final phylogeny, thereby locating variants of the test name. 
We find that our method can effectively find name variants in a corpus of web strings used to refer to persons in Wikipedia, improving over standard untrained distances such as Jaro-Winkler and Levenshtein distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Systematic relationships between pairs of strings are at the core of problems such as transliteration (Knight and Graehl, 1998) , morphology (Dreyer and Eisner, 2011) , cross-document coreference resolution (Bagga and Baldwin, 1998) , canonicalization (Culotta et al., 2007) , and paraphrasing (Barzilay and Lee, 2003) . Stochastic transducers such as probabilistic finite-state transducers are often used to capture such relationships. They model a conditional distribution p(y | x), and are ordinarily trained on input-output pairs of strings (Dreyer et al., 2008) .", "cite_spans": [ { "start": 102, "end": 127, "text": "(Knight and Graehl, 1998)", "ref_id": "BIBREF21" }, { "start": 141, "end": 166, "text": "(Dreyer and Eisner, 2011)", "ref_id": "BIBREF11" }, { "start": 207, "end": 232, "text": "(Bagga and Baldwin, 1998)", "ref_id": "BIBREF1" }, { "start": 252, "end": 274, "text": "(Culotta et al., 2007)", "ref_id": "BIBREF8" }, { "start": 294, "end": 318, "text": "(Barzilay and Lee, 2003)", "ref_id": "BIBREF2" }, { "start": 545, "end": 566, "text": "(Dreyer et al., 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we are interested in learning from an unorganized collection of strings, some of which might have been derived from others by transformative linguistic processes such as abbreviation, morphological derivation, historical sound or spelling change, loanword formation, translation, transliteration, editing, or transcription error. We assume that each string was derived from at most one parent, but may give rise to any number of children.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The difficulty is that most or all of these parentchild relationships are unobserved. We must reconstruct this evolutionary phylogeny. At the same time, we must fit the parameters of a model of the relevant linguistic process p(y | x), which says what sort of children y might plausibly be derived from parent x. Learning this model of p(y | x) helps us organize the training collection by reconstructing its phylogeny, and also permits us to generalize to new forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We will focus on the problem of name variation. We observe a collection of person names-full names, nicknames, abbreviated or misspelled names, etc. Some of these names can refer to the same person; we hope to detect this. It would be an unlikely coincidence if two mentions of John Jacob Jingleheimer Schmidt referred to different people, since this is a long and unusual name. Similarly, John Jacob Jingelhimer Smith and Dr. J. J. Jingleheimer may also be related names for this person. That is, these names may be derived from one another, via unseen relationships, although we cannot be sure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Readers may be reminded of unsupervised clustering, in which \"suspiciously similar\" points can be explained as having been generated by the same cluster. 
Since each name is linked to at most one parent, our setting resembles single-link clustering-with a learned, asymmetric distance measure p(y | x).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We will propose a generative process that makes explicit assumptions about how strings are copied with mutation. It is assumed to have generated all the names in the collection, in an unknown order. Given learned parameters, we can ask the model whether a name Dr. J. J. Jingelheimer in the collection is more likely to have been generated from scratch, or derived from some previous name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Several previous papers have also considered learning transducers or other models of word pairs when the pairing between inputs and outputs is not given. Most commonly, one observes parallel or comparable corpora in two languages, and must reconstruct a matching from one language's words to the other's before training on the resulting pairs (Schafer, 2006b; Klementiev and Roth, 2006; Haghighi et al., 2008; Snyder et al., 2010; Sajjad et al., 2011) . Hall and Klein (2010) extend this setting to more than two languages, where the phylogenetic tree is known. A given lexeme (abstract word) can be realized in each language by at most one word (string type), derived from the parent language's realization of the same lexeme. The system must match words that share an underlying lexeme (i.e., cognates), creating a matching of each language's vocabulary to its parent language's vocabulary. A further challenge is that the parent words are unobserved ancestral forms.", "cite_spans": [ { "start": 343, "end": 359, "text": "(Schafer, 2006b;", "ref_id": "BIBREF30" }, { "start": 360, "end": 386, "text": "Klementiev and Roth, 2006;", "ref_id": "BIBREF20" }, { "start": 387, "end": 409, "text": "Haghighi et al., 2008;", "ref_id": "BIBREF16" }, { "start": 410, "end": 430, "text": "Snyder et al., 2010;", "ref_id": "BIBREF33" }, { "start": 431, "end": 451, "text": "Sajjad et al., 2011)", "ref_id": "BIBREF27" }, { "start": 454, "end": 475, "text": "Hall and Klein (2010)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "1.1" }, { "text": "Similarly, Dreyer and Eisner (2011) organize words into morphological paradigms of a given structure. Again words with the same underlying lexeme (i.e., morphemes) must be identified. A lexeme can be realized in each grammatical inflection (such as \"first person plural present\") by exactly one word type, related to other inflected forms of the same lexeme, which as above may be unobserved. Their inference setting is closer to ours because the input is an unorganized collection of words-input words are not tagged with their grammatical inflections. This contrasts with the usual multilingual setting where each word is tagged with its true language.", "cite_spans": [ { "start": 11, "end": 35, "text": "Dreyer and Eisner (2011)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "1.1" }, { "text": "In one way, our problem differs significantly from the above problems. We are interested in random variation that may occur within a language as well as across languages. A person name may have unboundedly many different variants. 
This is unlike the above problems, in which a lexeme has at most K realizations, where K is the (small) number of languages or inflections. 1 We cannot assign the observed strings to positions in an existing structure that is shared across all lexemes, such as a given phylogenetic tree whose K nodes represent languages, or a given inflectional grid whose K cells represent grammatical inflections. Rather, we must organize them into an idiosyncratic phylogenetic tree whose nodes are the string types or tokens themselves.", "cite_spans": [ { "start": 371, "end": 372, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "1.1" }, { "text": "Names and words are not the only non-biological objects that are copied with mutation. Documents, database records, bibliographic entries, code, and images can evolve in the same way. Reconstructing these relationships has been considered by a number of papers on authorship attribution, near-duplicate detection, deduplication, record linkage, and plagiarism detection. A few such papers reconstruct a phylogeny, as in the case of chain letters (Bennett et al., 2003), malware (Karim et al., 2005), or images (Dias et al., 2012). In fact, the last of these uses the same minimum spanning tree method that we apply in \u00a75.3. However, these papers do not train a similarity measure as we do. To our knowledge, these two techniques have not been combined outside biology.", "cite_spans": [ { "start": 446, "end": 467, "text": "(Bennett et al., 2003", "ref_id": "BIBREF3" }, { "start": 468, "end": 498, "text": "), malware (Karim et al., 2005", "ref_id": null }, { "start": 511, "end": 530, "text": "(Dias et al., 2012)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "1.1" }, { "text": "In molecular evolutionary analysis, phylogenetic techniques have often been combined with estimation of some parametric model of mutation (Tamura et al., 2011) . However, names mutate differently from biological sequences, and our mutation model for names ( \u00a74, \u00a78) reflects that. We also posit a specific process ( \u00a73) that generates the name phylogeny.", "cite_spans": [ { "start": 138, "end": 159, "text": "(Tamura et al., 2011)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "1.1" }, { "text": "A fragment of a phylogeny for person names is shown in Figure 1 . Our procedure learned this automatically from a collection of name tokens, without observing any input/output pairs. The nodes of the phylogeny are the observed name types, 2 each one associated with a count of observed tokens.", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 63, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "An Example", "sec_num": "2" }, { "text": "Each arrow corresponds to a hypothesized mutation. These mutations reflect linguistic processes such as misspelling, initialism, nicknaming, transliteration, etc. As an exception, however, each arrow from the distinguished root node \u2666 generates an initial name for a new entity. The descendants of this initial name are other names that subsequently evolved for that entity. 
Thus, the child subtrees of \u2666 give a partition of the name types into entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Example", "sec_num": "2" }, { "text": "Thanks to the phylogeny, the seemingly disparate names Ghareeb Nawaz and Muinuddin Chishti are seen to refer to the same entity. They may be traced back to their common ancestor Khawaja Gharibnawaz Muinuddin Hasan Chisty, from which both were derived via successive mutations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Example", "sec_num": "2" }, { "text": "Not shown in Figure 1 is our learned family p of conditional probability distributions, which models the likely mutations in this corpus. Our EM learning procedure found p jointly with the phylogeny. Specifically, it alternated between improving p and improving the distribution over phylogenies. At the end, we extracted the single best phylogeny.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 21, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "An Example", "sec_num": "2" }, { "text": "Together, the learned p and the phylogeny in Figure 1 form an explanation of the observed collection of names. What makes it more probable than other explanations? Informally, two properties:", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 51, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "An Example", "sec_num": "2" }, { "text": "\u2022 Each node in the tree is plausibly derived from its parent. More precisely, the product of the edge probabilities under p is comparatively high. A different p would have reduced the probability of the events in this phylogeny. A different phylogeny would have involved a more improbable collection of events, such as replacing Chishti with Pynchon, or generating many unrelated copies of Pynchon directly from \u2666.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Example", "sec_num": "2" }, { "text": "\u2022 In the phylogeny, the parent names tend to be used often enough that it is plausible for variants of these names to have emerged. Our model says that new tokens are derived from previously generated tokens. Thus-other things equal-Barack Obama is more plausibly a variant of Barack Obama, Jr. than of Barack Obama, Sr. (which has fewer tokens).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Example", "sec_num": "2" }, { "text": "Our model should reflect the reasons that name variation exists. A named entity has the form y = (e, w) where w is a string being used to refer to entity e. A single entity e may be referred to on different occasions by different name strings w. We suppose that this is the result of copying the entity with occasional mutation of its name (as in asexual reproduction).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Generative Model of Tokens", "sec_num": "3" }, { "text": "Thus, we assume the following simple generative process that produces an ordered sequence of tokens y 1 , y 2 , . . ., where y i = (e i , w i ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Generative Model of Tokens", "sec_num": "3" }, { "text": "\u2022 After the first k tokens y 1 , . . . y k have been generated, the author responsible for generating y k+1 must choose whom to talk about next. She is likely to think of someone she has heard about often in the past. 
So to make this choice, she selects one of the previous tokens y i uniformly at random, each having probability 1/(k + \u03b1); or else she selects \u2666, with probability \u03b1/(k + \u03b1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Generative Model of Tokens", "sec_num": "3" }, { "text": "\u2022 If the author selected a previous token y i , then with probability 1 \u2212 \u00b5 she copies it faithfully, so y k+1 = y i . But with probability \u00b5, she instead draws a mutated token y k+1 = (e k+1 , w k+1 ) from the mutation model p(\u2022 | y i ). This preserves the entity (e k+1 = e i with probability 1), but the new name w k+1 is a stochastic transduction of w i drawn from p(\u2022 | w i ). 3 For example, in referring to e i , the author may shorten and respell w i = Khwaja Gharib Nawaz into w k+1 = Ghareeb Nawaz (Figure 1 ).", "cite_spans": [], "ref_spans": [ { "start": 507, "end": 516, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "A Generative Model of Tokens", "sec_num": "3" }, { "text": "\u2022 If the author selected \u2666, she must choose a fresh entity y k+1 = (e k+1 , w k+1 ) to talk about. So she sets e k+1 to a newly created entity, sampling its name w k+1 from the distribution p(\u2022 | \u2666). For example, w k+1 = Thomas Ruggles Pynchon, Jr. (Figure 1 ). Nothing prevents w k+1 from being a name that is already in use for another entity (i.e., w k+1 may equal w j for some j \u2264 k).", "cite_spans": [], "ref_spans": [ { "start": 249, "end": 258, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "A Generative Model of Tokens", "sec_num": "3" }, { "text": "If we ignore the name strings, we can see that the sequence of entities e 1 , e 2 , . . . e N is being generated from a Chinese restaurant process (CRP) with concentration parameter \u03b1. To the extent that \u03b1 is low (so that \u2666 is rarely used), a few randomly chosen entities will dominate the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relationship to other models", "sec_num": "3.1" }, { "text": "The CRP is equivalent to sampling e 1 , e 2 , . . . IID from an unknown distribution that was itself drawn from a Dirichlet process with concentration \u03b1. This is indeed a standard model of a distribution over entities. For example, Hall et al. (2008) use it to model venues in bibliographic entries.", "cite_spans": [ { "start": 232, "end": 250, "text": "Hall et al. (2008)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Relationship to other models", "sec_num": "3.1" }, { "text": "From this characterization of the CRP, one can see that any permutation of this entity sequence would have the same probability. That is, our distribution over sequences of entities e is exchangeable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relationship to other models", "sec_num": "3.1" }, { "text": "However, our distribution over sequences of named entities y = (e, w) is non-exchangeable. It assigns different probabilities to different orderings of the same tokens. This is because our model posits that later authors are influenced by earlier authors, copying entity names from them with mutation. So ordering is important. 
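To make the role of ordering concrete, here is a minimal sketch of the generative process above; sample_root_name and mutate_name are hypothetical stand-ins for draws from p(\u2022 | \u2666) and the mutation model p(\u2022 | w), which are described in \u00a74.

```python
import random

def sample_corpus(N, alpha, mu, sample_root_name, mutate_name):
    """Sample N (entity, name) tokens from the generative process of Section 3.

    sample_root_name() and mutate_name(w) are hypothetical stand-ins for
    draws from p(. | ROOT) and from the mutation model p(. | w).
    """
    tokens = []
    next_entity = 0
    for k in range(N):
        if random.random() < alpha / (k + alpha):
            # Select the root: create a fresh entity with a newly sampled name.
            tokens.append((next_entity, sample_root_name()))
            next_entity += 1
        else:
            # Otherwise each of the k previous tokens is selected
            # with probability 1/(k + alpha).
            e, w = random.choice(tokens)
            if random.random() < mu:
                w = mutate_name(w)  # mutated copy; the entity is preserved
            tokens.append((e, w))   # with probability 1 - mu: faithful copy
    return tokens
```

Because each token's parent is drawn from the tokens generated so far, permuting the output changes its probability, unlike in the CRP over entities alone. 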
The mutation process is not symmetric-for example, Figure 1 reflects a tendency to shorten rather than lengthen names.", "cite_spans": [], "ref_spans": [ { "start": 379, "end": 387, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Relationship to other models", "sec_num": "3.1" }, { "text": "Non-exchangeability is one way that our present model differs from (parametric) transformation models (Eisner, 2002) and (non-parametric) transformation processes (Andrews and Eisner, 2011). These too are defined using mutation of strings or other types. From a transformation process, one can draw a distribution over types, from which the tokens are then sampled IID. This results in an exchangeable sequence of tokens, just as in the Dirichlet process.", "cite_spans": [ { "start": 102, "end": 116, "text": "(Eisner, 2002)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Relationship to other models", "sec_num": "3.1" }, { "text": "We avoid transformation models here for three reasons. (1) Inference is more expensive. (2) A transformation process seems less realistic as a model of authorship. It constructs a distribution over derivational paths, similar to the paths in Figure 1 . It effectively says that each token is generated by recapitulating some previously used path from \u2666, but with some chance of deviating at each step. For an author to generate a name token this way, she would have to know the whole derivational history of the previous name she was adapting. Our present model instead allows an author simply to select a name she previously saw and copy or mutate its surface form.", "cite_spans": [], "ref_spans": [ { "start": 242, "end": 250, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Relationship to other models", "sec_num": "3.1" }, { "text": "(3) One should presumably prefer to explain a novel name y as a mutation of a frequent name x, other things equal ( \u00a72). But surprisingly, inference under the transformation process does not prefer this. 4 Another view of our present model comes from the literature on random graphs (e.g., for modeling social networks or the link structure of the web). In a preferential attachment model, a graph's vertices are added one by one, and each vertex selects some previous vertices as its neighbors. Our phylogeny is a preferential attachment tree, a random directed graph in which each vertex selects a single previous vertex as its parent. Specifically, it is a random recursive tree (Smythe and Mahmoud, 1995) whose vertices are the tokens. 5 To this simple random topology we have added a random labeling process with mutation. The first \u03b1 vertices are labeled with \u2666.", "cite_spans": [ { "start": 204, "end": 205, "text": "4", "ref_id": null }, { "start": 682, "end": 708, "text": "(Smythe and Mahmoud, 1995)", "ref_id": "BIBREF32" }, { "start": 740, "end": 741, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Relationship to other models", "sec_num": "3.1" }, { "text": "Our model in \u00a73 samples the next token y, when it is not simply a faithful copy, from p(y | x) or p(y | \u2666). The key step there is to sample the name string w y from p(w y | w x ) or p(w y | \u2666).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Mutation Model for Strings", "sec_num": "4" }, { "text": "Our model of these distributions could easily incorporate detailed linguistic knowledge of the mutation process (see \u00a78). 
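Before turning to the specific model, it may help to see how such a conditional distribution can be computed at all. The following sketch evaluates p(w_y | w_x) under a deliberately simplified memoryless edit model; theta is an assumed table of operation probabilities, and the sketch omits the right-context conditioning and latent edit regions of the model described next, and is not carefully normalized at string boundaries.

```python
from functools import lru_cache

def string_prob(x, y, theta):
    """p(y | x): total probability of the edit sequences that turn x into y.

    A simplified sketch: theta holds context-free operation probabilities
    plus an alphabet size V (each inserted or substituted character gets
    probability 1/V).
    """
    V = theta["V"]

    @lru_cache(maxsize=None)
    def p(i, j):
        # Probability of producing the suffix y[j:] given unread input x[i:].
        total = theta["stop"] if (i == len(x) and j == len(y)) else 0.0
        if j < len(y):                                      # insert y[j]
            total += theta["insert"] / V * p(i, j + 1)
        if i < len(x):                                      # delete x[i]
            total += theta["delete"] * p(i + 1, j)
        if i < len(x) and j < len(y):
            if x[i] == y[j]:                                # copy x[i]
                total += theta["copy"] * p(i + 1, j + 1)
            total += theta["subst"] / V * p(i + 1, j + 1)   # substitute
        return total

    return p(0, 0)  # O(|x| * |y|) distinct memoized subproblems
```
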
Here we describe the specific model that we use in our experiments. Like many such models, it can be regarded as a stochastic finite-state string-to-string transducer parameterized by \u03b8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Mutation Model for Strings", "sec_num": "4" }, { "text": "There is much prior work on stochastic models of edit distance (Ristad and Yianilos, 1998; Bilenko and Mooney, 2003; Oncina and Sebban, 2006; Schafer, 2006a; Bouchard-C\u00f4t\u00e9 et al., 2008; Dreyer et al., 2008, among others) . For the present experiments, we designed a moderately simple one that employs (1) conditioning on one character of right context, (2) latent \"edit\" and \"no-edit\" regions to capture the fact that groups of edits are often made in close proximity, and (3) some simple special handling for the distribution conditioned on the root p(w y | \u2666).", "cite_spans": [ { "start": 63, "end": 90, "text": "(Ristad and Yianilos, 1998;", "ref_id": "BIBREF26" }, { "start": 91, "end": 116, "text": "Bilenko and Mooney, 2003;", "ref_id": "BIBREF5" }, { "start": 117, "end": 141, "text": "Oncina and Sebban, 2006;", "ref_id": "BIBREF23" }, { "start": 142, "end": 157, "text": "Schafer, 2006a;", "ref_id": "BIBREF29" }, { "start": 158, "end": 185, "text": "Bouchard-C\u00f4t\u00e9 et al., 2008;", "ref_id": "BIBREF6" }, { "start": 186, "end": 220, "text": "Dreyer et al., 2008, among others)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Mutation Model for Strings", "sec_num": "4" }, { "text": "We assume a stochastic mutation process which, when given an input string w x , edits it from left to right into an output string w y . Then p(w y | w x ) is the total probability of all operation sequences on w x that would produce w y . This total can be computed in time O(|w x | \u2022 |w y |) by dynamic programming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Mutation Model for Strings", "sec_num": "4" }, { "text": "Our process has four character-level edit operations: copy, substitute, insert, delete. It also has a distinguished no-edit operation that behaves exactly like copy. At each step, the process first randomly chooses whether to edit or no-edit, conditioned only on whether the previous operation was an edit. If it chooses to edit, it chooses a random edit type with some probability conditioned on the next input character. In the case of insert or substitute, it then randomly chooses an output character, conditioned on the type of edit and the next input character.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Mutation Model for Strings", "sec_num": "4" }, { "text": "It is common to mutate a name by editing contiguous substrings (e.g., words). Contiguous regions of copying versus editing can be modeled by a low probability of transitioning between no-edit and edit regions. 6 Note that an edit region may include some copy edits (or substitute edits that replace a character with itself) without leaving the edit region. This is why we distinguish copy from no-edit.", "cite_spans": [ { "start": 210, "end": 211, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "A Mutation Model for Strings", "sec_num": "4" }, { "text": "Input and output strings are augmented with a trailing end-of-string symbol that is seen by the single-character lookahead. If the next character is the end-of-string symbol, the only available edit is insert. 
Alternatively, if the process selects no-edit, then the end-of-string symbol is copied to the output string and the process terminates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Mutation Model for Strings", "sec_num": "4" }, { "text": "In the case of p(w y | \u2666), the input string is empty, and both input and output are augmented with a trailing character that behaves like the end-of-string symbol. Then w y is generated by a sequence of insertions followed by a copy. These are conditioned as usual on the next character, here the end-of-string symbol, so the model can learn to insert more or different characters when the input is \u2666.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Mutation Model for Strings", "sec_num": "4" }, { "text": "The parameters \u03b8 determining the conditional probabilities of the different operations and characters are estimated with backoff smoothing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Mutation Model for Strings", "sec_num": "4" }, { "text": "The input to inference is a collection of named entity tokens y. Most are untagged tokens of the form y = (?, w). In a semi-supervised setting, however, some of the tokens may be tagged tokens of the form y = (e, w), whose true entity is known. The entity tags place a constraint on the phylogeny, since each child subtree of \u2666 must correspond to exactly one entity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Inference", "sec_num": "5" }, { "text": "Suppose we were lucky enough to fully observe the sequence of named entity tokens y i = (e i , w i ) produced by our generative model. That is, suppose all tokens were tagged and we knew their ordering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An unrealistically supervised setting", "sec_num": "5.1" }, { "text": "Yet there would still be something to infer: which tokens were derived from which previous tokens. This phylogeny is described by a spanning tree over the tokens. Let us see how to infer it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An unrealistically supervised setting", "sec_num": "5.1" }, { "text": "For each potential edge x \u2192 y between named entity tokens, define \u03b4(y | x) to be the probability of choosing x and copying it (possibly with mutation) to obtain y. So", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An unrealistically supervised setting", "sec_num": "5.1" }, { "text": "\u03b4(y j | \u2666) = \u03b1 p(y j | \u2666) (1); \u03b4(y j | y i ) = \u00b5 p(y j | y i ) + (1 \u2212 \u00b5)1(y j = y i ) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An unrealistically supervised setting", "sec_num": "5.1" }, { "text": "except that if i \u2265 j or if e i \u2260 e j , then \u03b4(y j | y i ) = 0 (since y j can only be derived from an earlier token y i with the same entity).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An unrealistically supervised setting", "sec_num": "5.1" }, { "text": "Now the prior probability of generating y 1 , . . . y N with a given phylogenetic tree is easily seen to be a product over all tree edges, \u220f_j \u03b4(y j | pa(y j )) where pa(y j ) is the parent of y j . 
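As a concrete illustration, the \u03b4 values for this supervised setting could be tabulated as follows (a sketch; p_mutate and p_root are assumed helpers implementing the mutation model of \u00a74):

```python
import numpy as np

def delta_matrix(tokens, p_mutate, p_root, alpha, mu):
    """The (N+1) x (N+1) matrix of delta values for the supervised setting.

    tokens[i] = (e_i, w_i), in generation order; row and column 0 stand for
    the root. p_mutate(wy, wx) and p_root(wy) are assumed helpers that
    implement the mutation model's p(y | x) and p(y | root).
    """
    N = len(tokens)
    delta = np.zeros((N + 1, N + 1))
    for j, (ej, wj) in enumerate(tokens):
        delta[0, j + 1] = alpha * p_root(wj)                   # eq. (1)
        for i, (ei, wi) in enumerate(tokens[:j]):              # only i < j
            if ei == ej:                                       # same entity
                delta[i + 1, j + 1] = (mu * p_mutate(wj, wi)
                                       + (1 - mu) * float(wj == wi))  # eq. (2)
    return delta
```
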
As a result, it is known that the following are efficient to compute from the (N + 1) \u00d7 (N + 1) matrix of \u03b4 values (see \u00a75.3):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An unrealistically supervised setting", "sec_num": "5.1" }, { "text": "(a) the max-probability spanning tree; (b) the total probability of all spanning trees; and (c) the marginal probability of each edge, under the posterior distribution on spanning trees. (a) is our single best guess of the phylogeny. We use this during evaluation. (b) gives the model likelihood, i.e., the total probability of the observed data y 1 , . . . y N . To locally maximize the model likelihood, (c) can serve as the E step of our EM algorithm ( \u00a76) for tuning our mutation model. The M step then retrains the mutation model's parameters \u03b8 on input-output pairs w i \u2192 w j , weighting each pair by its edge's posterior marginal probability (c), since that is the expected count of a w i \u2192 w j mutation. This computation is iterated. Note that a phylogeny partitions the name types into some number of \"clusters,\" where each cluster corresponds to a child subtree of \u2666 and represents an entity. We can increase the number of \"clusters\" inferred by our method by increasing the ratio \u03b1/\u00b5, which controls the preference for an entity to descend from \u2666 versus an existing entity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An unrealistically supervised setting", "sec_num": "5.1" }, { "text": "Now we turn to a real setting-fully unsupervised data. Two issues will force us to use an approximate inference algorithm. First, we have an untagged corpus: a token's entity tag e is never observed. Second, the order of the tokens is not observed, so we do not know which other tokens are candidate parents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "Our first approximation is to consider only phylogenies over types rather than tokens. 7 The type phylogeny in Figure 1 represents a set of possible token phylogenies. Each node of Figure 1 represents an untagged name type y = (?, w). By grouping all n y tokens of this type into a single node, we mean that the first token of y was derived by mutation from the parent node, while each later token of y was derived by copying an (unspecified) earlier token of y.", "cite_spans": [ { "start": 87, "end": 88, "text": "7", "ref_id": null } ], "ref_spans": [ { "start": 111, "end": 119, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 181, "end": 189, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "A token phylogeny cannot be represented in this way if two or more tokens of y were created by mutations. In that case, their name strings are equal only by coincidence. They may have different parents (perhaps of different entities), whereas the y node in a type phylogeny can have only one parent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "We argue, however, that these unrepresentable token phylogenies are comparatively unlikely a posteriori and can be reasonably ignored during inference. The first token of y is necessarily a mutation, but later tokens are much more likely to be copies. 
The probability of generating a later token y by copying some previous token is at least", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "(1 \u2212 \u00b5)/(N + \u03b1), 7 [Footnote 7: Working over types improves the quality of our second approximation, and also speeds up the spanning tree algorithms. \u00a76 explains how to regard this approximation as variational EM.]", "cite_spans": [ { "start": 17, "end": 18, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "while the probability of generating it in some other way is at most", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "max(\u03b1 p(y | \u2666), \u00b5 max x\u2208Y p(y | x))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "where Y is the set of observed types. The second probability is typically much smaller: an author is unlikely to invent exactly the observed string y, certainly from \u2666 but even by mutating a similar string x (especially when the mutation rate \u00b5 is small).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "How do we evaluate a type phylogeny? Consider the probability of generating untagged tokens y 1 , . . . y N in that order and respecting the phylogeny:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "\u220f_{k=1}^{N} 1/(k + \u03b1) \u00b7 \u220f_{y\u2208Y} [ g(y | pa(y)) \u00b7 \u220f_{i=1}^{n_y\u22121} i (1 \u2212 \u00b5) ] (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "where g(y | pa(y)) is a factor for generating the first token of y from its parent pa(y), defined by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g(y | \u2666) = \u03b1 \u2022 p(y | \u2666) (4); g(y | x) = \u00b5 \u2022 (# tokens of x preceding first token of y) \u2022 p(y | x)", "eq_num": "(5)" } ], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "But we do not actually know the token order: by assumption, our input corpus is only an unordered bag of tokens. So we must treat the hidden ordering like any other hidden variable and maximize the marginal likelihood, which sums (3) over all possible orderings (permutations). This sum can be regarded as the number of permutations N! (which is fixed given the corpus) times the expectation of (3) for a permutation chosen uniformly at random. This leads to our second approximation. We approximate this expectation of the product (3) with a product of expectations of its individual factors. 8 To find the expectation of (5), observe that the expected number of tokens of x that precede the first token of y is n x /(n y +1), since each of the n x tokens of x has a 1/(n y + 1) chance of falling before all n y tokens of y. 
It follows that the approximated probability of generating all tokens in some order, with our given type parentage, is proportional to \u220f_{y\u2208Y} \u03b4(y | pa(y)) (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "\u03b4(y | \u2666) = \u03b1 \u2022 p(y | \u2666) (7) \u03b4(y | x) = \u00b5 \u2022 p(y | x) \u2022 n x /(n y + 1) (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "and the constant of proportionality depends on the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "The above equations are analogous to those in \u00a75.1. Again, the approximate posterior probability of a given type parentage tree is edge-factored-it is the product of individual edge weights defined by \u03b4. Thus, we are again eligible to use the spanning tree algorithms in \u00a75.3 below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "Notice that n x in the numerator of (8) means that y is more likely to select a frequent x as its parent. Also, n y +1 in the denominator means that a frequent y is not as likely to have any parent x \u2260 \u2666, because its first token probably falls early in the sequence where there are fewer available parents x \u2260 \u2666.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The unsupervised setting", "sec_num": "5.2" }, { "text": "Define a complete directed graph G over the vertices Y \u222a {\u2666}. The weight of an edge x \u2192 y is defined by \u03b4(y | x). The (approximate) posterior probability of a given phylogeny given our evidence is proportional to the product of the \u03b4 values of its edges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spanning tree algorithms", "sec_num": "5.3" }, { "text": "Formally, let T \u2666 (G) denote the set of spanning trees of G rooted at \u2666, and define the weight of a particular spanning tree T \u2208 T \u2666 (G) to be the product of the weights of its edges:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spanning tree algorithms", "sec_num": "5.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w(T) = \u220f_{(x\u2192y)\u2208T} \u03b4(y | x)", "eq_num": "(9)" } ], "section": "Spanning tree algorithms", "sec_num": "5.3" }, { "text": "Then the posterior probability of spanning tree T is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spanning tree algorithms", "sec_num": "5.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_\u03b8(T) = w(T) / Z(G)", "eq_num": "(10)" } ], "section": "Spanning tree algorithms", "sec_num": "5.3" }, { "text": "where Z(G) = \u2211_{T\u2208T_\u2666(G)} w(T) is the partition function, i.e., the total probability of generating the data G via any spanning tree of the form we consider. 
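In code, the edge weights of equations (7)-(8) are straightforward to compute in log space (a sketch; counts holds the token counts n_w, and log_p is an assumed helper returning log p(w_y | w_x), or log p(w_y | \u2666) when x is the root):

```python
import math

def log_delta(x, y, counts, log_p, alpha, mu, root="<ROOT>"):
    """log delta(y | x) for the type-level graph, per equations (7)-(8).

    counts[w] is the token count n_w of name type w; log_p(wy, wx) is an
    assumed helper returning log p(wy | wx), with wx = root for root edges.
    """
    if x == root:
        return math.log(alpha) + log_p(y, root)                  # eq. (7)
    return (math.log(mu) + log_p(y, x)
            + math.log(counts[x]) - math.log(counts[y] + 1))     # eq. (8)
```
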
This distribution is determined by the parameters \u03b8 of the transducer p \u03b8 , along with the ratio \u03b1/\u00b5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spanning tree algorithms", "sec_num": "5.3" }, { "text": "There exist several algorithms to find the single maximum-probability spanning tree, notably Tarjan's implementation of the Chu-Liu-Edmonds algorithm, which runs in O(m log n) for a sparse graph or O(n^2) for a dense graph (Tarjan, 1977) . Figure 1 shows a spanning tree found by our model using Tarjan's algorithm. Here n is the number of vertices (in our case, types and \u2666), while m is the number of edges (which we can keep small by pruning, \u00a76.1).", "cite_spans": [ { "start": 224, "end": 238, "text": "(Tarjan, 1977)", "ref_id": "BIBREF35" } ], "ref_spans": [ { "start": 241, "end": 249, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Spanning tree algorithms", "sec_num": "5.3" }, { "text": "Our inference algorithm assumes that we know the transducer parameters \u03b8. We now explain how to optimize \u03b8 to maximize the marginal likelihood of the training data. This marginal likelihood sums over all the other latent variables in the model-the spanning tree, the alignments between strings, and the hidden token ordering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "The EM procedure repeats the following until convergence: E-step: Given \u03b8, compute the posterior marginal probabilities c xy of all possible phylogeny edges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "M-step: Given all c xy , retrain \u03b8 to assign a high conditional probability to the mutations on the probable edges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "We actually use a variational EM algorithm: our E step approximates the true distribution q over all phylogenies with the closest distribution p that assigns positive probability only to type-based phylogenies. This distribution is given by (10) and minimizes KL(p || q). We argued in \u00a75.2 that it should be a good approximation. The posterior marginal probability of a directed edge from vertex x to vertex y, according to (10), is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c_xy = \u2211_{T\u2208T_\u2666(G): (x\u2192y)\u2208T} p_\u03b8(T)", "eq_num": "(11)" } ], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "The probability c xy is a \"pseudocount\" for the expected number of mutations from x to y. This is at most 1 under our assumptions. Calculating c xy requires summing over all spanning trees of G, of which there are n^{n\u22122} for a fully connected graph with n vertices. Fortunately, Tutte (1984) shows how to compute this sum by the following method, which extends Kirchhoff's classical matrix-tree theorem to weighted directed graphs. 
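A minimal numpy sketch of this computation, anticipating equations (12)-(14) below: it builds the Laplacian from the \u03b4 matrix of \u00a75.3 (index 0 playing the role of \u2666) and reads all edge marginals off a single matrix inverse.

```python
import numpy as np

def edge_marginals(delta):
    """Posterior edge marginals c_xy over rooted directed spanning trees.

    delta is an (n+1) x (n+1) nonnegative matrix with delta[x, y] the weight
    of edge x -> y; index 0 plays the role of the root. Diagonal entries are
    ignored. Returns the marginal matrix c and log Z(G).
    """
    A = delta.copy()
    np.fill_diagonal(A, 0.0)
    # Laplacian: off-diagonal L[x, y] = -delta(y | x); diagonal L[y, y] is
    # the total weight of edges entering y.
    L = -A
    np.fill_diagonal(L, A.sum(axis=0))
    Lhat = L[1:, 1:]                    # delete the root's row and column
    Linv = np.linalg.inv(Lhat)
    _, logZ = np.linalg.slogdet(Lhat)   # log of the partition function Z(G)
    m = A.shape[0]
    c = np.zeros_like(A)
    for y in range(1, m):
        c[0, y] = A[0, y] * Linv[y - 1, y - 1]
        for x in range(1, m):
            if x != y:
                c[x, y] = A[x, y] * (Linv[y - 1, y - 1] - Linv[y - 1, x - 1])
    return c, logZ
```

As a sanity check, each column c[:, y] with y > 0 should sum to 1, since every non-root vertex has exactly one parent. 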
This result has previously been employed in non-projective dependency parsing (Koo et al., 2007; Smith and Smith, 2007) .", "cite_spans": [ { "start": 278, "end": 290, "text": "Tutte (1984)", "ref_id": "BIBREF36" }, { "start": 508, "end": 526, "text": "(Koo et al., 2007;", "ref_id": "BIBREF22" }, { "start": 527, "end": 549, "text": "Smith and Smith, 2007)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "Let L \u2208 R n\u00d7n denote the Laplacian of G, namely", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_xy = \u2211_{x\u2032} \u03b4(y | x\u2032) if x = y; \u2212\u03b4(y | x) if x \u2260 y", "eq_num": "(12)" } ], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "Tutte's theorem relates the determinant of the Laplacian to the spanning trees in graph G. In particular, the cofactor L 0,0 equals the total weight of all directed spanning trees rooted at node 0. This yields the partition function Z(G) (assuming node 0 is \u2666).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "Let L\u0302 be the matrix L with the 0th row and 0th column removed. Then the edge marginals of interest are related to the log partition function by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c_xy = \u2202 log Z(G) / \u2202\u03b4(y | x) = \u2202 log |L\u0302| / \u2202\u03b4(y | x)", "eq_num": "(13)" } ], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "which has the closed-form solution", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c_xy = \u03b4(y | \u2666) (L\u0302^{\u22121})_yy if x = \u2666; \u03b4(y | x) ((L\u0302^{\u22121})_yy \u2212 (L\u0302^{\u22121})_yx) if x \u2260 \u2666", "eq_num": "(14)" } ], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "See (Koo et al., 2007) for a derivation. Thus, computing all edge marginals reduces to computing a matrix inverse, which may be done in O(n^3) time. At the M step, we retrain the mutation model parameters \u03b8 to maximize \u2211_{xy} c xy log p(w y | w x ). This is tantamount to maximum conditional likelihood training on a supervised collection of (w x , w y ) pairs that are respectively weighted by c xy . The M step is nontrivial because the term p(w y | w x ) sums over a hidden alignment between two strings. It may be performed by an inner loop of EM, where the E step uses dynamic programming to efficiently consider all possible alignments, as in (Ristad and Yianilos, 1996) . In practice, we have found it effective to take only a single step of this inner loop. 
Such a Generalized EM procedure enjoys the same convergence properties as EM, but may reach a local optimum faster (Dempster et al., 1977) .", "cite_spans": [ { "start": 4, "end": 22, "text": "(Koo et al., 2007)", "ref_id": "BIBREF22" }, { "start": 646, "end": 673, "text": "(Ristad and Yianilos, 1996)", "ref_id": "BIBREF25" }, { "start": 878, "end": 901, "text": "(Dempster et al., 1977)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Training the Transducer with EM", "sec_num": "6" }, { "text": "For large graphs, it is essential to prune the number of edges to avoid considering all n(n \u2212 1) input-output pairs. To prune the graph, we eliminate all edges between strings that do not share any common trigrams (case- and diacritic-insensitive), by setting their matrix entries to 0. As a result, the graph Laplacian is a sparse matrix, which often allows faster matrix inversion using preconditioned iterative algorithms. Furthermore, pruned edges do not appear in any spanning tree, so the E step will find that their posterior marginal probabilities are 0. This means that the input-output pairs corresponding to these edges can be ignored when re-estimating the transducer parameters in the M step. We found that pruning significantly improves training time with no appreciable loss in performance. 9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pruning the graph", "sec_num": "6.1" }, { "text": "A deficiency of our method is that it assumes that authors of our corpus have only been exposed to previous tokens in our corpus. In principle, one could also train with U additional tokens (e, w) where we observe neither e nor w, for very large U . This is the \"universe of discourse\" in which our authors operate. 10 In this case, we would need (expensive) new algorithms to reconstruct the strings w. However, this model could infer a more realistic phylogeny by positing unobserved ancestral or intermediate forms that relate the observed tokens, as in transformation models (Eisner, 2002; Andrews and Eisner, 2011) .", "cite_spans": [ { "start": 316, "end": 318, "text": "10", "ref_id": null }, { "start": 579, "end": 593, "text": "(Eisner, 2002;", "ref_id": "BIBREF14" }, { "start": 594, "end": 619, "text": "Andrews and Eisner, 2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training with unobserved tokens?", "sec_num": "6.2" }, { "text": "Scraping Wikipedia. Wikipedia documents many variant names for entities. As a result, it has frequently been used as a source for mining name variations, both within and across languages (Parton et al., 2008; Cucerzan, 2007) . We used Wikipedia to create a list of name aliases for different entities. Specifically, we mined English Wikipedia 11 for all redirects: page names that lead directly to another page. Redirects are created by Wikipedia users for resolving common name variants to the correct page. For example, the pages titled Barack Obama Junior and Barack Hussein Obama automatically redirect to the page titled Barack Obama. This redirection implies that the first two are name variants of the third. Collecting all such links within English Wikipedia yields a large number of aliases for each page. However, many redirects are for topics other than individual people, and these would be poor examples of name variation. In addition, some phrases that redirect to an entity are descriptions rather than names. 
For example, 44th President of the United States also links to Barack Obama, but it is not a name variant. Freebase filtering. To improve data quality we used Freebase, a structured knowledge base that incorporates information from Wikipedia. Among its structured information are entity types, including the type \"person.\" We filtered the Wikipedia redirect collection to remove pairs where the target page was not listed as a person in Freebase. Additionally, to remove redirects that were not proper names (44th President of the United States), we applied a series of rule-based filters to remove bad aliases: removing numerical names, parentheticals after names, quotation marks, and names longer than 5 tokens, since we found that these long names were rarely person names (e.g., United States Ambassador to the European Union, or Success Through a Positive Mental Attitude, which links to the author Napoleon Hill). While not perfect, these modifications dramatically improved quality. The result was a list of 78,079 different person entities, each with one or more known names or aliases. Some typical names are shown in Figure 2 (for example, one alias set is Guy Fawkes, Guy fawkes, Guy faux, Guy Falks, Guy Faukes, Guy Fawks, Guy foxe, Guy Falkes). Estimating empirical type counts. Our method is really intended to be run on a corpus of string tokens. However, for experimental purposes, we instead use the above dataset of string types because this allows us to use the \"ground truth\" given by the Wikipedia redirects. To synthesize token counts, empirical token frequencies for each type were estimated from the LDC Gigaword corpus, 12 which is a corpus of newswire text spanning several years. Wikipedia name types that did not appear in Gigaword were assigned a \"backoff count\" of one. Note that by virtue of the domain, many misspellings will not appear; however, edges from \"popular\" names (which may be canonical names) will be assigned higher weight.", "cite_spans": [ { "start": 187, "end": 208, "text": "(Parton et al., 2008;", "ref_id": "BIBREF24" }, { "start": 209, "end": 224, "text": "Cucerzan, 2007)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 2148, "end": 2156, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Data preparation", "sec_num": "7.1" }, { "text": "We begin by evaluating the generalization ability of a transducer trained using a transformation model. To do so, we measure log-likelihood on held-out entity title and alias pairs. We then verify that the generalization ability according to log-likelihood translates into gains for a name matching task. For the experiments in this section, we use \u03b1 = 1.0 and \u00b5 = 0.1. 13 Held-out log-likelihood. We construct pairs of entity title (input) and alias (output) names from the Wikipedia data. For different amounts of supervised data, we trained the transformation model on the training set, and plotted the log-likelihood of held-out test data for the transducer parameters at each iteration of EM. The held-out test set is constructed from a disjoint set of Wikipedia entities, the same number of entities as in the training set. We used different corpora of 1000 and 1500 entities for train and test. Name matching. For each alias a in a test set (not seen at training time), we produce a ranking of test entity titles t according to transducer probabilities p \u03b8 (a | t). A good transducer should assign high probability to transformations from the correct title for the alias. Mean reciprocal rank (MRR) is a commonly used metric to estimate the quality of a ranking, which we report in Figure 4 . 
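A sketch of this evaluation (score is an assumed stand-in for the transducer score log p_\u03b8(a | t)):

```python
def mean_reciprocal_rank(test_pairs, titles, score):
    """MRR of the correct title for each alias.

    test_pairs: (alias, correct_title) pairs; titles: candidate titles;
    score(alias, title) is an assumed stand-in for log p(alias | title).
    """
    total = 0.0
    for alias, gold in test_pairs:
        ranked = sorted(titles, key=lambda t: score(alias, t), reverse=True)
        total += 1.0 / (ranked.index(gold) + 1)   # reciprocal rank of gold
    return total / len(test_pairs)
```
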
The reported mean is over all aliases in the test data. In addition to evaluating the ranking for different initializations of our transducer, we compare to two baselines: Levenshtein distance and Jaro-Winkler similarity (Winkler, 1999). Figure 4: Mean reciprocal rank (MRR) results for different training conditions: \"sup10\" means that 10 entities (roughly 40 name pairs) were used as training data for the transducer; \"semi10\" means that the \"sup10\" model was used as initialization before re-estimating the parameters using our model; \"unsup\" is the transducer trained using our model without any initial supervision; \"sup\" is trained on all 1500 entities in the training set; \"jwink\" and \"lev\" correspond to Jaro-Winkler and Levenshtein distance baselines.", "cite_spans": [], "ref_spans": [ { "start": 1288, "end": 1296, "text": "Figure 4", "ref_id": null }, { "start": 1478, "end": 1486, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "7.2" }, { "text": "The matching experiments were performed on a corpus of 1500 entities (with separate corpora of the same size for training and test).", "cite_spans": [ { "start": 44, "end": 59, "text": "(Winkler, 1999)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "7.2" }, { "text": "We have presented a new unsupervised method for learning string-to-string transducers. It learns from a collection of related strings whose relationships are unknown. The key idea is that some strings are mutations of common strings that occurred earlier. We compute a distribution over the unknown phylogenetic tree that relates these strings, and use it to reestimate the transducer parameters via EM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "8" }, { "text": "One direction for future work would be more sophisticated transduction models than the one we developed in \u00a74. For names, this could include learning common nicknames; explicitly modeling abbreviation processes such as initials; conditioning on name components such as title and middle name; and transliterating across languages. 14 In other domains, one could model bibliographic entry propagation, derivational morphology, or historical sound change (again using language tags).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "8" }, { "text": "Another future direction would be to incorporate the context of tokens in order to help reconstruct which tokens are coreferent. Combining contextual similarity with string similarity has previously proved very useful for identifying cognates (Schafer and Yarowsky, 2002; Schafer, 2006b; Bergsma and Van Durme, 2011) . 
In our setting it would help to distinguish people with identical names, as well as to determine whether two people with similar names are really the same.", "cite_spans": [ { "start": 243, "end": 271, "text": "(Schafer and Yarowsky, 2002;", "ref_id": "BIBREF28" }, { "start": 272, "end": 287, "text": "Schafer, 2006b;", "ref_id": "BIBREF30" }, { "start": 288, "end": 316, "text": "Bergsma and Van Durme, 2011)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "8" }, { "text": "14 These last two points suggest that the mutation model should operate not on simple (entity, string) pairs, but on richer representations in which the name has been parsed into its components (Eisenstein et al., 2011) , labeled with a language ID, and perhaps labeled with a phonological pronunciation. These additional properties of a named entity may be either observed or latent in training data. For example, if w_y and \u2113_y denote the string and language of name y, then define p(y | x) = p(\u2113_y | \u2113_x) \u2022 p(w_y | \u2113_y, x, w_x). The second factor captures transliteration from language \u2113_x to language \u2113_y, e.g., by using \u00a74's model with an (\u2113_x, \u2113_y)-specific parameter setting.", "cite_spans": [ { "start": 194, "end": 219, "text": "(Eisenstein et al., 2011)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "8" }, { "text": "In the above problems, one learns a set of O(K) or O(K^2) specialized transducers that relate Latin to Italian, singular to plural, etc. We instead use one global mutation model that applies to all names-but see footnote 14 on incorporating specialized transductions (Latin to Italian) within our mutation model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We cannot currently hypothesize unobserved intermediate forms, e.g., common ancestors of similar strings. See \u00a76.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Straightforward extensions are to allow a variable mutation rate \u00b5(y_i) that depends on properties of y_i, and to allow w k+1 to depend on known properties of e_i. See footnote 14 for further discussion of enriched tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The very fact that x has been frequently observed demonstrates that it has often chosen to stop mutating. This implies that it is likely to choose \"stop\" again rather than mutate into y. 5 This is not the tree shown in Figure 1, whose vertices are types rather than tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This somewhat resembles the traditional affine gap penalty in computational biology (Gusfield, 1997), which makes deletions or insertions cheaper if they are consecutive. 
We instead make consecutive edits cheaper regardless of the edit type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In general this is an overestimate for each phylogeny.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For instance, on a dataset of approximately 6000 distinct names, pruning reduced the number of outgoing edges at each vertex to fewer than 100. 10 Notice that the N observed tokens would be approximately exchangeable in this setting: they are unlikely to depend on one another when N ≪ U, and hence their order no longer matters much. In effect, generating the U hidden tokens constructs a rich distribution (analogous to a sample from the Dirichlet process) from which the N observed tokens are then sampled IID. 11 Using a Wikipedia dump from February 2, 2011.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "LDC Catalog No. LDC2003T05.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In preliminary experiments, we did not find the results to be sensitive to these parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Transformation process priors", "authors": [ { "first": "Nicholas", "middle": [], "last": "Andrews", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2011, "venue": "NIPS 2011 Workshop on Bayesian Nonparametrics: Hope or Hype?", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicholas Andrews and Jason Eisner. 2011. Transformation process priors. In NIPS 2011 Workshop on Bayesian Nonparametrics: Hope or Hype?, Sierra Nevada, Spain, December.
Extended abstract (3 pages).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Algorithms for scoring coreference chains", "authors": [ { "first": "A", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "B", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Bagga and B. Baldwin. 1998. Algorithms for scoring coreference chains. In LREC.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning to paraphrase: an unsupervised approach using multiple-sequence alignment", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2003, "venue": "Proc. of NAACL-HLT", "volume": "", "issue": "", "pages": "16--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: an unsupervised approach using multiple-sequence alignment. In Proc. of NAACL-HLT, pages 16-23, Stroudsburg, PA, USA.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Chain letters and evolutionary histories", "authors": [ { "first": "H", "middle": [], "last": "Bennett", "suffix": "" }, { "first": "M", "middle": [], "last": "Li", "suffix": "" }, { "first": "B", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2003, "venue": "Scientific American", "volume": "288", "issue": "3", "pages": "76--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Bennett, M. Li, and B. Ma. 2003. Chain letters and evolutionary histories. Scientific American, 288(3):76-81, June. More mathematical version available at http://www.cs.uwaterloo.ca/~mli/chain.html.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning bilingual lexicons using the visual similarity of labeled web images", "authors": [ { "first": "Shane", "middle": [], "last": "Bergsma", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" } ], "year": 2011, "venue": "Proc. of IJCAI", "volume": "", "issue": "", "pages": "1764--1769", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shane Bergsma and Benjamin Van Durme. 2011. Learning bilingual lexicons using the visual similarity of labeled web images. In Proc. of IJCAI, pages 1764-1769, Barcelona, Spain.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Adaptive duplicate detection using learnable string similarity measures", "authors": [ { "first": "Mikhail", "middle": [], "last": "Bilenko", "suffix": "" }, { "first": "Raymond", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2003, "venue": "Proc. of ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '03", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikhail Bilenko and Raymond J. Mooney. 2003. Adaptive duplicate detection using learnable string similarity measures. In Proc. of ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '03, pages 39-48, New York, NY, USA.
ACM.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A probabilistic approach to language change", "authors": [ { "first": "Alexandre", "middle": [], "last": "Bouchard-C\u00f4t\u00e9", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Griffiths", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2008, "venue": "Proc. of NIPS", "volume": "", "issue": "", "pages": "169--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandre Bouchard-C\u00f4t\u00e9, Percy Liang, Thomas Griffiths, and Dan Klein. 2008. A probabilistic approach to language change. In Proc. of NIPS, pages 169-176.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Large-scale named entity disambiguation based on Wikipedia data", "authors": [ { "first": "S", "middle": [], "last": "Cucerzan", "suffix": "" } ], "year": 2007, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data. In Proc. of EMNLP.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Canonicalization of database records using adaptive similarity measures", "authors": [ { "first": "Aron", "middle": [], "last": "Culotta", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Wick", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Marzilli", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2007, "venue": "Proc. of ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '07", "volume": "", "issue": "", "pages": "201--209", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aron Culotta, Michael Wick, Robert Hall, Matthew Marzilli, and Andrew McCallum. 2007. Canonicalization of database records using adaptive similarity measures. In Proc. of ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '07, pages 201-209.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "A", "middle": [ "P" ], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [ "M" ], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [ "B" ], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society. Series B (Methodological)", "volume": "39", "issue": "1", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maxi- mum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Method- ological), 39(1):1-38.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Image phylogeny by minimal spanning trees", "authors": [ { "first": "Z", "middle": [], "last": "Dias", "suffix": "" }, { "first": "A", "middle": [], "last": "Rocha", "suffix": "" }, { "first": "S", "middle": [], "last": "Goldenstein", "suffix": "" } ], "year": 2012, "venue": "IEEE Trans. on Information Forensics and Security", "volume": "7", "issue": "2", "pages": "774--788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Dias, A. Rocha, and S. Goldenstein. 2012. Image phy- logeny by minimal spanning trees. IEEE Trans. 
on Information Forensics and Security, 7(2):774-788, April.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Discovering morphological paradigms from plain text using a Dirichlet process mixture model", "authors": [ { "first": "Markus", "middle": [], "last": "Dreyer", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2011, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "616--627", "other_ids": {}, "num": null, "urls": [], "raw_text": "Markus Dreyer and Jason Eisner. 2011. Discovering morphological paradigms from plain text using a Dirichlet process mixture model. In Proc. of EMNLP, pages 616-627. Supplementary material (9 pages) also available.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Latent-variable modeling of string transductions with finite-state methods", "authors": [ { "first": "Markus", "middle": [], "last": "Dreyer", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2008, "venue": "Proc. of EMNLP", "volume": "", "issue": "", "pages": "1080--1089", "other_ids": {}, "num": null, "urls": [], "raw_text": "Markus Dreyer, Jason Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In Proc. of EMNLP, pages 1080-1089, Honolulu, Hawaii, October. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Structured databases of named entities from Bayesian nonparametrics", "authors": [ { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" }, { "first": "Tae", "middle": [], "last": "Yano", "suffix": "" }, { "first": "William", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Xing", "suffix": "" } ], "year": 2011, "venue": "Proc. of the First workshop on Unsupervised Learning in NLP", "volume": "", "issue": "", "pages": "2--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Eisenstein, Tae Yano, William Cohen, Noah Smith, and Eric Xing. 2011. Structured databases of named entities from Bayesian nonparametrics. In Proc. of the First workshop on Unsupervised Learning in NLP, pages 2-12, Edinburgh, Scotland, July. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Transformational priors over grammars", "authors": [ { "first": "Jason", "middle": [], "last": "Eisner", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Eisner. 2002. Transformational priors over grammars. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Philadelphia, July.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology", "authors": [ { "first": "Dan", "middle": [], "last": "Gusfield", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Gusfield. 1997. Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology.
Cambridge University Press.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning bilingual lexicons from monolingual corpora", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2008, "venue": "Proc. of ACL-08: HLT", "volume": "", "issue": "", "pages": "771--779", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proc. of ACL-08: HLT, pages 771-779.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Finding cognates using phylogenies", "authors": [ { "first": "David", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Hall and Dan Klein. 2010. Finding cognates using phylogenies. In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Unsupervised deduplication using cross-field dependencies", "authors": [ { "first": "Rob", "middle": [], "last": "Hall", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2008, "venue": "Proc. of the ACM SIGKDD International Conference On Knowledge Discovery and Data Mining, KDD '08", "volume": "", "issue": "", "pages": "310--317", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rob Hall, Charles Sutton, and Andrew McCallum. 2008. Unsupervised deduplication using cross-field dependencies. In Proc. of the ACM SIGKDD International Conference On Knowledge Discovery and Data Mining, KDD '08, pages 310-317.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Malware phylogeny generation using permutations of code", "authors": [ { "first": "Md. Enamul", "middle": [], "last": "Karim", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Walenstein", "suffix": "" }, { "first": "Arun", "middle": [], "last": "Lakhotia", "suffix": "" }, { "first": "Laxmi", "middle": [], "last": "Parida", "suffix": "" } ], "year": 2005, "venue": "Journal in Computer Virology", "volume": "1", "issue": "1-2", "pages": "13--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Md. Enamul Karim, Andrew Walenstein, Arun Lakhotia, and Laxmi Parida. 2005. Malware phylogeny generation using permutations of code. Journal in Computer Virology, 1(1-2):13-23.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Weakly supervised named entity transliteration and discovery from multilingual comparable corpora", "authors": [ { "first": "Alexandre", "middle": [], "last": "Klementiev", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2006, "venue": "Proc. of COLING-ACL", "volume": "", "issue": "", "pages": "817--824", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandre Klementiev and Dan Roth. 2006. Weakly supervised named entity transliteration and discovery from multilingual comparable corpora. In Proc.
of COLING-ACL, pages 817-824.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Machine transliteration", "authors": [ { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "J", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "", "pages": "599--612", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Knight and J. Graehl. 1998. Machine transliteration. Computational Linguistics, 24:599-612.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Structured prediction models via the matrix-tree theorem", "authors": [ { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Globerson", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2007, "venue": "Proc. of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "141--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured prediction models via the matrix-tree theorem. In Proc. of EMNLP-CoNLL, pages 141-150.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Using learned conditional distributions as edit distance", "authors": [ { "first": "Jose", "middle": [], "last": "Oncina", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Sebban", "suffix": "" } ], "year": 2006, "venue": "Proc. of the 2006 Joint IAPR International Conference on Structural, Syntactic, and Statistical Pattern Recognition, SSPR'06/SPR'06", "volume": "", "issue": "", "pages": "403--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jose Oncina and Marc Sebban. 2006. Using learned conditional distributions as edit distance. In Proc. of the 2006 Joint IAPR International Conference on Structural, Syntactic, and Statistical Pattern Recognition, SSPR'06/SPR'06, pages 403-411.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Simultaneous multilingual search for translingual information retrieval", "authors": [ { "first": "Kristen", "middle": [], "last": "Parton", "suffix": "" }, { "first": "Kathleen", "middle": [ "R" ], "last": "McKeown", "suffix": "" }, { "first": "James", "middle": [], "last": "Allan", "suffix": "" }, { "first": "Enrique", "middle": [], "last": "Henestroza", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the ACM conference on Information and Knowledge Management, CIKM '08", "volume": "", "issue": "", "pages": "719--728", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristen Parton, Kathleen R. McKeown, James Allan, and Enrique Henestroza. 2008. Simultaneous multilingual search for translingual information retrieval. In Proceedings of the ACM conference on Information and Knowledge Management, CIKM '08, pages 719-728.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Learning string edit distance", "authors": [ { "first": "Eric", "middle": [ "Sven" ], "last": "Ristad", "suffix": "" }, { "first": "Peter", "middle": [ "N" ], "last": "Yianilos", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Sven Ristad and Peter N. Yianilos. 1996. Learning string edit distance.
Technical Report CS-TR-532-96, Princeton University, Department of Computer Science.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Learning string edit distance", "authors": [ { "first": "Eric", "middle": [ "Sven" ], "last": "Ristad", "suffix": "" }, { "first": "Peter", "middle": [ "N" ], "last": "Yianilos", "suffix": "" } ], "year": 1998, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "20", "issue": "5", "pages": "522--532", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Sven Ristad and Peter N. Yianilos. 1998. Learning string edit distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(5):522-532, May.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "An algorithm for unsupervised transliteration mining with an application to word alignment", "authors": [ { "first": "Hassan", "middle": [], "last": "Sajjad", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 2011, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "430--439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hassan Sajjad, Alexander Fraser, and Helmut Schmid. 2011. An algorithm for unsupervised transliteration mining with an application to word alignment. In Proc. of ACL, pages 430-439.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Inducing translation lexicons via diverse similarity measures and bridge languages", "authors": [ { "first": "Charles", "middle": [], "last": "Schafer", "suffix": "" }, { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2002, "venue": "Proc. of CONLL", "volume": "", "issue": "", "pages": "146--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Schafer and David Yarowsky. 2002. Inducing translation lexicons via diverse similarity measures and bridge languages. In Proc. of CONLL, pages 146-152.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Novel probabilistic finite-state transducers for cognate and transliteration modeling", "authors": [ { "first": "Charles", "middle": [], "last": "Schafer", "suffix": "" } ], "year": 2006, "venue": "7th Biennial Conference of the Association for Machine Translation in the Americas (AMTA)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Schafer. 2006a. Novel probabilistic finite-state transducers for cognate and transliteration modeling. In 7th Biennial Conference of the Association for Machine Translation in the Americas (AMTA).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Translation Discovery Using Diverse Similarity Measures", "authors": [ { "first": "Charles", "middle": [], "last": "Schafer", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Schafer. 2006b. Translation Discovery Using Diverse Similarity Measures. Ph.D. thesis, Johns Hopkins University.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Probabilistic models of nonprojective dependency trees", "authors": [ { "first": "David", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2007, "venue": "Proc.
of EMNLP-CoNLL", "volume": "", "issue": "", "pages": "132--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "David A. Smith and Noah A. Smith. 2007. Probabilistic mod- els of nonprojective dependency trees. In Proc. of EMNLP- CoNLL, pages 132-140.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A survey of recursive trees", "authors": [ { "first": "R", "middle": [ "T" ], "last": "Smythe", "suffix": "" }, { "first": "H", "middle": [ "M" ], "last": "Mahmoud", "suffix": "" } ], "year": 1995, "venue": "Theory of Probability and Mathematical Statistics", "volume": "51", "issue": "", "pages": "1--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. T. Smythe and H. M. Mahmoud. 1995. A survey of recur- sive trees. Theory of Probability and Mathematical Statistics, 51(1-27).", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A statistical model for lost language decipherment", "authors": [ { "first": "Benjamin", "middle": [], "last": "Snyder", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2010, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "1048--1057", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Snyder, Regina Barzilay, and Kevin Knight. 2010. A statistical model for lost language decipherment. In Proc. of ACL, pages 1048-1057.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Mega5: Molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods", "authors": [ { "first": "Koichiro", "middle": [], "last": "Tamura", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Peterson", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Peterson", "suffix": "" }, { "first": "Glen", "middle": [], "last": "Stecher", "suffix": "" }, { "first": "Masatoshi", "middle": [], "last": "Nei", "suffix": "" }, { "first": "Sudhir", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2011, "venue": "Molecular Biology and Evolution", "volume": "28", "issue": "10", "pages": "2731--2739", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koichiro Tamura, Daniel Peterson, Nicholas Peterson, Glen Stecher, Masatoshi Nei, and Sudhir Kumar. 2011. Mega5: Molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Molecular Biology and Evolution, 28(10):2731- 2739.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Finding optimum branchings. Networks", "authors": [ { "first": "", "middle": [], "last": "R E Tarjan", "suffix": "" } ], "year": 1977, "venue": "", "volume": "7", "issue": "", "pages": "25--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "R E Tarjan. 1977. Finding optimum branchings. Networks, 7(1):25-35.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Graph Theory", "authors": [ { "first": "", "middle": [], "last": "Tutte", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tutte. 1984. Graph Theory. Addison-Wesley.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "The state of record linkage and current research problems", "authors": [ { "first": "William", "middle": [ "E" ], "last": "Winkler", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William E. 
Winkler. 1999. The state of record linkage and current research problems. Technical report, Statistical Research Division, U.S. Census Bureau.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "A portion of a spanning tree found by our model.", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Sample alias lists scraped from Wikipedia. Note that only partial alias lists are shown for space reasons.", "type_str": "figure", "uris": null }, "FIGREF3": { "num": null, "text": "Learning curves for different initializations of the transducer parameters. Above, \"sup=100\" (for instance) means that 100 entities were used as training data to initialize the transducer parameters (pairing each title with each of its aliases for those Wikipedia entities).", "type_str": "figure", "uris": null } } } }