{ "paper_id": "N12-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:05:03.087493Z" }, "title": "Entity Clustering Across Languages", "authors": [ { "first": "Spence", "middle": [], "last": "Green", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "spenceg@stanford.edu" }, { "first": "Nicholas", "middle": [], "last": "Andrews", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": {} }, "email": "" }, { "first": "Matthew", "middle": [ "R" ], "last": "Gormley", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": {} }, "email": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "", "affiliation": { "laboratory": "", "institution": "Johns Hopkins University", "location": {} }, "email": "mdredze@cs.jhu.edu" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "manning@stanford.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Standard entity clustering systems commonly rely on mention (string) matching, syntactic features, and linguistic resources like English WordNet. When co-referent text mentions appear in different languages, these techniques cannot be easily applied. Consequently, we develop new methods for clustering text mentions across documents and languages simultaneously, producing cross-lingual entity clusters. Our approach extends standard clustering algorithms with cross-lingual mention and context similarity measures. Crucially, we do not assume a pre-existing entity list (knowledge base), so entity characteristics are unknown. On an Arabic-English corpus that contains seven different text genres, our best model yields a 24.3% F1 gain over the baseline.", "pdf_parse": { "paper_id": "N12-1007", "_pdf_hash": "", "abstract": [ { "text": "Standard entity clustering systems commonly rely on mention (string) matching, syntactic features, and linguistic resources like English WordNet. When co-referent text mentions appear in different languages, these techniques cannot be easily applied. Consequently, we develop new methods for clustering text mentions across documents and languages simultaneously, producing cross-lingual entity clusters. Our approach extends standard clustering algorithms with cross-lingual mention and context similarity measures. Crucially, we do not assume a pre-existing entity list (knowledge base), so entity characteristics are unknown. On an Arabic-English corpus that contains seven different text genres, our best model yields a 24.3% F1 gain over the baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper introduces techniques for clustering coreferent text mentions across documents and languages. On the web today, a breaking news item may instantly result in mentions to a real-world entity in multiple text formats: news articles, blog posts, tweets, etc. Much NLP work has focused on model adaptation to these diverse text genres. However, the diversity of languages in which the mentions appear is a more significant challenge. This was particularly evident during the 2011 popular uprisings in the Arab world, in which electronic media played a prominent role. 
A key issue for the outside world was the aggregation of information that appeared simultaneously in English, French, and various Arabic dialects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To our knowledge, we are the first to consider clustering entity mentions across languages without a priori knowledge of the quantity or types of real-world entities (a knowledge base). The cross-lingual setting introduces several challenges. First, we cannot assume a prototypical name format. For example, the Anglo-centric first/middle/last prototype used in previous name modeling work (cf. Charniak, 2001) does not apply to Arabic names like Abdullah ibn Abd Al-Aziz Al-Saud or Chinese names like Hu Jintao (referred to as Mr. Hu, not Mr. Jintao). Second, organization names often require both transliteration and translation. For example, the Arabic name for 'General Motors Corp' contains transliterations of 'General Motors', but a translation of 'Corporation'. Our models are organized as a pipeline. First, for each document, we perform standard mention detection and coreference resolution. Then, we use pairwise cross-lingual similarity models to measure both mention and context similarity. Finally, we cluster the mentions based on similarity.", "cite_spans": [ { "start": 390, "end": 410, "text": "(cf. (Charniak, 2001", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work makes the following contributions: (1) introduction of the task, (2) novel models for cross-lingual entity clustering of person and organization entities, (3) cross-lingual annotation of the NIST Automatic Content Extraction (ACE) 2008 Arabic-English evaluation set, and (4) experimental results using both gold and automatic within-document processing. We will release our software and annotations to support future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Consider the toy corpus in Fig. 1. The English documents contain mentions of two people: Steven Paul Jobs and Mark Elliot Zuckerberg. Of course, the surface realization of Mr. Jobs' last name in English is also an ordinary nominal, hence the ambiguous mention string (absent context) in the second document. The Arabic document introduces an organization entity (Apple Inc.) along with proper and pronominal references to Mr. Jobs. Finally, the French document refers to Mr. Jobs by the honorific 'Monsieur,' and to Apple without its corporate designation.", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 33, "text": "Fig. 
1", "ref_id": null } ], "eq_spans": [], "section": "Task Description via a Simple Example", "sec_num": "1.1" }, { "text": "Apple without its corporate designation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description via a Simple Example", "sec_num": "1.1" }, { "text": "Our goal is to automatically produce the crosslingual entity clusters E 1 (Mark Elliot Zuckerberg), E 2 (Apple Inc.), and E 3 (Steven Paul Jobs). Both the true number and characteristics of these entities are unobserved. Our models require two pre-processing steps: mention detection and within-document coreference/anaphora resolution, shown in Fig. 1 by the text boxes and intra-document links, respectively. For example, in doc3, a within-document coreference system would pre-link joobz 'Jobs' with the masculine pronoun h 'his'. In addition, the mention detector determines that the surface form \"Jobs\" in doc2 is not an entity reference. For this within-document pre-processing we use Serif (Ramshaw et al., 2011 ). 1 Our models measure cross-lingual similarity of the coreference chains to make clustering decisions (\u2022 in Fig. 1 ). The similarity models (indicated by the = and = operators in Fig. 1 ) consider both mention string and context similarity ( \u00a72). We use the mention similarities as hard constraints, and the context similarities as soft constraints. In this work, we investigate two standard constrained clustering algorithms ( \u00a73). Our methods can be used to extend existing systems for mono-lingual entity clustering (also known as \"cross-document coreference resolution\") to the cross-lingual setting.", "cite_spans": [ { "start": 697, "end": 718, "text": "(Ramshaw et al., 2011", "ref_id": "BIBREF40" } ], "ref_spans": [ { "start": 346, "end": 352, "text": "Fig. 1", "ref_id": null }, { "start": 829, "end": 835, "text": "Fig. 1", "ref_id": null }, { "start": 900, "end": 906, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Task Description via a Simple Example", "sec_num": "1.1" }, { "text": "1 Serif is a commercial system that assumes each document contains only one language. Currently, there are no publicly available within-document coreference systems for Arabic and many other languages. To remedy this problem, the CoNNL-2012 shared task aims to develop multilingual coreference systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Description via a Simple Example", "sec_num": "1.1" }, { "text": "Our goal is to create cross-lingual sets of co-referent mentions to real-world entities (people, places, organizations, etc.) . In this paper, we adopt the following notation. Let M be a set of distinct text mentions in a collection of documents; C is a partitioning of M into document-level sets of co-referent mentions (called coreference chains); E is a partitioning of C into sets of co-referent chains (called entities). Let i, j be nonnegative integers less than or equal to |M | and a, b be non-negative integers less than or equal to |C|. Our experiments use a separate within-document coreference system to create C, which is fixed. 
We will learn E, which has size no greater than |C| since the set of mono-lingual chains is itself the largest valid partitioning.", "cite_spans": [ { "start": 88, "end": 125, "text": "(people, places, organizations, etc.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Mention and Context Similarity", "sec_num": "2" }, { "text": "We define accessor functions to access properties of mentions and chains. For any mention m_i, define the following functions: lang(m_i) is the language; doc(m_i) is the document containing m_i; type(m_i) is the semantic type, which is assigned by the within-document coreference system. We also extract a set of mention contexts S, which are the sentences containing each mention (i.e., |S| = |M|).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mention and Context Similarity", "sec_num": "2" }, { "text": "We learn the partition E by considering mention and context similarity, which are measured with separate component models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mention and Context Similarity", "sec_num": "2" }, { "text": "We use separate methods for within- and cross-language mention similarity. The pairwise similarity of any two mentions m_i and m_j is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mention Similarity", "sec_num": "2.1" }, { "text": "[Table 1: Phonetic mapping rules. The Arabic source letters did not survive extraction; each maps to one of the Latin codes b, t, th, j, h, kh, d, th, r, z, s, sh, s, d, t, th, a, g, f, q, k, l, m, n, h, a, w, a, ah, or to the empty string \u2205. English rules: k \u2192 c, p \u2192 b, x \u2192 ks, and the vowels e, i, o, u \u2192 \u2205.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mention Similarity", "sec_num": "2.1" }, { "text": "sim(m_i, m_j) = { jaro-winkler(m_i, m_j) if lang(m_i) = lang(m_j); maxent(m_i, m_j) otherwise }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mention Similarity", "sec_num": "2.1" }, { "text": "Jaro-Winkler Distance (within-language) If lang(m_i) = lang(m_j), we use the Jaro-Winkler edit distance (Porter and Winkler, 1997). Jaro-Winkler rewards matching prefixes, the empirical justification being that less variation typically occurs at the beginning of names. 2 The metric produces a score in the range [0,1], where 0 indicates equality.", "cite_spans": [ { "start": 106, "end": 132, "text": "(Porter and Winkler, 1997)", "ref_id": "BIBREF38" }, { "start": 273, "end": 274, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Mention Similarity", "sec_num": "2.1" }, { "text": "When lang(m_i) \u2260 lang(m_j), the two mentions might be in different writing systems, and edit distance calculations no longer apply directly. One solution would be full-blown transliteration (Knight and Graehl, 1998), followed by application of Jaro-Winkler. However, transliteration systems are complex and require significant training resources. We find that a simpler, low-resource approach works well in practice. First, we deterministically map both languages to a common phonetic representation (Tbl. 1). 
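As an illustration of the within-language measure and the English-side rules of Tbl. 1, here is a self-contained sketch (our own stand-in implementation, not the system's code; the Arabic rules are omitted because their source glyphs were lost):

```python
import re

def jaro(s1, s2):
    """Jaro similarity in [0,1]; 1.0 means identical."""
    if s1 == s2:
        return 1.0
    n1, n2 = len(s1), len(s2)
    if n1 == 0 or n2 == 0:
        return 0.0
    window = max(max(n1, n2) // 2 - 1, 0)
    used1, used2 = [False] * n1, [False] * n2
    matches = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(n2, i + window + 1)):
            if not used2[j] and s2[j] == c:
                used1[i] = used2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    transpositions, j = 0, 0
    for i in range(n1):
        if used1[i]:
            while not used2[j]:
                j += 1
            if s1[i] != s2[j]:
                transpositions += 1
            j += 1
    t = transpositions // 2
    return (matches / n1 + matches / n2 + (matches - t) / matches) / 3.0

def jaro_winkler_distance(s1, s2, p=0.1, max_prefix=4):
    """Distance form used above: 0 indicates equality."""
    sim = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == max_prefix:
            break
        prefix += 1
    return 1.0 - (sim + prefix * p * (1.0 - sim))

def mention_distance(m1, m2):
    # Footnote 2: sort the tokens of multi-token names before scoring.
    key = lambda m: " ".join(sorted(m.lower().split()))
    return jaro_winkler_distance(key(m1), key(m2))

def english_phonetic(name):
    """English rules of Tbl. 1, applied once: k->c, p->b, x->ks, drop e,i,o,u."""
    s = name.lower().replace("k", "c").replace("p", "b").replace("x", "ks")
    return re.sub(r"[eiou]", "", s)
```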
3 Next, we align the mention pairs with the Hungarian algorithm, which produces a word-to-word alignment A_{m_i,m_j}. 4 Finally, we build a simple binary Maxent classifier p(y|m_i, m_j; \u03bb) that extracts features from the aligned mentions (Tbl. 2). We learn the parameters \u03bb using a quasi-Newton procedure with L1 (lasso) regularization (Andrew and Gao, 2007).", "cite_spans": [ { "start": 193, "end": 218, "text": "(Knight and Graehl, 1998)", "ref_id": "BIBREF26" }, { "start": 112, "end": 113, "text": "4", "ref_id": null }, { "start": 333, "end": 355, "text": "(Andrew and Gao, 2007)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Maxent model (cross-language)", "sec_num": null }, { "text": "[Table 2: Features for the cross-language Maxent classifier (reconstructed from a garbled extraction). Bigram features: active for each bigram in cbigrams(m_i,u) \u2229 cbigrams(m_j,v); B-D-mi: active for each bigram in cbigrams(m_i) \u2212 cbigrams(m_j); B-D-mj: active for each bigram in cbigrams(m_j) \u2212 cbigrams(m_i); B-L-D: value of abs(size(cbigrams(m_i) \u2212 cbigrams(m_j))). Edit-distance features: B-E-D: count of token pairs with Lev(m_i,u, m_j,v) > 3.0; T-E-D: sum of aligned token edit distances. Length features: L: active for one of len(m_i) > len(m_j), len(m_i) < len(m_j), or len(m_i) = len(m_j); L-D: abs(len(m_i) \u2212 len(m_j)). Singleton features: S: active if len(m_i) = 1; S-P: active if len(m_i) = len(m_j) = 1.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maxent model (cross-language)", "sec_num": null }, { "text": "Mention strings alone are not always sufficient for disambiguation. Consider again the simple example in Fig. 1. Both doc3 and doc4 reference \"Steve Jobs\" and \"Apple\" in the same contexts. Context co-occurrence and/or similarity can thus disambiguate these two entities from other entities with similar references (e.g., \"Steve Jones\" or \"Apple Corps\"). As with the mention strings, the contexts may originate in different writing systems. We consider both high- and low-resource approaches for mapping contexts to a common representation.", "cite_spans": [], "ref_spans": [ { "start": 105, "end": 111, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Context Mapping and Similarity", "sec_num": "2.2" }, { "text": "Machine Translation (MT) For the high-resource setting, if lang(m_i) \u2260 English, then we translate both m_i and its context s_i to English with an MT system. We use Phrasal (Cer et al., 2010), a phrase-based system which, like most public MT systems, lacks a transliteration module. We believe that this approach yields the most accurate context mapping for high-resource language pairs (like English-Arabic).", "cite_spans": [ { "start": 173, "end": 191, "text": "(Cer et al., 2010)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Context Mapping and Similarity", "sec_num": "2.2" }, { "text": "The polylingual topic model (PLTM) (Mimno et al., 2009) is a generative process in which document tuples (groups of topically-similar documents) share a topic distribution. The tuples need not be sentence-aligned, so training data is easier to obtain. For example, one document tuple might be the set of Wikipedia articles (in all languages) for Steve Jobs. 
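Concretely, a document tuple is just a small group of comparable documents keyed by language; a minimal sketch of the training input (the names and tokens are illustrative only):

```python
# One tuple = topically-aligned documents, one per language, as token lists.
# A hypothetical two-tuple corpus D; real Arabic tokens are omitted here.
D = [
    {"en": ["steve", "jobs", "founded", "apple"],
     "ar": ["<ar-token-1>", "<ar-token-2>"]},
    {"en": ["mark", "zuckerberg", "founded", "facebook"],
     "ar": ["<ar-token-3>", "<ar-token-4>"]},
]
# The PLTM ties one tuple-level topic distribution theta_t across all
# languages in a tuple, so no sentence alignment is required.
```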
Let D be a set of document tuples, where there is one document in each tuple for each of L languages.", "cite_spans": [ { "start": 35, "end": 54, "text": "(Mimno et al., 2009", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Polylingual Topic Model (PLTM)", "sec_num": null }, { "text": "Each language has vocabulary V_l and each document d^l_t has N^l_t tokens. We specify a fixed-size set of topics K. The PLTM generates the document tuples as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polylingual Topic Model (PLTM)", "sec_num": null }, { "text": "\u03b8_t \u223c Dir(\u03b1_K) [cross-lingual tuple-topic prior]; \u03c6^l_k \u223c Dir(\u03b2_{V_l}) [word-topic prior]; for each token w^l_{t,n} with n \u2208 {1, . . . , N^l_t}: z_{t,n} \u223c Mult(\u03b8_t), w^l_{t,n} \u223c Mult(\u03c6^l_{z_{t,n}})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polylingual Topic Model (PLTM)", "sec_num": null }, { "text": "For cross-lingual context mapping, we infer the 1-best topic assignments for each token in all S mention contexts. This technique reduces the effective vocabulary size to K for all l. Moreover, all languages have a common vocabulary: the set of K topic indices. Since the PLTM is not a contribution of this paper, we refer the interested reader to (Mimno et al., 2009) for more details.", "cite_spans": [ { "start": 418, "end": 438, "text": "(Mimno et al., 2009)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Polylingual Topic Model (PLTM)", "sec_num": null }, { "text": "After mapping each mention context to a common representation, we measure context similarity based on the choice of clustering algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Polylingual Topic Model (PLTM)", "sec_num": null }, { "text": "We incorporate the mention and context similarity measures into a clustering framework. We consider two algorithms. The first is hierarchical agglomerative clustering (HAC), with which we assume basic familiarity (Manning et al., 2008). A shortcoming of HAC is that a stop threshold must be tuned. To avoid this requirement, we also consider non-parametric probabilistic clustering in the form of a Dirichlet process mixture model (DPMM) (Antoniak, 1974).", "cite_spans": [ { "start": 214, "end": 236, "text": "(Manning et al., 2008)", "ref_id": "BIBREF31" }, { "start": 416, "end": 432, "text": "(Antoniak, 1974)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Clustering Algorithms", "sec_num": "3" }, { "text": "Both clustering algorithms can be modified to accommodate pairwise constraints. We have observed better results by encoding mention similarity as a hard constraint. Context similarity is thus the cluster distance measure. 
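One way to make this hard-constraint logic concrete is the following sketch (our own illustration; the chain fields and the mention_match test are assumptions, the thresholds are introduced in the next paragraphs, and the second cannot-link rule is reconstructed from the discussion of representative mentions below):

```python
def repr_mention(chain):
    # "First mention" heuristic: the earliest mention in the document is
    # typically the most complete form of the name.
    return min(chain.mentions, key=lambda m: m.offset)  # offset: assumed field

def cannot_link(chain_a, chain_b, mention_match):
    """True if two chains may never share an entity cluster.
    mention_match(m1, m2) -> bool is the hard mention-similarity test
    (thresholded Jaro-Winkler within a language, Maxent across languages)."""
    # Rule 1: chains from the same document were already kept separate by the
    # within-document coreference system, so they denote distinct entities.
    if chain_a.doc_id == chain_b.doc_id:
        return True
    # Rule 2: the representative mentions must pass the similarity test.
    return not mention_match(repr_mention(chain_a), repr_mention(chain_b))
```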
5 To turn the Jaro-Winkler distance into a hard boolean constraint, we tuned a threshold \u03b7 on held-out data, i.e., jaro-winkler(m_i, m_j) \u2264 \u03b7 \u21d2 m_i = m_j. Likewise, the Maxent model is a binary classifier, so p(y = 1|m_i, m_j; \u03bb) > 0.5 \u21d2 m_i = m_j.", "cite_spans": [ { "start": 222, "end": 223, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Clustering Algorithms", "sec_num": "3" }, { "text": "In both clustering algorithms, any two chains C_a and C_b cannot share the same cluster assignment if: 1. Document origin: doc(C_a) = doc(C_b); or 2. Mention dissimilarity: the hard mention-similarity test fails on the representative mentions, repr(C_a) \u2260 repr(C_b). The deterministic accessor function repr(C_a) returns the representative mention of a chain. The heuristic we used was \"first mention\": the function returns the earliest mention that appears in the associated document. In many languages, the first mention is typically more complete than later mentions. This heuristic also makes our system less sensitive to within-document coreference errors. 6 The representative mention only has special status for mention similarity: context similarity considers all mention contexts.", "cite_spans": [ { "start": 498, "end": 499, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Clustering Algorithms", "sec_num": "3" }, { "text": "HAC iteratively merges the \"nearest\" clusters according to context similarity. In our system, each cluster context is a bag of words W formed from the contexts of all coreference chains in that cluster. For each word in W we estimate a unigram Entity Language Model (ELM) (Raghavan et al., 2004):", "cite_spans": [ { "start": 272, "end": 295, "text": "(Raghavan et al., 2004)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Constrained Hierarchical Clustering", "sec_num": "3.1" }, { "text": "P(w) = (count_W(w) + \u03c1 P_V(w)) / (\u2211_{w'} count_W(w') + \u03c1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Hierarchical Clustering", "sec_num": "3.1" }, { "text": "where P_V(w) is the unigram probability in all contexts in the corpus 7 and \u03c1 is a smoothing parameter. For any two entity clusters E_a and E_b, the distance between P_{E_a} and P_{E_b} is given by a metric based on the Jensen-Shannon Divergence (JSD) (Endres and Schindelin, 2003):", "cite_spans": [ { "start": 235, "end": 264, "text": "(Endres and Schindelin, 2003)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Constrained Hierarchical Clustering", "sec_num": "3.1" }, { "text": "dist(P_{E_a}, P_{E_b}) = 2 \u2022 JSD(P_{E_a} || P_{E_b}) = KL(P_{E_a} || M) + KL(P_{E_b} || M)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Hierarchical Clustering", "sec_num": "3.1" }, { "text": "where KL(P_{E_a} || M) is the Kullback-Leibler divergence and M = (P_{E_a} + P_{E_b})/2. We initialize HAC to E = C, i.e., the initial clustering solution is just the set of all coreference chains. Then we remove all links in the HAC proximity matrix that violate pairwise cannot-link constraints. 
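A compact sketch of the smoothed ELM and the JSD-based cluster distance defined above (our own illustration, not the system's code; both distributions are assumed to share one vocabulary):

```python
import math
from collections import Counter

def elm(cluster_words, corpus_unigram, rho=1.0):
    """Smoothed unigram Entity Language Model over a cluster's bag of words.
    corpus_unigram maps each vocabulary word w to P_V(w)."""
    counts = Counter(cluster_words)
    total = sum(counts.values())
    return {w: (counts[w] + rho * p_v) / (total + rho)
            for w, p_v in corpus_unigram.items()}

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q)."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p if p[w] > 0)

def jsd_distance(p_a, p_b):
    """dist = 2 * JSD = KL(P_a || M) + KL(P_b || M), with M the midpoint."""
    m = {w: 0.5 * (p_a[w] + p_b[w]) for w in p_a}
    return kl(p_a, m) + kl(p_b, m)
```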
During clustering, we do not merge E_a and E_b if any pair of chains violates a cannot-link constraint. This procedure propagates the cannot-link constraints (Klein et al., 2002). To output E, we stop clustering when the minimum JSD exceeds a stop threshold \u03b3, which is tuned on a development set.", "cite_spans": [ { "start": 451, "end": 471, "text": "(Klein et al., 2002)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Constrained Hierarchical Clustering", "sec_num": "3.1" }, { "text": "Instead of tuning a parameter like \u03b3, it would be preferable to let the data dictate the number of entity clusters. We thus consider a non-parametric Bayesian mixture model where the mixtures are multinomial distributions over the entity contexts S. Specifically, we consider a DPMM, which automatically infers the number of mixtures. Each C_a has an associated mixture \u03b8_a:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Dirichlet Process Mixture Model (DPMM)", "sec_num": "3.2" }, { "text": "C_a | \u03b8_a \u223c Mult(\u03b8_a); \u03b8_a | G \u223c G; G | \u03b1, G_0 \u223c DP(\u03b1, G_0); \u03b1 \u223c Gamma(1, 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Dirichlet Process Mixture Model (DPMM)", "sec_num": "3.2" }, { "text": "where \u03b1 is the concentration parameter of the DP prior and G_0 is the base distribution with support V. For our experiments, we set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Dirichlet Process Mixture Model (DPMM)", "sec_num": "3.2" }, { "text": "G_0 = Dir(\u03c0_1, . . . , \u03c0_V), where \u03c0_i = P_V(w_i).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constrained Dirichlet Process Mixture Model (DPMM)", "sec_num": "3.2" }, { "text": "For inference, we use the Gibbs sampler of Vlachos et al. (2009), which can incorporate pairwise constraints. The sampler is identical to a standard collapsed, token-based sampler, except that the conditional probability p(E_a = E | E_{-a}, C_a) = 0 if C_a cannot be merged with the chains in cluster E. This property makes the model non-exchangeable, but in practice non-exchangeable models are sometimes useful (Blei and Frazier, 2010). During sampling, we also learn \u03b1 using the auxiliary variable procedure of West (1995), so the only fixed parameters are those of the vague Gamma prior. However, we found that the model was not sensitive to these hyperparameters.", "cite_spans": [ { "start": 58, "end": 64, "text": "(2009)", "ref_id": null }, { "start": 407, "end": 431, "text": "(Blei and Frazier, 2010)", "ref_id": "BIBREF6" }, { "start": 509, "end": 520, "text": "West (1995)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Constrained Dirichlet Process Mixture Model (DPMM)", "sec_num": "3.2" }, { "text": "We trained our system for Arabic-English cross-lingual entity clustering. 8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data and Procedures", "sec_num": "4" }, { "text": "The Maxent mention similarity model requires a parallel name list for training. Name pair lists can be obtained from the LDC (e.g., LDC2005T34 contains nearly 450,000 parallel Chinese-English names) or Wikipedia (Irvine et al., 2010). We extracted 12,860 name pairs from the parallel Arabic-English translation treebanks, 9 although our experiments show that the model achieves high accuracy with significantly fewer training examples. 
We generated a balanced mix of positive and negative training examples by running a Bernoulli trial for each aligned name pair in the corpus: if the coin came up heads, we replaced the English name with another English name chosen randomly from the corpus, creating a negative example.", "cite_spans": [ { "start": 212, "end": 233, "text": "(Irvine et al., 2010)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Maxent Mention Similarity", "sec_num": null }, { "text": "MT Context Mapping For the MT context mapping method, we trained Phrasal with all data permitted under the NIST OpenMT Ar-En 2009 constrained track evaluation. We built a 5-gram language model from the Xinhua and AFP sections of the Gigaword corpus (LDC2007T07), in addition to all of the target-side training data. In addition to the baseline Phrasal feature set, we used the lexicalized re-ordering model of Galley and Manning (2008).", "cite_spans": [ { "start": 410, "end": 435, "text": "Galley and Manning (2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Maxent Mention Similarity", "sec_num": null }, { "text": "For PLTM training, we formed a corpus of 19,139 English-Arabic topically-aligned Wikipedia articles. Cross-lingual links in Wikipedia are abundant: as of February 2010, there were 77.07M cross-lingual links among Wikipedia's 272 language editions (de Melo and Weikum, 2010). To increase vocabulary coverage for our ACE2008 evaluation corpus, we added 20,000 document singletons from the ACE2008 training corpus. The topically-aligned tuples served as \"glue\" to share topics between languages, while the ACE documents distributed those topics over in-domain vocabulary. 10 We used the PLTM implementation in Mallet (McCallum, 2002). We ran the sampler for 10,000 iterations and set the number of topics K = 512.", "cite_spans": [ { "start": 567, "end": 569, "text": "10", "ref_id": null }, { "start": 612, "end": 629, "text": "(McCallum, 2002)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "PLTM Context Mapping", "sec_num": null }, { "text": "Our experimental design is a cross-lingual extension of the standard cross-document coreference resolution task, which appeared in ACE2008 (NIST, 2008). We evaluate name (NAM) mentions for cross-lingual person (PER) and organization (ORG) entities. Neither the number nor the attributes of the entities are known (i.e., the task does not include a knowledge base). We report results for both gold and automatic within-document mention detection and coreference resolution.", "cite_spans": [ { "start": 139, "end": 150, "text": "NIST, 2008)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Task Evaluation Framework", "sec_num": "5" }, { "text": "We use entity-level evaluation metrics, i.e., we evaluate the E entity clusters rather than the mentions. For the gold setting, we report:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "\u2022 B^3 (Bagga and Baldwin, 1998a): Precision and recall are computed from the intersection of the hypothesis and reference clusters. \u2022 CEAF (Luo, 2005): Precision and recall are computed from a maximum bipartite matching between hypothesis and reference clusters. 
\u2022 NVI (Reichart and Rappoport, 2009): An information-theoretic measure that utilizes the entropy of the clusters and their mutual information. Unlike the commonly-used Variation of Information (VI) metric, normalized VI (NVI) is not sensitive to the size of the data set.", "cite_spans": [ { "start": 139, "end": 150, "text": "(Luo, 2005)", "ref_id": "BIBREF28" }, { "start": 270, "end": 300, "text": "(Reichart and Rappoport, 2009)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "For the automatic setting, we must apply a different metric since the number of system chains may differ from the reference. We use B^3_sys (Cai and Strube, 2010), a variant of B^3 that was shown to penalize both twinless reference chains and spurious system chains more fairly.", "cite_spans": [ { "start": 140, "end": 162, "text": "(Cai and Strube, 2010)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": null }, { "text": "The automatic evaluation of cross-lingual coreference systems requires annotated multilingual corpora. Cross-document annotation is expensive, so we chose the ACE2008 Arabic-English evaluation corpus as a starting point for cross-lingual annotation. The corpus consists of seven genres sampled from independent sources over the course of a decade (Tbl. 3). The corpus provides gold mono-lingual cross-document coreference annotations for both PER and ORG entities. Using these annotations as a starting point, we found and annotated 216 cross-lingual entities. 11 Because a similar corpus did not exist for development, we split the evaluation corpus into development and test sections. However, the usual method of splitting by document would not confine all mentions of each entity to one side of the split. We thus split the corpus by global entity id. We assigned one-third of the entities to development, and the remaining two-thirds to test.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Corpus", "sec_num": null }, { "text": "[Table 3: Evaluation corpus statistics by genre (documents, tokens, entities, and chains); the cell values were lost in extraction.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Corpus", "sec_num": null }, { "text": "10 Mimno et al. (2009) showed that so long as the proportion of topically-aligned to non-aligned documents exceeded 0.25, the topic distributions (as measured by mean Jensen-Shannon Divergence between distributions) did not degrade significantly.", "cite_spans": [ { "start": 3, "end": 22, "text": "Mimno et al. (2009)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Corpus", "sec_num": null }, { "text": "Our modeling techniques and task formulation can be viewed as cross-lingual extensions to cross-document coreference resolution. The classic work on this task was by Bagga and Baldwin (1998b), who adapted the Vector Space Model (VSM) (Salton et al., 1975). Gooi and Allan (2004) found effective algorithmic extensions like agglomerative clustering. Successful feature extensions to the VSM for cross-document coreference have included biographical information (Mann and Yarowsky, 2003) and syntactic context (Chen and Martin, 2007). However, neither of these feature sets generalizes easily to the cross-lingual setting with multiple entity types. Fleischman and Hovy (2004) added a discriminative pairwise mention classifier to a VSM-like model, much as we do. 
More recent work has considered new models for web-scale corpora (Rao et al., 2010; Singh et al., 2011) .", "cite_spans": [ { "start": 166, "end": 191, "text": "Bagga and Baldwin (1998b)", "ref_id": "BIBREF4" }, { "start": 235, "end": 255, "text": "(Salton et al., 1975", "ref_id": "BIBREF43" }, { "start": 259, "end": 280, "text": "Gooi and Allan (2004)", "ref_id": "BIBREF20" }, { "start": 462, "end": 487, "text": "(Mann and Yarowsky, 2003)", "ref_id": "BIBREF30" }, { "start": 510, "end": 533, "text": "(Chen and Martin, 2007)", "ref_id": "BIBREF10" }, { "start": 650, "end": 676, "text": "Fleischman and Hovy (2004)", "ref_id": "BIBREF16" }, { "start": 829, "end": 847, "text": "(Rao et al., 2010;", "ref_id": "BIBREF41" }, { "start": 848, "end": 867, "text": "Singh et al., 2011)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison to Related Tasks and Work", "sec_num": "6" }, { "text": "Cross-document work on languages other than English is scarce. Wang (2005) used a combination of the VSM and heuristic feature selection strategies to cluster transliterated Chinese personal names. For Arabic, Magdy et al. (2007) started with the output of the mention detection and within-document coreference system of Florian et al. (2004) . They clustered the entities incrementally using a binary classifier. Baron and Freedman (2008) used complete-link agglomerative clustering, where merging decisions were based on a variety of features such as document topic and name uniqueness. Finally, Sayeed et al. (2009) translated Arabic name mentions to English and then formed clusters greedily using pairwise matching.", "cite_spans": [ { "start": 63, "end": 74, "text": "Wang (2005)", "ref_id": "BIBREF50" }, { "start": 210, "end": 229, "text": "Magdy et al. (2007)", "ref_id": "BIBREF29" }, { "start": 321, "end": 342, "text": "Florian et al. (2004)", "ref_id": "BIBREF17" }, { "start": 414, "end": 439, "text": "Baron and Freedman (2008)", "ref_id": "BIBREF5" }, { "start": 598, "end": 618, "text": "Sayeed et al. (2009)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison to Related Tasks and Work", "sec_num": "6" }, { "text": "To our knowledge, the cross-lingual entity clustering task is novel. However, there is significant prior work on similar tasks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to Related Tasks and Work", "sec_num": "6" }, { "text": "\u2022 Multilingual coreference resolution: Adapt English within-document coreference models to other languages (Harabagiu and Maiorano, 2000; Florian et al., 2004; Luo and Zitouni, 2005) . \u2022 Named entity translation: For a non-English document, produce an inventory of entities in English. An ACE2007 pilot task . \u2022 Named entity clustering: Assign semantic types to text mentions (Collins and Singer, 1999; Elsner et al., 2009) . 
\u2022 Cross-language name search / entity linking: Match a single query name against a list of known multilingual names (knowledge base). A track in the 2011 NIST Text Analysis Conference (TAC-KBP) evaluation (Aktolga et al., 2008; McCarley, 2009; Udupa and Khapra, 2010; McNamee et al., 2011).", "cite_spans": [ { "start": 158, "end": 180, "text": "(Aktolga et al., 2008;", "ref_id": "BIBREF0" }, { "start": 181, "end": 196, "text": "McCarley, 2009;", "ref_id": "BIBREF34" }, { "start": 197, "end": 220, "text": "Udupa and Khapra, 2010;", "ref_id": "BIBREF48" }, { "start": 221, "end": 243, "text": "McNamee et al., 2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Comparison to Related Tasks and Work", "sec_num": "6" }, { "text": "Our work incorporates elements of the first three tasks. Most importantly, we avoid the key element of entity linking: a knowledge base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to Related Tasks and Work", "sec_num": "6" }, { "text": "We performed intrinsic evaluations for both mention and context similarity. For context similarity, we analyzed mono-lingual entity clustering, which also facilitated comparison to prior work on the ACE2008 evaluation set. Our main results are for the new task: cross-lingual entity clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "7" }, { "text": "We created a random 80/10/10 (train, development, test) split of the Maxent training corpus and evaluated binary classification accuracy (Tbl. 4). Of the misclassified examples, we observed three major error types. First, the model learns that high edit distance is predictive of a mismatch. However, singleton strings that do not match often have a lower edit distance than longer strings that do match. As a result, singletons often cause false positives. Second, names that originate in a third language tend to violate the phonemic correspondences. For example, the model gives a false negative for a German football team: the Arabic form (phonetic mapping: af s kazrslawtrn) versus \"FC Kaiserslautern.\" Finally, names that require translation are problematic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Mention Matching", "sec_num": null }, { "text": "Table 6: Cross-lingual entity clustering (test set, gold within-document processing). B^3_target is the standard B^3 metric applied to the subset of target cross-lingual entities in the test set. For CEAF and B^3, S is the stronger baseline due to the high proportion of singleton entities in the corpus. 
Of course, cross-lingual entities have at least two chains, so N- is a better baseline for cross-lingual clustering.", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 129, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Cross-lingual Mention Matching", "sec_num": null }, { "text": "Mono-lingual Entity Clustering For comparison, we also evaluated our system on a standard mono-lingual cross-document coreference task (Arabic and English) (Tbl. 5). We configured the system with HAC clustering and Jaro-Winkler (within-language) mention similarity. We built mono-lingual ELMs for context similarity. We used two baselines:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Mention Matching", "sec_num": null }, { "text": "\u2022 S: E = C, i.e., the clustering solution is just the set of mono-lingual coreference chains. This is a common baseline for mono-lingual entity clustering (Baron and Freedman, 2008).", "cite_spans": [ { "start": 170, "end": 196, "text": "(Baron and Freedman, 2008)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Mention Matching", "sec_num": null }, { "text": "\u2022 N-: We run HAC with \u03c1 = \u221e. Therefore, E is the set of fully-connected components in C subject to the pairwise constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Mention Matching", "sec_num": null }, { "text": "For HAC, we manually tuned the stop threshold \u03b3, the Jaro-Winkler threshold \u03b7, and the ELM smoothing parameter \u03c1 on the development set. For the DPMM, no development tuning was necessary, and we evaluated a single sample of E taken after 3,000 iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Mention Matching", "sec_num": null }, { "text": "To our knowledge, Baron and Freedman (2008) reported the only previous results on the ACE2008 data set. However, they only gave gold results for English, and clustered the entire evaluation corpus (test+development). To control for the effect of within-document errors, we considered their gold input (mention detection and within-document coreference resolution) results. They reported B^3 for the two entity types separately: ORG (91.5% F1) and PER (94.3% F1); the two systems are at least in the same range.", "cite_spans": [ { "start": 18, "end": 43, "text": "Baron and Freedman (2008)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Mention Matching", "sec_num": null }, { "text": "Table 7: Cross-lingual entity clustering (test set, automatic (Serif) within-document processing). For HAC, we used the same parameters as the gold setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Mention Matching", "sec_num": null }, { "text": "We evaluated four system configurations on the new task: HAC+MT, HAC+PLTM, DPMM+MT, and DPMM+PLTM. First, we established an upper bound by assuming gold within-document mention detection and coreference resolution (Tbl. 6). This setting isolated the new cross-lingual clustering methods from within-document processing errors. Then we evaluated with Serif (automatic) within-document processing (Tbl. 7). This second experiment replicated an application setting. 
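For reference, mention-level B^3 precision and recall can be computed as in the following sketch (a standard formulation in our own code, not the official ACE scorer):

```python
from collections import defaultdict

def b_cubed(system, reference):
    """system, reference: dict mention_id -> cluster_id.
    Returns (precision, recall), each averaged over mentions."""
    def invert(assign):
        clusters = defaultdict(set)
        for m, c in assign.items():
            clusters[c].add(m)
        return clusters
    sys_c, ref_c = invert(system), invert(reference)
    mentions = system.keys() & reference.keys()
    overlap = {m: len(sys_c[system[m]] & ref_c[reference[m]]) for m in mentions}
    p = sum(overlap[m] / len(sys_c[system[m]]) for m in mentions) / len(mentions)
    r = sum(overlap[m] / len(ref_c[reference[m]]) for m in mentions) / len(mentions)
    return p, r
```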
We used the same baselines and tuning procedures as in the mono-lingual clustering experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual Entity Clustering", "sec_num": "7.2" }, { "text": "In the gold setting, HAC+MT produces the best results, as expected. The dimensionality reduction of the vocabulary imposed by the PLTM significantly reduces accuracy, but HAC+PLTM still exceeds the baseline. We tried increasing the number of PLTM topics K, but did not observe an improvement in task accuracy. For both context-mapping methods, the DPMM suffers from low recall. Upon inspection, the clustering solution of DPMM+MT contains a high proportion of singleton hypotheses, suggesting that the model finds lower similarity in the presence of a larger vocabulary. When the context vocabulary consists of PLTM topics, larger clusters are discovered (DPMM+PLTM). The effect of dimensionality reduction is also apparent in the clustering solutions of the PLTM models. For example, for the Serif output, DPMM+PLTM produces a cluster consisting of \"White House\", \"Senate\", \"House of Representatives\", and \"Parliament\". Arabic mentions of the latter three entities pass the pairwise mention similarity constraints due to the word 'council', which appears in the text mentions for all three legislative bodies. A cross-language matching error resulted in the linking of \"White House\", and the reduced granularity of the contexts precluded further disambiguation. Of course, these entities probably appear in similar contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "The caveat with the Serif results in Tbl. 7 is that 3,251 of the 7,655 automatic coreference chains are not in the reference. Consequently, the evaluation is dominated by the penalty for spurious system coreference chains. Nonetheless, all models except DPMM+PLTM exceed the baselines, and the relationships between models observed in the gold experiments hold for this setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": null }, { "text": "Cross-lingual entity clustering is a natural step toward more robust natural language understanding. We proposed pipeline models that make clustering decisions based on cross-lingual similarity. We investigated two methods for mapping documents in different languages to a common representation: MT and the PLTM. Although MT may achieve more accurate results for some language pairs, the PLTM training resources (e.g., Wikipedia) are readily available for many languages. As for the clustering algorithms, HAC appears to perform better than the DPMM on our dataset, but this may be due to the small corpus size. The instance-level constraints represent tendencies that could be learned from larger amounts of data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "With more data, we might be able to relax the constraints and use an exchangeable DPMM, which might be more effective. Finally, we have shown that significant quantities of within-document errors cascade into the cross-lingual clustering phase. 
As a result, we plan a model that clusters the mentions directly, thus removing the dependence on within-document coreference resolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "In this paper, we have set baselines and proposed models that significantly exceeded those baselines. The best model improved upon the cross-lingual entity baseline by 24.3% F1. This result was achieved without a knowledge base, which is required by previous approaches to cross-lingual entity linking. More importantly, our techniques can be used to extend existing cross-document entity clustering systems for the increasingly multilingual web.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "2 For multi-token names, we sort the tokens prior to computing the score, as suggested by Christen (2006). 3 This idea is reminiscent of Soundex, which Freeman et al. (2006) used for cross-lingual name matching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "4 The Hungarian algorithm finds an optimal minimum-cost alignment. For pairwise costs between tokens, we used the Levenshtein edit distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "5 Specification of a combined similarity measure is an interesting direction for future work. 6 These constraints are similar to the pair-filters of Mayfield et al. (2009). 7 Recall that after context mapping, all languages have a common vocabulary V.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "8 We tokenized all English documents with packages from the Stanford parser (Klein and Manning, 2003). For Arabic documents, we used Mada (Habash and Rambow, 2005) for orthographic normalization and clitic segmentation. 9 LDC Catalog numbers LDC2009E82 and LDC2009E88.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "11 The annotators were the first author and another fluent speaker of Arabic. The annotations, corrections, and corpus split are available at http://www.spencegreen.com/research/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Cross-document cross-lingual coreference retrieval", "authors": [ { "first": "E", "middle": [], "last": "Aktolga", "suffix": "" }, { "first": "M", "middle": [], "last": "Cartright", "suffix": "" }, { "first": "J", "middle": [], "last": "Allan", "suffix": "" } ], "year": 2008, "venue": "CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Aktolga, M. Cartright, and J. Allan. 2008. Cross-document cross-lingual coreference retrieval. In CIKM.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Scalable training of L1-regularized log-linear models", "authors": [ { "first": "G", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "J", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2007, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Andrew and J. Gao. 2007. Scalable training of L1-regularized log-linear models. 
In ICML.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems", "authors": [ { "first": "C", "middle": [ "E" ], "last": "Antoniak", "suffix": "" } ], "year": 1974, "venue": "The Annals of Statistics", "volume": "2", "issue": "6", "pages": "1152--1174", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. E. Antoniak. 1974. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, 2(6):1152-1174.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Algorithms for scoring coreference chains", "authors": [ { "first": "A", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "B", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Bagga and B. Baldwin. 1998a. Algorithms for scoring coref- erence chains. In LREC.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Entity-based cross-document coreferencing using the vector space model", "authors": [ { "first": "A", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "B", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Bagga and B. Baldwin. 1998b. Entity-based cross-document coreferencing using the vector space model. In COLING-ACL.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Who is Who and What is What: Experiments in cross-document co-reference", "authors": [ { "first": "A", "middle": [], "last": "Baron", "suffix": "" }, { "first": "M", "middle": [], "last": "Freedman", "suffix": "" } ], "year": 2008, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Baron and M. Freedman. 2008. Who is Who and What is What: Experiments in cross-document co-reference. In EMNLP.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Distance dependent Chinese restaurant processes", "authors": [ { "first": "D", "middle": [], "last": "Blei", "suffix": "" }, { "first": "P", "middle": [], "last": "Frazier", "suffix": "" } ], "year": 2010, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Blei and P. Frazier. 2010. Distance dependent Chinese restau- rant processes. In ICML.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Evaluation metrics for end-toend coreference resolution systems", "authors": [ { "first": "J", "middle": [], "last": "Cai", "suffix": "" }, { "first": "M", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the SIGDIAL 2010 Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Cai and M. Strube. 2010. Evaluation metrics for end-to- end coreference resolution systems. 
In Proceedings of the SIGDIAL 2010 Conference.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Phrasal: A statistical machine translation toolkit for exploring new model features", "authors": [ { "first": "D", "middle": [], "last": "Cer", "suffix": "" }, { "first": "M", "middle": [], "last": "Galley", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2010, "venue": "HLT-NAACL, Demonstration Session", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Cer, M. Galley, D. Jurafsky, and C. D. Manning. 2010. Phrasal: A statistical machine translation toolkit for exploring new model features. In HLT-NAACL, Demonstration Session.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Unsupervised learning of name structure from coreference data", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2001, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak. 2001. Unsupervised learning of name structure from coreference data. In NAACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Towards robust unsupervised personal name disambiguation", "authors": [ { "first": "Y", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2007, "venue": "EMNLP-CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Chen and J. Martin. 2007. Towards robust unsupervised personal name disambiguation. In EMNLP-CoNLL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A comparison of personal name matching: Techniques and practical issues", "authors": [ { "first": "P", "middle": [], "last": "Christen", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Christen. 2006. A comparison of personal name matching: Techniques and practical issues. Technical Report TR-CS-06- 02, Australian National University.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Unsupervised models for named entity classification", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Y", "middle": [], "last": "Singer", "suffix": "" } ], "year": 1999, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Collins and Y. Singer. 1999. Unsupervised models for named entity classification. In EMNLP.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Untangling the cross-lingual link structure of Wikipedia", "authors": [ { "first": "G", "middle": [], "last": "De Melo", "suffix": "" }, { "first": "G", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2010, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. de Melo and G. Weikum. 2010. Untangling the cross-lingual link structure of Wikipedia. 
In ACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Structured generative models for unsupervised named-entity clustering", "authors": [ { "first": "M", "middle": [], "last": "Elsner", "suffix": "" }, { "first": "E", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2009, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Elsner, E. Charniak, and M. Johnson. 2009. Structured generative models for unsupervised named-entity clustering. In HLT-NAACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A new metric for probability distributions", "authors": [ { "first": "D", "middle": [ "M" ], "last": "Endres", "suffix": "" }, { "first": "J", "middle": [ "E" ], "last": "Schindelin", "suffix": "" } ], "year": 2003, "venue": "IEEE Transactions on Information Theory", "volume": "49", "issue": "7", "pages": "1858--1860", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. M. Endres and J. E. Schindelin. 2003. A new metric for probability distributions. IEEE Transactions on Information Theory, 49(7):1858 -1860.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Multi-document person name resolution", "authors": [ { "first": "M", "middle": [], "last": "Fleischman", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2004, "venue": "ACL Workshop on Reference Resolution and its Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Fleischman and E. Hovy. 2004. Multi-document person name resolution. In ACL Workshop on Reference Resolution and its Applications.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A statistical model for multilingual entity detection and tracking", "authors": [ { "first": "R", "middle": [], "last": "Florian", "suffix": "" }, { "first": "H", "middle": [], "last": "Hassan", "suffix": "" }, { "first": "A", "middle": [], "last": "Ittycheriah", "suffix": "" }, { "first": "H", "middle": [], "last": "Jing", "suffix": "" }, { "first": "N", "middle": [], "last": "Kambhatla", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Florian, H. Hassan, A. Ittycheriah, H. Jing, N. Kambhatla, et al. 2004. A statistical model for multilingual entity detection and tracking. In HLT-NAACL.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Cross linguistic name matching in English and Arabic: a one to many mapping extension of the Levenshtein edit distance algorithm", "authors": [ { "first": "A", "middle": [ "T" ], "last": "Freeman", "suffix": "" }, { "first": "S", "middle": [ "L" ], "last": "Condon", "suffix": "" }, { "first": "C", "middle": [ "M" ], "last": "Ackerman", "suffix": "" } ], "year": 2006, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. T. Freeman, S. L. Condon, and C. M. Ackerman. 2006. Cross linguistic name matching in English and Arabic: a one to many mapping extension of the Levenshtein edit distance algorithm. 
In HLT-NAACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A simple and effective hierarchical phrase reordering model", "authors": [ { "first": "M", "middle": [], "last": "Galley", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Galley and C. D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In EMNLP.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Cross-document coreference on a large scale corpus", "authors": [ { "first": "C", "middle": [ "H" ], "last": "Gooi", "suffix": "" }, { "first": "J", "middle": [], "last": "Allan", "suffix": "" } ], "year": 2004, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. H. Gooi and J. Allan. 2004. Cross-document coreference on a large scale corpus. In HLT-NAACL.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Arabic tokenization, part-ofspeech tagging and morphological disambiguation in one fell swoop", "authors": [ { "first": "N", "middle": [], "last": "Habash", "suffix": "" }, { "first": "O", "middle": [], "last": "Rambow", "suffix": "" } ], "year": 2005, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "N. Habash and O. Rambow. 2005. Arabic tokenization, part-of- speech tagging and morphological disambiguation in one fell swoop. In ACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Multilingual coreference resolution", "authors": [ { "first": "M", "middle": [], "last": "Harabagiu", "suffix": "" }, { "first": "S", "middle": [ "J" ], "last": "Maiorano", "suffix": "" } ], "year": 2000, "venue": "ANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Harabagiu and S. J. Maiorano. 2000. Multilingual corefer- ence resolution. In ANLP.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Transliterating from all languages", "authors": [ { "first": "A", "middle": [], "last": "Irvine", "suffix": "" }, { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "A", "middle": [], "last": "Klementiev", "suffix": "" } ], "year": 2010, "venue": "AMTA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Irvine, C. Callison-Burch, and A. Klementiev. 2010. Translit- erating from all languages. In AMTA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Accurate unlexicalized parsing", "authors": [ { "first": "D", "middle": [], "last": "Klein", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Klein and C. D. Manning. 2003. Accurate unlexicalized parsing. In ACL.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "From instancelevel constraints to space-level constraints: Making the most of prior knowledge in data clustering", "authors": [ { "first": "S", "middle": [ "D" ], "last": "Klein", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Kamvar", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2002, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klein, S. D. 
Kamvar, and C. D. Manning. 2002. From instance-level constraints to space-level constraints: Making the most of prior knowledge in data clustering. In ICML.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Machine transliteration. Computational Linguistics", "authors": [ { "first": "K", "middle": [], "last": "Knight", "suffix": "" }, { "first": "J", "middle": [], "last": "Graehl", "suffix": "" } ], "year": 1998, "venue": "", "volume": "24", "issue": "", "pages": "599--612", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Knight and J. Graehl. 1998. Machine transliteration. Computational Linguistics, 24:599-612.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Multi-lingual coreference resolution with syntactic features", "authors": [ { "first": "X", "middle": [], "last": "Luo", "suffix": "" }, { "first": "I", "middle": [], "last": "Zitouni", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Luo and I. Zitouni. 2005. Multi-lingual coreference resolution with syntactic features. In HLT-EMNLP.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "On coreference resolution performance metrics", "authors": [ { "first": "X", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2005, "venue": "HLT-EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Luo. 2005. On coreference resolution performance metrics. In HLT-EMNLP.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Arabic cross-document person name normalization", "authors": [ { "first": "W", "middle": [], "last": "Magdy", "suffix": "" }, { "first": "K", "middle": [], "last": "Darwish", "suffix": "" }, { "first": "O", "middle": [], "last": "Emam", "suffix": "" }, { "first": "H", "middle": [], "last": "Hassan", "suffix": "" } ], "year": 2007, "venue": "Workshop on Computational Approaches to Semitic Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. Magdy, K. Darwish, O. Emam, and H. Hassan. 2007. Arabic cross-document person name normalization. In Workshop on Computational Approaches to Semitic Languages.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Unsupervised personal name disambiguation", "authors": [ { "first": "G", "middle": [ "S" ], "last": "Mann", "suffix": "" }, { "first": "D", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2003, "venue": "NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. S. Mann and D. Yarowsky. 2003. Unsupervised personal name disambiguation. In NAACL.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Introduction to Information Retrieval", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "P", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. D. Manning, P. Raghavan, and H. Sch\u00fctze. 2008. Introduction to Information Retrieval.
Cambridge University Press.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Cross-document coreference resolution: A key technology for learning by reading", "authors": [ { "first": "J", "middle": [], "last": "Mayfield", "suffix": "" }, { "first": "D", "middle": [], "last": "Alexander", "suffix": "" }, { "first": "B", "middle": [], "last": "Dorr", "suffix": "" }, { "first": "J", "middle": [], "last": "Eisner", "suffix": "" }, { "first": "T", "middle": [], "last": "Elsayed", "suffix": "" } ], "year": 2009, "venue": "AAAI Spring Symposium on Learning by Reading and Learning to Read", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Mayfield, D. Alexander, B. Dorr, J. Eisner, T. Elsayed, et al. 2009. Cross-document coreference resolution: A key technol- ogy for learning by reading. In AAAI Spring Symposium on Learning by Reading and Learning to Read.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "MALLET: A machine learning for language toolkit", "authors": [ { "first": "A", "middle": [ "K" ], "last": "Mccallum", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. K. McCallum. 2002. MALLET: A machine learning for language toolkit. http://mallet.cs.umass.edu.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Cross language name matching", "authors": [ { "first": "J", "middle": [ "S" ], "last": "Mccarley", "suffix": "" } ], "year": 2009, "venue": "SIGIR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. S. McCarley. 2009. Cross language name matching. In SIGIR.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Cross-language entity linking", "authors": [ { "first": "P", "middle": [], "last": "Mcnamee", "suffix": "" }, { "first": "J", "middle": [], "last": "Mayfield", "suffix": "" }, { "first": "D", "middle": [], "last": "Lawrie", "suffix": "" }, { "first": "D", "middle": [ "W" ], "last": "Oard", "suffix": "" }, { "first": "D", "middle": [], "last": "Doermann", "suffix": "" } ], "year": 2011, "venue": "IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. McNamee, J. Mayfield, D. Lawrie, D.W. Oard, and D. Doer- mann. 2011. Cross-language entity linking. In IJCNLP.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Polylingual topic models", "authors": [ { "first": "D", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "H", "middle": [ "M" ], "last": "Wallach", "suffix": "" }, { "first": "J", "middle": [], "last": "Naradowsky", "suffix": "" }, { "first": "D", "middle": [ "A" ], "last": "Smith", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2009, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Mimno, H. M. Wallach, J. Naradowsky, D. A. Smith, and A. McCallum. 2009. Polylingual topic models. In EMNLP.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Automatic Content Extraction 2008 evaluation plan (ACE2008): Assessment of detection and recognition of entities and relations within and across documents", "authors": [ { "first": "", "middle": [], "last": "Nist", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "NIST. 2008. 
Automatic Content Extraction 2008 evaluation plan (ACE2008): Assessment of detection and recognition of entities and relations within and across documents. Technical Report rev. 1.2d, National Institute of Standards and Technology (NIST), 8 August.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Approximate String Comparison and its Effect on an Advanced Record Linkage System", "authors": [ { "first": "E", "middle": [ "H" ], "last": "Porter", "suffix": "" }, { "first": "W", "middle": [ "E" ], "last": "Winkler", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "190--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. H. Porter and W. E. Winkler, 1997. Approximate String Comparison and its Effect on an Advanced Record Linkage System, chapter 6, pages 190-199. U.S. Bureau of the Census.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "An exploration of entity models, collective classification and relation description", "authors": [ { "first": "H", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "J", "middle": [], "last": "Allan", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "KDD Workshop on Link Analysis and Group Detection", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Raghavan, J. Allan, and A. McCallum. 2004. An exploration of entity models, collective classification and relation description. In KDD Workshop on Link Analysis and Group Detection.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "SERIF language processing-effective trainable language understanding", "authors": [ { "first": "L", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "E", "middle": [], "last": "Boschee", "suffix": "" }, { "first": "M", "middle": [], "last": "Freedman", "suffix": "" }, { "first": "J", "middle": [], "last": "Macbride", "suffix": "" }, { "first": "R", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "A", "middle": [], "last": "Zamanian", "suffix": "" } ], "year": 2011, "venue": "Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation", "volume": "", "issue": "", "pages": "636--644", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Ramshaw, E. Boschee, M. Freedman, J. MacBride, R. Weischedel, and A. Zamanian. 2011. SERIF language processing-effective trainable language understanding. In J. Olive et al., editors, Handbook of Natural Language Processing and Machine Translation: DARPA Global Autonomous Language Exploitation, pages 636-644. Springer.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Streaming cross document entity coreference resolution", "authors": [ { "first": "D", "middle": [], "last": "Rao", "suffix": "" }, { "first": "P", "middle": [], "last": "Mcnamee", "suffix": "" }, { "first": "M", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2010, "venue": "COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Rao, P. McNamee, and M. Dredze. 2010. Streaming cross document entity coreference resolution.
In COLING.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "The NVI clustering evaluation measure", "authors": [ { "first": "R", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "A", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Reichart and A. Rappoport. 2009. The NVI clustering evalu- ation measure. In CoNLL.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "A vector space model for automatic indexing", "authors": [ { "first": "G", "middle": [], "last": "Salton", "suffix": "" }, { "first": "A", "middle": [], "last": "Wong", "suffix": "" }, { "first": "C", "middle": [ "S" ], "last": "Yang", "suffix": "" } ], "year": 1975, "venue": "CACM", "volume": "18", "issue": "", "pages": "613--620", "other_ids": {}, "num": null, "urls": [], "raw_text": "G. Salton, A. Wong, and C. S. Yang. 1975. A vector space model for automatic indexing. CACM, 18:613-620, November.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Arabic cross-document coreference detection", "authors": [ { "first": "A", "middle": [], "last": "Sayeed", "suffix": "" }, { "first": "T", "middle": [], "last": "Elsayed", "suffix": "" }, { "first": "N", "middle": [], "last": "Garera", "suffix": "" }, { "first": "D", "middle": [], "last": "Alexander", "suffix": "" }, { "first": "T", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2009, "venue": "ACL-IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Sayeed, T. Elsayed, N. Garera, D. Alexander, T. Xu, et al. 2009. Arabic cross-document coreference detection. In ACL- IJCNLP, Short Papers.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Large-scale cross-document coreference using distributed inference and hierarchical models", "authors": [ { "first": "S", "middle": [], "last": "Singh", "suffix": "" }, { "first": "A", "middle": [], "last": "Subramanya", "suffix": "" }, { "first": "F", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "A", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2011, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Singh, A. Subramanya, F. Pereira, and A. McCallum. 2011. Large-scale cross-document coreference using distributed in- ference and hierarchical models. In ACL.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Entity translation and alignment in the ACE-07 ET task", "authors": [ { "first": "Z", "middle": [], "last": "Song", "suffix": "" }, { "first": "S", "middle": [], "last": "Strassel", "suffix": "" } ], "year": 2008, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Song and S. Strassel. 2008. Entity translation and alignment in the ACE-07 ET task. 
In LREC.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Linguistic resources and evaluation techniques for evaluation of cross-document automatic content extraction", "authors": [ { "first": "S", "middle": [], "last": "Strassel", "suffix": "" }, { "first": "M", "middle": [], "last": "Przybocki", "suffix": "" }, { "first": "K", "middle": [], "last": "Peterson", "suffix": "" }, { "first": "Z", "middle": [], "last": "Song", "suffix": "" }, { "first": "K", "middle": [], "last": "Maeda", "suffix": "" } ], "year": 2008, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Strassel, M. Przybocki, K. Peterson, Z. Song, and K. Maeda. 2008. Linguistic resources and evaluation techniques for evaluation of cross-document automatic content extraction. In LREC.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Improving the multilingual user experience of Wikipedia using cross-language name search", "authors": [ { "first": "R", "middle": [], "last": "Udupa", "suffix": "" }, { "first": "M", "middle": [ "M" ], "last": "Khapra", "suffix": "" } ], "year": 2010, "venue": "HLT-NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Udupa and M. M. Khapra. 2010. Improving the multilin- gual user experience of Wikipedia using cross-language name search. In HLT-NAACL.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Unsupervised and constrained Dirichlet process mixture models for verb clustering", "authors": [ { "first": "A", "middle": [], "last": "Vlachos", "suffix": "" }, { "first": "A", "middle": [], "last": "Korhonen", "suffix": "" }, { "first": "Z", "middle": [], "last": "Ghahramani", "suffix": "" } ], "year": 2009, "venue": "Proc. of the Workshop on Geometrical Models of Natural Language Semantics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Vlachos, A. Korhonen, and Z. Ghahramani. 2009. Unsuper- vised and constrained Dirichlet process mixture models for verb clustering. In Proc. of the Workshop on Geometrical Models of Natural Language Semantics.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Cross-document transliterated personal name coreference resolution", "authors": [ { "first": "H", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2005, "venue": "Fuzzy Systems and Knowledge Discovery", "volume": "3614", "issue": "", "pages": "11--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Wang. 2005. Cross-document transliterated personal name coreference resolution. In L. Wang and Y. Jin, editors, Fuzzy Systems and Knowledge Discovery, volume 3614 of Lecture Notes in Computer Science, pages 11-20. Springer.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Hyperparameter estimation in Dirichlet process mixture models", "authors": [ { "first": "M", "middle": [], "last": "West", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. West. 1995. Hyperparameter estimation in Dirichlet process mixture models. Technical report, Duke University.", "links": null } }, "ref_entries": { "FIGREF1": { "num": null, "type_str": "figure", "text": "Semantic type: type(C a ) = type(C b ) 3. Mention Match: sim(m i , m j ) = f alse, where m i = repr(C a ) and m j = repr(C b ).", "uris": null }, "TABREF0": { "num": null, "text": "", "html": null, "content": "
: English-Arabic mapping rules to a common orthographic representation. \"\u2205\" indicates a null mapping. For English, we also lowercase and remove determiners and punctuation. For Arabic, we remove the determiner Al 'the' and the elongation character tatwil '\u0640'.
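The body of the mapping table did not survive extraction, but the caption fully specifies the language-specific preprocessing. A minimal Python sketch of those rules, under stated assumptions: AR_CHAR_MAP is an illustrative stand-in for the paper's character mapping table (only the tatwil rule from the caption is encoded), and the English determiner list is likewise assumed, not taken from the paper.

import re

# Illustrative stand-in for the paper's mapping table; a null mapping
# ('\u2205' in the caption) deletes the character.
AR_CHAR_MAP = {
    '\u0640': '',  # tatwil (elongation character) -> null mapping
}

EN_DETERMINERS = {'the', 'a', 'an'}  # assumed determiner list

def normalize_english(mention):
    # Lowercase, then drop punctuation and determiners (per the caption).
    tokens = re.findall(r'[a-z0-9]+', mention.lower())
    return ' '.join(t for t in tokens if t not in EN_DETERMINERS)

def normalize_arabic(mention):
    # Strip the leading determiner Al (alif + lam), then apply the map.
    normalized = []
    for token in mention.split():
        if token.startswith('\u0627\u0644'):
            token = token[2:]
        normalized.append(''.join(AR_CHAR_MAP.get(ch, ch) for ch in token))
    return ' '.join(normalized)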
", "type_str": "table" }, "TABREF1": { "num": null, "text": "", "html": null, "content": "", "type_str": "table" }, "TABREF3": { "num": null, "text": "ACE2008 evaluation corpus PER and ORG entity statistics. Singleton chains account for 51.4% of the Arabic data and 46.2% of the English data. Just 216 entities appear in both languages.", "html": null, "content": "
", "type_str": "table" }, "TABREF5": { "num": null, "text": "", "html": null, "content": "
: Cross-lingual mention matching accuracy [%]. The training data contains names from three genres: broadcast news (bn), newswire (nw), and weblog (wb). We used the full training corpus (all) for the cross-lingual clustering experiments, but the model achieved high accuracy with significantly fewer training examples (e.g., bn).
Method | CEAF\u2191 | NVI\u2193 | #hyp | B\u00b3 P\u2191 | B\u00b3 R\u2191 | B\u00b3 F1\u2191
Mono-lingual Arabic (#gold=1,721)
HAC | 87.2 | 0.052 | 1,669 | 89.8 | 89.8 | 89.8
Mono-lingual English (#gold=1,529)
HAC | 88.5 | 0.042 | 1,536 | 93.7 | 89.0 | 91.4
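The B\u00b3 columns above are mention-level averages: each mention is scored by the overlap between the hypothesis cluster and the gold cluster containing it, and the per-mention scores are then averaged. A minimal, illustrative sketch of this standard computation (function and variable names are ours, not the paper's; clusters are sets of mention ids over the same mention inventory):

def b_cubed(gold_clusters, hyp_clusters):
    # Map each mention to its gold (key) and hypothesis (response) cluster.
    gold_of = {m: c for c in gold_clusters for m in c}
    hyp_of = {m: c for c in hyp_clusters for m in c}
    mentions = list(gold_of)
    # Per-mention precision/recall from cluster overlap, then average.
    p = sum(len(hyp_of[m] & gold_of[m]) / len(hyp_of[m]) for m in mentions) / len(mentions)
    r = sum(len(hyp_of[m] & gold_of[m]) / len(gold_of[m]) for m in mentions) / len(mentions)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

For example, b_cubed([{1, 2}, {3}], [{1}, {2, 3}]) gives P = R = F1 = 2/3.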
", "type_str": "table" }, "TABREF6": { "num": null, "text": "Mono-lingual entity clustering evaluation (test set, gold within-document processing). Higher scores (\u2191) are better for CEAF and B 3 , whereas lower (\u2193) is better for NVI. #gold indicates the number of reference entities, whereas #hyp is the size of E.", "html": null, "content": "", "type_str": "table" } } } }