{ "paper_id": "D13-1018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:41:47.223844Z" }, "title": "Growing Multi-Domain Glossaries from a Few Seeds using Probabilistic Topic Models", "authors": [ { "first": "Stefano", "middle": [], "last": "Faralli", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e0 di Roma", "location": {} }, "email": "faralli@di.uniroma1.it" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universit\u00e0 di Roma", "location": {} }, "email": "navigli@di.uniroma1.it" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we present a minimallysupervised approach to the multi-domain acquisition of wide-coverage glossaries. We start from a small number of hypernymy relation seeds and bootstrap glossaries from the Web for dozens of domains using Probabilistic Topic Models. Our experiments show that we are able to extract high-precision glossaries comprising thousands of terms and definitions.", "pdf_parse": { "paper_id": "D13-1018", "_pdf_hash": "", "abstract": [ { "text": "In this paper we present a minimallysupervised approach to the multi-domain acquisition of wide-coverage glossaries. We start from a small number of hypernymy relation seeds and bootstrap glossaries from the Web for dozens of domains using Probabilistic Topic Models. Our experiments show that we are able to extract high-precision glossaries comprising thousands of terms and definitions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Dictionaries, thesauri and glossaries are useful sources of information for students, scholars and everyday readers, who use them to look up words of which they either do not know, or have forgotten, the meaning. 
With the advent of the Web an increasing number of dictionaries and technical glossaries have been made available online, thereby speeding up the definition search process. However, finding definitions is not always immediate, especially if the target term pertains to a specialized domain. Indeed, not even well-known services such as Google Define are able to provide definitions for scientific or technical terms such as taxonomy learning or distant supervision in AI or figure-four leglock and suspended surfboard in wrestling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Domain-specific knowledge of a definitional nature is useful not only for humans, but also for machines (Hovy et al., 2013) . Examples include Natural Language Processing tasks such as Question Answering (Cui et al., 2007) , Word Sense Disambiguation (Duan and Yates, 2010; Faralli and Navigli, 2012) and ontology learning (Velardi et al., 2013) . Unfortunately, most of the Web dictionaries and glossaries available online comprise just a few hundred definitions, and they therefore provide only a partial view of a domain. This is also the case with manually compiled glossaries created by means of collaborative efforts, such as Wikipedia. 1 The coverage issue is addressed by online aggregation services such as Google Define, which bring together definitions from several online dictionaries. 
However, these services do not classify textual definitions by domain: they just present the collected definitions for all the possible meanings of a given term.", "cite_spans": [ { "start": 113, "end": 132, "text": "(Hovy et al., 2013)", "ref_id": "BIBREF10" }, { "start": 213, "end": 231, "text": "(Cui et al., 2007)", "ref_id": "BIBREF3" }, { "start": 283, "end": 309, "text": "Faralli and Navigli, 2012)", "ref_id": "BIBREF5" }, { "start": 332, "end": 354, "text": "(Velardi et al., 2013)", "ref_id": "BIBREF25" }, { "start": 652, "end": 653, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In order to automatically obtain large domain glossaries, in recent years computational approaches have been developed which extract textual definitions from corpora (Navigli and Velardi, 2010; Reiplinger et al., 2012) or the Web (Velardi et al., 2008; Fujii and Ishikawa, 2000) . The methods involving corpora start from a given set of terms (possibly automatically extracted from a domain corpus) and then harvest textual definitions for these terms from the input corpus using a supervised system. Web-based methods, instead, extract text snippets from Web pages which match pre-defined lexical patterns, such as \"X is a Y\", along the lines of Hearst (1992) . These approaches typically perform with high precision and low recall, because they fall short of detecting the high variability of the syntactic structure of textual definitions. To address the low-recall issue, recurring cue terms occurring within dictionary and encyclopedic resources can be automatically extracted and incorporated into lexical patterns (Saggion, 2004) . 
However, this approach is term-specific and does not scale to arbitrary terminologies and domains.", "cite_spans": [ { "start": 166, "end": 193, "text": "(Navigli and Velardi, 2010;", "ref_id": "BIBREF16" }, { "start": 194, "end": 218, "text": "Reiplinger et al., 2012)", "ref_id": "BIBREF21" }, { "start": 230, "end": 252, "text": "(Velardi et al., 2008;", "ref_id": "BIBREF24" }, { "start": 253, "end": 278, "text": "Fujii and Ishikawa, 2000)", "ref_id": "BIBREF8" }, { "start": 647, "end": 660, "text": "Hearst (1992)", "ref_id": "BIBREF9" }, { "start": 1021, "end": 1036, "text": "(Saggion, 2004)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The goal of the new approach outlined in this paper is to enable the automatic harvesting of largescale, full-fledged domain glossaries for dozens of domains, an outcome which should be very useful for both human activities and automatic tasks. We present ProToDoG (Probabilistic Topics for multi-Domain Glossaries), a framework for growing multi-domain glossaries which has three main novelties: i) minimal human supervision: a small set of hypernymy relation seeds for each domain is used to bootstrap the multi-domain acquisition process;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "ii) jointness: our approach harvests terms and glosses at the same time;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "iii) probabilistic topic models are leveraged for a simultaneous, high-precision multi-domain classification of the extracted definitions, with substantial performance improvements over our previous work on glossary bootstrapping, i.e., GlossBoot (De Benedictis et al., 2013) .", "cite_spans": [ { "start": 237, "end": 275, "text": "GlossBoot (De Benedictis et al., 2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": 
"Introduction", "sec_num": "1" }, { "text": "ProToDog is able to harvest definitions from the Web and thus drop the requirement of large corpora for each domain. Moreover, apart from the need to select a few seeds, it avoids the use of training data or manually defined sets of lexical patterns. It is thus applicable to virtually any language of interest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given a set of domains D = {d 1 , ..., d n }, for each domain d \u2208 D ProToDoG harvests a domain glossary G d containing pairs of the kind (t, g) where t is a domain term and g is its textual definition, i.e., gloss. We show the pseudocode of ProToDoG in Algorithm 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "Step 1. Initial seed selection: Algorithm 1 takes as input a set of domains D and, for each domain d \u2208 D, a small set of hypernymy relation seeds ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "S d = {(t 1 , h 1 ), . . . 
, (t |S d | , h |S d | )},", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "Algorithm 1 ProToDoG glossary bootstrapping:\n1: k \u2190 1\n2: repeat\n3: for each domain d \u2208 D do\n4: G k d \u2190 \u2205\n5: for each seed (t j , h j ) \u2208 S d do\n6: pages \u2190 webSearch(t j , h j , \"glossary\")\n7: G k d \u2190 G k d \u222a extractGlossary(pages)\n8: end for\n9: end for\n10: create topic model from the glosses acquired up until iteration k-1\n11: infer topic assignments for iteration-k glosses\n12: filter out non-domain glosses for each domain\n13: for each d \u2208 D do\n14: S d \u2190 seedSelectionForNextIteration(G k d )\n15: end for\n16: k \u2190 k + 1\n17: until k > max\n18: for each domain d \u2208 D do\n19: recover filtered glosses into G max+1 d\n20: G d \u2190 \u222a j=1,...,max+1 G j d\n21: end for\n22: return G = {(G d , d) : d \u2208 D}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "Each pair (t j , h j ) contains a term t j and its generalization h j (e.g., (linux, operating system)). 
This is the only human input to the entire glossary acquisition process. The selection of the input seeds plays a key role in the bootstrapping process, in that the pattern and gloss extraction process will be driven by them. The chosen hypernymy relations thus have to be as topical and representative as possible for the domain of interest (e.g., (compiler, computer program) is an appropriate pair for computer science, while (byte, unit of measurement) is not, as it might cause the extraction of out-of-domain glossaries of units and measures).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "The algorithm first sets the iteration counter k to 1 (line 1) and starts the first iteration of the glossary bootstrapping process (lines 2-17), each involving steps 2-4, described below. After each iteration k, for each domain d we keep track of the set of glosses G k d acquired during that iteration. After the last iteration, we perform step (5) of gloss recovery (lines 18-21).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "Step 2. Web search and glossary extraction (lines 3-9): For each domain d, we first initialize the domain glossary for iteration k: G k d := \u2205 (line 4). Then, for each seed pair (t j , h j ) \u2208 S d , we submit the following query to a Web search engine: \"t j \" \"h j \" glossary and collect the top-ranking results for each query (line 6). 2 Each resulting page is a candidate glossary for the domain d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "We then call the extractGlossary function (line 7) which extracts terms and glosses from the retrieved pages as follows. From each candidate page, we harvest all the text snippets s starting with t j and ending with h j (e.g., \"linux -an operating system\"), i.e., s = t j . . . h j . 
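As an illustration, the snippet-harvesting step just described (collecting every s = t j . . . h j from a candidate page) can be sketched in a few lines of Python. The function name and the cap on the snippet length are our own assumptions, not details given in the paper:

```python
import re

def harvest_snippets(page, term, hypernym, max_gap=200):
    # Collect every snippet s that starts with `term` and ends with `hypernym`,
    # i.e. s = term ... hypernym. The non-greedy gap stops each snippet at the
    # closest hypernym occurrence; `max_gap` is an illustrative length cap.
    pattern = re.compile(re.escape(term) + ".{0,%d}?" % max_gap + re.escape(hypernym),
                         re.IGNORECASE | re.DOTALL)
    return [m.group(0) for m in pattern.finditer(page)]

page = "<br>linux - an operating system developed by Linus Torvalds<br>"
print(harvest_snippets(page, "linux", "operating system"))
# → ['linux - an operating system']
```

A real implementation would run this over every page returned by the Web search, once per seed pair.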
For each such text snippet s, we extract the following pattern instance:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "p L t j p M gloss s (t j ) p R ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "where:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "\u2022 p M is the longest sequence of HTML tags and non-alphanumeric characters between t j and the glossary definition (e.g., \" - \" between \"linux\" and \"an\" in the above example);", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "\u2022 gloss s (t j ) is the gloss of t j obtained by moving to the right of p M until we reach a non-formatting tag element (e.g., <br>, <p>, <td>), while ignoring formatting elements such as <b>, <i> and <u>, which are typically included within a definition sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "\u2022 p L and p R are the longest sequences of HTML tags on the left of t j and the right of gloss s (t j ), respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "For instance, given the HTML snippet \". . .

<br>linux - an operating system developed by Linus Torvalds<br> . . . \" we extract the following pattern instance: p L = \"<br>\", t j = \"linux\", p M = \" - \", gloss s (t j ) = \"an operating system developed by Linus Torvalds\", p R = \"<br>\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "Then we generalize the above pattern instance by replacing t j and gloss s (t j ) with *, obtaining:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "p L * p M * p R ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "For the above example, we obtain the following pattern:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "

<br>* - *<br>.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "We add the first sentence of the retrieved gloss gloss s (t j ) to our glossary G k d , i.e., G k d := G k d \u222a {(t j , first(gloss s (t j )))}, where first(g) returns the first sentence of gloss g. Finally, we look for additional pairs of terms/glosses in the Web page containing the snippet s by matching the page against the generalized pattern p L * p M * p R , and adding them to G k d . As a result of step (2), for each domain d \u2208 D we obtain a glossary G k d for the terms discovered at iteration k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "Step 3. Topic modeling and gloss filtering (lines 10-12): Unfortunately, not all (term, gloss) pairs in a glossary G k d will pertain to the domain d. For instance, we might end up retrieving interdisciplinary or even unrelated glossaries. In order to address this fuzziness, we model domains with a Probabilistic Topic Model (PTM) (Blei et al., 2003; Steyvers and Griffiths, 2007) . PTMs model a given text document as a mixture of topics. 
In our case topics are domains. First, we create a topic model from the domain glossaries acquired before the current iteration k; second, we use the topic model to estimate the domain assignment of each new (term, gloss) pair in our glossaries G k d , i.e., those obtained at iteration k; third, we filter out non-domain glosses.", "cite_spans": [ { "start": 332, "end": 351, "text": "(Blei et al., 2003;", "ref_id": "BIBREF1" }, { "start": 352, "end": 381, "text": "Steyvers and Griffiths, 2007)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "\u2022 Two count matrices, i.e., the word-domain matrix C W D and the gloss-domain matrix C M D , such that: C W D w,d counts the number of times w \u2208 W is assigned to domain d \u2208 D, i.e., it occurs in the glosses of domain d; C M D (t,g),d counts the number of words in g assigned to domain d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "At this point, as shown by Steyvers and Griffiths (2007) , we can estimate the probability \u03c6 (d) w for word w, and the probability \u03b8 (t,g) d for a term/gloss pair (t, g), of belonging to domain d:", "cite_spans": [ { "start": 27, "end": 56, "text": "Steyvers and Griffiths (2007)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c6 (d) w = (C W D w,d + \u03b2) / (\u2211 w'=1..|W| C W D w',d + |W| \u03b2)", "eq_num": "(1)" } ], "section": "ProToDoG", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 (t,g) d = (C M D (t,g),d + \u03b1) / (\u2211 d'=1..|D| C M D (t,g),d' + |D| \u03b1)", "eq_num": "(2)" } ], "section": "ProToDoG", "sec_num": "2" }, { "text": "where \u03b1 and \u03b2 are smoothing factors. 5 The two above probabilities represent the core of our topic model of the domain knowledge acquired up until iteration k \u2212 1.", "cite_spans": [ { "start": 37, "end": 38, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "Probabilistic modeling of iteration-k glosses (line 11): We now utilize the above topic model to estimate the probabilities in Formulas 1 and 2 for the newly acquired glosses at iteration k. To this end we define M := \u222a d\u2208D G k d as the union of the (term, gloss) pairs at iteration k and W' := \u222a d\u2208D T k d \u2229 W as the union of terms acquired at iteration k, but also occurring in W (i.e., the entire terminology up until iteration k \u2212 1). 
Then we apply Gibbs sampling (Blei et al., 2003; Phan et al., 2008) to estimate the probability of each pair (t, g) \u2208 M of pertaining to a domain d by computing:", "cite_spans": [ { "start": 157, "end": 176, "text": "(Blei et al., 2003;", "ref_id": "BIBREF1" }, { "start": 177, "end": 195, "text": "Phan et al., 2008)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 (t,g) d = (R M D (t,g),d + \u03b1) / (\u2211 d'=1..|D| R M D (t,g),d' + |D| \u03b1)", "eq_num": "(3)" } ], "section": "ProToDoG", "sec_num": "2" }, { "text": "where the gloss-domain matrix R M D is initially defined by counting random domain assignments for each word w in the bag of words of each (term, gloss) pair \u2208 M . Next, the domain assignment counts in R M D are iteratively updated using Gibbs sampling. 6 We then filter out each pair (t, g) whose probability \u03b8 (t,g) d of pertaining to its domain d is below the filtering threshold \u03b4 or is not maximum among all domains in D. Non-domain pairs are removed from G k d and stored into a set A d for possible recovery after the last iteration (see step (5)).", "cite_spans": [ { "start": 254, "end": 255, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "Step 4. Seed selection for next iteration (lines 13-15): For each domain d \u2208 D, we now select the new set of hypernymy relation seeds to be used to start the next iteration. First, for each newly-acquired term/gloss pair (t, g) \u2208 G k d , we automatically extract a candidate hypernym h from the textual gloss g. To do this we use a simple heuristic which just selects the first content term in the gloss. 7 Then we sort all the glosses in G k d by the number of seed terms found in each gloss. 
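Stepping back to the inference behind Formula 3: a compact, LDA-style Gibbs sampler over word-level domain assignments can be sketched as follows. The phi values, the gloss and the iteration count are toy assumptions; in the real system phi comes from the model estimated on the glosses of earlier iterations:

```python
import random

# Sketch of the Gibbs-sampling inference behind Formula 3: every word of a new
# (term, gloss) pair starts with a random domain assignment; assignments are
# then resampled conditioned on the word-domain probabilities phi of the model
# built from earlier iterations. phi values and the gloss are toy assumptions.
random.seed(0)
domains = [0, 1]
alpha = 0.1
phi = {"operating": [0.30, 0.01], "system": [0.25, 0.05],
       "free": [0.10, 0.10], "interest": [0.01, 0.40]}

def sample_theta(gloss_words, iters=200):
    z = [random.choice(domains) for _ in gloss_words]   # random initialization
    R = [z.count(d) for d in domains]                   # R_MD counts for this gloss
    for _ in range(iters):
        for i, w in enumerate(gloss_words):
            R[z[i]] -= 1                                # withdraw word i's assignment
            weights = [phi[w][d] * (R[d] + alpha) for d in domains]
            z[i] = random.choices(domains, weights=weights)[0]
            R[z[i]] += 1
    # Formula 3 on the final counts
    return [(R[d] + alpha) / (len(gloss_words) + len(domains) * alpha) for d in domains]

theta = sample_theta(["operating", "system", "free", "operating"])
print([round(t, 3) for t in theta])
```

With phi strongly favouring one domain for most of the words, the sampled theta should concentrate on that domain, which is what the filtering step then exploits.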
In the case of ties (i.e., glosses with the same number of seed terms), we further sort the glosses by \u03b8 (t,g) d . Finally we select the (term, hypernym) pairs corresponding to the |S d | top-ranking glosses as the new set of seeds for the next iteration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "Next, we increment k (line 16 of Algorithm 1) and if the maximum number of iterations is reached we jump to step (5). Otherwise, we go back to step (2) of our glossary bootstrapping algorithm with the new set of seeds S d .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "Step 5. Gloss recovery (line 19): After all iterations, the entire multi-domain terminology W (cf. step (3)) may contain several new terms which were not present when a given gloss g was filtered out. So, thanks to the last-iteration topic model, the gloss g might come back into play because its words are now important cues for a domain. To reassess the domain pertinence of (term, gloss) pairs in A d for each d, we just reapply the entire step (3) by setting G max+1 d := A d for each d \u2208 D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ProToDoG", "sec_num": "2" }, { "text": "For our experiments we selected 30 different domains ranging from Arts to Warfare, mostly following the domain classification of Wikipedia featured articles (full list at http://lcl.uniroma1.it/protodog). 
The set includes several technical domains, such as Chemistry, Geology, Meteorology and Mathematics, some of which are highly interdisciplinary. For instance, the Environment domain covers terms from fields such as Chemistry, Biology, Law, Politics, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domains", "sec_num": "3.1" }, { "text": "Since our evaluations required considerable human effort, in what follows we calculated all performances on a random set of 10 domains, shown in the top row of Table 1 . For each of these 10 domains we selected well-reputed glossaries on the Web as gold standards, including the Reuters glossary of finance, the Utah computing glossary and many others (full list at the above URL). We show the size of our 10 gold-standard datasets in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 160, "end": 167, "text": "Table 1", "ref_id": null }, { "start": 435, "end": 442, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Gold Standard", "sec_num": "3.2" }, { "text": "We evaluated the quality of both terms and glosses, as jointly extracted by ProToDoG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation measures", "sec_num": "3.3" }, { "text": "For each domain we calculated coverage, extra-coverage and precision of the acquired terms T . Coverage is the ratio of extracted terms in T also contained in the gold standard T\u0304 over the size of T\u0304 . Extra-coverage is calculated as the ratio of the additional extracted terms in T \\ T\u0304 over the number of gold standard terms T\u0304 . Finally, precision is the ratio of extracted terms in T deemed to be within the domain. To calculate precision we randomly sampled 5% of the retrieved terms and asked two human annotators to manually tag their domain pertinence (with adjudication in case of disagreement; \u03ba = .62, indicating substantial agreement). 
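The term metrics of Section 3.3.1 reduce to simple set ratios; a minimal sketch with made-up term sets (T stands for the extracted terms, gold for the gold-standard terminology):

```python
# Coverage and extra-coverage of Section 3.3.1 as set operations.
# T is the set of extracted terms; gold stands for the gold-standard term set.
# The example sets below are made up.
def coverage(T, gold):
    # ratio of extracted terms also contained in the gold standard
    return len(T & gold) / len(gold)

def extra_coverage(T, gold):
    # ratio of the additional extracted terms over the gold-standard size
    return len(T - gold) / len(gold)

T = {"kernel", "compiler", "linux", "byte", "thread"}
gold = {"kernel", "compiler", "process", "socket"}
print(coverage(T, gold), extra_coverage(T, gold))
# → 0.5 0.75
```

Extra-coverage can exceed 100%: it rewards going beyond the gold standard, which is why precision is estimated separately by manual annotation.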
Note that by randomly sampling on the entire set T we calculate the precision of both terms in T \u2229 T\u0304 , i.e., in the gold standard, and terms in T \\ T\u0304 , i.e., not in the gold standard, but which are not necessarily outside the domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Terms", "sec_num": "3.3.1" }, { "text": "We calculated the precision of the extracted glosses as the ratio of glosses which were both well-formed textual definitions and specific to the target domain. Precision was determined on a random sample of 5% of the acquired glosses for each domain. The annotation was made by two annotators, with \u03ba = .675, indicating substantial agreement. The annotators were provided with specific guidelines available on the ProToDoG Web site (see URL above).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Glosses", "sec_num": "3.3.2" }, { "text": "We compared ProToDoG against:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison", "sec_num": "3.4" }, { "text": "\u2022 BoW: a bag-of-words variant in which step (3) is replaced by a simple bag-of-words scoring approach which assigns a score to each term/gloss pair (t, g) \u2208 G k d as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score(g) = |Bag(g) \u2229 T 1,k\u22121 d | / |Bag(g)| .", "eq_num": "(4)" } ], "section": "Comparison", "sec_num": "3.4" }, { "text": "where Bag(g) contains all content words in g. At iteration k, we filter out those glosses whose score(g) < \u03c3, where \u03c3 is a threshold tuned in the same manner as \u03b4 (see Section 3.5). 
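The BoW filter of Formula 4 can be sketched directly; the stopword list and the example data below are illustrative assumptions, while sigma plays the role of the tuned threshold:

```python
# Sketch of the BoW baseline's filter (Formula 4): score a gloss by the
# fraction of its content words already in the terminology acquired up to
# iteration k-1. The stopword list and the example data are illustrative;
# sigma stands for the threshold tuned as described in Section 3.5.
STOPWORDS = {"a", "an", "the", "of", "by", "is", "and"}
sigma = 0.1

def bow_score(gloss, terminology):
    bag = {w for w in gloss.lower().split() if w not in STOPWORDS}
    return len(bag & terminology) / len(bag) if bag else 0.0

T_prev = {"operating", "system", "kernel", "software"}
gloss = "an operating system developed by Linus Torvalds"
score = bow_score(gloss, T_prev)
print(score, score >= sigma)
# → 0.4 True  (the gloss is kept)
```
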
This approach essentially implements GlossBoot, our previous work on domain glossary bootstrapping (De Benedictis et al., 2013).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison", "sec_num": "3.4" }, { "text": "\u2022 Wikipedia: since Wikipedia is the largest collaborative resource, covering hundreds of fields of knowledge, we devised a simple heuristic for producing multi-domain glossaries from Wikipedia, so as to compare their performance against our gold standards. For each target domain we manually selected one or more Wikipedia categories representing the domain (for instance, Category:Arts for Arts, Category:Business for Finance, etc.). Then, for each domain d, we picked out all the Wikipedia pages tagged either with the categories selected for d or their direct subcategories (e.g., Category:Creative works) or subsubcategories (e.g., Category:Genres).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison", "sec_num": "3.4" }, { "text": "From each page we extracted a (page title, gloss) pair, where the gloss was obtained by extracting the first sentence of the Wikipedia page, as done, e.g., in BabelNet (Navigli and Ponzetto, 2012) . 
Since subcategories might have multiple parents and might thus belong to multiple domains, we discarded pages assigned to more than 2 domains.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison", "sec_num": "3.4" }, { "text": "In order to choose the optimal values of the parameters of ProToDoG (number |S d | of seeds per domain, number max of iterations, and filtering threshold \u03b4) and BoW (\u03c3 threshold) we selected two extra domains, i.e., Botany and Fashion, not used in our tests, together with the corresponding gold standard Web glossaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter tuning", "sec_num": "3.5" }, { "text": "As regards the number of seeds, we defined an initial pool of 10 seeds for each of the two tuning domains and studied the average performance of 5 random sets of x seeds (from the initial pool), when x = 1, 3, 5, 7, 9. As regards the number of iterations, we explored all values between 1 and 20. Finally, for the filtering thresholds \u03b4 and \u03c3 for ProToDoG PTM and its BoW variant, we tried values of \u03b4 \u2208 {0, 0.03, 0.06, . . . , 0.6} and \u03c3 \u2208 {0, 0.05, 0.1, . . . , 1.0}, respectively. Given the high number of possible parameter value configurations, we first explored the entire search space automatically by calculating the coverage of ProToDoG PTM (and BoW) with each configuration against our tuning gold standards. Then we identified as optimal candidates those \"frontier\" configurations for which, when moving from a lower-coverage configuration, coverage reached a maximum. We then calculated the precision of each optimal candidate configuration by manually validating a 3% random sample of the resulting glossaries for the two tuning domains. 
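One possible reading of the "frontier" selection above is a running maximum over the explored configurations: keep each configuration whose coverage improves on everything seen so far. The sketch below follows this reading; both the interpretation and all parameter values and coverages are our own assumptions:

```python
# Hypothetical sketch of picking "frontier" configurations during tuning:
# keep each configuration whose coverage exceeds that of every configuration
# explored before it (a running-maximum reading of the description above).
# The parameter values and coverages below are made up.
explored = [({"delta": 0.00}, 0.42), ({"delta": 0.03}, 0.61),
            ({"delta": 0.06}, 0.58), ({"delta": 0.09}, 0.63),
            ({"delta": 0.12}, 0.55)]

def frontier(results):
    best, kept = -1.0, []
    for config, cov in results:
        if cov > best:           # coverage reaches a new maximum here
            best = cov
            kept.append(config)
    return kept

print(frontier(explored))
# → [{'delta': 0.0}, {'delta': 0.03}, {'delta': 0.09}]
```

Only these kept candidates would then be manually validated for precision, which keeps the annotation effort small.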
The optimal configuration for ProToDoG was |S d | = 5, max = 5, \u03b4 = 0.03, while for BoW it was \u03c3 = 0.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parameter tuning", "sec_num": "3.5" }, { "text": "In Figure 1 we show the performance trend over iterations for our two tuning domains when |S d | = 5 and \u03b4 = 0.03. Performance is calculated as the harmonic mean of precision and coverage of the acquired glossary after each iteration, from 1 to 20. We can see that after 5 iterations performance decreases for Botany (a highly interdisciplinary domain) due to lower precision, while it remains stable for Fashion due to the lack of newly-acquired glosses.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 11, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Parameter tuning", "sec_num": "3.5" }, { "text": "For each domain d we manually selected five seed hypernymy relations as the seed sets S d input to Algorithm 1 (see Section 3.5). The seeds were selected by the authors on the basis of just two conditions: i) the seeds should cover different aspects of the domain and, indeed, should identify the domain implicitly; ii) at least 10,000 results should be returned by the search engine when querying it with the seeds plus the glossary keyword (see line 6 of Algorithm 1). 
The seed selection was not fine-tuned (i.e., it was not adjusted to improve performance), so it might well be that better seeds would provide better results (see (Kozareva and Hovy, 2010a) ). Table 1 : Size of the gold standard and the automatically-acquired glossaries for 10 of the 30 selected domains (t: number of terms, g: number of glosses).", "cite_spans": [ { "start": 633, "end": 659, "text": "(Kozareva and Hovy, 2010a)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 660, "end": 667, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Seed Selection", "sec_num": "3.6" }, { "text": "The size of the extracted terminologies for the 10 domains after five iterations is reported in Table 1 (the output for all 30 domains is available at the above URL, cf. Section 3.1). ProToDoG PTM and its BoW variant extract thousands of terms and glosses for each domain, whereas the number of glosses obtained from Wikipedia (cf. Section 3.4) varies depending upon the domain, from thousands to hundreds of thousands. Note that there is no overlap between the glossaries extracted by ProToDoG and the set of Wikipedia articles, since the latter are not organized as glossaries.", "cite_spans": [], "ref_spans": [ { "start": 96, "end": 103, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Terms", "sec_num": "4.1" }, { "text": "In Table 2 we show the percentage results in terms of precision (P), coverage (C), and extra-coverage (X, see Section 3.3 for definitions) for ProToDoG PTM and its BoW variant and for the Wikipedia glossary. With the exception of the Food domain, ProToDoG achieves the best precision. The Wikipedia glossary has fluctuating precision values, ranging between 25% and 90%, due to the heterogeneous nature of subcategories. ProToDoG achieves the best coverage of gold standard terms on 6 of the 10 domains, with the BoW variant obtaining slightly higher coverage on 3 domains and +10% on the Food domain. 
The coverage of Wikipedia glossaries, instead, with the sole exception of Sport, is much lower, despite the use of (sub)subcategories (cf. Section 3.4). Both ProToDoG PTM and BoW achieve very high extra-coverage percentages, meaning that they are able to go substantially beyond our domain gold standards, but it is the Wikipedia glossary which achieves the highest extra-coverage values. To get a better insight into the quality of extra-coverage we calculated the percentage of named entities (i.e., encyclopedic) among the terms extracted by each of the different approaches. Comparing results across the (E) columns of Table 2, it can be seen that high percentages of the terms extracted by Wikipedia are named entities, which is in marked contrast to the 0%-1% extracted by ProToDoG. This is to be expected for an encyclopedia, whose coverage focuses on people, places, brands, etc. rather than concepts.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 2", "ref_id": "TABREF5" }, { "start": 1223, "end": 1230, "text": "Table 2", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Terms", "sec_num": "4.1" }, { "text": "To summarize, ProToDoG PTM outperforms both BoW and Wikipedia in terms of precision, while at the same time achieving both competitive coverage and extra-coverage. The Wikipedia glossary suffers from fluctuating precision values across domains and overly encyclopedic coverage of terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Terms", "sec_num": "4.1" }, { "text": "We show the results of gloss evaluation in Table 2 (last two columns) for ProToDoG PTM and BoW (we do not report the precision values for Wikipedia, as they are slightly lower than those obtained for terms). Precision ranges between 89% and 99% for ProToDoG PTM and between 82% and 97% for BoW. We observe that these results are strongly correlated with the precision of the extracted terms (cf. 
Table 2), because the retrieved glosses of domain terms are usually in-domain too, and follow a definitional style since they come from glossaries. Note, however, that the gloss precision could also be higher than term precision, thanks to many pertinent glosses being extracted for the same term (cf. Table 1).", "cite_spans": [], "ref_spans": [ { "start": 43, "end": 50, "text": "Table 2", "ref_id": "TABREF5" }, { "start": 761, "end": 769, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Glosses", "sec_num": "4.2" }, { "text": "In Table 4 we show an excerpt of the multi-domain glossary extracted by ProToDoG for the Art, Business and Sport domains.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Glosses", "sec_num": "4.2" }, { "text": "We performed a comparison with Google Define, 8 a state-of-the-art definition search service. This service takes a term query as input and outputs a list of glosses. First, we randomly sampled 100 terms from our gold standard for each domain. Next, for each domain, we manually calculated the fraction of terms for which at least one in-domain definition was provided by Google Define and ProToDoG. Table 3 shows the coverage results. In this experiment, Google Define outperforms our system on 9 of the 10 analyzed domains.
However, we note that when searching for domain-specific knowledge only, Google Define: i) needs to know the domain term to be defined in advance, while ProToDoG jointly acquires domain terms and glosses starting from just a few seeds; ii) does not discriminate between glosses pertaining to the target domain and glosses pertaining to other fields or senses, whereas ProToDoG extracts terms and glosses specific to each domain of interest.", "cite_spans": [], "ref_spans": [ { "start": 391, "end": 398, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Comparison with Google Define", "sec_num": "5.1" }, { "text": "We also compared ProToDoG with the output of a state-of-the-art taxonomy learning framework, called TaxoLearn (Navigli et al., 2011). We did this because i) TaxoLearn extracts terms and glosses from domain corpora in order to create a domain taxonomy; ii) it is one of the few systems which extracts both terms and glosses from specialized corpora; iii) the extracted glossaries are available online. 9 Therefore we compared the performance of ProToDoG on two domains for which glossaries were extracted by TaxoLearn, i.e., AI and Finance. The glossaries were harvested from large collections of scholarly articles. For ProToDoG we selected 10 seeds to cover all the fields of AI, while for the financial domain we selected the same 5 seeds used in the Business [Table 4 excerpt, Art domain] rock art: includes pictographs (designs painted on stone surfaces) and petroglyphs (designs pecked or incised on stone surfaces). impressionism: Late 19th-century French school dedicated to defining transitory visual impressions painted directly from nature, with light and color of primary importance. point: Regarding paper, a unit of thickness equating 1/1000 inch. [Business domain] hyperinflation: Extremely rapid or out of control inflation. interbank rate: The rate of interest charged by a bank on a loan to another bank.
points: Amount of discount on a mortgage loan stated as a percentage; one point equals one percent of the face amount of the loan; a discount of one point raises the net yield on the loan by one-eighth of one percent.", "cite_spans": [ { "start": 110, "end": 132, "text": "(Navigli et al., 2011)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with TaxoLearn", "sec_num": "5.2" }, { "text": "[Table 4 excerpt, Sport domain] gross score: The actual number of strokes taken by a player for a hole or round before the player's handicap is deducted. obstructing: preventing the opponent from going around a player by standing in the path of movement. points: a team statistic indicating its degree of success, calculated as follows: 2 points for a win (3 in the 1994 World Cup), 1 point for a tie, 0 points for a loss. domain of our experiments above (cf. Section 3). We show the number of extracted terms and glosses for ProToDoG and TaxoLearn in Table 5. We also show the precision values calculated on a random sample of 5% of terms and glosses. As can be clearly seen, on both domains ProToDoG extracts a number of terms and glosses which is an order of magnitude greater than those obtained by TaxoLearn, while at the same time obtaining considerably higher precision.", "cite_spans": [], "ref_spans": [ { "start": 503, "end": 510, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Sport gross score", "sec_num": null }, { "text": "Current approaches to automatic glossary acquisition suffer from two main issues: i) the poor availability of large domain-specific corpora from which terms and glosses are extracted at different times; ii) the focus on individual domains. 
ProToDoG addresses both issues by providing a joint multi-domain approach to term and glossary extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Among the approaches which extract unrestricted textual definitions from open text, Fujii and Ishikawa (2000) determine the definitional nature of text fragments by using an n-gram model, whereas Klavans and Muresan (2001) apply pattern matching techniques at the lexical level guided by cue phrases such as \"is called\" and \"is defined as\". More recently, a domain-independent supervised approach, named Word-Class Lattices (WCLs), was presented which learns lattice-based definition classifiers applied to candidate sentences containing the input terms (Navigli and Velardi, 2010). To avoid the burden of manually creating a training dataset, definitional patterns can be extracted automatically: one approach utilized Wikipedia as a huge source of definitions and simple, yet effective heuristics to automatically annotate them. Reiplinger et al. (2012) experimented with two different approaches for the acquisition of lexical-syntactic patterns. The first approach bootstraps patterns from a domain corpus and then manually refines the acquired patterns. The second approach, instead, automatically acquires definitional sentences by using more sophisticated syntactic and semantic processing. The results show high precision in both cases. However, all the above approaches need large domain corpora, the poor availability of which hampers the creation of wide-coverage glossaries for several domains. 
[Table 5: number of terms and glosses extracted, with precision P. AI: ProToDoG 4983 terms (P 83%), 5326 glosses (P 84%); TaxoLearn 427 terms (P 77%), 834 glosses (P 79%). Finance: ProToDoG 7370 terms (P 95%), 9795 glosses (P 96%); TaxoLearn 2348 terms (P 86%), 1064 glosses (P 88%).] To avoid the need to use a large corpus, domain terminologies can be obtained by using Doubly-Anchored Patterns (DAPs) which, given a (term, hypernym) pair, extract from the Web sentences matching manually-defined patterns like \" such as , and *\" (Kozareva and Hovy, 2010b). This term extraction process is further extended by harvesting new hypernyms using the corresponding inverse patterns (called DAP\u22121) like \"* such as , and \". Similarly to ProToDoG, this approach drops the requirement of a domain corpus and starts from a small number of (term, hypernym) seeds. However, while DAPs have proven useful in the induction of domain taxonomies (Kozareva and Hovy, 2010b), they cannot be applied to the glossary learning task because the extracted sentences are not formal definitions. In contrast, ProToDoG performs the novel task of multi-domain glossary acquisition from the Web by bootstrapping the extraction process with a few (term, hypernym) seeds. Bootstrapping techniques (Brin, 1998; Agichtein and Gravano, 2000; Pa\u015fca et al., 2006) have been successfully applied to several tasks, including learning semantic relations (Pantel and Pennacchiotti, 2006), extracting surface text patterns for open-domain question answering (Ravichandran and Hovy, 2002), semantic tagging (Huang and Riloff, 2010) and unsupervised Word Sense Disambiguation (Yarowsky, 1995). 
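A DAP of the form '<hypernym> such as <term>, and *' can be rendered as a simple regular expression over retrieved sentences. The helper below is our own illustrative sketch, not the implementation of Kozareva and Hovy (2010b):

```python
import re

def dap_candidates(sentence, hypernym, term):
    """Harvest new candidate terms anchored by a known (term, hypernym)
    pair, using the pattern '<hypernym> such as <term>, and *'."""
    pattern = re.compile(
        re.escape(hypernym) + r"\s+such\s+as\s+" + re.escape(term)
        + r",?\s+and\s+([\w -]+)",
        re.IGNORECASE,
    )
    return [m.group(1).strip() for m in pattern.finditer(sentence)]

dap_candidates("They studied algorithms such as quicksort, and heapsort.",
               "algorithms", "quicksort")  # -> ['heapsort']
```
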
ProToDoG synergistically integrates bootstrapping with probabilistic topic models so as to keep the glossary acquisition process within the target domains as much as possible.", "cite_spans": [ { "start": 84, "end": 109, "text": "Fujii and Ishikawa (2000)", "ref_id": "BIBREF8" }, { "start": 196, "end": 222, "text": "Klavans and Muresan (2001)", "ref_id": "BIBREF12" }, { "start": 554, "end": 581, "text": "(Navigli and Velardi, 2010)", "ref_id": "BIBREF16" }, { "start": 819, "end": 843, "text": "Reiplinger et al. (2012)", "ref_id": "BIBREF21" }, { "start": 1803, "end": 1829, "text": "(Kozareva and Hovy, 2010b)", "ref_id": "BIBREF14" }, { "start": 2223, "end": 2249, "text": "(Kozareva and Hovy, 2010b)", "ref_id": "BIBREF14" }, { "start": 2561, "end": 2573, "text": "(Brin, 1998;", "ref_id": "BIBREF2" }, { "start": 2574, "end": 2602, "text": "Agichtein and Gravano, 2000;", "ref_id": "BIBREF0" }, { "start": 2603, "end": 2622, "text": "Pa\u015fca et al., 2006)", "ref_id": "BIBREF18" }, { "start": 2710, "end": 2742, "text": "(Pantel and Pennacchiotti, 2006)", "ref_id": "BIBREF19" }, { "start": 2813, "end": 2842, "text": "(Ravichandran and Hovy, 2002)", "ref_id": "BIBREF20" }, { "start": 2862, "end": 2886, "text": "(Huang and Riloff, 2010)", "ref_id": "BIBREF11" }, { "start": 2930, "end": 2946, "text": "(Yarowsky, 1995)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this paper we have presented ProToDoG, a new, minimally-supervised approach to multi-domain glossary acquisition. Starting from a small set of hypernymy seeds which identify each domain of interest, we apply a bootstrapping approach which iteratively obtains generalized patterns from Web glossaries and then applies them to the extraction of term/gloss pairs. 
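The role of the topic model in this synergy (cf. the filtering step, line 12 of Algorithm 1, described in the notes) can be sketched as a threshold test on the probability \u03b8 that a (term, gloss) pair belongs to the target domain. A minimal sketch with toy probabilities; in the real system the probabilities are estimated with LDA, and the names and values below are ours:

```python
def filter_glosses(candidates, theta, delta=0.03):
    """Keep (term, gloss) pairs whose estimated probability of belonging
    to the target domain is at least delta; the rest are marked as
    non-domain items and discarded."""
    return [pair for pair in candidates if theta.get(pair, 0.0) >= delta]

pairs = [("offside", "a violation of the laws of the game ..."),
         ("stock", "a share in the ownership of a company ...")]
# Toy topic-model probabilities for the Sport domain:
theta = {pairs[0]: 0.41, pairs[1]: 0.01}
filter_glosses(pairs, theta)  # keeps only the Sport-related pair
```
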
To our knowledge, ProToDoG is the first approach to large-scale probabilistic glossary learning which jointly acquires thousands of terms and glosses for dozens of domains with minimal supervision. At the core of ProToDoG lies our glossary bootstrapping approach, thanks to which we can drop the requirements of existing techniques such as the ready availability of domain corpora, which often do not contain enough definitions (cf. Table 5), and the manual definition of lexical patterns, which typically extract sentence snippets instead of formal glosses.", "cite_spans": [], "ref_spans": [ { "start": 797, "end": 805, "text": "Table 5)", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "ProToDoG will be made available to the research community. Beyond the immediate usability of the output glossaries (we show an excerpt in Table 4), we also wish to show the benefit of ProToDoG in gloss-driven approaches to taxonomy learning (Navigli et al., 2011; Velardi et al., 2013) and Word Sense Disambiguation (Duan and Yates, 2010; Faralli and Navigli, 2012). The 30-domain glossaries and gold standards created for our experiments are available from http://lcl.uniroma1.it/protodog.", "cite_spans": [ { "start": 242, "end": 264, "text": "(Navigli et al., 2011;", "ref_id": "BIBREF17" }, { "start": 265, "end": 286, "text": "Velardi et al., 2013)", "ref_id": "BIBREF25" }, { "start": 340, "end": 366, "text": "Faralli and Navigli, 2012)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 138, "end": 145, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "We remark that the terminologies covered with ProToDoG are not only precise, but are also one order of magnitude greater than those covered in individual online glossaries. As future work, we plan to study the ability of ProToDoG to acquire domain glossaries at different levels of specificity (i.e., domains vs. subdomains). 
Finally, we will adapt ProToDoG to other languages, by translating the glossary keyword used in step (2), along the lines of (De Benedictis et al., 2013).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "See http://en.wikipedia.org/wiki/Portal:Contents/Glossaries", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the Google Ajax API, which returns the 64 top-ranking search results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Creating the topic model (line 10): For a given iteration k and domain d, we first define the terminology accumulated up until iteration k \u2212 1 for that domain as the set T^{1,k\u22121}_d := \u222a^{k\u22121}_{j=1} T^j_d, where T^j_d is the set of terms acquired at iteration j, i.e., T^j_d := {t : \u2203(t, g) \u2208 G^j_d}.3 Then we define: \u2022 W := \u222a_{d\u2208D} T^{1,k\u22121}_d as the entire terminology acquired up until iteration k \u2212 1 for all domains, i.e., the full set of terms independently of their domain; \u2022 M := \u222a_{d\u2208D} \u222a^{k\u22121}_{j=1} G^j_d as the multi-domain glossary acquired up until iteration k \u2212 1, i.e., the full set of pairs (term, gloss) independently of their domain; 3 For the first iteration, i.e., when k = 1, we define T^{1,0}_d := {t : \u2203(t, g) \u2208 G^1_d}, i.e., we use the terminology resulting from step (2) of the first iteration. 4 For k = 1, M := \u222a_{d\u2208D} G^1_d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "As experienced by Steyvers and Griffiths (2007), the values of \u03b1 = 50/|D| and \u03b2 = 0.01 work well with many different text collections.6 For the PTM part of ProToDoG we used the JGibbLDA implementation. Filtering out non-domain glosses (line 12): Now, for each domain d \u2208 D, for each pair (t, g) \u2208 G^k_d we have a probability \u03b8^{(t,g)}_d of belonging to d. 
We mark (t, g) as a non-domain item if \u03b8^{(t,g)}_d < \u03b4, where \u03b4 is a confidence threshold, or if \u03b8^{(t,g)}_d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "While more complex strategies could be devised, e.g., lattice-based hypernym extraction (Navigli and Velardi, 2010), we found that this heuristic works well because, even when it is not a hypernym, the first term acts as a cue word for the defined term.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Accessible from Google search with the define: keyword.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://ontolearn.org and http://lcl.uniroma1.it/taxolearn", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors gratefully acknowledge the support of the \"MultiJEDI\" ERC Starting Grant No. 259234.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Snowball: extracting relations from large plain-text collections", "authors": [ { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Gravano", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 5th ACM conference on Digital Libraries", "volume": "", "issue": "", "pages": "85--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snowball: extracting relations from large plain-text collections. 
In Proceedings of the 5th ACM conference on Digital Libraries, pages 85-94, San Antonio, Texas, USA.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Latent Dirichlet Allocation", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Blei", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Ma- chine Learning Research, 3:993-1022.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Extracting patterns and relations from the World Wide Web", "authors": [ { "first": "", "middle": [], "last": "Sergey Brin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the International Workshop on The World Wide Web and Databases", "volume": "", "issue": "", "pages": "172--183", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergey Brin. 1998. Extracting patterns and relations from the World Wide Web. In Proceedings of the International Workshop on The World Wide Web and Databases, pages 172-183, London, UK.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Soft pattern matching models for definitional question answering", "authors": [ { "first": "Hang", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kan", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" } ], "year": 2007, "venue": "ACM Transactions on Information Systems", "volume": "25", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hang Cui, Min-Yen Kan, and Tat-Seng Chua. 2007. Soft pattern matching models for definitional question an- swering. 
ACM Transactions on Information Systems, 25(2):8.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "GlossBoot: Bootstrapping Multilingual Domain Glossaries from the Web", "authors": [ { "first": "Stefano", "middle": [], "last": "Flavio De Benedictis", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Faralli", "suffix": "" }, { "first": "", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2010, "venue": "Proceedings of Human Language Technologies: The 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "627--635", "other_ids": {}, "num": null, "urls": [], "raw_text": "Flavio De Benedictis, Stefano Faralli, and Roberto Nav- igli. 2013. GlossBoot: Bootstrapping Multilingual Domain Glossaries from the Web. In Proceedings of the 51st Annual Meeting of the Association for Compu- tational Linguistics, pages 528-538, Sofia, Bulgaria. Weisi Duan and Alexander Yates. 2010. Extracting glosses to disambiguate word senses. In Proceedings of Human Language Technologies: The 11th Annual Conference of the North American Chapter of the As- sociation for Computational Linguistics, pages 627- 635, Los Angeles, CA, USA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A New Minimally-supervised Framework for Domain Word Sense Disambiguation", "authors": [ { "first": "Stefano", "middle": [], "last": "Faralli", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefano Faralli and Roberto Navigli. 2012. A New Minimally-supervised Framework for Domain Word Sense Disambiguation. 
In Proceedings of the 2012", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1411--1422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joint Conference on Empirical Methods in Natu- ral Language Processing and Computational Natural Language Learning, pages 1411-1422, Jeju, Korea.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A Java Framework for Multilingual Definition and Hypernym Extraction", "authors": [ { "first": "Stefano", "middle": [], "last": "Faralli", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, System Demonstrations", "volume": "", "issue": "", "pages": "103--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefano Faralli and Roberto Navigli. 2013. A Java Framework for Multilingual Definition and Hypernym Extraction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, Sys- tem Demonstrations, pages 103-108, Sofia, Bulgaria.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Utilizing the World Wide Web as an encyclopedia: extracting term descriptions from semi-structured texts", "authors": [ { "first": "Atsushi", "middle": [], "last": "Fujii", "suffix": "" }, { "first": "Tetsuya", "middle": [], "last": "Ishikawa", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 38th Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "488--495", "other_ids": {}, "num": null, "urls": [], "raw_text": "Atsushi Fujii and Tetsuya Ishikawa. 2000. Utilizing the World Wide Web as an encyclopedia: extracting term descriptions from semi-structured texts. 
In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 488-495, Hong Kong.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automatic acquisition of hyponyms from large text corpora", "authors": [ { "first": "A", "middle": [], "last": "Marti", "suffix": "" }, { "first": "", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 15th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "539--545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 15th International Conference on Computational Linguistics, pages 539-545, Nantes, France.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Collaboratively built semi-structured content and Artificial Intelligence: The story so far", "authors": [ { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2013, "venue": "Artificial Intelligence", "volume": "194", "issue": "", "pages": "2--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eduard H. Hovy, Roberto Navigli, and Simone Paolo Ponzetto. 2013. Collaboratively built semi-structured content and Artificial Intelligence: The story so far. 
Artificial Intelligence, 194:2-27.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Inducing domain-specific semantic class taggers from (almost) nothing", "authors": [ { "first": "Ruihong", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "275--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruihong Huang and Ellen Riloff. 2010. Inducing domain-specific semantic class taggers from (almost) nothing. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 275-285, Uppsala, Sweden.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Evaluation of the DEFINDER system for fully automatic glossary construction", "authors": [ { "first": "Judith", "middle": [], "last": "Klavans", "suffix": "" }, { "first": "Smaranda", "middle": [], "last": "Muresan", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the American Medical Informatics Association (AMIA) Symposium", "volume": "", "issue": "", "pages": "324--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judith Klavans and Smaranda Muresan. 2001. Evalu- ation of the DEFINDER system for fully automatic glossary construction. 
In Proceedings of the American Medical Informatics Association (AMIA) Symposium, pages 324-328, Washington, D.C., USA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Not all seeds are equal: Measuring the quality of text mining seeds", "authors": [ { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2010, "venue": "Proceedings of Human Language Technologies: The 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "618--626", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zornitsa Kozareva and Eduard H. Hovy. 2010a. Not all seeds are equal: Measuring the quality of text min- ing seeds. In Proceedings of Human Language Tech- nologies: The 11th Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics, pages 618-626, Los Angeles, Cali- fornia, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A semisupervised method to learn and construct taxonomies using the web", "authors": [ { "first": "Zornitsa", "middle": [], "last": "Kozareva", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2010, "venue": "Proceedings of Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1110--1118", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zornitsa Kozareva and Eduard H. Hovy. 2010b. A semi- supervised method to learn and construct taxonomies using the web. 
In Proceedings of Empirical Methods in Natural Language Processing, pages 1110-1118, Cambridge, MA, USA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Ba-belNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2012, "venue": "Artificial Intelligence", "volume": "193", "issue": "", "pages": "217--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli and Simone Paolo Ponzetto. 2012. Ba- belNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217-250.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning Word-Class Lattices for definition and hypernym extraction", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Paola", "middle": [], "last": "Velardi", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1318--1327", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli and Paola Velardi. 2010. Learning Word-Class Lattices for definition and hypernym ex- traction. 
In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1318-1327, Uppsala, Sweden.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A graph-based algorithm for inducing lexical taxonomies from scratch", "authors": [ { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Paola", "middle": [], "last": "Velardi", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Faralli", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 22th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1872--1877", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Navigli, Paola Velardi, and Stefano Faralli. 2011. A graph-based algorithm for inducing lexi- cal taxonomies from scratch. In Proceedings of the 22th International Joint Conference on Artificial Intel- ligence, pages 1872-1877, Barcelona, Spain.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Names and similarities on the Web: Fact extraction in the fast lane", "authors": [ { "first": "Marius", "middle": [], "last": "Pa\u015fca", "suffix": "" }, { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Bigham", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "809--816", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marius Pa\u015fca, Dekang Lin, Jeffrey Bigham, Andrei Lif- chits, and Alpa Jain. 2006. Names and similarities on the Web: Fact extraction in the fast lane. 
In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 809-816, Sydney, Australia.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning to classify short and sparse text & web with hidden topics from large-scale data collections", "authors": [ { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL)", "volume": "", "issue": "", "pages": "91--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging Generic Patterns for Automatically Harvesting Semantic Relations. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL), pages 113-120, Sydney, Australia. Xuan-Hieu Phan, Le-Minh Nguyen, and Susumu Horiguchi. 2008. Learning to classify short and sparse text & web with hidden topics from large-scale data collections. In Proceedings of the 17th international conference on World Wide Web, WWW '08, pages 91-100, New York, NY, USA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Learning surface text patterns for a question answering system", "authors": [ { "first": "Deepak", "middle": [], "last": "Ravichandran", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "41--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deepak Ravichandran and Eduard Hovy. 2002. 
Learning surface text patterns for a question answering system. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 41-47, Philadelphia, PA, USA.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Extracting glossary sentences from scholarly articles: A comparative evaluation of pattern bootstrapping and deep analysis", "authors": [ { "first": "Melanie", "middle": [], "last": "Reiplinger", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Sch\u00e4fer", "suffix": "" }, { "first": "Magdalena", "middle": [], "last": "Wolska", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries", "volume": "", "issue": "", "pages": "55--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Melanie Reiplinger, Ulrich Sch\u00e4fer, and Magdalena Wolska. 2012. Extracting glossary sentences from scholarly articles: A comparative evaluation of pattern bootstrapping and deep analysis. In Proceedings of the ACL-2012 Special Workshop on Rediscovering 50 Years of Discoveries, pages 55-65, Jeju Island, Korea.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Identifying definitions in text collections for question answering", "authors": [ { "first": "Horacio", "middle": [], "last": "Saggion", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "1927--1930", "other_ids": {}, "num": null, "urls": [], "raw_text": "Horacio Saggion. 2004. Identifying definitions in text collections for question answering.
In Proceedings of the Fourth International Conference on Language Resources and Evaluation, pages 1927-1930, Lisbon, Portugal.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Probabilistic Topic Models", "authors": [ { "first": "Mark", "middle": [], "last": "Steyvers", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Griffiths", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Steyvers and Tom Griffiths, 2007. Probabilistic Topic Models. Lawrence Erlbaum Associates.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Mining the web to create specialized glossaries", "authors": [ { "first": "Paola", "middle": [], "last": "Velardi", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" }, { "first": "Pierluigi D'", "middle": [], "last": "Amadio", "suffix": "" } ], "year": 2008, "venue": "IEEE Intelligent Systems", "volume": "23", "issue": "5", "pages": "18--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paola Velardi, Roberto Navigli, and Pierluigi D'Amadio. 2008. Mining the web to create specialized glossaries. IEEE Intelligent Systems, 23(5):18-25.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "OntoLearn Reloaded: A graph-based algorithm for taxonomy induction", "authors": [ { "first": "Paola", "middle": [], "last": "Velardi", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Faralli", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "3", "pages": "665--707", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paola Velardi, Stefano Faralli, and Roberto Navigli. 2013. OntoLearn Reloaded: A graph-based algorithm for taxonomy induction. 
Computational Linguistics, 39(3):665-707.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Unsupervised Word Sense Disambiguation rivaling supervised methods", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky. 1995. Unsupervised Word Sense Disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 189-196, Cambridge, MA, USA.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Harmonic mean of precision and coverage for Botany and Fashion (tuning domains) over 20 iterations (|S d |=5, \u03b4=0.03).", "num": null, "type_str": "figure" }, "TABREF0": { "type_str": "table", "html": null, "content": "
d \u2208 D
Output: a multi-domain glossary G
1: k \u2190 1
2: repeat
3:
", "num": null, "text": "Algorithm 1 ProToDoG. Input: the set of domains D, a set S d of hypernymy seeds for each domain" }, "TABREF2": { "type_str": "table", "html": null, "content": "", "num": null, "text": "As a result, we obtain an updated glossary G max+1 d which contains all the recovered glosses. (We use the JGibbLDA library, a Java implementation of Latent Dirichlet Allocation (LDA) using Gibbs Sampling for parameter estimation and inference, available at: http://jgibblda.sourceforge.net/.) Final output: For each domain d \u2208 D the final output of ProToDoG is a domain glossary G d := \u222a j=1,...,max+1 G j d . Finally, the algorithm aggregates all glossaries G d into a multi-domain glossary G (line 22)." }, "TABREF3": { "type_str": "table", "html": null, "content": "
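The aggregation described above (each domain glossary G_d is the union of the per-iteration glossaries G^j_d, and the G_d are then merged into the multi-domain glossary G) can be sketched as follows. This is an illustrative sketch; the function and variable names are ours, not taken from the authors' implementation.

```python
# Sketch of ProToDoG's final aggregation step (illustrative names,
# not the authors' code). A glossary is modeled as a dict term -> gloss.

def aggregate_domain_glossary(iteration_glossaries):
    """Union of the per-iteration glossaries G^j_d, j = 1, ..., max+1."""
    domain_glossary = {}
    for glossary in iteration_glossaries:
        for term, gloss in glossary.items():
            # Keep the gloss from the earliest iteration that found the term.
            domain_glossary.setdefault(term, gloss)
    return domain_glossary


def aggregate_multi_domain(per_domain_iterations):
    """Merge all domain glossaries G_d into one multi-domain glossary G."""
    return {
        domain: aggregate_domain_glossary(glossaries)
        for domain, glossaries in per_domain_iterations.items()
    }


# Toy example with two iterations for a single domain:
iters = {"computing": [
    {"taxonomy learning": "the task of automatically inducing a taxonomy ..."},
    {"distant supervision": "a weakly supervised learning paradigm ..."},
]}
G = aggregate_multi_domain(iters)
```

The `setdefault` call encodes one possible policy for conflicting glosses (first extraction wins); the paper does not specify how duplicates across iterations are resolved.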
          Art     Business  Chemistry  Computing  Environment  Food   Law    Music  Physics  Sport
Gold t/g  394     1777      164        421        713          946    180    218    315      146
PTM  t    4253    7370      2493       3412       3009         1526   1836   1647   3847     1696
PTM  g    7386    9795      3841       4186       3552         2175   4141   2729   5197     2938
BoW  t    4012    7639      1174       3127       3644         1827   1773   1166   4471     1990
BoW  g    5923    8999      1414       3662       4334         2601   4024   1249   6956     3425
Wiki t,g  107.1k  48.4k     8137       32.0k      23.6k        5698   13.5k  84.1k  33.8k    267.5k
", "num": null, "text": "However, such a study is beyond the scope of this paper." }, "TABREF5": { "type_str": "table", "html": null, "content": "
Precision (P), coverage (C), extra-coverage (X), and encyclopedic (E) percentages after 5 iterations.
               Art  Business  Chemistry  Computing  Environment  Food  Law  Music  Physics  Sport
Google Define  76   80        93         86         88           91    96   96     98       84
ProToDoG       27   41        81         40         37           19    85   98     47       27
", "num": null, "text": "" }, "TABREF6": { "type_str": "table", "html": null, "content": "", "num": null, "text": "Number of domain glosses (from a random sample of 100 gold standard terms per domain) retrieved using Google Define and ProToDoG." }, "TABREF7": { "type_str": "table", "html": null, "content": "
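The evaluation above reports precision and coverage, and the figure for the tuning domains plots their harmonic mean. For reference, this F1-style combination can be computed as in the sketch below (a generic illustration, not the authors' evaluation code):

```python
def harmonic_mean(precision, coverage):
    """Harmonic mean of precision and coverage (F1-style); both in [0, 1]."""
    if precision + coverage == 0:
        return 0.0
    return 2 * precision * coverage / (precision + coverage)


# A high-precision but low-coverage glossary is penalized accordingly:
score = harmonic_mean(0.9, 0.4)  # ~0.554
```

As with F1, the harmonic mean rewards configurations that balance the two measures rather than maximizing one at the expense of the other.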
", "num": null, "text": "An excerpt of the resulting multi-domain glossary obtained with ProToDoG." }, "TABREF8": { "type_str": "table", "html": null, "content": "
", "num": null, "text": "Number and precision of terms and glosses extracted by ProToDoG and TaxoLearn in the Artificial Intelligence (AI) and Finance domains." } } } }