{ "paper_id": "J18-3005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:20:08.275445Z" }, "title": "Using Semantics for Granularities of Tokenization", "authors": [ { "first": "Martin", "middle": [], "last": "Riedl", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart Institut f\u00fcr maschinelle Sprachverarbeitung", "location": {} }, "email": "martin.riedl@ims.uni-stuttgart.de" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "", "affiliation": { "laboratory": "Language Technology Group", "institution": "University of Hamburg", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Depending on downstream applications, it is advisable to extend the notion of tokenization from low-level character-based token boundary detection to identification of meaningful and useful language units. This entails both identifying units composed of several single words that form a multiword expression (MWE), as well as splitting single-word compounds into their meaningful parts. In this article, we introduce unsupervised and knowledge-free methods for these two tasks. The main novelty of our research is based on the fact that methods are primarily based on distributional similarity, of which we use two flavors: a sparse countbased and a dense neural-based distributional semantic model. First, we introduce DRUID, which is a method for detecting MWEs. The evaluation on MWE-annotated data sets in two languages and newly extracted evaluation data sets for 32 languages shows that DRUID compares favorably over previous methods not utilizing distributional information. Second, we present SECOS, an algorithm for decompounding close compounds. In an evaluation of four dedicated decompounding data sets across four languages and on data sets extracted from Wiktionary for 14 languages, we demonstrate the superiority of our approach over unsupervised baselines, sometimes even matching the performance of previous language-specific and supervised methods. In a final experiment, we show how both decompounding and MWE information can be used in information retrieval. Here, we obtain the best results when combining word information with MWEs and the compound parts in a bag-of-words retrieval setup. Overall, our methodology paves the way to automatic detection of lexical units beyond standard tokenization techniques without language-specific preprocessing steps such as POS tagging.", "pdf_parse": { "paper_id": "J18-3005", "_pdf_hash": "", "abstract": [ { "text": "Depending on downstream applications, it is advisable to extend the notion of tokenization from low-level character-based token boundary detection to identification of meaningful and useful language units. This entails both identifying units composed of several single words that form a multiword expression (MWE), as well as splitting single-word compounds into their meaningful parts. In this article, we introduce unsupervised and knowledge-free methods for these two tasks. The main novelty of our research is based on the fact that methods are primarily based on distributional similarity, of which we use two flavors: a sparse countbased and a dense neural-based distributional semantic model. First, we introduce DRUID, which is a method for detecting MWEs. 
The evaluation on MWE-annotated data sets in two languages and newly extracted evaluation data sets for 32 languages shows that DRUID compares favorably with previous methods not utilizing distributional information. Second, we present SECOS, an algorithm for decompounding close compounds. In an evaluation of four dedicated decompounding data sets across four languages and on data sets extracted from Wiktionary for 14 languages, we demonstrate the superiority of our approach over unsupervised baselines, sometimes even matching the performance of previous language-specific and supervised methods. In a final experiment, we show how both decompounding and MWE information can be used in information retrieval. Here, we obtain the best results when combining word information with MWEs and the compound parts in a bag-of-words retrieval setup. Overall, our methodology paves the way to automatic detection of lexical units beyond standard tokenization techniques without language-specific preprocessing steps such as POS tagging.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "If we take seriously Ron Kaplan's motivation for tokenization that the \"stream of characters in a natural language text must be broken up into distinct meaningful units\" (Kaplan 2005) to enable natural language processing beyond the character level, then tokenization is more than the low-level preprocessing task of treating punctuation, hyphenation, and enclitics. Rather, tokenization should also aspire to produce meaningful units, or, as Webster and Kit (1992) define it, tokens should be linguistically significant and methodologically useful. In practice, however, tokenizers are not concerned with meaning or significance: placed right at the beginning of any NLP pipeline and usually implemented in a rule-based fashion, they are merely workhorses to enable higher levels of processing, which includes a reasonable split of the input into word tokens and some normalization to cater to the sensitivity of subsequent processing components. Although it is clear that the methodological utility of a specific tokenization depends on the overall task, it seems much more practical to fix the tokenization at the beginning of the text ingestion process and handle task-specific adjustments later. The work presented in this article operationalizes lexical semantics in order to identify meaningful units. Assuming that low-level processing has already been performed, we devise a method that can identify multiword units, namely, word n-grams that have a non-compositional meaning, as well as a method that can split close compound words into their parts. Both methods are primarily based on distributional semantics (Harris 1951): By operationalizing language unit similarity in various ways, we are able to inform the tokenization process with semantic information, enabling us to yield meaningful units, which are shown to be linguistically valid and methodologically useful in a series of suitable evaluations. 
Both methods do not make use of language-specific processing and thus can be applied directly after low-level tokenization without assuming the existence of, for example, a part-of-speech tagger.", "cite_spans": [ { "start": 170, "end": 183, "text": "(Kaplan 2005)", "ref_id": "BIBREF20" }, { "start": 448, "end": 470, "text": "Webster and Kit (1992)", "ref_id": "BIBREF52" }, { "start": 1632, "end": 1645, "text": "(Harris 1951)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Depending on the task, the low-level \"standard\" tokenization can be too fine-grained, as from a semiotic perspective multiword expressions (MWEs) refer to a single concept. On the flip side, tokenization can be too coarse-grained, as close compound words are detected as single words, whereas they are formed by the concatenation of at least two stems and can be considered as MWEs without white spaces. In this article, we will describe two different approaches to represent (nominal) concepts in a similar fashion. This results in an extended tokenization, similar to the work by Hassler and Fliedl (2006). However, they extend their tokenization solely by bracketing phrases and MWEs and do not split text into more fine-grained units. Trim (2013) differentiates between low-level and high-level tokenization. Whereas high-level tokenization concentrates on the identification of MWEs and phrases, low-level tokenization mostly splits words that are connected by apostrophes or hyphens. Our notion of coarse-grained tokenization is similar to high-level tokenization. However, our fine-grained tokenization goes one step beyond low-level tokenization, as we split close compound words.", "cite_spans": [ { "start": 580, "end": 605, "text": "Hassler and Fliedl (2006)", "ref_id": "BIBREF15" }, { "start": 736, "end": 747, "text": "Trim (2013)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "First, we describe a method for detecting MWEs. For defining MWEs, we follow the definition by Sag et al. (2001, page 2), which states that MWEs are \"idiosyncratic interpretations that cross word boundaries (or spaces).\" Furthermore, MWEs can be made up of compounds, phrases, or sentences. The detection of named entities (e.g., names, locations, companies, or concepts) is often considered a task of its own, which aims at identifying a subset of MWEs and is relevant for information extraction (e.g., relation extraction or event extraction), but also for information retrieval or automatic speech recognition systems.", "cite_spans": [ { "start": 95, "end": 117, "text": "Sag et al. (2001, page", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "As a second contribution, we present a method for splitting close compounds. Examples of such close compounds include dishcloth (English), pancake (English), Hefeweizen (German for wheat beer), bijenzwerm (Dutch for swarm of bees), or hiilikuitu (Finnish for carbon fibre). Similar to MWEs, compounds are created by combining existing words, although in close compounds the stems are not separated by white space. 
Detecting the single stems, called decompounding, has shown impact in several natural language processing (NLP) applications such as automatic speech recognition (Adda-Decker and Adda 2000), machine translation (Koehn and Knight 2003), or information retrieval (IR) (Monz and de Rijke 2001), and is perceived as a crucial component for the processing of languages that are productive with respect to this phenomenon.", "cite_spans": [ { "start": 634, "end": 657, "text": "(Koehn and Knight 2003)", "ref_id": "BIBREF23" }, { "start": 690, "end": 714, "text": "(Monz and de Rijke 2001)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "For both the detection of MWEs and the decompounding of words, most existing approaches rely either on supervised methods or use language-dependent part-of-speech (POS) information. In this work, we present two knowledge-free and unsupervised (and therefore language-independent) methods that rely on information gained from distributional semantic models that are computed using large unannotated corpora, namely, word2vec (Mikolov et al. 2013) and JoBimText (Biemann and Riedl 2013). First, we describe these methods and highlight how their information can help for both tokenization tasks. Then, we present results for the identification of MWEs and afterwards show the performance of the method for decompounding. For both tasks, we first show the performance using manually annotated gold data before we present evaluations for multiple languages using automatically extracted data sets from Wikipedia and Wiktionary. Lastly, we demonstrate how both flavors of such an extended tokenization can be used in an IR setting. The article is partly based on previous work (Riedl and Biemann 2015, 2016) that has been substantially extended by adding experiments for several languages and by showing the advantage of combining the methods in an information retrieval evaluation.", "cite_spans": [ { "start": 421, "end": 442, "text": "(Mikolov et al. 2013)", "ref_id": "BIBREF28" }, { "start": 457, "end": 481, "text": "(Biemann and Riedl 2013)", "ref_id": "BIBREF8" }, { "start": 1069, "end": 1079, "text": "(Riedl and", "ref_id": "BIBREF37" }, { "start": 1080, "end": 1098, "text": "Biemann 2015, 2016", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The article is organized as follows. Section 2 describes the distributional semantic models that are used to compute similarities between lexical units, which are the main source of information for both fine- and coarse-grained tokenization. In Section 3, we describe how multiword expressions can be detected and evaluate our methodology. In Section 4, we describe the workings and the evaluation of our method for compound splitting. How to use both methods for information retrieval is shown in Section 5. In Section 6, we present the related work. Afterwards, we highlight the main findings in the conclusion in Section 7 and give an overview of future work in Section 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Both methods described in this article have in common that they rely on distributional semantics, which is based on the distributional hypothesis conceived by Harris (1951). This hypothesis states that words that occur in similar contexts tend to have similar meaning. 
Many methods have implemented this assumption in order to compute word similarities using various contexts (e.g., neighboring words, words with syntactic dependencies) (Hindle 1990; Grefenstette 1994; Lin 1998). Usually, words are not only similar to synonyms but also to hypernyms, antonyms, or related terms. For the task of splitting words, the similarity to hypernyms is interesting, as compounds are often similar to more general terms, which are stems of the compound. For example, the word Hefeweizenbier [yeast wheat beer] is most similar to the terms Bier [beer] and Weizenbier [wheat beer], which are words that are nested in the more specific word. Such information is beneficial when it comes to the task of splitting compounds, as we shall see subsequently. When computing similarities not only for words but also for word n-grams, we observe that concepts that are composed of several word units are often similar to single-word terms. For example, the n-gram hot dog is most similar to food-related terms like hamburger or sandwich. As shown in the remainder of this article, the information from distributional semantics is beneficial for the task of identifying MWEs but also for the task of compound splitting.", "cite_spans": [ { "start": 168, "end": 181, "text": "Harris (1951)", "ref_id": "BIBREF14" }, { "start": 448, "end": 461, "text": "(Hindle 1990;", "ref_id": "BIBREF18" }, { "start": 462, "end": 480, "text": "Grefenstette 1994;", "ref_id": "BIBREF13" }, { "start": 481, "end": 490, "text": "Lin 1998)", "ref_id": "BIBREF25" }, { "start": 845, "end": 851, "text": "[beer]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Using Distributional Semantics for Fine- and Coarse-Grained Tokenization", "sec_num": "2." }, { "text": "In this work, we compute semantic similarities using the dense vector-based CBOW model from word2vec (Mikolov et al. 2013) and a symbolic graph-based approach called JoBimText (Biemann and Riedl 2013). In order to use both models within the word splitting and the word merging task, we transform them into a so-called distributional thesaurus (DT) as defined by Lin (1997). A DT can be considered as a dictionary where for each word the top n most similar words are listed, ordered by their similarity score.", "cite_spans": [ { "start": 101, "end": 121, "text": "(Mikolov et al. 2013", "ref_id": "BIBREF28" }, { "start": 177, "end": 201, "text": "(Biemann and Riedl 2013)", "ref_id": "BIBREF8" }, { "start": 362, "end": 372, "text": "Lin (1997)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Using Distributional Semantics for Fine- and Coarse-Grained Tokenization", "sec_num": "2." }, { "text": "The CBOW model is learned during the task of predicting a word from its context words. For this, the input layer is defined by the contexts of a word. As the output layer we use the center word. The prediction is performed using a single hidden layer that represents the semantic model with the specified number of dimensions. For the computation of word2vec models, we use 500 dimensions, 5 negative samples, and a word window of 5. Because the implementation by Mikolov et al. (2013) does not support the computation of similarities between all n-grams within a corpus, we use the word2vecf implementation by Levy and Goldberg (2014). This implementation allows specifying terms and contexts directly and features the functionality to retrieve the most relevant contexts for a word. 
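To make this setup concrete, the following sketch trains a CBOW model with the hyperparameters stated above and derives a DT by nearest-neighbor lookup. It is a minimal illustration using the gensim library rather than the original word2vec/word2vecf implementations; the toy corpus and the min_count setting are placeholder assumptions.

```python
# Minimal sketch (gensim instead of the original word2vec/word2vecf code):
# train a CBOW model and derive a distributional thesaurus (DT) with the
# 200 most similar terms per vocabulary entry.
from gensim.models import Word2Vec

# Toy placeholder corpus: one tokenized sentence per list. In the MWE setting,
# candidate n-grams would additionally be inserted as single tokens.
sentences = [
    ["red", "blood", "cells", "were", "counted"],
    ["white", "blood", "cells", "fight", "infection"],
]

model = Word2Vec(
    sentences,
    vector_size=500,  # 500 dimensions, as stated above
    window=5,         # word window of 5
    negative=5,       # 5 negative samples
    sg=0,             # CBOW architecture (sg=1 would select skip-gram)
    min_count=1,      # placeholder; the article uses a frequency threshold of 10
)

# DT: for each term, the most similar terms ordered by cosine similarity.
dt = {term: model.wv.most_similar(term, topn=200) for term in model.wv.index_to_key}
```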
In order to extract a DT from models computed with word2vec and word2vecf, we compute the cosine similarity between all terms and extract, for each term, the 200 most similar terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Distributional Semantics for Fine- and Coarse-Grained Tokenization", "sec_num": "2." }, { "text": "As opposed to the mainstream of using dense vector representations, the approach by Biemann and Riedl (2013), called JoBimText, uses a sparse count-based context representation that nevertheless scales to arbitrary amounts of data (Riedl and Biemann 2013). Furthermore, this approach has achieved results competitive with dense vector space models like CBOW and SKIP-gram (Mikolov et al. 2013) in word similarity evaluations (Riedl 2016; Riedl and Biemann 2017). To keep the preprocessing language-independent, we keep only words in a context window for both approaches, as opposed to, for example, dependency-parsing-based contexts. For the task of MWE identification we represent not only single words but also n-grams using single-word contexts. For the task of decompounding, only unigrams are considered.", "cite_spans": [ { "start": 84, "end": 108, "text": "Biemann and Riedl (2013)", "ref_id": "BIBREF8" }, { "start": 232, "end": 256, "text": "(Riedl and Biemann 2013)", "ref_id": "BIBREF38" }, { "start": 372, "end": 393, "text": "(Mikolov et al. 2013)", "ref_id": "BIBREF28" }, { "start": 425, "end": 437, "text": "(Riedl 2016;", "ref_id": "BIBREF37" }, { "start": 438, "end": 461, "text": "Riedl and Biemann 2017)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Using Distributional Semantics for Fine- and Coarse-Grained Tokenization", "sec_num": "2." }, { "text": "Based on the frequencies of words/n-grams and contexts, we calculate the lexicographer's mutual information (LMI) significance score (Evert 2005) between terms and features and remove all context features that co-occur with more than 1,000 terms, as these features tend to be too general. In the next step we reduce the number of context features per term by keeping, for each term, only the 1,000 context features with the highest LMI score. The similarity score is defined as the number of shared features of two terms. Such an overlap-based similarity measure is proportional to the Jaccard similarity measure, although we do not conduct any normalization. After computing the feature overlap between all pairs of terms, we retain the 200 most similar terms for each word n-gram. In line with Lin (1997), we refer to such a resource as a DT.", "cite_spans": [ { "start": 133, "end": 145, "text": "(Evert 2005)", "ref_id": "BIBREF12" }, { "start": 790, "end": 800, "text": "Lin (1997)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Using Distributional Semantics for Fine- and Coarse-Grained Tokenization", "sec_num": "2." }, { "text": "The detection of multiword units is one of the extensions needed for coarse-grained tokenization. As summarized concisely by Blanc, Constant, and Watrin (2007, page 1), \"language is full of multiword units.\" By inspecting dictionaries, we can highlight the importance of MWEs. For example, in WordNet, 41.41% of all words are MWEs, as shown in Table 1. Whereas more than 50% of all nouns are MWEs, only about 26% of all verbs are MWEs. 
As the majority of all MWEs found in WordNet are nouns (93.73%), we first focus the development of the method on the detection of terms of this word class in Section 3.6 and show the performance on all word classes in subsequent sections.", "cite_spans": [ { "start": 125, "end": 167, "text": "Blanc, Constant, and Watrin (2007, page 1)", "ref_id": null } ], "ref_spans": [ { "start": 341, "end": 348, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Merging Words: Multiword Identification", "sec_num": "3." }, { "text": "Although it seems intuitive to treat certain sequences of tokens as (single) terms, there is still considerable controversy about the definition of what exactly constitutes an MWE. Sag et al. (2001) pinpoint the need for an appropriate definition of MWEs. For this, they classify a range of syntactic formations that could form MWEs and define MWEs as being non-compositional with respect to the meaning of their parts. Although the exact requirements for MWEs are bound to specific tasks (such as parsing, keyword extraction, etc.), we operationalize the notion of non-compositionality by using distributional semantics and introduce a measure that works well for a range of task-based MWE definitions.", "cite_spans": [ { "start": 180, "end": 197, "text": "Sag et al. (2001)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Merging Words: Multiword Identification", "sec_num": "3." }, { "text": "Reviewing previously introduced MWE ranking approaches (cf. Section 6.1), most methods use the following mechanisms to determine multiwordness: POS tags, word/multiword frequency, and significance of co-occurrence of the parts. In contrast, our method uses an additional mechanism, which performs a ranking based on distributional semantics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Merging Words: Multiword Identification", "sec_num": "3." }, { "text": "Distributional semantics has already been used for MWE identification, but mainly to discriminate between compositional and non-compositional MWEs (Schone and Jurafsky 2001; Hermann and Blunsom 2014; Salehi, Cook, and Baldwin 2014). Here we introduce a concept that describes the multiwordness of a term by its uniqueness. This score measures the likelihood that a term in context can be replaced with a single word. This measure is motivated by the semiotic consideration that, due to parsimony, concepts are often expressed as single words. Furthermore, we deploy a context-aware punishment term, called incompleteness, which degrades the score of candidates that seem incomplete regarding their contexts. For example, the term red blood can be called incomplete, as the following word is most likely the word cell. Both concepts are combined into a single score we call DRUID (DistRibutional Uniqueness and Incompleteness Degree), which is calculated based on a DT. In the following, we show the performance of this method for French and English and examine the effect of corpus size on MWE extraction. This section extends work presented in Riedl and Biemann (2015). In addition, we demonstrate the language independence of the method by evaluating it on 32 languages and give a more detailed data analysis. We want to emphasize that our method works in an unsupervised fashion and is not restricted to certain POS classes. However, most of the competitive methods require POS filtering as a pre-processing step in order to compute their statistics. 
Hence, these methods are mostly evaluated based on noun compounds. For reasons of comparability, the first evaluation, which uses POS filtering (see Section 3.6), is restricted to noun compounds. However, the remaining experiments in Sections 3.7 and 3.8 are not restricted to any particular POS.", "cite_spans": [ { "start": 147, "end": 173, "text": "(Schone and Jurafsky 2001;", "ref_id": "BIBREF46" }, { "start": 174, "end": 199, "text": "Hermann and Blunsom 2014;", "ref_id": "BIBREF17" }, { "start": 200, "end": 231, "text": "Salehi, Cook, and Baldwin 2014)", "ref_id": "BIBREF43" }, { "start": 1141, "end": 1165, "text": "Riedl and Biemann (2015)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Merging Words: Multiword Identification", "sec_num": "3." }, { "text": "First, we describe the new method and show its performance on different data sets; we briefly describe the baseline and previous approaches in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Merging Words: Multiword Identification", "sec_num": "3." }, { "text": "In the first setting, we evaluate our method by comparing the MWE rankings to multiword lists that have been annotated in corpora. In order to show the performance of the method, we introduce an upper bound and two baseline methods and give a brief description of the competitors. Most of these methods rely on lists of pre-filtered MWE candidate terms T. Usually these are extracted by patterns defined over POS sequences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and Previous Approaches", "sec_num": "3.1" }, { "text": "3.1.1 Upper Bound. As an upper bound, we consider a perfect ranking, where we rank all positive candidates before all negative ones. Within the data set, we only have binary labels for true and false MWEs. Thus, any ordering within the block of MWEs labeled as true (or, respectively, within the block labeled as false) does not change the upper bound.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines and Previous Approaches", "sec_num": "3.1" }, { "text": "The ratio between true candidates and all candidates serves as a lower baseline, which is also called baseline precision (Evert 2008). The second baseline is the frequency baseline, which ranks candidate terms t \u2208 T according to their frequency freq(t). Here, we hypothesize that words with high frequency are multiword expressions.", "cite_spans": [ { "start": 121, "end": 133, "text": "(Evert 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "3.1.3 C-value/NC-value. Frantzi, Ananiadou, and Tsujii (1998) developed the commonly used C-value (see Equation (1)). This value is composed of two factors. As the first factor, they use the logarithm of the term length in words in order to favor longer MWEs. 
The second factor is the frequency of the term, reduced by the average frequency of all candidate terms that nest the term t (i.e., the terms of which t is a substring, which we denote as T_t).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "cv(t) = log_2(|t|) \u2022 ( freq(t) \u2212 (1/|T_t|) \u2211_{b \u2208 T_t} freq(b) )", "eq_num": "(1)" } ], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "An extension of the C-value was proposed by Frantzi, Ananiadou, and Tsujii (1998) and is called the NC-value. It takes advantage of context words C_t, which are neighboring words of t, by assigning weights to them. As context words, only nouns, adjectives, and verbs are considered. Context words are weighted with Equation (2), where k denotes the number of times the context word c \u2208 C_t occurs with any of the candidate terms. This number is normalized by the number of candidate terms.", "cite_spans": [ { "start": 44, "end": 81, "text": "Frantzi, Ananiadou, and Tsujii (1998)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "w(c) = k / |T| (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "The NC-value is a weighted sum of the C-value and the weighted frequencies of the term t occurring with each context word c, where the term t together with a context word c is denoted t_c.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "nc(t) = 0.8 \u2022 cv(t) + 0.2 \u2022 \u2211_{c \u2208 C_t} freq(t_c) w(c)", "eq_num": "(3)" } ], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "3.1.4 t-test. The t-test (see, e.g., Manning and Sch\u00fctze 1999, page 163) is a statistical test for the significance of the co-occurrence of two words. It relies on the probabilities of the term and its single words. The probability of a word p(w) is defined as the frequency of the term divided by the total number of terms of the same length. The t-test statistic is computed using Equation (4), with freq(.) being the total frequency of all unigrams.", "cite_spans": [ { "start": 37, "end": 71, "text": "Manning and Sch\u00fctze 1999, page 163", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t(w_1 . . . w_n) \u2248 ( p(w_1 . . . w_n) \u2212 \u220f_{i=1}^{n} p(w_i) ) / \u221a( p(w_1 . . . w_n) / freq(.) )", "eq_num": "(4)" } ], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "We then use this score to rank the candidate terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "3.1.5 Marginal Frequency-Based Geometric Mean (FGM) Score. 
Nakagawa and Mori (2002, 2003) presented another method that is inspired by the C/NC-value and outperformed a modified C-value measure. It is composed of two scoring mechanisms for the candidate term t, as shown in Equation (5).", "cite_spans": [ { "start": 59, "end": 71, "text": "Nakagawa and", "ref_id": "BIBREF32" }, { "start": 72, "end": 89, "text": "Mori (2002, 2003)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "FGM(t) = GM(t) \u2022 MF(t)", "eq_num": "(5)" } ], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "The first factor in the equation is the geometric mean GM(.) over the numbers of distinct direct left l(.) and right r(.) neighboring words for each single word t_i within t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "GM(t) = ( \u220f_{t_i \u2208 t} (|l(t_i)| + 1)(|r(t_i)| + 1) )^{1/(2|t|)} (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "These neighboring words are extracted directly from the corpus; the method relies on neither candidate lists nor POS tags. In contrast, the marginal frequency MF(t) relies on the candidate list and the underlying corpus. This frequency counts how often the candidate term occurs within the corpus without being nested in another candidate. Korkontzelos (2010) showed that although scoring according to Equation (5) leads to comparatively good results, it is consistently outperformed by MF(t) alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lower Baseline and Frequency Baseline.", "sec_num": "3.1.2" }, { "text": "Here, we describe the DRUID method for ranking terms regarding their multiwordness, which consists of two mechanisms relying on semantic word similarities: a score for the uniqueness of a term and a score that punishes its incompleteness. The contribution of each mechanism and the effect of their combination are demonstrated in Section 3.9. The DT is computed as described in Section 2, using n-grams (n = 1, 2, 3, 4). When using JoBimText to compute such a DT, we use the left and right neighboring words as context. In order to compute the DRUID score using the CBOW model, we compute dense vector representations using word2vecf (Levy and Goldberg 2014) and convert them to a DT by extracting the 200 most similar words for each n-gram. An example using JoBimText for the most similar n-grams to the terms red blood cell and red blood, including their feature overlap, is shown in Table 2. 3.2.1 Uniqueness Computation. The first mechanism of our MWE ranking method is based on the following hypothesis: n-grams that are MWEs can be substituted by single words; thus, they have many single words among their most similar terms. When a semantically non-compositional word combination is added to the vocabulary, it expresses a concept that is necessarily similar to other concepts. 
Hence, if a candidate multiword is similar to many single-word terms, this indicates multiwordness.", "cite_spans": [], "ref_spans": [ { "start": 895, "end": 903, "text": "Table 2.", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "DistRibutional Uniqueness and Incompleteness Degree (DRUID)", "sec_num": "3.2" }, { "text": "To compute the uniqueness score (uq) of an n-gram t, we first extract the n-grams it is similar to, using the DT as described in Section 2. The function similarities(t) returns the 200 most similar n-grams to the given n-gram t. We then compute the ratio between unigrams and all similar n-grams considered using the following formula, where the function unigram(.) tests whether a word is a unigram:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DistRibutional Uniqueness and Incompleteness Degree (DRUID)", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "uq(t) = |{ w \u2208 similarities(t) | unigram(w) }| / |similarities(t)|", "eq_num": "(7)" } ], "section": "DistRibutional Uniqueness and Incompleteness Degree (DRUID)", "sec_num": "3.2" }, { "text": "We illustrate the computation of our measure based on two example terms: the MWE red blood cell and the non-MWE red blood. When considering only the ten most similar entries for both n-grams, as illustrated in Table 2, we observe a uniqueness score of 7/10 = 0.7 for both n-grams. If considering the top 200 similar n-grams, which are also used in our experiments, we obtain 135 unigrams for the candidate red blood cell and 100 unigrams for the n-gram red blood. We use these counts for exemplifying the workings of the method in the remainder.", "cite_spans": [], "ref_spans": [ { "start": 209, "end": 216, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "DistRibutional Uniqueness and Incompleteness Degree (DRUID)", "sec_num": "3.2" }, { "text": "In order to avoid ranking nested terms at high positions, we introduce a measure that punishes such \"incomplete terms\". This mechanism is called incompleteness (ic) and, similarly to the C/NC-value method (see Section 3.1.3), consists of a context weighting function that punishes incomplete terms. We show the pseudocode for the computation in Algorithm 1. First, we use the function context(t) to extract the 1,000 most significant context features. This function returns a list of tuples of left and right contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incompleteness Computation.", "sec_num": "3.2.2" }, { "text": "Algorithm 1: Computation of the incompleteness score.\nfunction ic(t)\n contexts \u2190 context(t)\n C \u2190 map()\n for all (c_left, c_right) in contexts do\n  C[c_left, left] \u2190 C[c_left, left] + 1\n  C[c_right, right] \u2190 C[c_right, right] + 1\n end for\n return max_value(C) / |contexts|\nend function", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incompleteness Computation.", "sec_num": "3.2.2" }, { "text": "For JoBimText, these context features are the same as those used for the similarity computation in Section 2 and have been ranked according to the LMI measure. In the case of word2vecf, context features are extracted per word. 
To be compatible with the JoBimText contexts, we extract the 1,000 contexts with the highest cosine similarity between word and context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incompleteness Computation.", "sec_num": "3.2.2" }, { "text": "For the example term red blood, some of the contexts are (extravasated, cells), (uninfected, cells), and (nucleated, corpuscles). In the next step we iterate over all contexts. Using the first context feature results in the tuple (extravasated, cells). Then, we separately count the occurrences of both the left and the right context, including their relative position (left/right), as illustrated in Table 3 for the two example terms.", "cite_spans": [], "ref_spans": [ { "start": 390, "end": 397, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Incompleteness Computation.", "sec_num": "3.2.2" }, { "text": "We subsequently return the maximal count and normalize it by the number of features |context(t)| considered, which is at most 1,000. This results in the incompleteness measure ic(t). For our example terms we obtain the values ic(red blood) = 557/1,000 and ic(red blood cell) = 48/1,000. Whereas the numbers of unigrams among the most similar entries are close together (100 vs. 135), we now have a measure that indicates the incompleteness of an n-gram, assigning higher scores to more incomplete terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Incompleteness Computation.", "sec_num": "3.2.2" }, { "text": "As shown in the previous two sections, a high uniqueness score indicates multiwordness, and a high incompleteness score should decrease the overall score. In our experiments (see Section 3.9) we show that using solely the uniqueness score already results in good rankings. However, expressions ending with stopwords and incomplete MWEs are then often detected. In experiments, we found the best combination to be subtracting the incompleteness score from the uniqueness score. This mechanism is inspired by the NC-value and motivated by the observation that terms that are often preceded/followed by the same word do not cover the full multiword expression and need to be down-ranked. This leads to Equation (8), which we call the DistRibutional Uniqueness and Incompleteness Degree:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Both Measures.", "sec_num": "3.2.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "DRUID(t) = uq(t) \u2212 ic(t)", "eq_num": "(8)" } ], "section": "Combining Both Measures.", "sec_num": "3.2.3" }, { "text": "Applying the DRUID score to our example terms (considering the 200 most similar terms), we obtain the scores DRUID(red blood cell) = 135/200 \u2212 48/1,000 = 0.627 and DRUID(red blood) = 100/200 \u2212 557/1,000 = \u22120.057. As a higher DRUID score indicates the multiwordness of an n-gram, we conclude that the n-gram red blood cell is a better MWE candidate than the n-gram red blood.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining Both Measures.", "sec_num": "3.2.3" }, { "text": "To evaluate the method, we examine two experimental settings: First, we compute all measures on a small corpus that has been annotated for MWEs, which serves as the gold standard. In the second setting, we compute the measures on a larger in-domain corpus. 
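Before turning to the evaluation, we recap the scoring pipeline in code form. The following is a minimal sketch rather than the authors' released code: the dt and contexts dictionaries, the whitespace-based unigram test, and the toy data are placeholder assumptions.

```python
# Minimal sketch of DRUID: uniqueness (Equation (7)), incompleteness
# (Algorithm 1), and their combination (Equation (8)).
from collections import Counter

def uq(term, dt):
    """Uniqueness: ratio of unigrams among the most similar terms (up to 200)."""
    similar = dt[term]
    unigrams = [w for w in similar if len(w.split()) == 1]
    return len(unigrams) / len(similar)

def ic(term, contexts):
    """Incompleteness: maximal count of a repeated left or right context,
    normalized by the number of context features considered (up to 1,000)."""
    counts = Counter()
    for c_left, c_right in contexts[term]:
        counts[(c_left, "left")] += 1
        counts[(c_right, "right")] += 1
    return max(counts.values()) / len(contexts[term])

def druid(term, dt, contexts):
    return uq(term, dt) - ic(term, contexts)

# Toy data reproducing the article's example numbers: 135 unigrams among the
# 200 most similar terms, and a maximal context count of 48 out of 1,000.
dt = {"red blood cell": ["erythrocyte"] * 135 + ["white blood cell"] * 65}
contexts = {"red blood cell": [("mature", "count")] * 48
            + [("left%d" % i, "right%d" % i) for i in range(952)]}
print(druid("red blood cell", dt, contexts))  # 135/200 - 48/1000 = 0.627
```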
The evaluation is again performed for the same candidate terms as given by the gold standard. Results for the top k ranked entries are reported using the precision at k:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "3.3" }, { "text": "P@k = (1/k) \u2211_{i=1}^{k} x_i (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "3.3" }, { "text": "with x_i equal to 1 if the ith ranked candidate is annotated as an MWE and 0 otherwise. For an overall performance measure we use the average precision (AP) as defined by Thater, Dinu, and Pinkal (2009):", "cite_spans": [ { "start": 160, "end": 191, "text": "Thater, Dinu, and Pinkal (2009)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "3.3" }, { "text": "AP = (1/|T_mwe|) \u2211_{k=1}^{|T|} x_k \u2022 P@k (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "3.3" }, { "text": "with T_mwe being the set of positive MWEs. When facing tied scores, we mix false and true candidates randomly, following Cabanac et al. (2010).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setting", "sec_num": "3.3" }, { "text": "For the first experiments, we consider two annotated (small) corpora and two unannotated (large) corpora for the evaluation and computation of MWEs. The language independence of DRUID is demonstrated on various Wikipedia text corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.4" }, { "text": "GENIA Corpus and SPMRL 2013: French Treebank. In the first experiments, we use two small annotated corpora that serve as the gold standard for MWEs. We use the medical GENIA corpus (Kim et al. 2003), which consists of 1,999 abstracts from Medline and encompasses 0.4 million words. This corpus has annotations regarding important biomedical terms. Single-word terms are also annotated in this data set; we ignore these.", "cite_spans": [ { "start": 177, "end": 194, "text": "(Kim et al. 2003)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.4" }, { "text": "The second small corpus is based on the French Treebank (Abeill\u00e9 and Barrier 2004), which was extended for the SPMRL task (Seddah et al. 2013). This version of the corpus also contains compounds annotated as MWEs. In our experiments, we use the training data, which cover 0.4 million words.", "cite_spans": [ { "start": 122, "end": 142, "text": "(Seddah et al. 2013)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.4" }, { "text": "Whereas the GENIA MWEs target term matching and medical information retrieval, the SPMRL MWEs mainly focus on improving parsing through compound recognition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.4" }, { "text": "Medline Corpus and Est R\u00e9publicain Corpus (ERC). In a second experiment, the scalability to larger corpora is tested. For this, we make use of the entire set of Medline abstracts, which consists of about 1.1 billion words. The Est R\u00e9publicain Corpus (Seddah et al. 2012) is our large French corpus. 
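Returning briefly to the evaluation measures of Section 3.3, Equations (9) and (10) translate directly into code. The sketch below assumes the ranking is given as a list of binary gold labels (1 = annotated MWE), with tied scores already shuffled randomly as described above.

```python
def precision_at_k(labels, k):
    """P@k (Equation (9)): fraction of true MWEs among the k top-ranked candidates."""
    return sum(labels[:k]) / k

def average_precision(labels):
    """AP (Equation (10)): mean of P@k over the ranks k of the true MWEs."""
    n_mwe = sum(labels)
    return sum(precision_at_k(labels, k)
               for k, x in enumerate(labels, start=1) if x) / n_mwe

# labels[i] = 1 if the candidate at rank i+1 is annotated as an MWE.
labels = [1, 1, 0, 1, 0]
print(precision_at_k(labels, 3))  # 2/3
print(average_precision(labels))  # (1/1 + 2/2 + 3/4) / 3 = 0.9166...
```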
It is made up of local French newspapers from the eastern part of France and comprises 150 million words.", "cite_spans": [ { "start": 250, "end": 269, "text": "(Seddah et al. 2012", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.4" }, { "text": "Wikipedia. Applying the methods to texts extracted from 32 Wikipedias validates their language independence. For this, we use the following languages: Arabic, Basque, Bulgarian, Catalan, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, German, Greek, Hebrew, Hungarian, Italian, Kazakh, Latin, Latvian, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Slovene, Spanish, Swedish, Turkish, and Ukrainian.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.4" }, { "text": "The Medline corpus is available at: http://www.nlm.nih.gov/bsd/licensee/access/medline_pubmed.html.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.4" }, { "text": "The GENIA corpus is freely available at: http://www.nactem.ac.uk/genia/genia-corpus/pos-annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.4" }, { "text": "The ERC is available at: http://www.cnrtl.fr/corpus/estrepublicain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.4" }, { "text": "In the first two experiments, we use POS filters to select candidates. We concentrate on filters that extract noun MWEs, as they constitute the largest number of MWEs (see Table 1) and avoid further preprocessing like lemmatization. We use the filter introduced by Justeson and Katz (1995) for the English medical data sets (see Table 4).", "cite_spans": [ { "start": 279, "end": 290, "text": "Katz (1995)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 172, "end": 179, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 330, "end": 337, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Candidate Selection", "sec_num": "3.5" }, { "text": "Considering only terms that appear more than ten times yields 1,340 candidates for the GENIA data set and 29,790 candidates for the Medline data set. According to Table 5, we observe that most candidates are bigrams. Whereas about 20% of MWEs are trigrams in both corpora, only a marginal number of longer MWEs have been marked.", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 170, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Candidate Selection", "sec_num": "3.5" }, { "text": "For the French data sets, we apply the POS filter proposed by Daille, Gaussier, and Lang\u00e9 (1994), which is suited to match nominal MWEs (see Table 4). Applying the same filtering as for the medical corpora leads to 330 candidate terms for the SPMRL and 7,365 candidate terms for the ERC. Here the ratio between bigrams and trigrams is more balanced, but again the number of 4-grams constitutes the smallest class.", "cite_spans": [ { "start": 62, "end": 96, "text": "Daille, Gaussier, and Lang\u00e9 (1994)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 142, "end": 149, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Candidate Selection", "sec_num": "3.5" }, { "text": "In comparison with the Medline data set, the ratio of multiwords extracted by the POS filter on the French corpus is much lower. 
We attribute this to the fact that in the French data, many adverbial and prepositional MWEs are annotated, which are not covered by the POS filter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Selection", "sec_num": "3.5" }, { "text": "The third experiment shows the performance of the method in the absence of language-specific preprocessing. Thus, we only filter the candidates by frequency and do not make use of POS filtering. As most previous methods rely on POS-filtered data, we cannot compare with them in this language-independent setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Selection", "sec_num": "3.5" }, { "text": "For the evaluation, we compute the scores of the competitive methods in two ways: First, we compute the scores based on the full candidate list without any frequency filter and prune low-frequent candidates only for the evaluation (post-prune). In the second setting, we filter candidates according to their frequency before the computation of scores (pre-prune). This leads to differences for context-aware measures, because in the pre-pruned case a smaller number of less noisy contexts is used. The evaluation on Wikipedia is slightly different, as we do not have any gold data. Thus, we compute the ranking regarding the multiwordness for all words in the corpus. Based on this list, we determine the multiwordness of an n-gram by testing its existence in the respective language's Wiktionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Selection", "sec_num": "3.5" }, { "text": "First, we present the results based on the GENIA corpus (see Table 6). Almost all competitive methods beat the lower baseline. The C/NC-value performs best when the pruning is done after the frequency filtering. In line with the findings of Korkontzelos (2010), and in contrast to Frantzi, Ananiadou, and Tsujii (1998), the AP of the C-value is slightly higher than that of the NC-value. All the FGM-based methods except the GM measure alone outperform the C-value. The results in Table 6 indicate that the best competitive system is the post-pruned FGM system, as it has much higher average precision scores and misses only 50 MWEs in the first 500 entries. [Table 6: Results for P@100, P@500, and the average precision (AP) for various ranking measures. The gold standard is extracted using the GENIA corpus. This corpus is also used for computing the measures.] A slightly different picture is presented in Figure 1, where we plot the P@k scores against the number of candidates. [Figure 1: P@k for selected measures, plotting precision against k. Using DRUID in combination with the MF and FGM measures yields the highest precision scores.] Here, DRUID computed on the JoBimText similarities performs well for small k, that is, it finds many valid MWEs with high confidence; it thus combines well with MF, which extends to larger k but places too much importance on frequency when used alone. Common errors occur for frequent prepositional phrases, such as
Using similarities from the word2vec model does not work well for the DRUID method. This is mainly attributed to the fact that multiwords are mostly similar to words of the same frequency (Schnabel et al. 2015; Riedl and Biemann 2017) and often these words are multiwords themselves. Observing, for example, the most similar terms for the term red blood cells, we retrieve the words peripheral blood mononuclear cell, show that the, U937 cells, basal, potent, which are much noisier than the ones we obtain with the JoBimText model (see Table 2 in Section 3.2.1) and within the top 10 most similar terms, we only find four single-worded terms. This is already an indicator that the concept of uniqueness does not apply to similarities computed with word2vec. In contrast, the JoBimText similarities are most similar to more frequent words (we detected 7 out of 10 terms to be unigrams), and we detect more synonyms and hypernyms that are single-word terms. Only for the P@100, can the word2vec-based method beat the t-test and frequency baselines. However, for all other measures, the performance is similar to these baselines or even inferior, and significantly worse than using DRUID with JoBimText. Thus, we will not report results for the other MWE extraction experiments.", "cite_spans": [], "ref_spans": [ { "start": 601, "end": 608, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "When looking at effects between post-pruning and pre-pruning, we observe that FGM scores higher than MF when post-pruning, but the inverse is observed when prepruning. Our JoBimText-based DRUID method can outperform FGM only on the topranked 300 terms (see Figure 1 and Table 6 ).", "cite_spans": [], "ref_spans": [ { "start": 257, "end": 265, "text": "Figure 1", "ref_id": null }, { "start": 270, "end": 277, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "Multiplying the logarithmic frequency with DRUID, the results improve slightly and the best P@100 of 0.97 is achieved. All FGM results are outperformed when combining the post-pruned FGM scores with our measure. According to Figure 1 , this combination achieves high precision for the first ranked candidates and still exploits the good performance of the post-pruned FGM based method for the middle-ranked candidates.", "cite_spans": [], "ref_spans": [ { "start": 225, "end": 233, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "Different results are achieved for the SPMRL data set, as can be seen in Table 7 . Whereas the pre-pruned C-value again receives better results than frequency, it scores below the lower baseline. In addition, the post-pruned FGM and MF method do not exceed the lower baseline. Data analysis revealed that for the French data set only ten out of the 330 candidate terms are nested within any of the candidates. This is much lower than the 637 terms nested in the 1340 candidate terms for the GENIA data set. As both the FGM-based methods and the C/NC-value heavily rely on nested candidates, they cannot profit from the candidates of this data set and achieve similar scores as ordering candidates according to their frequency. Comparing the baselines to our scoring method, this time we obtain the best result for DRUID without additional factors. 
However, multiplying DRUID with MF or log(frequency) still outperforms the other methods and the baselines.", "cite_spans": [], "ref_spans": [ { "start": 73, "end": 80, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "Most MWE evaluations have been performed on rather small corpora. Here, we examine the performance of the measures for large corpora, to realistically simulate a situation where the MWEs should be found automatically for an entire domain or language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "Using the Medline corpus, all methods except the GM score outperform the lower baseline and the frequency baseline (see Table 8). Regarding the AP, the best results are obtained when combining our DRUID method with MF, whereas for P@100 and P@500 the log-frequency-weighted DRUID scores best. [Table 7: Results for MWE detection on the French SPMRL corpus. Both the generation of the gold standard and the computation of the measures have been performed on this corpus.] [Table 8: Results of n-gram ranking on the medical data. Whereas the gold standard is extracted from the GENIA data set, the ranking measures as well as the frequency threshold for selecting the gold candidates are computed using the Medline corpus.] As we can observe from Figure 2, using solely the DRUID method or the combined variation with the log-frequency leads to the best ranking for the first 1,000 ranked candidates. However, both variants are outperformed beyond the first 1,000 ranked candidates by the MF-informed DRUID variations. Using the combination with GM results in the lowest scores.", "cite_spans": [], "ref_spans": [ { "start": 120, "end": 127, "text": "Table 8", "ref_id": null }, { "start": 320, "end": 327, "text": "Table 7", "ref_id": null }, { "start": 496, "end": 503, "text": "Table 8", "ref_id": null }, { "start": 744, "end": 752, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "In this experiment, the C-value achieves the best performance among the competitive methods for P@100 and P@500, followed by the t-test. However, the highest AP is reached with the post-pruned MF method, which also slightly outperforms the sole DRUID. Contrary to the GENIA results, the MF scores are consistently higher than the FGM scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "In the French ERC, no nested terms are found within the candidates. Thus, the post-pruned and pre-pruned settings are equivalent, and MF equals frequency. We show the results for the evaluation using the ERC in Table 9.", "cite_spans": [], "ref_spans": [ { "start": 215, "end": 222, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "The best results are again obtained with our method, with and without the logarithmic frequency weighting. Again, the AP of the C-value and of most of the FGM-based methods is inferior to the frequency scoring. Only the t-test and the MF score slightly higher than frequency. In contrast to the results based on the smaller SPMRL data set, the MF, FGM, and C-value can outperform the lower baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "In comparison to the smaller corpora, the performance for the larger corpora is much lower. 
In particular, terms that are low-frequent in the small corpora but highly frequent in the larger corpora have not been annotated as MWEs. (Figure 2: Precision scores when considering different numbers of highest-ranked words for DRUID and combined DRUID variations; the gold standard is extracted from the GENIA data set, whereas the scores for the methods are computed using the Medline corpus.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "Next, we apply our method to candidates without any POS filtering and report results for candidates surpassing a frequency threshold of 10. Thus, we do not restrict the evaluation to noun MWEs but use MWEs of all POS classes that have been annotated in both corpora. As most competitive methods from the previous section rely on POS tags, we only use the t-test for comparison. Analysis revealed that the top-scored candidates according to the t-test begin with stopwords. As an additional heuristic for the t-test, we therefore shift MWEs that start or end with one of the ten most frequent words in the corpus to the last ranks. For the smaller data set, the best results are achieved with the sole DRUID (see Table 10 ), and frequency weighting does not seem to be beneficial, as highly frequent n-grams ending with stopwords are ranked higher in the absence of POS filtering. This, however, is not observed for the larger corpora. Here, the best results for Medline are achieved with the frequency-weighted DRUID. Whereas for French the sole DRUID method performs best, the difference between DRUID and the log-frequency-weighted DRUID is rather small. (Table 9: Results for ranking n-grams according to their multiwordness, based on the French ERC; the candidates are extracted based on the smaller SPMRL corpus.) The low APs can be explained by the large number of considered candidates. The second-best scores are achieved with the stopword-filtered t-test (t-test + sw). As in this setting the C-value cannot make use of candidate filtering based on POS tags, we do not list its performance; it performs on par with frequency.", "cite_spans": [], "ref_spans": [ { "start": 716, "end": 724, "text": "Table 10", "ref_id": "TABREF0" }, { "start": 1150, "end": 1157, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Results Without POS Filtering", "sec_num": "3.7" }, { "text": "In order to demonstrate the performance of DRUID for several languages, we perform an evaluation on 32 languages. For this experiment, we compute similarities on their respective Wikipedias. 11 The evaluation is performed by extracting the 1,000 highest-ranked words using DRUID. In order to determine whether a word sequence is an MWE, we use Wiktionary as \"gold\" standard and test whether it occurs as a word entry. 12 Using this information, we compute the AP for these 1,000 ranked words. We present the results for this experiment in columns 2 to 5 of Table 11 . The t-test with stopword filtering mostly performs similarly to the frequency baseline and improves the average score from 0.07 to 0.08. We observe that, in comparison to the two baselines, frequency (freq.) and the t-test with stopword filtering, the DRUID method yields the best scores for 6 out of the 32 languages. However, if we multiply the logarithmic frequency by the DRUID measure, we obtain the best performance for 30 languages. In general, numerical scores are low; for example, for Arabic, Slovene, or Italian, we obtain APs below 0.10.
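For reference, the ranking metrics used throughout this evaluation can be stated compactly. A minimal sketch (our illustration, not the original evaluation code) of P@k and AP over a ranked candidate list against a gold set such as the Wiktionary entries:

```python
def precision_at_k(ranked, gold, k):
    """P@k: fraction of the k top-ranked candidates that are gold MWEs."""
    return sum(1 for term in ranked[:k] if term in gold) / k

def average_precision(ranked, gold):
    """AP: mean of P@i over every rank i that holds a gold MWE.
    We normalize by the number of gold items found in the ranking;
    conventions that divide by the total number of gold items also exist."""
    hits, precision_sum = 0, 0.0
    for i, term in enumerate(ranked, start=1):
        if term in gold:
            hits += 1
            precision_sum += hits / i
    return precision_sum / hits if hits else 0.0
```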
The highest scores are achieved for Swedish (0.33), German (0.36), Turkish (0.36), French (0.44), and English (0.70). Analyzing the results, we observe that many \"false\" MWEs are in fact genuine multiword units that are just not covered in the respective language's Wiktionary. Furthermore, we detect that these word sequences are often titles of Wikipedia articles. The absence of word lemmatization causes a further decline, as words in Wiktionary are recorded in lemmatized form. To alleviate this influence, we extend our evaluation and check the occurrence of word sequences both in Wiktionary and in Wikipedia. Using the Wikipedia API also normalizes query terms, and thus we obtain a better word sequence coverage. This is confirmed by much higher results, as shown in columns 6 through 9 of Table 11 . Using the frequency combination with DRUID, we even obtain higher APs for languages that attained low scores in the previous setting (e.g., Arabic [0.62], Slovene [0.17], and Italian [0.44]). Except for Estonian and Polish, for which the sole DRUID measure performs best, the logarithmic frequency weighting performs best for all languages. The best performance is obtained for English (0.87), Turkish (0.66), French (0.66), German (0.62), and Portuguese (0.64). Based on these multilingual experiments, we have demonstrated that DRUID performs well not only for English and French but also for other languages, showing that its elements, uniqueness and incompleteness, are language-independent principles for multiword characterization.", "cite_spans": [], "ref_spans": [ { "start": 813, "end": 821, "text": "Table 11", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Multilingual Evaluation", "sec_num": "3.8" }, { "text": "Here, we examine different parameters of DRUID, relying on the English GENIA data set without POS filtering of MWE candidates and considering only terms with a frequency of 10 or more. Inspecting the two components of the DRUID measure (see Figure 3 top), we observe that the uniqueness measure contributes most to the DRUID score. The main effect of the incompleteness component is the down-ranking of a rather small number of terms with high uniqueness scores, which improves the overall ranking. We can also see that, for the top-ranked terms, the negated incompleteness score does not improve over the frequency baseline and merely outperforms frequency for candidates in the middle range. Used within DRUID, we observe a slight improvement for the complete ranking. We achieve a P@500 of 0.474 for the uniqueness scoring and of 0.498 for the DRUID score.", "cite_spans": [], "ref_spans": [ { "start": 252, "end": 260, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Components of DRUID", "sec_num": "3.9" }, { "text": "When filtering the similar entries used for the uq scoring by their similarity score (see Figure 3 bottom), we observe that the number of similar n-grams considered seems to be more important than the quality of the similar entries: With increasing filtering, the quality of the extracted candidate MWEs diminishes.", "cite_spans": [ { "start": 88, "end": 96, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Components of DRUID", "sec_num": "3.9" },
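To summarize the two components in code form, a minimal sketch follows; dt.most_similar is a hypothetical lookup into the precomputed distributional thesaurus, and combining uniqueness with a subtracted incompleteness penalty follows the description of its negated use above:

```python
def uniqueness(term, dt, l=200, min_sim=0.0):
    """uq: share of single-word terms among the l most similar DT entries,
    optionally filtered by a similarity threshold (cf. Figure 3 bottom)."""
    neighbors = [(n, s) for n, s in dt.most_similar(term, l) if s >= min_sim]
    if not neighbors:
        return 0.0
    single_word = sum(1 for n, _ in neighbors if " " not in n)
    return single_word / len(neighbors)

def druid(term, dt, incompleteness):
    """DRUID sketch: uniqueness reduced by an incompleteness penalty that
    down-ranks fragment-like candidates such as 'large amount of'."""
    return uniqueness(term, dt) - incompleteness(term)
```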
", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 96, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Components of DRUID", "sec_num": "3.9" }, { "text": "Results for the components of the DRUID measure (top) and for different filtering thresholds (bottom) of the similar entries considered for the uniqueness scoring.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 3", "sec_num": null }, { "text": "The experiments confirm that our DRUID measure, either weighted with the MF or alone, works best across two languages and across different corpus sizes. It also achieves the best results in absence of POS filtering for candidate term extraction. The optimal weighting of DRUID depends on the nestedness of the MWEs: Using DRUID with the MF should be applied when there are more than 20% of nested candidates. If there are no nested candidates, we recommend using the log-frequency or no frequency weighting. We present the best-ranked candidates obtained with our method and with the best competitive method in terms of P@100 for the two smaller corpora. Using the GENIA data set, our log-frequency based DRUID (see left column in Table 12 ) ranks only true MWE within the 15 top-scored candidates.", "cite_spans": [], "ref_spans": [ { "start": 731, "end": 739, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Discussion and Data Analysis", "sec_num": "3.10" }, { "text": "The right-hand side shows results extracted with the pre-pruned MF method that yields three non-MWE terms. Whereas these terms seem to be introduced as candidates due to a POS error, the MF, and the C-value are not capable of removing terms starting with stopwords. The DRUID score alleviates this problem with the uniqueness factor. For the French data set, only one false candidate is ranked in the top 15 candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Data Analysis", "sec_num": "3.10" }, { "text": "In comparison, eight non-annotated candidates are ranked in the top 15 candidates by the MF (post-pruned) method as shown in Table 13 .", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 133, "text": "Table 13", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Discussion and Data Analysis", "sec_num": "3.10" }, { "text": "Whereas the unweighted DRUID method scores better than its competitors on the large corpora, the best numerical results are achieved when using DRUID with frequencybased weights on smaller corpora. For a direct comparison, we evaluated the small and large corpora using an equal candidate set. We observed that all methods computed on the large corpora achieve slightly inferior results than when computing them using the small corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Data Analysis", "sec_num": "3.10" }, { "text": "Data analysis revealed that we personally would consider many of these high ranked \"false\" candidates as MWEs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Data Analysis", "sec_num": "3.10" }, { "text": "For examining the effect, we extracted the top ten ranked terms, which are not annotated as MWE from the methods with the best P@100 performance, resulting in Table 12 Top ranked candidates from the GENIA data set using our ranking method (left) and the competitive method (right). 
Each term is marked if it is an MWE (1) or not (0).", "cite_spans": [], "ref_spans": [ { "start": 159, "end": 167, "text": "Table 12", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Discussion and Data Analysis", "sec_num": "3.10" }, { "text": "NF the log(freq) DRUID and the pre-pruned C-value methods. We show the terms including their ranking position based on the GENIA data set in Table 14 . First, we observe that the first \"false\" candidate for our method appears at rank 26 and at rank 1 for the C-value. Additionally, only 10 out of the top 74 candidates are not annotated as MWEs for our method, whereas the same number of 10 non-MWEs is found in the first 48 candidates for the competitor. When searching the terms within the MeSH dictionary, we find seven terms ranked from our method and two for the competitive method, showing that most such errors are at least questionable, given that these terms are contained in a domain-specific lexicon. 13 This leads us to the conclusion that our method scales to larger corpora.", "cite_spans": [ { "start": 712, "end": 714, "text": "13", "ref_id": null } ], "ref_spans": [ { "start": 141, "end": 149, "text": "Table 14", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "log(freq)\u2022DRUID MF (pre-pruned)", "sec_num": null }, { "text": "The highest ranked single-worded terms for Medline and ERC without any POS filtering, based on the DRUID score.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 15", "sec_num": null }, { "text": "GATA In contrast to the competitive measures introduced in this section, our method is also able to rank single-worded terms. We show the 20 highest ranked single-worded terms in Table 15 for the Medline and the ERC corpus. In both lists we did not filter by POS and removed numbers, which often have a high DRUID score. Both for French and for the medical data, we observe some verbs, but mostly common and proper nouns. These are well suited as keyword lists that are required for document indexing used, e.g., for search engines or automatic speech recognition, as we have demonstrated in Milde et al. (2016) .", "cite_spans": [ { "start": 592, "end": 611, "text": "Milde et al. (2016)", "ref_id": "BIBREF29" } ], "ref_spans": [ { "start": 179, "end": 187, "text": "Table 15", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Medline ERC", "sec_num": null }, { "text": "In this section, we have demonstrated the capabilities of our method for the identification of MWEs in order to treat them as single tokens. Using similarities from word2vec does not work well for DRUID and applying the symbolic JoBimText approach works better. This is mainly attributed to the fact that JoBimText prefers to extract similarities to more frequent words, which are often single-worded terms (e.g., hypernyms), whereas word2vec mostly predicts multiword expressions and words of the same frequency for similarity queries. This is possible as JoBimText does not embed terms in a metric space that is subject to the triangle inequality and is in line with the research by Schnabel et al. (2015) and Riedl and Biemann (2017) . These findings show that using similarities from a symbolic semantic method brings added value when it comes to the identification of MWEs. Uniqueness is a well-working mechanism in MWE modeling. 
Whereas frequency and co-occurrence have been captured in many previous approaches (see Manning and Sch\u00fctze [1999] , Ramisch, De Araujo, and Villavicencio [2012] , and Korkontzelos [2010] for a survey), we boost multiword candidates t by their grade of distributional similarity with single-word terms. We implement such contextual substitutability with a model in which the term t can consist of multiple word tokens and similarity is measured, based on the left and right neighboring words, between all (single-word and multiword) terms. Because it is the default to express concepts with single words, a high uniqueness score is assigned to multiwords that belong to the same category as single words do. For example, using an English open-domain corpus, hot dog is most similar to the terms food, burger, hamburger, sausage, and roadside. Candidates with a low number of single-word similarities can also serve the same function as single words, but more frequently we observe n-grams in which function words or modifying adjectives are concatenated with content words; for example, small dog is most similar to \"various cat\", \"large amount of \", \"large dog\", \"certain dog\", and \"dog\". For the measure to take effect, candidates require a certain minimum frequency so that enough contextual overlap with other terms can be found. Additionally, we demonstrate effective performance on larger corpora and show the measure's applicability in a completely unsupervised evaluation setting. Furthermore, we have demonstrated the language independence of the measure by evaluating it on 32 languages, using Wiktionary and Wikipedia as gold standards.", "cite_spans": [ { "start": 712, "end": 736, "text": "Riedl and Biemann (2017)", "ref_id": "BIBREF41" }, { "start": 1023, "end": 1049, "text": "Manning and Sch\u00fctze [1999]", "ref_id": "BIBREF26" }, { "start": 1052, "end": 1096, "text": "Ramisch, De Araujo, and Villavicencio [2012]", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Summary on MWE Identification", "sec_num": "3.11" }, { "text": "In order to enable tokenization into sub-word units, we introduce SECOS (SEmantic COmpound Splitter), which is based on the hypothesis that compounds are similar to their constituting word units. 14 Again, our method is based on a DT. In addition, it does not require any language-specific rules and can be applied in a knowledge-free way. We exemplify the method based on the compound noun Bundesfinanzministerium (federal finance ministry), which is assembled from the words Bundes (federal), Finanz (finance), and Ministerium (ministry). This section extends the work presented in Riedl and Biemann (2016) by adding results on an Afrikaans and a Finnish data set. Additionally, we introduce an evaluation based on automatically extracted compounds from Wiktionary and present results for 14 languages.", "cite_spans": [ { "start": 586, "end": 610, "text": "Riedl and Biemann (2016)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Splitting Words: Decompounding", "sec_num": "4." }, { "text": "Our method consists of three stages: First, we extract a candidate word set that defines the possible sub-word units of compounds; we present several approaches to generate such candidates. Second, we use a general method that splits the compound based on a candidate word set. Using different candidate sets, we obtain different compound splits.
Finally, we define a mechanism that ranks these splits and returns the top-ranked one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEmantic COmpound Splitter (SECOS)", "sec_num": "4.1" }, { "text": "Candidate Extraction. For the extraction of all candidates in C, we use a DT that is computed on a background corpus. We present three approaches for the generation of candidate sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEmantic COmpound Splitter (SECOS)", "sec_num": "4.1" }, { "text": "When we retrieve the l most similar terms for a word w from a DT, we observe well-suited candidates that are nested in w. For example, Bundesfinanzministerium is similar to Bund, Bundes, and Finanzministerium. Extracting the most similar terms that are nested in w results in the first split candidate set, called similar candidate units. However, only for a few terms do we observe nested candidates among the most similar words. Thus, we require methods to generate \"back-off\" candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEmantic COmpound Splitter (SECOS)", "sec_num": "4.1" }, { "text": "First, we introduce the extended similar candidate units. Here, we extract the l most similar terms for w and then grow this set by again adding their respective l most similar words. Based on these terms, we extract all words that are nested in w. This results in more, but less precise, decompounding candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEmantic COmpound Splitter (SECOS)", "sec_num": "4.1" }, { "text": "As the coverage might still be insufficient to decompound all words (e.g., entirely unseen compounds), we propose a method to generate a global dictionary of single atomic word units. For this, we iterate over the entire vocabulary of the background corpus, applying the compound splitter (see Compound Splitting, below) to all words for which we find similar candidate units. Then, we add the detected units to the dictionary. Finally, for a word w subject to decompounding, we first extract all nested words NW from this dictionary. Then, we remove all words in NW that are themselves nested in other words of NW, resulting in the candidate set we call generated dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEmantic COmpound Splitter (SECOS)", "sec_num": "4.1" }, { "text": "Compound Splitting. Here, we introduce the decompounding algorithm for a given candidate set. For decompounding the word w, we require a set of candidate words C. Each word in the candidate set needs to be a substring of w. We do not include candidates in C that have fewer than ml characters. Additionally, we apply a frequency threshold of wc. These mechanisms are intended to rule out spurious parts and \"words\" that are in fact short abbreviations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEmantic COmpound Splitter (SECOS)", "sec_num": "4.1" }, { "text": "We show candidates, extracted from the similar candidate units, with ml = 3 for the example term in Table 16 . Then, we iterate over each candidate c_i \u2208 C and add its beginning and ending position within w to the set S. This set is then used to identify the possible split positions of w. For this, we iterate from left to right and add all split possibilities for the word w.
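A minimal sketch of this split-point generation (our illustration with hypothetical helper names; for brevity, only the first occurrence of each candidate within w is recorded):

```python
def split_positions(w, candidates, ml=3, wc=50, freq=None):
    """Collect begin/end offsets of all candidates nested in w that pass
    the length (ml) and frequency (wc) thresholds."""
    freq = freq or {}
    positions = set()
    lw = w.lower()
    for c in candidates:
        if len(c) < ml or freq.get(c, 0) < wc:
            continue
        start = lw.find(c.lower())
        if start != -1:                 # candidate is nested in w
            positions.update((start, start + len(c)))
    return sorted(p for p in positions if 0 < p < len(w))

def apply_splits(w, positions):
    """Split w at every collected position, from left to right."""
    parts, last = [], 0
    for p in positions:
        parts.append(w[last:p])
        last = p
    parts.append(w[last:])
    return parts
```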
This approach overgenerates split points, as can be observed for the example word, which is split into six units: Bund-e-s-finanz-minister-ium.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEmantic COmpound Splitter (SECOS)", "sec_num": "4.1" }, { "text": "To merge character n-grams, we use a suffix- and prefix-based method. The suffix merging method appends all character n-grams with n below ms to the word on the left. The prefix method merges all character n-grams with n below mp with the word on the right side. To avoid remaining prefixes/suffixes, we apply the opposite method afterwards. For German, the suffix-prefix ordering mostly yields the best output. For the example, the suffix-prefix-based approach results in Bundes-finanz-ministerium, and the prefix-suffix method in Bund-esfinanz-ministerium. However, for some words the prefix-suffix ordering generates the correct compound split; for example, the word Zuschauererwartung, which the suffix-prefix method splits as Zuschauer-er-wartung (audience + he + service), is correctly decompounded as Zuschauer-erwartung (audience + expectation).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEmantic COmpound Splitter (SECOS)", "sec_num": "4.1" }, { "text": "In order to select the correct split, we compute the geometric mean of the joint probability for each split variation. For this, we use word counts from a background corpus. In addition to the geometric mean formula introduced in Koehn and Knight (2003) , we add a smoothing factor \u03b5 to each frequency in order to assign non-zero values to unknown units. 15 (Table 16: Examples of the output of our algorithms for the example term Bundesfinanzministerium.) This yields the following formula for a compound w, which is decomposed into the units w_1, . . . , w_N:", "cite_spans": [ { "start": 229, "end": 252, "text": "Koehn and Knight (2003)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "SEmantic COmpound Splitter (SECOS)", "sec_num": "4.1" }, { "text": "$$p(w) = \\left( \\prod_{i=1}^{N} \\frac{\\mathit{wordcount}(w_i) + \\varepsilon}{\\mathit{total\\ wordcount} + \\varepsilon \\cdot \\#\\mathit{words}} \\right)^{\\frac{1}{N}} \\qquad (11)$$", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(w) = (\\prod_{i=1}^{N} (wordcount(w_i) + \\varepsilon) / (total\\ wordcount + \\varepsilon \\cdot \\#words))^{1/N}", "eq_num": "(11)" } ], "section": "SEmantic COmpound Splitter (SECOS)", "sec_num": "4.1" }, { "text": "Here, #words denotes the total number of distinct words in the background corpus and total wordcount is the sum of all word counts. Then, we select the split variation with the highest geometric mean. 16 In our example, this is the suffix-prefix-merged candidate Bundes-finanz-ministerium.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEmantic COmpound Splitter (SECOS)", "sec_num": "4.1" }, { "text": "Split Ranking. We have examined schemes of priority ordering for integrating information from the different candidate sets, for example, using the similar candidate units first and only falling back to the other candidate sets if no split was found. However, preliminary experiments revealed that it is always beneficial to generate splits based on all three candidate sets and to use the geometric mean scoring outlined above to select the best split as the decomposition of a word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SEmantic COmpound Splitter (SECOS)", "sec_num": "4.1" },
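A minimal rendering of Equation (11) and of the selection among the merged split variants (our sketch; wordcount, total_wordcount, and vocab_size come from the background corpus, and \u03b5 = 0.01 as reported in the footnote):

```python
def geometric_mean_score(parts, wordcount, total_wordcount, vocab_size, eps=0.01):
    """Smoothed geometric mean of the unit probabilities, cf. Equation (11)."""
    prob = 1.0
    for part in parts:
        prob *= (wordcount.get(part, 0) + eps) / (total_wordcount + eps * vocab_size)
    return prob ** (1.0 / len(parts))

def select_split(variants, **counts):
    """Pick the split variant with the highest geometric mean score,
    e.g., between the suffix-prefix and prefix-suffix merged candidates."""
    return max(variants, key=lambda parts: geometric_mean_score(parts, **counts))

# select_split([["Bundes", "finanz", "ministerium"],
#               ["Bund", "esfinanz", "ministerium"]],
#              wordcount=wc, total_wordcount=n, vocab_size=v)  # hypothetical counts
```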
{ "text": "For the computation of our method, we use similarities computed for various languages. First, we compute the DTs with JoBimText, using the left and the right neighboring word as context representation. In addition, we extract a DT from the CBOW variant of word2vec (Mikolov et al. 2013) using 500 dimensions, as described in Section 2. We compute the similarities for German based on 70M sentences and for Finnish based on 4M sentences, provided via the Leipzig Corpora Collection (Richter et al. 2006) . For the generation of the Dutch similarities, we use the Dutch web corpus (Sch\u00e4fer and Bildhauer 2013) , which is composed of 259 million sentences. 17 Similarities for Afrikaans are computed using the Taalkommissie corpus (3M sentences) (Taalkommissie 2011), and we use 150GB of texts for Russian. 18 The evaluation for the various languages based on the automatically extracted data set is performed on similarities computed on text from the respective Wikipedias.", "cite_spans": [ { "start": 265, "end": 286, "text": "(Mikolov et al. 2013)", "ref_id": "BIBREF28" }, { "start": 490, "end": 511, "text": "(Richter et al. 2006)", "ref_id": "BIBREF36" }, { "start": 588, "end": 616, "text": "(Sch\u00e4fer and Bildhauer 2013)", "ref_id": "BIBREF44" }, { "start": 663, "end": 665, "text": "17", "ref_id": null }, { "start": 812, "end": 814, "text": "18", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Setting", "sec_num": "4.2" }, { "text": "We evaluate the performance of the algorithms using a splitwise precision and recall measure that is inspired by the measures introduced by Koehn and Knight (2003) . Our evaluation is based on the splits of the compounds and is defined as follows:", "cite_spans": [ { "start": 140, "end": 163, "text": "Koehn and Knight (2003)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Setting", "sec_num": "4.2" }, { "text": "$$\\mathit{precision} = \\frac{\\mathit{correct\\ splits}}{\\mathit{correct\\ splits} + \\mathit{wrong\\ splits}} \\qquad \\mathit{recall} = \\frac{\\mathit{correct\\ splits}}{\\mathit{correct\\ splits} + \\mathit{missing\\ splits}}$$ $$F1 = \\frac{2 \\cdot \\mathit{precision} \\cdot \\mathit{recall}}{\\mathit{precision} + \\mathit{recall}} \\qquad (12)$$", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F1 = 2 \\cdot precision \\cdot recall / (precision + recall)", "eq_num": "(12)" } ], "section": "Evaluation Setting", "sec_num": "4.2" }, { "text": "As unsupervised baselines, we use the semantic analogy-based splitter (SAS) of Daiber et al. (2015) 19 and the split ranking by Koehn and Knight (2003) , called KK.", "cite_spans": [ { "start": 80, "end": 103, "text": "(Daiber et al. 2015) 19", "ref_id": null }, { "start": 129, "end": 152, "text": "Koehn and Knight (2003)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Setting", "sec_num": "4.2" }, { "text": "For the intrinsic evaluation, we chose data sets of various languages. We use one small German data set for tuning the parameters of the methods. This data set consists of 700 manually labeled German nouns from different frequency bands, created by Holz and Biemann (2008) . For the evaluation, we consider two larger German data sets. The first data set comprises 158,653 nouns from the German computer magazine c't and was created by Marek (2006) . 20 As a second data set, we use a noun compound data set of 54,571 nouns from GermaNet, 21 which has been constructed by Henrich and Hinrichs (2011). 22
While converting these data sets for the task of compound splitting, we do not separate words in the gold standard that are formed with prepositional prefixes (e.g., the word Abgang [outflow] is not split into Ab-gang [off walk]).", "cite_spans": [ { "start": 248, "end": 271, "text": "Holz and Biemann (2008)", "ref_id": "BIBREF19" }, { "start": 436, "end": 448, "text": "Marek (2006)", "ref_id": "BIBREF27" }, { "start": 451, "end": 453, "text": "20", "ref_id": null }, { "start": 598, "end": 600, "text": "22", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "4.3" }, { "text": "In addition, we apply our method to a Dutch data set of 21,997 compound nouns and an Afrikaans data set that consists of 77,651 compound nouns. Both data sets have been proposed by van Zaanen et al. (2014). Furthermore, we perform an evaluation on a recent Finnish data set proposed by Shapiro et al. (2017) that comprises 20,001 words. In contrast to the other data sets, it does not only contain compound words but also includes 16,968 words with a single stem that must not be split. To show the language independence of our method, we further report results on data sets for 14 languages that we collected from Wiktionary. 23 ", "cite_spans": [ { "start": 286, "end": 307, "text": "Shapiro et al. (2017)", "ref_id": "BIBREF48" }, { "start": 613, "end": 615, "text": "23", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Data Sets", "sec_num": "4.3" }, { "text": "In order to show the influence of the various candidate sets and to find the best-performing parameters of our method, we use the small German data set with 700 noun compounds. We obtain the highest F1 scores (see Table 17 ) when considering only candidates with a frequency above 50 (wc = 50) and with more than four characters (ml = 5). Furthermore, we append only prefixes and suffixes of three characters or fewer (ms = 3 and mp = 3).", "cite_spans": [], "ref_spans": [ { "start": 214, "end": 222, "text": "Table 17", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Tuning the Method", "sec_num": "4.4" }, { "text": "As observed in Table 17 , the highest precision using the JoBimText similarities is achieved with the similar candidate units. However, the recall is lowest, because for many words no information is available. Using the extended similarities, the precision decreases and the recall increases. Interestingly, we observe the opposite trend for word2vec. However, the best overall performance is achieved with the generated dictionary, which yields an F1 measure of 0.9583 using JoBimText and of 0.9627 using word2vec. Using the geometric mean scoring to select the best compound candidate lifts the F1 measure to 0.9658 using JoBimText and to 0.9675 using the word2vec similarities on this data set. ", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 23, "text": "Table 17", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Tuning the Method", "sec_num": "4.4" }, { "text": "In this section, we first show results for the manually annotated data sets and then demonstrate the multilingual capabilities of our method using a data set that was automatically extracted from Wiktionary. We compare our results to previously available methods, which will be discussed in Section 6.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decompounding Evaluation", "sec_num": "4.5" },
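Before turning to the results, a minimal sketch of the splitwise measure from Equation (12), computed over predicted versus gold split positions (our illustration):

```python
def splitwise_scores(pred_positions, gold_positions):
    """Precision, recall, and F1 over split points, cf. Equation (12)."""
    pred, gold = set(pred_positions), set(gold_positions)
    correct = len(pred & gold)
    wrong = len(pred - gold)    # predicted splits absent from the gold standard
    missing = len(gold - pred)  # gold splits the system did not produce
    precision = correct / (correct + wrong) if pred else 0.0
    recall = correct / (correct + missing) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# For "Bundesfinanzministerium" with gold splits after "Bundes" and "finanz"
# (positions 6 and 12), a prediction of {6} yields precision 1.0 and recall 0.5.
```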
{ "text": "4.5.1 Results for Manually Annotated Data Sets. Now, we compare the performance of our method against the unsupervised baselines and against knowledge-based systems (see Table 18 ). For the 700 nouns, we achieve the highest precision, recall, and F1 measure using our method with similarities from word2vec. Because we have tuned our parameters on this comparably small data set, which might be prone to overfitting, we do not discuss these results in depth but provide them for completeness.", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 166, "text": "Table 18", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Decompounding Evaluation", "sec_num": "4.5" }, { "text": "On the c't data set, the best results are obtained by the (supervised) JWordSplitter (JWS), followed by the supervised Automatische Sprachverarbeitung (ASV) Toolbox and our method. Here, JWS achieves significant improvements over all other methods in terms of F1 score. 24 Nevertheless, our method yields the highest precision value; SAS and KK score lowest.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decompounding Evaluation", "sec_num": "4.5" }, { "text": "Evaluating on the GermaNet data set, our method with similarities from JoBimText is outperformed only by the supervised ASV method. Similar to the results for the 700 nouns, JWS performs worse than the decompounding method from the ASV Toolbox. Whereas our method obtains lower recall than ASV and JWS, it still significantly outperforms the unsupervised baselines (KK and SAS) and yields the overall highest precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decompounding Evaluation", "sec_num": "4.5" }, { "text": "On the Afrikaans data set, we observe higher precision using the baseline method (KK) than using SECOS, as our approach splits more words than the KK method. Whereas the KK approach identifies most of the compounds it splits correctly, many compounds are not detected at all. Here, our method performs best using JoBimText.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decompounding Evaluation", "sec_num": "4.5" }, { "text": "For Dutch, no trained models for JWS and ASV are available. Thus, we did not use these tools but compare to the NL splitter, achieving a competitive precision but lower recall. This is caused by many short split candidates that are not detected due to the ml parameter. However, our method still significantly beats the KK baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decompounding Evaluation", "sec_num": "4.5" }, { "text": "Furthermore, we show results based on the Finnish data set proposed by Shapiro (2016) . Whereas her method performs better in terms of recall in comparison to SECOS, we attain much higher precision and thus also obtain the highest F1 measure. (Table 18: Results based on manually created data sets for German, Dutch, Afrikaans, and Finnish; the best results are marked in bold font, an asterisk (*) indicates that a method performs significantly better than the baseline methods, and two asterisks (**) indicate that a single method significantly outperforms all others.) 
Again, the best results are obtained using similarities from JoBimText, which even significantly outperform the second best results retrieved with our method using similarities from word2vec.", "cite_spans": [ { "start": 69, "end": 83, "text": "Shapiro (2016)", "ref_id": "BIBREF48" } ], "ref_spans": [ { "start": 164, "end": 172, "text": "Table 18", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Decompounding Evaluation", "sec_num": "4.5" }, { "text": "Automatically Extracted Data Sets. In a second evaluation, we report results based on a data set that was automatically extracted from Wiktionary. 25 We extracted all compounds for which we were also able to find their stems in the dictionary. We show only results for languages where we could find at least more than 90 compounds (see Table 19 ). For the evaluation, we computed similarities using the corresponding Wikipedias. In contrast to the previous experiments, we do not restrict to noun compounds only but also include words of other POS. As baseline system, we use the unsupervised KK baseline method. Comparing the results achieved with SECOS and KK, the highest precision is obtained by the word2vec-based KK method. However, recall is always much lower than when using SECOS. As the KK baseline does not use any smoothing, it misses many splits, which results in few but precise word splits. In general, the results for German, Dutch, and Finnish are lower than using the manually annotated data sets. This is expected, as the data set is presumably noisier. In comparison to the experiments in Section 4.5.1, the best results using SECOS, except for Latin, are achieved using similarities from JoBimText and not the ones from word2vec. Although the precision scores are mostly comparable, the recall scores are much lower. Also, for German, we observe performance drops when using word2vec. These performance drops can be explained by the different nature of the Wiktionary compounds. First, the Wiktionary data sets mainly have only one split point marked within the compound word, although these words might consist of more than two stems. We tried to resolve this issue by recursively splitting words with nested compounds also contained in the data set. However, recall changed only marginally. The main reason for the inferior performance of word2vec seems to be that most words in the Wiktionary data set are not contained in the processed corpora. For example, whereas on the German c't data set 19% of the words are not contained in the corpus, 60% of the words are unknown in the German Wiktionary data set. It seems that the dictionary-based approach with JoBimText similarities is less sensitive to unknown words than with word2vec similarities.", "cite_spans": [ { "start": 147, "end": 149, "text": "25", "ref_id": null } ], "ref_spans": [ { "start": 336, "end": 344, "text": "Table 19", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Results for", "sec_num": "4.5.2" }, { "text": "We observe that SECOS is able to achieve precision values above 95% for most languages. Recall, on the other hand, is consistently lower, resulting in F1 scores around 80%. This again confirms that the method is splitting cautiously.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results for", "sec_num": "4.5.2" }, { "text": "For Latin, we observe the lowest performance, which we mainly attribute to its small Wikipedia. Using the JoBimText similarities, only 24 of the 98 compounds are contained in the text at all, and only 10 with a frequency above 10. 
The best results in terms of F1 score are obtained for German (0.8756), Dutch (0.8669), Hungarian (0.8383), and Finnish (0.8544). On average, we obtain an F1 score of 0.8141. The ability to obtain good results also on morphologically rich languages such as Hungarian and Finnish demonstrates the language independence of our method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results for", "sec_num": "4.5.2" }, { "text": "In order to understand the errors of our method in the intrinsic evaluation, we analyzed the compounds that were split incorrectly using JoBimText. Whereas the previous results were reported per split point, we now look at the number of wrongly split entire compounds. For the c't compound data set, our method splits 22.17% of the compounds incorrectly, and we observe 32.6% wrongly split compounds in the Dutch data set (see Table 20 ).", "cite_spans": [], "ref_spans": [ { "start": 420, "end": 428, "text": "Table 20", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.6" }, { "text": "In addition, we analyzed how many compounds have been split into fewer parts (under-split) or more parts (over-split) than in the gold data, or into the correct number of parts at incorrect positions (wrongly-split). For all data sets, we observe a general trend: Our method tends to under-split compounds, due to the parameters ms and mp that suppress very short parts. Compounds that are split at entirely incorrect positions constitute the smallest error class. We also analyzed, for the incorrectly split compounds, how often our method missed a split, performed a wrong split, and split correctly (see the bottom three lines in Table 20 ). (Table 20: Number of compounds that have been split incorrectly with respect to the gold data; we report how many of these compounds have fewer split points (under-split), too many split points (over-split), or the correct number but wrong split points (wrongly-split), and additionally show the total number of missed, wrong, and correct splits for these compounds.) This analysis supports the previous finding: Most errors of our SECOS method consist of missed splits. Depending on the application, this might be less detrimental behavior than splitting wrongly.", "cite_spans": [], "ref_spans": [ { "start": 518, "end": 526, "text": "Table 20", "ref_id": "TABREF1" }, { "start": 1002, "end": 1010, "text": "Table 20", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.6" }, { "text": "In order to enable a fine-grained tokenization into sub-word units, we have introduced an unsupervised method for decompounding words that is based on distributional semantics. We have shown the impact of its components and have tuned its parameters on a small German data set. On six data sets for four different languages, SECOS has been shown to perform competitively with supervised and rule-based tools and to outperform two unsupervised baselines by a large margin. Further, we demonstrated its language independence using automatically extracted compound data sets for 14 languages. Comparing two methods for the generation of distributional semantic models within SECOS, we obtain the best results for German, Dutch, and Afrikaans using word2vec. However, for Finnish the best results are achieved with JoBimText.
On the automatically extracted data sets, JoBimText yields an average F1 score of 0.8122, whereas the word2vec-based method achieves only 0.7705, which we attribute to the larger number of out-of-vocabulary words within the Wiktionary data set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summary on SECOS", "sec_num": "4.7" }, { "text": "In order to show the benefits of using both coarse-grained and fine-grained tokenization, we report results on an information retrieval task. In previous research, the incorporation of compound nouns and MWE information was used successfully in IR (Acosta, Villavicencio, and Moreira 2011; da Silva and Rocha Souza 2012) . Also, splitting compounds turned out to be a useful processing step for improving IR systems (Monz and de Rijke 2001; Koehn and Knight 2003; Witschel and Biemann 2005; Airio 2006) .", "cite_spans": [ { "start": 248, "end": 289, "text": "(Acosta, Villavicencio, and Moreira 2011;", "ref_id": "BIBREF3" }, { "start": 290, "end": 320, "text": "da Silva and Rocha Souza 2012)", "ref_id": null }, { "start": 52, "end": 76, "text": "(Monz and de Rijke 2001;", "ref_id": "BIBREF30" }, { "start": 77, "end": 99, "text": "Koehn and Knight 2003;", "ref_id": "BIBREF23" }, { "start": 100, "end": 126, "text": "Witschel and Biemann 2005;", "ref_id": "BIBREF53" }, { "start": 127, "end": 138, "text": "Airio 2006)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Using Coarse- and Fine-Grained Tokenization for Information Retrieval", "sec_num": "5." }, { "text": "(Figure 4: Listing of a topic from the TREC 2004 Robust Track. Number: 372 Native American casino <desc> Description: Identify documents that discuss the growth of Native American casino gambling. <narr> Narrative: Relevant documents include discussions regarding Native American casino gambling: its social implications, effects on local and Native American economies, and legal aspects related to Native American tribal autonomy. </top>)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 4", "sec_num": null }, { "text": "For this experiment, we selected the TREC 2004 Robust Track (Voorhees 2005) , in which an IR system is evaluated based on 250 topics, for which we use the titles of the topic descriptions as queries. 26 For performing the evaluation, we set up an index of 528,155 documents from the TREC Disks 4 and 5 (without the Congressional Record on Disk 4). We use Lucene 27 with Okapi BM25 and build indices based on the words of the entire documents and on the decompounded words within the documents, and we also add indices for the detected MWEs within the documents that have a DRUID score above 0.3, 0.5, and 0.7. In order to compute the models for decompounding and MWE detection, we use an English Wikipedia dump. This experiment focuses on demonstrating the impact of the additional information gained by our methods in an extrinsic evaluation, rather than aiming at state-of-the-art retrieval performance. Furthermore, we want to highlight that we do not apply any language-dependent information. Thus, the results should generalize across languages.", "cite_spans": [ { "start": 190, "end": 205, "text": "(Voorhees 2005)", "ref_id": "BIBREF51" }, { "start": 322, "end": 324, "text": "26", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Using Coarse- and Fine-Grained Tokenization for Information Retrieval", "sec_num": "5." }, { "text": "For the query, we use the title of each topic. In Figure 4 , we show all content that is available for topic 372. For building the query, we only use the title.
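To make the query construction concrete, a minimal sketch of how the three optional query views could be assembled from a topic title; decompound() and extract_mwes() stand in for SECOS and the DRUID-based MWE lexicon, and the field names are illustrative rather than our exact Lucene configuration:

```python
def build_query_views(title, decompound, extract_mwes, druid_threshold=0.7):
    """Assemble the optional query views for one topic title."""
    tokens = title.split()
    # Decompounded view: each token is replaced by its (possibly trivial) split.
    decompounded = [part for tok in tokens for part in decompound(tok)]
    # MWE view: only multiword units whose DRUID score exceeds the threshold.
    mwes = [m for m, score in extract_mwes(title) if score >= druid_threshold]
    return {"tokens": tokens, "decompounded": decompounded, "mwes": mwes}

# For the title "Native American casino", the token and decompounded views
# both contain the title words, and the MWE view contains "Native American";
# a word like "hydroelectric" contributes "hydro" and "electric"
# to the decompounded view.
```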
Using the description (<desc>) or the narrative (<narr>) requires further pre-processing and did not yield better scores than using solely the title. We combine the different fields for building queries, considering all fields as optional. Building the query for the example using tokens, decompounded tokens, and MWEs, we obtain the title itself when querying against both the tokens and the decompounded tokens. In addition, we query for the MWE Native American. As English does not contain many close compounds, the decompounding does not apply to many words and queries. However, words like hydroelectric will be split into hydro and electric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Coarse- and Fine-Grained Tokenization for Information Retrieval", "sec_num": "5." }, { "text": "We show the mean average precision (MAP) scores for various combinations of the queries in Table 21 .", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 99, "text": "Table 21", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Using Coarse- and Fine-Grained Tokenization for Information Retrieval", "sec_num": "5." }, { "text": "As the queries use only words from the titles, querying solely against the MWE index does not make sense, as not all titles contain MWEs. We observe that using the tokens of the content results in better MAP scores than using the decompounded tokens. 28", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Coarse- and Fine-Grained Tokenization for Information Retrieval", "sec_num": "5." }, { "text": "(Table 21: Results on the information retrieval TREC 7 task using compound and MWE information.) Creating queries with both tokens and decompounded tokens results in higher MAP scores (0.2038). Combining the original content with MWE information, we obtain inferior results when considering MWEs with a low threshold (0.3 and 0.5) and gain some improvements when using the index with MWEs of high quality. Adding the MWE information to the original tokens and the decompounded tokens improves the results only when indexing MWEs with high scores (above 0.7). This combination performs best; however, none of the improvements in this experiment are significant with respect to the token-only baseline (using the t-test and the Wilcoxon rank sum test). Inspecting the interpolated precision-recall curve (see Figure 5 ), we also observe that the best results are obtained using MWE information in combination with compounds and content words.", "cite_spans": [], "ref_spans": [ { "start": 772, "end": 780, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Using Coarse- and Fine-Grained Tokenization for Information Retrieval", "sec_num": "5." }, { "text": "The generation of MWE dictionaries has drawn much attention in the field of NLP. Early computational approaches (e.g., Justeson and Katz 1995) use POS sequences as MWE extractors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work on Merging Words", "sec_num": "6.1" }, { "text": "Other approaches, relying on word frequency, statistically verify the hypothesis that the parts of the MWE occur together more often than would be expected by chance (Evert 2005; Ramisch 2012) . Among the first measures that consider context information (co-occurrences) are the C-value and the NC-value, introduced by Frantzi, Ananiadou, and Tsujii (1998) . These methods first extract candidates using POS information and then compute scores based on the frequency of the MWE and the frequency of nested MWE candidates. The method described by Wermter and Hahn (2005) is based on the limited modifiability of MWEs. 
For this, they introduce a measure that combines frequencies of modifications of the candidate, where modifications are considered as occurrences of the candidate where a single word is replaced with a different one.", "cite_spans": [ { "start": 169, "end": 181, "text": "(Evert 2005;", "ref_id": "BIBREF12" }, { "start": 182, "end": 195, "text": "Ramisch 2012)", "ref_id": "BIBREF35" }, { "start": 332, "end": 360, "text": "Ananiadou, and Tsujii (1998)", "ref_id": null }, { "start": 550, "end": 573, "text": "Wermter and Hahn (2005)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work on Merging Words", "sec_num": "6.1" }, { "text": "A newer method is introduced by Lossio-Ventura et al. 2014, who re-rank scores based on an extension of the C-value, which uses a POS-based probability and an inverse document frequency. Using different measures and learning a classifier that predicts the multiwordness was first proposed by Pecina (2010), who, however, restricts his experiments to two-word MWEs for the Czech language only. Korkontzelos (2010) comparatively evaluates several MWE ranking measures. The best MWE extractor reported in his work is the scorer by Nakagawa and Mori (2002, 2003) , who use the un-nested frequency (called marginal frequency) of each candidate and multiply these by the geometric mean of the distinct neighbor of each word within the candidate.", "cite_spans": [ { "start": 528, "end": 540, "text": "Nakagawa and", "ref_id": "BIBREF32" }, { "start": 541, "end": 558, "text": "Mori (2002, 2003)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work on Merging Words", "sec_num": "6.1" }, { "text": "Distributional semantics is mostly used to detect compositionality of MWEs (Katz and Giesbrecht 2006; Salehi, Cook, and Baldwin 2014) . For this, most approaches compare the context vector of a MWE with the combined vectors based on the constituent words of the MWE. Then, the similarity between the vectors is used as the degree of compositionality. In machine translation, words are sometimes considered as multiwords if they can be translated as single term (cf. Bouamor, Semmar, and Zweigenbaum 2012; Anastasiou 2010). Although this follows the same intuition as our uniqueness measure described in Section 3.2.1, we do not require any bilingual corpora, but rather test if a multiword can likely be substituted for a single word.", "cite_spans": [ { "start": 75, "end": 101, "text": "(Katz and Giesbrecht 2006;", "ref_id": "BIBREF21" }, { "start": 102, "end": 133, "text": "Salehi, Cook, and Baldwin 2014)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work on Merging Words", "sec_num": "6.1" }, { "text": "Regarding the evaluation, mostly precision at k (P@k) and recall at k (R@k) are applied (e.g., Frantzi, Ananiadou, and Tsujii 1998; Evert 2005; Lossio-Ventura et al. 2014) . Another general approach is using the AP, which is also used in IR (Thater, Dinu, and Pinkal 2009) and has also been applied by Ramisch, De Araujo, and Villavicencio (2012).", "cite_spans": [ { "start": 95, "end": 131, "text": "Frantzi, Ananiadou, and Tsujii 1998;", "ref_id": null }, { "start": 132, "end": 143, "text": "Evert 2005;", "ref_id": "BIBREF12" }, { "start": 144, "end": 171, "text": "Lossio-Ventura et al. 
2014)", "ref_id": "BIBREF25" }, { "start": 241, "end": 272, "text": "(Thater, Dinu, and Pinkal 2009)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work on Merging Words", "sec_num": "6.1" }, { "text": "Approaches to automatic decompounding can be classified into corpus-driven approaches and supervised approaches. Corpus-driven approaches are usually informed using frequency lists (Koehn and Knight 2003) , probabilistic models (Schiller 2005) , parallel corpora (Koehn and Knight 2003; Macherey et al. 2011) , or periphrases (i.e., reformulations) in large monolingual corpora (Holz and Biemann 2008) . As with other NLP tasks, supervised approaches are usually superior to unsupervised approaches if sufficient training material is available. A straightforward yet effective supervised decompounding system is contained in the ASV Toolbox (Biemann et al. 2008) , which uses trie-based (Morrison 1968; Witschel and Biemann 2005) datastructures for recursively splitting compounds based on training set splits. Alfonseca, Bilac, and Pharies (2008) combine several signals, including web anchor text, in an SVM-based supervised splitter. More recently, Shapiro (2016) proposes another supervised method that trains a morphology component on compounds and uses a language model and handcrafted constraints in order to split compounds. The method is evaluated on a Finnish data set.", "cite_spans": [ { "start": 181, "end": 204, "text": "(Koehn and Knight 2003)", "ref_id": "BIBREF23" }, { "start": 228, "end": 243, "text": "(Schiller 2005)", "ref_id": "BIBREF45" }, { "start": 263, "end": 286, "text": "(Koehn and Knight 2003;", "ref_id": "BIBREF23" }, { "start": 287, "end": 308, "text": "Macherey et al. 2011)", "ref_id": null }, { "start": 378, "end": 401, "text": "(Holz and Biemann 2008)", "ref_id": "BIBREF19" }, { "start": 641, "end": 662, "text": "(Biemann et al. 2008)", "ref_id": "BIBREF7" }, { "start": 687, "end": 702, "text": "(Morrison 1968;", "ref_id": "BIBREF31" }, { "start": 703, "end": 729, "text": "Witschel and Biemann 2005)", "ref_id": "BIBREF53" }, { "start": 811, "end": 847, "text": "Alfonseca, Bilac, and Pharies (2008)", "ref_id": "BIBREF5" }, { "start": 952, "end": 966, "text": "Shapiro (2016)", "ref_id": "BIBREF48" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work on Splitting Words", "sec_num": "6.2" }, { "text": "A widely used German decompounder is JWS, which is based on word lists of compound parts as well as manually crafted blacklists and whitelists. 29 The NL Splitter uses similar technology for Dutch compound decomposition. 30 An unsupervised approach is presented in Koehn and Knight (2003) : Out of several splits as given by matching parts of the compound to a vocabulary list, they pick the split with the highest geometric mean of word frequencies, which is entirely corpus-driven but ignores semantic relations between the compound and its parts. Daiber et al. (2015) propose an unsupervised system using an analogy-based approach that relies on word embeddings. Ziering and van der Plas (2016) introduced an unsupervised method based on morphology that is informed by lemmatization information. 
Although this approach is unsupervised, it is not knowledge-free, as it is informed by a language-specific morphology component.", "cite_spans": [ { "start": 144, "end": 146, "text": "29", "ref_id": null }, { "start": 221, "end": 223, "text": "30", "ref_id": null }, { "start": 265, "end": 288, "text": "Koehn and Knight (2003)", "ref_id": "BIBREF23" }, { "start": 550, "end": 570, "text": "Daiber et al. (2015)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work on Splitting Words", "sec_num": "6.2" }, { "text": "Decompounding is evaluated either intrinsically or in a task that benefits from it, for example, information retrieval (Monz and de Rijke 2001) , machine translation (Koehn and Knight 2003; Macherey et al. 2011) , or automatic speech recognition (Adda-Decker and Adda 2000; Ordelman, van Hessen, and de Jong 2003).", "cite_spans": [ { "start": 119, "end": 143, "text": "(Monz and de Rijke 2001)", "ref_id": "BIBREF30" }, { "start": 166, "end": 189, "text": "(Koehn and Knight 2003;", "ref_id": "BIBREF23" }, { "start": 190, "end": 211, "text": "Macherey et al. 2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work on Splitting Words", "sec_num": "6.2" }, { "text": "In this article, we have introduced fine-grained and coarse-grained tokenization methods. Whereas normal tokenization considers the separation of words and interpunctuation marks, we have introduced a method that joins multiple words forming a single concept and another method that splits words formed from several stems. Both methods are unsupervised and knowledge-free and rely only on distributional semantic models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "As a side note, we have evaluated two models for distributional similarity in this context, showing that the compound splitting method works slightly better with neural word2vec similarities when most of the words are also contained in the corpus used for the similarity computations. For the MWE identification, we obtain significantly better results when using similarities on the basis of the sparse count-based JoBimText method, which we attribute to the different characteristics of the similarity neighborhoods produced by these models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "For the detection of MWEs, we have evaluated our method using two annotated corpora, an English medical corpus and a French corpus. In addition, we have demonstrated the capability of detecting MWEs for 32 languages using an automatic evaluation based on Wiktionary and Wikipedia. Furthermore, in order to split words, we have proposed SECOS and shown its performance on five gold standard data sets for German, Dutch, Afrikaans, and Finnish. We obtain state-of-the-art performance on two out of five data sets and additionally show the language independence of the method using data sets automatically extracted from Wiktionary for 14 languages. Lastly, we have shown that incorporating both coarse- and fine-grained tokenization results in performance gains for information retrieval.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7." }, { "text": "In future work, we want to expand the fine-grained tokenization and identify even smaller units within the compounds, as missed splits are one of the major error classes of our decompounding method. 
{ "text": "In future work, we want to expand the fine-grained tokenization and identify even smaller units within the compounds; missing such smaller units is also one of the major error classes for decompounding. Furthermore, we want to extend the decompounding method to detect not only compounds but also morphemes. For the coarse-grained tokenization, we want to develop methods that allow labeling the parts of MWEs. Finally, we propose to demonstrate the impact of our fine-grained and coarse-grained tokenization on further tasks such as machine translation (Koehn and Knight 2003) and question answering (Rinaldi et al. 2003; de Marneffe, Pad\u00f3, and Manning 2009), and to apply it to texts of different languages and domains.", "cite_spans": [ { "start": 534, "end": 544, "text": "(Koehn and", "ref_id": "BIBREF23" }, { "start": 545, "end": 599, "text": "Knight 2003), question answering (Rinaldi et al. 2003;", "ref_id": null }, { "start": 600, "end": 636, "text": "de Marneffe, Pad\u00f3, and Manning 2009)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "8." }, { "text": "https://code.google.com/archive/p/word2vec/. 2 https://bitbucket.org/yoavgo/word2vecf.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Frantzi, Ananiadou, and Tsujii (1998) do not specify the context window size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "They adjust the logarithmic length in order to be able to use the C-value to detect single-word terms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The DRUID implementation is available as open source, and pre-computed models can be found at: http://www.jobimtext.org/druid.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Our experiments revealed that multiplicative combinations consistently performed worse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is achieved by chance for the MF, as it is equal to the frequency. The different scores are due to the randomly sorted tied scores used during our evaluation and reflect the variance of randomness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use Wikipedia dumps from late 2016. 12 For querying terms, we use the Wiktionary API: https://en.wiktionary.org/w/api.php, February 2017.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The MeSH dictionary is available at: http://www.nlm.nih.gov/mesh/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "An implementation of SECOS is available at: https://github.com/riedlma/SECOS. Furthermore, we provide models for all the languages that have been processed in this article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We set \u03b5 = 0.01. Within the range \u03b5 \u2208 [0.0001, 1], we observe marginally higher scores when using smaller values. 16 Although our method mostly does not assume language knowledge, we uppercase the first letter of each w_i when we apply our method to German nouns. 17 Available at: http://webcorpora.org/. 18 The sentences are extracted from: http://lib.rus.ec.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/jodaiber/semantic_compound_splitting.
20 Available at: http://heise.de/ct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We perform a Wilcoxon signed-rank test between the F1 scores of each candidate assuming p < 0.01. However, we only obtain a p-value below 0.5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For the extraction of compounds, we rely on compounds listed on: https://en.wiktionary.org/wiki/Category:Compound_words_by_language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://trec.nist.gov/data/robust/04.guidelines.html. 27 We use version 6.6.0, which is available at: https://lucene.apache.org/core/. 28 As we do not perform any pre-processing like POS-tagging, we split all words, not only nouns. This might additionally introduce some mismatches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/danielnaber/jwordsplitter. 30 http://ilps.science.uva.nl/resources/compound-splitter-nl/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Results of SECOS for compounds that have been automatically extracted from Wiktionary for 14 languages. We show results for the KK baseline and the SECOS method using similarities computed with JoBimText and word2vec. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 19", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "This only affects the GermaNet data set and reduces the effective test set to 53", "authors": [], "year": null, "venue": "We follow Schiller (2005) and remove all words including dashes", "volume": "118", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "We follow Schiller (2005) and remove all words including dashes. This only affects the GermaNet data set and reduces the effective test set to 53,118 nouns.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The data set was", "authors": [], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "The data set was collected in February 2017 and is available here: http://ltdata1.informatik.uni-hamburg.de/SECOS/datasets/wiktionary_compounds.tar.gz.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Enriching a French Treebank", "authors": [ { "first": "References", "middle": [], "last": "Abeill\u00e9", "suffix": "" }, { "first": "Anne", "middle": [], "last": "", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Barrier", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "2233--2236", "other_ids": {}, "num": null, "urls": [], "raw_text": "References Abeill\u00e9, Anne and Nicolas Barrier. 2004. Enriching a French Treebank. In Proceedings of the Fourth International Conference on Language Resources and Evaluation, pages 2233-2236, Lisbon.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Adda-Decker, Martine and Gilles Adda. 2000. 
Morphological decomposition for ASR in German", "authors": [ { "first": "Otavio", "middle": [], "last": "Acosta", "suffix": "" }, { "first": "Aline", "middle": [], "last": "Costa", "suffix": "" }, { "first": "Viviane", "middle": [], "last": "Villavicencio", "suffix": "" }, { "first": "", "middle": [], "last": "Pereira Moreira", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Workshop on Multiword Expressions: from Parsing and Generation to the Real World", "volume": "", "issue": "", "pages": "129--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Acosta, Otavio Costa, Aline Villavicencio, and Viviane Pereira Moreira. 2011. Identification and treatment of multiword expressions applied to information retrieval. In Proceedings of the Workshop on Multiword Expressions: from Parsing and Generation to the Real World, pages 101-109, Portland, OR. Adda-Decker, Martine and Gilles Adda. 2000. Morphological decomposition for ASR in German. In Proceedings of the Workshop on Phonetics and Phonology in ASR, pages 129-143, Saarbr\u00fccken.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Word normalization and decompounding in mono-and bilingual IR", "authors": [ { "first": "Eija", "middle": [], "last": "Airio", "suffix": "" } ], "year": 2006, "venue": "Information Retrieval", "volume": "9", "issue": "3", "pages": "249--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Airio, Eija. 2006. Word normalization and decompounding in mono-and bilingual IR. Information Retrieval, 9(3):249-271.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Decompounding query keywords from compounding languages", "authors": [ { "first": "Enrique", "middle": [], "last": "Alfonseca", "suffix": "" }, { "first": "Slaven", "middle": [], "last": "Bilac", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Pharies", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies", "volume": "", "issue": "", "pages": "253--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alfonseca, Enrique, Slaven Bilac, and Stefan Pharies. 2008. Decompounding query keywords from compounding languages. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies, pages 253-256, Columbus, OH.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Idiom Treatment Experiments in Machine Translation", "authors": [ { "first": "Dimitra", "middle": [], "last": "Anastasiou", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anastasiou, Dimitra. 2010. Idiom Treatment Experiments in Machine Translation. Ph.D. 
thesis, Universit\u00e4t des Saarlandes, Saarbr\u00fccken, Germany.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "ASV Toolbox: a modular collection of language exploration tools", "authors": [ { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Uwe", "middle": [], "last": "Quasthoff", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Heyer", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Holz", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the International Conference on Language Resources and Evaluation, LREC 2008", "volume": "", "issue": "", "pages": "1760--1767", "other_ids": {}, "num": null, "urls": [], "raw_text": "Biemann, Chris, Uwe Quasthoff, Gerhard Heyer, and Florian Holz. 2008. ASV Toolbox: a modular collection of language exploration tools. In Proceedings of the International Conference on Language Resources and Evaluation, LREC 2008, pages 1760-1767, Marrakech.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Text: Now in 2D! A framework for lexical expansion with contextual similarity", "authors": [ { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Riedl", "suffix": "" } ], "year": 2013, "venue": "Journal of Language Modelling", "volume": "1", "issue": "1", "pages": "55--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Biemann, Chris and Martin Riedl. 2013. Text: Now in 2D! A framework for lexical expansion with contextual similarity. Journal of Language Modelling, 1(1): 55-95.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Tie-breaking bias: Effect of an uncontrolled parameter on information retrieval evaluation", "authors": [ { "first": "Olivier", "middle": [], "last": "Blanc", "suffix": "" }, { "first": "Matthieu", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Watrin", "suffix": "" }, { "first": ";", "middle": [], "last": "Prague", "suffix": "" }, { "first": "", "middle": [], "last": "Bouamor", "suffix": "" }, { "first": "Nasredine", "middle": [], "last": "Dhouha", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Semmar", "suffix": "" }, { "first": "", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 12th International Conference on Implementation and Application of Automata", "volume": "", "issue": "", "pages": "112--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blanc, Olivier, Matthieu Constant, and Patrick Watrin. 2007. A finite-state super-chunker. In Proceedings of the 12th International Conference on Implementation and Application of Automata, pages 306-308, Prague. Bouamor, Dhouha, Nasredine Semmar, and Pierre Zweigenbaum. 2012. Identifying bilingual multi-word expressions for statistical machine translation. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, pages 674-679, Istanbul. Cabanac, Guillaume, Gilles Hubert, Mohand Boughanem, and Claude Chrisment. 2010. Tie-breaking bias: Effect of an uncontrolled parameter on information retrieval evaluation. 
In Conference on Multilingual and Multimodal Information Access Evaluation, pages 112-123, Padua.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Splitting compounds by semantic analogy", "authors": [ { "first": "Joachim", "middle": [], "last": "Daiber", "suffix": "" }, { "first": "Lautaro", "middle": [], "last": "Quiroz", "suffix": "" }, { "first": "Roger", "middle": [], "last": "Wechsler", "suffix": "" }, { "first": "Stella", "middle": [], "last": "Frank", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 1st Deep Machine Translation Workshop", "volume": "", "issue": "", "pages": "20--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daiber, Joachim, Lautaro Quiroz, Roger Wechsler, and Stella Frank. 2015. Splitting compounds by semantic analogy. In Proceedings of the 1st Deep Machine Translation Workshop, pages 20-28, Prague.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Towards automatic extraction of monolingual and bilingual terminology", "authors": [ { "first": "B\u00e9atrice", "middle": [], "last": "Daille", "suffix": "" }, { "first": "\u00c9ric", "middle": [], "last": "Gaussier", "suffix": "" }, { "first": "Jean-Marc", "middle": [], "last": "Lang\u00e9", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 15th Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "515--521", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daille, B\u00e9atrice,\u00c9ric Gaussier, and Jean-Marc Lang\u00e9. 1994. Towards automatic extraction of monolingual and bilingual terminology. In Proceedings of the 15th Conference on Computational Linguistics -Volume 1, pages 515-521, Kyoto.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The C-value/ NC-value method of automatic recognition for multi-word terms", "authors": [ { "first": "Stefan", "middle": [], "last": "Evert", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Second European Conference on Research and Advanced Technology for Digital Libraries", "volume": "", "issue": "", "pages": "585--604", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evert, Stefan. 2005. The Statistics of Word Cooccurrences: Word Pairs and Collocations. Ph.D. thesis, Institut f\u00fcr maschinelle Sprachverarbeitung, University of Stuttgart, Germany. Evert, Stefan. 2008. A lexicographic evaluation of German adjective-noun collocations. In Proceedings of the LREC 2008 Workshop Towards a Shared Task for Multiword Expressions, pages 3-6, Marrakech. Frantzi, Katerina T., Sophia Ananiadou, and Jun-ichi Tsujii. 1998. The C-value/ NC-value method of automatic recognition for multi-word terms. In Proceedings of the Second European Conference on Research and Advanced Technology for Digital Libraries, pages 585-604, Heraklion.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Explorations in Automatic Thesaurus Discovery", "authors": [ { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grefenstette, Gregory. 1994. Explorations in Automatic Thesaurus Discovery. 
Kluwer Academic Publishers.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Methods in Structural Linguistics", "authors": [ { "first": "Zellig", "middle": [], "last": "Harris", "suffix": "" }, { "first": "", "middle": [], "last": "Sabbetai", "suffix": "" } ], "year": 1951, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harris, Zellig Sabbetai. 1951. Methods in Structural Linguistics. University of Chicago Press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Text preparation through extended tokenization", "authors": [ { "first": "Marcus", "middle": [], "last": "Hassler", "suffix": "" }, { "first": "G\u00fcnther", "middle": [], "last": "Fliedl", "suffix": "" } ], "year": 2006, "venue": "Data Mining VII: Data, Text and Web Mining and Their Business Applications", "volume": "37", "issue": "", "pages": "13--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hassler, Marcus and G\u00fcnther Fliedl. 2006. Text preparation through extended tokenization. Data Mining VII: Data, Text and Web Mining and Their Business Applications, 37:13-21.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Determining immediate constituents of compounds in GermaNet", "authors": [ { "first": "Verena", "middle": [], "last": "Henrich", "suffix": "" }, { "first": "Erhard", "middle": [], "last": "Hinrichs", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the International Conference on Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "420--426", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henrich, Verena and Erhard Hinrichs. 2011. Determining immediate constituents of compounds in GermaNet. In Proceedings of the International Conference on Recent Advances in Natural Language Processing 2011, pages 420-426, Hissar.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Multilingual models for compositional distributed semantics", "authors": [ { "first": "Karl", "middle": [], "last": "Hermann", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Moritz", "suffix": "" }, { "first": "", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "58--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hermann, Karl Moritz and Phil Blunsom. 2014. Multilingual models for compositional distributed semantics. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 58-68, Baltimore, MA.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Noun classification from predicate-argument structures", "authors": [ { "first": "Donald", "middle": [], "last": "Hindle", "suffix": "" } ], "year": 1990, "venue": "Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "268--275", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hindle, Donald. 1990. Noun classification from predicate-argument structures. 
In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, pages 268-275, Pittsburgh, PA.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Technical terminology: Some linguistic properties and an algorithm for identification in text", "authors": [ { "first": "Florian", "middle": [], "last": "Holz", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": ";", "middle": [], "last": "Justeson", "suffix": "" }, { "first": "John", "middle": [ "S" ], "last": "Slava", "suffix": "" }, { "first": "M", "middle": [], "last": "Katz", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 9th International Conference on Computational Linguistics and Intelligent Text Processing (CICLING)", "volume": "1", "issue": "", "pages": "9--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Holz, Florian and Chris Biemann. 2008. Unsupervised and knowledge-free learning of compound splits and periphrases. In Proceedings of the 9th International Conference on Computational Linguistics and Intelligent Text Processing (CICLING), pages 117-127, Haifa. Justeson, John S. and Slava M. Katz. 1995. Technical terminology: Some linguistic properties and an algorithm for identification in text. Natural Language Engineering, 1:9-27.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A method for tokenizing text", "authors": [ { "first": "Ronald", "middle": [ "M" ], "last": "Kaplan", "suffix": "" } ], "year": 2005, "venue": "Inquiries into Words, Constraints and Contexts, CSLI Studies in Computational Linguistics Online", "volume": "", "issue": "", "pages": "55--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaplan, Ronald M. 2005. A method for tokenizing text. In M. Butt, M. Dalrymple, and T. H. King, editors. Inquiries into Words, Constraints and Contexts, CSLI Studies in Computational Linguistics Online, pages 55-64.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Automatic identification of non-compositional multi-word expressions using latent semantic analysis", "authors": [ { "first": "Graham", "middle": [], "last": "Katz", "suffix": "" }, { "first": "Eugenie", "middle": [], "last": "Giesbrecht", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties", "volume": "", "issue": "", "pages": "12--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katz, Graham and Eugenie Giesbrecht. 2006. Automatic identification of non-compositional multi-word expressions using latent semantic analysis. In Proceedings of the Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties, pages 12-19, Sydney.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "GENIA corpus -a semantically annotated corpus for bio-textmining", "authors": [ { "first": "Jin", "middle": [ "-" ], "last": "Kim", "suffix": "" }, { "first": "Tomoko", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Yuka", "middle": [], "last": "Ohta", "suffix": "" }, { "first": "Jun'ichi", "middle": [], "last": "Tateisi", "suffix": "" }, { "first": "", "middle": [], "last": "Tsujii", "suffix": "" } ], "year": 2003, "venue": "Bioinformatics", "volume": "19", "issue": "1", "pages": "180--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kim, Jin-Dong, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. GENIA corpus -a semantically annotated corpus for bio-textmining. 
Bioinformatics, 19(Suppl 1):i180-i182.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Unsupervised Learning of Multiword Expressions", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "302--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koehn, Philipp and Kevin Knight. 2003. Empirical methods for compound splitting. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics, pages 187-193, Budapest. Korkontzelos, Ioannis. 2010. Unsupervised Learning of Multiword Expressions. Ph.D. thesis, University of York, UK. Levy, Omer and Yoav Goldberg. 2014. Dependency-based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 302-308, Baltimore, MA.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Using syntactic dependency as local context to resolve word sense ambiguity", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "64--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, Dekang. 1997. Using syntactic dependency as local context to resolve word sense ambiguity. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics, pages 64-71, Madrid.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Language-independent compound splitting with morphological operations", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" }, { "first": "", "middle": [], "last": "Montreal", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Lossio-Ventura", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Antonio", "suffix": "" }, { "first": "Mathieu", "middle": [], "last": "Jonquet", "suffix": "" }, { "first": "Maguelonne", "middle": [], "last": "Roche", "suffix": "" }, { "first": "", "middle": [], "last": "Teisseire", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1395--1404", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lin, Dekang. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 17th International Conference on Computational Linguistics, pages 768-774, Montreal. Lossio-Ventura, Juan Antonio, Clement Jonquet, Mathieu Roche, and Maguelonne Teisseire. 2014. Yet another ranking function for automatic multiword term extraction. In Proceedings of the 9th International Conference on Natural Language Processing, pages 52-64, Warsaw. Macherey, Klaus, Andrew M. Dai, David Talbot, Ashok C. Popat, and Franz Och. 2011. Language-independent compound splitting with morphological operations. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1395-1404, Portland, OR.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, Christopher D. and Hinrich Sch\u00fctze. 1999. Foundations of Statistical Natural Language Processing. MIT Press.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Analysis of German compounds using weighted finite state transducers", "authors": [ { "first": "Torsten", "middle": [], "last": "Marek", "suffix": "" }, { "first": "", "middle": [], "last": "Germany", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 2009 Workshop on Applied Textual Inference", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marek, Torsten. 2006. Analysis of German compounds using weighted finite state transducers. Bachelor thesis, Universit\u00e4t T\u00fcbingen, Germany. de Marneffe, Marie-Catherine, Sebastian Pad\u00f3, and Christopher D. Manning. 2009. Multi-word expressions in textual inference: Much ado about nothing? In Proceedings of the 2009 Workshop on Applied Textual Inference, pages 1-9, Suntec.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1310--1318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In Proceedings of the International Conference on Machine Learning, pages 1310-1318, Scottsdale, AZ.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Ambient search: A document retrieval system for speech streams", "authors": [ { "first": "Benjamin", "middle": [], "last": "Milde", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Wacker", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Radomski", "suffix": "" }, { "first": "Max", "middle": [], "last": "M\u00fchlh\u00e4user", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "2082--2091", "other_ids": {}, "num": null, "urls": [], "raw_text": "Milde, Benjamin, Jonas Wacker, Stefan Radomski, Max M\u00fchlh\u00e4user, and Chris Biemann. 2016. Ambient search: A document retrieval system for speech streams. 
In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2082-2091, Osaka.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Shallow morphological analysis in monolingual information retrieval for Dutch, German, and Italian", "authors": [ { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "De Rijke", "suffix": "" } ], "year": 2001, "venue": "Evaluation of Cross-Language Information Retrieval Systems, Second Workshop of the Cross-Language Evaluation Forum", "volume": "", "issue": "", "pages": "262--277", "other_ids": {}, "num": null, "urls": [], "raw_text": "Monz, Christof and Maarten de Rijke. 2001. Shallow morphological analysis in monolingual information retrieval for Dutch, German, and Italian. In Evaluation of Cross-Language Information Retrieval Systems, Second Workshop of the Cross-Language Evaluation Forum, pages 262-277, Darmstadt.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "PATRICIA -Practical Algorithm To Retrieve Information Coded in Alphanumeric", "authors": [ { "first": "Donald", "middle": [ "R" ], "last": "Morrison", "suffix": "" } ], "year": 1968, "venue": "Journal of the ACM", "volume": "15", "issue": "4", "pages": "514--534", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morrison, Donald R. 1968. PATRICIA - Practical Algorithm To Retrieve Information Coded in Alphanumeric. Journal of the ACM, 0004-5411 15(4):514-534.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Automatic term recognition based on statistics of compound nouns and their components", "authors": [ { "first": "Hiroshi", "middle": [], "last": "Nakagawa", "suffix": "" }, { "first": "Tatsunori", "middle": [], "last": "Mori", "suffix": "" } ], "year": 2002, "venue": "Proceedings of COLING-02 on COMPUTERM 2002: Second International Workshop on Computational Terminology", "volume": "14", "issue": "", "pages": "201--219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nakagawa, Hiroshi and Tatsunori Mori. 2002. A simple but powerful automatic term extraction method. In Proceedings of COLING-02 on COMPUTERM 2002: Second International Workshop on Computational Terminology -Volume 14, pages 1-7, Taipei. Nakagawa, Hiroshi and Tatsunori Mori. 2003. Automatic term recognition based on statistics of compound nouns and their components. Terminology, 9(2):201-219.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Compound decomposition in Dutch large vocabulary speech recognition", "authors": [ { "first": "Roeland", "middle": [], "last": "Ordelman", "suffix": "" }, { "first": "Arjan", "middle": [], "last": "Van Hessen", "suffix": "" }, { "first": "Franciska", "middle": [], "last": "De", "suffix": "" }, { "first": "Jong", "middle": [], "last": "", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the European Conference on Speech Communication and Technology", "volume": "", "issue": "", "pages": "225--228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ordelman, Roeland, Arjan van Hessen, and Franciska de Jong. 2003. Compound decomposition in Dutch large vocabulary speech recognition. In Proceedings of the European Conference on Speech Communication and Technology, pages 225-228, Geneva. Pecina, Pavel. 2010. 
Lexical association measures and collocation extraction.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "A Generic and Open Framework for Multiword Expressions Treatment: From Acquisition to Applications", "authors": [ { "first": "Carlos", "middle": [], "last": "Ramisch", "suffix": "" }, { "first": "", "middle": [], "last": "Brazil", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Ramisch", "suffix": "" }, { "first": "Vitor", "middle": [], "last": "De Araujo", "suffix": "" }, { "first": "Aline", "middle": [], "last": "Villavicencio", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Student Research Workshop of the 50th Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramisch, Carlos. 2012. A Generic and Open Framework for Multiword Expressions Treatment: From Acquisition to Applications. Ph.D. thesis, Universidade Federal Do Rio Grande do Sul, Brazil. Ramisch, Carlos, Vitor De Araujo, and Aline Villavicencio. 2012. A broad evaluation of techniques for automatic acquisition of multiword expressions. In Proceedings of the Student Research Workshop of the 50th Meeting of the Association for Computational Linguistics, pages 1-6, Jeju Island.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Exploiting the Leipzig corpora collection", "authors": [ { "first": "Matthias", "middle": [], "last": "Richter", "suffix": "" }, { "first": "Uwe", "middle": [], "last": "Quasthoff", "suffix": "" }, { "first": "Erla", "middle": [], "last": "Hallsteinsd\u00f3ttir", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Fifth Slovenian and First International Language Technologies Conference", "volume": "", "issue": "", "pages": "68--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richter, Matthias, Uwe Quasthoff, Erla Hallsteinsd\u00f3ttir, and Chris Biemann. 2006. Exploiting the Leipzig corpora collection. In Proceedings of the Fifth Slovenian and First International Language Technologies Conference, pages 68-73, Ljubljana.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Unsupervised Methods for Learning and Using Semantics of Natural Language", "authors": [ { "first": "Martin", "middle": [], "last": "Riedl", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riedl, Martin. 2016. Unsupervised Methods for Learning and Using Semantics of Natural Language. Ph.D. thesis, Technische Universit\u00e4t Darmstadt, Germany.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Scaling to large 3 data: An efficient and effective method to compute distributional thesauri", "authors": [ { "first": "Martin", "middle": [], "last": "Riedl", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "884--890", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riedl, Martin and Chris Biemann. 2013. Scaling to large 3 data: An efficient and effective method to compute distributional thesauri. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 884-890, Seattle, WA.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "A single word is not enough: Ranking multiword expressions using distributional semantics", "authors": [ { "first": "Martin", "middle": [], "last": "Riedl", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2430--2440", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riedl, Martin and Chris Biemann. 2015. A single word is not enough: Ranking multiword expressions using distributional semantics. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, pages 2430-2440, Lisbon.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Unsupervised compound splitting with distributional semantics rivals supervised methods", "authors": [ { "first": "Martin", "middle": [], "last": "Riedl", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "617--622", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riedl, Martin and Chris Biemann. 2016. Unsupervised compound splitting with distributional semantics rivals supervised methods. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 617-622, San Diego, CA.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "There's no 'count or predict' but task-based selection for distributional models", "authors": [ { "first": "Martin", "middle": [], "last": "Riedl", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" }, { "first": ";", "middle": [], "last": "Montpellier", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Rinaldi", "suffix": "" }, { "first": "James", "middle": [], "last": "Dowdall", "suffix": "" }, { "first": "Kaarel", "middle": [], "last": "Kaljurand", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Hess", "suffix": "" }, { "first": "Diego", "middle": [], "last": "Moll\u00e1", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 12th International Conference on Computational Semantics", "volume": "16", "issue": "", "pages": "25--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riedl, Martin and Chris Biemann. 2017. There's no 'count or predict' but task-based selection for distributional models. In Proceedings of the 12th International Conference on Computational Semantics, pages 264-272, Montpellier. Rinaldi, Fabio, James Dowdall, Kaarel Kaljurand, Michael Hess, and Diego Moll\u00e1. 2003. Exploiting paraphrases in a question answering system. 
In Proceedings of the Second International Workshop on Paraphrasing -Volume 16, pages 25-32, Sapporo.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Multiword expressions: A pain in the neck for NLP", "authors": [ { "first": "Ivan", "middle": [], "last": "Sag", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Bond", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 3rd International Conference on Intelligent Text Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sag, Ivan Andrew, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2001. Multiword expressions: A pain in the neck for NLP. In Proceedings of the 3rd International Conference on Intelligent Text Processing and Computational Linguistics, pages 1-15, Mexico City.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Using distributional similarity of multi-way translations to predict multiword expression compositionality", "authors": [ { "first": "Bahar", "middle": [], "last": "Salehi", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Cook", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "472--481", "other_ids": {}, "num": null, "urls": [], "raw_text": "Salehi, Bahar, Paul Cook, and Timothy Baldwin. 2014. Using distributional similarity of multi-way translations to predict multiword expression compositionality. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 472-481, Gothenburg.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Web Corpus Construction. Synthesis Lectures on Human Language Technologies", "authors": [ { "first": "Roland", "middle": [], "last": "Sch\u00e4fer", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Bildhauer", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sch\u00e4fer, Roland and Felix Bildhauer. 2013. Web Corpus Construction. Synthesis Lectures on Human Language Technologies. Morgan and Claypool.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Evaluation methods for unsupervised word embeddings", "authors": [ { "first": "Anne", "middle": [], "last": "Schiller", "suffix": "" }, { "first": "", "middle": [], "last": "Schnabel", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Tobias", "suffix": "" }, { "first": "David", "middle": [], "last": "Labutov", "suffix": "" }, { "first": "Thorsten", "middle": [], "last": "Mimno", "suffix": "" }, { "first": "", "middle": [], "last": "Joachims", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 5th International Workshop on Finite-State Methods and Natural Language Processing", "volume": "", "issue": "", "pages": "298--307", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schiller, Anne. 2005. German Compound Analysis with WFSC. 
In Proceedings of the 5th International Workshop on Finite-State Methods and Natural Language Processing, pages 239-246, Helsinki. Schnabel, Tobias, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 298-307, Lisbon.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Ubiquitous usage of a broad coverage French corpus: Processing the Est R\u00e9publicain corpus", "authors": [ { "first": "Patrick", "middle": [], "last": "Schone", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "3249--3254", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schone, Patrick and Daniel Jurafsky. 2001. Is knowledge-free induction of multiword unit dictionary headwords a solved problem? In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, pages 100-108, Pittsburgh, PA. Seddah, Djam\u00e9, Marie Candito, Benoit Crabb\u00e9, and Enrique Henestroza Anguiano. 2012. Ubiquitous usage of a broad coverage French corpus: Processing the Est R\u00e9publicain corpus. In Proceedings of the Eight International Conference on Language Resources and Evaluation, pages 3249-3254, Istanbul.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages", "authors": [ { "first": "Djam\u00e9", "middle": [], "last": "Seddah", "suffix": "" }, { "first": "Reut", "middle": [], "last": "Tsarfaty", "suffix": "" }, { "first": "Sandra", "middle": [], "last": "K\u00fcbler", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Candito", "suffix": "" }, { "first": "Jinho", "middle": [ "D" ], "last": "Choi", "suffix": "" }, { "first": "Rich\u00e1rd", "middle": [], "last": "Farkas", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Iakes", "middle": [], "last": "Goenaga", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Koldo Gojenola Galletebeitia", "suffix": "" }, { "first": "Spence", "middle": [], "last": "Goldberg", "suffix": "" }, { "first": "Nizar", "middle": [], "last": "Green", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Habash", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Kuhlmann", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Maier", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Przepi\u00f3rkowski", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Yannick", "middle": [], "last": "Seeker", "suffix": "" }, { "first": "Veronika", "middle": [], "last": "Versley", "suffix": "" }, { "first": "Marcin", "middle": [], "last": "Vincze", "suffix": "" }, { "first": "", "middle": [], "last": "Woli\u0144ski", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages", "volume": "", "issue": "", "pages": "146--182", "other_ids": {}, "num": null, "urls": [], "raw_text": "Seddah, Djam\u00e9, Reut Tsarfaty, Sandra K\u00fcbler, Marie Candito, Jinho D. 
Choi, Rich\u00e1rd Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi\u00f3rkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli\u0144ski, Alina Wr\u00f3blewska, and Eric Villomente de la Clergerie. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 146-182, Seattle, WA.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Edson and Renato Souza. 2012. Information retrieval system using multiwords expressions (MWE) as descriptors", "authors": [ { "first": "Naomi", "middle": [ "T" ], "last": "Shapiro", "suffix": "" }, { "first": "", "middle": [], "last": "Osaka", "suffix": "" }, { "first": "Naomi", "middle": [ "T" ], "last": "Shapiro", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Falk", "suffix": "" }, { "first": "Kati", "middle": [], "last": "Kiiskinen", "suffix": "" }, { "first": "Arto", "middle": [], "last": "Anttila", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "9", "issue": "", "pages": "213--234", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shapiro, Naomi T. 2016. Splitting compounds with n-grams. In Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers, pages 630-640, Osaka. Shapiro, Naomi T., Joshua Falk, Kati Kiiskinen, and Arto Anttila. 2017. FinnSyll 2.0.0: A Finnish syllabifier. Technical report, Stanford University, https://pypi.python.org/pypi/FinnSyll. da Silva, Edson and Renato Souza. 2012. Information retrieval system using multiwords expressions (MWE) as descriptors. Journal of Information Systems and Technology Management, 9(2):213-234.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Taalkommissiekorpus 1.1, Taalkommissie van die Suid-Afrikaanse Akademie vir Wetenskap en Kuns. Centre for Text Technology (CTexT)", "authors": [ { "first": "", "middle": [], "last": "Taalkommissie", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taalkommissie. 2011. Taalkommissiekorpus 1.1, Taalkommissie van die Suid-Afrikaanse Akademie vir Wetenskap en Kuns. Centre for Text Technology (CTexT), North-West University, Potchefstroom, South Africa.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Ranking paraphrases in context", "authors": [ { "first": "Stefan", "middle": [], "last": "Thater", "suffix": "" }, { "first": "Georgiana", "middle": [], "last": "Dinu", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Pinkal", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Workshop on Applied Textual Inference in conjunction with the ACL '09", "volume": "", "issue": "", "pages": "44--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thater, Stefan, Georgiana Dinu, and Manfred Pinkal. 2009. Ranking paraphrases in context. In Proceedings of the 2009 Workshop on Applied Textual Inference in conjunction with the ACL '09, pages 44-47, Suntec. Trim, Craig. 2013. The art of tokenization. Technical Report, IBM Developer Works. 
https://www.ibm.com/developerworks/ community/blogs/nlp/entry/ tokenization?lang=en_us.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "The TREC robust retrieval track. SIGIR Forum", "authors": [ { "first": "Ellen", "middle": [ "M" ], "last": "Voorhees", "suffix": "" } ], "year": 2005, "venue": "", "volume": "39", "issue": "", "pages": "11--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Voorhees, Ellen M. 2005. The TREC robust retrieval track. SIGIR Forum, 39(1):11-20.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Effective grading of termhood in biomedical literature", "authors": [ { "first": "Jonathan", "middle": [ "J" ], "last": "Webster", "suffix": "" }, { "first": ";", "middle": [], "last": "Chunyu Kit", "suffix": "" }, { "first": "", "middle": [], "last": "Nantes", "suffix": "" }, { "first": "Joachim", "middle": [], "last": "Wermter", "suffix": "" }, { "first": "Udo", "middle": [], "last": "Hahn", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the 14th Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "809--813", "other_ids": {}, "num": null, "urls": [], "raw_text": "Webster, Jonathan J. and Chunyu Kit. 1992. Tokenization as the initial phase in NLP. In Proceedings of the 14th Conference on Computational Linguistics, pages 1106-1110, Nantes. Wermter, Joachim and Udo Hahn. 2005. Effective grading of termhood in biomedical literature. In Annual AMIA Symposium Proceedings, pages 809-813, Washington, DC.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Towards unsupervised and language-independent compound splitting using inflectional morphological transformations", "authors": [ { "first": "Hans", "middle": [], "last": "Witschel", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Friedrich", "suffix": "" }, { "first": ";", "middle": [], "last": "Biemann", "suffix": "" }, { "first": "", "middle": [], "last": "Joensuu", "suffix": "" }, { "first": "", "middle": [], "last": "Van Zaanen", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Menno", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Van Huyssteen", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Aussems", "suffix": "" }, { "first": "Roald", "middle": [], "last": "Emmery", "suffix": "" }, { "first": "", "middle": [], "last": "Eiselen", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "644--653", "other_ids": {}, "num": null, "urls": [], "raw_text": "Witschel, Hans Friedrich and Chris Biemann. 2005. Rigorous dimensionality reduction through linguistically motivated feature selection for text categorization. In Proceedings of the 15th Nordic Conference of Computational Linguistics, NODALIDA 2005, pages 210-217, Joensuu. van Zaanen, Menno, Gerhard van Huyssteen, Suzanne Aussems, Chris Emmery, and Roald Eiselen. 2014. The development of Dutch and Afrikaans language resources for compound boundary analysis. In Proceedings of the 9th International Conference on Language Resources and Evaluation, pages 1056-1062, Reykjav\u00edk. Ziering, Patrick and Lonneke van der Plas. 2016. Towards unsupervised and language-independent compound splitting using inflectional morphological transformations. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 644-653, San Diego, CA.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Figure 1 (caption not recovered; the legend shows DRUID and the combined variants MF(post-pruned)*DRUID and FGM(post-pruned)*DRUID).", "type_str": "figure", "uris": null, "num": null }, "FIGREF1": { "text": "Figure 2 Precision scores when considering different numbers of highest ranked words for DRUID and combined DRUID variations. Here, the gold standard is extracted from the GENIA data set, whereas the scores for the methods are computed using the Medline corpus.", "type_str": "figure", "uris": null, "num": null }, "FIGREF3": { "text": "Split possibilities: Bund-e-s-finanz-minister-ium. Merging character n-grams: suffix-prefix yields Bundes-finanz-ministerium; prefix-suffix yields Bund-esfinanz-ministerium.", "type_str": "figure", "uris": null, "num": null }, "FIGREF5": { "text": "Interpolated precision-recall curve for the TREC 2004 Robust task.", "type_str": "figure", "uris": null, "num": null }, "TABREF0": { "content": "<table><tr><td/><td>noun</td><td>adjective</td><td>adverb</td><td>verb</td><td>all</td></tr><tr><td>MWE (count)</td><td>60,337</td><td>505</td><td>695</td><td>2,838</td><td>64,375</td></tr><tr><td>MWE (percentage)</td><td>51.51</td><td>2.35</td><td>15.53</td><td>24.59</td><td>41.41</td></tr><tr><td>all words</td><td>117,953</td><td>21,499</td><td>4,475</td><td>11,540</td><td>155,467</td></tr></table>", "html": null, "num": null, "text": "Amounts and percentages of MWEs contained in WordNet 3.1 for different POS.", "type_str": "table" }, "TABREF1": { "content": "<table><tr><td colspan=\"2\">red blood cell</td><td>red blood</td><td/></tr><tr><td>Similar term</td><td>Score</td><td>Similar term</td><td>Score</td></tr><tr><td>erythrocyte</td><td>133</td><td>red</td><td>148</td></tr><tr><td>red cell</td><td>129</td><td>white blood</td><td>111</td></tr><tr><td>RBC</td><td>95</td><td>Sertoli</td><td>93</td></tr><tr><td>platelet</td><td>70</td><td>Leydig</td><td>92</td></tr><tr><td>red-cell</td><td>37</td><td>NK</td><td>86</td></tr><tr><td>reticulocyte</td><td>34</td><td>mast</td><td>85</td></tr><tr><td>white blood</td><td>33</td><td>granulosa</td><td>81</td></tr><tr><td>leukocyte</td><td>29</td><td>endothelial</td><td>81</td></tr><tr><td>granulocyte</td><td>28</td><td>hematopoietic stem</td><td>79</td></tr><tr><td>the erythrocyte</td><td>28</td><td>peripheral blood monon</td><td>78</td></tr><tr><td colspan=\"3\">Algorithm 1 Computation of the incompleteness score</td><td/></tr><tr><td>1: function ic(t)</td><td/><td/><td/></tr><tr><td>2:</td><td/><td/><td/></tr></table>", "html": null, "num": null, "text": "The ten most similar entries for the term red blood cell (left) and red blood (right).
Here, seven of ten terms are single words in both lists.", "type_str": "table" }, "TABREF2": { "content": "<table><tr><td>Context term</td><td>Position</td><td>Count</td></tr><tr><td/><td>red blood cell</td><td/></tr><tr><td>transfusions</td><td>right</td><td>48</td></tr><tr><td>(</td><td>right</td><td>42</td></tr><tr><td>transfusion</td><td>right</td><td>33</td></tr><tr><td/><td>red blood</td><td/></tr><tr><td>cells</td><td>right</td><td>557</td></tr><tr><td>cell</td><td>right</td><td>344</td></tr><tr><td>corpuscles</td><td>right</td><td>13</td></tr></table>", "html": null, "num": null, "text": "Top three most frequent context words for the term red blood cell and red blood in the Medline corpus.", "type_str": "table" }, "TABREF3": { "content": "<table><tr><td>Language</td><td>POS filter</td></tr><tr><td>English (Korkontzelos 2010)</td><td>(([JN]+[JN]?[NP]?[JN]?)N)</td></tr><tr><td colspan=\"2\">French (Daille, Gaussier, and Lang\u00e9 1994) N[J]?|NN|NPDN</td></tr></table>", "html": null, "num": null, "text": "POS sequences for filtering noun MWEs for English and French. Each letter is a truncated POS tag of length one where J is an adjective, N a noun, P a preposition, and D a determiner.", "type_str": "table" }, "TABREF4": { "content": "<table><tr><td>Corpus</td><td colspan=\"4\">Total Number of 2-gram 3-gram 4-gram</td></tr><tr><td/><td>Candidates</td><td/><td/><td/></tr><tr><td>GENIA</td><td>1,340</td><td>1,056</td><td>243</td><td>41</td></tr><tr><td>Medline</td><td>29,790</td><td>22,236</td><td>6,400</td><td>1,154</td></tr><tr><td>SPMRL</td><td>330</td><td>197</td><td>116</td><td>17</td></tr><tr><td>ERC</td><td>7,365</td><td>3,639</td><td>2,889</td><td>837</td></tr></table>", "html": null, "num": null, "text": "Number of MWE candidates after filtering for the expected POS tag. 
"TABREF5": { "content": "<table><tr><td>upper baseline</td><td>1.000</td><td>1.000</td><td>1.0000</td></tr><tr><td>lower baseline</td><td>0.713</td><td>0.713</td><td>0.7134</td></tr><tr><td>frequency</td><td>0.790</td><td>0.750</td><td>0.7468</td></tr><tr><td>t-test</td><td>0.790</td><td>0.750</td><td>0.7573</td></tr><tr><td>C-value (pre-pruned)</td><td>0.880</td><td>0.846</td><td>0.8447</td></tr><tr><td>NC-value (pre-pruned)</td><td>0.880</td><td>0.840</td><td>0.8405</td></tr><tr><td>GM</td><td>0.590</td><td>0.662</td><td>0.6740</td></tr><tr><td>MF (pre-pruned)</td><td>0.920</td><td>0.872</td><td>0.8761</td></tr><tr><td>FGM (pre-pruned)</td><td>0.910</td><td>0.840</td><td>0.8545</td></tr><tr><td>MF (post-pruned)</td><td>0.900</td><td>0.876</td><td>0.8866</td></tr><tr><td>FGM (post-pruned)</td><td>0.900</td><td>0.900</td><td>0.8948</td></tr><tr><td>DRUID</td><td>0.930</td><td>0.852</td><td>0.8663</td></tr><tr><td>DRUID (using word2vec)</td><td>0.800</td><td>0.740</td><td>0.7352</td></tr><tr><td>Uniqueness (using word2vec)</td><td>0.680</td><td>0.752</td><td>0.7283</td></tr><tr><td>Incompleteness (using word2vec)</td><td>0.760</td><td>0.724</td><td>0.7375</td></tr><tr><td>log(freq)\u2022DRUID</td><td>0.970</td><td>0.860</td><td>0.8661</td></tr><tr><td>MF(post-pruned)\u2022DRUID</td><td>0.950</td><td>0.926</td><td>0.9241</td></tr><tr><td>FGM(post-pruned)\u2022DRUID</td><td>0.960</td><td>0.940</td><td>0.9262</td></tr></table>", "html": null, "num": null, "text": "", "type_str": "table" },
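The best-performing rows in the table above combine two rankers multiplicatively (log(freq)\u2022DRUID, MF(post-pruned)\u2022DRUID, FGM(post-pruned)\u2022DRUID). A minimal sketch of such a product combination, assuming per-candidate score dictionaries; names are illustrative:

    import math

    def combine(scores_a, scores_b):
        """Product of two per-candidate score dictionaries, restricted
        to candidates scored by both rankers."""
        shared = scores_a.keys() & scores_b.keys()
        return {t: scores_a[t] * scores_b[t] for t in shared}

    def log_freq_druid(freq, druid_scores):
        """The log(freq)*DRUID style combination: a log-frequency prior
        times the DRUID score; candidates with freq <= 1 are dropped."""
        prior = {t: math.log(f) for t, f in freq.items() if f > 1}
        return combine(prior, druid_scores)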
"TABREF9": { "content": "<table><tr><td>Language</td><td colspan=\"4\">Wiktionary</td><td colspan=\"4\">Wiktionary & Wikipedia</td></tr><tr><td/><td>freq</td><td>t-test +sw</td><td>DRUID</td><td>log(freq)\u2022DRUID</td><td>freq</td><td>t-test +sw</td><td>DRUID</td><td>log(freq)\u2022DRUID</td></tr><tr><td>Arabic</td><td>0.01</td><td>0.01</td><td>0.00</td><td>0.01</td><td>0.27</td><td>0.30</td><td>0.32</td><td>0.62</td></tr><tr><td>Basque</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.03</td><td>0.05</td><td>0.06</td><td>0.33</td><td>0.23</td></tr><tr><td>Bulgarian</td><td>0.01</td><td>0.01</td><td>0.00</td><td>0.03</td><td>0.28</td><td>0.35</td><td>0.23</td><td>0.54</td></tr><tr><td>Catalan</td><td>0.02</td><td>0.02</td><td>0.06</td><td>0.07</td><td>0.13</td><td>0.18</td><td>0.29</td><td>0.39</td></tr><tr><td>Croatian</td><td>0.04</td><td>0.05</td><td>0.01</td><td>0.06</td><td>0.14</td><td>0.15</td><td>0.11</td><td>0.21</td></tr><tr><td>Czech</td><td>0.07</td><td>0.07</td><td>0.01</td><td>0.08</td><td>0.17</td><td>0.20</td><td>0.14</td><td>0.28</td></tr><tr><td>Danish</td><td>0.01</td><td>0.02</td><td>0.01</td><td>0.25</td><td>0.19</td><td>0.21</td><td>0.19</td><td>0.32</td></tr><tr><td>Dutch</td><td>0.09</td><td>0.11</td><td>0.05</td><td>0.18</td><td>0.20</td><td>0.25</td><td>0.27</td><td>0.53</td></tr><tr><td>English</td><td>0.10</td><td>0.49</td><td>0.21</td><td>0.70</td><td>0.19</td><td>0.54</td><td>0.56</td><td>0.87</td></tr><tr><td>Estonian</td><td>0.03</td><td>0.03</td><td>0.03</td><td>0.05</td><td>0.12</td><td>0.13</td><td>0.17</td><td>0.14</td></tr><tr><td>Finnish</td><td>0.14</td><td>0.12</td><td>0.02</td><td>0.11</td><td>0.11</td><td>0.11</td><td>0.16</td><td>0.19</td></tr><tr><td>French</td><td>0.17</td><td>0.18</td><td>0.21</td><td>0.44</td><td>0.30</td><td>0.32</td><td>0.38</td><td>0.66</td></tr><tr><td>Galician</td><td>0.12</td><td>0.10</td><td>0.03</td><td>0.12</td><td>0.29</td><td>0.29</td><td>0.19</td><td>0.42</td></tr><tr><td>German</td><td>0.25</td><td>0.23</td><td>0.07</td><td>0.36</td><td>0.28</td><td>0.27</td><td>0.40</td><td>0.65</td></tr><tr><td>Greek</td><td>0.07</td><td>0.08</td><td>0.04</td><td>0.08</td><td>0.14</td><td>0.17</td><td>0.15</td><td>0.26</td></tr><tr><td>Hebrew</td><td>0.05</td><td>0.06</td><td>0.01</td><td>0.12</td><td>0.27</td><td>0.31</td><td>0.05</td><td>0.34</td></tr><tr><td>Hungarian</td><td>0.09</td><td>0.10</td><td>0.03</td><td>0.20</td><td>0.14</td><td>0.16</td><td>0.09</td><td>0.29</td></tr><tr><td>Italian</td><td>0.10</td><td>0.10</td><td>0.01</td><td>0.10</td><td>0.28</td><td>0.30</td><td>0.06</td><td>0.44</td></tr><tr><td>Kazakh</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.05</td><td>0.07</td><td>0.08</td><td>0.27</td><td>0.32</td></tr><tr><td>Latin</td><td>0.01</td><td>0.01</td><td>0.04</td><td>0.09</td><td>0.09</td><td>0.11</td><td>0.13</td><td>0.28</td></tr><tr><td>Latvian</td><td>0.00</td><td>0.00</td><td>0.00</td><td>0.01</td><td>0.10</td><td>0.10</td><td>0.07</td><td>0.13</td></tr><tr><td>Norwegian</td><td>0.02</td><td>0.02</td><td>0.39</td><td>0.21</td><td>0.19</td><td>0.20</td><td>0.28</td><td>0.40</td></tr><tr><td>Persian</td><td>0.08</td><td>0.11</td><td>0.04</td><td>0.19</td><td>0.29</td><td>0.37</td><td>0.41</td><td>0.55</td></tr><tr><td>Polish</td><td>0.07</td><td>0.08</td><td>0.02</td><td>0.19</td><td>0.12</td><td>0.14</td><td>0.36</td><td>0.32</td></tr><tr><td>Portuguese</td><td>0.14</td><td>0.14</td><td>0.05</td><td>0.20</td><td>0.31</td><td>0.34</td><td>0.32</td><td>0.64</td></tr><tr><td>Romanian</td><td>0.05</td><td>0.06</td><td>0.05</td><td>0.16</td><td>0.20</td><td>0.25</td><td>0.19</td><td>0.47</td></tr><tr><td>Russian</td><td>0.07</td><td>0.07</td><td>0.02</td><td>0.15</td><td>0.16</td><td>0.17</td><td>0.16</td><td>0.27</td></tr><tr><td>Slovene</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.05</td><td>0.09</td><td>0.11</td><td>0.09</td><td>0.17</td></tr><tr><td>Spanish</td><td>0.12</td><td>0.14</td><td>0.02</td><td>0.12</td><td>0.34</td><td>0.42</td><td>0.26</td><td>0.63</td></tr><tr><td>Swedish</td><td>0.07</td><td>0.10</td><td>0.03</td><td>0.33</td><td>0.19</td><td>0.26</td><td>0.42</td><td>0.58</td></tr><tr><td>Turkish</td><td>0.08</td><td>0.10</td><td>0.20</td><td>0.36</td><td>0.20</td><td>0.22</td><td>0.50</td><td>0.66</td></tr><tr><td>Ukrainian</td><td>0.01</td><td>0.01</td><td>0.02</td><td>0.04</td><td>0.09</td><td>0.11</td><td>0.12</td><td>0.14</td></tr><tr><td>Average</td><td>0.07</td><td>0.08</td><td>0.05</td><td>0.16</td><td>0.19</td><td>0.22</td><td>0.24</td><td>0.40</td></tr></table>", "html": null, "num": null, "text": "AP for the frequency baseline, t-test, and DRUID evaluated against Wiktionary and a combination of Wiktionary and Wikipedia, including word normalization.", "type_str": "table" }, "TABREF11": { "content": "<table><tr><td colspan=\"2\">DRUID</td><td colspan=\"2\">MF (post-pruned)</td></tr><tr><td>hausse des prix</td><td>1</td><td>milliards de francs</td><td>0</td></tr><tr><td>mise en oeuvre</td><td>1</td><td>millions de francs</td><td>0</td></tr><tr><td>prise de participation</td><td>1</td><td>Etats -Unis</td><td>1</td></tr><tr><td>chiffre d' affaires</td><td>1</td><td>chiffre d' affaires</td><td>1</td></tr><tr><td>formation professionnelle</td><td>1</td><td>taux d' int\u00e9r\u00eat</td><td>1</td></tr><tr><td>population active</td><td>1</td><td>milliards de dollars</td><td>0</td></tr><tr><td>taux d' int\u00e9r\u00eat</td><td>1</td><td>millions de dollars</td><td>0</td></tr><tr><td>politique mon\u00e9taire</td><td>1</td><td>Air France</td><td>1</td></tr><tr><td>Etats -Unis</td><td>1</td><td>% du capital</td><td>0</td></tr><tr><td>R\u00e9serve f\u00e9d\u00e9rale</td><td>1</td><td>milliard de francs</td><td>0</td></tr><tr><td>comit\u00e9 d'\u00e9tablissement</td><td>1</td><td>directeur g\u00e9n\u00e9ral</td><td>1</td></tr><tr><td>projet de loi</td><td>1</td><td>M. Jean</td><td>0</td></tr><tr><td>syst\u00e8me europ\u00e9en</td><td>0</td><td>an dernier</td><td>1</td></tr><tr><td>conseil des ministres</td><td>1</td><td>ann\u00e9es</td><td>1</td></tr><tr><td>Europe centrale</td><td>1</td><td>% par rapport</td><td>0</td></tr><tr><td colspan=\"2\">log(freq)\u2022DRUID</td><td colspan=\"2\">C-value (pre-pruned)</td></tr><tr><td>26</td><td>carboxylic acid</td><td>1</td><td>present study</td></tr><tr><td>28</td><td>connective tissue</td><td>7</td><td>important role</td></tr><tr><td>40</td><td>cathepsin B</td><td>11</td><td>degrees C</td></tr><tr><td>41</td><td>soft tissue</td><td>13</td><td>risk factors</td></tr><tr><td>42</td><td>transferrin receptor</td><td>15</td><td>significant differences</td></tr><tr><td>53</td><td>DNA damaging</td><td>18</td><td>other hand</td></tr><tr><td>61</td><td>foreign body</td><td>22</td><td>significant difference</td></tr><tr><td>62</td><td>radical scavenging</td><td>33</td><td>magnetic resonance</td></tr><tr><td>71</td><td>spatial distribution</td><td>39</td><td>first time</td></tr><tr><td>74</td><td>myosin heavy chain</td><td>48</td><td>significant increase</td></tr></table>", "html": null, "num": null, "text": "Top ranked candidates from the SPMRL data set for the best DRUID method (left) and the best competitive method (right); each term is marked with 1 if it is an MWE and with 0 otherwise. Below: top ranked terms for the Medline corpus that are not marked as MWEs; the rank is denoted to the left of each term, and all terms that can be found within a lexicon are marked in bold.", "type_str": "table" },
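The AP scores above compare each method's ranked candidate list against gold MWEs extracted from Wiktionary, optionally extended with Wikipedia titles. A sketch using the common definition of average precision that normalizes by the number of gold entries (names are illustrative):

    def average_precision(ranking, gold):
        """Mean of precision@k over the ranks k at which a gold MWE
        appears, normalized by the total number of gold entries."""
        hits = 0
        precision_sum = 0.0
        for k, candidate in enumerate(ranking, start=1):
            if candidate in gold:
                hits += 1
                precision_sum += hits / k
        return precision_sum / len(gold) if gold else 0.0

    # Hits at ranks 1 and 3: AP = (1/1 + 2/3) / 2 = 0.833...
    print(average_precision(["red blood cell", "red blood", "hot dog"],
                            {"red blood cell", "hot dog"}))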
"TABREF13": { "content": "<table><tr><td/><td colspan=\"3\">JoBimText</td><td colspan=\"3\">word2vec</td></tr><tr><td/><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>similar candidates</td><td>0.9880</td><td>0.6855</td><td>0.8094</td><td>0.9554</td><td>0.9548</td><td>0.9551</td></tr><tr><td>extended similar candidates</td><td>0.9617</td><td>0.7523</td><td>0.8442</td><td>0.9859</td><td>0.6813</td><td>0.8058</td></tr><tr><td>generated dictionary</td><td>0.9576</td><td>0.9589</td><td>0.9583</td><td>0.9644</td><td>0.9610</td><td>0.9627</td></tr><tr><td>geometric mean scoring</td><td>0.9698</td><td>0.9617</td><td>0.9658</td><td>0.9726</td><td>0.9624</td><td>0.9675</td></tr></table>", "html": null, "num": null, "text": "Precision (P), Recall (R), and F1 Measure (F1) on split positions for the 700 compound nouns using different split candidates.", "type_str": "table" } } } }
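The best rows of the last table score decompounding split candidates by the geometric mean of their parts' corpus frequencies, with candidates generated by merging character n-grams as in the Bundesfinanzministerium example above (suffix-prefix vs. prefix-suffix merging). A minimal sketch of this scoring, assuming a word-frequency dictionary freq; the paper's exact normalization may differ:

    import math

    def geometric_mean(parts, freq):
        """Geometric mean of part frequencies; zero if any part is
        unknown, so splits with implausible parts are ruled out."""
        counts = [freq.get(p, 0) for p in parts]
        if min(counts) == 0:
            return 0.0
        return math.exp(sum(math.log(c) for c in counts) / len(counts))

    def best_split(candidate_splits, freq):
        """Choose the split whose parts are jointly most plausible."""
        return max(candidate_splits,
                   key=lambda parts: geometric_mean(parts, freq))

    splits = [["Bundes", "finanz", "ministerium"],
              ["Bund", "esfinanz", "ministerium"]]
    # best_split(splits, freq) prefers the suffix-prefix merge, since
    # 'esfinanz' is unlikely to appear in a corpus frequency list.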