{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:40:56.187326Z" }, "title": "Compound or Term Features? Analyzing Salience in Predicting the Difficulty of German Noun Compounds across Domains", "authors": [ { "first": "Anna", "middle": [], "last": "H\u00e4tty", "suffix": "", "affiliation": { "laboratory": "", "institution": "Robert Bosch GmbH, Corporate Research", "location": { "settlement": "Renningen", "country": "Germany" } }, "email": "anna.haetty@de.bosch.com" }, { "first": "Julia", "middle": [], "last": "Bettinger", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": { "country": "Germany" } }, "email": "julia.bettinger@ims.uni-stuttgart.de" }, { "first": "Michael", "middle": [], "last": "Dorna", "suffix": "", "affiliation": { "laboratory": "", "institution": "Robert Bosch GmbH, Corporate Research", "location": { "settlement": "Renningen", "country": "Germany" } }, "email": "michael.dorna@de.bosch.com" }, { "first": "Jonas", "middle": [], "last": "Kuhn", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": { "country": "Germany" } }, "email": "jonas.kuhn@ims.uni-stuttgart.de" }, { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": { "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Predicting the difficulty of domain-specific vocabulary is an important task towards a better understanding of a domain, and to enhance the communication between lay people and experts. We investigate German closed noun compounds and focus on the interaction of compound-based lexical features (such as frequency and productivity) and terminologybased features (contrasting domain-specific and general language) across word representations and classifiers. Our prediction experiments complement insights from classification using (a) manually designed features to characterise termhood and compound formation and (b) compound and constituent word embeddings. We find that for a broad binary distinction into easy vs. difficult general-language compound frequency is sufficient, but for a more fine-grained four-class distinction it is crucial to include contrastive termhood features and compound and constituent features.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Predicting the difficulty of domain-specific vocabulary is an important task towards a better understanding of a domain, and to enhance the communication between lay people and experts. We investigate German closed noun compounds and focus on the interaction of compound-based lexical features (such as frequency and productivity) and terminologybased features (contrasting domain-specific and general language) across word representations and classifiers. Our prediction experiments complement insights from classification using (a) manually designed features to characterise termhood and compound formation and (b) compound and constituent word embeddings. We find that for a broad binary distinction into easy vs. 
difficult, general-language compound frequency is sufficient, but for a more fine-grained four-class distinction it is crucial to include contrastive termhood features and compound and constituent features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In times of constantly growing domain-specific data, it is more important than ever to analyse the characteristics of domain-specific vocabulary. Domains are topically restricted subject fields containing domain-specific vocabulary that encodes domain knowledge. The more technical the terminology in the domain vocabulary, the more difficult it is perceived to be by lay people unfamiliar with the domain. Predicting the difficulty of domain-specific vocabulary is therefore an important task for enhancing the communication between lay people and experts. A prominent example in this respect is the medical domain, where the prediction of the difficulty of medical terms can enhance the communication between doctors and patients, e.g. by simplifying medical texts (Abrahamsson et al., 2014; Wandji Tchami and Grabar, 2014) . While the medical domain represents a well-researched focus, the problem of miscommunication appears across domains.", "cite_spans": [ { "start": 748, "end": 774, "text": "(Abrahamsson et al., 2014;", "ref_id": "BIBREF0" }, { "start": 775, "end": 806, "text": "Wandji Tchami and Grabar, 2014)", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous research on automatic term difficulty prediction has already explored a large number of parameters, but to our knowledge no study has yet investigated how difficulty can be attributed to complex phrase formation processes (a language phenomenon) in interaction with domain specialization (a domain phenomenon). The current study investigates these aspects, goes beyond domain peculiarities (such as Latin words in the medical domain), and performs analyses across three rather different domains: Cooking, DIY ('do-it-yourself') and Automotive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While we choose a diverse set of domains, we otherwise focus on a special phenomenon within domain-specific vocabulary: German closed compounds. Closed compounds are complex expressions that consist of several lexemes and are written in a single string of characters. An example is Bremsfl\u00fcssigkeit 'brake fluid', which is composed of the two simple words Bremse 'brake' and Fl\u00fcssigkeit 'fluid'. By focusing on closed compounds, we ensure that the boundaries of the phrases to pre-extract from text are unambiguous, so the feature analysis is not biased by the design of the extraction method. Furthermore, closed compounds are a frequent phenomenon in German: Baroni et al. (2002) found that 47% of the word types in a general-language corpus in German are compounds, and according to Clouet and Daille (2014) compounding is even more productive in specialized domains. The interaction of domain features and lexical features can easily be demonstrated with examples of closed compounds: For example, the compound Hydraulikleitung 'hydraulic line' is considered difficult because it contains the rather technical constituent 'hydraulic'. 
In contrast, the compound Blaukochen (lit: 'blue boiling', a special way of boiling fish by adding acid) only contains constituents that are well-known to lay people but is nevertheless difficult for them because the compound is not semantically transparent regarding its constituent 'blue', i.e. it is not obvious what the constituent contributes to the meaning of the compound. In sum, the difficulty of a compound cannot be derived from compound attributes alone; in addition, it is influenced by the role and properties of the constituents.", "cite_spans": [ { "start": 648, "end": 668, "text": "Baroni et al. (2002)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this study, we want to empirically investigate how phrase formation and domain-specific termhood 1 attributes interact in the automatic prediction of compound difficulty. In order to train predictive models, we use a German compound dataset with a total of 1,030 compounds across the above-mentioned three domains. Based on two settings of the gold standard dataset (a four-class and a binary version) we apply a decision tree classifier using manually designed features to characterize termhood and compound formation, and neural classifiers using word embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Term difficulty prediction (also referred to as term familiarity or term technicality prediction) can be seen as a subtask of automatic term extraction. For automatic term extraction, a major strand of methodologies is contrastive techniques, where a term candidate's distribution in a domain-specific text corpus is compared to the distribution in a reference corpus, for example a general-language corpus (Ahmad et al., 1994; Rayson and Garside, 2000; Drouin, 2003; Kit and Liu, 2008; Bonin et al., 2010; Kochetkova, 2015; Lopes et al., 2016; Mykowiecka et al., 2018, i.a.). Many term difficulty prediction studies rely on some variant of contrastive approaches, mostly frequency-based; notable exceptions are Zeng-Treitler et al. (2008), who apply a contextual network, and Bouamor et al. (2016), who use a likelihood ratio test based on two language models. Most studies fall into the medical, biomedical or health domain. They rely on classical readability features such as frequency, term length, syllable count, the Dale-Chall readability formula or affixes (Zeng et al., 2005; Zeng-Treitler et al., 2008; Vydiswaran et al., 2014). Some features are tailored to the medical domain, for example relying on neo-classical word 
components, since medical terminology is considered to be highly influenced by Greek and Latin (Del\u00e9ger and Zweigenbaum, 2009; Bouamor et al., 2016). (Footnote 1: Termhood refers to the degree to which a lexical unit can be considered a domain-specific concept (Kageura and Umino, 1996).)", "cite_spans": [ { "start": 408, "end": 428, "text": "(Ahmad et al., 1994;", "ref_id": "BIBREF1" }, { "start": 429, "end": 454, "text": "Rayson and Garside, 2000;", "ref_id": "BIBREF30" }, { "start": 455, "end": 468, "text": "Drouin, 2003;", "ref_id": "BIBREF14" }, { "start": 469, "end": 487, "text": "Kit and Liu, 2008;", "ref_id": "BIBREF22" }, { "start": 488, "end": 507, "text": "Bonin et al., 2010;", "ref_id": "BIBREF7" }, { "start": 508, "end": 525, "text": "Kochetkova, 2015;", "ref_id": "BIBREF23" }, { "start": 526, "end": 545, "text": "Lopes et al., 2016;", "ref_id": "BIBREF25" }, { "start": 546, "end": 576, "text": "Mykowiecka et al., 2018, i.a.)", "ref_id": null }, { "start": 714, "end": 741, "text": "Zeng-Treitler et al. (2008)", "ref_id": "BIBREF38" }, { "start": 780, "end": 801, "text": "Bouamor et al. (2016)", "ref_id": "BIBREF8" }, { "start": 1069, "end": 1088, "text": "(Zeng et al., 2005;", "ref_id": "BIBREF37" }, { "start": 1089, "end": 1116, "text": "Zeng-Treitler et al., 2008;", "ref_id": "BIBREF38" }, { "start": 1117, "end": 1141, "text": "Vydiswaran et al., 2014;", "ref_id": "BIBREF34" }, { "start": 1336, "end": 1361, "text": "(Kageura and Umino, 1996)", "ref_id": "BIBREF21" }, { "start": 1459, "end": 1490, "text": "(Del\u00e9ger and Zweigenbaum, 2009;", "ref_id": "BIBREF11" }, { "start": 1491, "end": 1512, "text": "Bouamor et al., 2016)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To our knowledge, there is no previous work that investigated term difficulty prediction for complex phrases. Regarding the more general task of automatic term extraction, a few studies included complex phrases and their constituents. For example, the C-value (Frantzi et al., 1998) combines linguistic and statistical information and takes nested terms into account for evaluating termhood. The FGM score (Nakagawa and Mori, 2003) relies on the geometric mean of the number of distinct left and right neighboring words for each constituent in a complex term. Contrastive Selection via Heads (CSvH) (Basili et al., 2001) is a corpus-comparing measure that computes termhood for a complex term by biasing the termhood score with the general-language frequency of the head. H\u00e4tty et al. (2017) combine termhood measures within a random forest classifier to extract single-word and multiword terms and apply the measures recursively to the components. H\u00e4tty and Schulte im Walde (2018) demonstrate that propagating constituent information through neural networks improves the prediction of compound termhood.", "cite_spans": [ { "start": 263, "end": 285, "text": "(Frantzi et al., 1998)", "ref_id": "BIBREF16" }, { "start": 409, "end": 434, "text": "(Nakagawa and Mori, 2003)", "ref_id": "BIBREF29" }, { "start": 602, "end": 622, "text": "(Basili et al., 2001", "ref_id": "BIBREF4" }, { "start": 776, "end": 795, "text": "H\u00e4tty et al. (2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Closed compounds are complex expressions that consist of several lexemes and that are written in a single string of characters. The lexemes are called constituents. The constituents of a two-part compound can be divided into modifier and head, where the latter is word-final in German.
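To make the head-final structure concrete, the following minimal illustration (ours, not from the paper) represents a binary compound split in Python:

```python
# Minimal illustration (ours, not the authors' code) of a binary compound split;
# in German the head is the word-final constituent.
from typing import NamedTuple

class Split(NamedTuple):
    compound: str
    modifier: str
    head: str

def binary_split(compound: str, constituents: list) -> Split:
    # For a two-part compound: first lexeme = modifier, last lexeme = head.
    assert len(constituents) == 2, 'illustration covers two-part compounds only'
    return Split(compound, constituents[0], constituents[1])

print(binary_split('Bremsflüssigkeit', ['Bremse', 'Flüssigkeit']))
# Split(compound='Bremsflüssigkeit', modifier='Bremse', head='Flüssigkeit')
```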
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "German Closed Noun Compounds", "sec_num": "3.1" }, { "text": "An important empirical compound attribute is the morphological family size (De Jong et al., 2000) of a lexeme, which we refer to as productivity henceforth. Morphological family size is defined as the type count of morphological family members, which comprise compounds and derived words that contain the given lexeme as a constituent. We distinguish between two kinds of productivity as a compound attribute: The productivity of a modifier refers to the number of compound types where a certain word type occupies the position of the modifier, and the productivity of a head refers to the number of compound types where a certain word type occupies the position of the head.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "German Closed Noun Compounds", "sec_num": "3.1" }, { "text": "As our general-language corpus, we rely on SdeWaC (Faa\u00df and Eckart, 2013), a cleaned version of the web-crawled corpus deWaC (Baroni et al., 2009), containing \u2248 880 million words.", "cite_spans": [ { "start": 54, "end": 77, "text": "(Faa\u00df and Eckart, 2013)", "ref_id": "BIBREF15" }, { "start": 130, "end": 151, "text": "(Baroni et al., 2009)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.2" }, { "text": "As domain-specific corpora, we use the three domain corpora described by Bettinger et al. (2020). The corpora were crawled for the domains of Cooking, DIY and Automotive. They were selected to include a variety of different domains; for example, the Automotive domain was chosen because it was expected to be more technical than the Cooking domain. The domain corpora consist of both user-generated and expert content. User-generated content was extracted from Wikipedia, wikihow.de and wikibooks.de, filtered by domain-related categories. Further, domain-specific websites such as kochwiki.org were crawled. Expert texts include tool manuals and books (e.g. on Automotive and on Handicraft), as well as edited texts crawled from websites such as 1-2-do.com. Finally, all corpora were reduced to the size of the smallest corpus and are thus equally sized at 5.6 million tokens. The texts are tokenized, lemmatized and POS-tagged with spaCy (https://spacy.io/).", "cite_spans": [ { "start": 82, "end": 105, "text": "Bettinger et al. (2020)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora", "sec_num": "3.2" }, { "text": "We rely on the domain-specific compound difficulty gold standard developed on the basis of the just-described domain-specific corpora (Bettinger et al., 2020). The gold standard contains 1,030 closed compounds from the domains of Cooking, DIY and Automotive. Compounds were automatically identified in text by applying the Simple Compound Splitter (Weller-Di Marco, 2017). All compounds with a frequency smaller than three were excluded, which resulted in a pool of 12,400 Cooking compounds, 16,935 DIY compounds and 20,468 Automotive compounds. A subset was selected which was balanced for the following features: frequency of the compound, productivity of the head, productivity of the modifier and frequency of the head. 
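These four balancing statistics can be sketched as follows; this is a hedged illustration with hypothetical toy inputs, not the original selection code:

```python
# Hedged sketch of the four balancing statistics, computed from binary compound
# splits and corpus counts; the inputs below are hypothetical toy data.
from collections import Counter

splits = [('Bremsflüssigkeit', 'Bremse', 'Flüssigkeit'),
          ('Bremsscheibe', 'Bremse', 'Scheibe'),
          ('Spülflüssigkeit', 'Spülung', 'Flüssigkeit')]
freq = Counter({'Bremsflüssigkeit': 212, 'Bremsscheibe': 187,
                'Spülflüssigkeit': 12, 'Flüssigkeit': 840, 'Scheibe': 400})

# Productivity = morphological family size per position: the number of compound
# TYPES in which a lexeme occurs as modifier (resp. head), cf. section 3.1.
mod_productivity = Counter(m for _, m, _ in splits)
head_productivity = Counter(h for _, _, h in splits)

for c, m, h in splits:
    print(c, (freq[c], head_productivity[h], mod_productivity[m], freq[h]))
    # e.g. Bremsflüssigkeit (212, 2, 2, 840)
```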
The final dataset was rated by 26 annotators on a Likert-like difficulty scale (Likert, 1932) from 1 (easy; the term does not require specialized knowledge to be understood) to 4 (difficult; the term requires specialized knowledge). After the annotation process, the 20 annotations where annotators agreed most were selected. The average pairwise Spearman's \u03c1 correlation of the 20 annotators is 0.61. We base our models on two specifications of the gold standard: four-class: For each compound, we calculate the median. 3 In case the median falls between two values, we decide for the upper median (i.e. if the value ends in .5, it is rounded up). binary: We simplify the annotation and break down the four graded classes into two broader classes: easy and difficult. We decide to cluster classes 2, 3 and 4 into a new class 'difficult' and keep class 1 as 'easy' for the following reasons: Annotators agreed most for class 1, so this is by far the biggest class. Our binary grouping balances the class sizes more equally, and we believe that annotators can easily recognize when they find a compound to be easy (because they fully understand it, which is why we get such a good agreement), but when it comes to specifying difficulty they find it harder to express to what degree they do not understand the compound (since they cannot know how much they do not understand). Figure 1 presents the binary and four-class distributions across the three gold standards. The graphs show that there are more difficult compounds in Automotive than in Cooking and DIY.", "cite_spans": [ { "start": 134, "end": 158, "text": "(Bettinger et al., 2020)", "ref_id": "BIBREF5" }, { "start": 349, "end": 372, "text": "(Weller-Di Marco, 2017)", "ref_id": "BIBREF36" }, { "start": 805, "end": 819, "text": "(Likert, 1932)", "ref_id": "BIBREF24" }, { "start": 1268, "end": 1269, "text": "3", "ref_id": null } ], "ref_spans": [ { "start": 2124, "end": 2132, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Gold Standard", "sec_num": "3.3" }, { "text": "Our prediction experiments investigate and complement insights from decision tree classification using manually designed features to characterise termhood and compound formation (section 4.1), and logistic regression (LR) and multilayer perceptron (MLP) classification using compound and constituent word embeddings (section 4.2). For evaluation, we use 5-fold cross-validation and Micro- and Macro-F1 scores. As a comparison to the model results, we apply a majority-class baseline. When testing for significance, we use McNemar's test (McNemar, 1947), a paired non-parametric statistical hypothesis test.", "cite_spans": [ { "start": 552, "end": 567, "text": "(McNemar, 1947)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on Predicting Difficulty", "sec_num": "4" }, { "text": "
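As a rough illustration of this evaluation protocol (5-fold cross-validation, Micro-/Macro-F1, a majority-class baseline and McNemar's test), consider the following sketch; the data and hyperparameters are placeholders, not the authors' exact setup:

```python
# Sketch of the evaluation protocol; placeholder data, not the authors' setup.
import numpy as np
from scipy.stats import chi2
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

def mcnemar_p(y_true, pred_a, pred_b):
    # Discordant pairs: items on which exactly one of the two systems is correct.
    b = int(np.sum((pred_a == y_true) & (pred_b != y_true)))
    c = int(np.sum((pred_a != y_true) & (pred_b == y_true)))
    if b + c == 0:
        return 1.0
    stat = (b - c) ** 2 / (b + c)  # chi-square statistic with 1 degree of freedom
    return float(chi2.sf(stat, df=1))

X = np.random.rand(1030, 20)                         # placeholder feature matrix
y = np.random.randint(0, 2, 1030)                    # placeholder binary labels
preds = cross_val_predict(DecisionTreeClassifier(max_depth=3), X, y, cv=5)
print(f1_score(y, preds, average='micro'), f1_score(y, preds, average='macro'))
baseline = np.full_like(y, np.bincount(y).argmax())  # majority-class baseline
print(mcnemar_p(y, preds, baseline))
```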
A core research question for the classification experiments is to what degree attributes that are related to compoundhood influence the prediction, in contrast and in combination with attributes that are related to termhood. The feature types tailored to represent these attributes are the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Term and Compound Features", "sec_num": "4.1" }, { "text": "\u2022 COMPOUNDHOOD (C) FEATURES 4 : frequencies and productivities of compounds, heads and modifiers in the general-language and the domain-specific corpora; cosine distances between compound modifier and compound head embeddings", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Term and Compound Features", "sec_num": "4.1" }, { "text": "\u2022 TERMHOOD (T) FEATURES: contrastive measures Weirdness Ratio (Ahmad et al., 1994), TFITF -Term Frequency Inverse Term Frequency (Bonin et al., 2010), and CSvH -Contrastive Selection via Heads (Basili et al., 2001) \u2022 COMBINED C+T FEATURE: FGM-Score, a termhood measure that combines compound and termhood attributes (Nakagawa and Mori, 2003) Note that we decided against a direct computation of compound-constituent compositionality (Reddy et al., 2011; Schulte im Walde et al., 2013) as a feature, because the compound dataset was balanced for frequency. It includes infrequent compounds for which word embeddings and compositionality measures would be imprecise. Method: Decision Trees. Decision tree classifiers (DTs) are supervised machine learning methods that are represented as tree structures. DTs were chosen for this task because they are easy to interpret. We identify the optimal tree depth of our decision trees by incrementally growing the trees until results decrease, relying on Gini impurity as the branch-splitting criterion. In this way we found an optimal depth of three for the decision tree in the binary task, and an optimal depth of five for the decision tree in the four-class task.", "cite_spans": [ { "start": 62, "end": 82, "text": "(Ahmad et al., 1994)", "ref_id": "BIBREF1" }, { "start": 130, "end": 150, "text": "(Bonin et al., 2010)", "ref_id": "BIBREF7" }, { "start": 195, "end": 216, "text": "(Basili et al., 2001)", "ref_id": "BIBREF4" }, { "start": 318, "end": 343, "text": "(Nakagawa and Mori, 2003)", "ref_id": "BIBREF29" }, { "start": 435, "end": 455, "text": "(Reddy et al., 2011;", "ref_id": "BIBREF31" }, { "start": 456, "end": 485, "text": "Schulte im Walde et al., 2013", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Classification with Term and Compound Features", "sec_num": "4.1" }, { "text": "Overall results. Table 1 shows the results for the decision tree classification using all features. The classification models significantly outperform the respective baselines in the binary classification tasks, but in the four-class distinctions this only applies to the Automotive domain and to the setting across all domains (non-significant results are in italics). For the binary task, the results for Automotive are better than for Cooking and DIY. We assume that this divergence is due to a higher imbalance of class sizes across the domains, cf. figure 1.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 24, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Classification with Term and Compound Features", "sec_num": "4.1" }, { "text": "Results by feature group. Having looked at the results when using all features at the same time, we now use coherent groups of features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Term and Compound Features", "sec_num": "4.1" }, { "text": "1. 
Domain-specific corpus-related features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Term and Compound Features", "sec_num": "4.1" }, { "text": "frequencies of compounds, heads and modifiers; productivities of heads and modifiers; FGM-Score 2. General-language corpus-related features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Term and Compound Features", "sec_num": "4.1" }, { "text": "frequencies of compounds, heads and modifiers; productivity of heads and modifiers; FGM-Score 3. Contrastive features:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Term and Compound Features", "sec_num": "4.1" }, { "text": "weirdness scores and TFITFs of compounds, heads and modifiers; CSvH 4. Cosine distance features: cosine scores of word2vec and fastText constituent vectors. Tables 2 and 3 show the results per feature group: we can see that most feature groups achieve lower results in comparison to using all features (in bold font), but at the same time 'All' does not achieve the best results. The categories Cosine, Domain and Head perform worst and in most cases do not even significantly improve over the baseline. The modifier features are better than the head features, which is in line with the results in (H\u00e4tty et al., 2017) where the modifier features are more important for detecting termhood than head features. For both the binary and the four-class tasks, the groups General, Compound and Contrastive perform best, with Compound as the winner for the binary task and Contrastive as the winner for the four-class task. The arrows in the result tables indicate which group results are significantly different from the winner group result.", "cite_spans": [ { "start": 543, "end": 563, "text": "(H\u00e4tty et al., 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Classification with Term and Compound Features", "sec_num": "4.1" }, { "text": "Individual features. Tables 4 and 5 show the results for those individual features which perform significantly better than the respective baseline, sorted by increase in F1. For the four-class task, three more features perform significantly better than the baseline in comparison to the binary task; these features are marked in bold. The best individual features are the same for both tasks, with almost the same rankings. The best three individual features address distinct attributes of a compound term: a compound's general-language frequency (FREQ gen), a termhood measure involving constituents (FGM gen), and a contrastive termhood measure (comp WEIRD).", "cite_spans": [], "ref_spans": [ { "start": 20, "end": 36, "text": ". Tables 4 and 5", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Classification with Term and Compound Features", "sec_num": "4.1" }, { "text": "Best feature combination. Tables 6 and 7 analyze how features interact: We perform greedy feature selection by repeatedly adding the next best-performing individual feature for each task, based on Micro-F1, until the scores stagnate or decrease (sketched below). The resulting best feature combinations provide us with the best results for each task, while comprising only five individual feature types in both tables. The optimal combinations address attributes of the whole compounds and attributes of constituents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Term and Compound Features", "sec_num": "4.1" }, { "text": "
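A minimal re-implementation of this greedy forward selection could look as follows (our sketch; classifier and scoring follow the setup described above):

```python
# Our sketch of the greedy forward feature selection: repeatedly add the feature
# that most improves Micro-F1, stopping when the score stagnates or decreases.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

def micro_f1(X, y, cols):
    preds = cross_val_predict(DecisionTreeClassifier(max_depth=3), X[:, cols], y, cv=5)
    return f1_score(y, preds, average='micro')

def forward_select(X, y):
    selected, best = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        scores = {f: micro_f1(X, y, selected + [f]) for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best:   # stop when scores stagnate or decrease
            break
        selected.append(f_best)
        best = scores[f_best]
        remaining.remove(f_best)
    return selected, best
```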
Analyzing frequency and productivity. For investigating the influence of frequency and productivity properties of compounds and constituents, we created subsets of the gold standard where we distinguished between tertiles regarding compound frequency and constituent productivity: 'low', 'mid' and 'high'. Each property type is assessed once for the general-language and once for the domain-specific language. The 6 \u00d7 3 tertiles are determined by sorting all elements regarding one property and cutting the data into three equally-sized portions. The resulting ranges are shown in table 8. We then compare the classifier results for the two extreme tertiles, 'low' and 'high', using all features on these subsets. The results are shown in the right-hand part of table 8. It is obvious that across all properties better results are achieved for the 'low' category, as indicated by the bold font. The gap between the results for 'low' and 'high' is especially large for the productivities of modifiers and heads. Thus low productivity represents a rather clear indicator for a compound to be either easy or difficult (given that the model achieves better results in the prediction), while for highly productive constituents easy and difficult terms are harder to distinguish. In order to investigate this effect further, we inspect the gold label distribution in the 'low' and 'high' categories. We find a dominance of difficult compounds in the 'low' categories, while there is a more even balance between easy and difficult compounds in the 'high' categories. This shows that low productivity and frequency are indicators of difficulty, while high productivity and frequency are less distinctive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Term and Compound Features", "sec_num": "4.1" }, { "text": "For our second kind of classification experiments, we no longer use hand-crafted features but semantic representations of compounds and components for general language and domain. [Table 8: Ranges of selected properties across tertiles, and results on binary classification for extreme 'low' and 'high' tertiles when using all features (cf. All in Table 2 with Micro-F1=0.732).]", "cite_spans": [], "ref_spans": [ { "start": 238, "end": 245, "text": "Table 8", "ref_id": null }, { "start": 406, "end": 413, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Classification with Word Embeddings", "sec_num": "4.2" }, { "text": "Two kinds of word embeddings are used in the following: word2vec (Mikolov et al., 2013) and fastText (Bojanowski et al., 2017). 5 We use the word2vec model, because it is a standard model for natural language processing applications. The fastText model works on character n-grams and not on words, and Bojanowski et al. (2017) argue that it performs well on closed compounds. This model is particularly interesting for us because a compound embedding is learned partially from the same n-grams as the embeddings of its constituents. Thus, we implicitly have a representation of the constituents in the compound embedding, which we expect to be beneficial for our classification task. Inspecting some words and their nearest neighbors for the two models confirms our intuition. 
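Such an inspection can be reproduced with a few lines, e.g. assuming gensim (the toolkit and the hyperparameters below are our assumptions; the paper does not specify them):

```python
# Hedged sketch of training and neighbor inspection, assuming gensim.
from gensim.models import Word2Vec, FastText

# In the paper's setup, the input is the concatenation of SdeWaC and one domain
# corpus, tokenized and lowercased; here a toy stand-in.
corpus = [['wir', 'kochen', 'heute', 'fisch'],
          ['das', 'wasser', 'sieden', 'lassen'],
          ['fisch', 'garen', 'und', 'zubereiten']]

w2v = Word2Vec(corpus, vector_size=100, window=5, min_count=1)
ft = FastText(corpus, vector_size=100, window=5, min_count=1)

# fastText composes vectors from character n-grams, so its neighbors tend to be
# morphologically similar surface forms; word2vec neighbors are more semantic.
print(w2v.wv.most_similar('kochen', topn=6))
print(ft.wv.most_similar('kochen', topn=6))
```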
For the verb kochen ("cook") the following six words are the most similar according to word2vec: sieden ("to boil"), garen ("to cook until done"), brutzeln ("to sizzle"), braten ("to fry"), grillen ("to barbecue") and zubereiten ("to prepare"). According to fastText we find the nearest neighbors erkochen ("to reach by cooking"), garkochen ("to cook sth. well"), teekochen 6 ("to make tea"), reiskochen ("to cook rice"), eierkochen ("to cook eggs") and bekochen ("to cook for someone"). The word2vec neighbors are similar more on the semantic level, in contrast to fastText, where the words are highly similar on a surface-morphological level. The embeddings are trained for each domain individually, by concatenating SdeWaC and the respective domain data as input.", "cite_spans": [ { "start": 14, "end": 36, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF27" }, { "start": 50, "end": 75, "text": "(Bojanowski et al., 2017)", "ref_id": "BIBREF6" }, { "start": 78, "end": 79, "text": "5", "ref_id": null }, { "start": 252, "end": 276, "text": "Bojanowski et al. (2017)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Classification with Word Embeddings", "sec_num": "4.2" }, { "text": "Methods: LR and MLP. We use our pre-trained word embeddings for compounds and constituents as features and apply two kinds of classifiers. (Footnote 5: We do not use state-of-the-art contextualized word embeddings such as BERT (Devlin et al., 2019), because we predict difficulty on a type-based, not context-dependent level. Footnote 6: We cite words in their original lowercased version as used in the model.)", "cite_spans": [ { "start": 138, "end": 139, "text": "5", "ref_id": null }, { "start": 215, "end": 236, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF13" }, { "start": 315, "end": 316, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Classification with Word Embeddings", "sec_num": "4.2" }, { "text": "\u2022 logistic regression: simple neural network with only input and output layers but no hidden layer,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Word Embeddings", "sec_num": "4.2" }, { "text": "\u2022 multilayer perceptron: neural network with one input, one hidden and one output layer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Word Embeddings", "sec_num": "4.2" }, { "text": "For the binary classification task, the classifiers use a sigmoid activation in the output layer; for the four-class task, the classifiers use a softmax activation. For the multilayer perceptron, we also use a sigmoid activation for the hidden layer. Concerning the parameters, the batch size is set to 32, there are 50 epochs and the hidden layer has a dimension of 64.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Word Embeddings", "sec_num": "4.2" }, { "text": "
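For concreteness, the two classifiers can be sketched as follows (assuming Keras; the framework and the loss functions are our assumptions, while the activations, batch size, epochs and hidden size follow the description above):

```python
# Sketch of the LR and MLP classifiers; framework and losses are assumptions.
from tensorflow import keras

def build_classifier(input_dim, n_classes, hidden=False):
    inputs = keras.Input(shape=(input_dim,))
    x = keras.layers.Dense(64, activation='sigmoid')(inputs) if hidden else inputs
    if n_classes == 2:   # binary task: single sigmoid output unit
        outputs = keras.layers.Dense(1, activation='sigmoid')(x)
        loss = 'binary_crossentropy'
    else:                # four-class task: softmax output layer
        outputs = keras.layers.Dense(n_classes, activation='softmax')(x)
        loss = 'sparse_categorical_crossentropy'
    model = keras.Model(inputs, outputs)
    model.compile(optimizer='adam', loss=loss)
    return model

lr_binary = build_classifier(300, 2, hidden=False)  # logistic regression
mlp_four = build_classifier(300, 4, hidden=True)    # MLP, one hidden layer (64)
# training, e.g.: model.fit(X, y, batch_size=32, epochs=50)
```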
Results. We compare three different input settings for the classification tasks: The first model only takes the compound word embeddings as input (see 'compound' in table 9). For all settings, we distinguish between two differently trained word embeddings: the word-based word2vec and the character-based fastText word embedding models. The second model ('comp+const') takes the concatenated embeddings of the compound and of its constituents (binary split, i.e. two constituents) as input, to evaluate the impact of the constituents. The third model ('only const') only uses the concatenated constituent vectors, to evaluate whether this information is competitive. The results for the classifications are shown in table 9. For the binary task we reach the best results (marked in bold) with word2vec when using a combination of compound and constituent information, and with fastText when only using the compound embeddings. This tendency was expected: Since fastText embeddings are character-based, the constituents are implicitly encoded as well. Using only constituent information yields lower scores than using compound information, which is in line with the results of the previous section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Word Embeddings", "sec_num": "4.2" }, { "text": "The distribution of the results of the four-class task in table 9 is similar to the binary task, except that now the combination of compound and constituent information works best for fastText as well. This might be caused by the more difficult task, and it is also indicated by the fact that for the four-class task the MLP with the additional hidden layer produces the best results, while for the binary task the simpler model LR obtains the best results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Word Embeddings", "sec_num": "4.2" }, { "text": "Interestingly, word2vec models mostly perform better than fastText models, although fastText implicitly contains constituent information. We argue that because 171 infrequent compound vectors are missing for word2vec (with a minimum frequency threshold for word vectors to be trained), these 171 compounds are assigned to the same random vector. Given that low frequency is a reasonable indicator for difficulty, the model might learn from the missing vectors which compounds are infrequent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Word Embeddings", "sec_num": "4.2" }, { "text": "Although models using both compound and constituent information seem to be superior to models using only compound information, these results can only be treated as a tendency. For word2vec and both the binary and the four-class tasks, models using both compound and constituent embeddings are not significantly better than models using only compound embeddings. However, although models using compound embeddings perform significantly better than models using only constituent embeddings (which is intuitive), the latter still perform significantly better than the baseline. This shows that constituent embeddings carry informative characteristics for classifying compounds for difficulty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification with Word Embeddings", "sec_num": "4.2" }, { "text": "Our experiments investigated how compound formation, termhood and domain attributes influence the prediction of compound difficulty.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "Compounds and constituents. The binary task, as the presumably simpler task, reached better results with simpler means: General-language frequency of the compound is a good indicator (2% better than the second-best feature for Micro-F1); in addition, there is a 5% gap between compound and constituent features (table 4), which shows that compound features are sufficient for this task. For the four-class task, features differ less; the best results include compound and constituent information (table 5). 
However, for both tasks we can see that a combination of compound and constituent features leads to the best results (tables 6 and 7).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "The experiments using neural networks show the same tendency (table 9): While for half of the cases in the binary task the compound vector is sufficient, the improvement over 'comp+const' is not significant, and overall using both compound and constituent vectors ('comp+const') provides the best results. We conclude that constituents influence the degree of difficulty of the compounds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "Termhood. Contrastive features (i.e. termhood features) are more important for the four-class task than for the binary task (tables 2 and 3): For the four-class task, they perform significantly better than the general-language features, while for the binary task 'FREQ gen' is the best individual feature (table 4). In sum, for a broad difficulty distinction such as the binary task, general-language information might be sufficient, but for the more fine-grained four-class task contrastive termhood features are supportive.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "Domains. There are no striking differences in the predictive power of the models across domains (table 1). For all three gold standards, the binary classification models outperform the respective baselines. In the four-class distinction, this is only the case for Automotive, which includes more difficult compounds than Cooking and DIY. Presumably, prediction differences are due to the differently (im)balanced sizes of the classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "Low versus high productivity and frequency. When contrasting the lower and upper tertile value ranges for compound frequency and constituent productivity, we found that low productivity and low frequency are very salient indicators for the level of difficulty. This seems counterintuitive: e.g. high frequency could be a reliable indicator for the simplicity of a compound, while low frequency could indicate difficulty; but low frequency could also indicate that concepts are newly coined (which does not mean that they are difficult), or it could result from spelling or inflection errors. The dataset was cleaned of the latter, but the former case was not specifically addressed. Concerning productivity, the gap between 'high' and 'low' is even more extreme. We hypothesize that this could be due to a compound being judged as difficult because of one difficult constituent, while an easy compound requires all constituents to be easy. This is why single easy constituents might be poor indicators: whether the compound is easy or difficult also depends on the other constituent. [Table 9: LR/MLP Classifiers: Mi(cro)-F1 and Ma(cro)-F1 results for the Binary (left) and Four-Class (right) task.]", "cite_spans": [], "ref_spans": [ { "start": 1072, "end": 1079, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "4.3" }, { "text": "This study investigated the automatic prediction of difficulty for domain-specific German compounds across three domains. We asked to what extent compound formation attributes and domain-specific termhood attributes influence the prediction and interact with each other. 
We found that plain general-language compound frequency is a reliable indicator for difficulty in our dataset, which shows that effects of domain specialization and compound formation are reflected to a large extent by general corpus frequency. However, for a more fine-grained four-class distinction of difficulty going beyond a broad binary distinction into 'easy' and 'difficult', contrastive termhood features and compound and constituent information are crucial.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Alternatively, one could calculate the mean compound difficulty values, but the means are more sensitive to outliers, and therefore less appropriate in our dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that for all but one of these features we have a balanced set of compounds in the gold standard, see section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Medical text simplification using synonym replacement: Adapting assessment of word difficulty to a compounding language", "authors": [ { "first": "Emil", "middle": [], "last": "Abrahamsson", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Forni", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Skeppstedt", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Kvist", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations", "volume": "", "issue": "", "pages": "57--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emil Abrahamsson, Timothy Forni, Maria Skeppstedt, and Maria Kvist. 2014. Medical text simplification using synonym replacement: Adapting assessment of word difficulty to a compounding language. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Popu- lations, pages 57-65, Gothenburg, Sweden.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "What is a term? The semi-automatic extraction of terms from text. Translation Studies: An Interdiscipline. Selected papers from the Translation Studies Congress", "authors": [ { "first": "Khurshid", "middle": [], "last": "Ahmad", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Davies", "suffix": "" }, { "first": "Heather", "middle": [], "last": "Fulford", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Rogers", "suffix": "" } ], "year": 1992, "venue": "", "volume": "2", "issue": "", "pages": "267--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Khurshid Ahmad, Andrea Davies, Heather Fulford, and Margaret Rogers. 1994. What is a term? The semi- automatic extraction of terms from text. Translation Studies: An Interdiscipline. Selected papers from the Translation Studies Congress, Vienna, 1992, 2:267- -278.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The WaCky wide web: A collection of very large linguistically processed web-crawled corpora. 
Language Resources and Evaluation", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Bernardini", "suffix": "" }, { "first": "Adriano", "middle": [], "last": "Ferraresi", "suffix": "" }, { "first": "Eros", "middle": [], "last": "Zanchetta", "suffix": "" } ], "year": 2009, "venue": "", "volume": "43", "issue": "", "pages": "209--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: A collection of very large linguistically processed web- crawled corpora. Language Resources and Evalua- tion, 43(3):209-226.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Predicting the components of German nominal compounds", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Matiasek", "suffix": "" }, { "first": "Harald", "middle": [], "last": "Trost", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 15th European Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "470--474", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Baroni, Johannes Matiasek, and Harald Trost. 2002. Predicting the components of German nom- inal compounds. In Proceedings of the 15th Eu- ropean Conference on Artificial Intelligence, pages 470-474, Lyon, France.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A contrastive approach to term extraction", "authors": [ { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" }, { "first": "Maria", "middle": [ "T" ], "last": "Pazienza", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Fabio", "middle": [ "M" ], "last": "Zanzotto", "suffix": "" } ], "year": 2001, "venue": "Proceedings of the 4th Terminology and Artificial Intelligence Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roberto Basili, Maria T. Pazienza, Alessandro Mos- chitti, and Fabio M. Zanzotto. 2001. A contrastive approach to term extraction. In Proceedings of the 4th Terminology and Artificial Intelligence Confer- ence, Nancy, France.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A domain-specific dataset of difficulty ratings for German noun compounds in the domains DIY, Cooking and Automotive", "authors": [ { "first": "Julia", "middle": [], "last": "Bettinger", "suffix": "" }, { "first": "Anna", "middle": [], "last": "H\u00e4tty", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Dorna", "suffix": "" }, { "first": "Sabine", "middle": [], "last": "Schulte Im Walde", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "4352--4360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Bettinger, Anna H\u00e4tty, Michael Dorna, and Sabine Schulte im Walde. 2020. A domain-specific dataset of difficulty ratings for German noun com- pounds in the domains DIY, Cooking and Auto- motive. 
In Proceedings of the 12th International Conference on Language Resources and Evaluation, pages 4352-4360, Marseille, France.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A contrastive approach to multi-word term extraction from domain corpora", "authors": [ { "first": "Francesca", "middle": [], "last": "Bonin", "suffix": "" }, { "first": "Felice", "middle": [], "last": "Dell'orletta", "suffix": "" }, { "first": "Giulia", "middle": [], "last": "Venturi", "suffix": "" }, { "first": "Simonetta", "middle": [], "last": "Montemagni", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 7th International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "19--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francesca Bonin, Felice Dell'Orletta, Giulia Venturi, and Simonetta Montemagni. 2010. A contrastive ap- proach to multi-word term extraction from domain corpora. In Proceedings of the 7th International Conference on Language Resources and Evaluation, pages 19--21, Valletta, Malta.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Transfer-based learning-to-rank assessment of medical term technicality", "authors": [ { "first": "Dhouha", "middle": [], "last": "Bouamor", "suffix": "" }, { "first": "Leonardo", "middle": [ "Campillos" ], "last": "Llanos", "suffix": "" }, { "first": "Anne-Laure", "middle": [], "last": "Ligozat", "suffix": "" }, { "first": "Sophie", "middle": [], "last": "Rosset", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dhouha Bouamor, Leonardo Campillos Llanos, Anne- Laure Ligozat, Sophie Rosset, and Pierre Zweigen- baum. 2016. Transfer-based learning-to-rank assess- ment of medical term technicality. In Proceedings of the 10th International Conference on Language Resources and Evaluation, Portoro\u017e, Slovenia.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Splitting of compound terms in non-prototypical compounding languages", "authors": [ { "first": "Loginova", "middle": [], "last": "Elizaveta", "suffix": "" }, { "first": "B\u00e9atrice", "middle": [], "last": "Clouet", "suffix": "" }, { "first": "", "middle": [], "last": "Daille", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the First Workshop on Computational Approaches to Compound Analysis", "volume": "", "issue": "", "pages": "11--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elizaveta Loginova Clouet and B\u00e9atrice Daille. 2014. 
Splitting of compound terms in non-prototypical compounding languages. In Proceedings of the First Workshop on Computational Approaches to Com- pound Analysis, pages 11-19, Dublin, Ireland. As- sociation for Computational Linguistics and Dublin City University.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The morphological family size effect and morphology. Language and Cognitive Processes", "authors": [ { "first": "H", "middle": [ "De" ], "last": "Nivja", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Jong", "suffix": "" }, { "first": "Harald", "middle": [ "R" ], "last": "Schreuder", "suffix": "" }, { "first": "", "middle": [], "last": "Baayen", "suffix": "" } ], "year": 2000, "venue": "", "volume": "15", "issue": "", "pages": "329--365", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nivja H. De Jong, Robert Schreuder, and Harald R. Baayen. 2000. The morphological family size ef- fect and morphology. Language and Cognitive Pro- cesses, 15(4-5):329-365.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Extracting lay paraphrases of specialized expressions from monolingual comparable medical corpora", "authors": [ { "first": "Louise", "middle": [], "last": "Del\u00e9ger", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Zweigenbaum", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2nd Workshop on Building and Using Comparable Corpora: from Parallel to Nonparallel Corpora", "volume": "", "issue": "", "pages": "2--10", "other_ids": {}, "num": null, "urls": [], "raw_text": "Louise Del\u00e9ger and Pierre Zweigenbaum. 2009. Ex- tracting lay paraphrases of specialized expressions from monolingual comparable medical corpora. In Proceedings of the 2nd Workshop on Building and Using Comparable Corpora: from Parallel to Non- parallel Corpora, pages 2-10.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 17th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186, Minneapolis, Minnesota, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Term extraction using nontechnical corpora as a point of leverage. Terminology", "authors": [ { "first": "Patrick", "middle": [], "last": "Drouin", "suffix": "" } ], "year": 2003, "venue": "International Journal of Theoretical and Applied Issues in Specialized Communication", "volume": "9", "issue": "1", "pages": "99--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Drouin. 2003. Term extraction using non- technical corpora as a point of leverage. Terminol- ogy. 
International Journal of Theoretical and Applied Issues in Specialized Communication, 9(1):99–115.

Gertrud Faaß and Kerstin Eckart. 2013. SdeWaC – A corpus of parsable sentences from the web. In Iryna Gurevych, Chris Biemann, and Torsten Zesch, editors, Language Processing and Knowledge in the Web, volume 8105 of Lecture Notes in Computer Science, pages 61–68. Springer, Berlin Heidelberg.

Katerina T. Frantzi, Sophia Ananiadou, and Jun-ichi Tsujii. 1998. The c-value/nc-value method of automatic recognition for multi-word terms. In Proceedings of the 2nd European Conference on Research and Advanced Technology for Digital Libraries, pages 585–604, London, UK.

Natalia Grabar and Thierry Hamon. 2014. Unsupervised method for the acquisition of general language paraphrases for medical compounds. In Proceedings of the 4th International Workshop on Computational Terminology, pages 94–103, Dublin, Ireland.

Natalia Grabar, Thierry Hamon, and Dany Amiot. 2014. Automatic diagnosis of understanding of medical words. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations, pages 11–20, Gothenburg, Sweden.

Anna Hätty, Michael Dorna, and Sabine Schulte im Walde. 2017. Evaluating the reliability and interaction of recursively used feature classes for terminology extraction. In Proceedings of the Student Research Workshop at the 15th Conference of the European Chapter of the Association for Computational Linguistics, pages 113–121, Valencia, Spain.

Anna Hätty and Sabine Schulte im Walde. 2018. Fine-grained termhood prediction for German compound terms using neural networks. In Proceedings of the COLING Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions, pages 62–73, Santa Fe, NM, USA.

Kyo Kageura and Bin Umino. 1996. Methods of automatic term recognition: A review. Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication, 3(2):259–289.

Chunyu Kit and Xiaoyue Liu. 2008. Measuring mono-word termhood by rank difference via corpus comparison. Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication, 14(2):204–229.

Natalia A. Kochetkova. 2015. A method for extracting technical terms using the modified weirdness measure. Automatic Documentation and Mathematical Linguistics, 49(3):89–95.

Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of Psychology.

Lucelene Lopes, Paulo Fernandes, and Renata Vieira. 2016. Estimating term domain relevance through term frequency, disjoint corpora frequency - tf-dcf. Knowledge-Based Systems, 97:237–249.

Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika, 12(2):153–157.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.

Agnieszka Mykowiecka, Małgorzata Marciniak, and Piotr Rychlik. 2018. Recognition of irrelevant phrases in automatically extracted lists of domain terms. Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication, 24(1):66–90.

Hirosi Nakagawa and Tatsunori Mori. 2003. Automatic term recognition based on statistics of compound nouns and their components. Terminology. International Journal of Theoretical and Applied Issues in Specialized Communication, 9(2):201–219.

Paul Rayson and Roger Garside. 2000. Comparing corpora using frequency profiling. In Proceedings of the Workshop on Comparing Corpora, pages 1–6, Hong Kong.

Siva Reddy, Diana McCarthy, and Suresh Manandhar. 2011. An empirical study on compositionality in compound nouns. In Proceedings of the 5th International Joint Conference on Natural Language Processing, pages 210–218, Chiang Mai, Thailand.

Sabine Schulte im Walde, Anna Hätty, and Stefan Bott. 2016. The role of modifier and head properties in predicting the compositionality of English and German noun-noun compounds: A vector-space perspective. In Proceedings of the 5th Joint Conference on Lexical and Computational Semantics, pages 148–158, Berlin, Germany.

Sabine Schulte im Walde, Stefan Müller, and Stefan Roller. 2013. Exploring vector space models to predict the compositionality of German noun-noun compounds. In Proceedings of the 2nd Joint Conference on Lexical and Computational Semantics, pages 255–265.

V.G.Vinod Vydiswaran, Qiaozhu Mei, David A. Hanauer, and Kai Zheng. 2014. Mining consumer health vocabulary from community-generated text. In AMIA Annual Symposium Proceedings, pages 1150–1159.

Ornella Wandji Tchami and Natalia Grabar. 2014. Towards automatic distinction between specialized and non-specialized occurrences of verbs in medical corpora. In Proceedings of the 4th International Workshop on Computational Terminology, pages 114–124, Dublin, Ireland.

Marion Weller-Di Marco. 2017. Simple compound splitting for German. In Proceedings of the 13th Workshop on Multiword Expressions, pages 161–166, Valencia, Spain.

Qing Zeng, Eunjung Kim, Jon Crowell, and Tony Tse. 2005. A text corpora-based estimation of the familiarity of health terminology. In International Symposium on Biological and Medical Data Analysis, pages 184–192.

Qing Zeng-Treitler, Sergey Goryachev, Tony Tse, Alla Keselman, and Aziz Boxwala. 2008. Estimating consumer familiarity with health terminology: A context-based approach. Journal of the American Medical Informatics Association, 15(3):349–356.

Figure 1: Gold standard: binary and four-class distributions across gold standards (figures taken from Bettinger et al. (2020)).

Table 1: Results for classification using all features. All results but those in italics are significant.
Baselines and            Binary                  Four-class
Gold Standards           Micro-F1   Macro-F1    Micro-F1   Macro-F1
Baseline Cooking         0.519      0.342       0.498      0.166
Baseline DIY             0.584      0.369       0.407      0.145
Baseline Automotive      0.667      0.400       0.325      0.123
Baseline All             0.604      0.377       0.376      0.137
Cooking                  0.646      0.631       0.543      0.312
DIY                      0.712      0.684       0.519      0.406
Automotive               0.750      0.720       0.471      0.286
All                      0.732      0.707       0.492      0.405
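All result tables report micro- and macro-averaged F1. The two diverge sharply on the skewed four-class gold standards (e.g. the pooled baseline reaches micro-F1 0.376 but only macro-F1 0.137), because micro-F1 pools all decisions while macro-F1 averages per-class scores and thus weights rare classes equally. A minimal sketch with invented labels, using scikit-learn, illustrates the difference:

```python
# Illustration only: why micro- and macro-F1 diverge on skewed data.
# The labels below are invented, not the paper's data.
from sklearn.metrics import f1_score

# Hypothetical gold and predicted difficulty classes for the
# four-class setting (0 = easiest ... 3 = most difficult).
gold = [0, 0, 0, 0, 0, 1, 1, 2, 3, 3]
pred = [0, 0, 0, 0, 1, 1, 2, 2, 0, 3]

# Micro-F1 pools all decisions, so the frequent class dominates;
# macro-F1 averages per-class F1, so rare classes weigh equally.
print("micro-F1:", f1_score(gold, pred, average="micro"))
print("macro-F1:", f1_score(gold, pred, average="macro"))
```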
Table 2: Binary: results by feature groups.

Feature Group   Micro-F1   Macro-F1
Baseline        0.604      0.377
Cosine          0.594*     0.391*
Head            0.608*     0.568*
Domain          0.635*     0.593*
Modifier        0.656      0.627
Constituent     0.661      0.648
Contrastive     0.713      0.690
All             0.732      0.707
General         0.735      0.703
Compound        0.736      0.713
", "num": null }, "TABREF1": { "type_str": "table", "html": null, "text": "Binary: results by feature groups.", "content": "
Feature Group   Micro-F1   Macro-F1
Baseline        0.376      0.137
Cosine          0.400*     0.258*
Domain          0.405*     0.300*
Head            0.418      0.287
Constituent     0.455      0.364
Modifier        0.457      0.370
General         0.458      0.359   }
Compound        0.480      0.342   } significant
All             0.492      0.405   } improvement
Contrastive     0.510      0.408   }
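The tables mark which results differ significantly from the baseline. The reference list includes McNemar (1947), so the sketch below assumes a McNemar-style test over paired classifier decisions; the exact test variant used in the paper is not shown here, and the disagreement counts are invented:

```python
# A sketch of McNemar's test for comparing two classifiers on the same
# items, assuming this is the significance test behind the markings.
# The disagreement counts below are invented.
from scipy.stats import chi2

# b: items classifier A gets right and B gets wrong; c: the reverse.
b, c = 35, 18

# Continuity-corrected McNemar statistic, chi-squared with 1 df.
stat = (abs(b - c) - 1) ** 2 / (b + c)
p_value = chi2.sf(stat, df=1)
print(f"chi2 = {stat:.3f}, p = {p_value:.4f}")
```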
", "num": null }, "TABREF2": { "type_str": "table", "html": null, "text": "Four-class: results by feature group.", "content": "
Feature          Micro-F1   Macro-F1
Baseline         0.604      0.377
comp TFITF       0.637      0.566
FREQ head gen    0.642      0.571
FREQ mod gen     0.645      0.619
PROD mod gen     0.653      0.616
comp WEIRD       0.709      0.690
FGM gen          0.713      0.696
FREQ gen         0.732      0.706
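Among the individual features, the contrastive weirdness score (comp WEIRD) is one of the strongest predictors in both settings. The sketch below implements the classic weirdness ratio of relative domain frequency to relative general-language frequency; whether the paper uses this exact formulation or a modified variant (cf. Kochetkova, 2015) cannot be read off the tables, and the counts are invented:

```python
# A minimal sketch of a weirdness-style contrastive termhood score:
# relative frequency in the domain corpus divided by relative frequency
# in a general-language corpus. Counts below are invented.
def weirdness(freq_dom: int, size_dom: int,
              freq_gen: int, size_gen: int) -> float:
    """Ratio of relative frequencies: domain vs. general corpus."""
    rel_dom = freq_dom / size_dom
    rel_gen = max(freq_gen, 1) / size_gen  # smooth zero general frequency
    return rel_dom / rel_gen

# A compound that is frequent in a DIY corpus but rare in general
# language receives a high score.
print(weirdness(freq_dom=120, size_dom=1_000_000,
                freq_gen=15, size_gen=800_000_000))
```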
", "num": null }, "TABREF3": { "type_str": "table", "html": null, "text": "Binary: individual features which significantly outperform the baseline.", "content": "
Feature          Micro-F1   Macro-F1
Baseline         0.376      0.137
comp TFITF       0.412      0.238
FREQ mod dom     0.415      0.280
Num comp         0.417      0.248
PROD head gen    0.426      0.306
FREQ head gen    0.435      0.290
FREQ mod gen     0.454      0.322
PROD mod gen     0.455      0.298
comp WEIRD       0.462      0.330
FREQ gen         0.464      0.343
FGM gen          0.467      0.339
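Several of the strongest individual features are constituent-level frequency (FREQ) and productivity (PROD) scores for modifiers and heads. The sketch below follows the common definitions -- summed token frequency of the compounds a constituent occurs in, and the number of distinct compound types it forms -- on invented split compounds; the paper's exact feature extraction may differ:

```python
# A sketch of constituent frequency and productivity features, assuming
# the common definitions. The split compounds below are invented examples.
from collections import Counter, defaultdict

# (modifier, head, corpus frequency) triples after compound splitting.
compounds = [
    ("Brems", "Flüssigkeit", 40),   # Bremsflüssigkeit 'brake fluid'
    ("Brems", "Scheibe", 25),       # Bremsscheibe 'brake disc'
    ("Hydraulik", "Leitung", 7),    # Hydraulikleitung 'hydraulic line'
]

mod_freq, mod_types = Counter(), defaultdict(set)
for mod, head, freq in compounds:
    mod_freq[mod] += freq           # FREQ_mod: summed token frequency
    mod_types[mod].add(head)        # PROD_mod: distinct compound types

print(mod_freq["Brems"])            # 65
print(len(mod_types["Brems"]))      # 2
```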
", "num": null }, "TABREF4": { "type_str": "table", "html": null, "text": "", "content": "
: Four-class: individual features which sig-
nificantly outperform the baseline.
", "num": null }, "TABREF5": { "type_str": "table", "html": null, "text": "Binary: feature selection.", "content": "
Chosen Feature    Micro-F1   Macro-F1
+FGM gen          0.467      0.339
+head TFITF       0.487      0.350
+PROD mod gen     0.493      0.362
+PROD head gen    0.511      0.370
+NUM comp         0.511      0.370