{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:35:41.725695Z"
},
"title": "Exploring Linguistically-Lightweight Keyword Extraction Techniques for Indexing News Articles in a Multilingual Set-up",
"authors": [
{
"first": "Jakub",
"middle": [],
"last": "Piskorski",
"suffix": "",
"affiliation": {},
"email": "jpiskorski@gmail.com"
},
{
"first": "Nicolas",
"middle": [],
"last": "Stefanovitch",
"suffix": "",
"affiliation": {
"laboratory": "Joint Research Centre (JRC) Ispra",
"institution": "",
"location": {
"country": "Italy"
}
},
"email": "nicolas.stefanovitch@ec.europa.eu"
},
{
"first": "Guillaume",
"middle": [],
"last": "Jacquet",
"suffix": "",
"affiliation": {
"laboratory": "Joint Research Centre (JRC) Ispra",
"institution": "",
"location": {
"country": "Italy"
}
},
"email": "guillaume.jacquet@ec.europa.eu"
},
{
"first": "Aldo",
"middle": [],
"last": "Podavini",
"suffix": "",
"affiliation": {
"laboratory": "Joint Research Centre (JRC) Ispra",
"institution": "",
"location": {
"country": "Italy"
}
},
"email": "aldo.podavini@ec.europa.eu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a study of state-of-theart unsupervised and linguistically unsophisticated keyword extraction algorithms, based on statistic-, graph-, and embedding-based approaches, including, i.a., Total Keyword Frequency, TF-IDF, RAKE, KPMiner, YAKE, KeyBERT, and variants of TextRank-based keyword extraction algorithms. The study was motivated by the need to select the most appropriate technique to extract keywords for indexing news articles in a realworld large-scale news analysis engine. The algorithms were evaluated on a corpus of circa 330 news articles in 7 languages. The overall best F 1 scores for all languages on average were obtained using a combination of the recently introduced YAKE algorithm and KPMiner (20.1%, 46.6% and 47.2% for exact, partial and fuzzy matching resp.).",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a study of state-of-theart unsupervised and linguistically unsophisticated keyword extraction algorithms, based on statistic-, graph-, and embedding-based approaches, including, i.a., Total Keyword Frequency, TF-IDF, RAKE, KPMiner, YAKE, KeyBERT, and variants of TextRank-based keyword extraction algorithms. The study was motivated by the need to select the most appropriate technique to extract keywords for indexing news articles in a realworld large-scale news analysis engine. The algorithms were evaluated on a corpus of circa 330 news articles in 7 languages. The overall best F 1 scores for all languages on average were obtained using a combination of the recently introduced YAKE algorithm and KPMiner (20.1%, 46.6% and 47.2% for exact, partial and fuzzy matching resp.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Keyword Extraction (KE) is the task of automated extraction of single or multiple-token phrases from a textual document that best express all key aspects of its content and can be seen as automated generation of a short document summary. It constitutes an enabling technology for document indexing, clustering, classification, summarization, etc. This paper presents a comparative study of the performance of some state-of-the-art unsupervised linguistically-lightweight keyword extraction methods and combinations thereof applied on news articles in seven languages. The main drive behind the reported work was to explore the usability of these methods for adding another level of indexing of news articles gathered and analysed by the Europe Media Monitor (EMM) 1 (Steinberger et al., 2017) , a large-scale multilingual real-time news gathering and analysis system, which processes an average of 300,000 online news articles per day in up to 70 languages and is serving several EU institutions and international organisations.",
"cite_spans": [
{
"start": 766,
"end": 792,
"text": "(Steinberger et al., 2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While a vast bulk of research and tools for KE have been reported in the past, the specific focus of our research was to select the most suitable KE methods for indexing news articles taking specifically into account the operational, multilingual and real-time processing character of EMM. Hence, only unsupervised, scalable vis-a-vis multilinguality and robust algorithms that do not require any sophisticated linguistic resources and are capable of processing single news article in a time-efficient manner were considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Keyword extraction has been the subject of research for decades. Both unsupervised and supervised approaches exist, the unsupervised being particularly popular due to the scarcity of annotated data as well as their domain independence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The unsupervised approaches are usually divided in three phases: (a) selection of candidate tokens that can constitute part of a keyword using some heuristics based on statistics and/or certain linguistic features (e.g., belonging to a specific part-ofspeech or not being a stop word, etc.), (b) rank-ing the selected tokens, and (c) generating keywords out of the selected tokens, where the final rank is computed using the scores of the individual tokens. The unsupervised methods are divided into: statistics-, graph-, embeddings-and language model-based ones. The statistics-based methods exploit frequency, positional and co-occurrence statistics in the process of selecting candidate keywords. The graph-based methods create a graph from textual documents with nodes representing the candidate keywords and edges representing some relatedness to other candidate keywords, and then deploy graph ranking algorithms, e.g. PageRank, TextRank, to rank the final set of keywords. Recently, a third group of methods emerged which are based on word (Mikolov et al., 2013) and sentence embeddings (Pagliardini et al., 2018) . Linguistic sophistication constitutes another dimension to look at the keyword extraction algorithms. Some of the methods use barely any language-specific resources, e.g., only stop word lists, whereas others exploit part-of-speech tagging or even syntactic parsing.",
"cite_spans": [
{
"start": 1047,
"end": 1069,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF14"
},
{
"start": 1094,
"end": 1120,
"text": "(Pagliardini et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The supervised methods are simply divided into shallow and deep learning methods. The shallow methods exploit either binary classifiers to decide whether a token sequence is a keyword, linear regression-based models to rank the candidate keywords, and sequence labelling techniques. The deep learning methods exploit encoder-decoder and sequence-to-sequence labelling approaches. Most of the supervised machine-learning approaches reported in the literature deploy more linguistic sophistication (i.e., linguistic features) vis-a-vis unsupervised methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Extensive surveys on keyword extraction methods and comparison of their relative performance are provided in (Papagiannopoulou and Tsoumakas, 2020; Hasan and Ng, 2014; Kilic and Cetin, 2019; Alami Merrouni et al., 2019 ).",
"cite_spans": [
{
"start": 109,
"end": 147,
"text": "(Papagiannopoulou and Tsoumakas, 2020;",
"ref_id": "BIBREF16"
},
{
"start": 148,
"end": 167,
"text": "Hasan and Ng, 2014;",
"ref_id": "BIBREF8"
},
{
"start": 168,
"end": 190,
"text": "Kilic and Cetin, 2019;",
"ref_id": "BIBREF9"
},
{
"start": 191,
"end": 218,
"text": "Alami Merrouni et al., 2019",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since only a few monolingual corpora with keyword annotation of news articles exist (Marujo et al., 2013 (Marujo et al., , 2012 Bougouin et al., 2013) that use different approaches to keyword annotation, we have created a new multilingual corpus of circa 330 news articles annotated with keywords covering 7 languages which is used for evaluation purposes in our study. We are not aware of any similar multilingual resource available for research purposes.",
"cite_spans": [
{
"start": 86,
"end": 111,
"text": "Marujo et al., 2012, 2013",
"ref_id": "BIBREF11"
},
{
"start": 86,
"end": 111,
"text": "Marujo et al., 2012, 2013",
"ref_id": "BIBREF12"
},
{
"start": 113,
"end": 134,
"text": "Bougouin et al., 2013",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. First, Section 2 introduces the Keyword Extraction task for news article indexing. Section 3 gives an overview of the methods explored. Next, Section 4 describes the creation of a multi-lingual data set and experiment results. Finally, we end up with conclusions and an outlook on future work in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The purpose of KE might vary depending on the domain in which it is deployed. In media monitoring and analysis the main objective is to capture from the text of each news article the main topics discussed therein, the key events reported, the entities involved in these events and what is the outcome, impact and significance thereof. For the sake of specifying what the expected output of KE should be, and in order to guide human annotators tasked to create test datasets, the following constraints on keyword selection were introduced (here in simplified form):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction Task",
"sec_num": "2"
},
{
"text": "\u2022 a keyword can be a single word or a sequence of up to 5 consecutive words (unless it is a long proper name) as they appear in the news article or the title thereof,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction Task",
"sec_num": "2"
},
{
"text": "\u2022 a minimum of 5 and ideally not more than 15 keywords (with ca 30% margin -to provide some flexibility) should be selected, however the set of selected keywords may not constitute more than 50% of the body of the news article,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction Task",
"sec_num": "2"
},
{
"text": "\u2022 a single keyword may not include more than one entity,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction Task",
"sec_num": "2"
},
{
"text": "\u2022 a keyword has to be either a noun phrase, proper name, verb, adjective, phrasal verb, or part of a clause (e.g., 'Trump died'),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction Task",
"sec_num": "2"
},
{
"text": "\u2022 a stand-alone adverb, conjunction, determiner, number, preposition or pronoun may not constitute a keyword,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction Task",
"sec_num": "2"
},
{
"text": "\u2022 a full sentence can never constitute a keyword,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction Task",
"sec_num": "2"
},
{
"text": "\u2022 keywords should not be converted into their corresponding base forms, disregarding the fact that a base form would appear more natural,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction Task",
"sec_num": "2"
},
{
"text": "\u2022 if there are many candidate keywords to represent the same concept, only one of them should be selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction Task",
"sec_num": "2"
},
{
"text": "Given the specific context of real-time media monitoring, our experiments imposed the following main selection criteria to the keyword extraction techniques to explore and evaluate:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "\u2022 efficiency: ability to process a single news article within a fraction of a second,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "\u2022 multi-linguality: ability to quickly adapt the method to the processing of many different languages,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "\u2022 robustness: ability to process corrupted data without impacting performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "Consequently, we have selected methods that: (a) do not require any language-specific resources except stop word lists and off-the-shelf pre-computed word embeddings, (b) exploit only information that can be computed in a time-efficient manner, e.g., frequency statistics, co-occurrence, positional information, string similarity, etc., (c) do not require any external text corpora (with one exception for a baseline method). The pool of methods (and variants thereof) explored includes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "Total Keyword Frequency (TKF) exploits only frequency information to rank candidate keywords, where candidates are 1-3 word n-grams from text that do not contain punctuation marks, and which neither start nor end with a stop word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
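{
"text": "To make the TKF baseline concrete, here is a minimal Python sketch, assuming a caller-supplied stop word list and simple lowercasing tokenisation (the function name tkf_keywords and the helper logic are illustrative, not the authors' implementation):\n\nfrom collections import Counter\n\ndef tkf_keywords(text, stop_words, top_n=15):\n    # Punctuation acts as an n-gram boundary: replace it with a separator.\n    cleaned = ''.join(ch if ch.isalnum() or ch.isspace() else '|' for ch in text.lower())\n    counts = Counter()\n    for segment in cleaned.split('|'):\n        tokens = segment.split()\n        for n in (1, 2, 3):\n            for i in range(len(tokens) - n + 1):\n                ngram = tokens[i:i + n]\n                # Candidates may neither start nor end with a stop word.\n                if ngram[0] in stop_words or ngram[-1] in stop_words:\n                    continue\n                counts[' '.join(ngram)] += 1\n    # Rank candidates by total frequency only.\n    return [kw for kw, _ in counts.most_common(top_n)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},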
{
"text": "Term Frequency-Inverse Document Frequency (TF-IDF) constitutes the main baseline algorithm in our study. For the computation of TF-IDF scores a corpus consisting of 34.5M news articles gathered by EMM that span over the first 6 months of 2020 and covering ca. 70 languages was exploited. 2 A maximum of min(20, N/6) keywords with highest TF-IDF scores are returned for a news article, where N stands for the total number of tokens in the article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
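{
"text": "As a concrete sketch of this ranking step under the settings above, assuming the document frequencies and corpus size have been precomputed offline from the background corpus (the names tfidf_keywords, df and num_docs are ours):\n\nimport math\nfrom collections import Counter\n\ndef tfidf_keywords(tokens, df, num_docs, stop_words):\n    # tokens: the tokenized article; df: token -> document frequency in the background corpus.\n    tf = Counter(t for t in tokens if t not in stop_words)\n    scores = {t: f * math.log(num_docs / (1 + df.get(t, 0))) for t, f in tf.items()}\n    # At most min(20, N/6) keywords, where N is the article length in tokens.\n    limit = min(20, len(tokens) // 6)\n    return sorted(scores, key=scores.get, reverse=True)[:limit]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},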
{
"text": "Rapid Automatic Keyword Extraction (RAKE) exploits both frequency and co-occurrence information about tokens to score candidate keyword phrases (token sequences that do contain neither stop words nor phrase delimiters) (Rose et al., 2 In particular, the pool of 34.5M news articles included: 11309K English, 6746K Spanish, 2322K French, 2001K Italian, 1431K German, 760K Romanian and 183K Polish articles, which covers the languages of the evaluation dataset (see Section 4.1). 2010). More specifically, the score for a candidate keyword phrase is computed as the sum of its member word scores. We explored three options for scoring words: (a) s(w) = f requency(w) (RAKE-FREQ), (b) s(w) = degree(w) (RAKE-DEG), which stands for the number of other content words that co-occurr with w in any candidate keyword phrase, and (c) s(w) = degree(w)/f requency(w) (RAKE-DEGFREQ).",
"cite_spans": [
{
"start": 219,
"end": 239,
"text": "(Rose et al., 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
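{
"text": "The three word-scoring variants can be sketched as follows, assuming candidate phrases have already been extracted as lists of content words (an illustrative reconstruction, not the authors' Java code):\n\nfrom collections import defaultdict\n\ndef rake_word_scores(phrases, variant='degfreq'):\n    freq = defaultdict(int)\n    degree = defaultdict(int)\n    for phrase in phrases:\n        for w in phrase:\n            freq[w] += 1\n            # degree(w): number of other content words co-occurring with w\n            # in candidate phrases, following the definition above.\n            degree[w] += len(phrase) - 1\n    if variant == 'freq':    # RAKE-FREQ\n        return dict(freq)\n    if variant == 'deg':     # RAKE-DEG\n        return dict(degree)\n    return {w: degree[w] / freq[w] for w in freq}  # RAKE-DEGFREQ\n\ndef rake_phrase_score(phrase, word_scores):\n    # A candidate phrase is scored as the sum of its member word scores.\n    return sum(word_scores[w] for w in phrase)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},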
{
"text": "Keyphrase Miner (KP-Miner) exploits frequency and positional information about candidate keywords (word n-grams that do not contain punctuation marks, and which neither start nor end with a stop word) with some weighting of multi-token keywords (El-Beltagy and Rafea, 2009) . More precisely, the score of a candidate keyword (in the case of single document scenario) is computed as:",
"cite_spans": [
{
"start": 245,
"end": 273,
"text": "(El-Beltagy and Rafea, 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "s(k) = f req(k) \u2022 max( |K| \u03b1 \u2022 |K m | , \u03c9) \u2022 1 AvgP os(k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "where f req(k), K, K m denote frequency of k, the set of all candidate keywords and the set of all multi-token candidate keywords resp., whereas \u03b1 and \u03c9 are two weight adjustment constants, and AvgP os(k) denotes the average position of the keyword in a text in terms of regions separated by punctuations. KP-Miner also has a specific cut-off parameter, which determines the number of tokens after which if the keyword appears for the first time it is filtered out and discarded as a candidate. Our version of KP-Miner does not include stemming different from the original one (El-Beltagy and Rafea, 2009) due to our multilingual context and the specification of KE task (see Section 2). Finally, KP-Miner scans the top n ranking candidates and removes the ones which constitute sub-parts of others and adjusts the scores accordingly. Based on the empirical observations the specific parameters, namely, \u03b1, \u03c9 and cut-off were set to 1.0, 3.0 and 1000 resp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
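{
"text": "The scoring formula can be sketched directly, with the reported parameter settings as defaults; candidate generation and the cut-off filter are assumed to have been applied beforehand (kpminer_score is an illustrative name):\n\ndef kpminer_score(freq_k, avg_pos_k, num_candidates, num_multi_token, alpha=1.0, omega=3.0):\n    # Boosting factor for multi-token keywords, bounded from below by omega,\n    # following the formula above.\n    boost = max(num_candidates / (alpha * num_multi_token), omega)\n    # avg_pos_k: average position of k in terms of punctuation-separated regions.\n    return freq_k * boost / avg_pos_k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},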
{
"text": "Yet Another Keyword Extraction (Yake) exploits a wider range of features (Campos et al., 2020) vis-a-vis RAKE and KP-Miner in the process of scoring single tokens. Like the two algorithms introduced earlier, YAKE selects as candidate keywords word n-grams that do not contain punctuation marks, and which neither start nor end with a stop word. However, on top of this, an additional token classification step is then carried out in order to filter out additional tokens that should not constitute part of a keyword (e.g. non alphanumeric character sequences, etc.). Single tokens are scored using the following formula:",
"cite_spans": [
{
"start": 73,
"end": 94,
"text": "(Campos et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "Score(t) = T rel\u2212context (t) \u2022 T position (t) T case (t) + T f req\u2212norm (t)+Tsentence(t) T rel\u2212context (t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "where: (a) T case (t) is a feature that reflects statistics on case information of all occurrences of t based on the assumption that uppercase tokens are more relevant than lowercase ones, (b) T position (t) is a feature that exploits positional information and boosts tokens that tend to appear at the beginning of a text, (c) T f req\u2212norm is a feature that gives higher value to tokens appearing more than the mean and balanced by the span provided by standard deviation, (d) T sentence (t) is a feature that boosts significance of tokens that appear in many different sentences, and (e) T rel\u2212context (t) is a relatedness to context indicator that 'downgrades' tokens that co-occur with higher number of unique tokens in a given window (see (Campos et al., 2020) for details). The score for a candidate keyword k = t 1 t 2 . . . t n is then computed as:",
"cite_spans": [
{
"start": 744,
"end": 765,
"text": "(Campos et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "Score(k) = n i=1 Score(t i ) f requency(k) \u2022 (1 + n i=1 Score(t i ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
{
"text": "Once the candidate keywords are ranked, potential duplicates are removed by adding them in relevance order. When a new keyword is added it is compared against all more relevant candidates in terms of semantic similarity, and if this similarity is below a specified threshold it is discarded. While the original YAKE algorithm exploits for this purpose the Levenshtein distance, our implementation uses Weighted Logest Common Substrings string distance metric (Piskorski et al., 2009) which favours overlap in the initial part of the strings compared.",
"cite_spans": [
{
"start": 459,
"end": 483,
"text": "(Piskorski et al., 2009)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},
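{
"text": "This deduplication pass can be sketched as follows, assuming a string_distance function returning values in [0, 1] (the paper uses the Weighted Longest Common Substrings metric; the threshold value below is illustrative):\n\ndef dedup_by_distance(ranked_keywords, string_distance, threshold=0.4):\n    # ranked_keywords: candidates sorted by relevance, best first.\n    kept = []\n    for cand in ranked_keywords:\n        # Discard cand if it is too close to any more relevant keyword.\n        if all(string_distance(cand, k) >= threshold for k in kept):\n            kept.append(cand)\n    return kept",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "3"
},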
{
"text": "Keyword Extraction (KEYEMB) exploits document embeddings and cosine similarity in order to identify candidate keywords. First, a document embedding is computed, then word n-grams of different sizes are generated, which are subsequently ranked along their similarity to the embedding of the document (Grootendorst, 2020) .",
"cite_spans": [
{
"start": 299,
"end": 319,
"text": "(Grootendorst, 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding-based",
"sec_num": null
},
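{
"text": "A minimal sketch of this ranking, assuming an encode function that maps a list of strings to L2-normalised sentence-embedding vectors (e.g. from an off-the-shelf multilingual encoder; encode and keyemb_keywords are our names):\n\nimport numpy as np\n\ndef keyemb_keywords(document, candidates, encode, top_n=15):\n    # encode: list[str] -> array of shape (len(list), dim) with unit-norm rows.\n    doc_vec = encode([document])[0]\n    cand_vecs = encode(candidates)\n    # Cosine similarity reduces to a dot product for normalised vectors.\n    sims = cand_vecs @ doc_vec\n    order = np.argsort(-sims)[:top_n]\n    return [candidates[i] for i in order]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding-based",
"sec_num": null
},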
{
"text": "We tested three different out-of-the-box transformer-based sentence embeddings. BERTbased ones are taken from (Reimers and Gurevych, 2020) , which are both multilingual and fine-tuned on natural language inference and semantic text similarity tasks. One version uses a basic BERT model (KEYEMB-BERT-B) and the other a lightweight BERT model (KEYEMB-BERT-D). Finally, KEYEMB-LASER is based on LASER (Artetxe and Schwenk, 2019) embeddings. Contrary to BERT, they have not been fine-tuned on semantic similarity tasks, but for the task of aligning similar multilingual concepts to the same semantic space.",
"cite_spans": [
{
"start": 110,
"end": 138,
"text": "(Reimers and Gurevych, 2020)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding-based",
"sec_num": null
},
{
"text": "Filtering stop words without applying any of the different post-processing steps proposed in (Grootendorst, 2020) provided the best results and therefore is the setting we used in the evaluation and comparison against other methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding-based",
"sec_num": null
},
{
"text": "Graph-based Keyword Extraction: (GRAPH) exploits properties of a graph whose nodes are substrings extracted from the text in order to identify which are the most important (Litvak and Last, 2008) . This approach differs from TextRank (Mihalcea and Tarau, 2004), in two ways: firstly, the graph is constructed in a fundamentally different way yielding smaller graphs and therefore faster processing time; secondly, different lowercomplexity graph measures are also explored, allowing even faster processing time.",
"cite_spans": [
{
"start": 172,
"end": 195,
"text": "(Litvak and Last, 2008)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding-based",
"sec_num": null
},
{
"text": "A node of the graph corresponds either to a sentence, a phrase delimited by any punctuation marks or a token sequence delimited by stop words. Two nodes are connected only if they share at least 20% of words after removal of stop words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding-based",
"sec_num": null
},
{
"text": "The importance of the nodes can be defined in different ways. In this study we looked at: (a) degree (GRAPH-DEGREE), which measures the absolute number of related sentences in the text, (b) centrality (GRAPH-CENTR) which intuitively measures the extent to which a specific node serves as a bridge to connect any unrelated pieces of information, (c) clustering (GRAPH-CLUST) which measure the level of interconnection between the neighbours of a node and itself, and finally, (d) the sum of the centrality and clustering measure (GRAPH-CE&CL). Please refer to (Brandes, 2005) for further details on these graph measures.",
"cite_spans": [
{
"start": 559,
"end": 574,
"text": "(Brandes, 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding-based",
"sec_num": null
},
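{
"text": "The four node-importance variants can be sketched with networkx, assuming the graph has been built with the node and edge definitions above; reading 'centrality' as betweenness centrality is our interpretation of the bridging description:\n\nimport networkx as nx\n\ndef graph_node_scores(g, variant='degree'):\n    if variant == 'degree':      # GRAPH-DEGREE\n        return dict(g.degree())\n    if variant == 'centrality':  # GRAPH-CENTR\n        return nx.betweenness_centrality(g)\n    if variant == 'clustering':  # GRAPH-CLUST\n        return nx.clustering(g)\n    # GRAPH-CE&CL: sum of the centrality and clustering measures.\n    cen = nx.betweenness_centrality(g)\n    clu = nx.clustering(g)\n    return {n: cen[n] + clu[n] for n in g.nodes()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding-based",
"sec_num": null
},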
{
"text": "Although more sophisticated linguistic processing resources such as POS taggers and dependency parsers are available for at least several languages we did not consider KE techniques that exploit them since the range of languages covered would be still far away from the ca. 70 languages covered by EMM. Furthermore, although the BERT-based approaches to KE (even without any tuning) are known to be orders of magnitudes slower than the other methods, we explored them given the wide range of languages covered in terms of off-the-shelf embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Embedding-based",
"sec_num": null
},
{
"text": "For the evaluation of the KE algorithms we created random samples of circa 50 news articles published in 2020 for 7 languages: English, French, German, Italian, Polish, Romanian and Spanish. The selection of the languages was motivated to cover all three main Indo-European language families: Germanic, Romance and Slavic languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "The news articles were annotated with keywords by two human experts for each language in the following manner. Initially, all annotators were presented with the task definition, keyword selection guidelines, and annotated a couple of trial articles. Next, the annotators were tasked to select keywords for the proper set of 50 news articles for each language. The annotation was done by each annotator separately since we were interested to measure the discrepancies between annotators and differences between the languages. The final sets of documents used for evaluation for some of the languages contained less than 50 news articles due to some near duplicates encountered, etc. Table 1 shows the differences in terms of keyword annotation distribution across languages. The average number of keywords per article varies from 8.68 for French to 13.20 for German. At the token level, the average ranges from 20.66 annotated tokens (French) per article to 30.24 (Romanian). The discrepancies between annotators differ significantly across languages, e.g., for Polish, only 9.37% of the keywords are shared between the two annotators, whereas for Romanian, they are 48.68%. However, when one measures the differences at the token level the discrepancies are significantly smaller, i.e., for Polish, 49.67% of the tokens are shared between the annotators, whereas for Romanian, 69.16%. This comparison between annotators is completed by computing the percentage of \"fuzzy\" common tokens (Table 1) , corresponding to the common 4-gram characters. As expected, the percentage of \"fuzzy\" common tokens is higher than for exact common tokens for all languages. It increases by ca. 2 points for English, French, Italian, Spanish and more than 4 points for German, Polish and Romanian.",
"cite_spans": [],
"ref_spans": [
{
"start": 682,
"end": 689,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1486,
"end": 1495,
"text": "(Table 1)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "Based on the relatively high level of discrepancies between each pair of annotators per language (see Table 1 ) we decided to create the ground truth for evaluation by merging the respective keyword sets for each languages. The statistics of the resulting ground truth data are summarized in Table 2 . We can observe that the average number of keywords per article for Italian and French is significantly lower than for the other languages. The average number of tokens per keyword is quite stable, from 2.33 (Spanish) to 2.79 (English), except for German, 1.75 tokens per keyword, due to the frequent use of compounds in this language.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 109,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 292,
"end": 299,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "We have used the classical precision (P ), recall (R) and F 1 metrics for the evaluation purposes. The overall P , R and F 1 scores were computed as an average over the respective scores for single news articles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "4.2"
},
{
"text": "We have computed the scores in three different ways. In the exact matching mode, we consider that an extracted keyword is matched correctly only if exactly the same keyword occurs in the ground truth (or vice versa).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "4.2"
},
{
"text": "In the partial matching mode, the match of a given keyword c vis-a-vis Ground Truth GT = {k 1 , . . . , k n } is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "4.2"
},
{
"text": "match(c) = max k\u2208GT 2 \u2022 commonT okens(c, k) |c| T + |k| T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "4.2"
},
{
"text": "where commonT okens(c, k) denotes the number of tokens that appear both in c and k, and |c| T (|k| T ) denote the number of tokens the keyword c (k) consists of. The value of match(c) is between 0 and 1. Analogously, in the fuzzy matching mode, the match of a given keyword c vis-a-vis Ground Truth GT = {k 1 , . . . , k n } is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "4.2"
},
{
"text": "match(c) = max k\u2208GT Similarity(c, k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "4.2"
},
{
"text": "where Similarity(c, k) is computed using Longest Common Substring similarity metric (Bergroth et al., 2000) , whose value is between 0 and 1. Both P and R are computed analogously using the concept of partial and fuzzy matching. The main rationale behind using the partial and fuzzy matching mode was the fact that exact matching is simply too strict in terms of penalisation of automatically extracted keywords which do have strong overlap with keywords in the ground truth.",
"cite_spans": [
{
"start": 84,
"end": 107,
"text": "(Bergroth et al., 2000)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "4.2"
},
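{
"text": "The partial matching score can be sketched as follows; whitespace tokenisation is assumed, and the fuzzy variant would simply swap in a longest-common-substring similarity for the token-overlap ratio:\n\ndef partial_match(candidate, ground_truth):\n    # Best token-overlap ratio between the candidate and any ground-truth keyword.\n    c = candidate.split()\n    best = 0.0\n    for k in (kw.split() for kw in ground_truth):\n        common = sum(min(c.count(t), k.count(t)) for t in set(c))\n        best = max(best, 2.0 * common / (len(c) + len(k)))\n    return best  # value in [0, 1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "4.2"
},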
{
"text": "Finally, we have also computed standard deviation (SD) for all metrics in order to observe whether any of the algorithms is prone to producing response outliers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Methodology",
"sec_num": "4.2"
},
{
"text": "We have evaluated all the algorithms described in Section 3 with the following settings, unless specified elsewhere differently: (a) the max. number of tokens per keyword is 3, whereas the minimum (maximum) number of characters is set to 2 (80), (b) keywords can neither start nor end with a stop word, (c) keywords cannot contain tokens composed only of non-alphanumeric characters, and (d) the default maximum number of keywords to return is 15. The main drive behind setting the maximum number of keywords to 15 is based on empirical observation, optimizing both F 1 score and not returning too long list of keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "The overall performance of each algorithm averaged across languages, in term of P , R and F 1 scores is listed in Table 3 , respectively for exact, partial and fuzzy matching. In general, only the results for the best settings per algorithm type are provided except for YAKE and KPMINER, which performed overall best. More specifically, the table contains results of some additional variants of YAKE and its combinations with KPMiner, namely: (a) YAKE-15 and YAKE-20 which return 15 and 20 keywords resp., (b) YAKE-KPMINER-I (intersection) which returns the intersection of the results returned by YAKE-15 and KP-Miner, (c) YAKE-KPMINER-U (union) which merges up to 10 top keywords returned by YAKE and KP-Miner output, and (d) YAKE-KPMINER-R (re-ranking) which sums the ranks of the keywords returned by YAKE-15 and KPMINER and selects top 15 keywords after the re-ranking.",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 121,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
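{
"text": "The YAKE-KPMINER-R combination can be sketched as a rank-sum re-ranking; how keywords missing from one of the two lists are handled is not specified above, so assigning them a default worst rank is our assumption:\n\ndef rank_sum_rerank(yake_list, kpminer_list, top_n=15):\n    # Both input lists are ordered best first; rank = position in the list.\n    pool = set(yake_list) | set(kpminer_list)\n    worst = max(len(yake_list), len(kpminer_list))\n    def rank(kw, lst):\n        return lst.index(kw) if kw in lst else worst\n    ranked = sorted(pool, key=lambda kw: rank(kw, yake_list) + rank(kw, kpminer_list))\n    return ranked[:top_n]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},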
{
"text": "Across the three types of matching, the list of algorithms obtaining good results is quite stable (cf. Table 3 ). YAKE-KPMINER-R constantly obtaining the best F 1 , respectively 20.1%, 46.6% and 47.2% for the exact, partial and fuzzy matching, followed or equaled by the YAKE-KPMINER-U. YAKE-KPMINER-I obtained the best precision, respectively 28.5%, 55.9% and 57.2%. In terms of standard deviation (SD), YAKE-KPMINER-I appears to be the most unstable since it is constantly the algorithm with the highest SD, for P , R and F 1 , and for all types of matching.",
"cite_spans": [],
"ref_spans": [
{
"start": 103,
"end": 110,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "As expected, the results obtained with partial and fuzzy matching are better than with exact matching. More interestingly, the fuzzy matching also allows to smooth the discrepancy between languages. Figure 1 highlights for YAKE-KPMINER-R algorithm how some languages like Polish, a highly inflected language, have a poor F 1 for exact matching, but are close to the all-language average for fuzzy matching. Figure 2 aims at comparing the results obtained in each language with a selection of algorithms for the fuzzy matching. The KPMINER algorithm appears to be best suited for the French language, whereas German the group of YAKE algorithms appears to be a better choice. There are some other language specific aspects according to the different algorithms, but less significant. As a matter of fact, the observations on YAKE and KPMINER strengths when applying on texts in specific languages were the main drive to introduce the various variants of combining these KE algorithms.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 207,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 407,
"end": 415,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "One can also conclude from the evaluation figures that YAKE-KP-MINER-R appears to be the best \"all-rounder\" algorithm. In this context it is also important to emphasize that the performance of the various algorithms relies on the quality and coverage of the stop word lists, which are used by almost all algorithms compared here. In particular, the respective algorithms used identical stop word lists, covering: English (583 words), French (464), German (604), Italian (397), Polish (355), Romanian (282), and Spanish (352).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "KEYEMB-based approaches tend to focus only on the most important sentence in the news article. As such, frequently, several 3-grams candidates originating from the same sentence are returned, where most of them are redundant. Interestingly, as regards fuzzy matching KEYEMB-LASER performs better than BERT-based ones despite not being specially trained on similarity tasks, while KEYEMB-BERT-D performs overall best out of the three. It is worth mentioning that this approach is by far the slowest of the reported approaches in terms of time efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "GRAPH-based approaches suffer from a similar focusing bias: they tend to focus on the most important concepts, as such they are always present but so are some variations thereof, e.g. reporting most frequent words within all the different contexts they appear in, therefore generating redundant keywords. Among this family of algorithms, the GRAPH-DEGREE performed best, meaning that a high co-occurrence count is a good indicator of relevance for KE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Embedding and graph-based approaches overfocus on the key concepts of a text. The fact that they are based on an indirect form of counting the most important words, without any further postprocessing, may in part explain why their performance is comparable to TF-IDF, which relies directly on frequency count. An advantage of graphbased approaches compared to embedding-based ones and TF-IDF is that they don't need to be trained in advance on any corpora. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Based on the results presented in the previous Section we carried out some additional experiments in order to explore whether the best performing algorithm, namely, YAKE-KPMINER-R, could be improved. In particular, given that this algorithm combines merging of keywords of two different algorithms, we have added an additional deduplication step. To be more precise, all keyword candidates that are properly included in other keyword candidates are discarded. We evaluated this new variant with different settings as regards the maximum allowed number of keywords returned. While we have not observed significant improvements in terms of the F 1 score when increasing the number of keywords returned by the algorithms described in the previous Section, the evaluation of YAKE-KPMINER-R with deduplication revealed that increasing this parameter yields some gains. Figure 3 and 4 provide P , R and F 1 curves for fuzzy matching according to the maximum number of keywords allowed to be returned for the English and German subcorpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 864,
"end": 872,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Deduplication",
"sec_num": "4.3.1"
},
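{
"text": "The added deduplication step, discarding candidates properly included in other candidates, can be sketched as:\n\ndef drop_included(keywords):\n    # Discard any keyword that is a proper substring of another candidate.\n    return [k for k in keywords\n            if not any(k != other and k in other for other in keywords)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deduplication",
"sec_num": "4.3.1"
},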
{
"text": "One can observe that shifting the maximum number of keywords to ca. 25 results in some improvement for F 1 and R. While these findings pave the way for some future explorations on parameter tuning to improve F 1 figures, one needs to emphasize here that increasing the number of keywords, even if resulting in some small gains in F 1 is not a desired feature from an application point of view, where analysts expect and prefer to 'see less than more'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Deduplication",
"sec_num": "4.3.1"
},
{
"text": "We have carried out a small comparison of the runtime behaviour of the algorithms with respect to the time needed to process a collection of 16983 news articles on Covid-19 in English (84.9 MB of space on disk). The time given in seconds to run KTF, Rake, KPMiner, Yake and some variants thereof are provided in Table 4 . All the aforementioned algorithms have been implemented in Java and optimized in term of efficient data structures used that correspond to the upper bounds of the respective time complexity of these algorithms. Both embedding-and graph-based algorithms explored in our study were implemented in Python, using some existing libraries, and were not optimized for speed. For these reasons, it is not meaningful to report their exact time performance. As before, on a given CPU, embedding-based approaches run an order of magnitude slower than graph based algorithms, which themselves run a magnitude slower than the simpler algorithms, whose performance is reported in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 312,
"end": 319,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 988,
"end": 995,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Time efficiency performance",
"sec_num": "4.4"
},
{
"text": "This paper presented the results of a small comparative study of the performance of some stateof-the-art knowledge-lightweight keyword extraction methods in the context of indexing news articles in various languages with keywords. The best performing method, namely, a combination of Yake and KPMiner algorithms, obtained F 1 score of 20.1%, 46.6% and 47.2% for the exact, partial and fuzzy matching respectively. Since both of these algorithms exploit neither any languagespecific (except stop word lists) nor other external resources like domain-specific corpora, this solution can be easily adapted to the processing of many languages and constitutes a strong baseline for further explorations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Outlook",
"sec_num": "5"
},
{
"text": "The comparison presented in this paper is not exhaustive, other linguistically-lightweight unsupervised approaches could be explored, e.g., the graph-centric approach presented in (Skrlj et al., 2019) , and some post-processing filters to merge redundant keywords going beyond exploiting string similarity metrics, and simultaneously, techniques to improve diversification of the keywords returned.",
"cite_spans": [
{
"start": 180,
"end": 200,
"text": "(Skrlj et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Outlook",
"sec_num": "5"
},
{
"text": "Extending the approaches explored in this study, e.g., through use of part-of-speech-based patterns to filter out implausible keywords (e.g., imposing constraints to include only adjectives and nouns as elements of keywords), use of more elaborated graph-based keyword ranking methods (e.g. Page Rank), integration of semantics (e.g., linking semantic meaning to text sequences through using knowledge bases and semantic networks (Papagiannopoulou and Tsoumakas, 2020; Hasan and Ng, 2014; Kilic and Cetin, 2019; Alami Merrouni et al., 2019) ) would potentially allow to improve the performance. However, these extensions would require significantly more linguistic sophistication, and consequently would be more difficult to port across languages.",
"cite_spans": [
{
"start": 430,
"end": 468,
"text": "(Papagiannopoulou and Tsoumakas, 2020;",
"ref_id": "BIBREF16"
},
{
"start": 469,
"end": 488,
"text": "Hasan and Ng, 2014;",
"ref_id": "BIBREF8"
},
{
"start": 489,
"end": 511,
"text": "Kilic and Cetin, 2019;",
"ref_id": "BIBREF9"
},
{
"start": 512,
"end": 540,
"text": "Alami Merrouni et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Outlook",
"sec_num": "5"
},
{
"text": "For matters related to accessing the ground truth dataset created for the sake of carrying out the evaluation presented in this paper please contact the authors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Outlook",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "We are greatly indebted to Stefano Bucci, Florentina Ciltu, Corrado Mirra, Monica De Paola, Te\u00f3filio Garcia, Camelia Ignat, Jens Linge, Manuel Marker, Ma\u0142gorzata Piskorska, Camille Schaeffer, Jessica Scornavacche and Beatriz Torighelli for helping us with the keyword annotation of news articles in various languages. We are also thankful to Martin Atkinson who contributed to the work presented in this report, and to Charles MacMillan for proofreading the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic keyphrase extraction: a survey and trends",
"authors": [
{
"first": "Zakariae",
"middle": [],
"last": "Alami Merrouni",
"suffix": ""
},
{
"first": "Bouchra",
"middle": [],
"last": "Frikh",
"suffix": ""
},
{
"first": "Brahim",
"middle": [],
"last": "Ouhbi",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Intelligent Information Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zakariae Alami Merrouni, Bouchra Frikh, and Brahim Ouhbi. 2019. Automatic keyphrase extraction: a sur- vey and trends. Journal of Intelligent Information Systems, 54.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "597--610",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe and Holger Schwenk. 2019. Mas- sively multilingual sentence embeddings for zero- shot cross-lingual transfer and beyond. Transac- tions of the Association for Computational Linguis- tics, 7:597-610.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A survey of longest common subsequence algorithms",
"authors": [
{
"first": "Lasse",
"middle": [],
"last": "Bergroth",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hakonen",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Raita",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "39--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lasse Bergroth, H. Hakonen, and T. Raita. 2000. A survey of longest common subsequence algorithms. pages 39-48.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "TopicRank: Graph-based topic ranking for keyphrase extraction",
"authors": [
{
"first": "Adrien",
"middle": [],
"last": "Bougouin",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Boudin",
"suffix": ""
},
{
"first": "B\u00e9atrice",
"middle": [],
"last": "Daille",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 6 th International Joint Conference on NLP",
"volume": "",
"issue": "",
"pages": "543--551",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrien Bougouin, Florian Boudin, and B\u00e9atrice Daille. 2013. TopicRank: Graph-based topic ranking for keyphrase extraction. In Proceedings of the 6 th In- ternational Joint Conference on NLP, pages 543- 551, Nagoya, Japan.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Network analysis: methodological foundations",
"authors": [],
"year": 2005,
"venue": "",
"volume": "3418",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrik Brandes. 2005. Network analysis: methodologi- cal foundations, volume 3418. Springer Science & Business Media.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Yake! keyword extraction from single documents using multiple local features",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Campos",
"suffix": ""
},
{
"first": "V\u00edtor",
"middle": [],
"last": "Mangaravite",
"suffix": ""
},
{
"first": "Arian",
"middle": [],
"last": "Pasquali",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Jorge",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Nunes",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Jatowt",
"suffix": ""
}
],
"year": 2020,
"venue": "Inf. Sci",
"volume": "509",
"issue": "",
"pages": "257--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricardo Campos, V\u00edtor Mangaravite, Arian Pasquali, A. Jorge, C. Nunes, and Adam Jatowt. 2020. Yake! keyword extraction from single documents using multiple local features. Inf. Sci., 509:257-289.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Kpminer: A keyphrase extraction system for english and arabic documents",
"authors": [
{
"first": "Samhaa",
"middle": [
"R"
],
"last": "El-Beltagy",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"A"
],
"last": "Rafea",
"suffix": ""
}
],
"year": 2009,
"venue": "Inf. Syst",
"volume": "34",
"issue": "1",
"pages": "132--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samhaa R. El-Beltagy and Ahmed A. Rafea. 2009. Kp- miner: A keyphrase extraction system for english and arabic documents. Inf. Syst., 34(1):132-144.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Keybert: Minimal keyword extraction with bert",
"authors": [
{
"first": "Maarten",
"middle": [],
"last": "Grootendorst",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.4461265"
]
},
"num": null,
"urls": [],
"raw_text": "Maarten Grootendorst. 2020. Keybert: Minimal key- word extraction with bert.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic keyphrase extraction: A survey of the state of the art",
"authors": [
{
"first": "Kazi",
"middle": [
"Saidul"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52 nd ACL Conference",
"volume": "",
"issue": "",
"pages": "1262--1273",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazi Saidul Hasan and Vincent Ng. 2014. Automatic keyphrase extraction: A survey of the state of the art. In Proceedings of the 52 nd ACL Conference, pages 1262-1273, Baltimore, Maryland. ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A survey on keyword and key phrase extraction with deep learning",
"authors": [
{
"first": "Ozlem",
"middle": [],
"last": "Kilic",
"suffix": ""
},
{
"first": "Ayd\u0131n",
"middle": [],
"last": "Cetin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ozlem Kilic and Ayd\u0131n Cetin. 2019. A survey on key- word and key phrase extraction with deep learning. pages 1-6.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Graph-based keyword extraction for single-document summarization",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Litvak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Last",
"suffix": ""
}
],
"year": 2008,
"venue": "Coling 2008: Proceedings of the workshop Multi-source Multilingual Information Extraction and Summarization",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Litvak and Mark Last. 2008. Graph-based keyword extraction for single-document summariza- tion. In Coling 2008: Proceedings of the work- shop Multi-source Multilingual Information Extrac- tion and Summarization, pages 17-24.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Supervised topical key phrase extraction of news stories using crowdsourcing, light filtering and coreference normalization. Language Resources and Evaluation",
"authors": [
{
"first": "Lu\u00eds",
"middle": [],
"last": "Marujo",
"suffix": ""
},
{
"first": "Anatole",
"middle": [],
"last": "Gershman",
"suffix": ""
},
{
"first": "G",
"middle": [
"Jaime"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "E",
"middle": [
"Robert"
],
"last": "Frederking",
"suffix": ""
},
{
"first": "Paulo Jo\u00e3o",
"middle": [],
"last": "Neto",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "399--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu\u00eds Marujo, Anatole Gershman, G. Jaime Carbonell, E. Robert Frederking, and Paulo Jo\u00e3o Neto. 2012. Supervised topical key phrase extraction of news stories using crowdsourcing, light filtering and co- reference normalization. Language Resources and Evaluation, pages 399-403.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Keyphrase cloud generation of broadcast news. Proceedings of INTERSPEECH",
"authors": [
{
"first": "Lu\u00eds",
"middle": [],
"last": "Marujo",
"suffix": ""
},
{
"first": "M\u00e1rcio",
"middle": [],
"last": "Viveiros",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Neto",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu\u00eds Marujo, M\u00e1rcio Viveiros, and Jo\u00e3o Neto. 2013. Keyphrase cloud generation of broadcast news. Pro- ceedings of INTERSPEECH 2013.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Textrank: Bringing order into text",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bring- ing order into text. In Proceedings of the 2004 con- ference on empirical methods in natural language processing, pages 404-411.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS, pages 3111-3119. Curran Associates, Inc.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Unsupervised learning of sentence embeddings using compositional n-gram features",
"authors": [
{
"first": "Matteo",
"middle": [],
"last": "Pagliardini",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Jaggi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of",
"volume": "",
"issue": "",
"pages": "528--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embed- dings using compositional n-gram features. In Pro- ceedings of NAACL 2018, pages 528-540, New Or- leans, Louisiana. ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A review of keyphrase extraction",
"authors": [
{
"first": "Eirini",
"middle": [],
"last": "Papagiannopoulou",
"suffix": ""
},
{
"first": "Grigorios",
"middle": [],
"last": "Tsoumakas",
"suffix": ""
}
],
"year": 2020,
"venue": "Data Mining and Knowledge Discovery",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eirini Papagiannopoulou and Grigorios Tsoumakas. 2020. A review of keyphrase extraction. Wiley Inter- disciplinary Reviews: Data Mining and Knowledge Discovery, 10.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "On knowledge-poor methods for person name matching and lemmatization for highly inflectional languages",
"authors": [
{
"first": "Jakub",
"middle": [],
"last": "Piskorski",
"suffix": ""
},
{
"first": "Karol",
"middle": [],
"last": "Wieloch",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Sydow",
"suffix": ""
}
],
"year": 2009,
"venue": "Information Retrieval",
"volume": "12",
"issue": "3",
"pages": "275--299",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jakub Piskorski, Karol Wieloch, and Marcin Sydow. 2009. On knowledge-poor methods for person name matching and lemmatization for highly inflectional languages. Information Retrieval, 12(3):275-299.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Making monolingual sentence embeddings multilingual using knowledge distillation",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual us- ing knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natu- ral Language Processing. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic keyword extraction from individual documents",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Rose",
"suffix": ""
},
{
"first": "Dave",
"middle": [],
"last": "Engel",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Cramer",
"suffix": ""
},
{
"first": "Wendy",
"middle": [],
"last": "Cowley",
"suffix": ""
}
],
"year": 2010,
"venue": "Text Mining. Applications and Theory",
"volume": "",
"issue": "",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Rose, Dave Engel, Nick Cramer, and Wendy Cowley. 2010. Automatic keyword extraction from individual documents. In Michael W. Berry and Ja- cob Kogan, editors, Text Mining. Applications and Theory, pages 1-20. John Wiley and Sons, Ltd.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Rakun: Rank-based keyword extraction via unsupervised learning and meta vertex aggregation",
"authors": [
{
"first": "Blaz",
"middle": [],
"last": "Skrlj",
"suffix": ""
},
{
"first": "Andraz",
"middle": [],
"last": "Repar",
"suffix": ""
},
{
"first": "Senja",
"middle": [],
"last": "Pollak",
"suffix": ""
}
],
"year": 2019,
"venue": "SLSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blaz Skrlj, Andraz Repar, and Senja Pollak. 2019. Rakun: Rank-based keyword extraction via unsu- pervised learning and meta vertex aggregation. In SLSP.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "EMM: Supporting the analyst by turning multilingual text into structured data",
"authors": [
{
"first": "Ralf",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Atkinson",
"suffix": ""
},
{
"first": "Teofilo",
"middle": [],
"last": "Garcia",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Van Der Goot",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Linge",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Macmillan",
"suffix": ""
},
{
"first": "Hristo",
"middle": [],
"last": "Tanev",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Verile",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Wagner",
"suffix": ""
}
],
"year": 2017,
"venue": "Transparenz Aus Verantwortung: Neue Herausforderungen F\u00fcr Die Digitale Datenanalyse",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralf Steinberger, Martin Atkinson, Teofilo Garcia, Erik van der Goot, Jens Linge, Charles Macmillan, Hristo Tanev, Marco Verile, and Gerhard Wagner. 2017. EMM: Supporting the analyst by turning multilin- gual text into structured data. In Transparenz Aus Verantwortung: Neue Herausforderungen F\u00fcr Die Digitale Datenanalyse. Erich Schmidt Verlag.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "F 1 scores for exact, partial and fuzzy matching for YAKE-KPMINER-R.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"text": "Exact and fuzzy overlap of keywords and tokens for annotator pairs for each language.",
"num": null,
"type_str": "table",
"content": "<table><tr><td>Language</td><td colspan=\"3\">#articles avg. nb of avg. nb of</td></tr><tr><td/><td/><td colspan=\"2\">keywords tokens per</td></tr><tr><td/><td/><td>per article</td><td>keyword</td></tr><tr><td>English</td><td>50</td><td>22.04</td><td>2.79</td></tr><tr><td>French</td><td>47</td><td>14.34</td><td>2.70</td></tr><tr><td>German</td><td>50</td><td>21.36</td><td>1.75</td></tr><tr><td>Italian</td><td>50</td><td>16.16</td><td>2.34</td></tr><tr><td>Polish</td><td>39</td><td>21.18</td><td>2.67</td></tr><tr><td>Romanian</td><td>49</td><td>20.61</td><td>2.62</td></tr><tr><td>Spanish</td><td>48</td><td>22.75</td><td>2.33</td></tr></table>",
"html": null
},
"TABREF2": {
"text": "",
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null
},
"TABREF4": {
"text": "Time efficiency comparison on a set of circa 17K news articles in English on Covid-19.",
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null
}
}
}
}