{ "paper_id": "S10-1033", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:28:07.725103Z" }, "title": "SJTULTLAB: Chunk Based Method for Keyphrase Extraction", "authors": [ { "first": "Letian", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Jiao Tong University", "location": { "settlement": "Shanghai", "country": "China" } }, "email": "" }, { "first": "Fang", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Jiao Tong University", "location": { "settlement": "Shanghai", "country": "China" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we present a chunk-based keyphrase extraction method for scientific articles. Unlike most previous systems, ours does not use supervised machine learning algorithms. Instead, document structure information is used to remove unimportant content; chunk extraction and filtering are used to reduce the number of candidates; and keywords are used to filter the candidates before generating the final keyphrases. Our experimental results on test data show that the method works better than the baseline systems and is comparable with other known algorithms.", "pdf_parse": { "paper_id": "S10-1033", "_pdf_hash": "", "abstract": [ { "text": "In this paper we present a chunk-based keyphrase extraction method for scientific articles. Unlike most previous systems, ours does not use supervised machine learning algorithms. Instead, document structure information is used to remove unimportant content; chunk extraction and filtering are used to reduce the number of candidates; and keywords are used to filter the candidates before generating the final keyphrases. 
Our experimental results on test data show that the method works better than the baseline systems and is comparable with other known algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Keyphrases are sequences of words which capture the main topics discussed in a document. Keyphrases are very useful in many natural language processing (NLP) applications such as document summarization, classification and clustering. But it is an expensive and time-consuming job for users to tag the keyphrases of a document. These needs motivate methods for automatic keyphrase extraction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most existing algorithms for keyphrase extraction treat this task as a supervised classification task. The KEA algorithm identifies candidate keyphrases using lexical methods, calculates feature values for each candidate, and uses a machine-learning algorithm to predict which candidates are good keyphrases. A domain-specific method was proposed based on the Naive Bayes learning scheme. Turney (Turney, 2000) treated a document as a set of phrases, which the learning algorithm must learn to classify as positive or negative examples of keyphrases. Turney (Turney, 2003) also presented enhancements to the KEA keyphrase extraction algorithm that are designed to increase the coherence of the extracted keyphrases. Nguyen and Kan (Nguyen and Kan, 2007) presented a keyphrase extraction algorithm for scientific publications. They also introduced two features that capture the positions of phrases and salient morphological phenomena. Wu and Agogino (Wu and Agogino, 2004) proposed an automated keyphrase extraction algorithm using a non-dominated sorting multi-objective genetic algorithm. 
Kumar and Srinathan (Kumar and Srinathan, 2008) used an n-gram filtration technique and word weights for keyphrase extraction from scientific articles.", "cite_spans": [ { "start": 396, "end": 410, "text": "(Turney, 2000)", "ref_id": "BIBREF7" }, { "start": 558, "end": 572, "text": "(Turney, 2003)", "ref_id": "BIBREF8" }, { "start": 751, "end": 761, "text": "Kan, 2007)", "ref_id": null }, { "start": 943, "end": 980, "text": "Wu and Agogino (Wu and Agogino, 2004)", "ref_id": "BIBREF9" }, { "start": 1097, "end": 1144, "text": "Kumar and Srinathan (Kumar and Srinathan, 2008)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For this evaluation task, Kim and Kan (Kim and Kan, 2009) tackled two major issues in automatic keyphrase extraction from scientific articles: candidate selection and feature engineering. They also re-examined the existing features broadly used in the supervised approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different from previous systems, our system uses a chunk-based method to extract keyphrases from scientific articles. Domain-specific information is used to identify the useful parts of a document. The chunk-based method is used to extract keyphrase candidates from a document. Keywords of a document are used to select keyphrases from the candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the following, Section 2 describes the architecture of the system. Section 3 introduces the function and implementation of each part of the system. Experimental results are shown in Section 4. The conclusion is given in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 System Architecture Figure 1 shows the architecture of our system. 
The system accepts a document as input (follow the arrows with solid lines), then preprocesses it and identifies the structure of the document. After these two steps, the formatted document is sent to the candidate selection module (Figure 1: System architecture), which first extracts chunks from the document and then uses some rules to filter the extracted chunks. After candidate selection, the system chooses the top fifteen chunks (ordered by the position of their first occurrence in the original document) from the candidates as the keyphrases and outputs the result (\"Output1\" in Figure 1), which is our submitted result. The candidates are also sent to the keyphrase selection module, which first extracts keywords from the formatted document and then uses the keywords to choose keyphrases from the candidates. Keyword extraction needs training data (follow the arrows with dotted lines), which also goes through the first two steps of our system. The result of the keyphrase selection module is sent to \"Output2\" as the final result after choosing the top fifteen chunks. OpenNLP (http://opennlp.sourceforge.net/) and KEA (http://nzdl.org/Kea/) are used for chunk extraction and keyword extraction respectively.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 30, "text": "Figure 1", "ref_id": null }, { "start": 312, "end": 320, "text": "Figure 1", "ref_id": null }, { "start": 661, "end": 669, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In preprocessing, our system first deletes the line breaks between broken lines to reconnect broken sentences, while line breaks after the title and section titles are preserved. The title and section titles are recognized through heuristic rules: the title occupies the first few lines of a document, and section titles other than abstract and references start with numbers. 
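These line-break and section-title heuristics can be sketched as follows (a minimal illustration assuming plain-text input; the function names and the simple digit check are our own, not the paper's actual implementation):

```python
# Hypothetical sketch of the preprocessing heuristics described above.

def looks_like_section_title(line):
    # Heuristic from the paper: section titles start with a number,
    # except for the abstract and the references.
    lowered = line.strip().lower()
    if lowered.startswith('abstract') or lowered.startswith('reference'):
        return True
    return line.strip()[:1].isdigit()

def rejoin_broken_lines(raw_text):
    # Reconnect sentences broken by hard line wraps, while keeping the
    # line break after the title and after each section title.
    out = []
    prev_was_title = True  # treat the document start like a fresh block
    for line in raw_text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        if looks_like_section_title(stripped) or prev_was_title or not out:
            out.append(stripped)
        else:
            out[-1] = out[-1] + ' ' + stripped
        prev_was_title = looks_like_section_title(stripped)
    return out
```

The digit check is deliberately crude; it mirrors the stated heuristic rather than a robust title detector.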
The system then deletes bracketed blocks in the document to make sure that no keyphrase is split by brackets (e.g., the brackets in \"natural language processing (NLP) applications\" could be an obstacle to extracting the phrase \"natural language processing applications\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "Scientific articles often have similar structures, which start with a title and abstract and end with a conclusion and references. Our system uses this structure information to remove unimportant content from the input document. Based on the analysis of training documents, we assume that each article can be divided into several parts: Title, Abstract, Introduction, Related Work, Content, Experiment, Conclusion, Acknowledgement and Reference, where Content often contains the description of theories, methods or algorithms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Structure Identification", "sec_num": "3.2" }, { "text": "To implement the identification of document structure, our system first maps each section title (including the document title) to one of the parts in the document structure with some rules derived from the analysis of training documents. For each part except Content, we have a pattern to match the section titles. For example, the section title of Abstract should be equal to \"abstract\", the section title of Introduction should contain \"introduction\", the section title of Related Work should contain \"related work\" or \"background\", the section title of Experiment should contain \"experiment\", \"result\" or \"evaluation\", and the section title of Conclusion should contain \"conclusion\" or \"discussion\". Section titles which do not match any of the patterns will be mapped to the Content part. 
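These mapping rules can be sketched as follows (a minimal illustration; the keyword lists follow the patterns above, while the function itself is our own, not the original implementation):

```python
# Hypothetical sketch of the section-title-to-part mapping described above.

PART_KEYWORDS = [
    ('Introduction', ['introduction']),
    ('Related Work', ['related work', 'background']),
    ('Experiment', ['experiment', 'result', 'evaluation']),
    ('Conclusion', ['conclusion', 'discussion']),
]

def map_section_title(title):
    lowered = title.lower().strip()
    if lowered == 'abstract':      # Abstract must match exactly
        return 'Abstract'
    for part, keywords in PART_KEYWORDS:
        if any(k in lowered for k in keywords):
            return part
    return 'Content'               # unmatched titles fall through to Content
```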
After mapping section titles, the content between two section titles is mapped to the same part as the first section title (e.g., the content between the section titles \"1. Introduction\" and \"2. Related Work\" is mapped to the Introduction part).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Document Structure Identification", "sec_num": "3.2" }, { "text": "In our keyphrase analysis, we observed that most keyphrases appear in the first few parts of a document, such as Title, Abstract, and Introduction. We also found that parts like Experiment, Acknowledgement and Reference contain almost no keyphrases. Thus, Experiment, Acknowledgement and Reference are removed by our system, and the other parts are sorted in their original order and output as a formatted document (see Figure 1) for further processing.", "cite_spans": [], "ref_spans": [ { "start": 419, "end": 428, "text": "Figure 1)", "ref_id": null } ], "eq_spans": [], "section": "Document Structure Identification", "sec_num": "3.2" }, { "text": "The purpose of candidate selection is to find potential keyphrases in a document. Traditional approaches simply choose all possible word sequences and filter them with part-of-speech tags. This approach may result in a huge number of candidates, many of them meaningless, for each document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Selection", "sec_num": "3.3" }, { "text": "Our system uses a chunk-based method to solve these problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Selection", "sec_num": "3.3" }, { "text": "\"A chunk is a textual unit of adjacent word tokens which can be mutually linked through unambiguously identified dependency chains with no recourse to idiosyncratic lexical information.\" 3 Our approach significantly reduces the number of candidates and keeps the meaning of the original documents. 
For example, for the article title \"Evaluating adaptive resource management for distributed real-time embedded systems\", the traditional method will extract many meaningless candidates like \"adaptive resource\" and \"distributed real-time\", while our method extracts only \"adaptive resource management\" and \"distributed real-time embedded systems\" as candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Candidate Selection", "sec_num": "3.3" }, { "text": "The first step of candidate selection is chunk extraction, which extracts chunks from a document. Four tools in OpenNLP, SentenceDetector, Tokenizer, PosTagger and TreebankChunker, are utilized in our system. The system first invokes SentenceDetector to split the formatted document into sentences. It then uses Tokenizer and PosTagger to label all the words with part-of-speech tags. Finally, TreebankChunker is used to extract chunks from the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunk Extraction", "sec_num": "3.3.1" }, { "text": "Not all extracted chunks can be keyphrase candidates. Our system uses some heuristic rules to select candidates from the extracted chunks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunk filtering", "sec_num": "3.3.2" }, { "text": "3 http://www.ilc.cnr.it/sparkle/wp1-prefinal/node24.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunk filtering", "sec_num": "3.3.2" }, { "text": "The types of rules range from statistical information to syntactic structure. The rules that our system uses are based on some traditional methods for candidate filtering. They are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunk filtering", "sec_num": "3.3.2" }, { "text": "1. Any chunk in the candidates should have fewer than 5 words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunk filtering", "sec_num": "3.3.2" }, { "text": "2. 
Any single-word chunk in the candidates should occur at least twice in the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunk filtering", "sec_num": "3.3.2" }, { "text": "3. Any chunk in the candidates should be a noun phrase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunk filtering", "sec_num": "3.3.2" }, { "text": "4. Any chunk in the candidates must start with a word whose part-of-speech tag (as defined in OpenNLP) is NN, NNS, NNP, NNPS, JJ, JJR or JJS and end with a word whose part-of-speech tag is NN, NNS, NNP or NNPS. Chunks that do not match these rules are removed. The remaining chunks are the candidate keyphrases of the document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunk filtering", "sec_num": "3.3.2" }, { "text": "Our analysis shows that keywords are helpful for extracting keyphrases from a document. Thus, keywords are used to select keyphrases from the candidate chunks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Keyphrase Selection", "sec_num": "3.4" }, { "text": "KEA is a keyphrase extraction tool; with appropriate parameters it can also be used to extract keywords. We observed that most keyphrases extracted by KEA contain only one or two words that describe the key meaning of the document, even when the maximum length is set to 5 or more. There are four parameters to set; to get the best results, we set the maximum length of a keyphrase to 2, the minimum length of a keyphrase to 1, the minimum occurrence of a phrase to 1, and the number of keyphrases to extract to 30. The output of the KEA system then contains thirty keywords per document. As shown in Figure 1, KEA needs training data (provided by the task owner). 
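Before moving on, the four chunk-filtering rules of Section 3.3.2 can be summarized in code (our own sketch; the representation of a chunk as (word, tag) pairs, the chunk-type string, and the document-frequency argument are assumptions, not the original implementation):

```python
# Hypothetical sketch of the candidate-filtering rules in Section 3.3.2.
# A chunk is a list of (word, pos_tag) pairs; chunk_type is the chunker
# label ('NP' for noun phrase) and doc_frequency is a simple corpus count.

HEAD_TAGS = {'NN', 'NNS', 'NNP', 'NNPS'}
START_TAGS = HEAD_TAGS | {'JJ', 'JJR', 'JJS'}

def is_candidate(chunk, chunk_type, doc_frequency):
    tags = [tag for _, tag in chunk]
    if len(chunk) >= 5:                        # rule 1: fewer than 5 words
        return False
    if len(chunk) == 1 and doc_frequency < 2:  # rule 2: single words occur twice
        return False
    if chunk_type != 'NP':                     # rule 3: noun phrases only
        return False
    # rule 4: start with NN*/JJ* and end with NN*
    return tags[0] in START_TAGS and tags[-1] in HEAD_TAGS
```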
Our system uses the formatted training documents (generated by the first two steps of our system) as the training input to KEA.", "cite_spans": [], "ref_spans": [ { "start": 601, "end": 609, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Keywords Extraction", "sec_num": "3.4.1" }, { "text": "After extracting thirty keywords from each document, our system uses these keywords to filter out non-keyphrase chunks from the candidates. The system completes the task in two steps: 1) Remove candidates of a document that do not contain any of the document's keywords extracted by KEA; 2) Choose the top fifteen keyphrases (ordered by the position of their first occurrence in the original document) as the answer for a document (\"Output2\" in Figure 1). Table 1 shows the F-scores of the two outputs of our system and of some baseline systems. The first three methods are the baselines provided by the task owner. TFIDF is an unsupervised method that ranks the candidates based on TFIDF scores. NB and ME are supervised methods using Naive Bayes and maximum entropy in WEKA (http://www.cs.waikato.ac.nz/ml/weka/). KEA refers to the KEA system with the parameters that output the best results. OP1 is our system with \"Output1\" as the result and OP2 is our system with \"Output2\" as the result (see Figure 1). 
In the second column, \"R\" means the reader-assigned keyphrase set is used as the gold standard, and \"C\" means both the author-assigned and reader-assigned keyphrase sets are used as answers.", "cite_spans": [], "ref_spans": [ { "start": 437, "end": 445, "text": "Figure 1", "ref_id": null }, { "start": 449, "end": 456, "text": "Table 1", "ref_id": null }, { "start": 948, "end": 957, "text": "Figure 1)", "ref_id": null } ], "eq_spans": [], "section": "Chunk Selection", "sec_num": "3.4.2" }, { "text": "From the table, we can see that both outputs of our system improved over the baseline systems and obtained better results than the well-known KEA system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method by Top05", "sec_num": null }, { "text": "We submitted both results of OP1 and OP2 to the evaluation task. Because of some misunderstanding over the result upload system, only the", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method by Top05", "sec_num": null }, { "text": "We proposed a chunk-based method for keyphrase extraction in this paper. In our system, the document structure information of scientific articles is used to select significant content, chunk-based candidate selection is used to reduce the number of candidates while preserving their original meaning, and keywords are used to select keyphrases from a document. 
All these factors contribute to the result of our system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "result of OP1 (in bold) was successfully submitted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Domain-specific keyphrase extraction", "authors": [ { "first": "Eibe", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Gordon", "middle": [ "W" ], "last": "Paynter", "suffix": "" }, { "first": "Ian", "middle": [ "H" ], "last": "Witten", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Gutwin", "suffix": "" }, { "first": "Craig", "middle": [ "G" ], "last": "", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eibe Frank, Gordon W. Paynter, Ian H. Witten, Carl Gutwin, and Craig G. Nevill-Manning. 1999.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Domain-specific keyphrase extraction", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Domain-specific keyphrase extraction. pages 668-", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Kea: Practical automatic keyphrase extraction", "authors": [ { "first": "Gordon", "middle": [ "W" ], "last": "Ian Witten Gordon", "suffix": "" }, { "first": "Eibe", "middle": [], "last": "Paynter", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Craig", "middle": [ "G" ], "last": "Gutwin", "suffix": "" } ], "year": 1999, "venue": "Proceedings of Digital Libraries 99 (DL'99", "volume": "", "issue": "", "pages": "254--255", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian H. Witten, Gordon W. Paynter, Eibe Frank, Carl Gutwin, and Craig G. Nevill-Manning. 1999. 
Kea: Practical automatic keyphrase extraction. In Proceedings of Digital Libraries 99 (DL'99), pages 254-255. ACM Press.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Re-examining automatic keyphrase extraction approaches in scientific articles", "authors": [ { "first": "Nam", "middle": [], "last": "Su", "suffix": "" }, { "first": "Min-Yen", "middle": [], "last": "Kim", "suffix": "" }, { "first": "", "middle": [], "last": "Kan", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Workshop on Multiword Expressions: Identification, Interpretation, Disambiguation and Applications", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Su Nam Kim and Min-Yen Kan. 2009. Re-examining automatic keyphrase extraction approaches in scientific articles. In Proceedings of the Workshop on Multiword Expressions: Identification, Interpretation, Disambiguation and Applications, pages 9-16, Singapore, August. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic keyphrase extraction from scientific documents using n-gram filtration technique", "authors": [ { "first": "Niraj", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Kannan", "middle": [], "last": "Srinathan", "suffix": "" } ], "year": 2008, "venue": "DocEng '08: Proceeding of the eighth ACM symposium on Document engineering", "volume": "", "issue": "", "pages": "199--208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Niraj Kumar and Kannan Srinathan. 2008. Automatic keyphrase extraction from scientific documents using n-gram filtration technique. In DocEng '08: Proceeding of the eighth ACM symposium on Document engineering, pages 199-208, New York, NY, USA. 
ACM.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Thuy Dung Nguyen and Min yen Kan", "authors": [], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thuy Dung Nguyen and Min-Yen Kan. 2007.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Keyphrase extraction in scientific publications", "authors": [], "year": null, "venue": "Proc. of International Conference on Asian Digital Libraries (ICADL 07", "volume": "", "issue": "", "pages": "317--326", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keyphrase extraction in scientific publications. In Proc. of International Conference on Asian Digital Libraries (ICADL 07), pages 317-326. Springer.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning algorithms for keyphrase extraction", "authors": [ { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2000, "venue": "Information Retrieval", "volume": "2", "issue": "", "pages": "303--336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Turney. 2000. Learning algorithms for keyphrase extraction. Information Retrieval, 2:303-336.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Coherent keyphrase extraction via web mining", "authors": [ { "first": "Peter", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2003, "venue": "Proceedings of IJCAI", "volume": "", "issue": "", "pages": "434--439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Turney. 2003. Coherent keyphrase extraction via web mining. 
In Proceedings of IJCAI, pages 434-439.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Automating keyphrase extraction with multi-objective genetic algorithms", "authors": [ { "first": "Jia-Long", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Alice", "middle": [ "M" ], "last": "Agogino", "suffix": "" } ], "year": 2004, "venue": "HICSS '04: Proceedings of the Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS'04) -Track 4", "volume": "", "issue": "", "pages": "40104--40107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jia-Long Wu and Alice M. Agogino. 2004. Automating keyphrase extraction with multi-objective genetic algorithms. In HICSS '04: Proceedings of the Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS'04) - Track 4, page 40104.3, Washington, DC, USA. IEEE Computer Society.", "links": null } }, "ref_entries": {} } }