{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:12:23.080495Z" }, "title": "NLP Tools for Predictive Maintenance Records in MaintNet", "authors": [ { "first": "Farhad", "middle": [], "last": "Akhbardeh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rochester Institute of Technology", "location": { "country": "United States" } }, "email": "" }, { "first": "Travis", "middle": [], "last": "Desell", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rochester Institute of Technology", "location": { "country": "United States" } }, "email": "" }, { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rochester Institute of Technology", "location": { "country": "United States" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Processing maintenance logbook records is an important step in the development of predictive maintenance systems. Logbooks often include free text fields with domain specific terms, abbreviations, and non-standard spelling posing challenges to off-the-shelf NLP pipelines trained on standard contemporary corpora. Despite the importance of this data type, processing predictive maintenance data is still an under-explored topic in NLP. With the goal of providing more datasets and resources to the community, in this paper we present a number of new resources available in MaintNet, a collaborative open-source library and data repository of predictive maintenance language datasets. We describe novel annotated datasets from multiple domains such as aviation, automotive, and facility maintenance domains and new tools for segmentation, spell checking, POS tagging, clustering, and classification.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Processing maintenance logbook records is an important step in the development of predictive maintenance systems. Logbooks often include free text fields with domain specific terms, abbreviations, and non-standard spelling posing challenges to off-the-shelf NLP pipelines trained on standard contemporary corpora. Despite the importance of this data type, processing predictive maintenance data is still an under-explored topic in NLP. With the goal of providing more datasets and resources to the community, in this paper we present a number of new resources available in MaintNet, a collaborative open-source library and data repository of predictive maintenance language datasets. We describe novel annotated datasets from multiple domains such as aviation, automotive, and facility maintenance domains and new tools for segmentation, spell checking, POS tagging, clustering, and classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Engineering systems are generating ever increasing amounts of maintenance records often recorded in the form of event logbooks. The analysis of these records are aimed to improve predictive maintenance systems reducing maintenance costs, helping to prevent accidents, and saving lives (Jarry et al., 2018) . Predictive maintenance records are collected in multiple domains such as aviation, healthcare, and transportation (Tanguy et al., 2016; Altuncu et al., 2018) . 
In this paper, we present new datasets in the aviation and automotive domains, listed in Table 2 .", "cite_spans": [ { "start": 285, "end": 305, "text": "(Jarry et al., 2018)", "ref_id": "BIBREF9" }, { "start": 422, "end": 443, "text": "(Tanguy et al., 2016;", "ref_id": "BIBREF16" }, { "start": 444, "end": 465, "text": "Altuncu et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 556, "end": 563, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Maintenance record datasets generally contain free text fields describing issues and actions, as in the instances presented in Table 1 . Most standard NLP pipelines for pre-processing and annotation are trained on standard contemporary corpora (e.g., newspaper texts, novels), failing to address most of the domain-specific terminology, abbreviations, and non-standard spelling present in maintenance records. To help support research in this area, the MaintNet 1 platform, a collaborative open-source library and data repository for predictive maintenance data, has been developed (Akhbardeh et al., 2020) . In this paper, we present an evaluation of the tools available at MaintNet, as well as two new datasets included in the platform.", "cite_spans": [ { "start": 580, "end": 604, "text": "(Akhbardeh et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 127, "end": 134, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of this paper are the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. The creation of novel language resources (e.g., abbreviation lists, datasets, and termbanks) for technical language and predictive maintenance data in the aviation, automotive, and facility management domains. We present two new datasets with aviation and automotive safety records that have recently been collected, annotated, and made available at MaintNet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. The creation and development of manually curated gold standards that can be used to evaluate the performance of POS tagging and clustering/classification on technical logbook data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. The development and evaluation of a number of Python (pre-)processing tools available at MaintNet, including stop word removal, stemming, lemmatization, POS tagging, and clustering. We evaluate MaintNet's spell checkers and POS taggers, comparing them to off-the-shelf NLP packages such as NLTK (Bird et al., 2009) and Stanford CoreNLP (Manning et al., 2014) , as well as several clustering methods. 
", "cite_spans": [ { "start": 312, "end": 331, "text": "(Bird et al., 2009)", "ref_id": "BIBREF3" }, { "start": 354, "end": 376, "text": "(Manning et al., 2014)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Research in predictive maintenance systems requires large, cleansed, and often annotated logbook data gathered in domains such as web information extraction, system maintenance (e.g., aviation, wind turbines, automobiles), and healthcare (e.g.electronic health records).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In the domain of healthcare, Altuncu et al. (2018) analyzed health records of patient incidents provided by the UK National Health Service using a deep neural network with word embedding. Tixier et al. (2016) developed a system to analyze injury reports applying POS tagging and term frequency to extract keywords about injuries creating a dictionary of events to improve future safety management. Savova et al. (2010) applied off-theshelf NLTK libraries on free-text electronic medical records for information extraction purposes.", "cite_spans": [ { "start": 188, "end": 208, "text": "Tixier et al. (2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In technical domains such as aviation, where MaintNet provides a primary resource, Tanguy et al. (2016) studied various available NLP techniques such as topic modeling to process aviation incident reports and extract useful information. They used standard NLP libraries to pre-process the data and then applied the Talismane NLP toolkit (Urieli, 2013) for incident feature extraction and training.", "cite_spans": [ { "start": 83, "end": 103, "text": "Tanguy et al. (2016)", "ref_id": "BIBREF16" }, { "start": 337, "end": 351, "text": "(Urieli, 2013)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "As to the problem of non-standard spelling, Sikl\u00f3si et al. (2013) , proposed a method of correcting misspelled words in clinical records by mapping spelling errors to a large database of correction candidates. However, due to the large number of abbreviations in medical records, they were limited to specific terms and the normalization had to be performed separately. de Amorim and Zampieri (2013) proposed a dictionary-based spell correction algorithm using a clustering technique by comparing various distance metrics to aim to lower the number of distance calculations while finding or matching target words for misspellings. With this in mind, in MaintNet we provide users with tools developed to deal with domain-specific misspellings and abbreviations.", "cite_spans": [ { "start": 44, "end": 65, "text": "Sikl\u00f3si et al. (2013)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In the next sub-sections we present the tools in resources available in MaintNet divided into language resources, pre-processing, and clustering, In addition to that, MaintNet provides various dynamic webpages for users to communicate with each other and with the project developers which work similarly to a forum or message board. 
We hope that MaintNet's community participation features will further facilitate discussion and research in this under-explored domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "MaintNet Features", "sec_num": "3" }, { "text": "MaintNet currently features seven English datasets from the aviation, automotive, and facility maintenance domains, which are presented in Table 2 . This paper introduces two new datasets with aviation and automotive safety records, shown in bold. These datasets were collected from the USA Federal Aviation Administration and Open Data DC, respectively. The list of fields and data types in each dataset is presented in Table 3 . In Figure 1 we present a screenshot of one of MaintNet's datasets, the Avi-Main dataset, which can be accessed and searched through the platform. Predictive maintenance datasets are particularly hard to obtain due to the sensitive information they contain. Therefore, we work closely with the data providers to ensure that all confidential and sensitive information in all datasets remains anonymous. As a collaborative platform, MaintNet will be expanded through collaboration with interested members of the NLP community. MaintNet further provides the user with domain-specific abbreviation dictionaries, morphosyntactic annotation, and term banks validated by domain experts. The morphosyntactic annotation contains the POS tag, compound, lemma, and word stem. Finally, the domain term banks contain a list of terms that are used in each domain along with a sample of usage extracted from the corpus.", "cite_spans": [], "ref_spans": [ { "start": 139, "end": 146, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 413, "end": 420, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 426, "end": 434, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Language Resources", "sec_num": "3.1" }, { "text": "One of the bottlenecks of automatically processing logbooks for predictive maintenance systems is that most of these datasets are not annotated with the reason for maintenance or a categorization of the issue type. To address this issue, we implemented several pre-processing steps to clean the logbooks and extract as much information from them as possible. The pipeline is shown in Figure 2 . The Python scripts for all components in this pipeline are made available through MaintNet.", "cite_spans": [], "ref_spans": [ { "start": 374, "end": 382, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Pre-processing and Tools", "sec_num": "3.2" }, { "text": "The process starts with text normalization, including lowercasing, stop word and punctuation removal, and treating special characters with NLTK's (Bird et al., 2009) regular expression support, followed by tokenization (NLTK tokenizer), stemming (Snowball Stemmer), and lemmatization with WordNet (Miller, 1992) . Using the collected morphosyntactic information, POS annotation is carried out with the NLTK POS tagger. Term frequency-inverse document frequency (TF-IDF) is obtained using the gensim tfidf model (Rehurek and Sojka, 2010) . Our analysis of the logbooks found that many of the misspellings and abbreviations lead to incorrect or failed dictionary lookups. 
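As a concrete illustration of the normalization and annotation steps just described, below is a minimal sketch using NLTK and gensim. The sample records, the regular expression, and the preprocess helper are illustrative stand-ins rather than MaintNet's actual scripts, and the referenced NLTK resources must be downloaded first.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import SnowballStemmer, WordNetLemmatizer
from gensim import corpora, models

# Requires: nltk.download("punkt"), nltk.download("stopwords"),
# nltk.download("wordnet"), nltk.download("averaged_perceptron_tagger")

records = [
    "HYD LEAK AT L/H WING",          # hypothetical logbook entries
    "REPLACED O-RING ON HYD PUMP",
]

stop_words = set(stopwords.words("english"))
stemmer = SnowballStemmer("english")
lemmatizer = WordNetLemmatizer()

def preprocess(text):
    """Normalize, tokenize, stem, lemmatize, and POS-tag one record."""
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())    # lowercase, strip special chars
    tokens = [t for t in word_tokenize(text) if t not in stop_words]
    stems = [stemmer.stem(t) for t in tokens]
    lemmas = [lemmatizer.lemmatize(t) for t in tokens]  # WordNet default POS: noun
    tags = nltk.pos_tag(tokens)                         # Penn Treebank tags
    return tokens, stems, lemmas, tags

token_lists = [preprocess(r)[0] for r in records]

# TF-IDF weights over the tokenized records, via gensim
dictionary = corpora.Dictionary(token_lists)
bow_corpus = [dictionary.doc2bow(toks) for toks in token_lists]
tfidf = models.TfidfModel(bow_corpus)
weights = [tfidf[bow] for bow in bow_corpus]
```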
To overcome the dictionary lookup issue described above, we explored various state-of-the-art spellcheckers including Enchant 2 , Pyspellchecker 3 , Symspellpy 4 , and Autocorrect 5 .", "cite_spans": [ { "start": 146, "end": 165, "text": "(Bird et al., 2009)", "ref_id": "BIBREF3" }, { "start": 284, "end": 307, "text": "(WordNet (Miller, 1992)", "ref_id": null }, { "start": 514, "end": 539, "text": "(Rehurek and Sojka, 2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Pre-processing and Tools", "sec_num": "3.2" }, { "text": "Given the inaccuracy of existing techniques, we developed methods of correcting syntactic errors, typos, and abbreviated words using Levenshtein distance (Levenshtein, 1966) . This method uses a dictionary of domain-specific words and maps misspelled words to their correct form by selecting the most similar word in the dictionary. The Levenshtein algorithm was chosen over other distance metrics (e.g., Euclidean, cosine) as it allows us to control the number of string edits and it is widely used in spell checking (de Amorim and Zampieri, 2013). The results of our method compared to other spellchecking techniques on random samples of 500 instances from each of the 5 datasets are presented in Table 4 .", "cite_spans": [ { "start": 147, "end": 166, "text": "(Levenshtein, 1966)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 720, "end": 727, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Pre-processing and Tools", "sec_num": "3.2" }, { "text": "The results are reported in terms of success rate, showing that the Levenshtein (Lev) algorithm outperforms the Enchant (Ench), Pyspellchecker (Spell), and Autocorrect (Auto) spell checkers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pre-processing and Tools", "sec_num": "3.2" }, { "text": "Dataset Token Miss Ench Spell Auto Lev. Avi-Main 3299 289 86% 61% 73% 98% Avi-Safe 6059 828 84% 56% 68% 91% Auto-Main 2599 266 69% 27% 49% 95% Auto-Acc 2422 169 87% 59% 77% 97% Faci-Main 7758 926 83% 63% 59% 93% Table 4 : Success rate of spell checkers on 500 instances per dataset. Token stands for total tokens and Miss stands for misspelled tokens.", "cite_spans": [], "ref_spans": [ { "start": 204, "end": 211, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Code", "sec_num": null }, { "text": "WordNet was used to lemmatize the documents; however, it requires a POS parameter specifying which part of speech to lemmatize (the WordNet default is \"noun\"). As maintenance instances typically consist of verbs, nouns, adverbs, and adjectives that define a problem, action, and occurrence, using \"verb\" as the POS parameter raises the issue of mapping important noun words such as \"left\" (e.g., left engine) to \"leave\" or \"ground\" to \"grind\". To resolve this issue, as discussed in Section 3.1, we created an exception list from the developed morphosyntactic information so that the WordNet lemmatizer skips words which could belong to multiple parts of speech.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Code", "sec_num": null }, { "text": "Finally, we have performed an extrinsic evaluation of MaintNet's pre-processing pipeline by evaluating its impact on POS tagging. To carry out this evaluation, we randomly selected 500 instances of the Avi-Main dataset to serve as our gold standard. A North-American English native speaker working on the project annotated the 500 instances using the Penn Treebank tagset. We make this gold standard available to the community in MaintNet. 
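For illustration, here is a minimal sketch of the two custom steps described above: dictionary-based correction via Levenshtein distance and lemmatization with an exception list. DOMAIN_VOCAB and NO_LEMMA are hypothetical stand-ins for MaintNet's domain dictionary and morphosyntactic exception list, and the edit-distance threshold is an assumed parameter.

```python
from nltk.stem import WordNetLemmatizer  # requires nltk.download("wordnet")

DOMAIN_VOCAB = {"hydraulic", "engine", "leak", "gasket", "inspected"}
NO_LEMMA = {"left", "ground", "found"}   # ambiguous-POS words to leave untouched

def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def correct(token, max_edits=2):
    """Map a token to its closest domain-dictionary word, within max_edits."""
    if token in DOMAIN_VOCAB:
        return token
    best = min(DOMAIN_VOCAB, key=lambda w: levenshtein(token, w))
    return best if levenshtein(token, best) <= max_edits else token

lemmatizer = WordNetLemmatizer()

def lemmatize(token):
    # skip words on the exception list so "left" is not mapped to "leave"
    return token if token in NO_LEMMA else lemmatizer.lemmatize(token, pos="v")

print(correct("hydrolic"))    # -> "hydraulic"
print(lemmatize("left"))      # -> "left" (kept as-is)
print(lemmatize("replaced"))  # -> "replace"
```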
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Code", "sec_num": null }, { "text": "We compared the performance of three available POS taggers: NLTK (Bird et al., 2009) , Stanford CoreNLP (Manning et al., 2014) , and TextBlob 6 , trained on the raw and pre-processed versions of the Avi-Main dataset and evaluated on the raw and pre-processed versions of the gold standard. We present the results in Table 5 in terms of accuracy. Stanford CoreNLP obtained the best results among the three POS taggers, with 91% and 87% accuracy on the processed and raw versions of the data, respectively. The results show an improvement of 4% accuracy for each of the three POS taggers when annotating MaintNet's pre-processed data, confirming the importance of these pre-processing methods. Tagger Raw Processed Gain NLTK 77% 81% +4% Stanford 87% 91% +4% TextBlob 77% 81% +4% Table 5 : Results of three POS taggers annotating raw and (pre-)processed versions of the gold standard. ", "cite_spans": [ { "start": 65, "end": 83, "text": "(Bird et al., 2009", "ref_id": "BIBREF3" }, { "start": 84, "end": 125, "text": "), Stanford CoreNLP (Manning et al., 2014", "ref_id": null } ], "ref_spans": [ { "start": 303, "end": 310, "text": "Table 5", "ref_id": null }, { "start": 694, "end": 753, "text": "NLTK 77% 81% +4% Stanford 87% 91% +4% TextBlob 77%", "ref_id": "TABREF1" }, { "start": 762, "end": 769, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Code", "sec_num": null }, { "text": "MaintNet also features implementations of popular clustering algorithms applied to logbook data that are made freely available to the research community. The motivation behind this is that most available logbook data is not annotated, which requires a domain expert to group instances into categories. Clustering techniques were used to help in this process. We converted the terms and words into a numerical representation using libraries such as TfidfVectorizer (ElSahar et al., 2017) , resulting in a large document-term (DT) matrix. We use truncated singular value decomposition (SVD) (ElSahar et al., 2017) , known as latent semantic analysis (LSA), to perform a linear dimensionality reduction. We chose truncated SVD (LSA) over principal component analysis (PCA) (ElSahar et al., 2017) in our system because LSA can be applied directly to our TF-IDF DT matrix and focuses on document-term relationships, whereas PCA operates on a term covariance matrix (an eigendecomposition of the correlation). We experimented with four different clustering techniques: k-means (Jain, 2010), Density-Based Spatial Clustering of Applications with Noise (DBSCAN) (Ester et al., 1996) , Latent Dirichlet Allocation (LDA) (Vorontsov et al., 2015) , and hierarchical clustering (Aggarwal and Zhai, 2012) . To compare the results, the silhouette and inertia metrics (Fraley and Raftery, 1998) were used to determine the number of clusters for k-means (both provided similar results), and perplexity (Fraley and Raftery, 1998) and coherence (Vorontsov et al., 2015) scores were used for LDA. 
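The setup described above can be approximated with scikit-learn, as in the hedged sketch below: TF-IDF vectorization, truncated SVD (LSA), silhouette-based selection of k for k-means, and DBSCAN. The toy documents and all parameter values (component count, eps, the k range) are illustrative choices, not those used in MaintNet.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics import silhouette_score

docs = [
    "hydraulic leak left wing",      # hypothetical pre-processed records
    "hydraulic pump leak",
    "engine oil pressure low",
    "replaced engine oil filter",
    "cracked windshield replaced",
    "windshield crack found",
]

# document-term matrix -> low-rank LSA space
dt = TfidfVectorizer().fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(dt)

# pick k for k-means via the silhouette score
best_k = max(range(2, 5),
             key=lambda k: silhouette_score(
                 lsa, KMeans(n_clusters=k, n_init=10,
                             random_state=0).fit_predict(lsa)))
kmeans_labels = KMeans(n_clusters=best_k, n_init=10,
                       random_state=0).fit_predict(lsa)

# DBSCAN needs no preset cluster count; label -1 marks outliers
dbscan_labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(lsa)
print(best_k, kmeans_labels, dbscan_labels)
```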
In contrast, DBSCAN and hierarchical clustering do not require a predetermined number of clusters.", "cite_spans": [ { "start": 1159, "end": 1179, "text": "(Ester et al., 1996)", "ref_id": "BIBREF6" }, { "start": 1214, "end": 1238, "text": "(Vorontsov et al., 2015)", "ref_id": "BIBREF19" }, { "start": 1269, "end": 1294, "text": "(Aggarwal and Zhai, 2012)", "ref_id": "BIBREF0" }, { "start": 1363, "end": 1389, "text": "(Fraley and Raftery, 1998)", "ref_id": "BIBREF7" }, { "start": 1496, "end": 1522, "text": "(Fraley and Raftery, 1998)", "ref_id": "BIBREF7" }, { "start": 1537, "end": 1561, "text": "(Vorontsov et al., 2015)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Clustering", "sec_num": "3.3" }, { "text": "For evaluation, we used standard measurements of cluster cohesion: high intra-cluster similarity and low inter-cluster similarity. We chose three similarity algorithms, Levenshtein, Jaro, and cosine (Fraley and Raftery, 1998) , to calculate intra- and inter-cluster similarity. The cosine similarity metric is commonly used and is independent of document length, while Jaro is more flexible, providing a rating of how well strings match. We collected instances annotated by a domain expert to serve as our gold standard, and these are provided on MaintNet to encourage research into improving unsupervised clustering of maintenance logbooks.", "cite_spans": [ { "start": 224, "end": 250, "text": "(Fraley and Raftery, 1998)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Clustering", "sec_num": "3.3" }, { "text": "Finally, Figure 3 shows the empirical analysis of the four clustering techniques with and without our additional data pre-processing steps (the Levenshtein-based dictionary spell checking and the lemmatizer exception list previously presented) on the Avi-Main dataset. We examined the distribution of cluster sizes, the number of clusters, and the number of outliers (in the case of DBSCAN). Using a domain-based spellchecker and the modified lemmatizer list improved the purity and overall accuracy of the clusters by increasing the mean intra-cluster similarity and decreasing the mean inter-cluster similarity.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 17, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Clustering", "sec_num": "3.3" }, { "text": "DBSCAN provided more accurate clusters than the other algorithms while also detecting outliers, which could help identify new issues introduced into the maintenance logs or safety issues reported by the pilot during flight operation. K-means provided somewhat comparable results to DBSCAN, but it was not able to detect outliers, and determining the number of clusters (K) is challenging, especially as this number may change over time as more issues are reported. Hierarchical clustering performed poorly, with similar issues distributed across different clusters. It was also more computationally expensive than the other methods. 
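As a concrete illustration of the cohesion measure used in this evaluation, the sketch below computes mean intra- and inter-cluster cosine similarity over TF-IDF vectors; the documents and cluster labels are illustrative stand-ins for the output of one of the clustering runs above.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["hydraulic leak left wing", "hydraulic pump leak",
        "engine oil pressure low", "replaced engine oil filter"]
labels = np.array([0, 0, 1, 1])          # e.g., labels from a clustering run

sim = cosine_similarity(TfidfVectorizer().fit_transform(docs))
same = labels[:, None] == labels[None, :]    # does the pair share a cluster?
off_diag = ~np.eye(len(docs), dtype=bool)    # ignore self-similarity

intra = sim[same & off_diag].mean()   # cohesion: want this high
inter = sim[~same].mean()             # separation: want this low
print(f"intra={intra:.3f} inter={inter:.3f}")
```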
Clusters generated with LDA were better than those from hierarchical clustering; however, LDA clustered together some documents that mention the same equipment but describe different types of issues, resulting in clusters with a mixture of issue types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Clustering", "sec_num": "3.3" }, { "text": "In this paper we evaluate the tools available in MaintNet, a collaborative open-source library for predictive maintenance language resources. MaintNet provides technical logbook datasets from multiple domains: aviation, automotive, and facility maintenance. A number of other important language resources such as abbreviation lists, morphosyntactic information lists, and termbanks have been developed and are also available through the platform. Text (pre-)processing tools developed in Python were evaluated and are also made available. These include spell checking, POS tagging, and document clustering.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "Finally, we performed an intrinsic evaluation comparing the performance of several spellcheckers on five of the seven datasets in MaintNet and an extrinsic evaluation of POS tagging on raw and processed versions of the Avi-Main dataset. We showed a substantial increase in performance for all taggers we tested when using data processed with MaintNet's pre-processing pipeline. For the POS tagger comparison and clustering, we developed manually annotated gold standards which are also made available through the platform.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "Available at: https://people.rit.edu/fa3019/MaintNet/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.abisource.com/projects/enchant/ 3 https://github.com/barrust/pyspellchecker 4 https://github.com/wolfgarbe/SymSpell 5 https://github.com/fsondej/autocorrect", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://textblob.readthedocs.io/en/dev/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the University of North Dakota's aviation program for providing the aviation maintenance records dataset. We also thank Zechariah Morgain, the aviation domain expert who evaluated the results of our pre-processing techniques and clustering algorithms at many stages during this work, providing us with detailed information about these datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A survey of text clustering algorithms", "authors": [ { "first": "C", "middle": [], "last": "Charu", "suffix": "" }, { "first": "Chengxiang", "middle": [], "last": "Aggarwal", "suffix": "" }, { "first": "", "middle": [], "last": "Zhai", "suffix": "" } ], "year": 2012, "venue": "Mining Text Data", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charu C. Aggarwal and ChengXiang Zhai. 2012. A survey of text clustering algorithms. 
In Mining Text Data.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "MaintNet: A Collaborative Open-Source Library for Predictive Maintenance Language Resources", "authors": [ { "first": "Farhad", "middle": [], "last": "Akhbardeh", "suffix": "" }, { "first": "Travis", "middle": [], "last": "Desell", "suffix": "" }, { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.12443" ] }, "num": null, "urls": [], "raw_text": "Farhad Akhbardeh, Travis Desell, and Marcos Zampieri. 2020. MaintNet: A Collaborative Open-Source Library for Predictive Maintenance Language Resources. arXiv preprint arXiv:2005.12443.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "From text to topics in healthcare records: An unsupervised graph partitioning methodology", "authors": [ { "first": "M", "middle": [], "last": "Tarik Altuncu", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Mayer", "suffix": "" }, { "first": "Sophia", "middle": [ "N" ], "last": "Yaliraki", "suffix": "" }, { "first": "Mauricio", "middle": [], "last": "Barahona", "suffix": "" } ], "year": 2018, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Tarik Altuncu, Erik Mayer, Sophia N. Yaliraki, and Mauricio Barahona. 2018. From text to topics in healthcare records: An unsupervised graph partitioning methodology. ArXiv, abs/1807.02599.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Natural Language Processing with Python", "authors": [ { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" }, { "first": "Ewan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O'Reilly.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Effective spell checking methods using clustering algorithms", "authors": [ { "first": "Renato", "middle": [], "last": "Cordeiro De Amorim", "suffix": "" }, { "first": "Marcos", "middle": [], "last": "Zampieri", "suffix": "" } ], "year": 2013, "venue": "RANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Renato Cordeiro de Amorim and Marcos Zampieri. 2013. Effective spell checking methods using clustering algorithms. In RANLP.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Unsupervised open relation extraction", "authors": [ { "first": "Hady", "middle": [], "last": "Elsahar", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Demidova", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Gottschalk", "suffix": "" }, { "first": "Christophe", "middle": [], "last": "Gravier", "suffix": "" }, { "first": "Fr\u00e9d\u00e9rique", "middle": [], "last": "Laforest", "suffix": "" } ], "year": 2017, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hady ElSahar, Elena Demidova, Simon Gottschalk, Christophe Gravier, and Fr\u00e9d\u00e9rique Laforest. 2017. Unsupervised open relation extraction. 
ArXiv, abs/1801.07174.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A density-based algorithm for discovering clusters in large spatial databases with noise", "authors": [ { "first": "Martin", "middle": [], "last": "Ester", "suffix": "" }, { "first": "Hans-Peter", "middle": [], "last": "Kriegel", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Sander", "suffix": "" }, { "first": "Xiaowei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 1996, "venue": "KDD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Ester, Hans-Peter Kriegel, J\u00f6rg Sander, and Xiaowei Xu. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "How many clusters? which clustering method? answers via model-based cluster analysis", "authors": [ { "first": "Chris", "middle": [], "last": "Fraley", "suffix": "" }, { "first": "Adrian", "middle": [ "E" ], "last": "Raftery", "suffix": "" } ], "year": 1998, "venue": "Comput. J", "volume": "41", "issue": "", "pages": "578--588", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Fraley and Adrian E. Raftery. 1998. How many clusters? Which clustering method? Answers via model-based cluster analysis. Comput. J., 41:578-588.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Data clustering: 50 years beyond k-means", "authors": [ { "first": "Anil", "middle": [], "last": "Kumar Jain", "suffix": "" } ], "year": 2010, "venue": "Pattern Recognition Letters", "volume": "31", "issue": "", "pages": "651--666", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anil Kumar Jain. 2010. Data clustering: 50 years beyond k-means. Pattern Recognition Letters, 31:651-666.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Aircraft atypical approach detection using functional principal component analysis", "authors": [ { "first": "Gabriel", "middle": [], "last": "Jarry", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Delahaye", "suffix": "" }, { "first": "Florence", "middle": [], "last": "Nicol", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Feron", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gabriel Jarry, Daniel Delahaye, Florence Nicol, and Eric Feron. 2018. Aircraft atypical approach detection using functional principal component analysis. In SID.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Binary codes capable of correcting deletions, insertions, and reversals", "authors": [ { "first": "", "middle": [], "last": "Vladimir I Levenshtein", "suffix": "" } ], "year": 1966, "venue": "Soviet physics doklady", "volume": "10", "issue": "", "pages": "707--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. 
In Soviet Physics Doklady, volume 10, pages 707-710.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The stanford corenlp natural language processing toolkit", "authors": [ { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Manning", "suffix": "" }, { "first": "John", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Jenny", "middle": [ "Rose" ], "last": "Bauer", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Finkel", "suffix": "" }, { "first": "David", "middle": [], "last": "Bethard", "suffix": "" }, { "first": "", "middle": [], "last": "Mc-Closky", "suffix": "" } ], "year": 2014, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Wordnet: A lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1992, "venue": "Commun. ACM", "volume": "38", "issue": "", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller. 1992. WordNet: A lexical database for English. Commun. ACM, 38:39-41.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Software framework for topic modelling with large corpora", "authors": [ { "first": "Radim", "middle": [], "last": "Rehurek", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Sojka", "suffix": "" } ], "year": 2010, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Radim Rehurek and Petr Sojka. 2010. Software framework for topic modelling with large corpora. In LREC.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Mayo clinical text analysis and knowledge extraction system (ctakes): architecture, component evaluation and applications", "authors": [ { "first": "K", "middle": [], "last": "Guergana", "suffix": "" }, { "first": "James", "middle": [ "J" ], "last": "Savova", "suffix": "" }, { "first": "Philip", "middle": [ "V" ], "last": "Masanz", "suffix": "" }, { "first": "Jiaping", "middle": [], "last": "Ogren", "suffix": "" }, { "first": "Sunghwan", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Karin", "middle": [ "Kipper" ], "last": "Sohn", "suffix": "" }, { "first": "Christopher", "middle": [ "G" ], "last": "Schuler", "suffix": "" }, { "first": "", "middle": [], "last": "Chute", "suffix": "" } ], "year": 2010, "venue": "Journal of the American Medical Informatics Association : JAMIA", "volume": "17", "issue": "", "pages": "507--520", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guergana K. Savova, James J. Masanz, Philip V. Ogren, Jiaping Zheng, Sunghwan Sohn, Karin Kipper Schuler, and Christopher G. Chute. 2010. Mayo clinical text analysis and knowledge extraction system (cTAKES): architecture, component evaluation and applications. 
Journal of the American Medical Informatics Association: JAMIA, 17(5):507-513.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Context-aware correction of spelling errors in hungarian medical documents", "authors": [ { "first": "Borb\u00e1la", "middle": [], "last": "Sikl\u00f3si", "suffix": "" }, { "first": "Attila", "middle": [], "last": "Nov\u00e1k", "suffix": "" }, { "first": "G\u00e1bor", "middle": [], "last": "Pr\u00f3sz\u00e9ky", "suffix": "" } ], "year": 2013, "venue": "SLSP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Borb\u00e1la Sikl\u00f3si, Attila Nov\u00e1k, and G\u00e1bor Pr\u00f3sz\u00e9ky. 2013. Context-aware correction of spelling errors in Hungarian medical documents. In SLSP.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Natural language processing for aviation safety reports: From classification to interactive analysis", "authors": [ { "first": "Ludovic", "middle": [], "last": "Tanguy", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Tulechki", "suffix": "" }, { "first": "Assaf", "middle": [], "last": "Urieli", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Hermann", "suffix": "" }, { "first": "C\u00e9line", "middle": [], "last": "Raynal", "suffix": "" } ], "year": 2016, "venue": "Computers in Industry", "volume": "78", "issue": "", "pages": "80--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ludovic Tanguy, Nikola Tulechki, Assaf Urieli, Eric Hermann, and C\u00e9line Raynal. 2016. Natural language processing for aviation safety reports: From classification to interactive analysis. Computers in Industry, 78:80-95.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Automated content analysis for construction safety: A natural language processing system to extract precursors and outcomes from unstructured injury reports", "authors": [ { "first": "Antoine J.-P", "middle": [], "last": "Tixier", "suffix": "" }, { "first": "Matthew", "middle": [ "R" ], "last": "Hallowell", "suffix": "" }, { "first": "Balaji", "middle": [], "last": "Rajagopalan", "suffix": "" }, { "first": "Dean", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antoine J.-P. Tixier, Matthew R. Hallowell, Balaji Rajagopalan, and Dean Bowman. 2016. Automated content analysis for construction safety: A natural language processing system to extract precursors and outcomes from unstructured injury reports. In Automation in Construction.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Robust French Syntax Analysis: Reconciling Statistical Methods and Linguistic Knowledge in the Talismane Toolkit", "authors": [ { "first": "Assaf", "middle": [], "last": "Urieli", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Assaf Urieli. 2013. Robust French Syntax Analysis: Reconciling Statistical Methods and Linguistic Knowledge in the Talismane Toolkit. Ph.D. 
thesis, Universit\u00e9 de Toulouse II le Mirail.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Bigartm: Open source library for regularized multimodal topic modeling of large collections", "authors": [ { "first": "Konstantin", "middle": [], "last": "Vorontsov", "suffix": "" }, { "first": "Oleksandr", "middle": [], "last": "Frei", "suffix": "" }, { "first": "Murat", "middle": [], "last": "Apishev", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Romov", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Dudarenko", "suffix": "" } ], "year": 2015, "venue": "AIST", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Konstantin Vorontsov, Oleksandr Frei, Murat Apishev, Peter Romov, and Marina Dudarenko. 2015. BigARTM: Open source library for regularized multimodal topic modeling of large collections. In AIST.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "A screenshot of one of MaintNet's dataset webpages.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "The components in MaintNet's processing and information extraction pipeline: pre-processing, document clustering, and evaluation.", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Results of the clustering methods. From left to right, calculated mean and standard deviation of intra- and inter-cluster similarity, cluster size distribution, number of clusters generated by each method, and purity on the Avi-Main dataset.", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "text": "Four instances from one of MaintNet's aviation datasets.", "type_str": "table", "content": "
Domain Dataset Inst. Tokens Code Source
Aviation Maintenance 6,169 76,866 Avi-Main University of North Dakota Aviation Program
Aviation Accident 5,268 162,533 Avi-Acc Open Data by Socrata
Aviation Safety 25,558 345,979 Avi-Safe Federal Aviation Administration
Automotive Maintenance 617 4,443 Auto-Main Connecticut Open Data
Automotive Accident 54,367 242,012 Auto-Acc NYS Department of Motor Vehicles
Automotive Safety 5,456 137,038 Auto-Safe Open Data DC
Facility Maintenance 87,276 2,469,003 Faci-Main Baltimore City Maryland Preventive Maintenance
", "html": null, "num": null }, "TABREF2": { "text": "", "type_str": "table", "content": "
Instances and tokens in each dataset in MaintNet. The two new datasets (Avi-Safe and Auto-Safe) are displayed in bold.
", "html": null, "num": null }, "TABREF4": { "text": "Fields and data types in MaintNet's datasets.", "type_str": "table", "content": "", "html": null, "num": null } } } }