{ "paper_id": "N15-1022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:34:23.699593Z" }, "title": "Aligning Sentences from Standard Wikipedia to Simple Wikipedia", "authors": [ { "first": "William", "middle": [], "last": "Hwang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "wshwang@u.washington.edu" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "hannaneh@u.washington.edu" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "ostendor@u.washington.edu" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "weiwu@u.washington.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This work improves monolingual sentence alignment for text simplification, specifically for text in standard and simple Wikipedia. We introduce a method that improves over past efforts by using a greedy (vs. ordered) search over the document and a word-level semantic similarity score based on Wiktionary (vs. WordNet) that also accounts for structural similarity through syntactic dependencies. Experiments show improved performance on a hand-aligned set, with the largest gain coming from structural similarity. Resulting datasets of manually and automatically aligned sentence pairs are made available.", "pdf_parse": { "paper_id": "N15-1022", "_pdf_hash": "", "abstract": [ { "text": "This work improves monolingual sentence alignment for text simplification, specifically for text in standard and simple Wikipedia. We introduce a method that improves over past efforts by using a greedy (vs. ordered) search over the document and a word-level semantic similarity score based on Wiktionary (vs. WordNet) that also accounts for structural similarity through syntactic dependencies. Experiments show improved performance on a hand-aligned set, with the largest gain coming from structural similarity. Resulting datasets of manually and automatically aligned sentence pairs are made available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Text simplification can improve accessibility of texts for both human readers and automatic text processing. Although simplification (Wubben et al., 2012) could benefit from data-driven machine translation, paraphrasing, or grounded language acquisition techniques, e.g. 
(Callison Burch and Osborne, 2003; Fung and Cheung, 2004; Munteanu and Marcu, 2005; Smith et al., 2010; Ganitkevitch et al., 2013; Hajishirzi et al., 2012; Kedziorski et al., 2014) , work has been limited because available parallel corpora are small (Petersen and Ostendorf, 2007) or automatically generated and noisy (Kauchak, 2013) .", "cite_spans": [ { "start": 133, "end": 154, "text": "(Wubben et al., 2012)", "ref_id": null }, { "start": 271, "end": 305, "text": "(Callison Burch and Osborne, 2003;", "ref_id": null }, { "start": 306, "end": 328, "text": "Fung and Cheung, 2004;", "ref_id": "BIBREF5" }, { "start": 329, "end": 354, "text": "Munteanu and Marcu, 2005;", "ref_id": "BIBREF16" }, { "start": 355, "end": 374, "text": "Smith et al., 2010;", "ref_id": "BIBREF22" }, { "start": 375, "end": 401, "text": "Ganitkevitch et al., 2013;", "ref_id": "BIBREF6" }, { "start": 402, "end": 426, "text": "Hajishirzi et al., 2012;", "ref_id": "BIBREF9" }, { "start": 427, "end": 451, "text": "Kedziorski et al., 2014)", "ref_id": null }, { "start": 521, "end": 551, "text": "(Petersen and Ostendorf, 2007)", "ref_id": "BIBREF18" }, { "start": 589, "end": 604, "text": "(Kauchak, 2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Wikipedia is potentially a good resource for text simplification (Napoles and Dredze, 2010; Medero and Ostendorf, 2009) , since it includes standard articles and their corresponding simple articles in English. A challenge with automatic alignment is that standard and simple articles can be written independently, so they are not strictly parallel and can follow very different presentation orders. A few studies use editor comments attached to Wikipedia edit logs to extract pairs of simple and difficult words (Yatskar et al., 2010; Woodsend and Lapata, 2011) . Other methods use text-based similarity techniques (Zhu et al., 2010; Coster and Kauchak, 2011; Kauchak, 2013) , but assume that sentences in standard and simple articles follow a similar relative order.", "cite_spans": [ { "start": 65, "end": 91, "text": "(Napoles and Dredze, 2010;", "ref_id": "BIBREF17" }, { "start": 92, "end": 119, "text": "Medero and Ostendorf, 2009)", "ref_id": null }, { "start": 508, "end": 530, "text": "(Yatskar et al., 2010;", "ref_id": null }, { "start": 531, "end": 557, "text": "Woodsend and Lapata, 2011)", "ref_id": "BIBREF24" }, { "start": 611, "end": 629, "text": "(Zhu et al., 2010;", "ref_id": "BIBREF27" }, { "start": 630, "end": 655, "text": "Coster and Kauchak, 2011;", "ref_id": "BIBREF2" }, { "start": 656, "end": 670, "text": "Kauchak, 2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we align sentences in standard and simple Wikipedia using a greedy method that, for every simple sentence, finds the corresponding sentence (or sentence fragment) in standard Wikipedia. Unlike other methods, we do not make any assumptions about the relative order of sentences in standard and simple Wikipedia articles. We also constrain the many-to-one matches to cover sentence fragments. In addition, our method takes advantage of a novel word-level semantic similarity measure that is built on top of Wiktionary (vs. WordNet) and incorporates structural similarity represented in syntactic dependencies.
The Wiktionary-based similarity measure has the advantage of greater word coverage than WordNet, while the use of syntactic dependencies provides a simple mechanism for approximating semantic roles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here, we report the first manually annotated dataset for evaluating alignments for text simplification, develop and assess a series of alignment methods, and automatically generate a dataset of sentence pairs for standard and simple Wikipedia. Experiments show that our alignment method significantly outperforms previous methods on the hand-aligned set of standard and simple Wikipedia article pairs. The datasets are publicly available to facilitate further research on text simplification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given comparable articles, sentence alignment is achieved by leveraging the sentence-level similarity score and the sequence-level search strategy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Sentence-Level Scoring: There are two main approaches for sentence-level scoring. One approach, used in Wikipedia alignment (Kauchak, 2013) , computes sentence similarities as the cosine distance between vector representations of tf.idf scores of the words in each sentence. Other approaches rely on word-level semantic similarity scores \u03c3(w, w'), aggregated over a sentence pair (W, W') as", "cite_spans": [ { "start": 124, "end": 139, "text": "(Kauchak, 2013)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "s(W, W') = (1/Z) \u03a3_{w \u2208 W} max_{w' \u2208 W'} \u03c3(w, w') \u00b7 idf(w).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" },
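{ "text": "For illustration, this idf-weighted score can be computed with the following minimal Python sketch (an editorial example rather than code from any of the cited systems; the word-level similarity function sigma and the idf lookup are assumed to be supplied):

def sentence_similarity(W, W_prime, sigma, idf):
    # s(W, W'): for each word w in W, take its best word-level match
    # in W', weight that match by idf(w), and normalize by Z. Here Z
    # is taken to be the total idf mass of W, one common choice that
    # makes scores comparable across sentence lengths.
    Z = sum(idf(w) for w in W)
    if Z == 0 or not W_prime:
        return 0.0
    return sum(max(sigma(w, wp) for wp in W_prime) * idf(w) for w in W) / Z
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" },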
{ "text": "Previous work uses WordNet-based similarity (Wu and Palmer, 1994; Mohler and Mihalcea, 2009; Hosseini et al., 2014) , distributional similarity (Guo and Diab., 2012) , or discriminative similarity (Hajishirzi et al., 2010; Rastegari et al., 2015) .", "cite_spans": [ { "start": 45, "end": 66, "text": "(Wu and Palmer, 1994;", "ref_id": "BIBREF25" }, { "start": 67, "end": 93, "text": "Mohler and Mihalcea, 2009;", "ref_id": "BIBREF15" }, { "start": 94, "end": 116, "text": "Hosseini et al., 2014)", "ref_id": "BIBREF11" }, { "start": 145, "end": 166, "text": "(Guo and Diab., 2012)", "ref_id": "BIBREF7" }, { "start": 198, "end": 223, "text": "(Hajishirzi et al., 2010;", "ref_id": "BIBREF8" }, { "start": 224, "end": 247, "text": "Rastegari et al., 2015)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "In this paper, we leverage pairwise word similarities: we introduce two novel word-level semantic similarity metrics and show that they outperform the previous metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "There are several sequence-level alignment strategies (Shieber and Nelken, 2006) . In (Zhu et al., 2010) , sentence alignment between simple and standard articles is computed without constraints, so every sentence can be matched to multiple sentences in the other document. Two sentences are aligned if their similarity score is greater than a threshold. An alternative approach is to compute sentence alignment with a sequential constraint, i.e. using dynamic programming (Coster and Kauchak, 2011; Barzilay and Elhadad, 2003) . Specifically, the alignment is computed by a recursive function that optimizes the alignment of one or two consecutive sentences in one article to sentences in the other article. This method relies on consistent ordering between the two articles, which does not always hold for Wikipedia articles.", "cite_spans": [ { "start": 54, "end": 80, "text": "(Shieber and Nelken, 2006)", "ref_id": "BIBREF21" }, { "start": 86, "end": 104, "text": "(Zhu et al., 2010)", "ref_id": "BIBREF27" }, { "start": 473, "end": 499, "text": "(Coster and Kauchak, 2011;", "ref_id": "BIBREF2" }, { "start": 500, "end": 527, "text": "Barzilay and Elhadad, 2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Sequence-Level Search:", "sec_num": null }, { "text": "We develop datasets of aligned sentences in standard and simple Wikipedia. Here, we describe the manually annotated dataset and leave the details of the automatically generated dataset to Section 5.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simplification Datasets", "sec_num": "3" }, { "text": "Manually Annotated: For every sentence in a standard Wikipedia article, we create an HTML survey that lists sentences in the corresponding simple article and allows the annotator to judge each sentence pair as a good, good partial, partial, or bad match (examples in Table 1) : Good: The semantics of the simple and standard sentence completely match, possibly with small omissions (e.g., pronouns, dates, or numbers). Good Partial: A sentence completely covers the other sentence, but contains an additional clause or phrase that has information which is not contained within the other sentence. Partial: The sentences discuss unrelated concepts, but share a short related phrase that does not match considerably.
Bad: The sentences discuss unrelated concepts.", "cite_spans": [], "ref_spans": [ { "start": 266, "end": 274, "text": "Table 1)", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Simplification Datasets", "sec_num": "3" }, { "text": "The annotators were native-speaking, hourly paid undergraduate students. We randomly selected 46 article pairs from Wikipedia (downloaded in June 2012) that started with the character 'a'. In total, 67,853 sentence pairs were annotated (277 good, 281 good partial, 117 partial, and 67,178 bad). The kappa value for interannotator agreement is 0.68 (13% of articles were dual annotated). Most disagreements between annotators are confusions between 'partial' and 'good partial' matches. The manually annotated dataset is used as a test set for evaluating alignment methods as well as for tuning parameters for generating automatically aligned pairs across standard and simple Wikipedia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simplification Datasets", "sec_num": "3" }, { "text": "We use a sentence-level similarity score that builds on a new word-level semantic similarity, described below, together with a greedy search over the article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Alignment Method", "sec_num": "4" }, { "text": "Word-level similarity functions return a similarity score \u03c3(w_1, w_2) between words w_1 and w_2. We introduce two novel similarity metrics: Wiktionary-based similarity and structural semantic similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word-Level Similarity", "sec_num": "4.1" }, { "text": "The Wiktionary-based semantic similarity measure leverages synonym information in Wiktionary as well as word-definition cooccurrence, which is represented in a graph and referred to as WikNet. In our work, each lexical content word (noun, verb, adjective, and adverb) in the English Wiktionary is represented by one node in WikNet. If word w_2 appears in any of the sense definitions of word w_1, an edge between w_1 and w_2 is added, as illustrated in Figure 1 . We prune the WikNet using the following steps: i) morphological variations are mapped to their baseforms; ii) atypical word senses (e.g. \"obsolete,\" \"Jamaican English\") are removed; and iii) stopwords (determined based on high definition frequency) are removed. After pruning, there are roughly 177k nodes and 1.15M undirected edges. As expected, our Wiktionary-based similarity metric has higher word coverage (71.8%) on our annotated dataset than WordNet (58.7%).", "cite_spans": [], "ref_spans": [ { "start": 452, "end": 460, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "WikNet Similarity:", "sec_num": null }, { "text": "Motivated by the fact that the WikNet graph structure is similar to that of many social networks (Watts and Strogatz, 1998; Wu, 2012) , we characterize semantic similarity with a variation on the Jaccard coefficient (Salton and McGill, 1983) , a link-based node similarity measure commonly applied to person-relatedness evaluations in social network studies, which quantifies the number of shared neighbors of two words. More specifically, we use the extended Jaccard coefficient, which looks at neighbors within an n-step reach (Fogaras and Racz, 2005) , with an added term to indicate whether the words are direct neighbors.
In addition, if the words or their neighbors have synonym sets in Wiktionary, then the shared synonyms are used in the extended Jaccard measure. If the two words are in each other's synonym lists, then the similarity is set to 1; otherwise,", "cite_spans": [ { "start": 97, "end": 123, "text": "(Watts and Strogatz, 1998;", "ref_id": "BIBREF23" }, { "start": 124, "end": 133, "text": "Wu, 2012)", "ref_id": "BIBREF26" }, { "start": 342, "end": 367, "text": "(Salton and McGill, 1983)", "ref_id": "BIBREF20" }, { "start": 539, "end": 563, "text": "(Fogaras and Racz, 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "WikNet Similarity:", "sec_num": null }, { "text": "\u03c3_wk(w_1, w_2) = \u03a3_{l=0}^{n} J_{s_l}(w_1, w_2), with J_{s_l}(w_1, w_2) = |\u0393_l(w_1) \u2229_syn \u0393_l(w_2)| / |\u0393_l(w_1) \u222a \u0393_l(w_2)|,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WikNet Similarity:", "sec_num": null }, { "text": "where \u0393_l(w_i) is the l-step neighbor set of w_i and \u2229_syn denotes an intersection that also counts shared synonyms, if any. We precomputed similarities between pairs of words in WikNet to make the alignment algorithm more efficient. The WikNet is available at http://ssli.ee.washington.edu/tial/projects/simplification/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WikNet Similarity:", "sec_num": null }, { "text": "Structural Semantic Similarity: We extend the word-level similarity metric to account for both the semantic similarity between words and the dependency structure of the words in a sentence. We create a triplet for each word using Stanford's dependency parser (de Marneffe et al., 2006) . Each triplet t_w = (w, h, r) consists of the given word w, its head word h (governor), and the dependency relationship r (e.g., modifier, subject, etc.) between w and h. The similarity between words w_1 and w_2 combines the similarity between these three features in order to boost the similarity score of words whose head words are similar and appear in the same dependency structure:", "cite_spans": [ { "start": 268, "end": 294, "text": "(de Marneffe et al., 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "WikNet Similarity:", "sec_num": null }, { "text": "\u03c3_ss_wk(w_1, w_2) = \u03c3_wk(w_1, w_2) + \u03c3_wk(h_1, h_2) \u00b7 \u03c3_r(r_1, r_2),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WikNet Similarity:", "sec_num": null }, { "text": "where \u03c3_wk is the WikNet similarity and \u03c3_r(r_1, r_2) represents the dependency similarity between relations r_1 and r_2, such that \u03c3_r = 0.5 if both relations fall into the same category and \u03c3_r = 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "WikNet Similarity:", "sec_num": null }, { "text": "To avoid aligning multiple sentences to the same content, we require one-to-one matches between sentences in standard and simple Wikipedia articles using a greedy algorithm. We first compute similarities between all sentences S_j in the simple article and A_i in the standard article using a sentence-level similarity score. Then, our method iteratively selects the most similar sentence pair S*, A* = arg max s(S_j, A_i) and removes all other pairs associated with the respective sentences, repeating until all sentences in the shorter document are aligned.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy Sequence-level Alignment", "sec_num": "4.2" },
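{ "text": "The pieces just described can be put together as in the following Python sketch (an editorial illustration, not the authors' released implementation; wiknet mapping each word to its neighbor set, synonyms mapping each word to its synonym set, relation_category mapping dependency labels to coarse categories, and the sentence-level score sim are all assumed inputs):

def l_step_neighbors(w, wiknet, l):
    # Gamma_l(w): taken here as all nodes within l edges of w.
    frontier, seen = {w}, {w}
    for _ in range(l):
        frontier = {v for u in frontier for v in wiknet.get(u, set())} - seen
        seen |= frontier
    return seen

def wiknet_similarity(w1, w2, wiknet, synonyms, n=2):
    # sigma_wk: direct synonyms get similarity 1; otherwise sum the
    # synonym-aware Jaccard coefficients J_{s_l} for l = 0 .. n.
    if w2 in synonyms.get(w1, set()) or w1 in synonyms.get(w2, set()):
        return 1.0
    score = 0.0
    for l in range(n + 1):
        g1 = l_step_neighbors(w1, wiknet, l)
        g2 = l_step_neighbors(w2, wiknet, l)
        # Count u in Gamma_l(w1) as shared when u itself, or one of
        # its synonyms, also occurs in Gamma_l(w2).
        shared = sum(1 for u in g1 if u in g2 or synonyms.get(u, set()) & g2)
        union = len(g1 | g2)
        if union:
            score += shared / union
    return score

def structural_similarity(t1, t2, wiknet, synonyms, relation_category):
    # sigma_ss_wk over dependency triplets t = (word, head, relation):
    # boost the word-level score when the heads are similar and the
    # relations fall into the same coarse category (sigma_r = 0.5).
    (w1, h1, r1), (w2, h2, r2) = t1, t2
    sigma_r = 0.5 if relation_category(r1) == relation_category(r2) else 0.0
    return (wiknet_similarity(w1, w2, wiknet, synonyms)
            + wiknet_similarity(h1, h2, wiknet, synonyms) * sigma_r)

def greedy_align(simple_sents, standard_sents, sim):
    # Score every pair once, then repeatedly take the highest-scoring
    # pair whose two sentences are both still unmatched.
    pairs = sorted(((sim(s, a), j, i)
                    for j, s in enumerate(simple_sents)
                    for i, a in enumerate(standard_sents)), reverse=True)
    used_j, used_i, alignment = set(), set(), []
    for score, j, i in pairs:
        if j not in used_j and i not in used_i:
            alignment.append((j, i, score))
            used_j.add(j)
            used_i.add(i)
    return alignment
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy Sequence-level Alignment", "sec_num": "4.2" },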
{ "text": "The cost of aligning sentences in two articles S and A is O(mn), where m and n are the number of sentences in S and A, respectively. The run time of our method using WikNet is less than a minute for the sentence pairs in our test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Greedy Sequence-level Alignment", "sec_num": "4.2" }, { "text": "Many simple sentences only match with a fragment of a standard sentence. Therefore, we extend the greedy algorithm to discover good partial matches as well. The intuition is that two sentences are good partial matches if a simple sentence has higher similarity with a fragment of a standard sentence than with the complete sentence. We extract fragments for every sentence from the Stanford syntactic parse tree (Klein and Manning, 2003) . The fragments are generated based on the second level of the syntactic parse tree. Specifically, each fragment is an S, SBAR, or SINV node at this level. We then calculate the similarity of every simple sentence S_j with every standard sentence A_i, as well as with the fragments A_i^k of the standard sentence. The same greedy algorithm is then used to align simple sentences with standard sentences or their fragments.", "cite_spans": [ { "start": 407, "end": 432, "text": "(Klein and Manning, 2003)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Greedy Sequence-level Alignment", "sec_num": "4.2" }, { "text": "We test our method on all pairs of standard and simple sentences for each article in the hand-annotated data (no training data is used). For our experiments, we preprocess the data by removing topic names, list markers, and non-English words. In addition, the data was tokenized, lemmatized, and parsed using Stanford CoreNLP (Manning et al., 2014).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Comparison to Baselines: The baselines are our implementations of previous work: Unconstrained WordNet (Mohler and Mihalcea, 2009) , which uses an unconstrained search for aligning sentences and WordNet semantic similarity (in particular Wu-Palmer (1994) ); Unconstrained Vector Space (Zhu et al., 2010), which uses a vector space representation and an unconstrained search for aligning sentences; and Ordered Vector Space (Coster and Kauchak, 2011) , which uses dynamic programming for sentence alignment and vector space scoring. We compare our method (Greedy Structural WikNet), which combines the novel Wiktionary-based structural semantic similarity score with a greedy search, to the baselines. Figure 2 and Table 2 show that our method achieves higher precision-recall, max F1, and AUC compared to the baselines. The precision-recall score is computed for good pairs vs. other pairs (good partial, partial, and bad).", "cite_spans": [ { "start": 103, "end": 130, "text": "(Mohler and Mihalcea, 2009)", "ref_id": "BIBREF15" }, { "start": 238, "end": 254, "text": "Wu-Palmer (1994)", "ref_id": "BIBREF25" }, { "start": 388, "end": 414, "text": "(Coster and Kauchak, 2011)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 663, "end": 671, "text": "Figure 2", "ref_id": null }, { "start": 676, "end": 683, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "5.1" }, { "text": "From error analysis, we found that most mistakes are caused by missing good matches (lower recall). As shown by the precision-recall curve, we obtain high precision (about .9) at recall 0.5.
Thus, applying our method to a large dataset yields high quality sentence alignments that would benefit data-driven learning in text simplification. Table 2 also shows that our method outperforms the baselines in identifying good and good partial matches. Error analysis shows that our fragment generation technique does not generate all possible or meaningful fragments, which suggests a direction for future work. We list a few qualitative examples in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 323, "end": 330, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 628, "end": 635, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results", "sec_num": "5.1" }, { "text": "Ablation Study: Table 4 shows the results of ablating each component of our method, sequence-level alignment and word-level similarity.", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 23, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "5.1" }, { "text": "Sequence-level Alignment: We study the contribution of the greedy approach in our method by using the word-level structural semantic WikNet similarity \u03c3_ss_wk and replacing the sequence-level greedy search strategy with the dynamic programming and unconstrained approaches. As expected, the dynamic programming approach used in previous work does not perform as well as our method, even with the structural semantic WikNet similarity, because the simple Wikipedia articles are not explicit simplifications of standard articles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.1" }, { "text": "Word-level Alignment: Table 4 also shows the contribution of the structural semantic WikNet similarity measure \u03c3_ss_wk vs. other word-level similarities (WordNet similarity \u03c3_wd, structural semantic WordNet similarity \u03c3_ss_wd, and WikNet similarity \u03c3_wk). In all the experiments, we use the sequence-level greedy alignment method. The structural semantic similarity measures improve over the corresponding similarity measures for both WordNet and WikNet. Moreover, WikNet similarity outperforms WordNet, and the structural semantic WikNet similarity measure achieves the best performance.", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 29, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "5.1" }, { "text": "We develop a parallel corpus of aligned sentence pairs between standard and simple Wikipedia, together with their similarity scores. In particular, we use our best-performing method to align sentences from 22k standard and simple articles, which were downloaded in April 2014. To speed up our method, we index the similarity scores of frequent words and distribute computations over multiple CPUs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically Aligned Data", "sec_num": "5.2" }, { "text": "We release a dataset of aligned sentence pairs, with a scaled threshold greater than 0.45. 1 Based on the precision-recall data, we choose a scaled threshold of 0.67 (P = 0.798, R = 0.599, F1 = 0.685) for good matches, and 0.53 (P = 0.687, R = 0.495, F1 = 0.575) for good partial matches. The selected thresholds yield around 150k good matches, 130k good partial matches, and 110k uncategorized matches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically Aligned Data", "sec_num": "5.2" },
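{ "text": "For illustration, such an operating point can be chosen from the hand-labeled pairs with a sweep like the following minimal Python sketch (an editorial example; scores holds alignment scores and labels marks which pairs were hand-labeled as good; a precision-oriented variant of the same sweep can likewise pick more conservative thresholds):

def max_f1_threshold(scores, labels):
    # Visit pairs from highest to lowest score; each step lowers the
    # candidate threshold by one pair and updates precision and recall.
    order = sorted(range(len(scores)), key=lambda k: -scores[k])
    total_pos = sum(labels)  # number of hand-labeled good matches
    tp = 0
    best_f1, best_threshold = 0.0, None
    for seen, k in enumerate(order, start=1):
        tp += 1 if labels[k] else 0
        precision, recall = tp / seen, tp / total_pos
        if precision + recall > 0:
            f1 = 2 * precision * recall / (precision + recall)
            if f1 > best_f1:
                best_f1, best_threshold = f1, scores[k]
    return best_threshold, best_f1
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically Aligned Data", "sec_num": "5.2" },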
{ "text": "In addition, around 51.5 million potential matches, with a scaled score below 0.45, are pruned from the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Automatically Aligned Data", "sec_num": "5.2" }, { "text": "This work introduces a sentence alignment method for text simplification using a new word-level similarity measure (using Wiktionary and dependency structure) and a greedy search over sentences and sentence fragments. Experiments on comparable standard and simple Wikipedia articles show that our method outperforms current baselines. The resulting hand-aligned and automatically aligned datasets are publicly available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Future work involves developing text simplification techniques using the introduced datasets. In addition, we plan to improve our current alignment technique with better text preprocessing (e.g., coreference resolution (Hajishirzi et al., 2013) ), learned similarities, and phrase alignment techniques to obtain better partial matches.", "cite_spans": [ { "start": 219, "end": 244, "text": "(Hajishirzi et al., 2013)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "http://ssli.ee.washington.edu/tial/projects/simplification/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgments This research was supported in part by grants from the NSF (IIS-0916951) and (IIS-1352249). The authors also wish to thank Alex Tan and Hayley Garment for annotations, and the anonymous reviewers for their valuable feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sentence alignment for monolingual comparable corpora", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Noemie", "middle": [], "last": "Elhadad", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Barzilay and Elhadad2003] Regina Barzilay and Noemie Elhadad. 2003. Sentence alignment for monolingual comparable corpora. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bootstrapping parallel corpora", "authors": [], "year": 2003, "venue": "Proceedings of the Human Language Technologies -North American Chapter of the Association for Computational Linguistics Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Callison Burch and Osborne2003] Chris Callison Burch and Miles Osborne. 2003. Bootstrapping parallel corpora.
In Proceedings of the Human Language Technologies -North American Chapter of the Association for Computational Linguistics Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond -Volume 3 (HLT NAACL PARALLEL).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Simple English Wikipedia: A new text simplification task", "authors": [ { "first": "William", "middle": [], "last": "Coster", "suffix": "" }, { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Coster and Kauchak2011] William Coster and David Kauchak. 2011. Simple English Wikipedia: A new text simplification task. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Generating typed dependency parses from phrase structure parses", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "de Marneffe", "suffix": "" }, { "first": "Bill", "middle": [], "last": "MacCartney", "suffix": "" }, { "first": "Christopher", "middle": [ "D." ], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the International Conference on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[de Marneffe et al.2006] Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the International Conference on Language Resources and Evaluation (LREC).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Scaling link-based similarity search", "authors": [ { "first": "Daniel", "middle": [], "last": "Fogaras", "suffix": "" }, { "first": "Balazs", "middle": [], "last": "Racz", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the International Conference on World Wide Web (WWW)", "volume": "", "issue": "", "pages": "641--650", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Fogaras and Racz2005] Daniel Fogaras and Balazs Racz. 2005. Scaling link-based similarity search. In Proceedings of the International Conference on World Wide Web (WWW), pages 641-650.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Mining Very-Non-Parallel Corpora: Parallel Sentence and Lexicon Extraction via Bootstrapping and EM", "authors": [ { "first": "Pascale", "middle": [], "last": "Fung", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Cheung", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Fung and Cheung2004] Pascale Fung and Percy Cheung. 2004. Mining Very-Non-Parallel Corpora: Parallel Sentence and Lexicon Extraction via Bootstrapping and EM.
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "PPDB: The paraphrase database", "authors": [ { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT)", "volume": "", "issue": "", "pages": "758--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Ganitkevitch et al.2013] Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT), pages 758-764, Atlanta, Georgia, June. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Modeling semantic textual similarity in the latent space", "authors": [ { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Conference of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Guo and Diab2012] Weiwei Guo and Mona Diab. 2012. Modeling semantic textual similarity in the latent space. In Proceedings of the Conference of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Adaptive near-duplicate detection via similarity learning", "authors": [ { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Association for Computing Machinery Special Interest Group in Information Retrieval (ACM SIGIR)", "volume": "", "issue": "", "pages": "419--426", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Hajishirzi et al.2010] Hannaneh Hajishirzi, Wen-tau Yih, and Aleksander Kolcz. 2010. Adaptive near-duplicate detection via similarity learning. In Proceedings of the Association for Computing Machinery Special Interest Group in Information Retrieval (ACM SIGIR), pages 419-426.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semantic understanding of professional soccer commentaries", "authors": [ { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Hajishirzi et al.2012] Hannaneh Hajishirzi, Mohammad Rastegari, Ali Farhadi, and Jessica Hodgins. 2012. Semantic understanding of professional soccer commentaries. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Joint coreference resolution and named-entity linking with multi-pass sieves", "authors": [], "year": 2013, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Hajishirzi et al.2013] Hannaneh Hajishirzi, Leila Zilles, Daniel S Weld, and Luke S Zettlemoyer. 2013. Joint coreference resolution and named-entity linking with multi-pass sieves.
In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning to solve arithmetic word problems with verb categorization", "authors": [ { "first": "Mohammad", "middle": [ "Javad" ], "last": "Hosseini", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Hosseini et al.2014] Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Improving text simplification language modeling using unsimplified text data", "authors": [ { "first": "David", "middle": [], "last": "Kauchak", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "386--396", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Kauchak2013] David Kauchak. 2013. Improving text simplification language modeling using unsimplified text data. In Proceedings of the Conference of the Association for Computational Linguistics (ACL). [Kedziorski et al.2014] Rik Koncel Kedziorski, Hannaneh Hajishirzi, and Ali Farhadi. 2014. Multi-resolution language grounding with weak supervision. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 386-396.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The Stanford CoreNLP natural language processing toolkit", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D." ], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Conference of the Association for Computational Linguistics: System Demonstrations (ACL)", "volume": "", "issue": "", "pages": "55--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Klein and Manning2003] Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the Conference of the Association for Computational Linguistics (ACL), pages 423-430. [Manning et al.2014] Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the Conference of the Association for Computational Linguistics: System Demonstrations (ACL), pages 55-60.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Analysis of vocabulary difficulty using wiktionary", "authors": [], "year": 2009, "venue": "Proceedings of the Speech and Language Technology in Education Workshop (SLaTE)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Medero and Ostendorf2009] Julie Medero and Mari Ostendorf. 2009. Analysis of vocabulary difficulty using wiktionary.
In Proceedings of the Speech and Language Technology in Education Workshop (SLaTE).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Text-to-text semantic similarity for automatic short answer grading", "authors": [ { "first": "Michael", "middle": [], "last": "Mohler", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Mohler and Mihalcea2009] Michael Mohler and Rada Mihalcea. 2009. Text-to-text semantic similarity for automatic short answer grading. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Improving machine translation performance by exploiting non-parallel corpora", "authors": [ { "first": "Dragos Stefan", "middle": [], "last": "Munteanu", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Munteanu and Marcu2005] Dragos Stefan Munteanu and Daniel Marcu. 2005. Improving machine translation performance by exploiting non-parallel corpora. Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning simple wikipedia: a cogitation in ascertaining abecedarian language", "authors": [ { "first": "Courtney", "middle": [], "last": "Napoles", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Dredze", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies Workshop on Computation Linguistics and Writing: Writing Processes and Authoring Aids (NAACL HLT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Napoles and Dredze2010] Courtney Napoles and Mark Dredze. 2010. Learning simple wikipedia: a cogitation in ascertaining abecedarian language. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies Workshop on Computation Linguistics and Writing: Writing Processes and Authoring Aids (NAACL HLT).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Text simplification for language learners: A corpus analysis", "authors": [ { "first": "Sarah", "middle": [], "last": "Petersen", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Speech and Language Technology in Education Workshop (SLaTE)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Petersen and Ostendorf2007] Sarah Petersen and Mari Ostendorf. 2007. Text simplification for language learners: A corpus analysis.
In Proceedings of the Speech and Language Technology in Education Workshop (SLaTE).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Discriminative and consistent similarities in instance-level multiple instance learning", "authors": [ { "first": "Mohammad", "middle": [], "last": "Rastegari", "suffix": "" } ], "year": 2015, "venue": "Proceedings of Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Rastegari et al.2015] Mohammad Rastegari, Hannaneh Hajishirzi, and Ali Farhadi. 2015. Discriminative and consistent similarities in instance-level multiple instance learning. In Proceedings of Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Introduction to Modern Information Retrieval", "authors": [ { "first": "Gerard", "middle": [], "last": "Salton", "suffix": "" }, { "first": "Michael", "middle": [], "last": "McGill", "suffix": "" } ], "year": 1983, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Salton and McGill1983] Gerard Salton and Michael McGill. 1983. Introduction to Modern Information Retrieval. McGraw-Hill.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Towards robust context-sensitive sentence alignment for monolingual corpora", "authors": [ { "first": "Stuart", "middle": [], "last": "Shieber", "suffix": "" }, { "first": "Rani", "middle": [], "last": "Nelken", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Conference of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Shieber and Nelken2006] Stuart Shieber and Rani Nelken. 2006. Towards robust context-sensitive sentence alignment for monolingual corpora. In Proceedings of the Conference of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Extracting parallel sentences from comparable corpora using document level alignment", "authors": [], "year": 2010, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Smith et al.2010] Jason R. Smith, Chris Quirk, and Kristina Toutanova. 2010. Extracting parallel sentences from comparable corpora using document level alignment. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Collective dynamics of small-world networks", "authors": [ { "first": "Duncan", "middle": [ "J." ], "last": "Watts", "suffix": "" }, { "first": "Steven", "middle": [ "H." ], "last": "Strogatz", "suffix": "" } ], "year": 1998, "venue": "Nature", "volume": "", "issue": "", "pages": "440--442", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Watts and Strogatz1998] Duncan J. Watts and Steven H. Strogatz. 1998. Collective dynamics of small-world networks.
Nature, pages 440-442.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Wikisimple: Automatic simplification of wikipedia articles", "authors": [ { "first": "Kristian", "middle": [], "last": "Woodsend", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Association for Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "927--932", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Woodsend and Lapata2011] Kristian Woodsend and Mirella Lapata. 2011. Wikisimple: Automatic simplification of wikipedia articles. In Proceedings of the Association for Advancement of Artificial Intelligence Conference on Artificial Intelligence (AAAI), pages 927-932, San Francisco, CA.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Verbs semantics and lexical selection", "authors": [ { "first": "Zhibiao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the Conference of the Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Wu and Palmer1994] Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of the Conference of the Association for Computational Linguistics (ACL).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Graph-based Algorithms for Lexical Semantics and its Applications", "authors": [ { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT)", "volume": "", "issue": "", "pages": "1015--1024", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Wu2012] Wei Wu. 2012. Graph-based Algorithms for Lexical Semantics and its Applications. Ph.D. thesis, University of Washington. [Wubben et al.2012] Sander Wubben, Antal Van Den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the Conference of the Association for Computational Linguistics (ACL), pages 1015-1024. [Yatskar et al.2010] Mark Yatskar, Bo Pang, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2010. For the sake of simplicity: Unsupervised extraction of lexical simplifications from wikipedia. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT).", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A monolingual tree-based translation model for sentence simplification", "authors": [], "year": 2010, "venue": "Proceedings of the International Conference on Computational Linguistics (COLING)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Zhu et al.2010] Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the International Conference on Computational Linguistics (COLING).", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Part of WikNet with words \"boy\" and \"lad\"."
}, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Figure 2: Precision-recall curve for our method vs. baselines." }, "TABREF0": { "type_str": "table", "html": null, "content": "
Good: Apple sauce or applesauce is a puree made of apples. | Applesauce (or apple sauce) is a sauce that is made from stewed or mashed apples.
Good Partial: Commercial versions of applesauce are readily available in supermarkets. | It is easy to make at home, and it is also sold already made in supermarkets as a common food.
Partial: Applesauce is a sauce that is made from stewed and mashed apples. | Applesauce is made by cooking down apples with water or apple cider to the desired level.
", "num": null, "text": "Annotated examples: the matching regions for partial and good partial are italicized." }, "TABREF2": { "type_str": "table", "html": null, "content": "", "num": null, "text": "Max F1, AUC for identifying good matches and identifying good & good partial matches." }, "TABREF3": { "type_str": "table", "html": null, "content": "
Good: The castle was later incorporated into the construction of Ashtown Lodge which was to serve as the official residence of the Under Secretary from 1782. | After the building was made bigger and improved, it was used as the house for the Under Secretary of Ireland from 1782.
Good Partial: Mozart's Clarinet Concerto and Clarinet Quintet are both in A major, and generally Mozart was more likely to use clarinets in A major than in any other key besides E-flat major. | Mozart used clarinets in A major often.
", "num": null, "text": "Qualitative examples of the good and good partial matches identified by our method." }, "TABREF4": { "type_str": "table", "html": null, "content": "Sequence-level | Max F1 | AUC
Greedy (sim_G, \u03c3_ss_wk) | 0.712+ | 0.694+
Ordered (sim_DP, \u03c3_ss_wk) | 0.656+ | 0.610+
Unconstrained (sim_UC, \u03c3_ss_wk) | 0.689 | 0.689
Word-level | Max F1 | AUC
Structural WikNet (sim_G, \u03c3_ss_wk) | 0.712+ | 0.694+
WordNet (sim_G, \u03c3_wd) | 0.665+ | 0.663+
Structural WordNet (sim_G, \u03c3_ss_wd) | 0.685 | 0.679
WikNet (sim_G, \u03c3_wk) | 0.697 | 0.669
", "num": null, "text": "Max F1, AUC for ablation study on word-level and sequence-level similarity scores. Values with the + superscript are significant with p<0.05." } } } }