{ "paper_id": "2019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:37:28.082194Z" }, "title": "Noun Generation for Nominalization in Academic Writing", "authors": [ { "first": "Dariush", "middle": [], "last": "Saberi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hong Kong", "location": {} }, "email": "dsaberi2-c@my.cityu.edu.hk" }, { "first": "John", "middle": [], "last": "Lee", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Hong Kong", "location": {} }, "email": "jsylee@cityu.edu.hk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Nominalization is a common technique in academic writing for producing abstract and formal text. Since it often involves paraphrasing a clause with a verb or adjectival phrase into a noun phrase, an important task is to generate the noun to replace the original verb or adjective. Given that a verb or adjective may have multiple nominalized forms with similar meaning, the system needs to be able to automatically select the most appropriate one. We propose an unsupervised algorithm that makes the selection with BERT, a stateof-the-art neural language model. Experimental results show that it significantly outperforms baselines based on word frequencies, word2vec and doc2vec.", "pdf_parse": { "paper_id": "2019", "_pdf_hash": "", "abstract": [ { "text": "Nominalization is a common technique in academic writing for producing abstract and formal text. Since it often involves paraphrasing a clause with a verb or adjectival phrase into a noun phrase, an important task is to generate the noun to replace the original verb or adjective. Given that a verb or adjective may have multiple nominalized forms with similar meaning, the system needs to be able to automatically select the most appropriate one. We propose an unsupervised algorithm that makes the selection with BERT, a stateof-the-art neural language model. Experimental results show that it significantly outperforms baselines based on word frequencies, word2vec and doc2vec.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Automatic paraphrasing -re-writing a sentence while preserving its original meaning -has received much interest in the computational linguistics community in recent years. One type of paraphrasing is lexical substitution (McCarthy and Navigli, 2009) , which replaces a word or short phrase with another. Paraphrasing can also involve manipulation of the clausal structure of a sentence, with a range of options that has been described as the \"cline of metaphoricity\" (Halliday and Matthiessen, 2014) . Towards one end of this cline, the text offers a \"congruent construal of experience\", and the sentences tend to be clausally complex but lexically simple (e.g., the complex clause \"Because she didn't know the rules, she died\" 1 ). 
Towards the other end of the cline, the text exhibits a \"metaphorical reconstrual\", and the sentences are clausally simpler and lexically denser (e.g., the nominal group \"Her death through ignorance of the rules\").", "cite_spans": [ { "start": 221, "end": 249, "text": "(McCarthy and Navigli, 2009)", "ref_id": "BIBREF8" }, { "start": 467, "end": 499, "text": "(Halliday and Matthiessen, 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Previous studies on automatic manipulation of clausal structure have mostly concentrated on syntactic simplification, typically by splitting a complex sentence into two or more simple sentences (Siddharthan, 2002; Alu\u00edsio et al., 2008; Narayan and Gardent, 2014). More recent research has also attempted semi-automatic nominalization (Lee et al., 2018), which aims to paraphrase a complex clause into a simplex clause by transforming verb or adjectival phrases into noun phrases.", "cite_spans": [ { "start": 194, "end": 213, "text": "(Siddharthan, 2002;", "ref_id": "BIBREF13" }, { "start": 214, "end": 235, "text": "Alu\u00edsio et al., 2008;", "ref_id": "BIBREF0" }, { "start": 236, "end": 262, "text": "Narayan and Gardent, 2014)", "ref_id": "BIBREF11" }, { "start": 335, "end": 353, "text": "(Lee et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Noun generation is a core task in the nominalization pipeline (Table 2). Resources such as NOMLEX (Meyers et al., 1998) and CATVAR (Habash and Dorr, 2003) have greatly facilitated this task by providing lists of related nouns, verbs and adjectives. However, straightforward look-up in these lists does not suffice, since a word may have multiple nominalized forms with similar meaning. For example, the verb \"dominate\" can be transformed into \"domination\", \"dominance\" or \"dominion\", as well as the gerund form \"dominating\". We will henceforth refer to these as the \"noun candidates\". As shown in Table 1, in the context of the clause \"The British dominated India\", \"domination\" would be preferred (i.e., \"British domination of India\"); in the context of the clause \"older people dominated this neighborhood\", \"dominance\" would be more appropriate (i.e., \"The dominance of older people in this neighborhood\").", "cite_spans": [ { "start": 99, "end": 120, "text": "(Meyers et al., 1998)", "ref_id": "BIBREF9" }, { "start": 133, "end": 156, "text": "(Habash and Dorr, 2003)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 62, "end": 71, "text": "(Table 2)", "ref_id": "TABREF3" }, { "start": 596, "end": 603, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The goal of this paper is to evaluate a noun generation algorithm that selects the best noun candidate during nominalization. The approach taken by Lee et al. (2018), which considers noun frequency statistics alone, always selects the same noun regardless of the sentential context. We instead use a neural language model, BERT, for noun generation. Experimental results show that it significantly outperforms baselines based on word frequencies, word2vec and doc2vec. The rest of the paper is organized as follows. Following a review of previous work (Section 2), we give details on our dataset (Section 3) and outline our approach (Section 4). 
We then report experimental results (Section 5) and conclude.", "cite_spans": [ { "start": 148, "end": 165, "text": "Lee et al. (2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We first discuss the relation between our task and lexical substitution (Section 2.1) and word sense disambiguation (Section 2.2). We then describe an existing nominalization system (Section 2.3), whose noun generation algorithm will serve as our baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "Noun generation in nominalization can be considered a specialized kind of lexical substitution. While lexical substitution typically aims for a paraphrase in the same part-of-speech (POS) (e.g., \"dominate\" \u2192 \"prevail\"), our task by definition involves a change in POS, usually from a verb or adjective to a noun (e.g., \"dominate\" \u2192 \"domination\"). This difference is reflected in the limited number of verb-noun or adjective-noun entries in open-source paraphrase corpora such as PPDB (Ganitkevitch et al., 2013).", "cite_spans": [ { "start": 484, "end": 511, "text": "(Ganitkevitch et al., 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Relation to lexical substitution", "sec_num": "2.1" }, { "text": "Word sense disambiguation (WSD) is relevant to noun generation to the extent that verb senses can guide the choice of noun candidates. For example, \"succeed\" in the sense of \"achieve the desired result\" should be paraphrased as \"success\" (\"He succeeded in ...\" \u2192 \"His success in ...\"), whereas \"succeed\" in the sense of \"take over a position\" would require \"succession\" (\"He succeeded to the throne ...\" \u2192 \"His succession to the throne ...\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation to word sense disambiguation", "sec_num": "2.2" }, { "text": "WSD is not necessary for noun generation when the verb corresponds to a noun with the same range of meanings. Consider the verb \"conclude\", which may mean either \"to finish\" or \"to reach agreement\". Nominalization requires no WSD here, since the noun \"conclusion\" preserves the same semantic ambiguity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation to word sense disambiguation", "sec_num": "2.2" }, { "text": "In other cases, our task requires fine-grained WSD, especially when the noun candidates are semantically close. Their differences can be rather nuanced (e.g., \"domination\" vs. \"dominance\"), making them challenging for typical WSD models to distinguish.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relation to word sense disambiguation", "sec_num": "2.2" }, { "text": "In the first reported tool for semi-automatic nominalization aimed at academic writing (Lee et al., 2018), the system first parses the input clause to detect the potential for nominalization. If its dependency tree exhibits an expected structure (e.g., Table 2(i)), the system proceeds to lexical mapping (Table 2(ii)), which includes transforming the main verb (\"entered\") to a noun (\"entrance\"); an adverb (\"abruptly\") to an adjective (\"abrupt\"); and the subject (\"the clown\") to a possessive form (\"the clown's\" or \"of the clown\"). Finally, the system generates a sentence by choosing one of the possible surface realizations through heuristics (Table 2(iii)).", "cite_spans": [ { "start": 87, "end": 105, "text": "(Lee et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 254, "end": 261, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Nominalization pipeline", "sec_num": "2.3" }, { "text": "The noun generation task in lexical mapping utilizes verb-to-noun and adjective-to-noun mappings, some examples of which are shown in Table 1. The system constructed these mappings on the basis of NOMLEX (Meyers et al., 1998) and CATVAR (Habash and Dorr, 2003),2 with a total of 7,879 verb-to-noun and 11,369 adjective-to-noun mappings.", "cite_spans": [ { "start": 204, "end": 225, "text": "(Meyers et al., 1998)", "ref_id": "BIBREF9" }, { "start": 233, "end": 256, "text": "(Habash and Dorr, 2003)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Nominalization pipeline", "sec_num": "2.3" },
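{ "text": "To make the lexical mapping concrete, the following is a minimal sketch of the kind of verb-to-noun look-up involved. The entries are illustrative examples drawn from Table 1 and the Introduction rather than the actual NOMLEX/CATVAR-derived resource, and the names are our own:

# Toy verb-to-noun mapping; the real resource is derived from NOMLEX and CATVAR.
VERB_TO_NOUNS = {
    'dominate': ['domination', 'dominance', 'dominion', 'dominating'],
    'enter': ['entrance', 'entry'],
    'measure': ['measure', 'measurement'],
}

def noun_candidates(verb):
    # Look-up alone does not suffice: several candidates may come back,
    # and noun generation must still select the one that fits the sentence.
    return VERB_TO_NOUNS.get(verb, [])

For example, noun_candidates('dominate') returns four candidates; choosing among them is the selection task addressed in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nominalization pipeline", "sec_num": "2.3" },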
{ "text": "3 Dataset", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nominalization pipeline", "sec_num": "2.3" }, { "text": "Among the mappings described in Section 2.3, there were 7,380 verb-to-noun and 5,339 adjective-to-noun mappings with at least two noun candidates. We constructed our dataset on the basis of these mappings only, because the others do not require selection from multiple candidates. The ideal dataset for this research would consist of input sentences containing these verbs and adjectives; and, as gold output, the noun candidate selected for use in the nominalized version of these sentences. Unfortunately, no such large-scale dataset exists. One option is to sample sentences in a corpus and ask human experts to nominalize them; this would, however, require considerable manual annotation. To avoid this cost, an alternative is to work backwards: identify sentences containing noun phrases that could plausibly be the result of nominalization (e.g., those in the right column of Table 1). This methodology produces the gold noun candidate automatically. One can then retrieve from the mappings the verb or adjective that would be in the hypothetical sentence before nominalization (e.g., those in the middle column of Table 1). Adopting this methodology, we constructed a challenging dataset by prioritizing verbs and adjectives that are more ambiguous, i.e., those with more noun candidates.", "cite_spans": [], "ref_spans": [ { "start": 879, "end": 886, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 1119, "end": 1126, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Nominalization pipeline", "sec_num": "2.3" }, { "text": "One potential issue is the plausibility of the selected sentences as the nominalized form of an input sentence. To make our dataset as realistic as possible, we required sentences to have one of the three common nominalized forms, corresponding to the three surface forms shown in Table 2(iii):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nominalization pipeline", "sec_num": "2.3" }, { "text": "\u2022 \"the <noun> of <subj> ...\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nominalization pipeline", "sec_num": "2.3" }, { "text": "\u2022 \"<subj>'s <noun> ...\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nominalization pipeline", "sec_num": "2.3" }, { "text": "\u2022 \"<poss> <noun> ...\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nominalization pipeline", "sec_num": "2.3" }, { "text": "where <noun> is the gold noun candidate, <poss> is a possessive pronoun and <subj> is the noun subject of the hypothetical input sentence before nominalization. In addition, we required the target noun, verb and adjective to be tagged as such at least twice in the Brown Corpus (Francis and Ku\u010dera, 1979), to avoid words with rare usage.", "cite_spans": [ { "start": 304, "end": 317, "text": "Ku\u010dera, 1979)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Nominalization pipeline", "sec_num": "2.3" }, { "text": "Our dataset consists of a total of 620 sentences that satisfy the above requirements, including 332 retrieved from the Brown Corpus and 288 from the British Academic Written English (BAWE) Corpus (Nesi, 2008). The sentences contain 73 distinct verbs and 19 distinct adjectives, each with an average of 2.67 noun candidates.", "cite_spans": [ { "start": 196, "end": 208, "text": "(Nesi, 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Nominalization pipeline", "sec_num": "2.3" },
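{ "text": "The retrieval step described above can be sketched as a simple pattern match over the three surface forms. This is only a minimal illustration under our own assumptions (lower-cased plain-text matching, with punctuation handling omitted); the paper does not specify the authors' exact implementation, which additionally checks POS tags against the Brown Corpus:

def has_nominalized_form(sentence, noun):
    # True if the sentence contains one of the three surface forms:
    # the <noun> of ... / <subj>'s <noun> ... / <poss> <noun> ...
    s = ' ' + sentence.lower() + ' '
    n = noun.lower()
    possessives = ['my', 'your', 'his', 'her', 'its', 'our', 'their']
    if ' the ' + n + ' of ' in s:
        return True
    if \"'s \" + n + ' ' in s:
        return True
    return any(' ' + p + ' ' + n + ' ' in s for p in possessives)

Sentences from the Brown and BAWE corpora that pass such a test, with the target words meeting the frequency requirement above, make up the 620 sentences of our dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Nominalization pipeline", "sec_num": "2.3" },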
{ "text": "The noun generation algorithm used by Lee et al. (2018) considers only the word frequency statistics of the noun candidates. It therefore always chooses the same noun candidate for a verb (or adjective), even if the sentential context warrants a different choice due to word sense, register or fluency considerations.", "cite_spans": [ { "start": 38, "end": 55, "text": "Lee et al. (2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4" }, { "text": "To remove this limitation, we use BERT (Devlin et al., 2019), a state-of-the-art neural language model based on the \"Transformer\" architecture (Vaswani et al., 2017). BERT has been shown to be effective in a wide range of natural language processing tasks. The model is bi-directional, i.e., trained to predict the identity of a masked word based on the words both before and after it. We evaluate the suitability of each noun candidate in the verb-to-noun and adjective-to-noun mappings as the masked word.", "cite_spans": [ { "start": 39, "end": 60, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF2" }, { "start": 145, "end": 167, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4" }, { "text": "In each sentence in our dataset, we mask the target noun and ask BERT for its word predictions at the masked position. 3 Among the noun candidates, we identify the highest-ranked one within the first 15,000 word predictions. If none of the candidates appears among these predictions, we create a sentence from each candidate by replacing the masked word with it, and obtain the BERT score for each resulting sentence. We select the candidate that yields the sentence with the highest score.", "cite_spans": [ { "start": 120, "end": 121, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4" },
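{ "text": "A minimal sketch of this selection step follows, assuming the HuggingFace transformers library as an interface to the PyTorch BERT implementation and bert-base-uncased model named in footnote 3. The function name, the restriction to single-wordpiece candidates, and the comments are our own simplifications, not the authors' exact code:

import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

def select_candidate(masked_sentence, candidates, top_k=15000):
    # masked_sentence contains one [MASK] token at the target noun's position.
    inputs = tokenizer(masked_sentence, return_tensors='pt')
    mask_pos = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Vocabulary ids sorted from most to least probable at the masked slot.
    top_ids = torch.argsort(logits, descending=True)[:top_k].tolist()
    best, best_rank = None, top_k
    for cand in candidates:
        cand_id = tokenizer.convert_tokens_to_ids(cand)
        if cand_id == tokenizer.unk_token_id:
            continue  # not a single wordpiece; left to the fallback step
        if cand_id in top_ids and top_ids.index(cand_id) < best_rank:
            best, best_rank = cand, top_ids.index(cand_id)
    # If best is None, the fallback described above applies: substitute each
    # candidate into the sentence and compare the BERT scores of the results.
    return best
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "4" },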
{ "text": "We compared our proposed approach with four baselines, the first two of which are sketched in code below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Spelling This baseline selects the noun candidate that has the smallest letter edit distance from the original verb or adjective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Frequency Following Lee et al. (2018), this baseline selects the noun candidate with the highest unigram frequency count in the Google Web 1T Corpus (Brants and Franz, 2006).", "cite_spans": [ { "start": 20, "end": 37, "text": "Lee et al. (2018)", "ref_id": "BIBREF7" }, { "start": 150, "end": 174, "text": "(Brants and Franz, 2006)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Word2vec We select the noun candidate that is most similar to the original verb or adjective, as estimated by the word2vec model pre-trained on Google News (Mikolov et al., 2013), loaded with Gensim.", "cite_spans": [ { "start": 142, "end": 164, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Word2vec", "sec_num": null }, { "text": "Doc2vec We select the noun candidate that has the highest cosine similarity with the sentence embedding, taking each sentence as a small \"document\". 4", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Doc2vec", "sec_num": null },
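{ "text": "As a minimal illustration, the Spelling and Frequency baselines reduce to the sketch below; the edit distance implementation is standard Levenshtein distance, and the unigram counts are a placeholder argument rather than figures from the Google Web 1T Corpus:

def edit_distance(a, b):
    # Dynamic-programming Levenshtein distance over letters.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def spelling_baseline(word, candidates):
    # Candidate closest in spelling to the original verb or adjective.
    return min(candidates, key=lambda c: edit_distance(word, c))

def frequency_baseline(candidates, unigram_counts):
    # Candidate with the highest unigram frequency count.
    return max(candidates, key=lambda c: unigram_counts.get(c, 0))

Neither function looks at the sentence, which is why these baselines always propose the same noun for a given verb or adjective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" },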
{ "text": "As shown in Table 3, the Frequency baseline achieved higher accuracy than the Spelling baseline and Word2vec. The frequency of a noun candidate appears to serve as a good proxy for its appropriateness. All three approaches, however, ignore the specific context of the sentence, always proposing the same noun for a given verb or adjective. By taking the rest of the sentence into account when predicting the noun candidate, BERT yielded better performance. Consider the verb \"measure\": although frequency favors the noun \"measure\", BERT was able to select \"measurement\" when it collocates with \"quantity\". While Doc2vec also considers the sentential context, it did not perform as well as BERT, likely because the masked language modeling objective offers a better fit for our task.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 19, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Doc2vec", "sec_num": null }, { "text": "Still, BERT's performance was limited by difficulties in recognizing nuanced differences between noun pairs such as \"use\" and \"usage\", or \"occupation\" and \"occupancy\". With access only to a single sentence, it was also unable to choose formal words such as \"continuance\" over \"continuation\" when called for by the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Doc2vec", "sec_num": null }, { "text": "We propose an unsupervised algorithm for noun generation from a verb or adjectival phrase, a task that is essential for automatic nominalization systems for academic writing. This algorithm selects the most appropriate noun candidate with BERT, a state-of-the-art neural language model. Experimental results show that it significantly outperforms baselines based on word frequencies, word2vec and doc2vec.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "This example and the next are both taken from Halliday and Matthiessen (2014).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Verbs-to-be and modal verbs were not treated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used the PyTorch implementation of BERT with the bert-base-uncased model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used the following settings: max epochs = 100, vector size = 20, alpha = 0.025, min count = 1, dm = 1. With word embeddings combined, the best results were obtained with dbow = 0 and dmpv = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was partially funded by a HKSAR UGC Teaching Learning Grant (Meeting the Challenge of Teaching and Learning Language in the University: Enhancing Linguistic Competence and Performance in English and Chinese) in the 2016-19 Triennium; and by an Applied Research Grant (#9667151) from City University of Hong Kong.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Towards Brazilian Portuguese Automatic Text Simplification Systems", "authors": [ { "first": "Sandra", "middle": [], "last": "Alu\u00edsio", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" }, { "first": "T", "middle": [ "A" ], "last": "Pardo", "suffix": "" }, { "first": "E", "middle": [ "G" ], "last": "Maziero", "suffix": "" }, { "first": "R", "middle": [ "P" ], "last": "Fortes", "suffix": "" } ], "year": 2008, "venue": "Proc. 8th ACM Symposium on Document Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sandra Alu\u00edsio, Lucia Specia, T. A. Pardo, E. G. Maziero, and R. P. Fortes. 2008. Towards Brazilian Portuguese Automatic Text Simplification Systems. In Proc. 8th ACM Symposium on Document Engineering.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The Google Web 1T 5-gram Corpus Version 1.1", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Franz", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants and Alex Franz. 2006. The Google Web 1T 5-gram Corpus Version 1.1. In LDC2006T13.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "BERT: Pretraining of Deep Bidirectional Transformers for Language Understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proc. NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pretraining of Deep Bidirectional Transformers for Language Understanding. In Proc.
NAACL-HLT.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Manual of Information to Accompany a Standard Corpus of Present-Day Edited American English, for use with Digital Computers", "authors": [ { "first": "W", "middle": [ "N" ], "last": "Francis", "suffix": "" }, { "first": "H", "middle": [], "last": "Ku\u010dera", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W. N. Francis and H. Ku\u010dera. 1979. Manual of Infor- mation to Accompany a Standard Corpus of Present- Day Edited American English, for use with Digital Computers. Providence, RI. Department of Linguis- tics, Brown University.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Ppdb: The paraphrase database", "authors": [ { "first": "Juri", "middle": [], "last": "Ganitkevitch", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Van Durme", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2013, "venue": "Proc. NAACL-HLT", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. Ppdb: The paraphrase database. In Proc. NAACL-HLT.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Categorial Variation Database for English", "authors": [ { "first": "Nizar", "middle": [], "last": "Habash", "suffix": "" }, { "first": "Bonnie", "middle": [], "last": "Dorr", "suffix": "" } ], "year": 2003, "venue": "Proc. NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nizar Habash and Bonnie Dorr. 2003. A Categorial Variation Database for English. In Proc. NAACL.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Halliday's Introduction to Functional Grammar", "authors": [ { "first": "M", "middle": [ "A K" ], "last": "Halliday", "suffix": "" }, { "first": "C", "middle": [ "M I M" ], "last": "Matthiessen", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. A. K. Halliday and C. M. I. M. Matthiessen. 2014. Halliday's Introduction to Functional Gram- mar. Routledge.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Assisted Nominalization for Academic English Writing", "authors": [ { "first": "John", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Dariush", "middle": [], "last": "Saberi", "suffix": "" }, { "first": "Marvin", "middle": [], "last": "Lam", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Webster", "suffix": "" } ], "year": 2018, "venue": "Proc. Workshop on Intelligent Interactive Systems and Language Generation (2ISNLG)", "volume": "", "issue": "", "pages": "26--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Lee, Dariush Saberi, Marvin Lam, and Jonathan Webster. 2018. Assisted Nominalization for Aca- demic English Writing. In Proc. Workshop on Intel- ligent Interactive Systems and Language Generation (2ISNLG), pages 26-30.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The English Lexical Substitution Task. 
Language Resources and Evaluation", "authors": [ { "first": "Diana", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2009, "venue": "", "volume": "43", "issue": "", "pages": "139--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diana McCarthy and Roberto Navigli. 2009. The English Lexical Substitution Task. Language Re- sources and Evaluation, 43:139-159.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Using NOMLEX to Produce Nominalization Patterns for Information Extraction", "authors": [ { "first": "Adam", "middle": [], "last": "Meyers", "suffix": "" }, { "first": "Catherine", "middle": [], "last": "Macleod", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Leslie", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "Ruth", "middle": [], "last": "Reeves", "suffix": "" } ], "year": 1998, "venue": "Proc. Computational Treatment of Nominals", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Meyers, Catherine Macleod, Roman Yangarber, Ralph Grishman, Leslie Barrett, and Ruth Reeves. 1998. Using NOMLEX to Produce Nominalization Patterns for Information Extraction. In Proc. Com- putational Treatment of Nominals.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Efficient Estimation of Word Representations in Vector Space", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proc. International Conference on Learning Representations (ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Repre- sentations in Vector Space. In Proc. International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Hybrid Simplification using Deep Semantics and Machine Translation", "authors": [ { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Gardent", "suffix": "" } ], "year": 2014, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shashi Narayan and Claire Gardent. 2014. Hybrid Simplification using Deep Semantics and Machine Translation. In Proc. ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "BAWE: an introduction to a new resource", "authors": [ { "first": "Hilary", "middle": [], "last": "Nesi", "suffix": "" } ], "year": 2008, "venue": "Proc. Eighth Teaching and Language Corpora Conference", "volume": "", "issue": "", "pages": "239--285", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hilary Nesi. 2008. BAWE: an introduction to a new resource. In Proc. Eighth Teaching and Language Corpora Conference, page 239-46, Lisbon, Portu- gal. ISLA.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An Architecture for a Text Simplification System", "authors": [ { "first": "Advaith", "middle": [], "last": "Siddharthan", "suffix": "" } ], "year": 2002, "venue": "Proc. 
Language Engineering Conference (LEC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Advaith Siddharthan. 2002. An Architecture for a Text Simplification System. In Proc. Language Engineering Conference (LEC).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Attention is All You Need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "6000--6010", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. Advances in Neural Information Processing Systems, pages 6000-6010.", "links": null } }, "ref_entries": { "TABREF0": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
Verb-to-noun mapping | Example sentence | Nominalized version
dominate \u2192 {dominance, domination, ...} | The British dominated India ... | British domination of India ...
| Older people dominated this neighborhood ... | The dominance of older people in this neighborhood ...
move \u2192 {motion, move, ...} | They moved northward ... | Their move northward ...
| The particle moved irregularly ... | The irregular motion of the particle ...
enter \u2192 {entrance, entry, ...} | The clown entered the stage ... | The clown's entrance to the stage ...
| The immigrants entered the country ... | The entry of the immigrants into the country ...
measure \u2192 {measure, measurement, ...} | Success is measured ... | The measure of success ...
| Blood pressure is measured ... | The measurement of blood pressure ...
" }, "TABREF1": { "text": "Example verb-to-noun mappings with multiple noun candidates (left column), illustrated by sentences with the same verb (middle column) requiring different target nouns (right column) in their nominalized version.", "num": null, "html": null, "type_str": "table", "content": "" }, "TABREF3": { "text": "", "num": null, "html": null, "type_str": "table", "content": "
Table 2: The nominalization pipeline (Lee et al., 2018): (i) syntactic parsing; (ii) lexical mapping, including noun
generation (bolded), which is the focus of this paper; and (iii) sentence generation.
" }, "TABREF5": { "text": "Accuracy of our proposed noun generation algorithm with BERT, compared to baselines.", "num": null, "html": null, "type_str": "table", "content": "" } } } }