{ "paper_id": "O16-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:04:51.761720Z" }, "title": "N-best Parse Rescoring Based on Dependency-Based Word Embeddings", "authors": [ { "first": "Yu-Ming", "middle": [], "last": "Hsieh", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Wei-Yun", "middle": [], "last": "Ma", "suffix": "", "affiliation": {}, "email": "ma@iis.sinica.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Rescoring approaches for parsing aim to re-rank and change the order of parse trees produced by a general parser for a given sentence. The re-ranking quality depends on the precision of the rescoring function. However it is a challenge to design an appropriate function to determine the qualities of parse trees. No matter which method is used, Treebank is a widely used resource in parsing task. Most approaches utilize complex features to re-estimate the tree structures of a given sentence [1, 2, 3]. Unfortunately, sizes of treebanks are generally small and insufficient, which results in a common problem of data sparseness. Learning knowledge from analyzing large-scaled unlabeled data is compulsory and proved useful in the previous works [4, 5, 6]. How to extract useful information from unannotated large scale", "pdf_parse": { "paper_id": "O16-1010", "_pdf_hash": "", "abstract": [ { "text": "Rescoring approaches for parsing aim to re-rank and change the order of parse trees produced by a general parser for a given sentence. The re-ranking quality depends on the precision of the rescoring function. However it is a challenge to design an appropriate function to determine the qualities of parse trees. No matter which method is used, Treebank is a widely used resource in parsing task. Most approaches utilize complex features to re-estimate the tree structures of a given sentence [1, 2, 3]. Unfortunately, sizes of treebanks are generally small and insufficient, which results in a common problem of data sparseness. Learning knowledge from analyzing large-scaled unlabeled data is compulsory and proved useful in the previous works [4, 5, 6]. How to extract useful information from unannotated large scale", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "corpus has been a research issue. Word embeddings have become increasingly popular lately, proving to be valuable as a source of features in a broad range of NLP tasks [7, 8, 9] . The word2vec [10] is among the most widely used word embedding models today. Their success is largely due to an efficient and user-friendly implementation that learns high quality word embeddings from very large corpora. The word2vec learns low dimensional continuous vector representations for words by considering window-based contexts, i.e., context words within some fixed distance of each side of the target words. 
A different type of context is used in dependency-based word embeddings [11, 12, 13], which consider syntactic contexts rather", "cite_spans": [ { "start": 168, "end": 171, "text": "[7,", "ref_id": "BIBREF6" }, { "start": 172, "end": 174, "text": "8,", "ref_id": "BIBREF7" }, { "start": 175, "end": 177, "text": "9]", "ref_id": "BIBREF8" }, { "start": 193, "end": 197, "text": "[10]", "ref_id": "BIBREF9" }, { "start": 666, "end": 670, "text": "[11,", "ref_id": "BIBREF10" }, { "start": 671, "end": 674, "text": "12,", "ref_id": "BIBREF11" }, { "start": 675, "end": 678, "text": "13]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "than the window contexts of word2vec. Bansal et al. [8] and Melamud et al. [11] show the benefits of such modified-context embeddings in dependency parsing. Dependency-based word embeddings can relieve the problem of data sparseness: even when a dependency word pair never occurs in the corpus, its dependency score can still be calculated from the word embeddings [12]. In this paper, we propose a rescoring approach for parsing that combines the original parsing scores with dependency word embedding scores to help determine the best parse tree among the n-best parse trees. There are three main steps in our rescoring approach. First, the parser produces the n-best parse trees with their structural scores; each parsed tree includes words, part-of-speech (PoS) tags, and semantic role labels. Second, we extract word-to-word associations (word dependencies, where a dependency implies a close association between words from either a syntactic or a semantic perspective) from large amounts of automatically parsed data and adopt word2vecf [13] to train dependency-based word embeddings. Third, we build a structural rescoring method that selects the best tree structure from the n-best candidates. We conduct experiments on the standard data sets of the Chinese Treebank. We also study how different types of embeddings influence rescoring, including words, words with semantic role labels, and word senses (concepts). Experimental results show that using semantic role labels in the dependency embeddings yields the best performance, and the final results indicate that our proposed approach outperforms the best parser for Chinese. Furthermore, we compare our approach against the traditional conditional-probability method. 
From the experimental results, the embedding scores relieve the data sparseness problem and yield better results than the traditional approach.", "cite_spans": [ { "start": 225, "end": 228, "text": "[8]", "ref_id": "BIBREF7" }, { "start": 248, "end": 252, "text": "[11]", "ref_id": "BIBREF10" }, { "start": 544, "end": 548, "text": "[12]", "ref_id": "BIBREF11" }, { "start": 1253, "end": 1257, "text": "[13]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Keywords: Word Embeddings, Parsing, Word Dependency, Rescoring.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Efficient stacked dependency parsing by forest reranking", "authors": [ { "first": "K", "middle": [], "last": "Hayashi", "suffix": "" }, { "first": "S", "middle": [], "last": "Kondo", "suffix": "" }, { "first": "Y", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2013, "venue": "Transactions of the ACL", "volume": "1", "issue": "", "pages": "139--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Hayashi, S. Kondo, and Y. Matsumoto, \"Efficient stacked dependency parsing by forest reranking,\" Transactions of the ACL, vol. 1, pp. 139-150, 2013.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Using LTAG based features in parse reranking", "authors": [ { "first": "L", "middle": [], "last": "Shen", "suffix": "" }, { "first": "A", "middle": [], "last": "Sarkar", "suffix": "" }, { "first": "A", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2003, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "89--96", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Shen, A. Sarkar, and A. Joshi, \"Using LTAG based features in parse reranking,\" in Proceedings of EMNLP, pp. 89-96, 2003.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Coarse-to-fine n-best parsing and MaxEnt discriminative reranking", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2005, "venue": "Proceedings of ACL 2005", "volume": "", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "E. Charniak and M. Johnson, \"Coarse-to-fine n-best parsing and MaxEnt discriminative reranking,\" in Proceedings of ACL 2005, pp. 173-180, 2005.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Learning verb-noun relations to improve parsing", "authors": [ { "first": "A", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Second SIGHAN workshop on Chinese Language Processing", "volume": "", "issue": "", "pages": "119--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. 
Wu, \"Learning verb-noun relations to improve parsing,\" in Proceedings of the Second SIGHAN workshop on Chinese Language Processing, pages 119-124, 2003.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Chinese Dependency Parsing with Large Scale Automatically Constructed Case Structures", "authors": [ { "first": "K", "middle": [], "last": "Yu", "suffix": "" }, { "first": "D", "middle": [], "last": "Kawahara", "suffix": "" }, { "first": "S", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics (COLING2008)", "volume": "", "issue": "", "pages": "1049--1056", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Yu, D. Kawahara, and S. Kurohashi, \"Chinese Dependency Parsing with Large Scale Automatically Constructed Case Structures,\" in Proceedings of the 22nd International Conference on Computational Linguistics (COLING2008), pp. 1049-1056, 2008.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Ambiguity Resolution for Vt-N Structures in Chinese", "authors": [ { "first": "Y", "middle": [], "last": "Hsieh", "suffix": "" }, { "first": "J", "middle": [ "S" ], "last": "Chang", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2014, "venue": "Proceedings of EMNLP 2014", "volume": "", "issue": "", "pages": "928--937", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Hsieh, J. S. Chang, and K. Chen, \"Ambiguity Resolution for Vt-N Structures in Chinese,\" in Proceedings of EMNLP 2014, pp. 928-937, 2014.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Word representations: A simple and general method for semi-supervised learning", "authors": [ { "first": "J", "middle": [], "last": "Turian", "suffix": "" }, { "first": "L", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "384--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Turian, L. Ratinov, and Y. Bengio, \"Word representations: A simple and general method for semi-supervised learning,\" in Proceedings of ACL, pp.384-394, 2010.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Tailoring continuous word representations for dependency parsing", "authors": [ { "first": "M", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "K", "middle": [], "last": "Gimpel", "suffix": "" }, { "first": "K", "middle": [], "last": "Livescu", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL 2014", "volume": "", "issue": "", "pages": "809--815", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Bansal, K. Gimpel, and K. Livescu, \"Tailoring continuous word representations for dependency parsing,\" in Proceedings of ACL 2014, pp. 
809-815, 2014.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "R", "middle": [], "last": "Socher", "suffix": "" }, { "first": "A", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "J", "middle": [], "last": "Wu", "suffix": "" }, { "first": "J", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "A", "middle": [ "Y" ], "last": "Ng", "suffix": "" }, { "first": "C", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts, \"Recursive deep models for semantic compositionality over a sentiment treebank,\" in Proceedings of EMNLP, pp. 1631-1642, 2013.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Mikolov, K. Chen, G. Corrado, and J. Dean, \"Efficient estimation of word representations in vector space,\" in Proceedings of ICLR, 2013.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Role of Context Types and Dimensionality in Learning Word Embeddings", "authors": [ { "first": "O", "middle": [], "last": "Melamud", "suffix": "" }, { "first": "D", "middle": [], "last": "Mcclosky", "suffix": "" }, { "first": "S", "middle": [], "last": "Patwardhan", "suffix": "" }, { "first": "M", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL-HLT 2016", "volume": "", "issue": "", "pages": "1030--1040", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Melamud, D. McClosky, S. Patwardhan, and M. Bansal, \"The Role of Context Types and Dimensionality in Learning Word Embeddings,\" in Proceedings of NAACL-HLT 2016, pp. 1030-1040, 2016.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A Simple Word Embedding Model for Lexical Substitution", "authors": [ { "first": "O", "middle": [], "last": "Melamud", "suffix": "" }, { "first": "O", "middle": [], "last": "Levy", "suffix": "" }, { "first": "I", "middle": [], "last": "Dagan", "suffix": "" } ], "year": 2015, "venue": "Proceedings of NAACL-HLT 2015", "volume": "", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Melamud, O. Levy, and I. Dagan, \"A Simple Word Embedding Model for Lexical Substitution,\" in Proceedings of NAACL-HLT 2015, pp. 1-7, 2015.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Dependency-Based Word Embeddings", "authors": [ { "first": "O", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Y", "middle": [], "last": "Goldberg", "suffix": "" } ], "year": 2014, "venue": "Proceedings of ACL 2014", "volume": "", "issue": "", "pages": "302--308", "other_ids": {}, "num": null, "urls": [], "raw_text": "O. Levy and Y. Goldberg, \"Dependency-Based Word Embeddings,\" in Proceedings of ACL 2014, pp. 
302-308, 2014.", "links": null } }, "ref_entries": {} } }
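The three-step rescoring procedure described in the body text can be made concrete with a small sketch. The Python snippet below is only an illustration of the general idea, not the scoring function used in the paper: it assumes dependency-based word vectors and relation-labelled context vectors in word2vec text format (as produced by tools such as word2vecf), represents each candidate tree as a list of (head, dependent, relation) arcs paired with the parser's structural score, and combines the two scores by a simple linear interpolation with a hypothetical weight alpha.

```python
def load_vectors(path):
    """Read word2vec-style text vectors into a dict: token -> list of floats."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) < 3:          # skip the optional "rows dims" header line
                continue
            vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def embedding_score(arcs, word_vecs, ctx_vecs):
    """Sum of head/dependent association scores over all dependency arcs.

    Following the dependency-based embedding setup, the dependent side is
    looked up in the context-vector space, labelled with its relation
    (e.g. "dog/nsubj"), so the dot product gives an association score even
    when the exact word pair never co-occurred in the training corpus.
    """
    score = 0.0
    for head, dep, rel in arcs:
        h = word_vecs.get(head)
        d = ctx_vecs.get(dep + "/" + rel) or ctx_vecs.get(dep)
        if h is not None and d is not None:
            score += dot(h, d)
    return score

def rescore(nbest, word_vecs, ctx_vecs, alpha=0.7):
    """Pick the best tree from the n-best candidates.

    `nbest` is a list of (parser_score, arcs) pairs; alpha weights the
    parser's original structural score against the embedding score.
    """
    def combined(candidate):
        parser_score, arcs = candidate
        return alpha * parser_score + (1.0 - alpha) * embedding_score(
            arcs, word_vecs, ctx_vecs)
    return max(nbest, key=combined)
```

In practice the interpolation weight would be tuned on development data, and the paper's actual combination of parser and embedding scores (including the semantic-role-labelled and sense-based embedding variants it compares) may differ from this linear form.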