{ "paper_id": "S17-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:29:20.646393Z" }, "title": "Semantic Frame Labeling with Target-based Neural Model", "authors": [ { "first": "Yukun", "middle": [], "last": "Feng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Language and Culture University", "location": {} }, "email": "yukunfg@gmail.com" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "", "affiliation": {}, "email": "yudong@blcu.edu.cn" }, { "first": "Jian", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Language and Culture University", "location": {} }, "email": "jianxu1@mail.ustc.edu.cn" }, { "first": "Chunhua", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing Language and Culture University", "location": {} }, "email": "chunhualiu596@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper explores the automatic learning of distributed representations of the target's context for semantic frame labeling with target-based neural model. We constrain the whole sentence as the model's input without feature extraction from the sentence. This is different from many previous works in which local feature extraction of the targets is widely used. This constraint makes the task harder, especially with long sentences, but also makes our model easily applicable to a range of resources and other similar tasks. We evaluate our model on several resources and get the state-of-the-art result on subtask 2 of SemEval 2015 task 15. Finally, we extend the task to word-sense disambiguation task and we also achieve a strong result in comparison to state-of-the-art work.", "pdf_parse": { "paper_id": "S17-1010", "_pdf_hash": "", "abstract": [ { "text": "This paper explores the automatic learning of distributed representations of the target's context for semantic frame labeling with target-based neural model. We constrain the whole sentence as the model's input without feature extraction from the sentence. This is different from many previous works in which local feature extraction of the targets is widely used. This constraint makes the task harder, especially with long sentences, but also makes our model easily applicable to a range of resources and other similar tasks. We evaluate our model on several resources and get the state-of-the-art result on subtask 2 of SemEval 2015 task 15. Finally, we extend the task to word-sense disambiguation task and we also achieve a strong result in comparison to state-of-the-art work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic frame labeling is the task of selecting the correct frame for a given target based on its semantic scene. A target is often called lexical unit which evokes the corresponding semantic frame. The lexical unit can be a verb, adjective or noun. Generally, a semantic frame describes how the lexical unit is used and specifies its characteristic interactions. There are many semantic frame resources, such as FrameNet (Baker et al., 1998) , VerbNet (Schuler, 2006) , Prop-Bank (Palmer et al., 2005) and Corpus Pattern Analysis (CPA) frames (Hanks, 2012) . However, most existing frame resources are manually created, which is time-consuming and expensive. Automatic semantic frame labeling can lead to the development of a broader range of resources. 
* The corresponding author. Early work on semantic frame labeling mainly focused on the FrameNet, PropBank and VerbNet resources, but most of it considers only one resource and relies heavily on feature engineering (e.g., Honnibal and Hawker 2005; Abend et al. 2008) . Recently, there has been work on learning CPA frames based on a new semantic frame resource, the Pattern Dictionary of English Verbs (PDEV) (El Maarouf and Baisa, 2013; El Maarouf et al., 2014) . These two works also rely on hand-crafted features, and both are tested on only 25 verbs. Most works aim at constructing the context representation of the target with explicit rules based on some basic features, e.g., parts of speech (POS), named entities (NE) and dependency relations related to the target. Recently, some deep learning models have been applied with dependency features. Hermann et al. (2014) used the direct dependents and dependency paths to extract the context representation based on distributed word embeddings on English FrameNet. Inspired by that work, Zhao et al. (2016) used a deep feed-forward neural network on Chinese FrameNet with similar features. This differs from our goal: we want to explore an appropriate deep learning architecture that needs no complex rules to construct the context representation. Feng et al. (2016) used a multilayer perceptron (MLP) model on CPA frames without extra feature extraction, but the model is quite simple and depends on a fixed input window, which is inconvenient.", "cite_spans": [ { "start": 423, "end": 443, "text": "(Baker et al., 1998)", "ref_id": "BIBREF2" }, { "start": 454, "end": 469, "text": "(Schuler, 2006)", "ref_id": "BIBREF18" }, { "start": 482, "end": 503, "text": "(Palmer et al., 2005)", "ref_id": "BIBREF15" }, { "start": 545, "end": 558, "text": "(Hanks, 2012)", "ref_id": "BIBREF8" }, { "start": 969, "end": 994, "text": "Honnibal and Hawker 2005;", "ref_id": "BIBREF11" }, { "start": 995, "end": 1013, "text": "Abend et al. 2008)", "ref_id": "BIBREF0" }, { "start": 1161, "end": 1185, "text": "Maarouf and Baisa, 2013;", "ref_id": "BIBREF3" }, { "start": 1186, "end": 1210, "text": "El Maarouf et al., 2014)", "ref_id": "BIBREF4" }, { "start": 1781, "end": 1799, "text": "Zhao et al. (2016)", "ref_id": "BIBREF19" }, { "start": 2046, "end": 2064, "text": "Feng et al. (2016)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction and Related Work", "sec_num": "1" }, { "text": "In this paper, we present a target-based neural model which takes the whole target-specific sentence as input and outputs the semantic frame label. Our goal is to keep the model light, without explicit rules for constructing context representations, so that it is applicable to a range of resources. To cope with variable-length sentences under this constraint, a simple idea is to use recurrent neural networks (RNN) to process the sentences, but the noise caused by irrelevant words in long sentences may hinder learning. In fact, the arguments related to the target are usually distributed near the target: when we write or speak, we focus mainly on arguments in the immediate context of a core word. We therefore use two RNNs, each of which processes one part of the sentence split by the target. The model takes the target as its center, so we call it the target-based recurrent network (TRNN). TRNN itself is not architecturally novel, but to our knowledge no previous work has examined it for this task. We will show that TRNN is quite suitable for learning the context of the target. 
In our model we select long short-term memory (LSTM) networks, a type of RNN designed to avoid vanishing and exploding gradients. The overall structure is illustrated in Figure 1 . w_t is the t-th word in a sentence of length T, and target is the index of the target word. x_t is obtained by mapping w_t to a fixed vector through pre-trained word embeddings. The model has two LSTMs, each of which processes one part of the sentence split by the target. The model can automatically learn the distributed representation of the target's context from w with little manual design.", "cite_spans": [], "ref_spans": [ { "start": 1271, "end": 1279, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction and Related Work", "sec_num": "1" }, { "text": "An introduction to LSTMs can be found in the work of Hochreiter and Schmidhuber (1997) . The parameters of an LSTM are W_{x*}, W_{h*} and b_*, where * stands for one of several internal gates: W_{x*} is the matrix between the input vector x_t and the gates, W_{h*} is the matrix between the LSTM output h_t and the gates, and b_* is the bias vector on the gates. The LSTM is defined by:", "cite_spans": [ { "start": 55, "end": 88, "text": "Hochreiter and Schmidhuber (1997)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Context Representations", "sec_num": "2.1" }, { "text": "i_t = \\sigma(W_{xi} x_t + W_{hi} h_{t-1} + b_i); f_t = \\sigma(W_{xf} x_t + W_{hf} h_{t-1} + b_f); c_t = f_t \\odot c_{t-1} + i_t \\odot \\tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c); o_t = \\sigma(W_{xo} x_t + W_{ho} h_{t-1} + b_o); h_t = o_t \\odot \\tanh(c_t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Representations", "sec_num": "2.1" }, { "text": "where \\sigma is the sigmoid function and \\odot denotes element-wise multiplication. i_t, f_t, c_t and o_t are the input gates, forget gates, cell states and output gates, respectively. In our model, the two LSTMs share the same parameters. Finally, the target's context representation cr is the sum of the outputs of the two LSTMs:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Representations", "sec_num": "2.1" }, { "text": "cr = h_{target-1} + h_{target}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Representations", "sec_num": "2.1" }, { "text": "The dimension of cr is determined by the number of hidden units in the LSTM, a hyperparameter of our model, and is usually much lower than the dimension of a single word vector. Here we give some intuition behind the above formulas. The gradients from the last layer flow equally into the (target-1)-th LSTM box and the target-th LSTM box, and the two flows then proceed toward both ends of the sentence. As is common in deep learning models, the gradients usually become ineffective as the depth of the flow increases, especially when the sentence is very long, so words far from the target receive a weaker gradient signal than words near it. As a whole, more data are usually required to learn arguments far from the target than arguments near it. If the real arguments are distributed near the target, this model is well suited, as its architecture is designed to take care of the target's local context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Representations", "sec_num": "2.1" }, { "text": "We use a Softmax layer as the output layer on top of the context representation. The output layer computes a probability distribution over the semantic frame labels. 
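To make the architecture concrete, here is a minimal sketch of TRNN in PyTorch. This is our illustration only: the paper does not name an implementation framework, and the class name, variable names and default sizes are placeholders. A single nn.LSTM is shared by both passes, which realizes the parameter sharing described above; the left pass reads w_1..w_{target-1} forward, the right pass reads w_T..w_{target} backward, and the two final hidden states are summed into cr:

```python
import torch
import torch.nn as nn

class TRNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=55, n_frames=10):
        super().__init__()
        # In practice the embedding table is initialized from pre-trained
        # GloVe vectors and kept static (Section 3.2); here it is random.
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)  # shared by both passes
        self.out = nn.Linear(hidden, n_frames)  # softmax layer (returns logits)

    def forward(self, word_ids, target_idx):
        # word_ids: LongTensor of shape (T,); target_idx: position of the target.
        x = self.emb(word_ids).unsqueeze(0)          # (1, T, emb_dim)
        left = x[:, :target_idx, :]                  # w_1 .. w_{target-1}, forward
        right = x[:, target_idx:, :].flip(dims=[1])  # w_T .. w_target, reversed
        cr = torch.zeros(1, self.out.in_features)    # context representation
        for part in (left, right):
            if part.size(1) > 0:                     # skip an empty side
                _, (h, _) = self.lstm(part)          # final hidden state of this pass
                cr = cr + h[-1]                      # cr = h_{target-1} + h_{target}
        return self.out(cr)                          # scores over frame labels

# Usage on a toy sentence of 5 word ids with the target at position 2.
# Cross-entropy over the softmax output is exactly the negative log
# likelihood minimized below.
model = TRNN(vocab_size=20000)
logits = model(torch.tensor([4, 17, 9, 3, 11]), target_idx=2)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([5]))
```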
During training, the cost we minimize is the negative log likelihood of the model:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output Layer", "sec_num": "2.2" }, { "text": "L = -\\sum_{m=1}^{M} \\log p_{t_m}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output Layer", "sec_num": "2.2" }, { "text": "Here M is the number of training sentences, t_m is the index of the correct frame label for the m-th sentence, and p_{t_m} is the probability the model assigns to that label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output Layer", "sec_num": "2.2" }, { "text": "3 Experiments", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output Layer", "sec_num": "2.2" }, { "text": "We divide all the datasets into two types: per-target and non per-target. Per-target semantic frame resources define a different set of frame labels for each target, and we train one model per target; in non per-target resources, different targets may share some semantic frame labels, and we train a single model for the whole resource. We use the Semlink project to create our datasets 1 . Semlink aims to link different lexical resources together via a set of mappings; we use its corpus, which annotates FrameNet and PropBank frames on the WSJ section of the Penn Treebank. Another resource we use is PDEV 2 , which is quite new and provides CPA-frame-annotated examples on the British National Corpus. All the original instances are sentence-tokenized and punctuation is removed. The details of creating the datasets are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "\u2022 FrameNet: Non per-target type. We obtain FrameNet-annotated instances through Semlink. If a FrameNet frame label contains more than 300 instances, we divide them proportionally into 70%, 20% and 10%, and accumulate the three parts over all frame labels to create the training, test and validation sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "\u2022 PropBank: Per-target type. The creation process is the same as for FrameNet, except that we finally obtain training, test and validation sets for each target and the cutoff is set to 70 instead of 300.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "\u2022 PDEV: Same as PropBank but with the cutoff set to 100 instead of 70.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "Since the performance of our model is largely determined by the amount of training data, we choose the cutoffs above empirically to ensure enough instances for each label. 
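As a summary of this procedure, here is a short sketch (our reading of the steps above; the shuffling step and all names are our own assumptions, since the paper does not state how instances are ordered before the proportional split):

```python
import random
from collections import defaultdict

def split_by_label(instances, cutoff=300, ratios=(0.7, 0.2, 0.1)):
    # Split (sentence, label) pairs 70/20/10 per frame label, keeping only
    # labels with more than `cutoff` instances (FrameNet uses 300; PropBank
    # and PDEV apply the same procedure per target with cutoffs 70 and 100).
    by_label = defaultdict(list)
    for sentence, label in instances:
        by_label[label].append((sentence, label))
    train, test, valid = [], [], []
    for label, items in by_label.items():
        if len(items) <= cutoff:          # too few instances for this label
            continue
        random.shuffle(items)             # assumption: random order before splitting
        n_tr = int(len(items) * ratios[0])
        n_te = int(len(items) * ratios[1])
        train += items[:n_tr]             # 70%
        test += items[n_tr:n_tr + n_te]   # 20%
        valid += items[n_tr + n_te:]      # 10%
    return train, test, valid
```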
Summary statistics of the above datasets are given in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 205, "end": 212, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "We compare our model with the following baselines:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models and Training", "sec_num": "3.2" }, { "text": "[Table 1 examples (frames from FrameNet; target words in bold in the original): Activity_ongoing: 'It said it has taken measures to continue shipments during the work stoppage.' / 'But the Army Corps of Engineers expects the river level to continue falling this month.'; Process_continue: 'The oil industry's middling profits could persist through the rest of the year.'; header example: 'In Moscow they kept asking us things like why do you make 15 different corkscrews.']", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models and Training", "sec_num": "3.2" }, { "text": "\u2022 MF: The most frequent (MF) method selects, for each instance in the test set, the most frequent semantic frame label seen in the training instances. MF is actually a strong baseline for per-target datasets because we observed that most targets have one main frame label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models and Training", "sec_num": "3.2" }, { "text": "\u2022 Target-Only: For the FrameNet dataset we use the Target-Only method: if the target of a test instance has a unique frame label in the training data, we assign that label; if the target has multiple frame labels in the training data, we select the most frequent of them; if the target is not seen in the training data, we select the most frequent label in the whole training data. This baseline is designed for FrameNet because we observed that each frame label has a set of targets but only a few targets have multiple frame labels, so it may be easy to predict the frame label for a test instance from the target alone (a short sketch of this baseline is given below).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models and Training", "sec_num": "3.2" }, { "text": "\u2022 LSTM: The standard LSTM model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models and Training", "sec_num": "3.2" }, { "text": "\u2022 MaxEnt: The Maximum Entropy model. We use the Stanford CoreNLP module 3 to extract features for the MaxEnt toolkit 4 . All dependents related to the target, their POS tags, dependency relations, lemmas and NE tags, as well as the target itself, are extracted as features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models and Training", "sec_num": "3.2" }, { "text": "The number of iterations for MaxEnt is chosen on the validation set. For simplicity, we set the learning rate to 1.0 for TRNN and LSTM. The number of hidden units is tuned on validation data over the values {35, 45, 55} for per-target resources and {80, 100, 120} for non per-target resources. We use publicly available 300-dimensional word vectors trained with the GloVe model (Pennington et al., 2014) on Wikipedia and Gigaword. Words that do not appear in the vector model have their vectors set to zero. We train these models by stochastic gradient descent with minibatches. The minibatch size is set to 10 for per-target resources and 50 for non per-target resources. 
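The Target-Only baseline described above reduces to a small lookup; a minimal sketch (function and variable names are ours):

```python
from collections import Counter, defaultdict

def target_only(train_pairs, test_targets):
    # Predict a frame label from the target word alone: the target's most
    # frequent training label (which is the unique label when there is only
    # one), or the globally most frequent label for unseen targets.
    per_target = defaultdict(Counter)  # target -> counts of its frame labels
    overall = Counter()                # frame label counts over all training data
    for target, label in train_pairs:
        per_target[target][label] += 1
        overall[label] += 1
    fallback = overall.most_common(1)[0][0]
    return [per_target[t].most_common(1)[0][0] if t in per_target else fallback
            for t in test_targets]
```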
We keep the word vectors static, since no obvious improvement was observed from updating them. Training stops when the zero-one loss over the training data reaches zero.", "cite_spans": [ { "start": 413, "end": 438, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Models and Training", "sec_num": "3.2" }, { "text": "The results on the above datasets are shown in Table 3 . Target-Only gets very high scores on the FrameNet dataset, which has 55 targets with multiple frame labels in the training data; these targets have 1981 instances in the test data. We get a 0.769 F-score on these instances and a 0.393 F-score on 64 unseen targets with 77 test instances. This is close to the extreme case in which the main feature for the correct frame is the target itself. Despite this simple fact, the standard LSTM performs very badly on FrameNet. The main reason is that sentences in the FrameNet dataset are too long, and the standard LSTM cannot learn well due to the large number of irrelevant words that appear in long sentences. To show this, we tuned a truncation window for the original FrameNet sentences and obtained a best size of 5 on validation data, i.e., two words on each side of the target; with this truncation we get a 0.958 F-score on the FrameNet test data, which is still lower than TRNN on full sentences. As for the PropBank and PDEV datasets, we train one model for each target, so the final F-score is the average over all targets. However, the number of training instances per target is limited. TRNN usually does not perform well when it tries to learn frames that involve many different concepts, especially when the frame has few training instances. Consider sentence 4 of Table 4 as an example: it is difficult for TRNN to learn what an 'Activity' is in the correct frame, because this concept is huge and TRNN may need a lot of data to learn something related to it, yet the correct frame has only 6 instances in our training data. The second reason for TRNN's failures is lack of knowledge due to unseen words in the test data. Sentence 1 of Table 4 shows TRNN making the right decision: it has seen the word 'cow' in the training data and knows this word belongs to the concept 'Animate or Plant' in the correct frame. But TRNN does not know the word 'Elegans' in sentence 3, so it falls back to the most frequent frame seen in the training data. In many cases, however, unseen words can be captured by well-trained word embeddings, as sentence 2 shows, where 'ducks', 'chickens' and 'geese' are all unseen words. Table 3 : Results on several semantic frame resources. The format of each cell is \"F-score/hidden units\" for TRNN and LSTM and \"F-score/iterations\" for the MaxEnt toolkit.", "cite_spans": [], "ref_spans": [ { "start": 41, "end": 48, "text": "Table 3", "ref_id": null }, { "start": 1363, "end": 1371, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 1741, "end": 1748, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 2245, "end": 2252, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "3.3" }, { "text": "Corpus Pattern Analysis (CPA) is a new technique for identifying the main patterns in which a word is used in text; it is currently being used to build the PDEV resource mentioned above. It is also a shared task in SemEval-2015 task 15 (Baisa et al., 2015) , which is divided into three subtasks: CPA parsing, CPA clustering and CPA lexicography. We introduce only the first two, which are relevant here. 
CPA parsing aims at identifying the arguments of the target and tagging them with predefined semantic meanings; CPA clustering then clusters the instances to obtain CPA frames based on the results of CPA parsing. However, the results of the first step seem unpromising (Feng et al., 2015; Mills and Levow, 2015; Elia, 2016) , which affects the process of obtaining CPA frames. Since our model can be applied to sentence-level input without feature extraction, we can directly evaluate ", "cite_spans": [ { "start": 242, "end": 262, "text": "(Baisa et al., 2015)", "ref_id": "BIBREF1" }, { "start": 659, "end": 678, "text": "(Feng et al., 2015;", "ref_id": "BIBREF6" }, { "start": 679, "end": 701, "text": "Mills and Levow, 2015;", "ref_id": "BIBREF14" }, { "start": 702, "end": 713, "text": "Elia, 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "CPA Experiment", "sec_num": "3.4" }, { "text": "Finally, we choose the Word Sense Disambiguation (WSD) task to extend our experiments. As our WSD benchmark, we choose the English Lexical Sample task of SemEval-2007 task 17 (Pradhan et al., 2007) . We use cross-validation on the training set, and we observe that the model performs better when we update the word vectors, which differs from the preceding experimental setup. The number of hidden units is set to 55. The result is in Table 6 . Rows 4 to 6 come from Iacobacci et al. (2016) . They integrate word embeddings into the IMS (It Makes Sense) system (Zhong and Ng, 2010) , which uses a support vector machine as its classifier based on standard WSD features, and they obtain the best result; they compute the word representation with an exponential decay function, likewise designed to give more importance to close context, but their method requires manually choosing the window size around the target word as well as a parameter of the exponential decay function. With word vectors only in both cases, our model is comparable to the sixth row.", "cite_spans": [ { "start": 159, "end": 171, "text": "SemEval-2007", "ref_id": null }, { "start": 172, "end": 201, "text": "task 17 (Pradhan et al., 2007", "ref_id": null }, { "start": 478, "end": 501, "text": "Iacobacci et al. (2016)", "ref_id": "BIBREF12" }, { "start": 569, "end": 589, "text": "(Zhong and Ng, 2010)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 437, "end": 444, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Word Sense Disambiguation Experiment", "sec_num": "3.5" }, { "text": "In this paper, we describe an end-to-end neural model for target-specific semantic frame labeling. Without explicit rules constructed to fit specific resources, our model can be easily applied to a range of semantic frame resources and similar tasks. In the future, non-English semantic frame resources could be considered to extend the coverage of our model, and our model could integrate the best features explored in state-of-the-art work to see how much further improvement can be made.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "The current version of the Semlink project has some problems locating the correct positions of targets in the WSJ section of the Penn Treebank. 
Instead, we use annotations of PropBank corpus, also annotated in WSJ section of Penn Treebank, to index targets.2 http://pdev.org.uk/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://stanfordnlp.github.io/CoreNLP/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/lzhang10/maxent", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the anonymous reviewers and Li Zhao for their helpful suggestions and comments. The work was supported by the National High Technology Development 863 Program of China (No.2015AA015409).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A supervised algorithm for verb disambiguation into verbnet classes", "authors": [ { "first": "Omri", "middle": [], "last": "Abend", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Rappoport", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omri Abend, Roi Reichart, and Ari Rappoport. 2008. A supervised algorithm for verb disambiguation into verbnet classes. In Proceedings of the 22nd Inter- national Conference on Computational Linguistics- Volume 1. Association for Computational Linguis- tics, pages 9-16.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Semeval-2015 task 15: A cpa dictionaryentry-building task", "authors": [ { "first": "V\u00edt", "middle": [], "last": "Baisa", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "Silvie", "middle": [], "last": "Cinkova", "suffix": "" }, { "first": "Ismail", "middle": [ "El" ], "last": "Maarouf", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Kilgarriff", "suffix": "" }, { "first": "Octavian", "middle": [], "last": "Popescu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "315--324", "other_ids": {}, "num": null, "urls": [], "raw_text": "V\u00edt Baisa, Jane Bradbury, Silvie Cinkova, Ismail El Maarouf, Adam Kilgarriff, and Octavian Popes- cu. 2015. Semeval-2015 task 15: A cpa dictionary- entry-building task. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Association for Computational Linguistics, Denver, Colorado, pages 315-324. http://www.aclweb.org/anthology/S15-2053.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The berkeley framenet project", "authors": [ { "first": "F", "middle": [], "last": "Collin", "suffix": "" }, { "first": "", "middle": [], "last": "Baker", "suffix": "" }, { "first": "J", "middle": [], "last": "Charles", "suffix": "" }, { "first": "John B", "middle": [], "last": "Fillmore", "suffix": "" }, { "first": "", "middle": [], "last": "Lowe", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 17th international conference on Computational linguistics", "volume": "1", "issue": "", "pages": "86--90", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. 
In Proceed- ings of the 17th international conference on Compu- tational linguistics-Volume 1. Association for Com- putational Linguistics, pages 86-90.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Automatic classification of patterns from the pattern dictionary of english verbs", "authors": [ { "first": "Isma\u0131l", "middle": [], "last": "El Maarouf", "suffix": "" }, { "first": "V\u0131t", "middle": [], "last": "Baisa", "suffix": "" } ], "year": 2013, "venue": "Joint Symposium on Semantic Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Isma\u0131l El Maarouf and V\u0131t Baisa. 2013. Automatic classification of patterns from the pattern dictionary of english verbs. In Joint Symposium on Semantic Processing..", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Disambiguating verbs by collocation: Corpus lexicography meets natural language processing", "authors": [ { "first": "Ismail", "middle": [ "El" ], "last": "Maarouf", "suffix": "" }, { "first": "Jane", "middle": [], "last": "Bradbury", "suffix": "" }, { "first": "V\u00edt", "middle": [], "last": "Baisa", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 2014, "venue": "LREC", "volume": "", "issue": "", "pages": "1001--1006", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ismail El Maarouf, Jane Bradbury, V\u00edt Baisa, and Patrick Hanks. 2014. Disambiguating verbs by col- location: Corpus lexicography meets natural lan- guage processing. In LREC. pages 1001-1006.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Syntactic and semantic classification of verb arguments using dependency-based and rich semantic features", "authors": [ { "first": "Francesco", "middle": [ "Elia" ], "last": "", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francesco Elia. 2016. Syntactic and semantic classi- fication of verb arguments using dependency-based and rich semantic features. arXiv preprint arX- iv:1604.05747 .", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Blcunlp: Corpus pattern analysis for verbs based on dependency chain", "authors": [ { "first": "Yukun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Qiao", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "325--328", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yukun Feng, Qiao Deng, and Dong Yu. 2015. Bl- cunlp: Corpus pattern analysis for verbs based on dependency chain. In Proceedings of the 9th International Workshop on Semantic Evalua- tion (SemEval 2015). Association for Computation- al Linguistics, Denver, Colorado, pages 325-328. http://www.aclweb.org/anthology/S15-2054.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An endto-end approach to learning semantic frames with feedforward neural network", "authors": [ { "first": "Yukun", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Yipei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the NAACL Student Research Workshop", "volume": "", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yukun Feng, Yipei Xu, and Dong Yu. 2016. 
An end- to-end approach to learning semantic frames with feedforward neural network. In Proceedings of the NAACL Student Research Workshop. Association for Computational Linguistics, San Diego, California, pages 1-7. http://www.aclweb.org/anthology/N16- 2001.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "How people use words to make meanings: Semantic types meet valencies. Input", "authors": [ { "first": "Patrick", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 2012, "venue": "Process and Product: Developments in Teaching and Language Corpora", "volume": "", "issue": "", "pages": "54--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Hanks. 2012. How people use words to make meanings: Semantic types meet valencies. Input, Process and Product: Developments in Teaching and Language Corpora pages 54-69.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semantic frame identification with distributed word representations", "authors": [ { "first": "Karl", "middle": [], "last": "Moritz Hermann", "suffix": "" }, { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1448--1458", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame iden- tification with distributed word representations. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers). Association for Computational Lin- guistics, Baltimore, Maryland, pages 1448-1458. http://www.aclweb.org/anthology/P/P14/P14-1136.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735-1780.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Identifying framenet frames for verbs from a real-text corpus", "authors": [ { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Hawker", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Australasian Language Technology Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Honnibal and Tobias Hawker. 2005. Identi- fying framenet frames for verbs from a real-text cor- pus. 
In Proceedings of Australasian Language Tech- nology Workshop.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Embeddings for word sense disambiguation: An evaluation study", "authors": [ { "first": "Ignacio", "middle": [], "last": "Iacobacci", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Taher Pilehvar", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "897--907", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ignacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2016. Embeddings for word sense disambiguation: An evaluation study. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics. pages 897-907.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Combining lexical resources: mapping between propbank and verbnet", "authors": [ { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" }, { "first": "Szu-Ting", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 7th International Workshop on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Loper, Szu-Ting Yi, and Martha Palmer. 2007. Combining lexical resources: mapping between propbank and verbnet. In Proceedings of the 7th In- ternational Workshop on Computational Linguistics, Tilburg, the Netherlands.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Cmills: Adapting semantic role labeling features to dependency parsing", "authors": [ { "first": "Chad", "middle": [], "last": "Mills", "suffix": "" }, { "first": "Gina-Anne", "middle": [], "last": "Levow", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "433--437", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chad Mills and Gina-Anne Levow. 2015. Cmill- s: Adapting semantic role labeling features to dependency parsing. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015). Association for Computational Linguistics, Denver, Colorado, pages 433-437. http://www.aclweb.org/anthology/S15-2075.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "The proposition bank: An annotated corpus of semantic roles", "authors": [ { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Kingsbury", "suffix": "" } ], "year": 2005, "venue": "Computational linguistics", "volume": "31", "issue": "1", "pages": "71--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. 
Computational linguistics 31(1):71- 106.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "14", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In EMNLP. volume 14, pages 1532- 1543.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Semeval-2007 task 17: English lexical sample, srl and all words", "authors": [ { "first": "Edward", "middle": [], "last": "Sameer S Pradhan", "suffix": "" }, { "first": "Dmitriy", "middle": [], "last": "Loper", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Dligach", "suffix": "" }, { "first": "", "middle": [], "last": "Palmer", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Workshop on Semantic Evaluations", "volume": "", "issue": "", "pages": "87--92", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer S Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. Semeval-2007 task 17: English lexical sample, srl and all words. In Pro- ceedings of the 4th International Workshop on Se- mantic Evaluations. Association for Computational Linguistics, pages 87-92.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "VerbNet: A Broad-Coverage, Comprehensive Verb Lexicon", "authors": [ { "first": "Karin Kipper", "middle": [], "last": "Schuler", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karin Kipper Schuler. 2006. VerbNet: A Broad-Coverage, Comprehensive Verb Lex- icon. Ph.D. thesis, University of Penn- sylvania. http://verbs.colorado.edu/ kip- per/Papers/dissertation.pdf.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Chinese frame identification with deep neural network", "authors": [ { "first": "Hongyan", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Ru", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Liwen", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hongyan Zhao, Ru Li, Sheng Zhang, and Liwen Zhang. 2016. Chinese frame identification with deep neural network 30(6):75.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "It makes sense: A wide-coverage word sense disambiguation system for free text", "authors": [ { "first": "Zhi", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the ACL 2010 System Demonstrations", "volume": "", "issue": "", "pages": "78--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of the ACL 2010 Sys- tem Demonstrations. 
Association for Computational Linguistics, pages 78-83.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Architecture of TRNN with an example sentence whose target word is in bold.", "num": null }, "TABREF1": { "type_str": "table", "num": null, "content": "
 | FrameNet | PropBank | PDEV
Per-target | No | 153 targets | 407 targets
Train | 41206 | 31212 (204) | 152218 (374)
Test | 11761 | 8568 (56) | 42328 (104)
Valid. | 5871 | 4131 (27) | 20350 (50)
Frames | 33 | 443 (2.89) | 2197 (5.39)
Words/sent. | 23 | 23 | 12
(Values in parentheses are per-target averages.)
", "text": "Non per-target examples. Frames are from FrameNet and the target words are in bold.", "html": null }, "TABREF2": { "type_str": "table", "num": null, "content": "", "text": "", "html": null }, "TABREF4": { "type_str": "table", "num": null, "content": "
ID | Sentence | Frame Prediction | True Frame
1 | One of the farmer's cows had died of BSE raising fears of cross-infection... | Same as true frame | Animate or Plant dies
2 | One of the farmer's ducks/chickens/geese had died of BSE raising fears of cross-infection... | Same as true frame | Animate or Plant dies
3 | Elegans also in central America die of damping off as a function of distance | Human dies ((Time Point)(Location)(Causation)(at Number or at the age of or at birth or early age)) | Animate or Plant dies
4 | Indeed, the MEC does not advise the use of any insecticidal shampoo for... | Human 1 or Institution 1 advises Human 2 or Institution 2 to-infinitive | Human or Institution advises Activity
", "text": "Indeed, the MEC does not advise the use of any insecticidal shampoo for...", "html": null }, "TABREF5": { "type_str": "table", "num": null, "content": "
our model on CPA clustering. Unfortunately, the dataset provided for CPA clustering is a per-target resource from our model's perspective, and the targets in the training and test sets are not the same. Since this task is not limited to using extra resources, we use the training set of FrameNet, a non per-target resource, described in Section 3.1 to solve this problem. The hyperparameters are the same as before. CPA clustering is evaluated by the B-cubed F-score, a metric for clustering problems, so we do not need to convert FrameNet frame labels to CPA frame labels. The result is in Table 5. All the models are supervised except for the baseline and DULUTH. Feng et al. (2016) used an MLP to classify fixed-length local context of the target based on distributed word embeddings. But the representation of the target's context is simply constructed by concatenating word embeddings, the length of the local context has to be chosen manually, and the MLP may fail to train or predict well when some key words fall outside its input window.
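For reference, a minimal sketch of the B-cubed F-score used here (a standard formulation of the metric; the flat-list input format and names are our own choices):

```python
def b_cubed_f(gold, pred):
    # B-cubed F-score: average per-item precision and recall, where gold[i]
    # is item i's gold class and pred[i] is its predicted cluster id.
    n = len(gold)
    precision = recall = 0.0
    for i in range(n):
        cluster = [j for j in range(n) if pred[j] == pred[i]]  # i's cluster
        klass = [j for j in range(n) if gold[j] == gold[i]]    # i's gold class
        correct = sum(1 for j in cluster if gold[j] == gold[i])
        precision += correct / len(cluster)
        recall += correct / len(klass)
    precision, recall = precision / n, recall / n
    return 2 * precision * recall / (precision + recall)
```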
System | B-cubed F-score
BOB90 (best in SemEval 2015) | 0.741
SemEval 2015 baseline | 0.588
DULUTH | 0.525
Feng et al. (2016) | 0.70
This paper | 0.763
", "text": "Case study for CPA frames. The target words are in bold.", "html": null }, "TABREF6": { "type_str": "table", "num": null, "content": "", "text": "Results on Microcheck dataset of CPA clustering.", "html": null }, "TABREF8": { "type_str": "table", "num": null, "content": "
", "text": "Result on Lexical Sample task of SemEval-2007 task 17", "html": null } } } }