{ "paper_id": "S19-1005", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:47:33.489407Z" }, "title": "Multi-Label Transfer Learning for Multi-Relational Semantic Similarity", "authors": [ { "first": "Li", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Michigan", "location": {} }, "email": "" }, { "first": "Steven", "middle": [ "R" ], "last": "Wilson", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Michigan", "location": {} }, "email": "steverw@umich.edu" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Michigan", "location": {} }, "email": "mihalcea@umich.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Multi-relational semantic similarity datasets define the semantic relations between two short texts in multiple ways, e.g., similarity, relatedness, and so on. Yet, all the systems to date designed to capture such relations target one relation at a time. We propose a multi-label transfer learning approach based on LSTM to make predictions for several relations simultaneously and aggregate the losses to update the parameters. This multi-label regression approach jointly learns the information provided by the multiple relations, rather than treating them as separate tasks. Not only does this approach outperform the single-task approach and the traditional multi-task learning approach, it also achieves state-of-the-art performance on all but one relation of the Human Activity Phrase dataset.", "pdf_parse": { "paper_id": "S19-1005", "_pdf_hash": "", "abstract": [ { "text": "Multi-relational semantic similarity datasets define the semantic relations between two short texts in multiple ways, e.g., similarity, relatedness, and so on. Yet, all the systems to date designed to capture such relations target one relation at a time. We propose a multi-label transfer learning approach based on LSTM to make predictions for several relations simultaneously and aggregate the losses to update the parameters. This multi-label regression approach jointly learns the information provided by the multiple relations, rather than treating them as separate tasks. Not only does this approach outperform the single-task approach and the traditional multi-task learning approach, it also achieves state-of-the-art performance on all but one relation of the Human Activity Phrase dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic similarity, or relating short texts or sentences 1 in a semantic space -be those phrases, sentences or short paragraphs -is a task that requires systems to determine the degree of equivalence between the underlying semantics of the two sentences. Although relatively easy for humans, this task remains one of the most difficult natural language understanding problems. The task has been receiving significant interest from the research community. 
For instance, from 2012 to 2017, the International Workshop on Semantic Evaluation (SemEval) has been holding the Semantic Textual Similarity (STS) shared tasks (Agirre et al., 2012 (Agirre et al., , 2013b (Agirre et al., , 2015 (Agirre et al., , 2016 Cer et al., 2017) , dedicated to tackling this problem, with close to 100 team submissions each year.", "cite_spans": [ { "start": 617, "end": 637, "text": "(Agirre et al., 2012", "ref_id": "BIBREF3" }, { "start": 638, "end": 661, "text": "(Agirre et al., , 2013b", "ref_id": "BIBREF4" }, { "start": 662, "end": 684, "text": "(Agirre et al., , 2015", "ref_id": "BIBREF1" }, { "start": 685, "end": 707, "text": "(Agirre et al., , 2016", "ref_id": "BIBREF2" }, { "start": 708, "end": 725, "text": "Cer et al., 2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In some semantic similarity datasets, an example consists of a sentence pair and a single annotated similarity score, while in others, each pair 1 In this work, we do not consider word level similarity. comes with multiple annotations. We refer to the latter as multi-relational semantic similarity tasks. The inclusion of multiple annotations per example is motivated by the fact that there can be different relations, namely different types of similarity between two sentences. So far, these relations have been treated as separate tasks, where a model trains and tests on one relation at a time while ignoring the rest. However, we hypothesize that each relation may contain useful information about the others, and training on only one relation inevitably neglects some relevant information. Thus, training jointly on multiple relations may improve performance on one or more relations.", "cite_spans": [ { "start": 145, "end": 146, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a joint multi-label transfer learning setting based on LSTM, and show that it can be an effective solution for the multi-relational semantic similarity tasks. Due to the small size of multirelational semantic similarity datasets and the recent success of LSTM-based sentence representations (Wieting and Gimpel, 2018; Conneau et al., 2017) , the model is pre-trained on a large corpus and transfer learning is applied using fine-tuning. In our setting, the network is jointly trained on multiple relations by outputting multiple predictions (one for each relation) and aggregating the losses during back-propagation. This is different from the traditional multi-task learning setting where the model makes one prediction at a time, switching between the tasks. 
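For concreteness, one training step under this multi-label setting can be sketched as follows. This is a minimal PyTorch-style illustration rather than our released implementation: the module and variable names, the toy LSTM encoder, and the five score bins are assumptions made for the example, with the pre-trained sentence encoder of Section 2.2 taking the encoder's place in our experiments.

import torch
import torch.nn as nn

encoder = nn.LSTM(input_size=300, hidden_size=512, batch_first=True)  # stand-in sentence encoder
head_L = nn.Linear(2 * 512, 5)  # output layer for relation L (5 score bins)
head_R = nn.Linear(2 * 512, 5)  # output layer for relation R
params = list(encoder.parameters()) + list(head_L.parameters()) + list(head_R.parameters())
optimizer = torch.optim.SGD(params, lr=0.1)

def embed(batch):
    # sentence embedding: here simply the mean over LSTM hidden states
    states, _ = encoder(batch)
    return states.mean(dim=1)

def pair_features(u, v):
    # element-wise absolute difference and product of the two sentence embeddings
    return torch.cat([torch.abs(u - v), u * v], dim=1)

# toy batch: 16 sentence pairs of 10 tokens with 300-d word vectors,
# plus one gold label distribution per relation
sent_a, sent_b = torch.randn(16, 10, 300), torch.randn(16, 10, 300)
gold_L, gold_R = torch.softmax(torch.randn(16, 5), 1), torch.softmax(torch.randn(16, 5), 1)

feats = pair_features(embed(sent_a), embed(sent_b))
loss_L = nn.functional.kl_div(torch.log_softmax(head_L(feats), 1), gold_L, reduction="batchmean")
loss_R = nn.functional.kl_div(torch.log_softmax(head_R(feats), 1), gold_R, reduction="batchmean")

optimizer.zero_grad()
(loss_L + loss_R).backward()  # multi-label: sum both losses, one backward pass
optimizer.step()              # shared encoder and both heads are updated together

A multi-task baseline would instead run two separate forward-backward passes, computing loss_L and loss_R one at a time and updating only the parameters involved in each pass.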
We treat the multi-task setting and the single-task setting (i.e., where a separate model is learned for each relation) as baselines, and show that the multi-label setting outperforms them in many cases, achieving state-of-the-art performance on all but one relation of the Human Activity Phrase dataset (Wilson and Mihalcea, 2017) .", "cite_spans": [ { "start": 302, "end": 328, "text": "(Wieting and Gimpel, 2018;", "ref_id": "BIBREF12" }, { "start": 329, "end": 350, "text": "Conneau et al., 2017)", "ref_id": "BIBREF8" }, { "start": 1076, "end": 1103, "text": "(Wilson and Mihalcea, 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In addition to success on multi-relational semantic similarity tasks, the multi-label transfer learning setting that we propose can easily be paired with other neural network architectures and applied to any dataset with multiple annotations available for each training instance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We introduce a multi-label transfer learning setting by modifying the architecture of the LSTMbased sentence encoder, specifically designed for multi-relational semantic similarity tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-Label Transfer Learning", "sec_num": "2" }, { "text": "We employ the \"hard-parameter sharing\" setting (Caruana, 1998) , where some hidden layers are shared across multiple tasks while each task has its own specific output layer. As shown in Figure 1 , using an example of a semantic similarity dataset with two relations, sentence L and sentence R in a pair are first mapped to word vector sequences and then encoded as sentence embeddings. Up to this step, the choice of the word embedding matrix and sentence encoder is flexible, and we outline our choice in the sections to follow. For each relation that has been annotated with a ground-truth label, a dedicated output dense layer takes the two sentence embeddings as input and outputs a probability distribution across the range of possible scores. The output dense layers follow the methods of Tai et al. (2015) .", "cite_spans": [ { "start": 47, "end": 62, "text": "(Caruana, 1998)", "ref_id": "BIBREF6" }, { "start": 795, "end": 812, "text": "Tai et al. (2015)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 186, "end": 194, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Architecture", "sec_num": "2.1" }, { "text": "With two such dense output layers, two losses are calculated, one for each relation. The total loss is calculated as the sum of the two losses for backpropagation which updates all parameters in the end-to-end network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Architecture", "sec_num": "2.1" }, { "text": "We use InferSent (Conneau et al., 2017) as the sentence encoder due to its outstanding performances reported on various semantic similarity tasks.", "cite_spans": [ { "start": 17, "end": 39, "text": "(Conneau et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2.2" }, { "text": "Due to the small sizes of the evaluation datasets, we use the sentence encoder pre-trained on the Stanford Natural Language Inference corpus (Bowman et al., 2015) and Multi-Genre Natural Language Inference corpus (Williams et al., 2018) , and transfer to the semantic similarity tasks using fine-tuning. 
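For reference, a single per-relation output layer in the style of Tai et al. (2015), which our dense output layers follow, can be sketched as below; the hidden size, the 0-4 score range, and all names are illustrative assumptions rather than our exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_BINS = 5                                               # e.g., a relation annotated on a 0-4 scale
score_values = torch.arange(NUM_BINS, dtype=torch.float)   # the possible scores 0, 1, 2, 3, 4

def target_distribution(score):
    # spread a real-valued gold score over its two neighbouring integer bins
    p = torch.zeros(NUM_BINS)
    lower = int(score)
    frac = score - lower
    p[lower] = 1.0 - frac
    if lower + 1 < NUM_BINS:
        p[lower + 1] = frac
    return p

head = nn.Sequential(nn.Linear(2 * 512, 128), nn.Tanh(), nn.Linear(128, NUM_BINS))

u, v = torch.randn(512), torch.randn(512)        # the two sentence embeddings from the encoder
feats = torch.cat([torch.abs(u - v), u * v])     # element-wise difference and product
log_probs = F.log_softmax(head(feats), dim=-1)   # predicted distribution over the score bins

gold = target_distribution(2.7)                       # a gold score of 2.7 puts mass on bins 2 and 3
loss = F.kl_div(log_probs, gold, reduction="sum")     # this relation's contribution to the total loss
prediction = (log_probs.exp() * score_values).sum()   # expected score, used as the prediction

Each relation in a dataset gets its own copy of this head on top of the shared encoder, and the per-relation losses are summed into the total loss described in Section 2.1.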
In this process, the output layers for multi-label learning discussed above are stacked on top of the InferSent network, forming an end-to-end model for training and testing on semantic similarity tasks.", "cite_spans": [ { "start": 213, "end": 236, "text": "(Williams et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "2.2" }, { "text": "Neither multi-task nor multi-label learning have been used for multi-relational semantic similarity datasets. For these datasets, either multi-task or multi-label learning can be achieved by treating each relation as a \"task.\" The key differences between the two are the relations involved in each forward-backward pass and the timing of the parameter updates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Multi-Task Learning", "sec_num": "2.3" }, { "text": "Consider a training step in the two-relation example in Figure 1: A multi-task learning model would pick a batch of sentences pairs, only consider Label L, only calculate Loss L, and all parameters except those of dense layer d R are updated. Then, within the same batch, 2 the model would only consider Label R, only calculate Loss R, and all parameters except those of dense layer d L are updated.", "cite_spans": [], "ref_spans": [ { "start": 56, "end": 65, "text": "Figure 1:", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Comparison with Multi-Task Learning", "sec_num": "2.3" }, { "text": "A multi-label learning model (our model) would pick a batch of sentences pairs, consider both Label L and Label R, calculate Loss L and Loss R, aggregate them as the total loss, and update all parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison with Multi-Task Learning", "sec_num": "2.3" }, { "text": "To show the effectiveness of the multi-label transfer learning setting, we experiment on three semantic similarity datasets with multiple relations annotated, and use one LSTM-based sentence encoder that has been very successful in many downstream tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "We study three semantic similarity datasets with multiple relations with texts of different lengths, spanning phrases, sentences, and short paragraphs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "Human Activity Phrase (Wilson and Mihalcea, 2017) : a collection of pairs of phrases regarding human activities, annotated with the following four different relations.", "cite_spans": [ { "start": 22, "end": 49, "text": "(Wilson and Mihalcea, 2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "\u2022 Similarity (SIM): The degree to which the two activity phrases describe the same thing, semantic similarity in a strict sense. Example of high similarity phrases: to watch a film and to see a movie.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "\u2022 Relatedness (REL): The degree to which the activities are related to one another, a general semantic association between two phrases. 
Example of strongly related phrases: to give a gift and to receive a present.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "\u2022 Motivational alignment (MA): The degree to which the activities are (typically) done with similar motivations. Example of phrases with potentially similar motivations: to eat dinner with family members and to visit relatives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "\u2022 Perceived actor congruence (PAC): The degree to which the activities are expected to be done by the same type of person. An example of a pair with a high PAC score: to pack a suitcase and to travel to another state.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "The phrases are generated, paired and scored on Amazon Mechanical Turk. 3 The annotated input.", "cite_spans": [ { "start": 72, "end": 73, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "3 https://www.mturk.com/ scores range from 0 to 4 for SIM, REL and MA, and \u22122 to 2 for PAC. The evaluation is based on the Spearman's \u03c1 correlation coefficient between the systems' predicted scores and the human annotations. There are 1,000 pairs in the dataset. We also use the supplemental 1,373 pairs from Zhang et al. (2018) in which 1,000 pairs are randomly selected for training and the rest are used for development. We then treat the original 1,000 pairs as a held-out test set so that our results are directly comparable with those previously reported.", "cite_spans": [ { "start": 309, "end": 328, "text": "Zhang et al. (2018)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "SICK (Marelli et al., 2014b,a) : the Sentences Involving Compositional Knowledge benchmark, which includes a large number of sentence pairs that are rich in the lexical, syntactic and semantic phenomena. Each pair of sentences is annotated in two dimensions: relatedness and entailment. The relatedness score ranges from 1 to 5, and Pearson's r is used for evaluation; the entailment relation is categorical, consisting of entailment, contradiction, and neutral. There are 4439 pairs in the train split, 495 in the trial split used for development and 4906 in the test split. The sentence pairs are generated from image and video caption datasets before being paired up using some algorithm. Due to the lack of human supervision in the process, some sentence pairs display minimal difference in semantic components, making the SICK tasks simpler than the others we study.", "cite_spans": [ { "start": 5, "end": 30, "text": "(Marelli et al., 2014b,a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "Typed-Similarity (Agirre et al., 2013b): a collection of meta-data describing books, paintings, films, museum objects and archival records taken from Europeana, 4 presented as the pilot track in the SemEval 2013 STS shared task. Typically, the items consist of title, subject, description, and so on, describing a cultural heritage item and, sometimes, a thumbnail of the item itself. 
For the purpose of measuring semantic similarity, we concatenate all the textual entries such as title, creator, subject and description into a short paragraph that is used as input, although the annotations might be informed of the image aspects of the meta-data. Each pair of items is annotated on eight dimensions of similarity: general similarity, author, people involved, time, location, event or action involved, subject and description. There are 750 pairs in the train split, of which we randomly sample 500 for training and 250 for development, and 721 in the test split. Pearson's r is used for evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3.1" }, { "text": "We compare the multi-label setting with two baselines:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "\u2022 Single-task, where each relation is treated as an individual task. For each relation, a model with only one output dense layer is trained and tested, ignoring the annotations of all other relations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "\u2022 Multi-task, where only one relation is involved during each round of feed-forward and back-propagation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "3.2" }, { "text": "In each experiment, we use stochastic gradient descent and a batch size of 16. We tune the learning rate over {0.1, 0.5, 1, 5} and number of epochs over {10, 20}. For each dataset discussed above, we tune these hyperparameters on the development set. All other hyperparameters maintain their values from the original code. 5 In the single-task setting, the model is trained and tested on each relation, ignoring the annotations of other relations. In the multi-task settings, the model is trained and tested on all the relations in a dataset. In the multitask setting, relations are presented to the model in the order they are listed in the result tables within each batch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Details", "sec_num": "3.3" }, { "text": "The results are shown in Tables 1, 2 and 3. For every experiment (represented by a cell in the tables), 30 runs with different random seeds are recorded and the average is reported. For each relation (each column in the tables), let the true mean performance of multi-label learning, singletask baseline and multi-task baseline be \u00b5 MLL , \u00b5 single , \u00b5 MTL , respectively. Two one-sided Student's t-tests are conducted to test if multi-label learning outperforms the baselines for that relation. The significance level is chosen to be 0.05. A down-arrow \u2193 indicates that our proposed multilabel learning underperforms a baseline, while an up-arrow \u2191 indicates that our proposed multi-label learning outperforms a baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "For the Human Activity Phrase dataset, the singletask setting already achieves state-of-the-art performances on SIM, REL and PAC relations, surpassing the previous best results reported by Zhang et al. (2018) , which achieved Spearman's correlation coefficient of .710 in SIM, .715 in REL, .690 in MA and .549 in PAC. This approach is based on fine-tuning a bi-directional LSTM with average-pooling pre-trained on translated texts (Wieting and Gimpel, 2018) . 
Using multi-label learning, our model is able to gain a statistically significant improvement in the performance of REL compared to the single-task setting, while maintaining performance for the other relations. The traditional multi-task setting, however, performs significantly worse than the other settings.", "cite_spans": [ { "start": 189, "end": 208, "text": "Zhang et al. (2018)", "ref_id": "BIBREF15" }, { "start": 431, "end": 457, "text": "(Wieting and Gimpel, 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.1" }, { "text": "For the entailment task on the SICK dataset, our multi-label setting outperforms the single-task baseline and the previous best results of InferSent. These best results consisted of an accuracy of 86.3% achieved using a logistic regression classifier and sentence embeddings generated by pre-trained InferSent as features (Conneau et al., 2017 ). In the relatedness task, this setting achieved a Pearson's correlation coefficient of .885, which even our multi-label setting is unable to beat. However, the multi-label setting does have a statistically significant performance gain compared to the single-task setting in the relatedness task, while the traditional multi-task setting underperforms the other settings.", "cite_spans": [ { "start": 322, "end": 343, "text": "(Conneau et al., 2017", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.1" }, { "text": "For the Typed-Similarity dataset, the previous best results were achieved using rich feature engineering without the use of sentence embeddings, with a different scoring scheme for each relation (Agirre et al., 2013a) . While this method yielded better results than all of the transfer learning approaches we compare, it should be noted that this approach is specific to tackling this dataset, unlike the transfer learning settings that are generalizable to other scenarios. One potential reason for the discrepancy in performance is that some relations such as time, people involved, or events may be easily or sometimes trivially captured by information retrieval techniques such as named entity recognition. Using sentence embeddings and transfer learning for all the relations, though simpler, may face a greater challenge in the relations mentioned above. (Table 1: The performance in Pearson's r on the Typed-Similarity dataset, in accordance with the specification of the dataset to allow for direct comparison with previous results. The results of single-task and multi-task learning (MTL) are followed by \u2191 if they are statistically significantly lower than those of multi-label learning (MLL), and by \u2193 otherwise.) 
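For completeness, the two one-sided tests behind these markers (Section 4) can be reproduced along the following lines; the run scores below are synthetic placeholders, and we assume a SciPy version that supports the alternative argument of ttest_ind.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mll_runs = rng.normal(0.70, 0.01, size=30)       # e.g., 30 correlation scores for multi-label learning
baseline_runs = rng.normal(0.69, 0.01, size=30)  # 30 scores for the single-task or multi-task baseline

# H1: the baseline's true mean is lower than that of multi-label learning (MLL outperforms)
_, p_up = stats.ttest_ind(baseline_runs, mll_runs, alternative="less")
# H1: the baseline's true mean is higher than that of multi-label learning (MLL underperforms)
_, p_down = stats.ttest_ind(baseline_runs, mll_runs, alternative="greater")

marker = "\u2191" if p_up < 0.05 else ("\u2193" if p_down < 0.05 else "")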
Among the three transfer learning approaches, our multi-label setting is still superior, outperforming the single-task setting in over half of the relations, and outperforming the multi-task setting in all relations.", "cite_spans": [ { "start": 194, "end": 216, "text": "(Agirre et al., 2013a)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 843, "end": 850, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.1" }, { "text": "While our results above show that multi-label learning is almost always the most effective way to transfer sentence embeddings in multi-relational semantic similarity tasks, in some situations simply training with one relation might yield better performance (such as the general similarity relation in the Typed-Similarity dataset). This suggests that the choice of multi-label learning or single-task learning can be tuned as a hyperparameter empirically for the optimal performance on a task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Recommendation", "sec_num": "5.2" }, { "text": "In the multi-label setting, we calculate the total loss by summing the loss from each dimension. We also explore weighting the loss from each di-mension by factors of 2, 5 and 10, but doing so hurts the performance for all dimensions. In the multi-task setting, we attempt different ordering of the dimensions when presenting them to the model within a batch of examples, but the difference in performance is not statistically significant. Furthermore, the multi-task setting takes about n times longer to train than the multi-label setting, where n is number of dimensions of annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Other Considerations and Discussions", "sec_num": "5.3" }, { "text": "We introduced a multi-label transfer learning setting designed specifically for semantic similarity tasks with multiple relations annotations. By experimenting with a variety of relations in three datasets, we showed that the multi-label setting can outperform single-task and traditional multitask settings in many cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Future work includes exploring the performance of this setting with other sentence encoders, as well as multi-label datasets outside of the domain of semantic similarity. This may include NLP datasets annotated with author information for multiple dimensions, or computer vision datasets with multiple annotations for scenes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "In general multi-task learning, a new batch is picked after switching tasks. In multi-relational semantic similarity datasets, each task is a relational label, which shares the same", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.europeana.eu/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/facebookresearch/InferSent", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This material is based in part upon work supported by the Michigan Institute for Data Science, by the John Templeton Foundation (grant #61156), by the National Science Foundation (grant #1815291), and by DARPA (grant #HR001117S0026-AIDA-FP-045). 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the Michigan Institute for Data Science, the John Templeton Foundation, the National Science Foundation, or DARPA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Ubc_uos-typed: Regression for typed-similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Nikolaos", "middle": [], "last": "Aletras", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "German", "middle": [], "last": "Rigau", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity", "volume": "1", "issue": "", "pages": "132--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Nikolaos Aletras, Aitor Gonzalez- Agirre, German Rigau, and Mark Stevenson. 2013a. Ubc_uos-typed: Regression for typed-similarity. In Second Joint Conference on Lexical and Computa- tional Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 132-137, Atlanta, Georgia, USA. Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Inigo", "middle": [], "last": "Lopez-Gazpio", "suffix": "" }, { "first": "Montse", "middle": [], "last": "Maritxalar", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "German", "middle": [], "last": "Rigau", "suffix": "" }, { "first": "Larraitz", "middle": [], "last": "Uria", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "252--263", "other_ids": { "DOI": [ "10.18653/v1/S15-2045" ] }, "num": null, "urls": [], "raw_text": "Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. Semeval-2015 task 2: Semantic tex- tual similarity, english, spanish and pilot on inter- pretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 252-263, Denver, Colorado. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Carmen", "middle": [], "last": "Banea", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "German", "middle": [], "last": "Rigau", "suffix": "" }, { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", "volume": "", "issue": "", "pages": "497--511", "other_ids": { "DOI": [ "10.18653/v1/S16-1081" ] }, "num": null, "urls": [], "raw_text": "Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evalua- tion (SemEval-2016), pages 497-511, San Diego, California. Association for Computational Linguis- tics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Semeval-2012 task 6: A pilot on semantic textual similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" } ], "year": 2012, "venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics", "volume": "1", "issue": "", "pages": "385--393", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pi- lot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Compu- tational Semantics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385- 393, Montr\u00e9al, Canada. Association for Computa- tional Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "*sem 2013 shared task: Semantic textual similarity", "authors": [ { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Aitor", "middle": [], "last": "Gonzalez-Agirre", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity", "volume": "1", "issue": "", "pages": "32--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez- Agirre, and Weiwei Guo. 2013b. *sem 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Seman- tics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32-43, Atlanta, Georgia, USA. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A large annotated corpus for learning natural language inference", "authors": [ { "first": "R", "middle": [], "last": "Samuel", "suffix": "" }, { "first": "Gabor", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Angeli", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Potts", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "632--642", "other_ids": { "DOI": [ "10.18653/v1/D15-1075" ] }, "num": null, "urls": [], "raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Compu- tational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Multitask learning", "authors": [ { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" } ], "year": 1998, "venue": "Learning to learn", "volume": "", "issue": "", "pages": "95--133", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rich Caruana. 1998. Multitask learning. In Learning to learn, pages 95-133. Springer.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" }, { "first": "Eneko", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "Inigo", "middle": [], "last": "Lopez-Gazpio", "suffix": "" }, { "first": "Lucia", "middle": [], "last": "Specia", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)", "volume": "", "issue": "", "pages": "1--14", "other_ids": { "DOI": [ "10.18653/v1/S17-2001" ] }, "num": null, "urls": [], "raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14, Vancou- ver, Canada. Association for Computational Lin- guistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Supervised learning of universal sentence representations from natural language inference data", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "670--680", "other_ids": { "DOI": [ "10.18653/v1/D17-1070" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. 
Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680, Copen- hagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment", "authors": [ { "first": "Marco", "middle": [], "last": "Marelli", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 8th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1--8", "other_ids": { "DOI": [ "10.3115/v1/S14-2001" ] }, "num": null, "urls": [], "raw_text": "Marco Marelli, Luisa Bentivogli, Marco Baroni, Raf- faella Bernardi, Stefano Menini, and Roberto Zam- parelli. 2014a. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 1-8, Dublin, Ireland. Association for Compu- tational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A sick cure for the evaluation of compositional distributional semantic models", "authors": [ { "first": "Marco", "middle": [], "last": "Marelli", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "216--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zam- parelli. 2014b. A sick cure for the evaluation of compositional distributional semantic models. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 216-223, Reykjavik, Iceland. European Lan- guage Resources Association (ELRA).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Improved semantic representations from tree-structured long short-term memory networks", "authors": [ { "first": "Kai Sheng", "middle": [], "last": "Tai", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1556--1566", "other_ids": { "DOI": [ "10.3115/v1/P15-1150" ] }, "num": null, "urls": [], "raw_text": "Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. 
Improved semantic representations from tree-structured long short-term memory net- works. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 1556-1566, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Paranmt-50m: Pushing the limits of paraphrastic sentence embeddings with millions of machine translations", "authors": [ { "first": "John", "middle": [], "last": "Wieting", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "451--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Wieting and Kevin Gimpel. 2018. Paranmt-50m: Pushing the limits of paraphrastic sentence embed- dings with millions of machine translations. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 451-462, Melbourne, Australia. As- sociation for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "authors": [ { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nangia", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1112--1122", "other_ids": { "DOI": [ "10.18653/v1/N18-1101" ] }, "num": null, "urls": [], "raw_text": "Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguis- tics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Measuring semantic relations between human activities", "authors": [ { "first": "Steven", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "664--673", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Wilson and Rada Mihalcea. 2017. Measur- ing semantic relations between human activities. In Proceedings of the Eighth International Joint Con- ference on Natural Language Processing (Volume 1: Long Papers), pages 664-673, Taipei, Taiwan. 
Asian Federation of Natural Language Processing.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Sequential network transfer: Adapting sentence embeddings to human activities and beyond", "authors": [ { "first": "Li", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Steven R Wilson", "suffix": "" }, { "first": "", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1804.07835" ] }, "num": null, "urls": [], "raw_text": "Li Zhang, Steven R Wilson, and Rada Mihalcea. 2018. Sequential network transfer: Adapting sentence em- beddings to human activities and beyond. arXiv preprint arXiv:1804.07835.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Overview of the multi-label architecture.", "num": null }, "TABREF2": { "content": "
        Relatedness   Entailment
MLL     .882          86.7
Single  .874\u2191     86.4\u2191
MTL     .871\u2191     86.2\u2191
", "text": "The performance in Spearman's \u03c1 on the Human Activity Phrase dataset.", "num": null, "type_str": "table", "html": null }, "TABREF3": { "content": "", "text": "The performance in Pearson's r on the SICK dataset, in accordance with the specification of the dataset to allow for direct comparison with previous results.", "num": null, "type_str": "table", "html": null } } } }