{ "paper_id": "S14-2024", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:32:17.566290Z" }, "title": "CECL: a New Baseline and a Non-Compositional Approach for the Sick Benchmark", "authors": [ { "first": "Yves", "middle": [], "last": "Bestgen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Centre for English Corpus Linguistics Universit\u00e9 catholique de Louvain", "location": {} }, "email": "yves.bestgen@uclouvain.be" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes the two procedures for determining the semantic similarities between sentences submitted for the Se-mEval 2014 Task 1. MeanMaxSim, an unsupervised procedure, is proposed as a new baseline to assess the efficiency gain provided by compositional models. It outperforms a number of other baselines by a wide margin. Compared to the wordoverlap baseline, it has the advantage of taking into account the distributional similarity between words that are also involved in compositional models. The second procedure aims at building a predictive model using as predictors MeanMaxSim and (transformed) lexical features describing the differences between each sentence of a pair. It finished sixth out of 17 teams in the textual similarity sub-task and sixth out of 19 in the textual entailment subtask.", "pdf_parse": { "paper_id": "S14-2024", "_pdf_hash": "", "abstract": [ { "text": "This paper describes the two procedures for determining the semantic similarities between sentences submitted for the Se-mEval 2014 Task 1. MeanMaxSim, an unsupervised procedure, is proposed as a new baseline to assess the efficiency gain provided by compositional models. It outperforms a number of other baselines by a wide margin. Compared to the wordoverlap baseline, it has the advantage of taking into account the distributional similarity between words that are also involved in compositional models. The second procedure aims at building a predictive model using as predictors MeanMaxSim and (transformed) lexical features describing the differences between each sentence of a pair. It finished sixth out of 17 teams in the textual similarity sub-task and sixth out of 19 in the textual entailment subtask.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The SemEval-2014 Task 1 (Marelli et al., 2014a) was designed to allow a rigorous evaluation of compositional distributional semantic models (CDSMs). CDSMs aim to represent the meaning of phrases and sentences by composing the distributional representations of the words they contain (Baroni et al., 2013; Bestgen and Cabiaux, 2002; Erk and Pado, 2008; Grefenstette, 2013; Kintsch, 2001; Mitchell and Lapata, 2010) ; they are thus an extension of Distributional Semantic Models (DSMs), which approximate the meaning of words with vectors summarizing their patterns of co-occurrence in a corpus (Baroni and Lenci, This work is licenced under a Creative Commons Attribution 4.0 International License. Page numbers and proceedings footer are added by the organizers. License details: http: //creativecommons.org/licenses/by/4.0/ 2010; Bestgen et al., 2006; Kintsch, 1998; Landauer and Dumais, 1997) . 
The dataset for this task, called SICK (Sentences Involving Compositional Knowledge), consists of almost 10,000 English sentence pairs annotated for relatedness in meaning and for entailment relation by ten annotators (Marelli et al., 2014b).", "cite_spans": [ { "start": 24, "end": 47, "text": "(Marelli et al., 2014a)", "ref_id": "BIBREF21" }, { "start": 283, "end": 304, "text": "(Baroni et al., 2013;", "ref_id": "BIBREF2" }, { "start": 305, "end": 331, "text": "Bestgen and Cabiaux, 2002;", "ref_id": "BIBREF5" }, { "start": 332, "end": 351, "text": "Erk and Pado, 2008;", "ref_id": "BIBREF12" }, { "start": 352, "end": 371, "text": "Grefenstette, 2013;", "ref_id": "BIBREF13" }, { "start": 372, "end": 386, "text": "Kintsch, 2001;", "ref_id": "BIBREF17" }, { "start": 387, "end": 413, "text": "Mitchell and Lapata, 2010)", "ref_id": "BIBREF23" }, { "start": 593, "end": 611, "text": "(Baroni and Lenci, 2010;", "ref_id": "BIBREF1" }, { "start": 831, "end": 852, "text": "Bestgen et al., 2006;", "ref_id": "BIBREF6" }, { "start": 853, "end": 867, "text": "Kintsch, 1998;", "ref_id": "BIBREF16" }, { "start": 868, "end": 894, "text": "Landauer and Dumais, 1997)", "ref_id": "BIBREF18" }, { "start": 1111, "end": 1134, "text": "(Marelli et al., 2014b)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rationale behind this dataset is that \"understanding when two sentences have close meanings or entail each other crucially requires a compositional semantics step\" (Marelli et al., 2014b), and thus that annotators judge the similarity between the two sentences of a pair by first building a mental representation of the meaning of each sentence and then comparing these two representations. However, another option was available to the annotators. They could have paid attention only to the differences between the sentences and assessed the significance of these differences. Such an approach could have been encouraged by the fact that the dataset was built from a thousand source sentences modified by a limited number of (often) very specific transformations, producing sentence pairs that might seem quite repetitive. An analysis conducted during the training phase of the challenge provided some support for this hypothesis. The analysis focused on pairs of sentences in which the only difference between the two sentences was the replacement of one content word by another, as in A man is singing to a girl vs. A man is singing to a woman, but also in A man is sitting in a field vs. A man is running in a field. The material was divided into two parts: 3500 sentence pairs in the training set and the remaining 1500 in the test set. First, the average similarity score for each pair of interchanged words was calculated on the training set (e.g., in this sample, there were 16 sentence pairs in which woman and man were interchanged, and their mean similarity score was 3.6). Then, these mean scores were used as the similarity scores of the sentence pairs of the test sample in which the same words were interchanged. The correlation between the actual scores and the predicted scores was 0.83 (N=92), a value that can be considered very high given the restriction of the range in which the predicted similarity scores vary (min=3.5 and max=5.0; Howell, 2008, pp. 272-273). It is important to note that this observation does not prove that the annotators did not build a compositional representation, especially as it concerns only a very specific type of transformation.
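To make this procedure concrete, the word-replacement analysis can be sketched as follows (a minimal sketch; the pair-record field names are hypothetical and not taken from the actual implementation):

```python
import numpy as np
from collections import defaultdict

def replaced_words(pair):
    # The two interchanged lemmas, e.g. {'man', 'woman'}, when the sentences
    # differ by exactly one content word; None otherwise.
    only_a = set(pair["lemmas_a"]) - set(pair["lemmas_b"])
    only_b = set(pair["lemmas_b"]) - set(pair["lemmas_a"])
    if len(only_a) == len(only_b) == 1:
        return frozenset(only_a | only_b)
    return None

def word_swap_correlation(train_pairs, test_pairs):
    # Mean gold relatedness score per interchanged word pair (training set).
    by_swap = defaultdict(list)
    for p in train_pairs:
        key = replaced_words(p)
        if key is not None:
            by_swap[key].append(p["gold_score"])
    means = {k: sum(v) / len(v) for k, v in by_swap.items()}
    # Use these means as predictions for test pairs showing the same replacement.
    kept = [p for p in test_pairs if replaced_words(p) in means]
    gold = [p["gold_score"] for p in kept]
    pred = [means[replaced_words(p)] for p in kept]
    return float(np.corrcoef(gold, pred)[0, 1])  # 0.83 (N=92) in the analysis above
```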
It nevertheless suggests that analyzing only the differences between the sentences of a pair could allow the similarity between them to be estimated effectively.", "cite_spans": [ { "start": 168, "end": 191, "text": "(Marelli et al., 2014b)", "ref_id": "BIBREF22" }, { "start": 1947, "end": 1973, "text": "Howell, 2008, pp. 272-273)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Following these observations, I opted to try to determine the degree of efficacy that can be achieved by two non-compositional approaches. The first approach, completely unsupervised, is proposed as a new baseline for evaluating the efficacy gains brought by compositional systems. The second, a supervised approach, aims to capitalize on the properties of the SICK benchmark. While these approaches were developed specifically for the semantic relatedness sub-task, the second one has also been applied to the textual entailment sub-task. This paper describes the two proposed approaches, their implementation in the context of SemEval 2014 Task 1, and the results obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An obvious baseline in the field of CDSMs is based on the proportion of common words in two sentences after the removal (or retention) of stop words (Cheung and Penn, 2012). Its main weakness is that it does not take into account the semantic similarities between the words that are combined in CDSM models. It follows that a compositional approach may seem significantly better than this baseline even if it is not compositionality that matters, but only the distributional part. At first glance, this problem can be circumvented by using as a baseline a simple compositional model such as the additive model. The analyses below show that this model is much less effective for the SICK dataset than the distributional baseline proposed here.", "cite_spans": [ { "start": 148, "end": 171, "text": "(Cheung and Penn, 2012)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "A New Baseline for CDSM", "sec_num": "2.1" }, { "text": "MeanMaxSim, the proposed baseline, is an extension of the classic measure based on the proportion of common words, taking advantage of distributional similarity but not of compositionality. It corresponds to the mean, calculated over all the words of the two sentences, of the maximum semantic similarity between each word in a sentence and all the words of the other sentence. More formally, given two sentences $a = (a_1, \ldots, a_n)$ and $b = (b_1, \ldots, b_m)$, it is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A New Baseline for CDSM", "sec_num": "2.1" }, { "text": "$$\mathrm{MMS}(a, b) = \frac{\sum_{i=1}^{n} \max_{j} \mathrm{sim}(a_i, b_j) + \sum_{j=1}^{m} \max_{i} \mathrm{sim}(a_i, b_j)}{n + m}$$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A New Baseline for CDSM", "sec_num": "2.1" }, { "text": "In this study, the cosine between the word distributional representations was used as the measure of semantic similarity, but other measures may be used. The common words of the two sentences have an important impact on MeanMaxSim, since their similarity with themselves is equal to the maximum possible similarity. Their impact would be much lower if the average similarity between a word and all the words in the other sentence were employed instead of the maximum similarity.
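As an illustration, here is a minimal sketch of MeanMaxSim, assuming word vectors are available as a mapping from lemma to numpy array (the function names are illustrative, not those of the original system):

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def mean_max_sim(sent_a, sent_b, vectors):
    # MeanMaxSim over two stop-word-filtered lists of lemmas: for every word
    # of each sentence, take its maximum similarity with the words of the
    # other sentence, then average over the n + m words.
    a = [w for w in sent_a if w in vectors]
    b = [w for w in sent_b if w in vectors]
    max_a = [max(cosine(vectors[w], vectors[x]) for x in b) for w in a]
    max_b = [max(cosine(vectors[x], vectors[w]) for x in a) for w in b]
    return (sum(max_a) + sum(max_b)) / (len(a) + len(b))
```

Note that a word occurring in both sentences contributes the maximum value of 1 to both sums, which is the property discussed above.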
Several variants of this measure can be used, for example ignoring repeated instances of a word within a sentence, or not allowing any single word to be the \"most similar\" to several other words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A New Baseline for CDSM", "sec_num": "2.1" }, { "text": "The main limitation of the first approach in the context of this challenge is that it is completely unsupervised and therefore does not take advantage of the training set provided by the task organizers. The second approach addresses this limitation. It aims to build a predictive model, using as predictors MeanMaxSim as well as lexical features describing the differences between the two sentences of each pair. To extract these features, each pair of sentences of the whole dataset (training and test sets) is analyzed to identify all the lemmas that are not present with the same frequency in both sentences. Each of these differences is encoded as a feature whose value corresponds to the unsigned frequency difference. This step leads to a two-way contingency table with sentence pairs as rows and lexical features as columns. Correspondence Analysis (Blasius and Greenacre, 1994; Lebart et al., 2000), a statistical procedure available in many off-the-shelf software packages such as R (Nenadic and Greenacre, 2007), is then used to decompose this table into orthogonal dimensions ordered according to the proportion of the association between rows and columns that they explain. Each row receives a coordinate on each of these dimensions, and these coordinates are used as predictors of the relatedness scores of the sentence pairs. In this way, not only are the frequencies of lexical features transformed into continuous predictors, but these predictors also take into account the redundancy between the lexical features. Finally, a predictive model is built on the basis of the training set by means of multiple linear regression with stepwise selection of the best predictors. For the textual entailment sub-task, the same procedure was used, except that the linear regression was replaced by a linear discriminant analysis.", "cite_spans": [ { "start": 862, "end": 891, "text": "(Blasius and Greenacre, 1994;", "ref_id": "BIBREF8" }, { "start": 892, "end": 912, "text": "Lebart et al., 2000)", "ref_id": "BIBREF20" }, { "start": 987, "end": 1016, "text": "(Nenadic and Greenacre, 2007)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "A Non-Compositional Approach Based on the Differences Between the Sentences", "sec_num": "2.2" },
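As an illustration of this pipeline, here is a minimal numpy sketch (hypothetical names; it assumes the contingency table N described above has already been built, implements Correspondence Analysis as an SVD of the standardized residuals, and fits all retained dimensions by ordinary least squares, whereas the actual system used stepwise selection):

```python
import numpy as np

def ca_row_coordinates(N, n_dims):
    # Correspondence Analysis: principal row coordinates obtained from the
    # SVD of the matrix of standardized residuals; sentence pairs without
    # any lexical difference (empty rows) must be dropped beforehand.
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)  # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, :n_dims] * sigma[:n_dims] / np.sqrt(r)[:, None]

def fit_relatedness(coords, mms, scores):
    # Predictors: CA coordinates plus MeanMaxSim; ordinary least squares
    # stands in here for the stepwise multiple linear regression.
    X = np.column_stack([np.ones(len(scores)), mms, coords])
    beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return beta
```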
{ "text": "This section describes the steps and additional resources used to implement the proposed approaches for the SICK challenge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "3" }, { "text": "All sentences were tokenized and lemmatized by the Stanford Parser (de Marneffe et al., 2006; Toutanova et al., 2003).", "cite_spans": [ { "start": 67, "end": 93, "text": "(de Marneffe et al., 2006;", "ref_id": "BIBREF10" }, { "start": 94, "end": 117, "text": "Toutanova et al., 2003)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing of the Dataset", "sec_num": "3.1" }, { "text": "Latent Semantic Analysis (LSA), a classical DSM (Deerwester et al., 1990; Landauer et al., 1998), was used to estimate the semantic similarity between words from corpora. The starting point of the analysis is a lexical table containing the frequencies of every word in each of the text segments included in the corpus. This table is submitted to a singular value decomposition, which extracts the most significant orthogonal dimensions. In this semantic space, the meaning of a word is represented by a vector, and the semantic similarity between two words is estimated by the cosine between their corresponding vectors. Three corpora were used to estimate these similarities. The first one, the TASA corpus, is composed of excerpts, with an approximate average length of 250 words, obtained by a random sampling of texts that American students read (Landauer et al., 1998). The version to which T.K. Landauer (Institute of Cognitive Science, University of Colorado, Boulder) provided access contains approximately 12 million words.", "cite_spans": [ { "start": 48, "end": 73, "text": "(Deerwester et al., 1990;", "ref_id": "BIBREF11" }, { "start": 74, "end": 96, "text": "Landauer et al., 1998)", "ref_id": "BIBREF19" }, { "start": 848, "end": 871, "text": "(Landauer et al., 1998)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Distributional Semantics", "sec_num": "3.2" }, { "text": "The second corpus, the BNC (British National Corpus; Aston and Burnard, 1998), is composed of approximately 100 million words and covers many different genres. As the documents included in this corpus can contain up to 45,000 words, they were divided into segments of 250 words, the last segment of a text being deleted if it contained fewer than 250 words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Semantics", "sec_num": "3.2" }, { "text": "The third corpus (WIKI, approximately 600 million words after preprocessing) is derived from the Wikimedia Foundation database, downloaded in April 2011. It was built using WikiExtractor.py by A. Fuschetto. As for the BNC, the texts were cut into 250-word segments, and any segment of fewer than 250 words was deleted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Semantics", "sec_num": "3.2" }, { "text": "All these corpora were lemmatized by means of the TreeTagger (Schmid, 1994). In addition, a set of function words was removed, as well as all the words whose total frequency in the corpus was lower than 10. The resulting (log-entropy weighted) matrices of co-occurrences were submitted to a singular value decomposition (SVDPACKC; Berry et al., 1993) and the first 300 eigenvectors were retained.", "cite_spans": [ { "start": 61, "end": 75, "text": "(Schmid, 1994)", "ref_id": "BIBREF25" }, { "start": 338, "end": 357, "text": "Berry et al., 1993)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Distributional Semantics", "sec_num": "3.2" }, { "text": "Before estimating the semantic similarity between a pair of sentences using MeanMaxSim, words (in their lemmatized forms) considered as stop words were filtered out. This stop word list (n=82) was built specifically for the occasion on the basis of the list of the most frequent words in the training dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Approach Details", "sec_num": "3.3" }, { "text": "To identify the words that are not present with the same frequency in both sentences, all the lemmas (including those belonging to the stop word list) were taken into account.
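This identification step can be sketched with multiset differences (a minimal sketch; the argument names are hypothetical):

```python
from collections import Counter

def lemma_differences(lemmas_a, lemmas_b):
    # Lemmas that do not occur with the same frequency in both sentences,
    # mapped to their unsigned frequency difference. Counter subtraction
    # keeps positive counts only, so summing both directions yields
    # |freq_a - freq_b| for every differing lemma.
    ca, cb = Counter(lemmas_a), Counter(lemmas_b)
    return (ca - cb) + (cb - ca)

# Example: the pair "A man is singing to a girl" / "A man is singing to a
# woman" yields {'girl': 1, 'woman': 1}.
```

The columns of the contingency table are the lemmas obtained in this way whose total frequency in the whole dataset reaches the chosen threshold.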
The optimization of the parameters of the predictive model was performed using a three-fold cross-validation procedure, with two thirds of the 5000 sentence pairs used for training and the remaining third for testing. The values tested by means of an exhaustive search were:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Approach Details", "sec_num": "3.4" }, { "text": "\u2022 Minimum frequency threshold of the lexical features in the complete dataset: from 10 to 70 in steps of 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Approach Details", "sec_num": "3.4" }, { "text": "\u2022 Number of dimensions retained from the CA: from 10 to the total number of dimensions available, in steps of 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Approach Details", "sec_num": "3.4" }, { "text": "\u2022 P-value threshold to enter or remove predictors from the model: 0.01, and from 0.05 to 0.45 in steps of 0.05.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Approach Details", "sec_num": "3.4" }, { "text": "This cross-validation procedure was repeated five times, each time changing the random distribution of the sentence pairs in the samples. The final values of the three parameters were selected on the basis of the average correlation calculated over all replications. For the relatedness sub-task, the selected values were a minimum frequency threshold of 40, 140 dimensions and a p-value of 0.20. For the entailment sub-task, they were a minimum frequency threshold of 60, 100 dimensions and a p-value of 0.25.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Approach Details", "sec_num": "3.4" }, { "text": "The main measure of performance selected by the task organizers was the Pearson correlation, calculated on the test set (4927 sentence pairs), between the mean similarity values assigned by the annotators and the values predicted by the automatic procedures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Relatedness Sub-Task", "sec_num": "4.1" }, { "text": "Unsupervised Approach: MeanMaxSim. Table 1 shows the results obtained by MeanMaxSim, based on each of the three corpora, and by three other baselines:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Relatedness Sub-Task", "sec_num": "4.1" }, { "text": "\u2022 WO: The word-overlap baseline proposed by the organizers of the task, computed as the number of distinct tokens common to both sentences divided by the number of distinct tokens in the longer sentence, the number of most frequent words stripped from the sentences being optimized on the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Relatedness Sub-Task", "sec_num": "4.1" }, { "text": "\u2022 SWL: The word-overlap baseline computed as in WO, but using lemmas instead of word forms and applying the stop word list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Relatedness Sub-Task", "sec_num": "4.1" }, { "text": "\u2022 ADD: The simple additive compositional model, in which each sentence is represented by the sum of the vectors of the lemmas that compose it (stripping off stop words and using the best performing corpus) and the similarity is the cosine between these two vectors (Bestgen et al., 2010; Guevara, 2011).", "cite_spans": [ { "start": 265, "end": 287, "text": "(Bestgen et al., 2010;", "ref_id": "BIBREF7" }, { "start": 288, "end": 301, "text": "Guevara, 2011", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Relatedness Sub-Task", "sec_num": "4.1" }, { "text": "MeanMaxSim produces almost identical results regardless of the corpus used.
The lack of difference between the three corpora was unexpected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Relatedness Sub-Task", "sec_num": "4.1" }, { "text": "It could be related to the type of vocabulary used in the SICK materials, seemingly mostly frequent and concrete words whose use could be relatively similar in the three corpora. The performance of MeanMaxSim is clearly superior to that of all the other baselines; among these, the additive model is the worst. This result is important because it shows that this compositional model is not, for the SICK benchmark, the most interesting baseline against which to assess compositional approaches. Compared with the best performances of the other teams, MeanMaxSim is (as one would hope for a baseline) well below the most effective procedures, which reached correlations above 0.80. Supervised Approach. The supervised approach resulted in a correlation of 0.78044, a value well above all the baselines reported above. This correlation ranked the procedure sixth out of 17 teams, tied with another team (0.78019). The three best teams scored significantly higher, with correlations between 0.826 and 0.828.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Relatedness Sub-Task", "sec_num": "4.1" }, { "text": "Only the supervised approach was used for this sub-task. The proposed procedure achieved an accuracy of 79.998%, which ranks it sixth again, but out of 19 teams, still at a respectable distance from the best performance (84.575%).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Textual Entailment Sub-Task", "sec_num": "4.2" }, { "text": "The main contribution of this research seems to be the proposal of MeanMaxSim as a baseline for evaluating CDSMs. It outperforms a number of other baselines by a wide margin and is very easy to calculate. Compared to the word-overlap baseline, it has the advantage of taking into account the distributional similarity between words that is also exploited by compositional models. The proposed supervised approach achieved an acceptable result (sixth out of 17) and could easily be improved, for example by replacing standard linear regression with a procedure less sensitive to the risk of overfitting caused by the large number of predictors, such as Partial Least Squares regression (Guevara, 2011). However, since this approach is not compositional and its efficacy (compared to that of the others) is limited, it is not obvious that trying to improve it would be very useful.", "cite_spans": [ { "start": 675, "end": 690, "text": "(Guevara, 2011)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "Yves Bestgen is Research Associate of the Belgian Fund for Scientific Research (F.R.S.-FNRS).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The BNC Handbook: Exploring the British National Corpus with SARA", "authors": [ { "first": "Guy", "middle": [], "last": "Aston", "suffix": "" }, { "first": "Lou", "middle": [], "last": "Burnard", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aston, Guy, and Burnard, Lou (1998). The BNC Handbook: Exploring the British National Corpus with SARA.
Edinburgh: Edinburgh University Press.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Distributional memory: A general framework for corpus-based semantics", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lenci", "suffix": "" } ], "year": 2010, "venue": "Computational Linguistics", "volume": "36", "issue": "", "pages": "673--721", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baroni, Marco, and Lenci, Alessandro (2010). Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36, 673-721.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Frege in space: a program for compositional distributional semantics", "authors": [ { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2013, "venue": "Linguistic Issues in Language Technologies (LiLT), CSLI Publications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baroni, Marco, Bernardi, Raffaella, and Zamparelli, Roberto (2013). Frege in space: a program for compositional distributional semantics. In Annie Zaenen, Bonnie Webber and Martha Palmer (eds.), Linguistic Issues in Language Technologies (LiLT). CSLI Publications.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "SVDPACKC: Version 1.0 user's guide", "authors": [ { "first": "Michael", "middle": [], "last": "Berry", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Do", "suffix": "" }, { "first": "Gavin", "middle": [], "last": "O'Brien", "suffix": "" }, { "first": "Vijay", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Sowmini", "middle": [], "last": "Varadhan", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Berry, Michael, Do, Theresa, O'Brien, Gavin, Krishna, Vijay, and Varadhan, Sowmini (1993). SVDPACKC: Version 1.0 user's guide. Technical Report Number CS-93-194, University of Tennessee, Knoxville, TN.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "L'analyse s\u00e9mantique latente et l'identification des m\u00e9taphores", "authors": [], "year": 2002, "venue": "Actes de la 9\u00e8me Conf\u00e9rence annuelle sur le traitement automatique des langues naturelles", "volume": "", "issue": "", "pages": "331--337", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bestgen and Cabiaux (2002). L'analyse s\u00e9mantique latente et l'identification des m\u00e9taphores. In Actes de la 9\u00e8me Conf\u00e9rence annuelle sur le traitement automatique des langues naturelles (pp. 331-337).
Nancy: INRIA.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Towards automatic determination of the semantics of connectives in large newspaper corpora", "authors": [ { "first": "Yves", "middle": [], "last": "Bestgen", "suffix": "" }, { "first": "Liesbeth", "middle": [], "last": "Degand", "suffix": "" }, { "first": "Wilbert", "middle": [], "last": "Spooren", "suffix": "" } ], "year": 2006, "venue": "Discourse Processes", "volume": "41", "issue": "", "pages": "175--193", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bestgen, Yves, Degand, Liesbeth, and Spooren, Wilbert (2006). Towards automatic determination of the semantics of connectives in large newspaper corpora. Discourse Processes, 41, 175-193.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Using latent semantic analysis to measure coherence in essays by foreign language learners?", "authors": [ { "first": "Yves", "middle": [], "last": "Bestgen", "suffix": "" }, { "first": "Guy", "middle": [], "last": "Lories", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Thewissen", "suffix": "" } ], "year": 2010, "venue": "Proceedings of 10th International Conference on Statistical Analysis of Textual Data", "volume": "", "issue": "", "pages": "385--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bestgen, Yves, Lories, Guy, and Thewissen, Jennifer (2010). Using latent semantic analysis to measure coherence in essays by foreign language learners? In Sergio Bolasco, Isabella Chiari and Luca Giuliano (eds.), Proceedings of 10th International Conference on Statistical Analysis of Textual Data, 385-395. Roma: LED.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Computation of Correspondence Analysis", "authors": [ { "first": "Jorg", "middle": [], "last": "Blasius", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Greenacre", "suffix": "" } ], "year": 1994, "venue": "Correspondence Analysis in the Social Sciences", "volume": "", "issue": "", "pages": "53--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Blasius, Jorg, and Greenacre, Michael (1994). Computation of Correspondence Analysis. In Michael Greenacre and Jorg Blasius (eds.), Correspondence Analysis in the Social Sciences, pp. 53-75. Academic Press, London.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Evaluating distributional models of semantics for syntactically invariant inference", "authors": [ { "first": "Jackie", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Gerald", "middle": [], "last": "Penn", "suffix": "" } ], "year": 2012, "venue": "Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "33--43", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cheung, Jackie, and Penn, Gerald (2012). Evaluating distributional models of semantics for syntactically invariant inference.
In Conference of the European Chapter of the Association for Computational Linguistics, 33-43, Avignon, France.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Generating Typed Dependency Parses from Phrase Structure Parses", "authors": [ { "first": "Marie-Catherine", "middle": [], "last": "de Marneffe", "suffix": "" }, { "first": "Bill", "middle": [], "last": "MacCartney", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 5th Edition of the Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "de Marneffe, Marie-Catherine, MacCartney, Bill, and Manning, Christopher (2006). Generating Typed Dependency Parses from Phrase Structure Parses. In Proceedings of the 5th Edition of the Language Resources and Evaluation Conference. Genoa, Italy.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Indexing by latent semantic analysis", "authors": [ { "first": "Scott", "middle": [], "last": "Deerwester", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Dumais", "suffix": "" }, { "first": "George", "middle": [], "last": "Furnas", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Landauer", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Harshman", "suffix": "" } ], "year": 1990, "venue": "Journal of the American Society for Information Science", "volume": "41", "issue": "", "pages": "391--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deerwester, Scott, Dumais, Susan, Furnas, George, Landauer, Thomas, and Harshman, Richard (1990). Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41, 391-407.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A structured vector space model for word meaning in context", "authors": [ { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pado", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "897--906", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erk, Katrin, and Pado, Sebastian (2008). A structured vector space model for word meaning in context. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, 897-906, Honolulu, Hawaii.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Category-theoretic quantitative compositional distributional models of natural language semantics", "authors": [ { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grefenstette, Edward (2013). Category-theoretic quantitative compositional distributional models of natural language semantics.
PhD Thesis, University of Oxford, UK.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Computing semantic compositionality in distributional semantics", "authors": [ { "first": "Emiliano", "middle": [], "last": "Guevara", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Ninth International Conference on Computational Semantics", "volume": "", "issue": "", "pages": "135--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guevara, Emiliano (2011). Computing semantic compositionality in distributional semantics. In Proceedings of the Ninth International Conference on Computational Semantics, 135-144, Oxford, UK.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "M\u00e9thodes statistiques en sciences humaines", "authors": [ { "first": "David", "middle": [], "last": "Howell", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Howell, David (2008). M\u00e9thodes statistiques en sciences humaines. Bruxelles, Belgique: De Boeck Universit\u00e9.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Comprehension: A Paradigm for Cognition", "authors": [ { "first": "Walter", "middle": [], "last": "Kintsch", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kintsch, Walter (1998). Comprehension: A Paradigm for Cognition. New York: Cambridge University Press.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Predication", "authors": [ { "first": "Walter", "middle": [], "last": "Kintsch", "suffix": "" } ], "year": 2001, "venue": "Cognitive Science", "volume": "25", "issue": "2", "pages": "173--202", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kintsch, Walter (2001). Predication. Cognitive Science, 25(2), 173-202.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge", "authors": [ { "first": "Thomas", "middle": [], "last": "Landauer", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Dumais", "suffix": "" } ], "year": 1997, "venue": "Psychological Review", "volume": "104", "issue": "2", "pages": "211--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Landauer, Thomas, and Dumais, Susan (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge. Psychological Review, 104(2), 211-240.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "An introduction to latent semantic analysis", "authors": [ { "first": "Thomas", "middle": [], "last": "Landauer", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Foltz", "suffix": "" }, { "first": "Darrell", "middle": [], "last": "Laham", "suffix": "" } ], "year": 1998, "venue": "Discourse Processes", "volume": "25", "issue": "", "pages": "259--284", "other_ids": {}, "num": null, "urls": [], "raw_text": "Landauer, Thomas, Foltz, Peter, and Laham, Darrell (1998).
An introduction to latent semantic analysis. Discourse Processes, 25, 259-284.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Statistique exploratoire multidimensionnelle (3e \u00e9dition)", "authors": [ { "first": "Ludovic", "middle": [], "last": "Lebart", "suffix": "" }, { "first": "Marie", "middle": [], "last": "Piron", "suffix": "" }, { "first": "Alain", "middle": [], "last": "Morineau", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lebart, Ludovic, Piron, Marie, and Morineau, Alain (2000). Statistique exploratoire multidimensionnelle (3e \u00e9dition). Paris: Dunod.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "SemEval-2014 Task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment", "authors": [ { "first": "Marco", "middle": [], "last": "Marelli", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2014, "venue": "Proceedings of SemEval-2014", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marelli, Marco, Bentivogli, Luisa, Baroni, Marco, Bernardi, Raffaella, Menini, Stefano, and Zamparelli, Roberto (2014a). SemEval-2014 Task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of SemEval-2014: Semantic Evaluation Exercises. Dublin, Ireland.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A SICK cure for the evaluation of compositional distributional semantic models", "authors": [ { "first": "Marco", "middle": [], "last": "Marelli", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Menini", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Bentivogli", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zamparelli", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 9th Edition of the Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marelli, Marco, Menini, Stefano, Baroni, Marco, Bentivogli, Luisa, Bernardi, Raffaella, and Zamparelli, Roberto (2014b). A SICK cure for the evaluation of compositional distributional semantic models.
In Proceedings of the 9th Edition of the Language Resources and Evaluation Conference, Reykjavik, Iceland.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Composition in distributional models of semantics", "authors": [ { "first": "Jeff", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2010, "venue": "Cognitive Science", "volume": "34", "issue": "", "pages": "1388--1429", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mitchell, Jeff, and Lapata, Mirella (2010). Composition in distributional models of semantics. Cognitive Science, 34, 1388-1429.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Correspondence analysis in R, with two- and three-dimensional graphics: the ca package", "authors": [ { "first": "Oleg", "middle": [], "last": "Nenadic", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Greenacre", "suffix": "" } ], "year": 2007, "venue": "Journal of Statistical Software", "volume": "20", "issue": "3", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nenadic, Oleg, and Greenacre, Michael (2007). Correspondence analysis in R, with two- and three-dimensional graphics: the ca package. Journal of Statistical Software, 20(3), 1-13.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Probabilistic part-of-speech tagging using decision trees", "authors": [ { "first": "Helmut", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 1994, "venue": "Proceedings of the 1994 International Conference on New Methods in Language Processing", "volume": "", "issue": "", "pages": "44--49", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schmid, Helmut (1994). Probabilistic part-of-speech tagging using decision trees. In Proceedings of the 1994 International Conference on New Methods in Language Processing, 44-49, Manchester, UK.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Feature-rich part-of-speech tagging with a cyclic dependency network", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "252--259", "other_ids": {}, "num": null, "urls": [], "raw_text": "Toutanova, Kristina, Klein, Dan, Manning, Christopher, and Singer, Yoram (2003). Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics 2003, 252-259, Edmonton, Canada.", "links": null } }, "ref_entries": {} } }