{ "paper_id": "P05-1019", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:37:38.223831Z" }, "title": "Modelling the substitutability of discourse connectives", "authors": [ { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": {} }, "email": "b.hutchinson@sms.ed.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Processing discourse connectives is important for tasks such as discourse parsing and generation. For these tasks, it is useful to know which connectives can signal the same coherence relations. This paper presents experiments into modelling the substitutability of discourse connectives. It shows that substitutability affects distributional similarity. A novel variance-based function for comparing probability distributions is found to assist in predicting substitutability.", "pdf_parse": { "paper_id": "P05-1019", "_pdf_hash": "", "abstract": [ { "text": "Processing discourse connectives is important for tasks such as discourse parsing and generation. For these tasks, it is useful to know which connectives can signal the same coherence relations. This paper presents experiments into modelling the substitutability of discourse connectives. It shows that substitutability affects distributional similarity. A novel variance-based function for comparing probability distributions is found to assist in predicting substitutability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Discourse coherence relations contribute to the meaning of texts, by specifying the relationships between semantic objects such as events and propositions. They also assist in the interpretation of anaphora, verb phrase ellipsis and lexical ambiguities (Hobbs, 1985; Kehler, 2002; Asher and Lascarides, 2003) . 
Coherence relations can be implicit, or they can be signalled explicitly through the use of discourse connectives, e.g. because, even though.", "cite_spans": [ { "start": 253, "end": 266, "text": "(Hobbs, 1985;", "ref_id": "BIBREF5" }, { "start": 267, "end": 280, "text": "Kehler, 2002;", "ref_id": "BIBREF9" }, { "start": 281, "end": 308, "text": "Asher and Lascarides, 2003)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For a machine to interpret a text, it is important that it recognises coherence relations, and so as explicit markers discourse connectives are of great assistance (Marcu, 2000) . When discourse connectives are not present, the task is more difficult. For such cases, unsupervised approaches have been developed for predicting relations, by using sentences containing discourse connectives as training data (Marcu and Echihabi, 2002; Lapata and Lascarides, 2004) . However the nature of the relationship between the coherence relations signalled by discourse connectives and their empirical distributions has to date been poorly understood. In particular, one might wonder whether connectives with similar meanings also have similar distributions.", "cite_spans": [ { "start": 164, "end": 177, "text": "(Marcu, 2000)", "ref_id": "BIBREF14" }, { "start": 407, "end": 433, "text": "(Marcu and Echihabi, 2002;", "ref_id": "BIBREF13" }, { "start": 434, "end": 462, "text": "Lapata and Lascarides, 2004)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Concerning natural language generation, texts are easier for humans to understand if they are coherently structured. Addressing this, a body of research has considered the problems of generating appropriate discourse connectives (for example (Moser and Moore, 1995; Grote and Stede, 1998) ). 
One such problem involves choosing which connective to generate, as the mapping between connectives and relations is not one-to-one, but rather many-to-many. Siddharthan (2003) considers the task of paraphrasing a text while preserving its rhetorical relations. Clauses conjoined by but, or and when are separated to form distinct orthographic sentences, and these conjunctions are replaced by the discourse adverbials however, otherwise and then, respectively.", "cite_spans": [ { "start": 242, "end": 265, "text": "(Moser and Moore, 1995;", "ref_id": "BIBREF17" }, { "start": 266, "end": 288, "text": "Grote and Stede, 1998)", "ref_id": "BIBREF3" }, { "start": 450, "end": 468, "text": "Siddharthan (2003)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The idea underlying Siddharthan's work is that one connective can be substituted for another while preserving the meaning of a text. Knott (1996) studies the substitutability of discourse connectives, and proposes that substitutability can motivate theories of discourse coherence. Knott uses an empirical methodology to determine the substitutability of pairs of connectives. However this methodology is manually intensive, and Knott derives relationships for only about 18% of pairs of connectives. It would thus be useful if substitutability could be predicted automatically.", "cite_spans": [ { "start": 133, "end": 145, "text": "Knott (1996)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper proposes that substitutability can be predicted through statistical analysis of the contexts in which connectives appear. Similar methods have been developed for predicting the similarity of nouns and verbs on the basis of their distributional similarity, and many distributional similarity functions have been proposed for these tasks (Lee, 1999) . 
However substitutability is a more complex notion than similarity, and we propose a novel variance-based function for assisting in this task.", "cite_spans": [ { "start": 347, "end": 358, "text": "(Lee, 1999)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper constitutes a first step towards predicting substitutability of connectives automatically. We demonstrate that the substitutability of connectives has significant effects on both distributional similarity and the new variance-based function. We then attempt to predict substitutability of connectives using a simplified task that factors out the prior likelihood of being substitutable.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Two types of relationships between connectives are of interest: similarity and substitutability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Relationships between connectives", "sec_num": "2" }, { "text": "The concept of lexical similarity occupies an important role in psychology, artificial intelligence, and computational linguistics. For example, in psychology, Miller and Charles (1991) report that psychologists 'have largely abandoned \"synonymy\" in favour of \"semantic similarity\".' 
In addition, work in automatic lexical acquisition is based on the proposition that distributional similarity correlates with semantic similarity (Grefenstette, 1994; Curran and Moens, 2002; Weeds and Weir, 2003) .", "cite_spans": [ { "start": 160, "end": 185, "text": "Miller and Charles (1991)", "ref_id": "BIBREF16" }, { "start": 430, "end": 450, "text": "(Grefenstette, 1994;", "ref_id": "BIBREF2" }, { "start": 451, "end": 474, "text": "Curran and Moens, 2002;", "ref_id": "BIBREF1" }, { "start": 475, "end": 496, "text": "Weeds and Weir, 2003)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity", "sec_num": "2.1" }, { "text": "Several studies have found subjects' judgements of semantic similarity to be robust. For example, Miller and Charles (1991) elicited similarity judgements for 30 pairs of nouns such as cord-smile, and found a high correlation with judgements of the same data obtained over 25 years previously (Rubenstein and Goodenough, 1965) . Resnik (1999) repeated the experiment, and calculated an inter-rater agreement of 0.90. Resnik and Diab (2000) also performed a similar experiment with pairs of verbs (e.g. bathe-kneel). The level of inter-rater agreement was again significant (r = 0.76).", "cite_spans": [ { "start": 293, "end": 326, "text": "(Rubenstein and Goodenough, 1965)", "ref_id": "BIBREF20" }, { "start": 329, "end": 342, "text": "Resnik (1999)", "ref_id": "BIBREF19" }, { "start": 417, "end": 439, "text": "Resnik and Diab (2000)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Similarity", "sec_num": "2.1" }, { "text": "1. Take an instance of a discourse connective in a corpus. Imagine you are the writer that produced this text, but that you need to choose an alternative connective.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity", "sec_num": "2.1" }, { "text": "2. 
Remove the connective from the text, and insert another connective in its place.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity", "sec_num": "2.1" }, { "text": "3. If the new connective achieves the same discourse goals as the original one, it is considered substitutable in this context. Given two words, it has been suggested that if words have similar meanings, then they can be expected to have similar contextual distributions. The studies listed above have also found evidence that similarity ratings correlate positively with the distributional similarity of the lexical items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similarity", "sec_num": "2.1" }, { "text": "The notion of substitutability has played an important role in theories of lexical relations. A definition of synonymy attributed to Leibniz states that two words are synonyms if one word can be used in place of the other without affecting truth conditions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "Unlike similarity, the substitutability of discourse connectives has been previously studied. Halliday and Hasan (1976) note that in certain contexts otherwise can be paraphrased by if not, as in", "cite_spans": [ { "start": 94, "end": 119, "text": "Halliday and Hasan (1976)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "(1) It's the way I like to go to work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "One person and one line of enquiry at a time. Otherwise/if not, there's a muddle.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "They also suggest some other extended paraphrases of otherwise, such as under other circumstances. 
Knott (1996) systematises the study of the substitutability of discourse connectives. His first step is to propose a Test for Substitutability for connectives, which is summarised in Figure 1 . An application of the Test is illustrated by (2). Here seeing as was the connective originally used by the writer; however, because can be used instead. (2) Seeing as/because we've got nothing but circumstantial evidence, it's going to be difficult to get a conviction. (Knott, p. 177) However the ability to substitute is sensitive to the context. In other contexts, for example (3), the substitution of because for seeing as is not valid.", "cite_spans": [ { "start": 99, "end": 111, "text": "Knott (1996)", "ref_id": "BIBREF10" }, { "start": 563, "end": 570, "text": "(Knott,", "ref_id": null }, { "start": 571, "end": 578, "text": "p. 177)", "ref_id": null } ], "ref_spans": [ { "start": 282, "end": 290, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "(3) It's a fairly good piece of work, seeing as/#because you have been under a lot of pressure recently. (Knott, p. 177) Similarly, there are contexts in which because can be used, but seeing as cannot be substituted for it:", "cite_spans": [ { "start": 105, "end": 112, "text": "(Knott,", "ref_id": null }, { "start": 113, "end": 120, "text": "p. 177)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "(4) That proposal is useful, because/#seeing as it gives us a fallback position if the negotiations collapse. (Knott, p. 177) Knott's next step is to generalise over all contexts a connective appears in, and to define four substitutability relationships that can hold between a pair of connectives w 1 and w 2 . 
These relationships are illustrated graphically through the use of Venn diagrams in Figure 2 , and defined below.", "cite_spans": [ { "start": 110, "end": 117, "text": "(Knott,", "ref_id": null }, { "start": 118, "end": 125, "text": "p. 177)", "ref_id": null } ], "ref_spans": [ { "start": 396, "end": 404, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "\u2022 w 1 is a SYNONYM of w 2 if w 1 can always be substituted for w 2 , and vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "\u2022 w 1 and w 2 are EXCLUSIVE if neither can ever be substituted for the other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "\u2022 w 1 is a HYPONYM of w 2 if w 2 can always be substituted for w 1 , but not vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "\u2022 w 1 and w 2 are CONTINGENTLY SUBSTITUTABLE if each can sometimes, but not always, be substituted for the other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "Given examples (2)-(4) we can conclude that because and seeing as are CONTINGENTLY SUBSTITUTABLE (henceforth \"CONT. SUBS.\"). However this is the only relationship that can be established using a finite number of linguistic examples. The other relationships all involve generalisations over all contexts, and so rely to some degree on the judgement of the analyst. Examples of each relationship given by Knott (1996) include: given that and seeing as are SYNONYMS, on the grounds that is a HYPONYM of because, and because and now that are EXCLUSIVE.", "cite_spans": [ { "start": 403, "end": 415, "text": "Knott (1996)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "Although substitutability is inherently a more complex notion than similarity, distributional similarity is expected to be of some use in predicting substitutability relationships. For example, if two discourse connectives are SYNONYMS then we would expect them to have similar distributions. On the other hand, if two connectives are EXCLUSIVE, then we would expect them to have dissimilar distributions. However if the relationship between two connectives is HYPONYMY or CONT. SUBS. then we expect to have partial overlap between their distributions (consider Figure 2) , and so distributional similarity might not distinguish these relationships.", "cite_spans": [], "ref_spans": [ { "start": 562, "end": 571, "text": "Figure 2)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "The Kullback-Leibler (KL) divergence function is a distributional similarity function that is of particular relevance here since it can be described informally in terms of substitutability. 
Given cooccurrence distributions p and q, its mathematical definition can be written as: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutability", "sec_num": "2.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "D(p||q) = \\sum_x p(x) \\left( \\log \\frac{1}{q(x)} - \\log \\frac{1}{p(x)} \\right)", "eq_num": "(5)" } ], "section": "Substitutability", "sec_num": "2.2" }, { "text": "Figure 3 : Surprise in substituting w 2 for w 1 (darker shading indicates higher surprise; panel (e): w1 and w2 are EXCLUSIVE)", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "w2 w1", "sec_num": null }, { "text": "The value log(1/p(x)) has an informal interpretation as a measure of how surprised an observer would be to see event x, given prior likelihood expectations defined by p. Thus, if p and q are the distributions of words w 1 and w 2 then", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "w2 w1", "sec_num": null }, { "text": "D(p||q) = E p (surprise in seeing w 2 \u2212 surprise in seeing w 1 ) (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "w2 w1", "sec_num": null }, { "text": "where E p is the expectation function over the distribution of w 1 (i.e. p). That is, KL divergence measures how much more surprised we would be, on average, to see word w 2 rather than w 1 , where the averaging is weighted by the distribution of w 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "w2 w1", "sec_num": null }, { "text": "A distributional similarity function provides only a one-dimensional comparison of two distributions, namely how similar they are. However we can obtain an additional perspective by using a variance-based function. 
We now introduce a new function V by taking the variance of the surprise in seeing w 2 , over the contexts in which w 1 appears:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A variance-based function for distributional analysis", "sec_num": "3" }, { "text": "V (p, q) = Var(surprise in seeing w 2 ) = E p ((E p (log(1/q(x))) \u2212 log(1/q(x)))^2) (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A variance-based function for distributional analysis", "sec_num": "3" }, { "text": "Note that like KL divergence, V (p, q) is asymmetric. We now consider how the substitutability of connectives affects our expectations of the value of V . If two connectives are SYNONYMS then each can always be used in place of the other. Thus we would always expect a low level of surprise in seeing one Figure 3a .", "cite_spans": [], "ref_spans": [ { "start": 305, "end": 314, "text": "Figure 3a", "ref_id": null } ], "eq_spans": [], "section": "A variance-based function for distributional analysis", "sec_num": "3" }, { "text": "Relationship Function of w 1 to w 2 D(p||q) D(q||p) V (p, q) V (q, p) SYNONYM Low Low Low", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A variance-based function for distributional analysis", "sec_num": "3" }, { "text": "It follows that the variance in surprise is low. On the other hand, if two connectives are EXCLUSIVE then there would always be a high degree of surprise in seeing one in place of the other. This is indicated using dark shading in Figure 3e . Only one set is shaded because we need only consider the contexts in which w 1 is appropriate. In this case, the variance in surprise is again low. The situation is more interesting when we consider two connectives that are CONT. SUBS.. In this case substitutability (and hence surprise) is dependent on the context. This is illustrated using light and dark shading in Figure 3d . As a result, the variance in surprise is high. 
Finally, with HYPONYMY, the variance in surprise depends on whether the original connective was the HYPONYM or the HYPERNYM. Table 1 summarises our expectations of the values of KL divergence and V , for the various substitutability relationships. (KL divergence, unlike most similarity functions, is sensitive to the order of arguments related by hyponymy (Lee, 1999) .) Something happened and something else happened. Something happened or something else happened. 0 1 2 3 4 5 Figure 4 : Example experimental item. The experiments described below test these expectations using empirical data.", "cite_spans": [ { "start": 1028, "end": 1039, "text": "(Lee, 1999)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 231, "end": 240, "text": "Figure 3e", "ref_id": null }, { "start": 612, "end": 621, "text": "Figure 3d", "ref_id": null }, { "start": 796, "end": 803, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 1150, "end": 1158, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "A variance-based function for distributional analysis", "sec_num": "3" }, { "text": "We now describe our empirical experiments which investigate the connections between a) subjects' ratings of the similarity of discourse connectives, b) the substitutability of discourse connectives, and c) KL divergence and the new function V applied to the distributions of connectives. Our motivation is to explore how distributional properties of words might be used to predict substitutability. The experiments are restricted to connectives which relate clauses within a sentence. These include coordinating conjunctions (e.g. but) and a range of subordinators including conjunctions (e.g. because) as well as phrases introducing adverbial clauses (e.g. now that, given that, for the reason that). 
Adverbial discourse connectives are therefore not considered.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "This experiment tests the hypotheses that 1) subjects agree on the degree of similarity between pairs of discourse connectives, and 2) similarity ratings correlate with the degree of substitutability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: Subject ratings of similarity", "sec_num": "4.1" }, { "text": "We randomly selected 48 pairs of discourse connectives such that there were 12 pairs standing in each of the four substitutability relationships. To do this, we used substitutability judgements made by Knott (1996) , supplemented with some judgements of our own. Each experimental item consisted of the two discourse connectives along with dummy clauses, as illustrated in Figure 4 . The format of the experimental items was designed to indicate how a phrase could be used as a discourse connective (e.g. it may not be obvious to a subject that the phrase the moment is a discourse connective), but without providing complete semantics for the clauses, which might bias the subjects' ratings. (Table 2 : Similarity by substitutability relationship.) Forty native speakers of English participated in the experiment, which was conducted remotely via the internet.", "cite_spans": [ { "start": 202, "end": 214, "text": "Knott (1996)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 373, "end": 381, "text": "Figure 4", "ref_id": null }, { "start": 694, "end": 701, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Methodology", "sec_num": "4.1.1" }, { "text": "Leave-one-out resampling was used to compare each subject's ratings with the means of their peers' (Weiss and Kulikowski, 1991) . The average inter-subject correlation was 0.75 (Min = 0.49, Max = 0.86, StdDev = 0.09), which is comparable to previous results on verb similarity ratings (Resnik and Diab, 2000) . The effect of substitutability on similarity ratings can be seen in Table 2 . Post-hoc Tukey tests revealed all differences between means in Table 2 to be significant.", "cite_spans": [ { "start": 99, "end": 127, "text": "(Weiss and Kulikowski, 1991)", "ref_id": "BIBREF23" }, { "start": 285, "end": 308, "text": "(Resnik and Diab, 2000)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 379, "end": 386, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.1.2" }, { "text": "The results demonstrate that subjects' ratings of connective similarity show significant agreement and are robust enough for effects of substitutability to be found.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.1.2" }, { "text": "This experiment compares subjects' ratings of similarity with lexical co-occurrence data. It hypothesises that similarity ratings correlate with distributional similarity, but that neither correlates with the new variance in surprise function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Modelling similarity", "sec_num": "4.2" }, { "text": "Sentences containing discourse connectives were gathered from the British National Corpus and the world wide web, with discourse connectives identified on the basis of their syntactic contexts (for details, see Hutchinson (2004b) ). The mean number of sentences per connective was about 32, 000, although about 12% of these are estimated to be errors. From these sentences, lexical co-occurrence data were collected. 
Only co-occurrences with discourse adverbials and other structural discourse connectives were stored, as these had previously been found to be useful for predicting semantic features of connectives (Hutchinson, 2004a) .", "cite_spans": [ { "start": 211, "end": 229, "text": "Hutchinson (2004b)", "ref_id": "BIBREF7" }, { "start": 615, "end": 634, "text": "(Hutchinson, 2004a)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.2.1" }, { "text": "A skewed variant of the Kullback-Leibler divergence function was used to compare co-occurrence distributions (Lee, 1999, with \u03b1 = 0.95). Spearman's correlation coefficient for ranked data showed a significant correlation (r = \u22120.51, p < 0.001). (The correlation is negative because KL divergence is lower when distributions are more similar.) The strength of this correlation is comparable with similar results achieved for verbs (Resnik and Diab, 2000) , but not as great as has been observed for nouns (McDonald, 2000) . Figure 5 plots the mean similarity judgements against the distributional divergence obtained using discourse markers, and also indicates the substitutability relationship for each item. (Two outliers can be observed in the upper left corner; these were excluded from the calculations.)", "cite_spans": [ { "start": 430, "end": 453, "text": "(Resnik and Diab, 2000)", "ref_id": "BIBREF18" }, { "start": 504, "end": 520, "text": "(McDonald, 2000)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 523, "end": 531, "text": "Figure 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2.2" }, { "text": "The \"variance in surprise\" function introduced in the previous section was applied to the same cooccurrence data. 
1 These variances were compared to distributional divergence and the subjects' similarity ratings, but in both cases Spearman's correlation coefficient was not significant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2.2" }, { "text": "In combination with the previous experiment,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.2.2" }, { "text": "The previous experiments provide hope that substitutability of connectives might be predicted on the basis of their empirical distributions. However one complicating factor is that EXCLUSIVE is by far the most likely relationship, holding between about 70% of pairs. Preliminary experiments showed that the empirical evidence for other relationships was not strong enough to overcome this prior bias. We therefore attempted two pseudodisambiguation tasks which eliminated the effects of prior likelihoods. The first task involved distinguishing between the relationships whose connectives subjects rated as most similar, namely SYNONYMY and HYPONYMY. Triples of connectives p, q, q' were collected such that SYNONYM(p, q) and either HYPONYM(p, q') or HYPONYM(q', p) (we were not attempting to predict the order of HYPONYMY). The task was then to decide automatically which of q and q' is the SYNONYM of p.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 3: Predicting substitutability", "sec_num": "4.3" }, { "text": "The second task was identical in nature to the first; however, here the relationship between p and q was either SYNONYMY or HYPONYMY, while p and q' were either CONT. SUBS. or EXCLUSIVE. These two sets of relationships are those corresponding to high and low similarity, respectively. 
In combination, the two tasks are equivalent to predicting SYNONYMY or HYPONYMY from the set of all four relationships, by first distinguishing the high similarity relationships from the other two, and then making a finer-grained distinction between the two.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 3: Predicting substitutability", "sec_num": "4.3" }, { "text": "Substitutability relationships between 49 structural discourse connectives were extracted from Knott's (1996) classification. In order to obtain more evaluation data, we used Knott's methodology to obtain relationships between an additional 32 connectives (Table 3 : Distributional analysis by substitutability). This resulted in 46 triples p, q, q' for the first task, and 10,912 triples for the second task.", "cite_spans": [ { "start": 95, "end": 109, "text": "Knott's (1996)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 257, "end": 264, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Methodology", "sec_num": "4.3.1" }, { "text": "max(D 1 , D 2 ) max(V 1 , V 2 ) (V 1 \u2212 V 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.3.1" }, { "text": "The co-occurrence data from the previous section were re-used. These were used to calculate D(p||q) and V (p, q). Both of these are asymmetric, so for our purposes we took the maximum of applying their arguments in both orders. Recall from Table 1 that when two connectives are in a HYPONYMY relation we expect V to be sensitive to the order in which the connectives are given as arguments. To test this, we also calculated (", "cite_spans": [], "ref_spans": [ { "start": 240, "end": 247, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Methodology", "sec_num": "4.3.1" }, { "text": "V (p, q) \u2212 V (q, p))^2, i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.3.1" }, { "text": "e. 
the square of the difference of applying the arguments to V in both orders. The average values are summarised in Table 3 , with D 1 and D 2 (and V 1 and V 2 ) denoting different orderings of the arguments to D (and V ), and max denoting the function which selects the larger of two numbers.", "cite_spans": [], "ref_spans": [ { "start": 116, "end": 123, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Methodology", "sec_num": "4.3.1" }, { "text": "These statistics show that our theoretically motivated expectations are supported. In particular, (1) SYNONYMOUS connectives have the least distributional divergence and EXCLUSIVE connectives the most, (2) CONT. SUBS. and HYPONYMOUS connectives have the greatest values for V , and (3) V shows the greatest sensitivity to the order of its arguments in the case of HYPONYMY.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.3.1" }, { "text": "The co-occurrence data were used to construct a Gaussian classifier, by assuming the values for D and V are generated by Gaussians. 2 First, normal functions were used to calculate the likelihood ratio of p and q being in the two relationships:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\frac{P(syn|data)}{P(hyp|data)} = \\frac{P(syn)}{P(hyp)} \\cdot \\frac{P(data|syn)}{P(data|hyp)} \\quad (8) \\quad = 1 \\cdot \\frac{n(\\max(D_1, D_2); \\mu_{syn}, \\sigma_{syn})}{n(\\max(D_1, D_2); \\mu_{hyp}, \\sigma_{hyp})}", "eq_num": "(9)" } ], "section": "Methodology", "sec_num": "4.3.1" }, { "text": "2 KL divergence is right skewed, so a log-normal model was used to model D, whereas a normal model was used for V . 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.3.1" }, { "text": "Table 4: Accuracy on pseudodisambiguation task. max(D_1, D_2): 50.0% (SYN vs HYP), 76.1% (SYN/HYP vs EX/CONT); max(V_1, V_2): 84.8% (SYN vs HYP), 60.6% (SYN/HYP vs EX/CONT)", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Methodology", "sec_num": "4.3.1" }, { "text": "where n(x; \u00b5, \u03c3) is the normal function with mean \u00b5 and standard deviation \u03c3, and where \u00b5_syn, for example, denotes the mean of the Gaussian model for SYNONYMY. Next the likelihood ratio for p and q was divided by that for p and q'. If this value was greater than 1, the model predicted p and q were SYNONYMS, otherwise HYPONYMS. The same technique was used for the second task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "4.3.1" }, { "text": "A leave-one-out cross validation procedure was used. For each triple p, q, q', the data concerning the pairs p, q and p, q' were held back, and the remaining data used to construct the models. The results are shown in Table 4 . For comparison, a random baseline classifier achieves 50% accuracy.", "cite_spans": [], "ref_spans": [ { "start": 217, "end": 224, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "4.3.2" }, { "text": "The results demonstrate the utility of the new variance-based function V, which is better than KL divergence at distinguishing HYPONYMY from SYNONYMY (\u03c7^2 = 11.13, df = 1, p < 0.001), although it performs worse on the coarser-grained task. This is consistent with the expectations of Table 1 . The two classifiers were also combined by making a naive Bayes assumption. This gave an accuracy of 76.1% on the first task, which is significantly better than just using KL divergence (\u03c7^2 = 5.65, df = 1, p < 0.05), and not significantly worse than using V. 
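The Gaussian likelihood-ratio classifier of equations (8)-(9) and the triple-level decision rule can be sketched as follows, assuming equal priors (so the prior ratio is 1) and maximum-likelihood Gaussian fits; the training values and function names are illustrative, not from the paper.

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of a Gaussian with mean mu and standard deviation sigma.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def fit_gaussian(values):
    # Maximum-likelihood estimates of mean and standard deviation.
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    return mu, sigma

def likelihood_ratio(d, syn_divergences, hyp_divergences):
    # Equation (9): with equal priors, P(syn|data)/P(hyp|data) reduces to the ratio
    # of the two class-conditional densities. Because D is right-skewed, the
    # Gaussians are fitted to log D (a log-normal model), as in footnote 2.
    mu_s, sg_s = fit_gaussian([math.log(v) for v in syn_divergences])
    mu_h, sg_h = fit_gaussian([math.log(v) for v in hyp_divergences])
    x = math.log(d)
    return normal_pdf(x, mu_s, sg_s) / normal_pdf(x, mu_h, sg_h)

def classify_triple(d_pq, d_pq_prime, syn_divergences, hyp_divergences):
    # The ratio for (p, q) is divided by that for (p, q'); if the result exceeds 1,
    # p and q are predicted SYNONYMS, otherwise HYPONYMS.
    r = likelihood_ratio(d_pq, syn_divergences, hyp_divergences)
    r_prime = likelihood_ratio(d_pq_prime, syn_divergences, hyp_divergences)
    return "SYNONYMY" if r / r_prime > 1 else "HYPONYMY"
```

In a leave-one-out setting, the divergences for the held-out pairs (p, q) and (p, q') would simply be excluded from the training lists before fitting.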
The combination's accuracy on the second task was 76.2%, which is about the same as using KL divergence alone. This shows that combining similarity-based and variance-based measures can improve overall performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4.3.2" }, { "text": "The concepts of lexical similarity and substitutability are of central importance to psychology, artificial intelligence and computational linguistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "To our knowledge this is the first modelling study of how these concepts relate to lexical items involved in discourse-level phenomena. We found a three-way correspondence between data sources of quite distinct types: distributional similarity scores obtained from lexical co-occurrence data, substitutability judgements made by linguists, and the similarity ratings of naive subjects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "The substitutability of lexical items is important for applications such as text simplification, where it can be desirable to paraphrase one discourse connective using another. Ultimately we would like to automatically predict substitutability for individual tokens. However, predicting whether one connective can either a) always, b) sometimes or c) never be substituted for another is a step towards this goal. Our results demonstrate that these general substitutability relationships have empirical correlates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We have introduced a novel variance-based function of two distributions which complements distributional similarity. 
We demonstrated the new function's utility in helping to predict the substitutability of connectives, and it can be expected to have wider applicability to lexical acquisition tasks. In particular, it is expected to be useful for learning relationships which cannot be characterised purely in terms of similarity, such as hyponymy. In future work we will analyse further the empirical properties of the new function, and investigate its applicability to learning relationships between other classes of lexical items such as nouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "In practice, the skewed variant V(p, 0.95q + 0.05p) was used, in order to avoid problems arising when q(x) = 0. These results demonstrate a three-way correspondence between the human ratings of the similarity of a pair of connectives, their substitutability relationship, and their distributional similarity. Hutchinson (2005) presents further experiments on modelling connective similarity, and discusses their implications. This experiment also provides empirical evidence that the new variance-in-surprise function is not a measure of similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "I would like to thank Mirella Lapata, Alex Lascarides, Alistair Knott, and the anonymous ACL reviewers for their helpful comments. 
This research was supported by EPSRC Grant GR/R40036/01 and a University of Sydney Travelling Scholarship.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Logics of Conversation", "authors": [ { "first": "Nicholas", "middle": [], "last": "Asher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicholas Asher and Alex Lascarides. 2003. Logics of Conver- sation. Cambridge University Press.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Improvements in automatic thesaurus extraction", "authors": [ { "first": "R", "middle": [], "last": "James", "suffix": "" }, { "first": "M", "middle": [], "last": "Curran", "suffix": "" }, { "first": "", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Workshop on Unsupervised Lexical Acquisition", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "James R. Curran and M. Moens. 2002. Improvements in auto- matic thesaurus extraction. In Proceedings of the Workshop on Unsupervised Lexical Acquisition, Philadelphia, USA.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Explorations in Automatic Thesaurus Discovery", "authors": [ { "first": "Gregory", "middle": [], "last": "Grefenstette", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Grefenstette. 1994. Explorations in Automatic The- saurus Discovery. 
Kluwer Academic Publishers, Boston.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Discourse marker choice in sentence planning", "authors": [ { "first": "Brigitte", "middle": [], "last": "Grote", "suffix": "" }, { "first": "Manfred", "middle": [], "last": "Stede", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Ninth International Workshop on Natural Language Generation", "volume": "", "issue": "", "pages": "128--137", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brigitte Grote and Manfred Stede. 1998. Discourse marker choice in sentence planning. In Eduard Hovy, editor, Pro- ceedings of the Ninth International Workshop on Natural Language Generation, pages 128-137, New Brunswick, New Jersey. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Cohesion in English", "authors": [ { "first": "M", "middle": [], "last": "Halliday", "suffix": "" }, { "first": "R", "middle": [], "last": "Hasan", "suffix": "" } ], "year": 1976, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Halliday and R. Hasan. 1976. Cohesion in English. Long- man.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "On the coherence and structure of discourse", "authors": [ { "first": "A", "middle": [], "last": "Jerry", "suffix": "" }, { "first": "", "middle": [], "last": "Hobbs", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerry A Hobbs. 1985. On the coherence and structure of dis- course. 
Technical Report CSLI-85-37, Center for the Study of Language and Information, Stanford University.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Acquiring the meaning of discourse markers", "authors": [ { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL 2004)", "volume": "", "issue": "", "pages": "685--692", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Hutchinson. 2004a. Acquiring the meaning of discourse markers. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL 2004), pages 685-692.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Mining the web for discourse markers", "authors": [ { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC 2004)", "volume": "", "issue": "", "pages": "407--410", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Hutchinson. 2004b. Mining the web for discourse mark- ers. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC 2004), pages 407-410, Lisbon, Portugal.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Modelling the similarity of discourse connectives", "authors": [ { "first": "Ben", "middle": [], "last": "Hutchinson", "suffix": "" } ], "year": 2005, "venue": "To appear in Proceedings of the the 27th Annual Meeting of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Hutchinson. 2005. Modelling the similarity of discourse connectives. 
To appear in Proceedings of the 27th Annual Meeting of the Cognitive Science Society (CogSci2005).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Coherence, Reference and the Theory of Grammar", "authors": [ { "first": "Andrew", "middle": [], "last": "Kehler", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Kehler. 2002. Coherence, Reference and the Theory of Grammar. CSLI publications.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A data-driven methodology for motivating a set of coherence relations", "authors": [ { "first": "Alistair", "middle": [], "last": "Knott", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alistair Knott. 1996. A data-driven methodology for motivating a set of coherence relations. Ph.D. thesis, University of Edinburgh.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Inferring sentence-internal temporal relations", "authors": [ { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Human Language Technology Conference and the North American Chapter of the Association for Computational Linguistics Annual Meeting", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mirella Lapata and Alex Lascarides. 2004. Inferring sentence-internal temporal relations. 
In Proceedings of the Human Language Technology Conference and the North American Chapter of the Association for Computational Linguistics Annual Meeting, Boston, MA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Measures of distributional similarity", "authors": [ { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 1999, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lillian Lee. 1999. Measures of distributional similarity. In Proceedings of ACL 1999.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An unsupervised approach to recognizing discourse relations", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" }, { "first": "Abdessamad", "middle": [], "last": "Echihabi", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu and Abdessamad Echihabi. 2002. An unsupervised approach to recognizing discourse relations. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL-2002), Philadelphia, PA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "The Theory and Practice of Discourse Parsing and Summarization", "authors": [ { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. 
The MIT Press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Environmental determinants of lexical processing effort", "authors": [ { "first": "Scott", "middle": [], "last": "Mcdonald", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott McDonald. 2000. Environmental determinants of lexical processing effort. Ph.D. thesis, University of Edinburgh.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Contextual correlates of semantic similarity", "authors": [ { "first": "George", "middle": [ "A" ], "last": "Miller", "suffix": "" }, { "first": "William", "middle": [ "G" ], "last": "Charles", "suffix": "" } ], "year": 1991, "venue": "Language and Cognitive Processes", "volume": "6", "issue": "1", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A. Miller and William G. Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1-28.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Using discourse analysis and automatic text generation to study discourse cue usage", "authors": [ { "first": "M", "middle": [], "last": "Moser", "suffix": "" }, { "first": "J", "middle": [], "last": "Moore", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the AAAI 1995 Spring Symposium on Empirical Methods in Discourse Interpretation and Generation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Moser and J. Moore. 1995. Using discourse analysis and automatic text generation to study discourse cue usage. 
In Proceedings of the AAAI 1995 Spring Symposium on Empirical Methods in Discourse Interpretation and Generation.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Measuring verb similarity", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Diab", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Twenty Second Annual Meeting of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik and Mona Diab. 2000. Measuring verb similarity. In Proceedings of the Twenty Second Annual Meeting of the Cognitive Science Society, Philadelphia, US, August.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language", "authors": [ { "first": "Philip", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1999, "venue": "Journal of Artificial Intelligence Research", "volume": "11", "issue": "", "pages": "95--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philip Resnik. 1999. Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language. Journal of Artificial Intelligence Research, 11:95-130.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Contextual correlates of synonymy", "authors": [ { "first": "H", "middle": [], "last": "Rubenstein", "suffix": "" }, { "first": "J", "middle": [ "B" ], "last": "Goodenough", "suffix": "" } ], "year": 1965, "venue": "Computational Linguistics", "volume": "8", "issue": "", "pages": "627--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Rubenstein and J. B. Goodenough. 1965. Contextual correlates of synonymy. 
Computational Linguistics, 8:627-633.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Preserving discourse structure when simplifying text", "authors": [ { "first": "Advaith", "middle": [], "last": "Siddharthan", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2003 European Natural Language Generation Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Advaith Siddharthan. 2003. Preserving discourse structure when simplifying text. In Proceedings of the 2003 European Natural Language Generation Workshop.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A general framework for distributional similarity", "authors": [ { "first": "Julie", "middle": [], "last": "Weeds", "suffix": "" }, { "first": "David", "middle": [], "last": "Weir", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julie Weeds and David Weir. 2003. A general framework for distributional similarity. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2003), Sapporo, Japan, July.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Computer systems that learn", "authors": [ { "first": "Sholom", "middle": [ "M" ], "last": "Weiss", "suffix": "" }, { "first": "Casimir", "middle": [ "A" ], "last": "Kulikowski", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sholom M. Weiss and Casimir A. Kulikowski. 1991. Computer systems that learn. 
Morgan Kaufmann, San Mateo, CA.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Knott's Test for Substitutability" }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Venn diagrams representing relationships between distributions" }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "Similarity versus distributional divergence" }, "TABREF2": { "html": null, "num": null, "type_str": "table", "content": "", "text": "" }, "TABREF5": { "html": null, "num": null, "type_str": "table", "content": "
SYN vs HYP | SYN/HYP vs EX/CONT
", "text": "Input to Gaussian Model (column headers: SYN vs HYP, SYN/HYP vs EX/CONT)" } } } }