{ "paper_id": "Q15-1008", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:08:04.657215Z" }, "title": "A Bayesian Model of Grounded Color Semantics", "authors": [ { "first": "Brian", "middle": [], "last": "Mcmahan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rutgers University", "location": {} }, "email": "brian.mcmahan@rutgers.edu" }, { "first": "Matthew", "middle": [], "last": "Stone", "suffix": "", "affiliation": { "laboratory": "", "institution": "Rutgers University", "location": {} }, "email": "matthew.stone@rutgers.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Natural language meanings allow speakers to encode important real-world distinctions, but corpora of grounded language use also reveal that speakers categorize the world in different ways and describe situations with different terminology. To learn meanings from data, we therefore need to link underlying representations of meaning to models of speaker judgment and speaker choice. This paper describes a new approach to this problem: we model variability through uncertainty in categorization boundaries and distributions over preferred vocabulary. We apply the approach to a large data set of color descriptions, where statistical evaluation documents its accuracy. The results are available as a Lexicon of Uncertain Color Standards (LUX), which supports future efforts in grounded language understanding and generation by probabilistically mapping 829 English color descriptions to potentially context-sensitive regions in HSV color space.", "pdf_parse": { "paper_id": "Q15-1008", "_pdf_hash": "", "abstract": [ { "text": "Natural language meanings allow speakers to encode important real-world distinctions, but corpora of grounded language use also reveal that speakers categorize the world in different ways and describe situations with different terminology. To learn meanings from data, we therefore need to link underlying representations of meaning to models of speaker judgment and speaker choice. This paper describes a new approach to this problem: we model variability through uncertainty in categorization boundaries and distributions over preferred vocabulary. We apply the approach to a large data set of color descriptions, where statistical evaluation documents its accuracy. The results are available as a Lexicon of Uncertain Color Standards (LUX), which supports future efforts in grounded language understanding and generation by probabilistically mapping 829 English color descriptions to potentially context-sensitive regions in HSV color space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "To ground natural language semantics in real-world data at large scale requires researchers to confront the vocabulary problem (Furnas et al., 1987) . Much of what people say falls in a long tail of increasingly infrequent and specialized items. Moreover, the choice of how to categorize and describe realworld data varies across people. We can't account for this complexity by deriving one definitive mapping between words and the world.", "cite_spans": [ { "start": 127, "end": 148, "text": "(Furnas et al., 1987)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We see this complexity already in free text descriptions of color patches. 
English has fewer than a dozen basic color words (Berlin, 1991) , but people's descriptions of colors are much more variable than this would suggest. Measured on the corpus described in Section 4.1, there's an average of 3.845 bits of information in a color description given the color it describes-comparable to rolling a 14-sided die. Figure 1 summarizes the data and plots the entropy of descriptions encountered within small bins of color space. The bins are aggregated over the Saturation and Value dimensions and indexed on the x-axis by the Hue dimension. [Figure 1: A visualization of the variability of the descriptions used to name colors within small bins of color space. For each Hue value, the entropy values (in bits) for each bin along the Saturation and Value dimensions are grouped and plotted as box plots. The dotted line corresponds to a random choice out of fourteen items and to the perplexity of a histogram model trained on the corpus.] There's little reason to think that this variability conceals consistent meanings. In formal semantics, one of the hallmarks of vague language is that speakers can make it more precise in alternative, incompatible ways (Barker, 2002) . We see this in practice as well, for example with the image of Figure 2 , where subjects comprehensibly describe either of two dogs as the tan one. [Figure 2: From Young et al. (2014) , whose subjects describe these dogs as a brown dog and a tan one or a tan dog and a white one.]", "cite_spans": [ { "start": 526, "end": 540, "text": "(Berlin, 1991)", "ref_id": "BIBREF4" }, { "start": 1258, "end": 1272, "text": "(Barker, 2002)", "ref_id": "BIBREF3" }, { "start": 1369, "end": 1388, "text": "Young et al. (2014)", "ref_id": "BIBREF50" } ], "ref_spans": [ { "start": 113, "end": 121, "text": "Figure 1", "ref_id": null }, { "start": 814, "end": 822, "text": "Figure 1", "ref_id": null }, { "start": 1338, "end": 1346, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Systems that robustly understand or generate descriptions of colors in situated dialogue need models of meaning that capture this variability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper makes two key contributions towards this challenge. First, we present a methodology to infer a corpus-based model of meaning that accounts for possible differences in word usage across different speakers. As we explain in Section 2, our approach differs from the typical perspective in grounded semantics (Tellex et al., 2011a; Matuszek et al., 2012; Krishnamurthy and Kollar, 2013) , where a meaning is reduced to a single classifier that collapses patterns of variation. Instead, our model allows for variability in meaning by positing uncertainty in classification boundaries that can get resolved when a speaker chooses to use a word on a specific occasion. We explain the model and its theoretical rationale in Section 3.", "cite_spans": [ { "start": 316, "end": 338, "text": "(Tellex et al., 2011a;", "ref_id": "BIBREF47" }, { "start": 339, "end": 361, "text": "Matuszek et al., 2012;", "ref_id": "BIBREF36" }, { "start": 362, "end": 393, "text": "Krishnamurthy and Kollar, 2013)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Second, we develop and release a Lexicon of Uncertain Color Standards (LUX) by applying our methodology to color descriptions. 
LUX is an interpretation of 829 distinct English color descriptions as distributions over regions of the Hue-Saturation-Value color space that describe their possible meanings. As we describe in Section 4, the model is trained by machine learning methods from a subset of Randall Munroe's 2010 publicly-available corpus of 3.4 million crowdsourced free-text descriptions of color patches (Munroe, 2010) . Data, models and visualization software are available at http: //mcmahan.io/lux/.", "cite_spans": [ { "start": 515, "end": 529, "text": "(Munroe, 2010)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Statistical evaluation of our model against two alternative approaches documents its effectiveness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The model makes better quantitative predictions than a brute-force memorization model; it seems to generalize to unseen data in more meaningful ways. At the same time, our meanings work as well as special-purpose models to explain speaker choice, even though our model supports diverse other reasoning. See Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We see color as the first of many applications of our methodology, and are optimistic about learning vague meanings for other continuous domains as quantity, space, and time. At the same time, the methodology opens up new prospects for research on negotiating meaning interactively (Larsson, 2013) with principled representations and with broad coverage. In fact, many practical situated dialogue systems already identify unfamiliar objects by color. We expect that LUX will provide a broadly useful resource to extend the range of descriptions such systems can generate and understand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Grounded semantics is the task of mapping representations of linguistic meaning to the physical world, whether by perceptual mechanisms (Harnad, 1990) or with the assistance of social interaction (DeVault et al., 2006) . In this paper, we are particularly concerned with grounding the meanings of primitive vocabulary. However, the ultimate test of grounded semantics-whether it is understanding commands (Winograd, 1970; Tellex et al., 2011b) , describing states of the world (Chen and Mooney, 2008) , or identifying objects (Matuszek et al., 2012; Krishnamurthy and Kollar, 2013; Dawson et al., 2013) -is the ability to interpret or generate utterances using lexical and compositional semantics so as to evoke appropriate real-world referents. Grounded semantics therefore involves more than just quantifying the associations between words and perceptual representations, as Chuang et al. (2008) and Heer and Stone (2012) do for color. Grounded semantics involves interpreting semantic primitives in terms of composable categories that let systems discriminate between cases where a word applies and cases where the word does not apply. 
(Our evaluation compares models of grounded semantics to more direct models of word-world associations.)", "cite_spans": [ { "start": 136, "end": 150, "text": "(Harnad, 1990)", "ref_id": "BIBREF24" }, { "start": 196, "end": 218, "text": "(DeVault et al., 2006)", "ref_id": "BIBREF16" }, { "start": 405, "end": 421, "text": "(Winograd, 1970;", "ref_id": "BIBREF49" }, { "start": 422, "end": 443, "text": "Tellex et al., 2011b)", "ref_id": "BIBREF48" }, { "start": 477, "end": 500, "text": "(Chen and Mooney, 2008)", "ref_id": "BIBREF10" }, { "start": 526, "end": 549, "text": "(Matuszek et al., 2012;", "ref_id": "BIBREF36" }, { "start": 550, "end": 581, "text": "Krishnamurthy and Kollar, 2013;", "ref_id": "BIBREF31" }, { "start": 582, "end": 602, "text": "Dawson et al., 2013)", "ref_id": "BIBREF15" }, { "start": 877, "end": 897, "text": "Chuang et al. (2008)", "ref_id": "BIBREF13" }, { "start": 902, "end": 923, "text": "Heer and Stone (2012)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Previous research has modeled these categories as regions of suitable perceptual feature spaces. Researchers have explored explicit spaces of high-level perceptual attributes (Farhadi et al., 2009; Silberer et al., 2013) , approximations to such spaces (Matuszek et al., 2012) , or low-level feature spaces such as Bag of Visual Words (Bruni et al., 2012) or Histogram of Gradients (Krishnamurthy and Kollar, 2013) . We specifically follow G\u00e4rdenfors (2000) and J\u00e4ger (2010) in assuming that color categories are convex regions in an underlying color space, and are not just determined by prototypical color values, such as in Andreas and Klein (2014) . However, unlike previous grounded semantics, we do not assume that words name categories unequivocally. Speakers may vary in how they interpret a word, so we treat the link between words and categories probabilistically. The difference makes training our model more indirect than previous approaches to grounded meaning. In particular, our model introduces a new layer of uncertainty that describes what category the speaker uses.", "cite_spans": [ { "start": 175, "end": 197, "text": "(Farhadi et al., 2009;", "ref_id": "BIBREF19" }, { "start": 198, "end": 220, "text": "Silberer et al., 2013)", "ref_id": "BIBREF45" }, { "start": 253, "end": 276, "text": "(Matuszek et al., 2012)", "ref_id": "BIBREF36" }, { "start": 335, "end": 355, "text": "(Bruni et al., 2012)", "ref_id": "BIBREF8" }, { "start": 382, "end": 414, "text": "(Krishnamurthy and Kollar, 2013)", "ref_id": "BIBREF31" }, { "start": 440, "end": 457, "text": "G\u00e4rdenfors (2000)", "ref_id": "BIBREF21" }, { "start": 462, "end": 474, "text": "J\u00e4ger (2010)", "ref_id": "BIBREF27" }, { "start": 627, "end": 651, "text": "Andreas and Klein (2014)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Similar kinds of uncertainty can be found in Bayesian models of speaker strategy, such as that of Smith et al. (2013) . However, this research has assumed that speakers aim to be as informative as possible. We have no evidence that our speakers do that. We assume only that speakers' utterances are reliable and mirror prevailing usage.", "cite_spans": [ { "start": 98, "end": 117, "text": "Smith et al. 
(2013)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Prior work by cognitive scientists has studied color terms extensively, but focused on basic onesmonolexemic, top-level color words with general application and high frequency in a language (Kay et al., 2009; Lammens, 1994) . These color categories seem to shape people's expectations and memory for colors (Persaud and Hemmer, 2014) , and patterns of color naming can therefore enhance software for helping people organize and interact with color (Chuang et al., 2008; Heer and Stone, 2012) . Moreover, crosslinguistic evidence suggests that the human perceptual system places strong biases on the meanings of the basic color terms (Regier et al., 2005) , perhaps because basic terms must partition the perceptual space in an efficient way (Regier et al., 2007) . We depart from research on basic color naming in considering a much wider range of terms, much like Andreas and Klein (2014) . We consider subordinate, non-basic terms like beige or lavender; modified colors like light blue or bright green; and named subcategories like olive green, navy blue or brick red.", "cite_spans": [ { "start": 190, "end": 208, "text": "(Kay et al., 2009;", "ref_id": "BIBREF29" }, { "start": 209, "end": 223, "text": "Lammens, 1994)", "ref_id": "BIBREF33" }, { "start": 307, "end": 333, "text": "(Persaud and Hemmer, 2014)", "ref_id": "BIBREF42" }, { "start": 448, "end": 469, "text": "(Chuang et al., 2008;", "ref_id": "BIBREF13" }, { "start": 470, "end": 491, "text": "Heer and Stone, 2012)", "ref_id": "BIBREF25" }, { "start": 633, "end": 654, "text": "(Regier et al., 2005)", "ref_id": "BIBREF43" }, { "start": 741, "end": 762, "text": "(Regier et al., 2007)", "ref_id": "BIBREF44" }, { "start": 865, "end": 889, "text": "Andreas and Klein (2014)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In order to use semantic primitives for understanding, it's necessary to combine them into an integrated sentence-level representation: this is the problem of semantic parsing. Semantic parsers can be built by hand (Winograd, 1970) , induced through inductive logic programming (Zelle and Mooney, 1996) , or treated as a structured classification problem (Zettlemoyer and Collins, 2005) . Once a suitable logical form is derived, interpretation typically involves a recursive process of finding referents that fit lexical categories and relationships (Mavridis and Roy, 2006; Tellex et al., 2011a) . While this paper does not explicitly address how our meanings might be used in conjunction with such techniques, we see no fundamental obstacle to doing so-for example, by resolving references probabilistically and marginalizing over uncertainty in meaning.", "cite_spans": [ { "start": 215, "end": 231, "text": "(Winograd, 1970)", "ref_id": "BIBREF49" }, { "start": 278, "end": 302, "text": "(Zelle and Mooney, 1996)", "ref_id": "BIBREF51" }, { "start": 355, "end": 386, "text": "(Zettlemoyer and Collins, 2005)", "ref_id": "BIBREF52" }, { "start": 551, "end": 575, "text": "(Mavridis and Roy, 2006;", "ref_id": "BIBREF37" }, { "start": 576, "end": 597, "text": "Tellex et al., 2011a)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our model involves two significant innovations over previous approaches to grounded meaning. 
The first is to capture the vagueness and flexibility of grounded meaning with semantic representations that treat meaning as uncertain. We represent the semantics of a color description with a distribution over color categories, which weights possible meanings by the relative likelihood of a speaker using this meaning on any particular occasion. For example, speakers might associate yellowish green with a range of possible meanings, differing in how far the color category extends into green hues. By representing uncertainty about meaning, our model makes room to capture variability in language use. For example, it implicitly quantifies how likely speakers are to use words differently, as with the two interpretations of tan in Figure 2 .", "cite_spans": [], "ref_spans": [ { "start": 830, "end": 838, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Using Vague Color Terms: A Model", "sec_num": "3" }, { "text": "Our second contribution is our simple model of the relationship between semantics and pragmatics. We assume that speakers' choices mirror established patterns. In particular, the model learns a measure of availability for each color term that tracks how frequently speakers tend to use it when it is applicable. For example, although the expressions yellowish green and chartreuse are associated with very similar color categories, people say yellowish green much more often: it has a higher availability. Empirically, we find few terms with high availability and a long tail of terms with lower availabilities. We assume speakers simply sample applicable terms from this distribution, which predicts the long tail of observed responses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Using Vague Color Terms: A Model", "sec_num": "3" }, { "text": "Mathematically, we develop our approach through the rational analysis methodology for explaining human behavior proposed by Anderson (1991) , along with methodological insights from the linguistics and philosophy of vagueness. In the remainder of this section, we explain the theoretical antecedents in perceptual science, linguistics and cognitive modeling that inform our approach.", "cite_spans": [ { "start": 124, "end": 139, "text": "Anderson (1991)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Using Vague Color Terms: A Model", "sec_num": "3" }, { "text": "Color can be defined as sensations by which the perceptual system tracks the diffuse reflectance of objects, despite variability, uncertainty and ambiguity in the visual input. Red, green, and blue cones in the retina allow the visual system to coarsely estimate frequency bands in the spectrum of incoming light. Cameras and screens that use the redgreen-blue (RGB) color space are designed roughly to correspond to these responses. However, colors in the visual system summarize spectral profiles rather than mere wavelengths of light. For example, we see colors like cyan (green plus blue without red), magenta (blue plus red without green) and yellow (red plus green without blue) as intermediate saturated colors between the familiar primaries. This naturally leads to a wheel of hues describing the relative prominence of different spectral components along a continuum. 
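To make the hue wheel concrete, here is a small illustration of our own (not part of the original study) using python's colorsys module, which is adopted below: the RGB primaries and their pairwise mixtures land at evenly spaced points around the hue circle.

```python
# Illustrative only: primaries and their pairwise mixtures fall at evenly
# spaced hues, which motivates treating Hue as a wheel.
import colorsys

for name, rgb in [("red", (1, 0, 0)), ("yellow", (1, 1, 0)), ("green", (0, 1, 0)),
                  ("cyan", (0, 1, 1)), ("blue", (0, 0, 1)), ("magenta", (1, 0, 1))]:
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"{name:8s} hue={h:.3f}")  # 0.000, 0.167, 0.333, 0.500, 0.667, 0.833
```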
Fairchild (2013) provides an overview of color appearance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Color Categories", "sec_num": "3.1" }, { "text": "To capture this variation, we'll work in the simple hue-saturation-value (HSV) color space that's common in computer graphics and color picker user interfaces (Hughes et al., 2013) and implemented in python's native colorsys package. This coordinate system represents colors with three distinct qualitative dimensions: Hue (H) represents changes in tint around a color wheel, Saturation (S) represents the relative proportion of color versus gray, and Value (V) represents the location on the white-black continuum. We will associate color categories with rectangular box-shaped regions in HSV space. More sophisticated color spaces have been developed to describe the psychophysics of color more precisely, but they depend on the photometric illumination and other aspects of the viewing context that were not controlled in the collection of the data we are using (Fairchild, 2013).", "cite_spans": [ { "start": 159, "end": 180, "text": "(Hughes et al., 2013)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Color Categories", "sec_num": "3.1" }, { "text": "Our assumption is that color terms are associated probabilistically with color categories. We illustrate the idea for the color label yellowish-green through the plot in Figure 3 . The plot shows variation in use of the term across the Hue dimension: the bar graph is a scaled histogram of the responses in the data we use. There is a range of colors where people use yellowish green often, surrounded by borderline cases where it becomes increasingly infrequent. We represent this variability by assuming that the boundaries that delimit the color are uncertain. In any utterance, yellowish green fits only those Hue values that are above a minimum threshold \u03c4 Lower and below a maximum threshold \u03c4 Upper . However, it is uncertain which thresholds a speaker will use. The model describes this variability with probability density functions. They are shown for yellowish green in Figure 3 as the \u03c4 distributions. The figure shows that there is a central range of hues, between the \u03c4 distributions, that is definitely yellowish green. The \u03c4 distributions peak at the most likely boundaries for yellowish green, encompassing a broad region that's frequently called yellowish green. Further away, threshold values and yellowish green utterances alike become rapidly less likely.", "cite_spans": [], "ref_spans": [ { "start": 170, "end": 178, "text": "Figure 3", "ref_id": "FIGREF3" }, { "start": 881, "end": 889, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Semantic Representation", "sec_num": "3.2" }, { "text": "Our representation is motivated by Barker (2002) and Lassiter (2009) , who show how sets of possible thresholds 1 can account for many of our intuitions about the use of vague language. Their analysis invites us to capture semantic variability through two geometric constructs. First, there is a certain interval, parameterized by two points, \u00b5 Lower and \u00b5 Upper , within which a color description definitely applies. Outside this interval are regions of borderline cases, delimited by probabilistically-varying thresholds \u03c4 Lower and \u03c4 Upper , where the color description sometimes applies. 
We represent the position of the threshold with a \u0393(\u03b1, \u03b2) distribution, a standard statistical tool to model processes that start, continue indefinitely, and stop, like waiting times. 2 We can determine a likelihood that a description fits a color by marginalizing over the thresholds: this gives the black curve visualized in Figure 3 . As we describe in Section 3.3, we can use this to account for the graded responses from subjects that we observe near color boundaries.", "cite_spans": [ { "start": 35, "end": 48, "text": "Barker (2002)", "ref_id": "BIBREF3" }, { "start": 53, "end": 68, "text": "Lassiter (2009)", "ref_id": "BIBREF35" }, { "start": 776, "end": 777, "text": "2", "ref_id": null } ], "ref_spans": [ { "start": 919, "end": 927, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Semantic Representation", "sec_num": "3.2" }, { "text": "We summarize with a formal definition of our semantic representation. Let X be the 3D space of HSV colors and let x \u2208 X be a measured color value. Each color label k has definite boundaries, \u00b5 Lower and \u00b5 Upper in X, delimiting a box of HSV color space. Surrounding the definite region are regions of uncertainty: the set of possible boundaries beyond \u00b5. These are represented by probability distributions over lower and upper threshold values in each dimension. We'll represent these thresholds by \u03c4 j,d", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Representation", "sec_num": "3.2" }, { "text": "k where k \u2208 K indexes the color label, j \u2208 {Lower/L, Upper/U} indexes the boundary, and d \u2208 {H, S, V} indexes color components. We assume the thresholds are distributed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Representation", "sec_num": "3.2" }, { "text": "\u03c4 Lower,d k \u223c \u00b5 Lower,d k \u2212 \u0393(\u03b1 Lower,d k , \u03b2 Lower,d k ) \u03c4 Upper,d k \u223c \u00b5 Upper,d k + \u0393(\u03b1 Upper,d k , \u03b2 Upper,d k ) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Representation", "sec_num": "3.2" }, { "text": "The meaning of a color term is thus a \"blurry box\". The distribution lets us determine the probability of Figure 4 : The Rational Observer observes a color patch, x. The applicability of each label (k true ) is based upon the label parameters (\u03b1, \u03b2, \u00b5) and x. The label (k said ) is sampled proportional to the applicability and a background weight: how often a label is said when it applies. a point x falling into the color category k as in Eq. 2. We also use the compact notation in Eq. 3.", "cite_spans": [], "ref_spans": [ { "start": 106, "end": 114, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Semantic Representation", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (\u03c4 Lower, H k < x H < \u03c4 Upper, H k ) \u00d7 P (\u03c4 Lower, S k < x S < \u03c4 Upper, S k ) \u00d7 P (\u03c4 Lower, V k < x V < \u03c4 Upper, V k ) (2) = d P (\u03c4 L,d k < x d i < \u03c4 U,d k )", "eq_num": "(3)" } ], "section": "Semantic Representation", "sec_num": "3.2" }, { "text": "Our goal is to learn probabilistic representations of the meanings of color terms from subjects' responses. To do this, we need not only a framework for representing colors but also a model of how subjects choose color terms. 
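As groundwork for that model, the applicability defined by Eqs. 1 and 3 can be computed directly from the threshold distributions. The following is a minimal sketch of ours, not the released LUX implementation: it assumes scipy.stats.gamma for the Γ(α, β) thresholds (with β treated as a rate parameter) and uses hypothetical parameter values purely for illustration.

```python
# Sketch of Eqs. 1 and 3: probability that a color x falls inside a label's
# "blurry box". scipy.stats.gamma and the parameter values are illustrative
# assumptions, not fitted LUX parameters.
from scipy.stats import gamma

def phi(x, mu_lo, mu_hi, a_lo, b_lo, a_hi, b_hi):
    """P(tau_lower < x < tau_upper) on one color dimension."""
    if x <= mu_lo:                                   # borderline region below
        return gamma.sf(mu_lo - x, a_lo, scale=1.0 / b_lo)
    if x >= mu_hi:                                   # borderline region above
        return gamma.sf(x - mu_hi, a_hi, scale=1.0 / b_hi)
    return 1.0                                       # inside the certain interval

def applicability(hsv, params):
    """Product over the H, S, V dimensions (Eq. 3)."""
    p = 1.0
    for x, dim_params in zip(hsv, params):
        p *= phi(x, *dim_params)
    return p

# Hypothetical "yellowish green"-style parameters:
# (mu_lower, mu_upper, alpha_lower, beta_lower, alpha_upper, beta_upper)
params = [(65.0, 90.0, 2.0, 0.4, 2.0, 0.4),   # Hue (degrees)
          (0.4, 1.0, 1.5, 8.0, 1.5, 8.0),     # Saturation
          (0.5, 1.0, 1.5, 8.0, 1.5, 8.0)]     # Value
print(applicability((80.0, 0.8, 0.9), params))   # inside the certain box -> 1.0
print(applicability((105.0, 0.8, 0.9), params))  # borderline hue -> well below 1.0
```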
Inspired by rational analysis (Anderson, 1991), we assume that speakers' choices match their communicative goals and their semantic knowledge. We leverage this assumption to derive a Bayes Rational Observer model linking semantics to observed color descriptions. The graphical model in Figure 4 formalizes our approach. We start from an observed color patch, x. The Rational Observer uses the \u03c4 -distributions for each color description k to determine the likelihood that the speaker judges k applicable. As defined in Eq. 3, the likelihood is the subset of possible boundaries which contain the target color value. Normally, many descriptions will be applicable. Which the speaker chooses depends further on the availability of the label-a background measure of how frequently a label is chosen when it's applicable. Intuitively, availability creates a bias for easy descriptions, capturing how natural or ordinary a descrip-tion is in language use, how easily it springs to mind or how easily it is understood.", "cite_spans": [], "ref_spans": [ { "start": 512, "end": 520, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Rational Observer Model", "sec_num": "3.3" }, { "text": "We formalize this as a generative model. As we explain in Section 4, we infer the parameters from our data. In Eq. 4, we consider the conditional distribution of a subject observing a color patch given HSV value x and labeling it k:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rational Observer Model", "sec_num": "3.3" }, { "text": "(4) P (k said , k true |x) = P (k said |k true )P (k true |x)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rational Observer Model", "sec_num": "3.3" }, { "text": "In this equation, k said is the event that the subject responds to x with label k and k true is the event that the subject judges k true of the HSV value x. The two factors of Eq. 4 are respectively the availability and applicability of the color label.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rational Observer Model", "sec_num": "3.3" }, { "text": "Availability: The prior P (k said |k true ) quantifies the rate at which label k is used when it applies. We refer to this quantity as the availability and denote it as \u03b1 k . Availability captures the observed bias for frequent color terms. When multiple color labels fit a color value, those with higher availability will be used more often, but those with lower availability will still get used. This effect is partially responsible for the long tail of subjects' responses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rational Observer Model", "sec_num": "3.3" }, { "text": "Applicability: The second factor, P (k true |x), is the probability that k is true of, or applies to, the color value x. We calculate the applicability by marginalizing over all possible thresholds as in Eq. 3. In other words, we calculate the probability mass of the boundaries which allow for this description to apply. We treat each applicability judgment as independent of others. This implies that the relative frequency at which we see a color description used is directly proportional to the proportion of boundaries which license it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rational Observer Model", "sec_num": "3.3" }, { "text": "For clearer notation and parameter estimation, we track thresholds with a piecewise function \u03c6 d k (x d ) as in Eq. 
5 and Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 130, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Rational Observer Model", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c6 d k (x d ) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 P (x d > \u03c4 L,d k ), x d \u2264 \u00b5 L,d k P (x d < \u03c4 U,d k ), x d \u2265 \u00b5 U,d k 1, otherwise", "eq_num": "(5)" } ], "section": "Rational Observer Model", "sec_num": "3.3" }, { "text": "Finally, Eq. 6 rewrites Eq. 4 to make the applicability and availability explicit. The model treats this equation as the probability of success for a Bernoulli trial and the data as sampled from Categorical distributions formed by the set of K Bernoulli random variables. This is discussed further in Section 4.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rational Observer Model", "sec_num": "3.3" }, { "text": "(6) P (k said , k true |x) = \u03b1 k d \u03c6 d k (x d )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rational Observer Model", "sec_num": "3.3" }, { "text": "We worked with Randall Munroe's crowdsourced corpus of color judgments, and fit the model using the Metropolis-Hastings Markov Chain Monte Carlo, a Gaussian random walk optimization method. This form of approximate Bayesian inference is described in Section 4.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Learning Experiment", "sec_num": "4" }, { "text": "In 2010, Munroe elicited descriptions of color patches over the web. His platform asked users for background information such as sex, colorblindness, and monitor type, then presented color patches and let the user freely name them. The setup didn't ensure that users see controlled colors or that users' responses are reliable, but the experiment collected over 3.4M items pairing RGB values with text descriptions. Munroe's methodology, data and results are published online (Munroe, 2010) . 3 Munroe summarizes his results with 954 idealized colors-RGB values that best exemplify high frequency color labels. In effect, Munroe's summary offers a prototype theory of color vocabulary, like that of Andreas and Klein (2014) . An alternative theory, which we explore, is that variability in the applicability of labels is an important part of people's knowledge of color semantics. We compare the two theories explicitly in Section 5.", "cite_spans": [ { "start": 476, "end": 490, "text": "(Munroe, 2010)", "ref_id": "BIBREF41" }, { "start": 493, "end": 494, "text": "3", "ref_id": null }, { "start": 699, "end": 723, "text": "Andreas and Klein (2014)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Munroe Color Corpus", "sec_num": "4.1" }, { "text": "Our experiments focus on a subset of Munroe's data comprising 2,176,417 data points and 829 color descriptions, divided into a training set of 70%, a 5% development set, and a held-out test set of 25%. To minimize variability in language use, we selected data from users who self-report as noncolorblind English speakers. This accounts for 2.5M of Munroe's 3.4M items. To get our subset, we further restrict attention to labels used 100 times or more, to ensure that there's substantial evidence of each term's breadth of applicability. 
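A minimal sketch of this kind of filtering, together with the RGB-to-HSV conversion via python's colorsys described just below, might look as follows; the input format, field names, and hue scaling are our assumptions for illustration, not the released preprocessing pipeline.

```python
# Illustrative preprocessing sketch; field names and format are assumptions.
import colorsys
from collections import Counter

def to_hsv(r, g, b):
    """8-bit RGB -> HSV; hue scaled to degrees here for readability."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

def prepare(rows, min_count=100):
    """rows: iterable of dicts with 'label', 'r', 'g', 'b' keys."""
    data = [(row["label"].strip().lower(), to_hsv(row["r"], row["g"], row["b"]))
            for row in rows]
    counts = Counter(label for label, _ in data)
    return [(label, hsv) for label, hsv in data if counts[label] >= min_count]
```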
We hand curated the responses to correct some minor spelling variations involving a single-character change (\"yellow green\" vs \"yellow-green\"; \"fuchsia\" vs \"fuschia\", \"fushia\", \"fuchia\", and \"fucsia\") and to remove high-frequency spam labels. We are left with 829 color labels that fit these restrictions. Finally, we used python's colorsys to convert from RGB to HSV, where we hypothesize color meanings can be represented more simply. We include these data sets with our release at http://mcmahan.io/lux/ so our results can be replicated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Munroe Color Corpus", "sec_num": "4.1" }, { "text": "Optimization of the model's parameters is framed in a Bayesian framework and interpreted as maximizing the likelihood of the data given the parameters. We fit each label and each dimension independently. The data on each dimension is binned, as in Figure 3 , so we have Binomial random variables for each bin. For each color label k, the probability of success is based on the model's parameters. Non-k data in the bin are observations of failure. This gives Eq. 7:", "cite_spans": [], "ref_spans": [ { "start": 248, "end": 256, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Fitting the Model Parameters", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P (n d i,k |n d i , Z d k , \u03c6 k ) \u223c Bin(n d i , Z d k \u03c6 d k (i))", "eq_num": "(7)" } ], "section": "Fitting the Model Parameters", "sec_num": "4.2" }, { "text": "Here n d i is the number of data points in bin i on dimension d, n d i,k is the number of data points for label k in bin i on dimension d, and Z d k is a normalization constant, implicitly reflecting both the availability \u03b1 k and the distribution of responses of the term across other color dimensions. The optimization process is a parameter search method which uses as an objective function the probability of n d i,k in Eq. 7 for all d,i, and k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fitting the Model Parameters", "sec_num": "4.2" }, { "text": "Parameter Search: We adopt a Bayesian coordinate descent which sequentially samples the certain region parameter, \u00b5, and the shape and rate parameters (\u03b1 and \u03b2) of the \u0393 distributions for all d and k independently. It also samples the estimated normalization constant, Z d K . More specifically, the sampling is done using Metropolis-Hastings Markov Chain Monte Carlo (Metropolis et al., 1953; Chib and Greenberg, 1995) , which performs a Gaussian random walk on the parameters 4 . (Footnote 4: We set the standard deviation of the sampling Gaussian to be 1 for each \u00b5 and 0.3 for each \u03b1 and \u03b2 after finding experimentally that it led to effective parameter search (Gelman et al., 1996) .) For each sample, the likelihood of the data, derived from the Binomial variables, is compared for the new and old set of parameters. The new parameters are accepted proportionally to the ratio of the two likelihoods. Multiple chains were run using 4 different bin sizes per dimension and monitored for convergence using the generalized Gelman-Rubin diagnostic method (Brooks and Gelman, 1998) . 
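For concreteness, a generic random-walk Metropolis-Hastings step of the kind described above can be sketched as follows; this is our illustration rather than the fitting code used for LUX, and the objective function stands in for the Binomial likelihood of Eq. 7.

```python
# Generic Gaussian random-walk Metropolis-Hastings sketch. `log_lik` stands in
# for the log Binomial likelihood of Eq. 7; the proposal scales mirror the
# values reported above (1.0 for mu, 0.3 for the Gamma shape/rate parameters).
import numpy as np

def mh_chain(log_lik, theta0, proposal_scales, n_steps=10000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    cur_ll = log_lik(theta)
    samples = []
    for _ in range(n_steps):
        cand = theta + rng.normal(0.0, proposal_scales)  # Gaussian random walk
        cand_ll = log_lik(cand)
        if np.log(rng.random()) < cand_ll - cur_ll:      # accept w.p. min(1, ratio)
            theta, cur_ll = cand, cand_ll
        samples.append(theta.copy())
    return np.array(samples)
```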
This methodology leaves us not only with the Monte Carlo estimate of the expected value for each parameter, but also a sampling distribution that quantifies the uncertainty in the parameters themselves.", "cite_spans": [ { "start": 368, "end": 393, "text": "(Metropolis et al., 1953;", "ref_id": "BIBREF40" }, { "start": 394, "end": 419, "text": "Chib and Greenberg, 1995)", "ref_id": "BIBREF12" }, { "start": 600, "end": 601, "text": "4", "ref_id": null }, { "start": 772, "end": 793, "text": "(Gelman et al., 1996)", "ref_id": "BIBREF22" }, { "start": 1014, "end": 1070, "text": "Gelman-Rubin diagnostic method (Brooks and Gelman, 1998)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Fitting the Model Parameters", "sec_num": "4.2" }, { "text": "Availability: Availability is estimated as the ratio of the observed frequency of a label to its expected frequency given the parameters which define its distribution. The expected frequency, a marginalization of the color space for the \u03c6 function, is calculated using the midpoint integration approximation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fitting the Model Parameters", "sec_num": "4.2" }, { "text": "(8) \u03b1 k = P (k said , k true ) / P (k true ) = [count(k)/N] / [ \u03a3 x P (k true |x)P (x) ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fitting the Model Parameters", "sec_num": "4.2" }, { "text": "LUX explains Munroe's data via speakers' rational use of probabilistic meanings, represented as simple \"blurry boxes\". In this section, we assess the effectiveness of this explanation. We anticipate two arguments against our model: first, that the representation is too simple; second, that factoring speakers' choices through a model of meaning is too cumbersome. We rebut these arguments by providing metrics and results that suggest that LUX escapes these objections and captures almost all of the structure in subjects' responses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Evaluation", "sec_num": "5" }, { "text": "To test LUX's representations, we built a brute-force histogram model (HM) that discretizes HSV space and tracks frequency distributions of labels directly in each discretized bin. Similar histogram models have been developed by Chuang et al. (2008) and (Heer and Stone, 2012) to build interfaces for interacting with color that are informed by human categorization and naming. More precisely, our HM uses a linear interpolation method (Chen and Goodman, 1996) to combine three histograms of various granularity. 5 This amounts to predicting responses by querying the training data. HM has the potential to expose whether LUX is missing important features of the distribution of color descriptions. We also built a direct model of subjects' choices of color terms. Instead of appealing to the applicability and availability of a color label, it works with the observed frequency of a color label and a Gaussian model of the probability of a color value for each label, as in Eq. 9: (9) P (k said , k true |x) \u221d P (x|k true )P (k said , k true )", "cite_spans": [ { "start": 229, "end": 249, "text": "Chuang et al. 
(2008)", "ref_id": "BIBREF13" }, { "start": 254, "end": 276, "text": "(Heer and Stone, 2012)", "ref_id": "BIBREF25" }, { "start": 436, "end": 460, "text": "(Chen and Goodman, 1996)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Alternative Models", "sec_num": "5.1" }, { "text": "This Gaussian model (GM) generalizes Munroe's pairing of labels with prototypical colors: P (x|k true ) is a Gaussian with diagonal covariance, so it associates each color term with a mean HSV value and with variances in each dimension that determine a label-specific distance metric. GM predicts speaker choice by weighting these distances probabilistically against the priors. GM completely sidesteps the need to model meaning categorically. It therefore has the potential to expose whether our assumptions about semantic representations and speaker choices hinder LUX's performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Alternative Models", "sec_num": "5.1" }, { "text": "We evaluate the models using two classes of metrics on a held-out test set consisting of 25% of the corpus. The first type is based upon the posterior distribution over labels and the ranked position of subjects' actual labels of color values. The second type is based upon the log likelihood of the models, which quantifies model fit.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "5.2" }, { "text": "To answer how accurate a model's predictions are, we can locate subjects' responses in the weighted rankings computed by the models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decision-Based Metrics", "sec_num": "5.2.1" }, { "text": "The TOP K Measures: Each model provides a posterior distribution over the possible labels. The most likely label of this posterior is the maximum likelihood estimate (MLE). We track how often the MLE color label is what the user actually said as 5 Specifically, the histograms are of size (90,10,10), (45,5,5), and (1,1,1) across Hue, Saturation, and Value with interpolation weights of 0.322, 0.643, and 0.035 respectively. These parameters were determined by taking the training set as 5-fold validation sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decision-Based Metrics", "sec_num": "5.2.1" }, { "text": "the T OP 1 measure. For the Histogram Model, the T OP 1 approximates the most frequent label observed in the data for a color value. We also measure how often the correct label appears in the first 5 and 10 most likely labels. These are denoted T OP 5 and T OP 10 respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decision-Based Metrics", "sec_num": "5.2.1" }, { "text": "We can also measure how well a model explains speaker choice using the log likelihood of the labels given the model and the color values, denoted as LLV (M ). This is calculated using Eq. 10 across all N data points in the held-out test set. LLV (M ) is used when computing perplexity and Aikake Information Criterion (AIC). 
We report all measures in bits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Likelihood-Based Metrics", "sec_num": "5.2.2" }, { "text": "LLV (M ) = log 2 P M (K true , K said |X) = \u03a3 i log 2 P M (k true i , k said i |x i ) (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Likelihood-Based Metrics", "sec_num": "5.2.2" }, { "text": "A more general measure of model fit is the log likelihood of the color values and their labels jointly across the training set, LL(V ), given the model. It is defined and calculated analogously.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Likelihood-Based Metrics", "sec_num": "5.2.2" }, { "text": "Perplexity Perplexity has been used in past research to measure the performance of statistical language models (Jelinek et al., 1977; Brown et al., 1992) . Lower perplexity means that the model is less surprised by the data and so describes it more precisely. We use it here to measure how well a model encodes the regularities in color descriptions.", "cite_spans": [ { "start": 111, "end": 133, "text": "(Jelinek et al., 1977;", "ref_id": "BIBREF28" }, { "start": 134, "end": 153, "text": "Brown et al., 1992)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Likelihood-Based Metrics", "sec_num": "5.2.2" }, { "text": "Akaike Information Criterion: AIC is derived from information theory (Akaike, 1974) and balances the model's fit to the data with the complexity of the model by penalizing a larger number of parameters. The intuition is that a smaller AIC indicates a better balance of parameters and model fit.", "cite_spans": [ { "start": 69, "end": 83, "text": "(Akaike, 1974)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Likelihood-Based Metrics", "sec_num": "5.2.2" }, { "text": "Table 1 summarizes the decision-based evaluation results. 6 [Table 1: T OP 1 / T OP 5 / T OP 10: LUX 39.55% / 69.80% / 80.46%; HM 39.40% / 71.89% / 82.53%; GM 39.05% / 69.25% / 79.99%.] We see little penalty for LUX and GM's constrained frameworks for modeling choices. However, the differences in the table, though numerically small, are significant (by Binomial test) at p < .02 or less. In particular, the fact that LUX wins T OP 1 hints that its representations enable better generalization than HM or GM. The success of HM at T OP 5 and T OP 10 , meanwhile, suggests that some qualitative aspects of people's use of color words do escape the strong assumptions of LUX and GM-a point we return to below. At the same time, we draw a general lesson from the overall patterns of results in Table 1 . Language users must be quite uncertain about how speakers will describe colors. Speakers do not seem to choose the most likely color label in a majority of responses; their behavior shows a long tail. These results are in line with the probabilistic models of meaning and speaker choice we have developed. Table 2 summarizes the likelihood-based metrics. [Table 2: \u2212LL / \u2212LLV / AIC / Perp: LUX 1.13*10^7 / 2.05*10^6 / 4.13*10^6 / 13.61; HM 1.13*10^7 / 2.09*10^6 / 4.82*10^6 / 14.41; GM 1.34*10^7 / 2.08*10^6 / 4.17*10^6 / 14.14.] GM's estimates don't fit the distribution of the test data as a whole: GM is a good model of what labels speakers give but not a good model of the points that get particular labels. By contrast, LUX tops out every row in the table. HM is flexible enough in principle to mirror LUX's predictions; HM must suffer from sparse data, given its vast number of parameters. By contrast, LUX is able to capture the distributions of speaker responses in deeper and more flexible ways by using semantics as an abstraction. (Footnote 6 continues: circumstances, our model is only applicable 87% of the time, and thus the performance metrics should be scaled down. We do not explicitly report the scaled numbers.)", "cite_spans": [ { "start": 58, "end": 59, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 896, "end": 903, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1212, "end": 1219, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.3" }, { "text": "Our analysis of patterns of error in LUX suggests that LUX would be best improved by more faithful models of linguistic meaning, rather than more elaborate models of subjects' choices or more powerful learning methods. For one thing, neither LUX nor the simple prototype model captures ambiguity, which sometimes arises in Munroe's data. An example is the color label melon, which has a multimodal distribution in the reddish-orange and green areas of color space shown in Figure 5 -most likely corresponding to people thinking about the distinct colors of the flesh of watermelon, cantaloupe and honeydew. Interestingly, our model captures the more common usage.", "cite_spans": [], "ref_spans": [ { "start": 470, "end": 478, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.3" }, { "text": "A different modeling challenge is illustrated by the behavior of greenish in Figure 6 . Greenish seems to be an exception to the general assumption that color terms label convex categories. Actually, greenish seems to fit the boundary of green-the areas that are not definitely green but not definitely not green. (Linguists often appeal to such concepts in the literature on vagueness.) This is not a convex area so, not surprisingly, our model finds a poor match. Additional research is needed to understand when it's appropriate to give meanings more complex representations and how they can be learned.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 85, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.3" }, { "text": "[Figure 6: For the Hue dimension, the data for \"greenish\" is plotted against the LUX model's \u03c6 curve.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Results", "sec_num": "5.3" }, { "text": "Natural language color descriptions provide an expressive, precise but open-ended vocabulary to characterize real-world objects. This paper documents and releases the Lexicon of Uncertain Color Standards (LUX), which provides semantic representations of 829 English color labels, derived from a large corpus of attested descriptions. Our evaluation shows that LUX provides a precise description of speakers' free-text labels of color patches. Our expectation therefore is that LUX will serve as a useful resource for building systems for situated language understanding and generation that need to describe colors to English-speaking users.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "6" }, { "text": "Our work in LUX has built closely on linguistic approaches to color meaning and psychological approaches to modeling experimental subjects. 
Because LUX bridges linguistic theory, psychological data, and system building, LUX also affords a unique set of resources for future research at the intersection of semantics and pragmatics of dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "6" }, { "text": "For example, our work explains subjects' decisions as a straightforward reflection of their communicative goals in a probabilistic setting. Our measures of availability and applicability can be seen as offering computational interpretations of the Gricean Maxims of Manner and Quality (Grice, 1975) . However, these particular interpretations don't give rise to implicatures on our model, largely because our Rational Observer is so inclusive and variable in the descriptions it offers. To show this, we can analyze what an idealized hearer learns about an underlying color x when the speaker uses a color term k: this is P (x|k said ). The model predictions are formalized in Eq. 11. P (x|k said ) = P (x|k said , k true ) = P (k said , k true |x)P (x) / P (k said , k true ) = P (k said |k true )P (k true |x)P (x) / [P (k said |k true )P (k true )] = \u03b1 k P (k true |x)P (x) / [\u03b1 k P (k true )] = P (x|k true ) (11)", "cite_spans": [ { "start": 285, "end": 298, "text": "(Grice, 1975)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "6" }, { "text": "We apply Bayes's rule, exploiting our model assumption that the speaker says k only when the speaker first judges that k is true. Our model also tells us that, given that k is true, the speaker's choice of whether to say k depends only on the availability \u03b1 k of the term k. Simplifying, we find that the pragmatic posterior-what we think the speaker was looking at when she said this word-coincides with the semantic posterior-what we think the word is true of. Intuitively, the hearer knows that the term is true because the speaker has used the word, independent of the color x the speaker is describing. Similarly, in our model of speaker choice, the speaker does not take x into account in choosing one of the applicable words to say (one way the speaker could do this, for example, would be to prefer terms that were more informative about the target color x). Instead, the speaker simply samples from the candidates. That's why the speaker's choice reveals only what the semantics says about x. Technically, this makes semantics a Nash equilibrium, where the information the hearer recovers from an utterance is exactly the information the speaker intends to express-in keeping with a longstanding tradition in the philosophy of language (Lewis, 1969; Cumming, 2013) . By contrast, researchers such as Smith et al. (2013) adopt broadly similar formal assumptions but predict asymmetries where sophisticated listeners can second-guess naive speakers' choices and recover \"extra\" information that the speaker has revealed incidentally and unintentionally. The difference between this approach and ours eventually leads to a difference in the priors over utterances, but it's best explained through the different utilities that motivate speakers' different choices in the first place. Smith et al. (2013) assume speakers want to be informative; we assume they want to fit in. 
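To make the contrast concrete, the speaker model discussed above simply samples among labels in proportion to availability times applicability (Eq. 6), with no extra preference for labels that are more informative about the particular color. The sketch below is our own illustration; `applicability_fn` stands for any function computing Eq. 3, and the names are hypothetical.

```python
# Sampling-speaker sketch (Eq. 6): pick a label with probability proportional
# to availability * applicability, without re-weighting for informativeness.
import random

def speaker_sample(hsv, lexicon, applicability_fn, rng=None):
    """lexicon: {label: (availability, params)}; returns one sampled label."""
    rng = rng or random.Random(0)
    labels = list(lexicon)
    weights = [avail * applicability_fn(hsv, params)
               for avail, params in (lexicon[k] for k in labels)]
    return rng.choices(labels, weights=weights, k=1)[0]
```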
The empirical success of our approach on Munroe's data motivates a larger project to elicit data that can explicitly probe subjects' communicative goals in relation to semantic coordination.", "cite_spans": [ { "start": 1245, "end": 1258, "text": "(Lewis, 1969;", "ref_id": "BIBREF36" }, { "start": 1259, "end": 1273, "text": "Cumming, 2013)", "ref_id": "BIBREF14" }, { "start": 1309, "end": 1328, "text": "Smith et al. (2013)", "ref_id": "BIBREF46" }, { "start": 1789, "end": 1808, "text": "Smith et al. (2013)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "6" }, { "text": "Meanwhile, our work formalizes probabilistic theories of vagueness with new scale and precision. These naturally suggest that we test predictions about the dynamics of conversation drawn from the semantic literature on vagueness. For example, in hearing a description for an object, we come to know more about the standards governing the applicability of the description. This is outlined by Barker (2002) as having a meta-semantic effect on the common ground among interlocutors. For example, hearing a yellow-green object called yellowish green should make objects in the same color range more likely to be referred to as yellowish green. We could use LUX straightforwardly to represent such conceptual pacts (Brennan and Clark, 1996) via a posterior over threshold parameters. It's natural to look for empirical evidence to assess the effectiveness of such representations of dependent context.", "cite_spans": [ { "start": 392, "end": 405, "text": "Barker (2002)", "ref_id": "BIBREF3" }, { "start": 711, "end": 736, "text": "(Brennan and Clark, 1996)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "6" }, { "text": "A particularly important case involves descriptive material that distinguishes a target referent from salient alternatives, as in the understanding or generation of referring expressions (Krahmer and van Deemter, 2012) . Following Kyburg and Morreau (2000) , we could represent this using LUX via a posterior over the threshold parameters that fit the target but exclude its alternatives. Again, our model associates such goals with quantitative measures that future research can explore empirically. Meo et al. (2014) present an initial exploration of this idea.", "cite_spans": [ { "start": 187, "end": 218, "text": "(Krahmer and van Deemter, 2012)", "ref_id": "BIBREF30" }, { "start": 231, "end": 256, "text": "Kyburg and Morreau (2000)", "ref_id": "BIBREF32" }, { "start": 501, "end": 518, "text": "Meo et al. (2014)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "6" }, { "text": "These open questions complement the key advantage that makes uncertainty about meaning crucial to the success of the model and experiments we have reported here. Many kinds of language use seem to be highly variable, and approaches to grounded semantics need ways to make room for this variability both in the semantic representations they learn and the algorithms that induce these representations from language data. We have argued that uncertainty about meaning is a powerful new tool to do this. 
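As a toy illustration of the meta-semantic update discussed above, hearing a label applied to a particular color shifts beliefs about that label's thresholds toward values that include the color. The prior parameters and the simple sample-filtering approximation below are our assumptions, not part of LUX.

```python
# Toy update of beliefs about an upper Hue threshold after hearing a label
# used for a borderline color: keep only prior threshold samples consistent
# with the observed use. Parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
mu_upper, shape, rate = 90.0, 2.0, 0.5
prior_tau = mu_upper + rng.gamma(shape, 1.0 / rate, size=50_000)

observed_hue = 104.0                     # hue the speaker just described with the label
posterior_tau = prior_tau[prior_tau > observed_hue]

print(prior_tau.mean())                  # roughly 94: prior expected threshold
print(posterior_tau.mean())              # higher: the threshold must cover 104
```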
We look forward to future work addressing uncertainty in grounded meanings in a wide range of continuous domains (generalizing from color to quantity, scales, space and time) and pursuing a wide range of reasoning efforts, to corroborate our results and to leverage them in grounded language use.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Conclusion", "sec_num": "6" }, { "text": "We treat the terms \"boundary\", \"threshold\", and \"standard\" as synonymous, but useful in different contexts. \u0393 distributions rise quickly away from the origin point, then trail off from the peak in an open-ended exponential decay. One intuition for applying them in this case is Graff Fara's (2000) suggestion that a particular categorization decision involves waiting to find a natural break among salient colors. However, we choose them for mathematical convenience rather than psychological or linguistic considerations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://blog.xkcd.com/2010/05/03/color-survey-results/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "There is a caveat to these performance measures. All of the reported numbers are for the final data subset, which we discuss in Section 4.1. We chose to use a subset that did not include color labels with fewer than 100 occurrences. In the English-speaking and American-citizenship subset, the rare description tail accounts for 13% of the data; roughly one third of the tail data is unique descriptions. If the tail represents real world", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported in part by NSF DGE-0549115. This work has benefited from discussion and feedback from the reviewers of TACL, Maneesh Agrawala, David DeVault, Jason Eisner, Tarek El-Gaaly, Katrin Erk, Vicky Froyen, Joshua Gang, Pernille Hemmer, Alex Lascarides, and Tim Meo. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A new look at the statistical model identification", "authors": [ { "first": "Hirotugu", "middle": [], "last": "Akaike", "suffix": "" } ], "year": 1974, "venue": "IEEE Transactions on Automatic Control", "volume": "19", "issue": "6", "pages": "716--723", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hirotugu Akaike. 1974. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716-723.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The adaptive nature of human categorization", "authors": [ { "first": "R", "middle": [], "last": "John", "suffix": "" }, { "first": "", "middle": [], "last": "Anderson", "suffix": "" } ], "year": 1991, "venue": "Psychological Review", "volume": "98", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John R. Anderson. 1991. The adaptive nature of human categorization. 
Psychological Review, 98(3):409.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Grounding language with points and paths in continuous spaces", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "58--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Andreas and Dan Klein. 2014. Grounding lan- guage with points and paths in continuous spaces. In Proceedings of the Eighteenth Conference on Com- putational Natural Language Learning, pages 58-67, June.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The dynamics of vagueness", "authors": [ { "first": "Chris", "middle": [], "last": "Barker", "suffix": "" } ], "year": 2002, "venue": "Linguistics and Philosophy", "volume": "25", "issue": "1", "pages": "1--36", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Barker. 2002. The dynamics of vagueness. Lin- guistics and Philosophy, 25(1):1-36.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Basic Color Terms: Their Universality and Evolution", "authors": [ { "first": "Brent", "middle": [], "last": "Berlin", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brent Berlin. 1991. Basic Color Terms: Their Univer- sality and Evolution. Univ of California Press.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Conceptual pacts and lexical choice in conversation", "authors": [ { "first": "Susan", "middle": [ "E" ], "last": "Brennan", "suffix": "" }, { "first": "Herbert", "middle": [ "H" ], "last": "Clark", "suffix": "" } ], "year": 1996, "venue": "Journal of Experimental Psychology: Learning, Memory and Cognition", "volume": "22", "issue": "6", "pages": "1482--1493", "other_ids": {}, "num": null, "urls": [], "raw_text": "Susan E. Brennan and Herbert H. Clark. 1996. Concep- tual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory and Cognition, 22(6):1482-1493.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "General methods for monitoring convergence of iterative simulations", "authors": [ { "first": "P", "middle": [], "last": "Stephen", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Brooks", "suffix": "" }, { "first": "", "middle": [], "last": "Gelman", "suffix": "" } ], "year": 1998, "venue": "Journal of Computational and Graphical Statistics", "volume": "7", "issue": "4", "pages": "434--455", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen P. Brooks and Andrew Gelman. 1998. Gen- eral methods for monitoring convergence of iterative simulations. 
Journal of Computational and Graphical Statistics, 7(4):434-455.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An estimate of an upper bound for the entropy of English", "authors": [ { "first": "F", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Brown", "suffix": "" }, { "first": "J", "middle": [ "Della" ], "last": "Vincent", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Pietra", "suffix": "" }, { "first": "Stephen", "middle": [ "A" ], "last": "Mercer", "suffix": "" }, { "first": "Jennifer", "middle": [ "C" ], "last": "Della Pietra", "suffix": "" }, { "first": "", "middle": [], "last": "Lai", "suffix": "" } ], "year": 1992, "venue": "Computational Linguistics", "volume": "18", "issue": "1", "pages": "31--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter F. Brown, Vincent J. Della Pietra, Robert L. Mercer, Stephen A. Della Pietra, and Jennifer C. Lai. 1992. An estimate of an upper bound for the entropy of English. Computational Linguistics, 18(1):31-40.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Distributional semantics in technicolor", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Gemma", "middle": [], "last": "Boleda", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Nam-Khanh", "middle": [], "last": "Tran", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "136--145", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- nicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 136-145.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "An empirical study of smoothing techniques for language modeling", "authors": [ { "first": "F", "middle": [], "last": "Stanley", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the 34th annual meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "310--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanley F. Chen and Joshua Goodman. 1996. An empiri- cal study of smoothing techniques for language model- ing. In Proceedings of the 34th annual meeting on As- sociation for Computational Linguistics, pages 310- 318.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Learning to sportscast: a test of grounded language acquisition", "authors": [ { "first": "L", "middle": [], "last": "David", "suffix": "" }, { "first": "Raymond", "middle": [ "J" ], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David L. Chen and Raymond J. Mooney. 2008. 
Learning to sportscast: a test of grounded language acquisition.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "ICML '08: Proceedings of the 25th international conference on Machine learning", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "128--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "In ICML '08: Proceedings of the 25th international conference on Machine learning, pages 128-135.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Understanding the Metropolis-Hastings algorithm", "authors": [ { "first": "Siddhartha", "middle": [], "last": "Chib", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Greenberg", "suffix": "" } ], "year": 1995, "venue": "The American Statistician", "volume": "49", "issue": "4", "pages": "327--335", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddhartha Chib and Edward Greenberg. 1995. Un- derstanding the Metropolis-Hastings algorithm. The American Statistician, 49(4):327-335.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A probabilistic model of the categorical association between colors", "authors": [ { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "Maureen", "middle": [], "last": "Stone", "suffix": "" }, { "first": "Pat", "middle": [], "last": "Hanrahan", "suffix": "" } ], "year": 2008, "venue": "Color Imaging Conference", "volume": "", "issue": "", "pages": "6--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Chuang, Maureen Stone, and Pat Hanrahan. 2008. A probabilistic model of the categorical association between colors. In Color Imaging Conference, pages 6-11.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Coordination and content", "authors": [ { "first": "Sam", "middle": [], "last": "Cumming", "suffix": "" } ], "year": 2013, "venue": "Philosophers' Imprint", "volume": "13", "issue": "4", "pages": "1--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sam Cumming. 2013. Coordination and content. Philosophers' Imprint, 13(4):1-16.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A generative probabilistic framework for learning spatial language", "authors": [ { "first": "Colin", "middle": [ "R" ], "last": "Dawson", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Wright", "suffix": "" }, { "first": "Antons", "middle": [], "last": "Rebguns", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Valenzuela Esc\u00e1rcega", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Fried", "suffix": "" }, { "first": "Paul", "middle": [ "R" ], "last": "Cohen", "suffix": "" } ], "year": 2013, "venue": "2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL)", "volume": "", "issue": "", "pages": "1--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin R. Dawson, Jeremy Wright, Antons Rebguns, Marco Valenzuela Esc\u00e1rcega, Daniel Fried, and Paul R. Cohen. 2013. A generative probabilis- tic framework for learning spatial language. In 2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL), pages 1-8. 
IEEE.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Societal grounding is essential to meaningful language use", "authors": [ { "first": "David", "middle": [], "last": "Devault", "suffix": "" }, { "first": "Iris", "middle": [], "last": "Oved", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Stone", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Twenty-first National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "747--754", "other_ids": {}, "num": null, "urls": [], "raw_text": "David DeVault, Iris Oved, and Matthew Stone. 2006. So- cietal grounding is essential to meaningful language use. In Proceedings of the Twenty-first National Con- ference on Artificial Intelligence, pages 747-754.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Color Appearance Models. The Wiley-IS&T Series in Imaging Science and Technology", "authors": [ { "first": "D", "middle": [], "last": "Mark", "suffix": "" }, { "first": "", "middle": [], "last": "Fairchild", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark D. Fairchild. 2013. Color Appearance Models. The Wiley-IS&T Series in Imaging Science and Tech- nology. Wiley.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Shifting sands: An interestrelative theory of vagueness", "authors": [ { "first": "Delia", "middle": [], "last": "Graff", "suffix": "" }, { "first": "Fara", "middle": [], "last": "", "suffix": "" } ], "year": 2000, "venue": "Philosophical Topics", "volume": "28", "issue": "1", "pages": "45--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Delia Graff Fara. 2000. Shifting sands: An interest- relative theory of vagueness. Philosophical Topics, 28(1):45-81.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Describing objects by their attributes", "authors": [ { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Endres", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Hoiem", "suffix": "" }, { "first": "David", "middle": [], "last": "Forsyth", "suffix": "" } ], "year": 2009, "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "1778--1785", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ali Farhadi, Ian Endres, Derek Hoiem, and David Forsyth. 2009. Describing objects by their attributes. 2009 IEEE Conference on Computer Vision and Pat- tern Recognition, pages 1778-1785, June.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The vocabulary problem in human-system communication", "authors": [ { "first": "George", "middle": [ "W" ], "last": "Furnas", "suffix": "" }, { "first": "Thomas", "middle": [ "K" ], "last": "Landauer", "suffix": "" }, { "first": "Louis", "middle": [ "M" ], "last": "Gomez", "suffix": "" }, { "first": "Susan", "middle": [ "T" ], "last": "Dumais", "suffix": "" } ], "year": 1987, "venue": "Communications of the ACM", "volume": "30", "issue": "11", "pages": "964--971", "other_ids": {}, "num": null, "urls": [], "raw_text": "George W. Furnas, Thomas K. Landauer, Louis M. Gomez, and Susan T. Dumais. 1987. The vocabulary problem in human-system communication. 
Communi- cations of the ACM, 30(11):964-971.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Conceptual Spaces", "authors": [ { "first": "Peter", "middle": [], "last": "G\u00e4rdenfors", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter G\u00e4rdenfors. 2000. Conceptual Spaces. MIT Press.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Efficient Metropolis jumping rules", "authors": [ { "first": "Andrew", "middle": [], "last": "Gelman", "suffix": "" }, { "first": "O", "middle": [], "last": "Gareth", "suffix": "" }, { "first": "Walter", "middle": [ "R" ], "last": "Roberts", "suffix": "" }, { "first": "", "middle": [], "last": "Gilks", "suffix": "" } ], "year": 1996, "venue": "Bayesian Statistics 5", "volume": "", "issue": "", "pages": "599--607", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew Gelman, Gareth O. Roberts, and Walter R. Gilks. 1996. Efficient Metropolis jumping rules. In J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. Smith, editors, Bayesian Statistics 5, pages 599-607. Oxford University Press.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Logic and conversation", "authors": [ { "first": "P", "middle": [], "last": "Herbert", "suffix": "" }, { "first": "", "middle": [], "last": "Grice", "suffix": "" } ], "year": 1975, "venue": "Syntax and Semantics III: Speech Acts", "volume": "", "issue": "", "pages": "41--58", "other_ids": {}, "num": null, "urls": [], "raw_text": "Herbert P. Grice. 1975. Logic and conversation. In P. Cole and J. Morgan, editors, Syntax and Semantics III: Speech Acts, pages 41-58. Academic Press.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The symbol grounding problem", "authors": [ { "first": "Stevan", "middle": [], "last": "Harnad", "suffix": "" } ], "year": 1990, "venue": "Physica D: Nonlinear Phenomena", "volume": "42", "issue": "1-3", "pages": "335--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stevan Harnad. 1990. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3):335-346.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Color naming models for color selection, image editing and palette design", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Heer", "suffix": "" }, { "first": "Maureen", "middle": [], "last": "Stone", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "1007--1016", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Heer and Maureen Stone. 2012. Color naming models for color selection, image editing and palette design. 
In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 1007- 1016.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Computer Graphics: Principles and Practice", "authors": [ { "first": "John", "middle": [ "F" ], "last": "Hughes", "suffix": "" }, { "first": "Andries", "middle": [], "last": "Van Dam", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Mcguire", "suffix": "" }, { "first": "David", "middle": [ "F" ], "last": "Sklar", "suffix": "" }, { "first": "James", "middle": [ "D" ], "last": "Foley", "suffix": "" }, { "first": "Steven", "middle": [ "K" ], "last": "Feiner", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Akeley", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John F. Hughes, Andries van Dam, Morgan McGuire, David F. Sklar, James D. Foley, Steven K. Feiner, and Kurt Akeley. 2013. Computer Graphics: Principles and Practice (3rd Edition). Addison-Wesley Profes- sional.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Natural color categories are convex sets", "authors": [ { "first": "Gerhard", "middle": [], "last": "J\u00e4ger", "suffix": "" } ], "year": 2009, "venue": "Logic, Language and Meaning -17th Amsterdam Colloquium, Amsterdam", "volume": "6042", "issue": "", "pages": "11--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerhard J\u00e4ger. 2010. Natural color categories are con- vex sets. In Maria Aloni, Harald Bastiaanse, Tikitu de Jager, and Katrin Schulz, editors, Logic, Language and Meaning -17th Amsterdam Colloquium, Amster- dam, The Netherlands, December 16-18, 2009, Re- vised Selected Papers, volume 6042 of Lecture Notes in Computer Science, pages 11-20. Springer.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Perplexity-a measure of the difficulty of speech recognition tasks", "authors": [ { "first": "Fred", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "Robert", "middle": [ "L" ], "last": "Mercer", "suffix": "" }, { "first": "Lalit", "middle": [ "R" ], "last": "Bahl", "suffix": "" }, { "first": "James", "middle": [ "K" ], "last": "Baker", "suffix": "" } ], "year": 1977, "venue": "The Journal of the Acoustical Society of America", "volume": "62", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fred Jelinek, Robert L. Mercer, Lalit R. Bahl, and James K. Baker. 1977. Perplexity-a measure of the difficulty of speech recognition tasks. The Journal of the Acoustical Society of America, 62:S63.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "The World Color Survey", "authors": [ { "first": "Paul", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Brent", "middle": [], "last": "Berlin", "suffix": "" }, { "first": "Luisa", "middle": [], "last": "Maffi", "suffix": "" }, { "first": "William", "middle": [ "R" ], "last": "Merrifield", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Cook", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Kay, Brent Berlin, Luisa Maffi, William R. Merri- field, and Richard Cook. 2009. The World Color Sur- vey. 
CSLI.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Computational generation of referring expressions: A survey", "authors": [ { "first": "Emiel", "middle": [], "last": "Krahmer", "suffix": "" }, { "first": "", "middle": [], "last": "Kees Van Deemter", "suffix": "" } ], "year": 2012, "venue": "Computational Linguistics", "volume": "38", "issue": "1", "pages": "173--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emiel Krahmer and Kees van Deemter. 2012. Compu- tational generation of referring expressions: A survey. Computational Linguistics, 38(1):173-218.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Jointly learning to parse and perceive: Connecting natural language to the physical world", "authors": [ { "first": "Jayant", "middle": [], "last": "Krishnamurthy", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Kollar", "suffix": "" } ], "year": 2013, "venue": "Transactions of the Association for Computational Linguistics", "volume": "1", "issue": "2", "pages": "193--206", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jayant Krishnamurthy and Thomas Kollar. 2013. Jointly learning to parse and perceive: Connecting natural lan- guage to the physical world. Transactions of the Asso- ciation for Computational Linguistics, 1(2):193-206.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Fitting words: Vague words in context. Linguistics and Philosophy", "authors": [ { "first": "Alice", "middle": [], "last": "Kyburg", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Morreau", "suffix": "" } ], "year": 2000, "venue": "", "volume": "23", "issue": "", "pages": "577--597", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alice Kyburg and Michael Morreau. 2000. Fitting words: Vague words in context. Linguistics and Phi- losophy, 23(6):577-597.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A computational model of color perception and color naming", "authors": [ { "first": "Johan", "middle": [ "Maurice" ], "last": "", "suffix": "" }, { "first": "Gisele", "middle": [], "last": "Lammens", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johan Maurice Gisele Lammens. 1994. A computational model of color perception and color naming. Ph.D. thesis, SUNY Buffalo.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Journal of Logic and Computation. Advance online publication", "authors": [ { "first": "", "middle": [], "last": "Staffan Larsson", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1093/logcom/ext059" ] }, "num": null, "urls": [], "raw_text": "Staffan Larsson. 2013. Formal semantics for percep- tual classification. Journal of Logic and Computa- tion. Advance online publication. doi: 10.1093/log- com/ext059.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Vagueness as probabilistic linguistic knowledge", "authors": [ { "first": "Daniel", "middle": [], "last": "Lassiter", "suffix": "" } ], "year": 2009, "venue": "Vagueness in Communication -International Workshop", "volume": "6517", "issue": "", "pages": "127--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Lassiter. 2009. Vagueness as probabilistic lin- guistic knowledge. 
In Rick Nouwen, Robert van Rooij, Uli Sauerland, and Hans-Christian Schmitz, editors, Vagueness in Communication -International Workshop, ViC 2009, held as part of ESSLLI 2009, Bordeaux, France, July 20-24, 2009. Revised Selected Papers, volume 6517 of Lecture Notes in Computer Science, pages 127-150. Springer.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A joint model of language and perception for grounded attribute learning", "authors": [ { "first": "K", "middle": [], "last": "David", "suffix": "" }, { "first": "", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Ma", "middle": [ "Cynthia" ], "last": "Cambridge", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Matuszek", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Fitzgerald", "suffix": "" }, { "first": "Liefeng", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Dieter", "middle": [], "last": "Bo", "suffix": "" }, { "first": "", "middle": [], "last": "Fox", "suffix": "" } ], "year": 1969, "venue": "Proceedings of the 29th International Conference on Machine Learning (ICML-12)", "volume": "", "issue": "", "pages": "1671--1678", "other_ids": {}, "num": null, "urls": [], "raw_text": "David K. Lewis. 1969. Convention: A Philosophical Study. Harvard University Press, Cambridge, MA. Cynthia Matuszek, Nicholas Fitzgerald, Luke Zettle- moyer, Liefeng Bo, and Dieter Fox. 2012. A joint model of language and perception for grounded at- tribute learning. In Proceedings of the 29th Interna- tional Conference on Machine Learning (ICML-12), pages 1671-1678.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Grounded situation models for robots: Where words and percepts meet", "authors": [ { "first": "Nikolaos", "middle": [], "last": "Mavridis", "suffix": "" }, { "first": "Deb", "middle": [], "last": "Roy", "suffix": "" } ], "year": 2006, "venue": "Intelligent Robots and Systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nikolaos Mavridis and Deb Roy. 2006. Grounded situation models for robots: Where words and per- cepts meet. In Intelligent Robots and Systems, 2006", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "IEEE/RSJ International Conference on", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "4690--4697", "other_ids": {}, "num": null, "urls": [], "raw_text": "IEEE/RSJ International Conference on, pages 4690- 4697. IEEE.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Generating and resolving vague color references", "authors": [ { "first": "Timothy", "middle": [], "last": "Meo", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Mcmahan", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Stone", "suffix": "" } ], "year": 2014, "venue": "SEMDIAL 2014: THE 18th Workshop on the Semantics and Pragmatics of Dialogue", "volume": "", "issue": "", "pages": "107--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Meo, Brian McMahan, and Matthew Stone. 2014. Generating and resolving vague color refer- ences. 
In SEMDIAL 2014: THE 18th Workshop on the Semantics and Pragmatics of Dialogue, pages 107- 115.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Equation of state calculations by fast computing machines", "authors": [ { "first": "Nicholas", "middle": [], "last": "Metropolis", "suffix": "" }, { "first": "Arianna", "middle": [ "W" ], "last": "Rosenbluth", "suffix": "" }, { "first": "Marshall", "middle": [ "N" ], "last": "Rosenbluth", "suffix": "" }, { "first": "Augusta", "middle": [ "H" ], "last": "Teller", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Teller", "suffix": "" } ], "year": 1953, "venue": "The Journal of Chemical Physics", "volume": "21", "issue": "6", "pages": "1087--1092", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nicholas Metropolis, Arianna W. Rosenbluth, Mar- shall N. Rosenbluth, Augusta H. Teller, and Edward Teller. 1953. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087-1092.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Color survey results", "authors": [ { "first": "Randall", "middle": [], "last": "Munroe", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Randall Munroe. 2010. Color survey results. On- line at http://blog.xkcd.com/2010/05/03/color-survey- results/.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "The influence of knowledge and expectations for color on episodic memory", "authors": [ { "first": "Kimele", "middle": [], "last": "Persaud", "suffix": "" }, { "first": "Pernille", "middle": [], "last": "Hemmer", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 36th Annual Conference of the Cognitive Science Society", "volume": "", "issue": "", "pages": "1162--1167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kimele Persaud and Pernille Hemmer. 2014. The in- fluence of knowledge and expectations for color on episodic memory. In P Bello, M Guarini, M Mc- Shane, and B Scassellati, editors, Proceedings of the 36th Annual Conference of the Cognitive Science So- ciety, pages 1162-1167.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Focal colors are universal after all", "authors": [ { "first": "Terry", "middle": [], "last": "Regier", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Richard", "middle": [ "S" ], "last": "Cook", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the National Academy of Sciences", "volume": "102", "issue": "", "pages": "8386--8391", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Regier, Paul Kay, and Richard S. Cook. 2005. Fo- cal colors are universal after all. Proceedings of the National Academy of Sciences, 102:8386-8391.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Color naming reflects optimal partitions of color space", "authors": [ { "first": "Terry", "middle": [], "last": "Regier", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Naveen", "middle": [], "last": "Khetarpal", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the National Academy of Sciences", "volume": "104", "issue": "", "pages": "1436--1441", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Regier, Paul Kay, and Naveen Khetarpal. 2007. Color naming reflects optimal partitions of color space. 
Proceedings of the National Academy of Sci- ences, 104:1436-1441.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Models of Semantic Representation with Visual Attributes", "authors": [ { "first": "Carina", "middle": [], "last": "Silberer", "suffix": "" }, { "first": "Vittorio", "middle": [], "last": "Ferrari", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "572--582", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carina Silberer, Vittorio Ferrari, and Mirella Lapata. 2013. Models of Semantic Representation with Visual Attributes. In Proceedings of the 51st Annual Meet- ing of the Association for Computational Linguistics, pages 572-582.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Learning and using language via recursive pragmatic reasoning about other agents", "authors": [ { "first": "Nathaniel", "middle": [ "J" ], "last": "Smith", "suffix": "" }, { "first": "Noah", "middle": [ "D" ], "last": "Goodman", "suffix": "" }, { "first": "Michael", "middle": [ "C" ], "last": "Frank", "suffix": "" } ], "year": 2013, "venue": "Advances in Neural Information Processing Systems", "volume": "26", "issue": "", "pages": "3039--3047", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nathaniel J. Smith, Noah D. Goodman, and Michael C. Frank. 2013. Learning and using language via recur- sive pragmatic reasoning about other agents. In Ad- vances in Neural Information Processing Systems 26, pages 3039-3047.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Approaching the symbol grounding problem with probabilistic graphical models", "authors": [ { "first": "Stefanie", "middle": [], "last": "Tellex", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Kollar", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Dickerson", "suffix": "" } ], "year": 2011, "venue": "AI magazine", "volume": "32", "issue": "4", "pages": "64--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefanie Tellex, Thomas Kollar, and Steven Dickerson. 2011a. Approaching the symbol grounding problem with probabilistic graphical models. AI magazine, 32(4):64-76.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Understanding natural language commands for robotic navigation and mobile manipulation", "authors": [ { "first": "Stefanie", "middle": [], "last": "Tellex", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Kollar", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Dickerson", "suffix": "" }, { "first": "R", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Ashis Gopal", "middle": [], "last": "Walter", "suffix": "" }, { "first": "Seth", "middle": [ "J" ], "last": "Banerjee", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Teller", "suffix": "" }, { "first": "", "middle": [], "last": "Roy", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1507--1514", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R Walter, Ashis Gopal Banerjee, Seth J Teller, and Nicholas Roy. 2011b. Understanding nat- ural language commands for robotic navigation and mobile manipulation. 
In Proceedings of the Twenty- Fifth AAAI Conference on Artificial Intelligence, pages 1507-1514.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Procedures as a representation for data in a computer program for understanding natural language", "authors": [ { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1970, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Winograd. 1970. Procedures as a representation for data in a computer program for understanding nat- ural language. Ph.D. thesis, MIT.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions", "authors": [ { "first": "Peter", "middle": [], "last": "Young", "suffix": "" }, { "first": "Alice", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Micah", "middle": [], "last": "Hodosh", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" } ], "year": 2014, "venue": "Transactions of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "67--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Young, Alice Lai, Micah Hodosh, and Julia Hock- enmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic in- ference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Learning to parse database queries using inductive logic programming", "authors": [ { "first": "M", "middle": [], "last": "John", "suffix": "" }, { "first": "Raymond", "middle": [ "J" ], "last": "Zelle", "suffix": "" }, { "first": "", "middle": [], "last": "Mooney", "suffix": "" } ], "year": 1996, "venue": "Proceedings of the National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1050--1055", "other_ids": {}, "num": null, "urls": [], "raw_text": "John M. Zelle and Raymond J. Mooney. 1996. Learn- ing to parse database queries using inductive logic pro- gramming. In Proceedings of the National Conference on Artificial Intelligence, pages 1050-1055.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars", "authors": [ { "first": "Luke", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2005, "venue": "UAI '05, Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "658--666", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke S. Zettlemoyer and Michael Collins. 2005. Learn- ing to map sentences to logical form: Structured clas- sification with probabilistic categorial grammars. In UAI '05, Proceedings of the 21st Conference in Un- certainty in Artificial Intelligence, pages 658-666.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "num": null, "text": "Image by flickr user Joanne Bacon (jlbacon) from the data set of", "type_str": "figure" }, "FIGREF3": { "uris": null, "num": null, "text": "The LUX model for \"yellowish green\" on the Hue axis plotted against the scaled histogram of the responses in the data. The \u03c6 curve represents the likelihood of \"yellowish green\" for different Hue values. 
The \u03c4 curves represent possible boundaries.", "type_str": "figure" }, "FIGREF5": { "uris": null, "num": null, "text": "For the Hue dimension, the data for \"melon\" is plotted against the LUX model's \u03c6 curve.", "type_str": "figure" }, "TABREF0": { "content": "", "text": "Decision-based results. The percentage of correct responses of 544,764 test-set data points are shown.", "type_str": "table", "num": null, "html": null }, "TABREF1": { "content": "
: Likelihood-based evaluation results: negative log likelihood of the data, negative log likelihood of labels given points, number of parameters, Akaike Information Criterion and perplexity of labels given color values. Parameter counts for AIC are 15751 for LUX, 315669 for HM and 5803 for GM.
", "text": "", "type_str": "table", "num": null, "html": null } } } }