{ "paper_id": "P18-1029", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:40:12.459842Z" }, "title": "Zero-shot Learning of Classifiers from Natural Language Quantification", "authors": [ { "first": "Shashank", "middle": [], "last": "Srivastava", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "ssrivastava@cmu.edu" }, { "first": "Igor", "middle": [], "last": "Labutov", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "ilabutov@cs.cmu.edu" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "tom.mitchell@cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Humans can efficiently learn new concepts using language. We present a framework through which a set of explanations of a concept can be used to learn a classifier without access to any labeled examples. We use semantic parsing to map explanations to probabilistic assertions grounded in latent class labels and observed attributes of unlabeled data, and leverage the differential semantics of linguistic quantifiers (e.g., 'usually' vs 'always') to drive model training. Experiments on three domains show that the learned classifiers outperform previous approaches for learning with limited data, and are comparable with fully supervised classifiers trained from a small number of labeled examples.", "pdf_parse": { "paper_id": "P18-1029", "_pdf_hash": "", "abstract": [ { "text": "Humans can efficiently learn new concepts using language. We present a framework through which a set of explanations of a concept can be used to learn a classifier without access to any labeled examples. We use semantic parsing to map explanations to probabilistic assertions grounded in latent class labels and observed attributes of unlabeled data, and leverage the differential semantics of linguistic quantifiers (e.g., 'usually' vs 'always') to drive model training. Experiments on three domains show that the learned classifiers outperform previous approaches for learning with limited data, and are comparable with fully supervised classifiers trained from a small number of labeled examples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As computer systems that interact with us in natural language become pervasive (e.g., Siri, Alexa, Google Home), they suggest the possibility of letting users teach machines in language. The ability to learn from language can enable a paradigm of ubiquitous machine learning, allowing users to teach personalized concepts (e.g., identifying 'important emails' or 'project-related emails') when limited or no training data is available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we take a step towards solving this problem by exploring the use of quantifiers to train classifiers from declarative language. For illustration, consider the hypothetical example of a user explaining the concept of an \"important email\" through natural language statements (Figure 1) . 
Our framework takes a set of such natural language explanations describing a concept (e.g., \"emails that I reply to are usually important\") and a set of unlabeled instances as input, and produces a binary classifier (for important emails) as output. [Figure 1: Supervision from language can enable concept learning from limited or even no labeled examples. Our approach assumes the learner has sensors that can extract attributes from data, such as those listed in the table, and language that can refer to these sensors and their values.] Our hypothesis is that language describing concepts encodes key properties that can aid statistical learning. These include specification of relevant attributes (e.g., whether an email was replied to), relationships between such attributes and concept labels (e.g., that a reply implies the email's class label is 'important'), as well as the strength of these relationships (e.g., via quantifiers like 'often', 'sometimes', 'rarely'). We infer these properties automatically, and use the semantics of linguistic quantifiers to drive the training of classifiers without labeled examples for any concept. This is a novel scenario, where previous approaches in semi-supervised and constraint-based learning are not directly applicable. Those approaches require manual pre-specification of expert knowledge for model training. In our approach, this knowledge is automatically inferred from noisy natural language explanations from a user.", "cite_spans": [], "ref_spans": [ { "start": 288, "end": 298, "text": "(Figure 1)", "ref_id": null }, { "start": 497, "end": 505, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach is summarized in the schematic in Figure 2. [Figure 2: Our approach to zero-shot learning from language. Natural language explanations of how to classify concept examples are parsed into formal constraints relating features to concept labels. The constraints are combined with unlabeled data, using posterior regularization, to yield a classifier.] First, we map the set of natural language explanations of a concept to logical forms", "cite_spans": [], "ref_spans": [ { "start": 47, "end": 55, "text": "Figure 2", "ref_id": null }, { "start": 143, "end": 151, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "that identify the attributes mentioned in the explanation, and describe the information conveyed about the attribute and the concept label as a quantitative constraint. This mapping is done through semantic parsing. The logical forms denote quantitative constraints, which are probabilistic assertions about observable attributes of the data and unobserved concept labels. Here the strength of a constraint is assumed to be specified by a linguistic quantifier (such as 'all', 'some', 'few', etc., which reflect degrees of generality of propositions). 
Next, we train a classification model that can assimilate these constraints by adapting the posterior regularization framework (Ganchev et al., 2010).", "cite_spans": [ { "start": 679, "end": 701, "text": "(Ganchev et al., 2010)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Intuitively, this can be seen as defining an optimization problem, where the objective is to find parameter estimates for the classifier that do not simply fit the data, but also agree with the human-provided natural language advice to the greatest extent possible. Since logical forms can be grounded in a variety of sensors and external resources, an explicit model of semantic interpretation conceptually allows the framework to subsume a flexible range of grounding behaviors. The main contributions of this work are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. We introduce the problem of zero-shot learning of classifiers from language, and present an approach towards it. 2. We develop datasets for zero-shot classification from natural descriptions, exhibiting tasks with various levels of difficulty. 3. We empirically show that coarse probability estimates for linguistic quantifiers can effectively supervise model training across three domains of classification tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Many notable approaches have explored the incorporation of background knowledge into the training of learning algorithms. However, none of them addresses the issue of learning from natural language. Prominent among these are the Constraint-driven Learning (Chang et al., 2007a), Generalized Expectation (Mann and McCallum, 2010), Posterior Regularization (Ganchev et al., 2010), and Bayesian Measurements (Liang et al., 2009) frameworks. All of these require domain knowledge to be manually programmed in before learning. Similarly, Probabilistic Soft Logic (Kimmig et al., 2012) allows users to specify rules in a logical language that can be used for reasoning over graphical models. More recently, multiple approaches have explored few-shot learning from the perspective of term- or attribute-based transfer (Lampert et al., 2014), or learning representations of instances as probabilistic programs (Lake et al., 2015). Other work (Lei Ba et al., 2015; Elhoseiny et al., 2013) considers language terms such as colors and textures that can be directly grounded in visual meaning in images. Some previous work (Srivastava et al., 2017) has explored using language explanations for feature space construction in concept learning tasks, where the problems of learning to interpret language and learning classifiers are treated jointly. However, this approach assumes the availability of labeled data for learning classifiers. Also notable is recent work by Andreas et al. (2017), who propose using language descriptions as parameters to model structure in learning tasks in multiple settings. 
More generally, learning from language has also been previously explored in tasks such as playing games (Branavan et al., 2012), robot navigation (Karamcheti et al., 2017) , etc.", "cite_spans": [ { "start": 252, "end": 273, "text": "(Chang et al., 2007a)", "ref_id": "BIBREF5" }, { "start": 300, "end": 325, "text": "(Mann and McCallum, 2010)", "ref_id": "BIBREF20" }, { "start": 355, "end": 377, "text": "(Ganchev et al., 2010)", "ref_id": "BIBREF9" }, { "start": 404, "end": 424, "text": "(Liang et al., 2009)", "ref_id": "BIBREF16" }, { "start": 557, "end": 578, "text": "(Kimmig et al., 2012)", "ref_id": "BIBREF11" }, { "start": 803, "end": 825, "text": "(Lampert et al., 2014)", "ref_id": "BIBREF14" }, { "start": 927, "end": 948, "text": "(Lei Ba et al., 2015;", "ref_id": "BIBREF15" }, { "start": 949, "end": 972, "text": "Elhoseiny et al., 2013)", "ref_id": null }, { "start": 1104, "end": 1129, "text": "(Srivastava et al., 2017)", "ref_id": "BIBREF23" }, { "start": 1444, "end": 1465, "text": "Andreas et al. (2017)", "ref_id": "BIBREF0" }, { "start": 1727, "end": 1752, "text": "(Karamcheti et al., 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Natural language quantification has been studied from multiple perspectives in formal logic (Barwise and Cooper, 1981) , linguistics (L\u00f6bner, 1987; Bach et al., 2013) and cognitive psychology (Kurtzman and MacDonald, 1993) . While quantification has traditionally been defined in set-theoretic terms in linguistic theories 1 , our approach joins alternative perspectives that represent quantifiers probabilistically (Moxey and Sanford, 1993; Yildirim et al., 2013) . To the best of our knowledge, this is the first work to leverage the semantics of quantifiers to guide statistical learning models.", "cite_spans": [ { "start": 92, "end": 118, "text": "(Barwise and Cooper, 1981)", "ref_id": "BIBREF2" }, { "start": 133, "end": 147, "text": "(L\u00f6bner, 1987;", "ref_id": "BIBREF19" }, { "start": 148, "end": 166, "text": "Bach et al., 2013)", "ref_id": "BIBREF1" }, { "start": 192, "end": 222, "text": "(Kurtzman and MacDonald, 1993)", "ref_id": "BIBREF12" }, { "start": 416, "end": 441, "text": "(Moxey and Sanford, 1993;", "ref_id": "BIBREF21" }, { "start": 442, "end": 464, "text": "Yildirim et al., 2013)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our approach relies on first mapping natural language descriptions to quantitative constraints that specify statistical relationships between observable attributes of instances and their latent concept labels (Step 1 in Figure 2 ). These quantitative constraints are then imbued into the training of a classifier by guiding predictions from the learned models to concur with them (Step 2). We use semantic parsing to interpret sentences as quantitative constraints, and adapt the posterior regularization principle for our setting to estimate the classifier parameters. Next, we describe these steps in detail. 
Since learning in this work is largely driven by the semantics of linguistic quantifiers, we call our approach Learning from Natural Quantification, or LNQ.", "cite_spans": [], "ref_spans": [ { "start": 220, "end": 228, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Learning Classifiers from Language", "sec_num": "3" }, { "text": "A key challenge in learning from language is converting free-form language to representations that can be reasoned over, and grounded in data. For example, a description such as 'emails that I reply to are usually important' may be converted to a mathematical assertion such as P(important | replied:true) = 0.7, which statistical methods can reason with. Here, we argue that this process can be automated for a large number of real-world descriptions. In interpreting statements describing concepts, we infer the following key elements:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping language to constraints", "sec_num": "3.1" }, { "text": "1. Feature x, which is grounded in observed attributes of the data. For our example, 'emails replied to' can refer to a predicate such as replied:true, which can be evaluated in the context of emails to indicate whether an email was replied to. Incorporating compositional representations enables more complex reasoning: e.g., 'the subject of course-related emails usually mentions CS100' can map to a composite predicate like 'isStringMatch(field:subject, stringVal('CS100'))', which can be evaluated for different emails to reflect whether their subject mentions 'CS100'. Mapping language to executable feature functions has been shown to be effective (Srivastava et al., 2017). For the sake of simplicity, here we assume that a statement refers to a single feature, but the method can be extended to handle more complex descriptions. 2. Concept label y, specifying the class of instances a statement refers to. For binary classes, this reduces to examples or non-examples of a concept. For our running example, y corresponds to the positive class of important emails. 3. Constraint-type asserted by the statement. We argue that most concept descriptions belong to one of the three categories shown in Table 2, and these constitute our vocabulary of constraint types for this work. For our running example ('emails that I reply to are usually important'), the type corresponds to P(y | x), since the syntax of the statement indicates an assertion conditioned on the feature indicating whether an email was replied to. On the other hand, an assertion such as 'I usually reply to important emails' indicates an assertion conditioned on the set of important emails, and therefore corresponds to the type P(x | y). 4. Strength of the constraint. We assume this to be specified by a quantifier. For our running example, this corresponds to the adverb 'usually'. In this work, by quantifier we specifically refer to frequency adverbs ('usually', 'rarely', etc.) and frequency determiners ('few', 'all', etc.). 2 Our thesis is that the semantics of quantifiers can be leveraged to make statistical assertions about relationships involving attributes and concept labels. One way to do this is to simply associate point estimates of probability with quantifiers, suggesting the fraction of instances for which assertions described by them hold true. Table 1 shows the probability values we assign to some common frequency quantifiers for English. 
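To make this mapping concrete, a minimal sketch follows. The numeric values below are illustrative stand-ins in the spirit of Table 1 (whose entries, as noted next, are set by intuition rather than estimated); they are not the table's actual values.

```python
# Illustrative quantifier-to-probability map. These numbers are
# stand-ins in the spirit of Table 1, not the actual table entries.
QUANT_PROB = {
    "always": 0.95, "usually": 0.90, "likely": 0.70, "often": 0.70,
    "sometimes": 0.50, "occasionally": 0.30, "rarely": 0.10, "never": 0.05,
}

# 'Emails that I reply to are usually important' then grounds to the
# probabilistic assertion P(important | replied:true) = p_usually.
p_usually = QUANT_PROB["usually"]
```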
These values were set simply based on the authors' intuition about their semantics, and do not reflect any empirical distributions. See Figure 8 for empirical distributions corresponding to some linguistic quantifiers in our data. While these probability values may be inaccurate, and the semantics of these quantifiers may also change with context and speaker, they can still serve as a strong signal for learning classifiers, since they are not used as hard constraints, but serve to bias classifiers towards better generalization.", "cite_spans": [ { "start": 655, "end": 680, "text": "(Srivastava et al., 2017)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 1198, "end": 1205, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 2334, "end": 2341, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Mapping language to constraints", "sec_num": "3.1" }, { "text": "We use a semantic parsing model to map statements to formal semantic representations that specify these aspects. For example, the statement 'Emails that I reply to are usually important' is mapped to a logical form specifying these four elements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mapping language to constraints", "sec_num": "3.1" }, { "text": "Given a descriptive statement s, the parsing problem consists of predicting a logical form l that best represents its meaning. In turn, we formulate the probability of the logical form l as decomposing into three component factors: (i) the probability of observing a feature and concept label l_xy based on the text of the sentence, (ii) the probability of the type of the assertion l_type based on the identified feature, concept label, and syntactic properties of the sentence s, and (iii) identifying the linguistic quantifier, l_quant, in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parser components", "sec_num": "3.1.1" }, { "text": "P(l | s) = P(l_xy | s) P(l_type | l_xy, s) P(l_quant | s)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parser components", "sec_num": "3.1.1" }, { "text": "We model each of the three components as follows: by using a traditional semantic parser for the first component, training a MaxEnt classifier for the constraint-type for the second component, and looking for an explicit string match to identify the quantifier for the third component. Identifying features and concept labels, l_xy: For identifying the feature and concept label mentioned in a sentence, we presume a linear score S(s, l_xy) = w^T ψ(s, l_xy) indicating the goodness of assigning a partial logical form, l_xy, to a sentence s. Here, ψ(s, l_xy) ∈ R^n are features that can depend on both the sentence and the partial logical form, and w ∈ R^n is a parameter weight vector for this component. Following recent work in semantic parsing (Liang et al., 2011), we assume a log-linear distribution over interpretations of a sentence.", "cite_spans": [ { "start": 754, "end": 774, "text": "(Liang et al., 2011)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Parser components", "sec_num": "3.1.1" }, { "text": "P(l_xy | s) ∝ exp(w^T ψ(s, l_xy))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parser components", "sec_num": "3.1.1" }, { "text": "Given data consisting of statements labeled with logical forms, the model can be trained via maximum likelihood estimation, and then used to predict interpretations for new statements. 
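As a concrete illustration of this factored model, here is a minimal sketch of candidate scoring; all names and the candidate/feature interfaces are hypothetical (assumed to come from the domain lexicon and upstream feature extractors), not the authors' implementation.

```python
import numpy as np

QUANTIFIERS = ("always", "usually", "often", "sometimes", "rarely", "never")

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def parse(sentence, candidates, w, W_type):
    """Rank candidate logical forms under
    P(l | s) = P(l_xy | s) * P(l_type | l_xy, s) * P(l_quant | s).

    candidates: (l_xy, psi, type_feats, l_type) tuples, where psi is the
    feature vector psi(s, l_xy) and type_feats are the syntactic features
    fed to the constraint-type MaxEnt classifier (both assumed given).
    w: lexicon-model weights; W_type: (3, d) MaxEnt weights, one row per
    constraint type.
    """
    tokens = sentence.lower().split()
    # l_quant: deterministic string match on the first quantifier, if any
    quant = next((t for t in tokens if t in QUANTIFIERS), None)
    # P(l_xy | s): log-linear model normalized over the candidate set
    p_xy = softmax(np.array([w @ psi for _, psi, _, _ in candidates]))
    best, best_p = None, -1.0
    for p1, (l_xy, _, tf, l_type) in zip(p_xy, candidates):
        p_type = softmax(W_type @ tf)[l_type]  # P(l_type | l_xy, s)
        if p1 * p_type > best_p:               # P(l_quant | s) = 1 here
            best, best_p = (l_xy, l_type, quant), p1 * p_type
    return best, best_p
```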
For training this component, we use a CCG semantic parsing formalism, and follow the feature-set from Zettlemoyer and Collins (2007), consisting of simple indicator features for occurrences of keywords and lexicon entries. This is also compatible with the semantic parsing formalism in Srivastava et al. (2017), whose data (and accompanying lexicon) are also used in our evaluation. For other datasets with predefined features, this component is learned easily from simple lexicons consisting of trigger words for features and labels. 3 This component is the only part of the parser that is domain-specific. We note that while this component assumes a domain-specific lexicon (and possibly statements annotated with logical forms), this effort is one-time-only, and will find re-use across the possibly large number of concepts in the domain (e.g., email categories).", "cite_spans": [ { "start": 470, "end": 494, "text": "Srivastava et al. (2017)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Parser components", "sec_num": "3.1.1" }, { "text": "Identifying assertion type, l_type: The principal novelty in our semantic parsing model is in identifying the type of constraint asserted by a statement. For this, we train a MaxEnt classifier, which uses positional and syntactic features based on the text-spans corresponding to feature and concept mentions to predict the constraint type. We extract the following features from a statement: 1. Boolean value indicating whether the text-span corresponding to the feature x precedes the text-span for the concept label y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parser components", "sec_num": "3.1.1" }, { "text": "2. Boolean value indicating if the sentence is in passive (rather than active) voice, as identified by the occurrence of a nsubjpass dependency relation. 3. Boolean value indicating whether the head of the text-span for x is a noun or a verb. 4. Features indicating the occurrence of conditional tokens ('if', 'then' and 'that') preceding or following the text-spans for x and y. 5. Features indicating the presence of a linguistic quantifier in a det or an advmod relation with the syntactic head of x or y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parser components", "sec_num": "3.1.1" }, { "text": "Since the constraint type is determined by syntactic and dependency parse features, this component is domain-independent. We trained it once on a separate set of annotated statements based on data from the UCI repository (Lichman, 2013), and used this model for all experiments.", "cite_spans": [ { "start": 89, "end": 104, "text": "(Lichman, 2013)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Parser components", "sec_num": "3.1.1" }, { "text": "Identifying quantifiers, l_quant: Multiple linguistic quantifiers in a sentence are rare, and we simply look for the first occurrence of a linguistic quantifier in a sentence, i.e., P(l_quant | s) is a deterministic function. We note that many real-world descriptions of concepts lack an explicit quantifier, e.g., 'Emails from my boss are important'. In this work, we ignore such statements for the purpose of training. Another treatment might be to model these statements as reflecting a default quantifier, but we do not explore this direction here. Finally, the decoupling of quantification from logical representation is a key decision. 
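The five features above are simple to compute from the output of any off-the-shelf dependency parser. The following sketch shows one possible extractor; all names and the input interface are hypothetical, and tokens are assumed lowercased.

```python
QUANTIFIERS = {"all", "always", "usually", "often", "sometimes",
               "few", "rarely", "never"}
CONDITIONALS = {"if", "then", "that"}

def constraint_type_features(tokens, pos, deps, x_head, y_head, x_span, y_span):
    """Features for the constraint-type MaxEnt classifier (a sketch).

    tokens/pos: token strings and Penn POS tags; deps: (head, rel, child)
    index triples from a dependency parser; x_span/y_span: (start, end)
    token spans of the feature and concept-label mentions; x_head/y_head:
    indices of their syntactic heads.
    """
    f = {}
    f["x_precedes_y"] = x_span[0] < y_span[0]                          # 1
    f["passive"] = any(rel == "nsubjpass" for _, rel, _ in deps)       # 2
    f["x_head_noun"] = pos[x_head].startswith("NN")                    # 3
    f["x_head_verb"] = pos[x_head].startswith("VB")
    for name, (lo, hi) in (("x", x_span), ("y", y_span)):              # 4
        window = tokens[max(lo - 2, 0):lo] + tokens[hi:hi + 2]
        f["cond_near_" + name] = bool(CONDITIONALS & set(window))
    f["quant_mod_xy"] = any(                                           # 5
        rel in ("det", "advmod") and head in (x_head, y_head)
        and tokens[child] in QUANTIFIERS
        for head, rel, child in deps)
    return f
```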
At the cost of linguistic coarseness, this decoupling allows modeling quantification irrespective of the logical representation (lambda calculus, predicate-argument structures, etc.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Parser components", "sec_num": "3.1.1" }, { "text": "In the previous section, we described how individual explanations can be mapped to probabilistic assertions about observable attributes (e.g., the statement 'Emails that I reply to are usually important' may map to P(y = important | replied = true) = p_usually). Here, we describe how a set of such assertions can be used in conjunction with unlabeled data to train classification models. Our approach relies on having predictions from the classifier on a set of unlabeled examples (X = {x_1, ..., x_n}) agree with human-provided advice (in the form of constraints). The unobserved concept labels (Y = {y_1, ..., y_n}) for the unlabeled data constitute latent variables for our method. The training procedure can be seen as iteratively inferring the latent concept labels for unlabeled examples so as to agree with the human advice, and updating the classification models by taking these labels as given. While there are multiple approaches for training statistical models with constraints on latent variables, here we use the Posterior Regularization (PR) framework. The PR objective can be used to optimize a latent variable model subject to a set of constraints, which specify preferences for values of the posterior distributions p_θ(Y | X).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "J_Q(θ) = L(θ) − min_{q ∈ Q} KL(q | p_θ(Y | X))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "Here, the set Q represents a set of preferred posterior distributions over the latent variables Y, and is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "Q := {q_X(Y) : E_q[φ(X, Y)] ≤ b}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "The overall objective consists of two components, representing how well a model θ explains the data (the likelihood term L(θ)), and how far it is from the set Q (the KL-divergence term).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "In our case, each parsed statement defines a probabilistic constraint. The conjunction of all such constraints defines Q (representing models that exactly agree with human-provided advice). Thus, optimizing the objective reflects a tension between choosing models that increase data likelihood, and emulating language advice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "Converting to PR constraints: The set of constraints that PR can handle can be characterized as bounds on expected values of functions (φ) of X and Y (or equivalently, from linearity of expectation, as linear inequalities over expected values of functions of X and Y). 
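The next paragraphs spell out this conversion; as a preview, here is a minimal sketch for the P(y | x) constraint type with binary labels. The function name and interface are illustrative, not the authors' code.

```python
import numpy as np

def conditional_constraint(x_vals, p):
    """Encode an assertion P(y = 1 | x = 1) = p as a linear expectation
    constraint E_q[phi(X, Y)] = b (a sketch).

    P(y=1 | x=1) = p  <=>  sum_i E[1{y_i=1, x_i=1}] - p * sum_i 1{x_i=1} = 0,
    so we set phi_i(y) = 1{x_i=1} * (1{y=1} - p) and b = 0.

    x_vals: (n,) binary array of the grounded feature on unlabeled data.
    Returns phi of shape (n, 2), indexed by label y in {0, 1}, and b.
    """
    phi = np.zeros((len(x_vals), 2))
    phi[:, 0] = x_vals * (0.0 - p)   # contribution when y_i = 0
    phi[:, 1] = x_vals * (1.0 - p)   # contribution when y_i = 1
    return phi, 0.0
```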
To use the framework, we need to ensure that each constraint type in our vocabulary can be expressed in such a form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "Following the plan in Table 2, each constraint type can be converted into an equivalent form E_q[φ(X, Y)] = b, compatible with PR. In particular, each of the constraint types in our vocabulary can be expressed as an equation over expected values of joint indicator functions of label assignments to instances and their attributes. To explain, consider the assertion P(y = important | replied:true) = p_usually. The probability on the LHS can be expressed as the empirical fraction", "cite_spans": [], "ref_spans": [ { "start": 22, "end": 29, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "Σ_i E[I_{y_i = important, replied:true}] / Σ_i E[I_{replied:true}]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": ", which leads to the linear constraints seen in Table 2 (expected values in the table hide summations over instances for brevity). Here, I denotes an indicator function. Thus, we can incorporate probability constraints into our adaptation of the PR scheme. Learning and Inference: We choose a log-linear parameterization for the concept classifier.", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 55, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "p_θ(y_i | x_i) ∝ exp(y_i θ^T x_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "The training of the classifier follows the modified EM procedure described in Ganchev et al. (2010). As proposed in the original work, we solve a relaxed version of the optimization that allows slack variables, and modifies the PR objective with an L2 regularizer. This allows solutions even when the problem is over-constrained, and the set Q is empty (e.g., due to contradictory advice).", "cite_spans": [ { "start": 81, "end": 102, "text": "Ganchev et al. (2010)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "J(θ, q) = L(θ) − KL(q | p_θ(Y | X)) − λ ||E_q[φ(X, Y)] − b||^2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "The key step in the training is the computation of the posterior regularizer in the E-step.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "argmin_q KL(q | p_θ) + λ ||E_q[φ(X, Y)] − b||^2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "This objective is strictly convex, and all constraints are linear in q. We follow the optimization procedure from Bellare et al. (2009), whereby the minimization problem in the E-step can be efficiently solved through gradient steps in the dual space. 
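A compact numpy sketch of this EM loop follows. The E-step below solves the hard-constraint dual by plain gradient steps, a simplification of Bellare et al.'s procedure that omits the slack relaxation, and the M-step (described next) fits the logistic regression to the soft labels. The interface is assumed, not the authors' code.

```python
import numpy as np

def e_step(p_theta, phi, b, n_iters=200, lr=0.5):
    """Posterior-regularized E-step (a simplified sketch).

    p_theta: (n, 2) posteriors p_theta(y | x_i); phi: (k, n, 2) constraint
    features; b: (k,) targets, so the constraints read E_q[phi] = b.
    """
    eta = np.zeros(len(b))                      # dual variables
    q = p_theta
    for _ in range(n_iters):
        # q(y_i) is proportional to p_theta(y_i | x_i) * exp(-eta . phi)
        logits = np.log(p_theta + 1e-12) - np.einsum("k,kny->ny", eta, phi)
        q = np.exp(logits - logits.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)
        # dual gradient ascent: at the optimum, E_q[phi] = b
        eta += lr * (np.einsum("kny,ny->k", phi, q) - b)
    return q

def m_step(X, q, n_iters=100, lr=0.1, l2=0.01):
    """Fit logistic regression parameters theta to the soft labels q."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        p1 = 1.0 / (1.0 + np.exp(-X @ theta))   # p_theta(y = 1 | x)
        theta += lr * (X.T @ (q[:, 1] - p1) / len(X) - l2 * theta)
    return theta
```

Training then simply alternates e_step and m_step until convergence.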
In the M-step, we update the model parameters for the classifier based on the label distributions q estimated in the E-step. This simply reduces to estimating the parameters θ for the logistic regression classifier when class label probabilities are known. In all experiments, we run EM for 20 iterations and use a regularization coefficient of λ = 0.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifier training from constraints", "sec_num": "3.2" }, { "text": "For evaluating our approach, we created datasets of classification tasks paired with descriptions of the classes, and also used some existing resources. In this section, we summarize these steps. Shapes data: To experiment with our approach in a wider range of controlled settings, part of our evaluation focuses on synthetic concepts. For this, we created a set of 50 shape classification tasks that exhibit a range of difficulty, and elicited language descriptions spanning a variety of quantifier expressions. The tasks require classifying geometric shapes with a set of predefined attributes (fill color, border color, shape, size) into two concept-labels (abstractly named 'selected shape' and 'other').", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4" }, { "text": "The datasets were created through a generative process, where features x_i are conditionally independent given the concept-label. Each feature's conditional distribution is sampled from a symmetric Dirichlet distribution, and varying the concentration parameter α allows tuning the noise level of the generated datasets (quantified via their Bayes Optimal accuracy 4 ). A dataset is then generated by sampling from these conditional distributions. We sample a total of 50 such datasets, consisting of 100 training and 100 test examples each, where each example is a shape and its assigned label. For each dataset, we then collected statements from Mechanical Turk workers that describe the concept. The task required Turkers to study a sample of shapes presented on the screen for each of the two concept-labels (see Figure 3(a)). They were then asked to write a set of statements that would help others classify these shapes without seeing the data. In total, 30 workers participated in this task, generating a mean of 4.3 statements per dataset. Email data: Srivastava et al. (2017) provide a dataset of language explanations from human users describing 7 categories of emails, as well as 1030 examples of emails belonging to those categories. [Table 3: Examples of explanations for each domain. Shapes: 'If a shape doesn't have a blue border, it is probably not a selected shape.'; 'Selected shapes occasionally have a yellow fill.' Emails: 'Emails that mention the word \"meet\" in the subject are usually meeting requests'; 'Personal reminders almost always have the same recipient and sender.' Birds: 'A specimen that has a striped crown is likely to be a selected bird.'; 'Birds in the other category rarely ever have dagger-shaped beaks.'] [Figure 4: Statement generation task for Birds data.] While this work uses labeled examples, and focuses on mapping natural language explanations (∼30 explanations per email category) to compositional feature functions, we can also use statements in their data for evaluating our approach. 
While language quantifiers were not studied in the original work, we found about a third of the statements in this data to mention a quantifier. Birds data: The CUB-200 dataset (Wah et al., 2011) contains images of birds annotated with observable attributes such as size, primary color, wing-patterns, etc. We selected a subset of the data consisting of 10 species of birds and 53 attributes (60 examples per species). Turkers were shown examples of birds from a species, and negative examples consisting of a mix of birds from other species, and were asked to describe the classes (similar to the Shapes data, see Figure 4). During the task, users also had access to a table enumerating groundable attributes they could refer to. In all, 60 workers participated, generating 6.1 statements on average.", "cite_spans": [ { "start": 2193, "end": 2211, "text": "(Wah et al., 2011)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 817, "end": 828, "text": "Figure 3(a)", "ref_id": "FIGREF0" }, { "start": 1728, "end": 1735, "text": "Table 3", "ref_id": null }, { "start": 1779, "end": 1787, "text": "Figure 4", "ref_id": null }, { "start": 2631, "end": 2639, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Datasets", "sec_num": "4" }, { "text": "Incorporating constraints from language has not been addressed before, and hence previous approaches for learning from limited data such as Mann and McCallum (2010); Chang et al. (2007b) would not directly work for this setting. Our baselines hence consist of extended versions of previous approaches that incorporate output from the parser, as well as fully supervised classifiers trained from a small number of labeled examples. Classification performance: The top section of Table 4 summarizes the performance of various classifiers on the Shape datasets, averaged over all 50 classification tasks. [Figure 5: LNQ vs Bayes Optimal classifier performance for Shape datasets. Each dot represents a dataset generated from a known distribution.] FLGE+ refers to a baseline", "cite_spans": [ { "start": 166, "end": 186, "text": "Chang et al. (2007b)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 478, "end": 485, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 625, "end": 633, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "that uses the Feature Labeling through Generalized Expectation criterion, following the approach in Druck et al. (2008); Mann and McCallum (2010). The approach is based on labeling features as indicating specific class labels, which corresponds to specifying constraints of type P(y|x) 5 . While the original approach (Druck et al., 2008) sets this value to 0.9, we provide the method with the quantitative probabilities used by LNQ. Since the original method cannot handle language descriptions, we also provide the approach with the concept label y and feature x as identified by the parser. FLGE represents the version that is not provided the quantifier probabilities. LR refers to a supervised logistic regression model trained on n = 8 randomly chosen labeled instances. 6 We note that LNQ performs substantially better than both FLGE+ and LR on average. This validates our modeling principle for learning classifiers from explanations alone, and also suggests value in our PR-based formulation, which can handle multiple constraint types. We further note that not using quantifier probabilities significantly deteriorates FLGE's performance. 
Figure 5 provides a more detailed characterization of LNQ's performance. Each blue dot represents performance on a shape classification task. The horizontal axis represents the accuracy of the Bayes Optimal classifier, and the vertical axis represents the accuracy of the LNQ approach. The blue line is the trajectory x = y, corresponding to a perfect statistical classifier in the asymptotic case of infinite samples. We first note that LNQ is effective in learning competent classifiers at all levels of hardness. Second, except for a small number of outliers, the approach works especially well for learning easy concepts (towards the right). From an error analysis, we found that a majority of these errors are due to problems in parsing (e.g., missed negation, incorrect constraint type) or due to poor explanations from the teacher (bad grammar, or simply incorrect information). Figure 6 shows results for email classification tasks. In the figure, LN* refers to the approach in Srivastava et al. (2017), which uses natural language descriptions to define compositional features for email classification, but does not incorporate supervision from quantification. For this task, we found very few of the natural language descriptions to contain quantifiers for some of the individual email categories, making a direct comparison impractical. Thus, in this case, we evaluate methods by combining supervision from descriptions with 10 labeled examples (also in line with the evaluation in the original paper). We note that additionally incorporating quantification (LNQ) consistently improves classification performance across email categories. On this task, LNQ improves upon FLGE+ and LN* for 6 of the 7 email categories. [Figure 7: Classification performance on Birds data.] Figure 7 shows classification results on the Birds data. Here, LR refers to a logistic regression model trained on n = 10 examples. The trends in this case are similar, where LNQ consistently outperforms FLGE+, and is competitive with LR. Ablating quantification: From Table 4, we further observe that the differential associative strengths of linguistic quantifiers are crucial for our method's classification performance. LNQ (no quant) refers to a variant that assigns the same probability value (the average of the values in Table 1), irrespective of the quantifier. This yields near-random performance, which is what we would expect if the learning is being driven by the differential strengths of quantifiers. LNQ (coarse quant) refers to a variant that rounds the assigned quantifier probabilities in Table 1 to 0 or 1 (i.e., quantifiers such as 'rarely' get mapped to 0, while 'always' gets mapped to a probability of 1). While its performance (0.679) suggests that simple binary feedback is a substantial signal, the difference from the full model indicates value in using soft probabilities. On the other hand, in a sensitivity study, we found the performance of the approach to be robust to small changes in the probability values of quantifiers. Comparison with human performance: For the Shapes data, we evaluated human teachers' own understanding of the concepts they teach by evaluating them on a quiz based on predicting labels for examples from the test set (see Figure 3(b)). Next, we solicited additional workers who were not exposed to examples from the dataset, and presented them only with the statements describing that data (created by a teacher), which is comparable supervision to what LNQ receives. We then evaluated their performance at the same task. 
From Table 4, we note that a human teacher's average performance is significantly worse (p < 0.05, Wilcoxon signed-rank test) than the Bayes Optimal classifier, indicating that the teacher's own synthesis of concepts is noisy. The human learner performance is expectedly lower, but, interestingly, is also significantly worse than LNQ. While this might potentially be caused by factors such as user fatigue, it might also suggest that automated methods can be better at reasoning with constraints than humans in certain scenarios. These results need to be validated through comprehensive experiments in more domains. Empirical semantics of quantifiers: We can estimate the distributions of probability values for different quantifiers from our labeled data. For this, we aggregate sentences mentioning a quantifier, and calculate the empirical value of the (conditional) probability associated with each statement, leading to a set of probability values for each quantifier. Figure 8 shows empirical distributions of probability values for six quantifiers. We note that while a few estimates (e.g., 'rarely' and 'often') roughly align with pre-registered beliefs, others are somewhat off (e.g., 'likely' shows a much higher value), and yet others (e.g., 'sometimes') show too large a spread of values to be meaningfully modeled as point values. LNQ's performance, in spite of this, shows strong stability in the approach. We do not use these empirical probabilities in our experiments (instead of the pre-registered values in Table 1), so as not to tune the hyperparameters to a specific dataset. Such estimates would not be available for a new task without labeled data. Further, using labeled data for estimating these probabilities, and then using the learned model for predicting labels, would constitute overfitting, biasing the evaluation.", "cite_spans": [ { "start": 100, "end": 120, "text": "Druck et al. (2008);", "ref_id": "BIBREF7" }, { "start": 121, "end": 145, "text": "Mann and McCallum (2010)", "ref_id": "BIBREF20" }, { "start": 322, "end": 342, "text": "(Druck et al., 2008)", "ref_id": "BIBREF7" }, { "start": 767, "end": 768, "text": "6", "ref_id": null }, { "start": 2121, "end": 2145, "text": "Srivastava et al. (2017)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 1139, "end": 1147, "text": "Figure 5", "ref_id": null }, { "start": 2021, "end": 2029, "text": "Figure 6", "ref_id": "FIGREF1" }, { "start": 2869, "end": 2877, "text": "Figure 7", "ref_id": null }, { "start": 3136, "end": 3143, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 3389, "end": 3397, "text": "Table 1)", "ref_id": null }, { "start": 3659, "end": 3666, "text": "Table 1", "ref_id": null }, { "start": 4247, "end": 4255, "text": "Figure 7", "ref_id": null }, { "start": 4377, "end": 4388, "text": "Figure 3(b)", "ref_id": "FIGREF0" }, { "start": 4681, "end": 4688, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 5653, "end": 5661, "text": "Figure 8", "ref_id": "FIGREF2" }, { "start": 6251, "end": 6258, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "Our approach is surprisingly effective in learning from free-form language. However, it does not address linguistic issues such as modifiers (e.g., 'very likely'), nested quantification, etc. On the other hand, we found no instances of nested quantification in the data, suggesting that people might be primed to use simpler language when teaching. 
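As an aside, the empirical estimates behind Figure 8 can be computed with a short sketch; the interface below is hypothetical, and it covers only statements of the P(y | x) type.

```python
from collections import defaultdict
import numpy as np

def empirical_quantifier_probs(parsed_statements, X, y):
    """Empirical semantics of quantifiers from labeled data (a sketch).

    parsed_statements: (feature_fn, label, quantifier) triples from the
    parser, restricted to P(y | x)-type assertions; X: instances; y: gold
    label array. Returns per-quantifier (mean, std) of empirical values.
    """
    vals = defaultdict(list)
    for feature_fn, label, quant in parsed_statements:
        mask = np.array([feature_fn(x) for x in X], dtype=bool)
        if mask.any():
            # empirical P(y = label | feature holds) for this statement
            vals[quant].append(np.mean(y[mask] == label))
    return {q: (float(np.mean(v)), float(np.std(v))) for q, v in vals.items()}
```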
While we approximate quantifier semantics as absolute probability values, they may vary significantly based on the context, as shown by cognitive studies such as Newstead and Collis (1987). Future work can model how these parameters can be adapted in a task-specific way (e.g., cases such as cancer prediction, where base rates are small), and provide better models of quantifier semantics, e.g., as distributions rather than point values.", "cite_spans": [ { "start": 509, "end": 535, "text": "Newstead and Collis (1987)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "Our approach is a step towards the idea of using language to guide the learning of statistical models. This is an exciting direction, which contrasts with the predominant theme of using statistical learning methods to advance the field of NLP. We believe that language may have as much to offer learning as statistical learning has offered NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6" }, { "text": "e.g., 'some A are B' ⇔ A ∩ B ≠ ∅", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is a significantly restricted definition, and does not address non-frequency determiners (e.g., 'the', 'only', etc.) or mass quantifiers (e.g., 'a lot', 'little'), among other categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We also identify whether a feature x is negated, through the existence of a neg dependency relation with the head of its text-span, e.g., 'Important emails are usually not deleted'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "This is the accuracy of a theoretically optimal classifier, which knows the true distribution of the data and labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In general, Generalized Expectation can also handle broader constraint types, similar to Posterior Regularization. 6 LNQ models are identical to LR w.r.t. parameterization, but are trained to maximize a different objective. The choice of n here is arbitrary, but is roughly twice the number of explanations for each task in this domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported by the CMU - Yahoo! InMind project. The authors would also like to thank the anonymous reviewers for helpful comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Learning with latent language", "authors": [ { "first": "Jacob", "middle": [], "last": "Andreas", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Levine", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Andreas, Dan Klein, and Sergey Levine. 2017. Learning with latent language. CoRR abs/1711.00482. 
http://arxiv.org/abs/1711.00482.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Quantification in natural languages", "authors": [ { "first": "Elke", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Eloise", "middle": [], "last": "Jelinek", "suffix": "" }, { "first": "Angelika", "middle": [], "last": "Kratzer", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Bh Partee", "suffix": "" } ], "year": 2013, "venue": "", "volume": "54", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elke Bach, Eloise Jelinek, Angelika Kratzer, and Bar- bara BH Partee. 2013. Quantification in natural lan- guages, volume 54. Springer Science & Business Media.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Generalized quantifiers and natural language", "authors": [ { "first": "Jon", "middle": [], "last": "Barwise", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Cooper", "suffix": "" } ], "year": 1981, "venue": "Linguistics and philosophy", "volume": "4", "issue": "2", "pages": "159--219", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jon Barwise and Robin Cooper. 1981. Generalized quantifiers and natural language. Linguistics and philosophy 4(2):159-219.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Alternating projections for learning with expectation constraints", "authors": [ { "first": "Kedar", "middle": [], "last": "Bellare", "suffix": "" }, { "first": "Gregory", "middle": [], "last": "Druck", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence", "volume": "", "issue": "", "pages": "43--50", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kedar Bellare, Gregory Druck, and Andrew McCallum. 2009. Alternating projections for learning with ex- pectation constraints. In Proceedings of the Twenty- Fifth Conference on Uncertainty in Artificial Intelli- gence. AUAI Press, pages 43-50.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Learning to win by reading manuals in a monte-carlo framework", "authors": [ { "first": "David", "middle": [], "last": "Srk Branavan", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Silver", "suffix": "" }, { "first": "", "middle": [], "last": "Barzilay", "suffix": "" } ], "year": 2012, "venue": "Journal of Artificial Intelligence Research", "volume": "43", "issue": "", "pages": "661--704", "other_ids": {}, "num": null, "urls": [], "raw_text": "SRK Branavan, David Silver, and Regina Barzilay. 2012. Learning to win by reading manuals in a monte-carlo framework. Journal of Artificial Intel- ligence Research 43:661-704.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Guiding semi-supervision with constraint-driven learning", "authors": [ { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7--1036", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007a. Guiding semi-supervision with constraint-driven learning. In Proceedings of the 45th Annual Meeting of the Association of Computational Lin- guistics. 
Association for Computational Linguis- tics, Prague, Czech Republic, pages 280-287. http://www.aclweb.org/anthology/P07-1036.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Guiding semi-supervision with constraint-driven learning", "authors": [ { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2007, "venue": "ACL", "volume": "", "issue": "", "pages": "280--287", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming-Wei Chang, Lev Ratinov, and Dan Roth. 2007b. Guiding semi-supervision with constraint-driven learning. In ACL. pages 280-287.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning from labeled features using generalized expectation criteria", "authors": [ { "first": "Gregory", "middle": [], "last": "Druck", "suffix": "" }, { "first": "Gideon", "middle": [], "last": "Mann", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "595--602", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Druck, Gideon Mann, and Andrew McCallum. 2008. Learning from labeled features using gener- alized expectation criteria. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval. ACM, pages 595-602.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Write a classifier: Zero-shot learning using purely textual descriptions", "authors": [], "year": 2013, "venue": "The IEEE International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohamed Elhoseiny, Babak Saleh, and Ahmed Elgam- mal. 2013. Write a classifier: Zero-shot learning us- ing purely textual descriptions. In The IEEE Inter- national Conference on Computer Vision (ICCV).", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Posterior regularization for structured latent variable models", "authors": [ { "first": "Kuzman", "middle": [], "last": "Ganchev", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Gillenwater", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2010, "venue": "Journal of Machine Learning Research", "volume": "11", "issue": "", "pages": "2001--2049", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kuzman Ganchev, Jennifer Gillenwater, Ben Taskar, et al. 2010. Posterior regularization for structured latent variable models. 
Journal of Machine Learn- ing Research 11(Jul):2001-2049.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A tale of two draggns: A hybrid approach for interpreting actionoriented and goal-oriented instructions", "authors": [ { "first": "Siddharth", "middle": [], "last": "Karamcheti", "suffix": "" }, { "first": "C", "middle": [], "last": "Edward", "suffix": "" }, { "first": "Dilip", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Mina", "middle": [], "last": "Arumugam", "suffix": "" }, { "first": "Nakul", "middle": [], "last": "Rhee", "suffix": "" }, { "first": "", "middle": [], "last": "Gopalan", "suffix": "" }, { "first": "L", "middle": [ "S" ], "last": "Lawson", "suffix": "" }, { "first": "Stefanie", "middle": [], "last": "Wong", "suffix": "" }, { "first": "", "middle": [], "last": "Tellex", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1707.08668" ] }, "num": null, "urls": [], "raw_text": "Siddharth Karamcheti, Edward C Williams, Dilip Aru- mugam, Mina Rhee, Nakul Gopalan, Lawson LS Wong, and Stefanie Tellex. 2017. A tale of two draggns: A hybrid approach for interpreting action- oriented and goal-oriented instructions. arXiv preprint arXiv:1707.08668 .", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A short introduction to probabilistic soft logic", "authors": [ { "first": "Angelika", "middle": [], "last": "Kimmig", "suffix": "" }, { "first": "H", "middle": [], "last": "Stephen", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Bach", "suffix": "" }, { "first": "Bert", "middle": [], "last": "Broecheler", "suffix": "" }, { "first": "Lise", "middle": [], "last": "Huang", "suffix": "" }, { "first": "", "middle": [], "last": "Getoor", "suffix": "" } ], "year": 2012, "venue": "NIPS Workshop on Probabilistic Programming: Foundations and Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angelika Kimmig, Stephen H. Bach, Matthias Broecheler, Bert Huang, and Lise Getoor. 2012. A short introduction to probabilistic soft logic. In NIPS Workshop on Probabilistic Programming: Foundations and Applications.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Resolution of quantifier scope ambiguities", "authors": [ { "first": "S", "middle": [], "last": "Howard", "suffix": "" }, { "first": "Maryellen", "middle": [ "C" ], "last": "Kurtzman", "suffix": "" }, { "first": "", "middle": [], "last": "Macdonald", "suffix": "" } ], "year": 1993, "venue": "Cognition", "volume": "48", "issue": "3", "pages": "243--279", "other_ids": {}, "num": null, "urls": [], "raw_text": "Howard S Kurtzman and Maryellen C MacDonald. 1993. Resolution of quantifier scope ambiguities. Cognition 48(3):243-279.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Human-level concept learning through probabilistic program induction", "authors": [ { "first": "Ruslan", "middle": [], "last": "Brenden M Lake", "suffix": "" }, { "first": "Joshua", "middle": [ "B" ], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Tenenbaum", "suffix": "" } ], "year": 2015, "venue": "Science", "volume": "350", "issue": "6266", "pages": "1332--1338", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. 2015. Human-level concept learning through probabilistic program induction. 
Science 350(6266):1332-1338.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Attribute-based classification for zero-shot visual object categorization", "authors": [ { "first": "Christoph", "middle": [ "H" ], "last": "Lampert", "suffix": "" }, { "first": "Hannes", "middle": [], "last": "Nickisch", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Harmeling", "suffix": "" } ], "year": 2014, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "36", "issue": "3", "pages": "453--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christoph H Lampert, Hannes Nickisch, and Stefan Harmeling. 2014. Attribute-based classification for zero-shot visual object categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(3):453-465.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Predicting deep zero-shot convolutional neural networks using textual descriptions", "authors": [ { "first": "Jimmy", "middle": [ "Lei" ], "last": "Ba", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Swersky", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Fidler", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2015, "venue": "The IEEE International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jimmy Lei Ba, Kevin Swersky, Sanja Fidler, and Ruslan Salakhutdinov. 2015. Predicting deep zero-shot convolutional neural networks using textual descriptions. In The IEEE International Conference on Computer Vision (ICCV).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Learning from measurements in exponential families", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 26th annual international conference on machine learning", "volume": "", "issue": "", "pages": "641--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning from measurements in exponential families. In Proceedings of the 26th annual international conference on machine learning. ACM, pages 641-648.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning dependency-based compositional semantics", "authors": [ { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Michael", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "590--599", "other_ids": {}, "num": null, "urls": [], "raw_text": "Percy Liang, Michael I Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1.
Association for Computational Linguistics, pages 590-599.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "UCI machine learning repository", "authors": [ { "first": "M", "middle": [], "last": "Lichman", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Lichman. 2013. UCI machine learning repository. http://archive.ics.uci.edu/ml.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Quantification as a major module of natural language semantics", "authors": [ { "first": "Sebastian", "middle": [], "last": "L\u00f6bner", "suffix": "" } ], "year": 1987, "venue": "Studies in discourse representation theory and the theory of generalized quantifiers", "volume": "8", "issue": "", "pages": "53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian L\u00f6bner. 1987. Quantification as a major module of natural language semantics. Studies in discourse representation theory and the theory of generalized quantifiers 8:53.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Generalized expectation criteria for semi-supervised learning with weakly labeled data", "authors": [ { "first": "Gideon", "middle": [ "S" ], "last": "Mann", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2010, "venue": "Journal of machine learning research", "volume": "11", "issue": "", "pages": "955--984", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gideon S Mann and Andrew McCallum. 2010. Generalized expectation criteria for semi-supervised learning with weakly labeled data. Journal of machine learning research 11(Feb):955-984.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Prior expectation and the interpretation of natural language quantifiers", "authors": [ { "first": "Linda", "middle": [ "M" ], "last": "Moxey", "suffix": "" }, { "first": "Anthony", "middle": [ "J" ], "last": "Sanford", "suffix": "" } ], "year": 1993, "venue": "European Journal of Cognitive Psychology", "volume": "5", "issue": "1", "pages": "73--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linda M Moxey and Anthony J Sanford. 1993. Prior expectation and the interpretation of natural language quantifiers. European Journal of Cognitive Psychology 5(1):73-91.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Context and the interpretation of quantifiers of frequency", "authors": [ { "first": "Stephen", "middle": [ "E" ], "last": "Newstead", "suffix": "" }, { "first": "Janet", "middle": [ "M" ], "last": "Collis", "suffix": "" } ], "year": 1987, "venue": "Ergonomics", "volume": "30", "issue": "10", "pages": "1447--1462", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen E Newstead and Janet M Collis. 1987. Context and the interpretation of quantifiers of frequency.
Ergonomics 30(10):1447-1462.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Joint concept learning and semantic parsing from natural language explanations", "authors": [ { "first": "Shashank", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Igor", "middle": [], "last": "Labutov", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Mitchell", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1528--1537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 1528-1537. http://aclweb.org/anthology/D17-1161.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The Caltech-UCSD Birds-200-2011 Dataset", "authors": [ { "first": "C", "middle": [], "last": "Wah", "suffix": "" }, { "first": "S", "middle": [], "last": "Branson", "suffix": "" }, { "first": "P", "middle": [], "last": "Welinder", "suffix": "" }, { "first": "P", "middle": [], "last": "Perona", "suffix": "" }, { "first": "S", "middle": [], "last": "Belongie", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. 2011. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Linguistic variability and adaptation in quantifier meanings", "authors": [ { "first": "Ilker", "middle": [], "last": "Yildirim", "suffix": "" }, { "first": "Judith", "middle": [], "last": "Degen", "suffix": "" }, { "first": "Michael", "middle": [ "K" ], "last": "Tanenhaus", "suffix": "" }, { "first": "T", "middle": [ "Florian" ], "last": "Jaeger", "suffix": "" } ], "year": 2013, "venue": "CogSci", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilker Yildirim, Judith Degen, Michael K Tanenhaus, and T Florian Jaeger. 2013. Linguistic variability and adaptation in quantifier meanings. In CogSci.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Online learning of relaxed CCG grammars for parsing to logical form", "authors": [ { "first": "Luke", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2007, "venue": "EMNLP-CoNLL", "volume": "", "issue": "", "pages": "678--687", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luke S Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In EMNLP-CoNLL. pages 678-687.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Shapes data: Mechanical Turk tasks for (a) collecting concept descriptions, and (b) human evaluation from concept descriptions", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Classification performance (F1) on Email data. (LN* Results from Srivastava et al. (2017))", "type_str": "figure", "num": null, "uris": null }, "FIGREF2": { "text": "Empirical probability distributions for six quantifiers (Shapes data). Plots show Beta distributions with Method-of-Moments estimates.
Red bars correspond to values from", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "num": null, "content": "
Type | Example description | Conversion to Expectation Constraint
P(y|x) | Emails that I reply to are usually important | E[I_{y=important, reply(x)=true}] \u2212 p_usually \u00d7 E[I_{reply(x)=true}] = 0
P(x|y) | I often reply to important emails | E[I_{y=important, reply(x)=true}] \u2212 p_often \u00d7 E[I_{y=important}] = 0
P(y) | I rarely get important emails | Same as P(y|x_0), where x_0 is a constant feature
", "text": "", "type_str": "table", "html": null }, "TABREF2": { "num": null, "content": "
Classification performance on Shapes datasets (averaged over 50 classification tasks).", "text": "", "type_str": "table", "html": null } } } }