{ "paper_id": "H86-1013", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:35:03.332102Z" }, "title": "COMMONSENSE METAPHYSICS AND LEXICAL SEMANTICS", "authors": [ { "first": "Jerry", "middle": [ "R" ], "last": "Hobbs", "suffix": "", "affiliation": {}, "email": "" }, { "first": "William", "middle": [], "last": "Croft", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Todd", "middle": [], "last": "Davies", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Douglas", "middle": [], "last": "Edwards", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Kenneth", "middle": [], "last": "Laws", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "H86-1013", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "In the TACITUS project for using commonsense knowledge in the understanding of texts \u2022bout mechanical devices and their failures, we have been developing various commonsense theories that are needed to mediate between the way we talk about the behavior of such devices and causal models of their operation. Of central importance in this effort is the axiomatization of what might be called \"commonsense metaphysics'. This includes a number of areas that figure in virtually every domain of discourse, such as scalar notions, granularity, time, space, material, physical objects, causality, functionality, force, and shape. Our approach to lexical semantics is then to construct core theories of each of these areas, and then to define, or at least characterize, \u2022 large number of lexical items in terms provided by the core theories. In the TACITUS system, processes for solving pragrnatics problems posed by \u2022 text will use the knowledge base consisting of these theories in conjunction with the logical forms of the sentences in the text to produce an interpretation. In this paper we do not stress these interpretation processes; this is another, important aspect of the TACITUS project, and it will be described in subsequent papers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This work represents a convergence of research in lexical semantics in linguistics and efforts in AI to encode commonsense knowledge. Lexical semanticist* over the years have developed formalisms of increasing adequacy for encoding word meaning, progressing from simple sets of features (Katz and Fodor, 1963) to notations for predicateargument structure (Lakoff, 1972; Miller and Johnson-Laird, 1976 ), but the early attempts still limited to world knowledge and assumed only very restricted sorts of processing. Workers in computational linguistics introduced inference (Rieger, 1974; Schank, 1975 ) and other complex cognitive processes (Herskovits, 1982) into our understanding of the role of word meaning. Recently, linguists have given greater attention to the cognitive processes that would operate on their representations (e.g., Talmy, 1983; Croft, 1986) . Independently, in AI an effort arose to encode large amounts of commonsense knowl-edge (Hayes, 1979; Hobbs and Moore, 1985; Hobbs et al. 1985) . The research reported here represents a convergence of these various developments. 
By developing core theories of several fundamental phenomena and defining lexical items within these theories, using the full power of predicate calculus, we are able to cope with complexities of word meaning that have hitherto escaped lexical semanticists, within a framework that gives full scope to the planning and reasoning processes that manipulate representations of word meaning.", "cite_spans": [ { "start": 287, "end": 309, "text": "(Katz and Fodor, 1963)", "ref_id": null }, { "start": 355, "end": 369, "text": "(Lakoff, 1972;", "ref_id": null }, { "start": 370, "end": 400, "text": "Miller and Johnson-Laird, 1976", "ref_id": "BIBREF16" }, { "start": 572, "end": 586, "text": "(Rieger, 1974;", "ref_id": "BIBREF17" }, { "start": 587, "end": 599, "text": "Schank, 1975", "ref_id": "BIBREF18" }, { "start": 640, "end": 658, "text": "(Herskovits, 1982)", "ref_id": "BIBREF6" }, { "start": 838, "end": 850, "text": "Talmy, 1983;", "ref_id": "BIBREF21" }, { "start": 851, "end": 863, "text": "Croft, 1986)", "ref_id": "BIBREF1" }, { "start": 953, "end": 966, "text": "(Hayes, 1979;", "ref_id": "BIBREF5" }, { "start": 967, "end": 989, "text": "Hobbs and Moore, 1985;", "ref_id": "BIBREF11" }, { "start": 990, "end": 1008, "text": "Hobbs et al. 1985)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In constructing the core theories we are attempting to adhere to several methodological principles. 1. One should aim for characterization of concepts, rather than definition. One cannot generally expect to find necessary and sufficient conditions for a concept. The most we can hope for is to find a number of necessary conditions and a number of sufficient conditions. This amounts to saying that a great many predicates are primitive, but primitives that are highly interrelated with the rest of the knowledge base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. One should determine the minimal structure necessary for a concept to make sense. In efforts to axiomatize some area, there are two positions one may take, exemplified by set theory and by group theory. In axiomatizing set theory, one attempts to capture exactly some concept one has strong intuitions about. If the axiomatization turns out to have unexpected models, this exposes an inadequacy. In group theory, by contrast, one characterizes an abstract class of structures. If there turn out to be unexpected models, this is a serendipitous discovery of a new phenomenon that we can reason about using an old theory. The pervasive character of metaphor in natural language discourse shows that our commonsense theories of the world ought to be much more like group theory than set theory. By seeking minimal structures in axiomatizing concepts, we optimize the possibilities of using the theories in metaphorical and analogical contexts. This principle is illustrated below in the section on regions. One consequence of this principle is that our approach will seem more syntactic than semantic. We have concentrated more on specifying axioms than on constructing models. Our view is that the chief role of models in our effort is for proving the consistency and independence of sets of axioms, and for showing their adequacy. 
As an example of the last point, many of the spatial and temporal theories we construct are intended at least to have Euclidean space or the real numbers as one model, and a subclass of graph-theoretical structures as other models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. A balance must be struck between attempting to cover all cases and aiming only for the prototypical cases. In general, we have tried to cover as many cases as possible with an elegant axiomatization, in line with the two previous principles, but where the formalization begins to look baroque, we assume that higher processes will suspend some inferences in the marginal cases. We assume that inferences will be drawn in a controlled fashion. Thus, every outré, highly context-dependent counterexample need not be accounted for, and to a certain extent, definitions can be geared specifically for a prototype.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "4. Where competing ontologies suggest themselves in a domain, one should attempt to construct a theory that accommodates both. Rather than commit oneself to adopting one set of primitives rather than another, one should show how each set of primitives can be characterized in terms of the other. Generally, each of the ontologies is useful for different purposes, and it is convenient to be able to appeal to both. Our treatment of time illustrates this.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "5. The theories one constructs should be richer in axioms than in theorems. In mathematics, one expects to state half a dozen axioms and prove dozens of theorems from them. In encoding commonsense knowledge it seems to be just the opposite. The theorems we seek to prove on the basis of these axioms are theorems about specific situations which are to be interpreted, in particular, theorems about a text that the system is attempting to understand. 6. One should avoid falling into \"black holes\". There are a few \"mysterious\" concepts which crop up repeatedly in the formalization of commonsense metaphysics. Among these are \"relevant\" (that is, relevant to the task at hand) and \"normative\" (or conforming to some norm or pattern). To insist upon giving a satisfactory analysis of these before using them in analyzing other concepts is to cross the event horizon that separates lexical semantics from philosophy. On the other hand, our experience suggests that to avoid their use entirely is crippling; the lexical semantics of a wide variety of other terms depends upon them. Instead, we have decided to leave them minimally analyzed for the moment and use them without scruple in the analysis of other commonsense concepts. This approach will allow us to accumulate many examples of the use of these mysterious concepts, and in the end, contribute to their successful analysis. The use of these concepts appears below in the discussions of the words \"immediately\", \"sample\", and \"operate\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We chose as an initial target problem to encode the commonsense knowledge that underlies the concept of \"wear\", as in a part of a device wearing out. 
Our aim was to define \"wear\" in terms of predicates characterized elsewhere in the knowledge base and to infer consequences of wear. For something to wear, we decided, is for it to lose imperceptible bits of material from its surface due to abrasive action over time. One goal, which we have not yet achieved, is to be able to prove as a theorem that since the shape of a part of a mechanical device is often functional and since loss of material can result in a change of shape, wear of a part of a device can result in the failure of the device as a whole. In addition, as we have proceeded, we have characterized a number of words found in a set of target texts, as it has become possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We are encoding the knowledge as axioms in what is for the most part a first-order logic, described in Hobbs (1985a), although quantification over predicates is sometimes convenient. In the formalism there is a nominalization operator \" ' \" for reifying events and conditions, as expressed in the following axiom schema:", "cite_spans": [ { "start": 103, "end": 116, "text": "Hobbs (1985a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "That is, p is true of x if and only if there is a condition e of p being true of x and e exists in the real world.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(∀x) p(x) ≡ (∃e) p'(e, x) ∧ Exist(e)", "sec_num": null }, { "text": "In our implementation so far, we have been proving simple theorems from our axioms using the CG5 theorem prover developed by Mark Stickel (1982), but we are only now beginning to use the knowledge base in text processing.", "cite_spans": [ { "start": 129, "end": 143, "text": "Stickel (1982)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "(∀x) p(x) ≡ (∃e) p'(e, x) ∧ Exist(e)", "sec_num": null }, { "text": "There is a notational convention used below that deserves some explanation. It has frequently been noted that relational words in natural language can take only certain types of words as their arguments. These are usually described as selectional constraints. The same is true of predicates in our knowledge base. They are expressed below by rules of the form p(x, y) : r(x, y)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements on Arguments of Predicates", "sec_num": "2" }, { "text": "This means that for p even to make sense applied to x and y, it must be the case that r is true of x and y. The logical import of this rule is that wherever there is an axiom of the form (∀x, y) p(x, y) ⊃ q(x, y)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements on Arguments of Predicates", "sec_num": "2" }, { "text": "this is really to be read as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Requirements on Arguments of Predicates", "sec_num": "2" }, { "text": "The checking of selectional constraints, therefore, falls out as a by-product of other logical operations: the constraint r(x, y) must be verified if anything else is to be proven from p(x, y).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(∀x, y) p(x, y) ∧ r(x, y) ⊃ q(x, y)", "sec_num": null }, { "text": "The simplest example of such an r(x, y) is a conjunction of sort constraints r₁(x) ∧ r₂(y). 
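As an illustrative aside, the way constraint checking falls out of inference can be sketched in a few lines of Python. This toy backward chainer is not the CG5 prover or any part of TACITUS, and the predicates p, q, r and the ground facts are hypothetical; it only shows that once r(x, y) is conjoined into the antecedent of every axiom about p, nothing can be proven from p unless r is verified along the way.

```python
# Toy sketch only -- not the TACITUS/CG5 prover. The stored form of
# p(x,y) => q(x,y) is p(x,y) & r(x,y) => q(x,y), propositionalized over
# the constants "a", "b", "c" for brevity. All names are hypothetical.

Atom = tuple  # a ground atom, e.g., ("p", "a", "b")

facts = {
    ("p", "a", "b"), ("r", "a", "b"),   # r(a,b): the constraint holds
    ("p", "a", "c"),                    # note: no r(a,c)
}

rules = [
    (("q", "a", "b"), [("p", "a", "b"), ("r", "a", "b")]),
    (("q", "a", "c"), [("p", "a", "c"), ("r", "a", "c")]),
]

def prove(goal: Atom) -> bool:
    """Naive backward chaining over ground atoms."""
    if goal in facts:
        return True
    return any(head == goal and all(prove(p) for p in body)
               for head, body in rules)

assert prove(("q", "a", "b"))      # succeeds; r(a,b) was checked en route
assert not prove(("q", "a", "c"))  # fails: the constraint r(a,c) cannot be
                                   # verified, so nothing follows from p(a,c)
```

In the sketch, r stands for whatever requirement is placed on p's arguments; in the simplest case it is just the conjunction of sort constraints above.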
Our approach is a generalization of this, because much more complex requirements can be placed on the arguments. Consider, for example, the verb \"range\". If x ranges from y to z, there must be a scale s that includes y and z, and x must be a set of entries that are located at various places on the scale. This can be represented as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(∀x, y) p(x, y) ∧ r(x, y) ⊃ q(x, y)", "sec_num": null }, { "text": "range(x, y, z) : (∃s) scale(s) ∧ y ∈ s ∧ z ∈ s ∧ set(x) ∧ (∀u)[u ∈ x ⊃ (∃v) v ∈ s ∧ at(u, v)]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(∀x, y) p(x, y) ∧ r(x, y) ⊃ q(x, y)", "sec_num": null }, { "text": "The Knowledge Base", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(∀x, y) p(x, y) ∧ r(x, y) ⊃ q(x, y)", "sec_num": null }, { "text": "At the foundation of the knowledge base is an axiomatization of set theory. It follows the standard Zermelo-Fraenkel approach, except that there is no Axiom of Infinity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sets and Granularity", "sec_num": null }, { "text": "Since so many concepts used in discourse are grain-dependent, a theory of granularity is also fundamental (see Hobbs 1985b). A grain is defined in terms of an indistinguishability relation, which is reflexive and symmetric, but not necessarily transitive. One grain can be a refinement of another, with the obvious definition. The most refined grain is the identity grain, i.e., the one in which every two distinct elements are distinguishable. One possible relationship between two grains, one of which is a refinement of the other, is what we call an \"Archimedean relation\", after the Archimedean property of real numbers. Intuitively, if enough events occur that are imperceptible at the coarser grain g₂ but perceptible at the finer grain g₁, then the aggregate will eventually be perceptible at the coarser grain. This is an important property in phenomena subject to the Heap Paradox. Wear, for instance, eventually has significant consequences.", "cite_spans": [ { "start": 111, "end": 123, "text": "Hobbs 1985b)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Sets and Granularity", "sec_num": null }, { "text": "A great many of the most common words in English have scales as their subject matter. This includes many prepositions, the most common adverbs, comparatives, and many abstract verbs. When spatial vocabulary is used metaphorically, it is generally the scalar aspect of space that carries over to the target domain. A scale is defined as a set of elements, together with a partial ordering and a granularity (or an indistinguishability relation). The partial ordering and the indistinguishability relation are consistent with each other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scales", "sec_num": "3.2" }, { "text": "It is useful to have an adjacency relation between points on a scale, and there are a number of ways we could introduce it. We could simply take it to be primitive; in a scale having a distance function, we could define two points to be adjacent when the distance between them is less than some ε; finally, we could define adjacency in terms of the grain-size:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(∀x, y, z) … d₂, it ceases being topologically invariant, and at a force of strength d ≥ d₃, it simply breaks. 
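The threshold vocabulary in this figure text can be rendered as a small classifier. This is an illustrative sketch only; the function name, the category strings, and the convention that each region is half-open are assumptions, not the paper's formalization.

```python
# Hedged sketch of the d1 <= d2 <= d3 threshold vocabulary from the
# surrounding figure text; names and boundary conventions are assumptions.

def response(d: float, d1: float, d2: float, d3: float) -> str:
    """Classify a bit of material's response to a force of strength d."""
    assert 0 <= d1 <= d2 <= d3
    if d < d1:
        return "hard: retains its shape"
    if d < d2:
        return "flexible: changes shape, stays topologically invariant"
    if d < d3:
        return "malleable: ceases being topologically invariant"
    return "breaks"

print(response(5.0, 10.0, 20.0, 30.0))    # below d1: "hard"
print(response(25.0, 10.0, 20.0, 30.0))   # between d2 and d3: "malleable"
print(response(30.0, 30.0, 30.0, 30.0))   # "brittle" limit d1 = d2 = d3:
                                          # any effective force breaks it
```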
Metals exhibit the full range of possibilities, that is, 0 < d₁ < d₂ < d₃ < ∞. For forces of strength d < d₁, the material is \"hard\"; for forces of strength d where d₁ < d < d₂, it is \"flexible\"; for forces of strength d where d₂ < d < d₃, it is \"malleable\". Words such as \"ductile\" and \"elastic\" can be defined in terms of this vocabulary, together with predicates about the geometry of the bit of material. Words such as \"brittle\" (d₁ = d₂ = d₃) and \"fluid\" (d₁ = 0, d₃ = ∞) can also be defined in these terms", "uris": null }, "FIGREF7": { "type_str": "figure", "num": null, "text": "with Talmy's treatment (1985), in terms of the predications force(a, b, d₁) and resist(b, a, d₂)--a forces against b", "uris": null }, "FIGREF8": { "type_str": "figure", "num": null, "text": "of o: abr-event'(e, m, o, b₀) : material(m) ∧ topologically-invariant(o) (∀e, m, o, b₀) abr-event'(e, m, o, b₀) ≡ (∃t, b, s, e₁, e₂, e₃) at(e, t) ∧ consists-of(o, b, t) ∧ surface(s, b) ∧ particle(b₀, s) ∧ change'(e, e₁, e₂) ∧ attached'(e₁, b₀, b) ∧ not'(e₂, e₁) ∧ cause(e₃, e) ∧ hit'(e₃, m, b₀) After the abrasive event, the pointlike bit b₀ is no longer a part of the object o: (∀e, m, o, b₀, b₂, e₁, e₂, t₂) abr-event'(e, m, o, b₀) ∧ change'(e, e₁, e₂) ∧ attached'(e₁, b₀, b) ∧ not'(e₂, e₁) ∧ at(e₂, t₂) ∧ consists-of(o, b₂, t₂) ⊃ ¬part(b₀, b₂) It is necessary to state this explicitly since objects and bits of material can be discontinuous. An abrasion is a large number of abrasive events widely distributed through some nonpointlike region on the surface of an object: (∀e, m, o) abrade'(e, m, o) ≡ (∃bs)[(∀e₁)[e₁ ∈ e ⊃ (∃b₀) b₀ ∈ bs ∧ abr-event'(e₁, m, o, b₀)] ∧ (∀b, s, t)[at(e, t) ∧ consists-of(o, b, t) ∧ surface(s, b) ⊃ (∃r) subregion(r, s) ∧ widely-distributed(bs, r)]]", "uris": null }, "FIGREF9": { "type_str": "figure", "num": null, "text": "a condition to the abrasive event which renders it a (single) corrode event: corrode-event'(e, m, o, b₀) : fluid(m) ∧ contact(m, b₀) (∀e, m, o, b₀) corrode-event'(e, m, o, b₀) ≡ …", "uris": null }, "FIGREF10": { "type_str": "figure", "num": null, "text": "(∀e, m, o, b₀, b₂, e₁, e₂, t₂) detach'(e, m, o, b₀) ∧ change'(e, e₁, e₂) ∧ attached'(e₁, b₀, b) ∧ not'(e₂, e₁) ∧ at(e₂, t₂) ∧ consists-of(o, b₂, t₂) ⊃ ¬part(b₀, b₂)", "uris": null } } } }
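As a closing illustration, the abrasion axioms above interact with the Archimedean grain relation from the granularity theory: each abrasive event removes a bit that is imperceptible at the coarse grain, yet enough widely distributed events aggregate into perceptible wear. The sketch below is illustrative only, with hypothetical names and magnitudes, not a formalization from the paper.

```python
# Illustrative sketch of the Archimedean relation applied to wear: one
# abr-event removes an imperceptibly small bit b0, and an abrasion is a
# large number of such events. The constants below are assumptions.

COARSE_GRAIN = 1e-3   # smallest loss distinguishable from zero (cm^3)
BIT_VOLUME   = 1e-9   # material removed by one abrasive event (cm^3)

def loss_after(n_events: int) -> float:
    """Total material lost after n widely distributed abrasive events."""
    return n_events * BIT_VOLUME

def perceptible(loss: float) -> bool:
    """At the coarse grain, only losses at or above the grain size are
    distinguishable from no loss at all."""
    return loss >= COARSE_GRAIN

assert not perceptible(loss_after(1))        # one event: no visible wear
assert perceptible(loss_after(10_000_000))   # the aggregate is perceptible;
                                             # wear eventually has
                                             # significant consequences
```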