{ "paper_id": "T75-2017", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:43:16.837962Z" }, "title": "", "authors": [], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "T75-2017", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "In this paper we shall report on an initial attempt to relate the representation problem for four areas to each other through the use of a uniform formal structure. The four areas we have been concerned with are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "(I) interpretation of events (2) initiation of actions (3) understanding language (4) using language Finding such a representation would be extremely useful and very suggestive even though it would not by itself constitute a solution to the whole problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Clearly, (I) and (2) are \"pragmatic\" in nature and are not limited to natural language processing, while (3) and(4) may be viewed as special cases of (I) and (2) respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "One of our main goals is to show how both pragmatic and semantic issues may be approached in a formal framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We have chosen to study the area of \"speech acts\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "(conversational activities like \"request,\" \"command,\" \"promise,\" ...) as this area is especially rich in interactions among the four areas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Our goals can be divided into two categories: operational and methodological. On the operational side, we want to implement an actual system which would recognzze\" and \"perform\" speech acts and which would use and understand the verbs of \"saying'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The recognition that a particular speech act has occurred is to be on the basis of context and not solely on explicit markers like a performative verb or a question mark.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We also want a symmetric system which could generate, in the context of reversed roles, anything it could understand.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Initially we would be satisfied that the input and output be in an artificial language which we felt to be adequate to represent the underlying structures of English sentences (I).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "On the methodological side, we have two primary desiderata: unformity of representation, and generality in the procedural component. We do not wish to write an intricate procedure for each speech act.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We want to represent the speech acts in a structure with useful formal properties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "(We settled on the lattice.) 
We want the \"state of the system\" to be a mathematically tractable object as well. The heart of the procedural component is to consist of straightforward (algebraic) operations and relations (LUB, GLB, i) which could be related to certain cognitive and linguistic phenomena.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A system designed along these lines is being implemented in LISP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "RELATED RESEARCH 2This work cuts across several areas in linguistics, natural language processing, and artificial intelligence and is related to work done on \"lexical factorization\" by certain generative semanticists and others. Here, as there, the attempt was to decompose the meanings of various predicates into combinations of a small group of \"core predicates\" or \"primitives'. However, whereas in general the decomposition was allowed to be expressed in any suitable form (trees, dependency networks, ...) we shall decompose into a slightly extended predicate calculus in order to exploit the underlying Boolean algebra and ultimately to construct our derived lattice. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "II.", "sec_num": null }, { "text": "CONTRO | | ACTIONS I 'l l ~(INCLUDING\" [ GOALS UTTERANCES) J Figure 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "II.", "sec_num": null }, { "text": "The block which stands for the procedural component is labeled CONTROL; all the rest are data structures. The SCHEMATA block contains the lattice whose points consist of (2) A more detailed review of related research will be included in the final version of this paper. Some examples are [B75] ,", "cite_spans": [ { "start": 288, "end": 293, "text": "[B75]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "II.", "sec_num": null }, { "text": "[F71], [JM75], [J74], [KP?5], [Sch73], [Sc72], [St74], [W72].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "II.", "sec_num": null }, { "text": "the lexical decompositions (definitions) and certain other elements while the LEXICON contains the non-definitional information. The LEXICON and SCHEMATA remain fixed durin~ the course of the conversation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "II.", "sec_num": null }, { "text": "The \"state or \"instantaneous description\" of the system is to be found in the BELIEFS and GOALS, which are constantly being updated as the conversation progresses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "II.", "sec_num": null }, { "text": "In order to avoid confusion, we should point out that in our discussion of the system, \"beliefs\" and \"goals\" are meant as technical terms to be defined entirely by their function in the system. These terms are not to be confused with their corresponding lexical items.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "II.", "sec_num": null }, { "text": "We shall have more to say about \"goals\" later, but for now we will concentrate on \"beliefs'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "II.", "sec_num": null }, { "text": "At any given time, the system has as its \"beliefs\" a set of propositions in a predicate calculus slightly modified primarily to allow for sentence embeddings. 
This belief set has the following properties:

(1) closure -- if a proposition is in the belief set, then all its direct consequences (i.e., those following from the definitions of the lexical items) are also in the belief set.

(2) consistency -- the Boolean product of the propositions in the belief set cannot be the element "false."

In order to illustrate these restrictions briefly, consider the definition

    bachelor(x) => man(x) & -married(x)

and the following sets:

(1) {bachelor(John), man(John)}
(2) {bachelor(John), -married(John), -man(John)}
(3) {bachelor(John), -married ...

... to an unnegated atomic proposition also yields an atomic proposition.

We can think of all atomic sentences, their conjunctions and disjunctions, together with a "greatest" element * and a "least" element 0, as forming a Boolean algebra, Bool. In this algebra every element (except * and 0) is written as a sum-of-products of atomic propositions. We define the "less-than-or-equal" relation (≤) as follows:

(1) ∀ x ∈ Bool, x ≤ *
(2) ∀ x ∈ Bool, 0 ≤ x
(3) if ...

Figure 3

Intuitively speaking, we have absorbed the non-paradigmatic information states into paradigm points; the l.u.b. in L corresponds to "jumping to a conclusion" -- but only to the least conclusion which is needed to explain the givens. The criteria for how much to extend are in the structure itself.

The actual computation of the l.u.b. in L of two points x and y is not difficult, given that we have ≤ and the l.u.b. from T. One method follows from the observation that the least upper bound is the greatest lower bound of all upper bounds, and that the l.u.b. of x and y in T is ≤ their l.u.b. in L. By this method one first computes t, the least upper bound in T. (This is straightforward, as T is a Boolean algebra.) Set r to *. Then, for each element x of L for which t ≤ x, set r to the GLB of r and x. When we exhaust all such x, the value of r will be the least upper bound.
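What follows is a minimal sketch of this method, purely for illustration: the paper's system is implemented in LISP, and the names used here (L, TOP, leq, boolean_join) are hypothetical stand-ins, with L assumed to be a finite lattice given as a set of points carrying the partial order leq.

    # Sketch of the l.u.b. computation described above (hypothetical names).

    def glb(a, b, L, leq):
        """Greatest lower bound of a and b in the finite lattice L."""
        common = [p for p in L if leq(p, a) and leq(p, b)]
        # in a lattice, exactly one common lower bound dominates all the others
        return next(p for p in common if all(leq(q, p) for q in common))

    def lub_L(x, y, L, leq, boolean_join, TOP):
        """Least upper bound of x and y in L, computed via the Boolean algebra T."""
        t = boolean_join(x, y)          # the l.u.b. taken in T; never above the l.u.b. in L
        r = TOP                         # start from the greatest element *
        for z in L:
            if leq(t, z):               # z is an upper bound of t ...
                r = glb(r, z, L, leq)   # ... so fold it into the running GLB
        return r                        # the GLB of all upper bounds is the l.u.b. in L

The loop simply realizes "the greatest lower bound of all upper bounds" directly.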
Of course, other more efficient methods for computing the l.u.b. also exist.

The mechanism for event interpretation operates in the following manner. The least upper bound is taken of the points in the lattice which, under variable substitution, correspond to the propositions in the belief set and to propositions in some input set. Any matched schemata (and their consequences) are added to the belief set. If the least upper bound taken in this way turns out to be *, one of two things has occurred: either the belief set contained a proposition which contradicted an input proposition (the belief set, one should recall, can never be self-contradictory), or there is no single schema which encompasses all the propositional information. In the former case, a control decision must be made on how to integrate the new material into the belief set. In the latter case, we use the operation "generalized LUB," which returns a set of points, each of which is a l.u.b. for a subset of the propositions.

V. LINGUISTIC RELEVANCE

As was noted before, an attempt was made to correlate the schemata with lexical decompositions of English words, especially the verbs of "saying." It can be seen that definitional direct consequence (a type of entailment) corresponds precisely to the ≤ relation. That is, the fact that a sentence using the defined predicate bachelor has man as its direct consequence implies that the point in L into which man is mapped is less-than-or-equal-to (≤) the point into which bachelor is mapped. If we label points in the lattice with items from the lexicon, we get structures similar to the one shown in Figure 4. (Detailed information about the arguments of each predicate has been left out for the sake of readability.)

The reason for embedding lexical items in the lattice is that the l.u.b. operation can be used to choose appropriate words to describe a situation (given as a "belief set"). That is, we want the act of word selection to be identified with an operation that is naturally suggested by the formal structure. The selection of groups of words is identified with the "generalized LUB."
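To make the word-selection idea concrete, here is a toy illustration (not the paper's implementation). It assumes, purely for the example, that paradigm points are encoded as sets of atomic literals ordered by inclusion, so that a point is ≤ every point that entails it, and that some points carry lexical labels.

    # Toy word selection via the l.u.b. operation (hypothetical encoding:
    # lattice points are frozensets of literals, ordered by inclusion).

    TOP = frozenset({"*"})           # the greatest element *

    # a miniature lattice L; some points carry lexical labels
    L = {
        frozenset():                          None,
        frozenset({"man(x)"}):                "man",
        frozenset({"-married(x)"}):           None,
        frozenset({"man(x)", "-married(x)"}): "bachelor",
        TOP:                                  None,
    }

    def leq(a, b):
        """a <= b: b is at least as specific as a (TOP lies above everything)."""
        return b == TOP or (a != TOP and a <= b)

    def lub(points):
        """Least upper bound in L of the given points (TOP if nothing else fits)."""
        uppers = [p for p in L if all(leq(q, p) for q in points)]
        return next((p for p in uppers if all(leq(p, q) for q in uppers)), TOP)

    # word selection: describe a belief set by the lexical item at its l.u.b.
    beliefs = [frozenset({"man(x)"}), frozenset({"-married(x)"})]
    print(L[lub(beliefs)])           # -> bachelor

On this reading, the "generalized LUB" would return such labeled points for compatible subsets of the beliefs when no single point covers them all.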
One interesting challenge emanating from this approach was to find a way in which well-known semantic properties of lexical items, such as induced presuppositions, could be integrated into the framework. For this purpose we introduced a new connective, @, whose behavior is illustrated in Figure 5.

Figure 5. [Rewriting rules for the @ connective over a presupposition P and an assertion A: the unnegated connective rewrites to &; "weak" negation (not) negates only the assertion, giving P & -A; "strong" negation (neg) applies DeMorgan's law, giving -P + -A.]

If P is taken to be the presupposition and A the assertion, then the two negation rewritings correspond to the usual understanding of presupposition. However, both can be expressed as points in the Boolean algebra. Furthermore, if S is a sentence rewritten as a @ b, then neg(S) ≤ not(S) (since -a + -b ≤ a & -b). Also, if A(a) and A(b) (i.e., if a and b are atomic), then S and not(S) are higher in the lattice than the atomic sentences, but neg(S) is lower.

In this circumstance, if the "belief set" and the "goal set" satisfy enough pre- and post-conditions, respectively, for a particular schema to be matched by the l.u.b. operation, then the action may be taken. Of course, in the case of complete information (a perfect match) the use of the schemata reduces to conditional expressions and as such is sufficient to represent any sequence of actions -- or to perform any computation. What is more interesting, however, is how the lattice provides a model of "intelligent" or "appropriate" choice of actions in the case of incomplete information. In this context, too, the "generalized LUB" plays a role, namely that of selecting several compatible actions to be performed.

REFERENCES

[B75] Bruce, B. (1975) Belief Systems and Language Understanding. BBN Report No. 2973, AI Report No. 21, Cambridge, Massachusetts.

[BT75] Bajcsy, R., and Tidhar, A.
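As a closing illustration of the negation rewritings of Section V (a sketch only; the truth-table check below is not part of the paper's LISP system), one can verify mechanically that "weak" negation entails "strong" negation, which is why neg(S) sits below not(S) in the ordering used above.

    # Truth-table check of the two negation rewritings for S = a @ b
    # (a the presupposition, b the assertion); illustrative only.
    from itertools import product

    def weak_not(a, b):    return a and not b          # not(S) =  a & -b
    def strong_neg(a, b):  return (not a) or (not b)   # neg(S) = -a + -b

    def entails(f, g):
        """f entails g over all truth assignments to (a, b)."""
        return all(g(a, b) for a, b in product([True, False], repeat=2) if f(a, b))

    print(entails(weak_not, strong_neg))   # True:  a & -b  implies  -a + -b
    print(entails(strong_neg, weak_not))   # False: the converse fails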