{ "paper_id": "P02-1041", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:30:31.474982Z" }, "title": "Coupling CCG and Hybrid Logic Dependency Semantics", "authors": [ { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Geert-Jan", "middle": [ "M" ], "last": "Kruijff", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Categorial grammar has traditionally used the \u03bb-calculus to represent meaning. We present an alternative, dependency-based perspective on linguistic meaning and situate it in the computational setting. This perspective is formalized in terms of hybrid logic and has a rich yet perspicuous propositional ontology that enables a wide variety of semantic phenomena to be represented in a single meaning formalism. Finally, we show how we can couple this formalization to Combinatory Categorial Grammar to produce interpretations compositionally.", "pdf_parse": { "paper_id": "P02-1041", "_pdf_hash": "", "abstract": [ { "text": "Categorial grammar has traditionally used the \u03bb-calculus to represent meaning. We present an alternative, dependency-based perspective on linguistic meaning and situate it in the computational setting. This perspective is formalized in terms of hybrid logic and has a rich yet perspicuous propositional ontology that enables a wide variety of semantic phenomena to be represented in a single meaning formalism. 
Finally, we show how we can couple this formalization to Combinatory Categorial Grammar to produce interpretations compositionally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The \u03bb-calculus has enjoyed many years as the standard semantic encoding for categorial grammars and other grammatical frameworks, but recent work has highlighted its inadequacies for both linguistic and computational concerns of representing natural language semantics (Copestake et al., 1999; Kruijff, 2001). The latter couples a resource-sensitive categorial proof theory (Moortgat, 1997) to hybrid logic (Blackburn, 2000) to formalize a dependency-based perspective on meaning, which we call here Hybrid Logic Dependency Semantics (HLDS). In this paper, we situate HLDS in the computational context by explicating its properties as a framework for computational semantics and linking it to Combinatory Categorial Grammar (CCG).", "cite_spans": [ { "start": 269, "end": 293, "text": "(Copestake et al., 1999;", "ref_id": "BIBREF7" }, { "start": 360, "end": 376, "text": "(Moortgat, 1997)", "ref_id": "BIBREF14" }, { "start": 393, "end": 410, "text": "(Blackburn, 2000)", "ref_id": "BIBREF5" }, { "start": 710, "end": 715, "text": "(CCG)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The structure of the paper is as follows. In §2, we briefly introduce CCG and how it links syntax and semantics, and then discuss semantic representations that use indexes to identify subparts of logical forms. §3 introduces HLDS and evaluates it with respect to the criteria of other computational semantics frameworks. 
§4 shows how we can build HLDS terms using CCG with unification and §5 shows how intonation and information structure can be incorporated into the approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditionally, categorial grammar has captured meaning using a (simply typed) \u03bb-calculus, building semantic structure in parallel to the categorial inference (Morrill, 1994; Moortgat, 1997; Steedman, 2000b). For example, a (simplified) CCG lexical entry for a verb such as wrote is given in (1).", "cite_spans": [ { "start": 158, "end": 173, "text": "(Morrill, 1994;", "ref_id": "BIBREF15" }, { "start": 174, "end": 189, "text": "Moortgat, 1997;", "ref_id": "BIBREF14" }, { "start": 190, "end": 206, "text": "Steedman, 2000b)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Indexed semantic representations", "sec_num": "2" }, { "text": "(1) wrote ⊢ (s\\np)/np : \u03bbx.\u03bby.write(y, x) Rules of combination are defined to operate on both categories and \u03bb-terms simultaneously. For example, the rules allow the following derivation for Ed wrote books.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Indexed semantic representations", "sec_num": "2" }, { "text": "(2) Ed wrote books: np : Ed, (s\\np)/np : \u03bbx.\u03bby.write(y, x), np : books; wrote books ⇒ s\\np : \u03bby.write(y, books); Ed wrote books ⇒ s : write(Ed, books) Derivations like (2) give rise to the usual sort of predicate-argument structure whereby the order in which the arguments appear (and are bound by the \u03bb's) is essentially constitutive of their meaning. 
Thus, the first argument could be taken to correspond to the writer, whereas the second argument corresponds to what is being written.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Indexed semantic representations", "sec_num": "2" }, { "text": "One deficiency of \u03bb-calculus meaning representations is that they usually have to be type-raised to the worst case to fully model quantification, and this can reverberate and increase the complexity of syntactic categories, since a verb like wrote will need to be able to take arguments with the types of generalized quantifiers. The approach we advocate in this paper does not suffer from this problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Indexed semantic representations", "sec_num": "2" }, { "text": "For CCG, the use of \u03bb-terms is simply a convenient device to bind arguments when presenting derivations (Steedman, 2000b). In implementations, a more common strategy is to compute semantic representations via unification, a tactic explicitly employed in Unification Categorial Grammar (UCG) (Zeevat, 1988). Using a unification paradigm in which atomic categories are bundles of syntactic and semantic information, we can use an entry such as (3) for wrote in place of (1). 
In the unification setting, (3) permits a derivation analogous to (2).", "cite_spans": [ { "start": 108, "end": 125, "text": "(Steedman, 2000b)", "ref_id": "BIBREF17" }, { "start": 296, "end": 310, "text": "(Zeevat, 1988)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Indexed semantic representations", "sec_num": "2" }, { "text": "(3) wrote ⊢ (s : write(y, x)\\np : y)/np : x", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Indexed semantic representations", "sec_num": "2" }, { "text": "For creating predicate-argument structures of this kind, strategies using either \u03bb-terms or unification to bind arguments are essentially notational variants. However, UCG goes beyond simple predicate-argument structures to instead use a semantic representation language called Indexed Language (InL). The idea of using indexes stems from Davidson's event variables; indexes are a commonly used mechanism in unification-based frameworks and in theories of discourse representation. InL attaches an index to every formula, representing its discourse referent. This results in a representation such as (4) for the sentence Ed came to the party.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Indexed semantic representations", "sec_num": "2" }, { "text": "InL thus flattens logical forms to some extent, using the indexes to spread a given entity or event through multiple predications. 
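To make the indexed, flattened style concrete, here is a minimal sketch (our own illustration, not part of UCG or this paper; the names LF and about are invented) of how a representation like (4) can be handled as plain data, with every predication carrying the index of the referent it describes.

```python
# Illustrative sketch of an InL-style flat, indexed logical form.
# (4) for "Ed came to the party": e is the event's index, x the party's.
LF = [
    ("party", ("x",)),
    ("past", ("e",)),
    ("to", ("e", "x")),
    ("come", ("e", "Ed")),
]

def about(index, lf):
    """Collect every predication that mentions the given index."""
    return [(pred, args) for pred, args in lf if index in args]
```

Because the index e recurs across conjuncts, all information about the event can be gathered without descending into a nested term, which is the point of flattening.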
The use of indexes is crucial for UCG's account of modifiers, and as we will see later, we exploit such referents to achieve similar ends when coupling HLDS and CCG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4) e: party(x) ∧ past(e) ∧ to(e, x) ∧ come(e, Ed)", "sec_num": null }, { "text": "Minimal Recursion Semantics (MRS) (Copestake et al., 1999; Copestake et al., 2001) is a framework for computational semantics that is designed to simplify the work of algorithms which produce or use semantic representations. MRS provides the means to represent interpretations with a flat, underspecified semantics using terms of the predicate calculus and generalized quantifiers. Flattening is achieved by using an indexation scheme involving labels that tag particular groups of elementary predications (EPs) and handles (here, h1, h2) that reference those EPs. Underspecification is achieved by using unresolved handles as the arguments for scope-bearing elements and declaring constraints (with the =q operator) on how those handles can be resolved. Different scopes can be reconstructed by equating unresolved handles with the labels of the other EPs obeying the =q constraints. 
For example, (5) would be given as the representation for every dog chases some white cat.", "cite_spans": [ { "start": 34, "end": 58, "text": "(Copestake et al., 1999;", "ref_id": "BIBREF7" }, { "start": 59, "end": 81, "text": "Copestake et al., 2001", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "(4) e: party(x) ∧ past(e) ∧ to(e, x) ∧ come(e, Ed)", "sec_num": null }, { "text": "(5) h0, h1:every(x, h2, h3), h4:dog(x), h11:cat(y), h8:some(y, h9, h10), h11:white(y), h7:chase(x, y), h0 =q h7, h2 =q h4, h9 =q h11", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(4) e: party(x) ∧ past(e) ∧ to(e, x) ∧ come(e, Ed)", "sec_num": null }, { "text": "Copestake et al. argue that these flat representations facilitate a number of computational tasks, including machine translation and generation, without sacrificing linguistic expressivity. Also, flatness permits semantic equivalences to be checked more easily than in structures with deeper embedding, and underspecification simplifies the work of the parser since it does not have to compute every possible reading for scope-bearing elements. Kruijff (2001) proposes an alternative way of representing linguistically realized meaning: namely, as terms of hybrid modal logic (Blackburn, 2000) explicitly encoding the dependency relations between heads and dependents, spatio-temporal structure, contextual reference, and information structure. We call this unified perspective combining many levels of meaning Hybrid Logic Dependency Semantics (HLDS). 
We begin by discussing how hybrid logic extends modal logic, then look at the representation of linguistic meaning via hybrid logic terms.", "cite_spans": [ { "start": 561, "end": 578, "text": "(Blackburn, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "(4) e: party(x) ∧ past(e) ∧ to(e, x) ∧ come(e, Ed)", "sec_num": null }, { "text": "Though modal logic provides a powerful tool for encoding relational structures and their properties, it contains a surprising inherent asymmetry: states (\"worlds\") are at the heart of the model theory for modal logic, but there are no means to directly reference specific states using the object language. This inability to state where exactly a proposition holds makes modal logic an inadequate representation framework for practical applications like knowledge representation (Areces, 2000) or temporal reasoning (Blackburn, 1994). Because of this, computational work in knowledge representation has usually involved re-engineering first-order logic to suit the task, e.g., the use of metapredicates such as the Holds predicate of Kowalski and Allen. Unfortunately, such logics are often undecidable.", "cite_spans": [ { "start": 478, "end": 492, "text": "(Areces, 2000)", "ref_id": "BIBREF2" }, { "start": 515, "end": 532, "text": "(Blackburn, 1994)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Hybrid Logic", "sec_num": "3.1" }, { "text": "Hybrid logic extends standard modal logic while retaining decidability and favorable complexity (Areces, 2000) (cf. Areces et al. (1999) for a complexity roadmap). The strategy is to add nominals, a new sort of basic formula with which we can explicitly name states in the object language. Next to propositions, nominals are first-class citizens of the object language: formulas can be formed using both sorts, standard boolean operators, and the satisfaction operator \"@\". 
A formula @i p states that the formula p holds at the state named by i. 1 (There are more powerful operators ranging over nominals, but we do not consider them here.)", "cite_spans": [ { "start": 96, "end": 109, "text": "(Areces, 2000", "ref_id": "BIBREF2" }, { "start": 110, "end": 137, "text": ") (cf. (Areces et al., 1999", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Hybrid Logic", "sec_num": "3.1" }, { "text": "With nominals we obtain the ability to refer explicitly to the state at which a proposition holds. As Blackburn (1994) argues, this is essential for capturing our intuitions about temporal reference. A standard modal temporal logic with the modalities F and P (future and past, respectively) cannot correctly represent an utterance such as Ed finished the book because it is unable to refer to the specific time at which the event occurred. The addition of nominals makes this possible, as shown in (6), where the nominal i represents the Reichenbachian event time.", "cite_spans": [ { "start": 106, "end": 122, "text": "Blackburn (1994)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Hybrid Logic", "sec_num": "3.1" }, { "text": "(6) P(i ∧ Ed-finish-book) Furthermore, many temporal properties can be defined in terms of pure formulas which use nominals and contain no propositional variables. 
For example, the following term defines the fact that the relations for F and P are mutually converse:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Logic", "sec_num": "3.1" }, { "text": "1 A few notes on our conventions: p, q, r are variables over any hybrid logic formula; i, j, k are variables over nominals; di and hi denote nominals (for dependent and head, respectively).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Logic", "sec_num": "3.1" }, { "text": "(7) @i[F]⟨P⟩i ∧ @i[P]⟨F⟩i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Logic", "sec_num": "3.1" }, { "text": "It is also possible to encode a variety of other representations in terms of hybrid logics. For example, nominals correspond to tags in attribute-value matrices (AVMs), so the hybrid logic formula in (8) corresponds to the AVM in (9).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Logic", "sec_num": "3.1" }, { "text": "(8) ⟨SUBJ⟩(i ∧ ⟨AGR⟩singular ∧ ⟨PRED⟩dog) ∧ ⟨COMP⟩⟨SUBJ⟩i (9) [ SUBJ [1][ AGR singular, PRED dog ], COMP [ SUBJ [1] ] ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Logic", "sec_num": "3.1" }, { "text": "A crucial aspect of hybrid logic is that nominals are at the heart of a sorting strategy. Different sorts of nominals can be introduced to build up a rich sortal ontology without losing the perspicuity of a propositional setting. Additionally, we can reason about sorts because nominals are part and parcel of the object language. We can extend the language of hybrid logic with Sort:Nominal to facilitate the explicit statement of what sort a nominal is in the language and carry this modification into one of the existing tableaux methods for hybrid logic to reason effectively with this information. 
This makes it possible to capture the rich ontologies of lexical databases like WordNet in a clear and concise fashion that would be onerous to represent in first-order logic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hybrid Logic", "sec_num": "3.1" }, { "text": "Hybrid logic enables us to logically capture two essential aspects of meaning in a clean and compact way, namely ontological richness and the possibility to refer. Logically, we can represent an expression's linguistically realized meaning as a conjunction of modalized terms, anchored by the nominal that identifies the head's proposition:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding linguistic meaning", "sec_num": "3.2" }, { "text": "(10) @h(proposition ∧ ⋀i ⟨δi⟩(di ∧ depi))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding linguistic meaning", "sec_num": "3.2" }, { "text": "Dependency relations are modeled as modal relations δi, and with each dependent we associate a nominal di, representing its discourse referent. Technically, (10) states that each nominal di names the state where a dependent expressed as a proposition depi should be evaluated and is a δi-successor of h, the nominal identifying the head. As an example, the sentence Ed wrote a long book in London receives the representation in (11).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding linguistic meaning", "sec_num": "3.2" }, { "text": "(11) @h1(write ∧ ⟨ACT⟩(d0 ∧ Ed) ∧ ⟨PAT⟩(d5 ∧ book ∧ ⟨GR⟩(d7 ∧ long)) ∧ ⟨LOC⟩(d9 ∧ London))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding linguistic meaning", "sec_num": "3.2" }, { "text": "The modal relations ACT, PAT, LOC, and GR stand for the dependency relations Actor, Patient, Locative, and General Relationship, respectively. See Kruijff (2001) for the model-theoretic interpretation of expressions such as (11). 
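To illustrate how such terms can be checked against a relational structure, here is a toy evaluator for a small hybrid-logic fragment (our own encoding, purely illustrative; the tuple format and all names are invented). A nominal is true exactly at the state it names, ("dia", rel, f) plays the role of the diamond ⟨rel⟩f, and ("at", i, f) plays the role of @i f; the example model mirrors the structure described by (11).

```python
# Toy model checker for a hybrid-logic fragment (illustrative only).
# Formulas: ("prop", p) | ("nom", i) | ("and", f, g) | ("dia", rel, f) | ("at", i, f)

def conj(*fs):
    """Fold a sequence of formulas into nested binary conjunctions."""
    out = fs[0]
    for g in fs[1:]:
        out = ("and", out, g)
    return out

def holds(f, s, model):
    rels, val, noms = model
    op = f[0]
    if op == "prop":
        return s in val.get(f[1], set())
    if op == "nom":                      # true only at the state it names
        return noms[f[1]] == s
    if op == "and":
        return holds(f[1], s, model) and holds(f[2], s, model)
    if op == "dia":                      # some rel-successor satisfies f
        return any(holds(f[2], t, model)
                   for (u, t) in rels.get(f[1], set()) if u == s)
    if op == "at":                       # @i f: jump to the state named i
        return holds(f[2], noms[f[1]], model)
    raise ValueError(f"unknown operator: {op}")

# A model for "Ed wrote a long book in London", mirroring (11).
model = (
    {"ACT": {("s0", "sd0")}, "PAT": {("s0", "sd5")},
     "GR": {("sd5", "sd7")}, "LOC": {("s0", "sd9")}},
    {"write": {"s0"}, "Ed": {"sd0"}, "book": {"sd5"},
     "long": {"sd7"}, "London": {"sd9"}},
    {"h1": "s0", "d0": "sd0", "d5": "sd5", "d7": "sd7", "d9": "sd9"},
)

f11 = ("at", "h1", conj(
    ("prop", "write"),
    ("dia", "ACT", conj(("nom", "d0"), ("prop", "Ed"))),
    ("dia", "PAT", conj(("nom", "d5"), ("prop", "book"),
                        ("dia", "GR", conj(("nom", "d7"), ("prop", "long"))))),
    ("dia", "LOC", conj(("nom", "d9"), ("prop", "London"))),
))
```

Since @ relocates evaluation to a named state, a formula anchored by @h1 holds at every state of the model or at none, which is what lets a conjunction of @-anchored terms describe a whole relational structure.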
Contextual reference can be modeled as a statement that from the current state (the anaphor) there should be an accessible antecedent state at which particular conditions hold. Thus, assuming an accessibility relation XS, we can model the meaning of the pronoun he as in (12).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding linguistic meaning", "sec_num": "3.2" }, { "text": "(12) @i⟨XS⟩(j ∧ male)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding linguistic meaning", "sec_num": "3.2" }, { "text": "During discourse interpretation, this statement is evaluated against the discourse model. The pronoun is resolvable only if a state where male holds is XS-accessible in the discourse model. Different accessibility relations can be modeled, e.g. to distinguish a local context (for resolving reflexive anaphors like himself) from a global context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding linguistic meaning", "sec_num": "3.2" }, { "text": "Finally, the rich temporal ontology underlying models of tense and aspect such as Moens and Steedman (1988) can be captured using the sorting strategy. Earlier work like Blackburn and Lascarides (1992) already explored such ideas. HLDS employs hybrid logic to integrate Moens and Steedman's notion of the event nucleus directly into meaning representations. The event nucleus is a tripartite structure reflecting the underlying semantics of a type of event. The event is related to a preparation (an activity bringing the event about) and a consequent (a state ensuing from the event), which we encode as the modal relations PREP and CONS, respectively. 
Different kinds of states and events are modeled as different sorts of nominals, shown in (13) using the notation introduced above.", "cite_spans": [ { "start": 82, "end": 107, "text": "Moens and Steedman (1988)", "ref_id": "BIBREF13" }, { "start": 170, "end": 201, "text": "Blackburn and Lascarides (1992)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Encoding linguistic meaning", "sec_num": "3.2" }, { "text": "(13) @Activity:e1 ⟨PREP⟩ Achievement:e2 ∧ @Achievement:e2 ⟨CONS⟩ State:e3 To tie (13) in with a representation like (11), we equate the nominal of the head with one of the nominals in the event nucleus (e.g. e2) and state its temporal relation (e.g. ⟨P⟩). Given the event nucleus in (13), the representation in (11) becomes (14), where the event is thus located at a specific time in the past.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding linguistic meaning", "sec_num": "3.2" }, { "text": "(14) @h1(write ∧ ⟨ACT⟩(d0 ∧ Ed) ∧ ⟨PAT⟩(d5 ∧ book ∧ ⟨GR⟩(d7 ∧ long)) ∧ ⟨LOC⟩(d9 ∧ London)) ∧ @h1 Achievement:e2 ∧ ⟨P⟩ Achievement:e2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoding linguistic meaning", "sec_num": "3.2" }, { "text": "Hybrid logic's flexibility makes it amenable to representing a wide variety of semantic phenomena in a propositional setting, and it can furthermore be used to formulate a discourse theory (Kruijff and Kruijff-Korbayov\u00e1, 2001).", "cite_spans": [ { "start": 189, "end": 225, "text": "(Kruijff and Kruijff-Korbayov\u00e1, 2001", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Encoding linguistic meaning", "sec_num": "3.2" }, { "text": "Here we consider the properties of HLDS with respect to the four main criteria laid out by Copestake et al. 
(1999) which a computational semantics framework must meet: expressive adequacy, grammatical compatibility, computational tractability, and underspecifiability.", "cite_spans": [ { "start": 91, "end": 114, "text": "Copestake et al. (1999)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison to MRS", "sec_num": "3.3" }, { "text": "Expressive adequacy refers to a framework's ability to correctly express linguistic meaning. HLDS was designed not only with this in mind, but as its central tenet. In addition to providing the means to represent the usual predicate-valency relations, it explicitly marks the named dependency relations between predicates and their arguments and modifiers. These different dependency relations are not just labels: they all have unique semantic imports which project new relations in the context of different heads. HLDS also tackles the representation of tense and aspect, contextual reference, and information structure, as well as their interaction with discourse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to MRS", "sec_num": "3.3" }, { "text": "The criterion of grammatical compatibility requires that a framework be linkable to other kinds of grammatical information. Kruijff (2001) shows that HLDS can be coupled to a rich grammatical framework, and in §4 we demonstrate that it can be tied to CCG, a much less powerful formalism than that assumed by Kruijff. It should furthermore be straightforward to use our approach to hook HLDS up to other unification-based frameworks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to MRS", "sec_num": "3.3" }, { "text": "The definition of computational tractability states that it must be possible to check semantic equivalence of different formulas straightforwardly. Like MRS, HLDS provides the means to view linguistic meaning in a flattened format and thereby ease the checking of equivalence. 
For example, (15) describes the same relational structure as (11).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to MRS", "sec_num": "3.3" }, { "text": "(15) @h1(write ∧ ⟨ACT⟩d0 ∧ ⟨PAT⟩d5 ∧ ⟨LOC⟩d9) ∧ @d0 Ed ∧ @d5 book ∧ @d9 London ∧ @d7 long ∧ @d5⟨GR⟩d7", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to MRS", "sec_num": "3.3" }, { "text": "This example clarifies how the use of nominals is related to the indexes of UCG's InL and the labels of MRS. However, there is an important difference: nominals are full citizens of the object language with semantic import and are not simply a device for spreading meaning across several elementary predications. They simultaneously represent tags on subparts of a logical form and discourse referents on which relations are predicated. Because it is possible to view an HLDS term as a flat conjunction of the heads and dependents inside it, the benefits described by Copestake et al. with respect to MRS's flatness thus hold for HLDS as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to MRS", "sec_num": "3.3" }, { "text": "Computational tractability also requires that it is straightforward to express relationships between representations. This can be done in the object language of HLDS as hybrid logic implicational statements which can be used with proof methods to discover deeper relationships. Kruijff's model connecting linguistic meaning to a discourse context is one example of this.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to MRS", "sec_num": "3.3" }, { "text": "Underspecifiability means that semantic representations should provide means to leave some semantic distinctions unresolved whilst allowing partial terms to be flexibly and monotonically resolved. (5) shows how MRS leaves quantifier scope underspecified, and such formulas can be transparently encoded in HLDS. 
Consider (16), where the relations RESTR and BODY represent the restriction and body arguments of the generalized quantifiers, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to MRS", "sec_num": "3.3" }, { "text": "(16) @h7(chase ∧ ⟨ACT⟩h4 ∧ ⟨PAT⟩h11) ∧ @h1(every ∧ ⟨RESTR⟩i ∧ ⟨BODY⟩j) ∧ @h8(some ∧ ⟨RESTR⟩k ∧ ⟨BODY⟩l) ∧ @h4 dog ∧ @h11 cat ∧ @h11⟨GR⟩(h12 ∧ white) ∧ @i⟨QEQ⟩h4 ∧ @k⟨QEQ⟩h11", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to MRS", "sec_num": "3.3" }, { "text": "MRS-style underspecification is thus replicated by declaring new nominals and modeling =q as a modal relation QEQ between nominals. When constructing the fully-scoped structures generated by an underspecified one, the QEQ constraints must be obeyed according to the qeq condition of Copestake et al. Because HLDS is couched directly in terms of hybrid logic, we can concisely declare the qeq condition as the following implication:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to MRS", "sec_num": "3.3" }, { "text": "(17) @i⟨QEQ⟩j → @i j ∨ (@i⟨BODY⟩k ∧ @k⟨QEQ⟩j)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to MRS", "sec_num": "3.3" }, { "text": "Alternatively, it would in principle be possible to adopt a truly modal solution to the representation of quantifiers. Following Alechina (1995), (generalized) quantification can be modeled as modal operators. The complexity of generalized quantification is then pushed into the model theory instead of forcing the representation to carry the burden.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to MRS", "sec_num": "3.3" }, { "text": "In Dependency Grammar Logic (DGL), Kruijff (2001) couples HLDS to a resource-sensitive categorial proof theory (CTL) (Moortgat, 1997). 
Though DGL demonstrates a procedure for building HLDS terms from linguistic expressions, there are several problems we can overcome by switching to CCG. First, parsing with CCG grammars for substantial fragments is generally more efficient than with CTL grammars of similar coverage. Also, a wide-coverage statistical parser which produces syntactic dependency structures for English is available for CCG (Clark et al., 2002). Second, syntactic features (modeled by unary modalities) in CTL have no intuitive semantic reflection, whereas CCG can relate syntactic and semantic features perspicuously using unification. Finally, CCG has a detailed syntactic account of the realization of information structure in English.", "cite_spans": [ { "start": 116, "end": 132, "text": "(Moortgat, 1997)", "ref_id": "BIBREF14" }, { "start": 542, "end": 562, "text": "(Clark et al., 2002)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "To link syntax and semantics in derivations, every logical form in DGL expresses a nominal identifying its head in the format @i p. This handles dependents in a linguistically motivated way through a linking theory: given the form of a dependent, its (possible) role is established, after which its meaning states that it seeks a head that can take such a role. However, to subsequently bind that dependent into the verb's argument slot requires logical axioms about the nature of various dependents. This not only requires extra reduction steps to arrive at the desired logical form, but could also lead to problems depending on the underlying theory of roles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "We present an alternative approach to binding dependents, which overcomes these problems without abandoning the linguistic motivation. 
Because we work in a lexicalist setting, we can compile the effects of the linguistic linking theory directly into category assignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "The first difference in our proposal is that arguments express only their own nominal, not the nominal of a head as well. For example, proper nouns receive categories such as (18).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "(18) Ed ⊢ np : @d1 Ed", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "This entry highlights our relaxation of the strict connection between syntactic and semantic types traditionally assumed in categorial grammars, a move in line with the MRS approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "In contrast with DGL, the semantic portion of a syntactic argument in our system does not declare the role it is to take and does not identify the head it is to be part of. Instead it identifies only its own referent. Without using additional inference steps, this is transmuted via unification into a form similar to DGL's in the result category. 
(19) is an example of the kind of head category needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "(19) sleeps ⊢ s : @h2(sleep ∧ ⟨ACT⟩(i ∧ p))\\np : @i p", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "To derive Ed sleeps, (18) and (19) combine via backward application to produce (20), the same term as that built in DGL, but using one step instead of several.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "(20) @h2(sleep ∧ ⟨ACT⟩(d1 ∧ Ed))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "To produce HLDS terms that are fully compatible with the way that Kruijff and Kruijff-Korbayov\u00e1 (2001) model discourse, we need to mark the informativity of dependents as contextually bound (CB) and contextually nonbound (NB). In DGL, these appear as modalities in logical forms that are used to create a topic-focus articulation that is merged with the discourse context. For example, the sentence he wrote a book would receive the following (simplified) interpretation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "(21) @h1(⟨NB⟩write ∧ ⟨NB⟩⟨PAT⟩(d5 ∧ book) ∧ ⟨CB⟩⟨ACT⟩(d6 ∧ ⟨XS⟩(d3 ∧ male)))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "DGL uses feature-resolving unary modalities (Moortgat, 1997) to instantiate the values of informativity. In unification-based approaches such as CCG, the transfer of feature information into semantic representations is standard practice. 
We thus employ the feature inf and mark informativity in logical forms with values resolved syntactically.", "cite_spans": [ { "start": 44, "end": 60, "text": "(Moortgat, 1997)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "(22) Ed ⊢ n[inf=CB] : @d1(Ed) (23) sleeps ⊢ s : @h2([NB]sleep ∧ [q]⟨ACT⟩(i ∧ p)) ∖ n[inf=q] : @i(p)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "Combining these entries using backward application gives the following result for Ed sleeps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "(24) s : @h2([NB]sleep ∧ [CB]⟨ACT⟩(d1 ∧ Ed))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "A major benefit of having nominals in our representations comes with adjuncts. With HLDS, we consider the prepositional verbal modifier in the sentence Ed sleeps in the bed as an optional Locative dependent of sleeps. To implement this, we follow DGL in identifying the discourse referent of the head with that of the adjunct.
However, unlike DGL, this is compiled into the category for the adjunct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "(25) in ⊢ (s : @i(p ∧ [r]⟨LOC⟩(j ∧ q)) ∖ s : @i(p)) / n[inf=r] : @j(q)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "To derive the sentence Ed sleeps in the bed (see Figure 1), we then need the following further entries:", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 57, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "(26) the ⊢ n[inf=CB] : p / n[inf=NB] : p (27) bed ⊢ n[inf=NB] : @d3(bed)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "This approach thus allows adjuncts to insert their semantic import into the meaning of the head, making use of nominals in a manner similar to the use of indexes in Unification Categorial Grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CCG Coupled to HLDS", "sec_num": "4" }, { "text": "Information Structure (IS) in English is in part determined by intonation. For example, given the question in (28), an appropriate response would be (29). Steedman (2000a) incorporates intonation into CCG syntactic analyses to determine the contribution of different constituents to IS. Steedman calls segments such as Ed wrote of (29) the theme of the sentence, and a book the rheme.
The former indicates the part of the utterance that connects it with the preceding discourse, whereas the latter provides information that moves the discourse forward.", "cite_spans": [ { "start": 153, "end": 169, "text": "Steedman (2000a)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Intonation and Information Structure", "sec_num": "5" }, { "text": "In the context of Discourse Representation Theory, Kruijff-Korbayová (1998) represents IS by splitting DRT structures into a topic/focus articulation of the form TOPIC º» FOCUS. We represent this in HLDS as a term incorporating the º» operator. Equating topic and focus with Steedman's theme and rheme, we encode the interpretation of (29) as:", "cite_spans": [ { "start": 51, "end": 75, "text": "Kruijff-Korbayová (1998)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Intonation and Information Structure", "sec_num": "5" }, { "text": "(30) @h7([CB]write ∧ [CB]⟨ACT⟩(d1 ∧ Ed) º» [NB]⟨PAT⟩(d4 ∧ book))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intonation and Information Structure", "sec_num": "5" }, { "text": "DGL builds such structures by using a rewriting system to produce terms with topic/focus articulation from the terms produced by the syntax. Steedman uses pitch accents to produce lexical entries with values for the INFORMATION feature, which we call here sinf. L+H* and H* set the value of this feature as θ (for theme) or ρ (for rheme), respectively. He also employs categories for the boundary tones that carry blocking values for sinf, which stop incomplete intonational phrases from combining with others, thereby avoiding derivations for utterances with nonsensical intonation contours.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intonation and Information Structure", "sec_num": "5" }, { "text": "Our approach is to incorporate the syntactic aspects of Steedman's analysis with DGL's rewriting system for using informativity to partition sentential meaning.
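A rough sketch of such a partitioning step (our own simplification: the real DGL rewriting system operates on full hybrid-logic terms, whereas here a head's dependents are flat triples, and the function name is hypothetical):

```python
# Toy illustration (ours) of partitioning a head's dependents into a
# topic part (contextually bound, CB) and a focus part (nonbound, NB),
# in the spirit of DGL's topic/focus rewriting.
def partition_by_informativity(deps):
    """deps: list of (relation, informativity, filler) triples."""
    topic = [(rel, filler) for rel, inf, filler in deps if inf == "CB"]
    focus = [(rel, filler) for rel, inf, filler in deps if inf == "NB"]
    return topic, focus

# "Ed wrote a book" with Ed wrote as theme/topic and a book as rheme/focus:
deps = [("ACT", "CB", "d1 & Ed"), ("PAT", "NB", "d4 & book")]
topic, focus = partition_by_informativity(deps)
```

The informativity marks produced during the derivation are thus the only input the partitioning step needs, regardless of whether they were contributed by intonation or by some other marking strategy.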
In addition to using the syntactic feature sinf, we allow intonation marking to instantiate the values of the semantic informativity feature inf. Thus, we have the following sort of entry:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intonation and Information Structure", "sec_num": "5" }, { "text": "(31) WROTE (L+H*) ⊢ (s[sinf=θ] : φ ∖ n[inf=w, sinf=θ] : @i(p)) / n[inf=x, sinf=θ] : @j(q), where φ = @h2([CB]write ∧ [w]⟨ACT⟩(i ∧ p) ∧ [x]⟨PAT⟩(j ∧ q))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intonation and Information Structure", "sec_num": "5" }, { "text": "We therefore straightforwardly reap the syntactic benefits of Steedman's intonation analysis, while IS itself is determined via DGL's logical-form rewriting system operating on the modal indications of informativity produced during the derivation. The articulation of IS can thus be performed uniformly across languages, which use a variety of strategies, including intonation, morphology, and word order variation, to mark the informativity of different elements. The resulting logical form plugs directly into DGL's architecture for incorporating sentence meaning with the discourse.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Intonation and Information Structure", "sec_num": "5" }, { "text": "Since it is couched in hybrid logic, HLDS is ideally suited to be logically engineered to the task at hand.
Hybrid logic can be made to do exactly what we want, answering to the linguistic intuitions we want to formalize without sacrificing its core assets: a rich propositional ontology, decidability, and favorable computational complexity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "Various aspects of meaning, like dependency relations, contextual reference, tense and aspect, and information structure, can be perspicuously encoded with HLDS, and the resulting representations can be built compositionally using CCG. CCG has close affinities with dependency grammar, and it provides a competitive and explanatorily adequate basis for a variety of phenomena ranging from coordination and unbounded dependencies to information structure. Nonetheless, the approach we describe could in principle be fit into other unification-based frameworks like Head-Driven Phrase Structure Grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "Hybrid logic's utility does not stop with sentential meaning. It can also be used to model discourse interpretation and is closely related to logics for knowledge representation. This way we can cover the path from grammar to discourse with a single meaning formalism. We do not need to translate or make simplifying assumptions for different processing modules to communicate, and we can freely include and use information across different levels of meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "We have implemented a (preliminary) Java package for creating and manipulating hybrid logic terms and connected it to Grok, a CCG parsing system. 3 The use of HLDS has made it possible to improve the representation of the lexicon.
Hybrid logic nominals provide a convenient and intuitive manner of localizing parts of a semantic structure, which has made it possible to greatly simplify the use of inheritance in the lexicon. Logical forms are created as an accumulation of different levels in the hierarchy, including morphological information. This is particularly important since the system does not otherwise support typed feature structures with inheritance.", "cite_spans": [ { "start": 146, "end": 147, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "Hybrid logics provide a perspicuous logical language for representing structures in temporal logic, description logic, AVMs, and indeed any relational structure. Terms of HLDS can thus be marshalled into terms of these other representations, with the potential of taking advantage of tools developed for them or providing input to modules expecting them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "In future work, we intend to combine techniques for building wide-coverage statistical parsers for CCG (Clark et al., 2002) with corpora that have explicitly marked semantic dependency relations (such as the Prague Dependency Treebank and NEGRA) to produce HLDS terms as the parse output.", "cite_spans": [ { "start": 103, "end": 122, "text": "Clark et al., 2002)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Work", "sec_num": "6" }, { "text": "Following Pierrehumbert's notation, the intonational contour L+H* indicates a low-rising pitch accent, H* a sharply rising pitch accent, and both LH% and LL% are boundary tones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The software is available at http://opennlp.sf.net and http://grok.sf.net under an open source license.", "cite_spans": [], "ref_spans":
[], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Patrick Blackburn, Johan Bos, Nissim Francez, Alex Lascarides, Mark Steedman, Bonnie Webber and the ACL reviewers for helpful comments on earlier versions of this paper. All errors are, of course, our own. Jason Baldridge's work is supported in part by Overseas Research Student Award ORS/98014014. Geert-Jan Kruijff's work is supported by the DFG Sonderforschungsbereich 378 Resource-Sensitive Cognitive Processes, Project NEGRA EM6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "University of Amsterdam", "authors": [ { "first": "Natasha", "middle": [ "Alechina" ], "last": "", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Natasha Alechina. 1995. Modal Quantifiers. Ph.D. thesis, Uni- versity of Amsterdam, Amsterdam, The Netherlands.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A road-map on complexity for hybrid logics", "authors": [ { "first": "Carlos", "middle": [], "last": "Areces", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Blackburn", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Marx", "suffix": "" } ], "year": 1999, "venue": "Computer Science Logic, number 1683 in Lecture Notes in Computer Science", "volume": "", "issue": "", "pages": "307--321", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos Areces, Patrick Blackburn, and Maarten Marx. 1999. A road-map on complexity for hybrid logics. In J. Flum and M. Rodr\u00edguez-Artalejo, editors, Computer Science Logic, number 1683 in Lecture Notes in Computer Science, pages 307-321. Springer-Verlag.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Logic Engineering. 
The Case of Description and Hybrid Logics", "authors": [ { "first": "Carlos", "middle": [], "last": "Areces", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos Areces. 2000. Logic Engineering. The Case of Descrip- tion and Hybrid Logics. Ph.D. thesis, University of Amster- dam, Amsterdam, The Netherlands.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Sorts and operators for temporal semantics", "authors": [ { "first": "Patrick", "middle": [], "last": "Blackburn", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" } ], "year": 1992, "venue": "Proc. of the Fourth Symposium on Logic and Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Blackburn and Alex Lascarides. 1992. Sorts and oper- ators for temporal semantics. In Proc. of the Fourth Sympo- sium on Logic and Language, Budapest, Hungary.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Tense, temporal reference and tense logic", "authors": [ { "first": "Patrick", "middle": [], "last": "Blackburn", "suffix": "" } ], "year": 1994, "venue": "Journal of Semantics", "volume": "11", "issue": "", "pages": "83--101", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Blackburn. 1994. Tense, temporal reference and tense logic. Journal of Semantics, 11:83-101.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Representation, reasoning, and relational structures: a hybrid logic manifesto", "authors": [ { "first": "Patrick", "middle": [], "last": "Blackburn", "suffix": "" } ], "year": 2000, "venue": "Logic Journal of the IGPL", "volume": "8", "issue": "3", "pages": "339--625", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Blackburn. 2000. Representation, reasoning, and rela- tional structures: a hybrid logic manifesto. 
Logic Journal of the IGPL, 8(3):339-625.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Building deep dependency structures using a wide-coverage CCG parser", "authors": [ { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 40th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Clark, Julia Hockenmaier, and Mark Steedman. 2002. Building deep dependency structures using a wide-coverage CCG parser. In Proc. of the 40th Annual Meeting of the As- sociation of Computational Linguistics, Philadelphia, PA.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Minimal recursion semantics: An introduction. ms", "authors": [ { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Sag", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Pollard", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ann Copestake, Dan Flickinger, Ivan Sag, and Carl Pollard. 1999. Minimal recursion semantics: An introduction. ms, www-csli.stanford.edu/\u02dcaac/newmrs.ps.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "An algebra for semantic construction in constraint-based grammars", "authors": [ { "first": "Ann", "middle": [], "last": "Copestake", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Flickinger", "suffix": "" } ], "year": 2001, "venue": "Proc. 
of the 39th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "132--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ann Copestake, Alex Lascarides, and Dan Flickinger. 2001. An algebra for semantic construction in constraint-based grammars. In Proc. of the 39th Annual Meeting of the Association of Computational Linguistics, pages 132-139, Toulouse, France.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Generative models for statistical parsing with combinatory categorial grammar", "authors": [ { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2002, "venue": "Proc. of the 40th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Julia Hockenmaier and Mark Steedman. 2002. Generative models for statistical parsing with combinatory categorial grammar. In Proc. of the 40th Annual Meeting of the As- sociation of Computational Linguistics, Philadelphia, PA.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A hybrid logic formalization of information structure sensitive discourse interpretation", "authors": [ { "first": "M", "middle": [], "last": "Geert-Jan", "suffix": "" }, { "first": "Ivana", "middle": [], "last": "Kruijff", "suffix": "" }, { "first": "", "middle": [], "last": "Kruijff-Korbayov\u00e1", "suffix": "" } ], "year": 2001, "venue": "Proc. of the Fourth Workshop on Text, Speech and Dialogue", "volume": "2166", "issue": "", "pages": "31--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geert-Jan M. Kruijff and Ivana Kruijff-Korbayov\u00e1. 2001. A hybrid logic formalization of information structure sensitive discourse interpretation. In Proc. of the Fourth Workshop on Text, Speech and Dialogue, volume 2166 of LNCS/LNAI, pages 31-38. 
Springer-Verlag.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Dynamic Potential of Topic and Focus: A Praguian Approach to Discourse Representation Theory", "authors": [ { "first": "", "middle": [], "last": "Ivana Kruijff-Korbayov\u00e1", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivana Kruijff-Korbayov\u00e1. 1998. The Dynamic Potential of Topic and Focus: A Praguian Approach to Discourse Repre- sentation Theory. Ph.D. thesis, Charles University, Prague, Czech Republic.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A Categorial Modal Architecture of Informativity: Dependency Grammar Logic & Information Structure", "authors": [ { "first": "Jan", "middle": [ "M" ], "last": "Geert", "suffix": "" }, { "first": "", "middle": [], "last": "Kruijff", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geert-Jan M. Kruijff. 2001. A Categorial Modal Architec- ture of Informativity: Dependency Grammar Logic & Infor- mation Structure. Ph.D. thesis, Charles University, Prague, Czech Republic.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Temporal ontology and temporal reference", "authors": [ { "first": "Marc", "middle": [], "last": "Moens", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 1988, "venue": "Computational Linguistics", "volume": "14", "issue": "", "pages": "15--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Moens and Mark Steedman. 1988. Temporal ontology and temporal reference. 
Computational Linguistics, 14:15- 28.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Categorial type logics", "authors": [ { "first": "Michael", "middle": [], "last": "Moortgat", "suffix": "" } ], "year": 1997, "venue": "Handbook of Logic and Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Moortgat. 1997. Categorial type logics. In Johan van Benthem and Alice ter Meulen, editors, Handbook of Logic and Language. Elsevier Science B.V.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Type Logical Grammar: Categorial Logic of Signs", "authors": [ { "first": "Glyn", "middle": [ "V" ], "last": "Morrill", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Glyn V. Morrill. 1994. Type Logical Grammar: Categorial Logic of Signs. Kluwer Academic Publishers, Dordrecht, Boston, London.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Information structure and the syntaxphonology interface", "authors": [ { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2000, "venue": "Linguistic Inquiry", "volume": "34", "issue": "", "pages": "649--689", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Steedman. 2000a. Information structure and the syntax- phonology interface. Linguistic Inquiry, 34:649-689.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "The Syntactic Process", "authors": [ { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Steedman. 2000b. The Syntactic Process. 
The MIT Press, Cambridge Mass.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Combining categorial grammar and unification", "authors": [ { "first": "Henk", "middle": [], "last": "Zeevat", "suffix": "" } ], "year": 1988, "venue": "Natural Language Parsing and Linguistic Theories", "volume": "", "issue": "", "pages": "202--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henk Zeevat. 1988. Combining categorial grammar and unifi- cation. In Uwe Reyle and Christian Rohrer, editors, Natural Language Parsing and Linguistic Theories, pages 202-229. Reidel, Dordrecht.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Derivation of Ed sleeps in the bed.", "num": null, "type_str": "figure", "uris": null }, "TABREF1": { "html": null, "num": null, "type_str": "table", "content": "
Ed sleeps ((24)) in the bed
", "text": "Lexical entries: Ed ⊢ n[inf=CB] : @d1(Ed); sleeps ⊢ s : @h2([NB]sleep ∧ [q]⟨ACT⟩(i ∧ p)) ∖ n[inf=q] : @i(p); in ⊢ (s : @i(p ∧ [r]⟨LOC⟩(j ∧ q)) ∖ s : @i(p)) / n[inf=r] : @j(q); the ⊢ n[inf=CB] : p / n[inf=NB] : p; bed ⊢ n[inf=NB] : @d3(bed). Derivation steps: Ed sleeps ⇒ s : @h2([NB]sleep ∧ [CB]⟨ACT⟩(d1 ∧ Ed)) (= (24)); the bed ⇒ n[inf=CB] : @d3(bed); in the bed ⇒ s : @i(p ∧ [CB]⟨LOC⟩(d3 ∧ bed)) ∖ s : @i(p); Ed sleeps in the bed ⇒ s : @h2([NB]sleep ∧ [CB]⟨ACT⟩(d1 ∧ Ed) ∧ [CB]⟨LOC⟩(d3 ∧ bed))" } } } }