{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:39:42.244327Z" }, "title": "Induction of Minimalist Grammars over Morphemes", "authors": [ { "first": "Marina", "middle": [], "last": "Ermolaeva", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Chicago", "location": {} }, "email": "mermolaeva@uchicago.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [], "body_text": [ { "text": "Syntactic literature tends towards a big-picture outlook, abstracting away from details such as full specifications of lexical items or features involved in derivations. However, a lower-level description is required to identify differences between competing analyses of the same phenomenon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For a concrete example, consider the double object construction (e.g. John gave Mary a book) in English. One option is to combine the internal arguments Mary and a book in a \"small clause\" or PP-like structure and then merge the verb with this constituent (e.g. Kayne 1984; Pesetsky 1996; Harley and Jung 2015) . 
The alternative is to have the verb select the arguments one by one, giving rise to VP-shells (Larson, 1988) and analyses inspired by them (Kawakami, 2018) .", "cite_spans": [ { "start": 262, "end": 273, "text": "Kayne 1984;", "ref_id": "BIBREF7" }, { "start": 274, "end": 288, "text": "Pesetsky 1996;", "ref_id": "BIBREF14" }, { "start": 289, "end": 310, "text": "Harley and Jung 2015)", "ref_id": "BIBREF3" }, { "start": 407, "end": 421, "text": "(Larson, 1988)", "ref_id": "BIBREF10" }, { "start": 452, "end": 468, "text": "(Kawakami, 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It is natural to ask whether it would be possible, assuming a sufficiently rich formalism compatible with the Minimalist framework, to choose the answer to this and similar questions based on some robust quantitative metric.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Minimalist grammars (Stabler, 1997) are a natural choice for this task. As a formalization of Chomsky's (1995) Minimalist Program, they are wellsuited for implementing analyses of syntactic phenomena, yet at the same time explicit regarding the assumptions about syntactic units and operations.", "cite_spans": [ { "start": 20, "end": 35, "text": "(Stabler, 1997)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Minimalist grammars", "sec_num": "2" }, { "text": "Minimalist grammars define lexical items (atomic expressions) as pairs consisting of a phonetic exponent and a sequence of syntactic features (1). The first feature of each lexical item is accessible to the operations, Merge and Move, that target and delete matching features of opposing polarities. Merge combines two expressions to build a new one, whereas Move is unary and attracts a sub-expression into the specifier of the main structure. 
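These mechanics are compact enough to sketch in code. The following Python fragment is purely illustrative (the triple-based representation, the treatment of movers, and the omission of left selectors and remnant movement are simplifications, not part of any published implementation):

```python
def merge(a, b):
    # a selects b; both are (exponent, feature list, movers).
    # '=x' takes category x as a complement (linearized to the right);
    # '=>x' additionally head-moves, concatenating the two exponents.
    pa, fa, ma = a
    pb, fb, mb = b
    sel = fa[0]
    if sel.startswith('=>'):
        cat, phon = sel[2:], pb + pa      # head movement: stem + affix
    elif sel.startswith('='):
        cat, phon = sel[1:], pa + ' ' + pb
    else:
        raise ValueError('first feature is not a selector')
    if fb[0] != cat:
        raise ValueError('category mismatch')
    movers = ma + mb
    if fb[1:]:
        # b has unchecked features (e.g. a licensee '-k'): store it to
        # Move later; it is pronounced in its landing site instead.
        movers = movers + [(pb, fb[1:])]
        if not sel.startswith('=>'):
            phon = pa
    return (phon, fa[1:], movers)

def move(a):
    # A licensor '+x' on the head attracts a stored mover carrying '-x'.
    pa, fa, movers = a
    if not fa[0].startswith('+'):
        raise ValueError('first feature is not a licensor')
    for i, (pm, fm) in enumerate(movers):
        if fm[0] == '-' + fa[0][1:]:
            rest = movers[:i] + movers[i + 1:]
            return (pm + ' ' + pa, fa[1:], rest)  # mover lands to the left
    raise ValueError('no matching mover')
```

For instance, merging laugh :: =d.v with Mary :: d.-k, then attaching a tense suffix via =>v, and finally applying Move derives Mary laugh-s of category t.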
Merge with head movement (HM) concatenates pronounced features of the heads of its arguments, providing a simple implementation of concatenative morphology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimalist grammars", "sec_num": "2" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Minimalist grammars", "sec_num": "2" }, { "text": "Negative polarity", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive polarity", "sec_num": null }, { "text": "Merge =x (right selector) x (category) =>x (HM selector) x= (left selector) Move +x (licensor) -x (licensee)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive polarity", "sec_num": null }, { "text": "Whichever expression contributed the positive feature becomes the head of the new expression. A complete sentence is an expression with no features left but the category t on its head. An example lexicon is given in (2), along with the derived tree of the sentence Mary laughs generated by it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive polarity", "sec_num": null }, { "text": "(2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive polarity", "sec_num": null }, { "text": "Mary :: d.-k s :: =>v.+k.t ed :: =>v.+k.t laugh :: =d.v jump :: =d.v > < < \u270f laugh =d.v s =>v.+k.t Mary d.-k 3 Learning from dependencies", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive polarity", "sec_num": null }, { "text": "There is a substantial body of work dedicated to learning grammars from unstructured strings; e.g. an overview in (Clark, 2017) . In particular, Yoshinaka (2011) presents an algorithm for learning certain subclasses of multiple context-free grammars. One can construct an equivalent Minimalist grammar for any multiple context-free grammar (Michaelis, 2001) . 
However, such a grammar would not make for a good starting point if our goal is to compare and evaluate proposals of theoretical syntax, as modern syntactic theory heavily relies on highly abstract concepts such as empty categories, not directly visible in the raw data.", "cite_spans": [ { "start": 114, "end": 127, "text": "(Clark, 2017)", "ref_id": "BIBREF1" }, { "start": 340, "end": 357, "text": "(Michaelis, 2001)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Positive polarity", "sec_num": null }, { "text": "On the other hand, Siskind (1996) suggests that rather than obtain syntactic structure from unstructured input, the learner can start the process of grounding, or mapping linguistic units to atoms of meaning, before learning syntax. Then it is plausible that the learner can identify relations formed by Merge and Move before knowing what lexical items or syntactic features are involved, which gives rise to the approach to learning proposed by Kobele et al. (2002) . For each sentence the learner is given ordered and directed dependencies between morphemes, with suffixes marked as such (3).", "cite_spans": [ { "start": 19, "end": 33, "text": "Siskind (1996)", "ref_id": "BIBREF16" }, { "start": 446, "end": 466, "text": "Kobele et al. (2002)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Positive polarity", "sec_num": null }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive polarity", "sec_num": null }, { "text": "Mary laugh -s 1 2 2 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive polarity", "sec_num": null }, { "text": "In this scenario, full lexical items (unique for each sentence) can be recovered from the dependencies. The learner's task is to determine which feature distinctions should be kept and which need to be collapsed, or unified. 
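The unification step can be sketched as a mechanical renaming pass over the lexicon. The fragment below is illustrative: the lexicon format and the renaming map are hypothetical, and only the polarity markers used in this paper are handled.

```python
def unify(lexicon, renaming):
    # Rename base features throughout a lexicon, preserving the polarity
    # markers '=>', '=', '+', '-', and collapse items that become identical.
    def rename(feat):
        for prefix in ('=>', '=', '+', '-'):
            if feat.startswith(prefix):
                base = feat[len(prefix):]
                return prefix + renaming.get(base, base)
        return renaming.get(feat, feat)
    renamed = {(phon, tuple(rename(f) for f in feats))
               for phon, feats in lexicon}
    return sorted(renamed)
```

Applied to six per-sentence items carrying fresh features f1 through f6, a map sending f1 and f4 to d, f2 and f5 to k, and f3 and f6 to v collapses the duplicated items, leaving four.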
The pressure for unification comes from a restriction on the number of homophonous lexical items (Kanazawa, 1995) .", "cite_spans": [ { "start": 322, "end": 338, "text": "(Kanazawa, 1995)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Positive polarity", "sec_num": null }, { "text": "As an illustration, consider the corpus of two sentences, Mary laugh -s and Mary laugh -ed. The learner assembles lexical items by assigning a fresh feature to each dependency, assuming that each data point is a complete sentence of category t. The ordering of dependencies determines whether each of them corresponds to Merge or Move. The initial lexicon (4) contains two copies each of Mary and laugh. The final step is to rename the corresponding features throughout the lexicon in order to collapse each pair of items into one. A familiar-looking lexicon will arise if f1 and f4 are mapped to d, f2 and f5 to k, and f3 and f6 to v. After feature unification, the grammar shrinks from six to four lexical items, which can still derive the input sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Positive polarity", "sec_num": null }, { "text": "This paper builds on (Kobele et al., 2002) , aiming to relax the segmentation requirement and let the algorithm learn the structure within complex words and any generalizations it would lead to.", "cite_spans": [ { "start": 21, "end": 42, "text": "(Kobele et al., 2002)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical item decomposition", "sec_num": "4" }, { "text": "Compare the lexicon in (2) with (5), which generates exactly the same set of sentences. Intuitively, (2) is better than (5), even though both have the same number of lexical items. 
It captures the similarities between different forms of the same verb and recognizes the verbs' internal structure: two correct generalizations that (5) misses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical item decomposition", "sec_num": "4" }, { "text": "(5) This difference can be quantified in a number of ways: naively, as the number of phonetic and/or syntactic units or the length of the grammar's encoding; or, taking into account the cost of encoding the corpus, as minimum description length (Rissanen, 1978).", "cite_spans": [ { "start": 235, "end": 251, "text": "(Rissanen, 1978)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical item decomposition", "sec_num": "4" }, { "text": "How can we transition from a grammar over words such as (5) to a grammar over morphemes (2)? In linguistic terms, the latter reanalyzes the verb as a complex head formed by head movement. This can be generalized to a decomposition operation (Kobele, 2018) that splits a lexical item's syntactic and phonetic features, producing a new item with a fresh category (6). The morphological operation generating w from the stem u and the suffix v corresponds, in the simplest case, to string concatenation.", "cite_spans": [ { "start": 237, "end": 251, "text": "(Kobele, 2018)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical item decomposition", "sec_num": "4" }, { "text": "(6) w :: α.x ⇒ u :: α.y, v :: =>y.x, where y is a fresh category and w is the morphological combination of u and v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical item decomposition", "sec_num": "4" }, { "text": "If syntactic decomposition is not accompanied by splitting the phonological material, one of the new lexical items will be an empty functional head. 
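In the concatenative case, the decomposition in (6) is easy to state as code. In the sketch below the split points and the fresh category name are supplied by hand, whereas the learner would have to search for them:

```python
def decompose(item, feat_split, phon_split, fresh):
    # Split one lexical item into a stem and a head-moving affix.
    # The first feat_split features stay on the stem, which receives the
    # fresh category; the remaining features go to the affix, which
    # selects the stem via '=>'. The exponent is cut at phon_split.
    w, feats = item
    u, v = w[:phon_split], w[phon_split:]
    stem = (u, feats[:feat_split] + [fresh])
    affix = (v, ['=>' + fresh] + feats[feat_split:])
    return stem, affix
```

For example, cutting laughs :: =d.+k.t at the stem boundary yields laugh :: =d.V and s :: =>V.+k.t; setting phon_split to the full length of the exponent leaves all the phonology on the stem, so the affix comes out as an empty functional head.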
Otherwise, the algorithm has to construct a morphological rule by searching for phonological similarities across the lexicon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexical item decomposition", "sec_num": "4" }, { "text": "Concatenative morphology has been shown to be successfully learnable in an unsupervised scenario (Goldsmith, 2001) , with a possibility of using the results to infer the syntactic category of words (Hu et al., 2005) ; the problem of irregular and non-concatenative patterns (such as sings vs. sang) is also addressed in the literature (e.g. Lee and Goldsmith 2014). Thus, in our case the learner has access to two separate sources of information -syntactic features and phonological patterns -to base its decisions on.", "cite_spans": [ { "start": 97, "end": 114, "text": "(Goldsmith, 2001)", "ref_id": "BIBREF2" }, { "start": 198, "end": 215, "text": "(Hu et al., 2005)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical item decomposition", "sec_num": "4" }, { "text": "Multiple lexical items sharing a sub-sequence of syntactic features can be decomposed simultaneously, factoring out the shared features. The pressure to do this comes from a reduced cost in features; replacing repeating sequences is a well-known compression technique (cf. Nevill-Manning et al. 1994) .", "cite_spans": [ { "start": 273, "end": 300, "text": "Nevill-Manning et al. 1994)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Lexical item decomposition", "sec_num": "4" }, { "text": "The following example shows how a naive wordbased grammar can be transformed into a linguistically motivated grammar over morphemes via decomposition and feature unification. Let the learner start with dependency structures (over non-segmented words) for the following eight sentences: Merge dependencies in this lexicon can be conveniently visualized as a directed graph. 
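Such a graph can be computed mechanically from the lexicon. The sketch below is illustrative and assumes only right selectors (=x) and head-movement selectors (=>x); left selectors would need extra handling.

```python
def merge_graph(lexicon):
    # Each lexical item becomes an edge from the category of its
    # complement (the first phrase it selects; None for items that
    # select nothing) to the item's own category feature.
    edges = []
    for phon, feats in lexicon:
        selectors = [f for f in feats if f.startswith('=')]
        category = next(f for f in feats if f[0] not in '=+-')
        # str.lstrip removes the characters '=' and '>', exposing the base
        source = selectors[0].lstrip('=>') if selectors else None
        edges.append((source, category, phon))
    return edges
```

On the lexicon in (8), for instance, be contributes an edge from g to v, will an edge from v to t, and Mary an edge into d with no source.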
In (9), vertices are category features; each edge corresponds to a lexical item and connects the category of its complement (the first phrase it selects) to the item's own category. We begin by decomposing lexical verbs, producing the lexicon in (10). The three lexical items laughs, laughing, and laugh are a valid target for decomposition; and so are jumps, jumping, and jump. Both decompositions are motivated both phonologically (factoring out a common prefix) and syntactically (splitting three feature bundles starting with =d).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Towards a grammar over morphemes", "sec_num": "5" }, { "text": "(7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Towards a grammar over morphemes", "sec_num": "5" }, { "text": "(10) The next step targets another repeated sequence of syntactic features: +k.t. This essentially creates a dedicated Tense projection, which hosts the surface position of the subject (12). At this point, concatenation is no longer sufficient for the morphological rules, highlighting the need for a richer theory of morphology. This grammar still contains two copies of be. While they could be collapsed by unifying v and x, this move would cause the grammar to overgenerate, producing, for example, the set of ungrammatical sentences Mary (will) + be laughing. However, adding an edge (empty head) from v to x would make two of these items redundant without generating any unwanted sentences (13). This move can be thought of as decomposing be :: =>g.z and ε :: =z.x, where z is a fresh feature, and then unifying z with v. The same is applicable to ε :: =V.x and ε :: =V.v. We have shown how a Minimalist grammar can be compressed in a way compatible with linguistic theory through repeated application of lexical item decomposition and feature unification. 
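The compression achieved along the way can be checked against a naive size metric of the kind mentioned in section 4. The sketch below simply counts phonological segments plus syntactic features; an MDL-style measure would additionally price the corpus.

```python
def grammar_size(lexicon):
    # Naive cost: one unit per phonological segment and per feature.
    return sum(len(phon) + len(feats) for phon, feats in lexicon)

# Word-based grammar (5) versus the morpheme-based grammar (2).
word_lex = [('Mary', ['d', '-k']), ('laughs', ['=d', '+k', 't']),
            ('laughed', ['=d', '+k', 't']), ('jumps', ['=d', '+k', 't']),
            ('jumped', ['=d', '+k', 't'])]
morpheme_lex = [('Mary', ['d', '-k']), ('s', ['=>v', '+k', 't']),
                ('ed', ['=>v', '+k', 't']), ('laugh', ['=d', 'v']),
                ('jump', ['=d', 'v'])]
```

Under this metric the morpheme grammar is strictly cheaper than the word grammar, matching the intuition that (2) is better than (5).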
Together they offer a principled way to identify repeating patterns in the lexicon, instantiate them as new lexical items, and collapse any emerging duplicates. Our current work in progress involves building a learning algorithm for syntax with these two operations at its core. This approach would make it possible to derive (potentially empty) functional heads, producing linguistically motivated generalizations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Towards a grammar over morphemes", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Minimalist Program", "authors": [ { "first": "Noam", "middle": [], "last": "Chomsky", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Chomsky. 1995. The Minimalist Program. MIT Press.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Computational learning of syntax", "authors": [ { "first": "Alexander", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2017, "venue": "Annual Review of Linguistics", "volume": "3", "issue": "", "pages": "107--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Clark. 2017. Computational learning of syntax. Annual Review of Linguistics, 3:107-123.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Unsupervised learning of the morphology of a natural language", "authors": [ { "first": "John", "middle": [], "last": "Goldsmith", "suffix": "" } ], "year": 2001, "venue": "Computational linguistics", "volume": "27", "issue": "2", "pages": "153--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. 
Computational linguistics, 27(2):153-198.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "In support of the p HAVE analysis of the double object construction", "authors": [ { "first": "Heidi", "middle": [], "last": "Harley", "suffix": "" }, { "first": "Hyun", "middle": [ "Kyoung" ], "last": "Jung", "suffix": "" } ], "year": 2015, "venue": "Linguistic inquiry", "volume": "46", "issue": "4", "pages": "703--730", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heidi Harley and Hyun Kyoung Jung. 2015. In support of the p HAVE analysis of the double object construction. Linguistic inquiry, 46(4):703-730.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Using morphology and syntax together in unsupervised learning", "authors": [ { "first": "Yu", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Irina", "middle": [], "last": "Matveeva", "suffix": "" }, { "first": "John", "middle": [], "last": "Goldsmith", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Sprague", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Workshop on Psychocomputational Models of Human Language Acquisition", "volume": "", "issue": "", "pages": "20--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Hu, Irina Matveeva, John Goldsmith, and Colin Sprague. 2005. Using morphology and syntax together in unsupervised learning. In Proceedings of the Workshop on Psychocomputational Models of Human Language Acquisition, pages 20-27. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learnable Classes of Categorial Grammars", "authors": [ { "first": "Makoto", "middle": [], "last": "Kanazawa", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Makoto Kanazawa. 1995. Learnable Classes of Categorial Grammars. Ph.D. 
thesis, Stanford University.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Double object constructions: Against the small clause analysis", "authors": [ { "first": "Masahiro", "middle": [], "last": "Kawakami", "suffix": "" } ], "year": 2018, "venue": "Journal of Humanities and Social Sciences", "volume": "45", "issue": "", "pages": "209--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masahiro Kawakami. 2018. Double object constructions: Against the small clause analysis. Journal of Humanities and Social Sciences, 45:209-226.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Connectedness and Binary Branching", "authors": [ { "first": "Richard", "middle": [ "S" ], "last": "Kayne", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard S. Kayne. 1984. Connectedness and Binary Branching. Foris, Dordrecht.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Lexical decomposition. Computational Syntax lecture notes", "authors": [ { "first": "Gregory", "middle": [ "M" ], "last": "Kobele", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory M. Kobele. 2018. Lexical decomposition. Computational Syntax lecture notes.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning mirror theory", "authors": [ { "first": "Gregory", "middle": [ "M" ], "last": "Kobele", "suffix": "" }, { "first": "Travis", "middle": [], "last": "Collier", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Edward", "middle": [ "P" ], "last": "Stabler", "suffix": "" } ], "year": 2002, "venue": "Proceedings of TAG+ 6", "volume": "", "issue": "", "pages": "66--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory M. Kobele, Travis Collier, Charles Taylor, and Edward P. Stabler. 2002. Learning mirror theory. 
In Proceedings of TAG+ 6, pages 66-73.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On the double object construction", "authors": [ { "first": "", "middle": [], "last": "Richard K Larson", "suffix": "" } ], "year": 1988, "venue": "Linguistic inquiry", "volume": "19", "issue": "3", "pages": "335--391", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard K Larson. 1988. On the double object construction. Linguistic inquiry, 19(3):335-391.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Automatic morphological alignment and clustering", "authors": [ { "first": "Jackson", "middle": [], "last": "Lee", "suffix": "" }, { "first": "John", "middle": [], "last": "Goldsmith", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jackson Lee and John Goldsmith. 2014. Automatic morphological alignment and clustering. Technical report TR-2014-07, Department of Computer Science, University of Chicago.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "On formal properties of minimalist grammars", "authors": [ { "first": "Jens", "middle": [], "last": "Michaelis", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jens Michaelis. 2001. On formal properties of minimalist grammars. Ph.D. 
thesis, University of Potsdam.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Compression by induction of hierarchical grammars", "authors": [ { "first": "", "middle": [], "last": "Craig G Nevill-Manning", "suffix": "" }, { "first": "H", "middle": [], "last": "Ian", "suffix": "" }, { "first": "David", "middle": [ "L" ], "last": "Witten", "suffix": "" }, { "first": "", "middle": [], "last": "Maulsby", "suffix": "" } ], "year": 1994, "venue": "Proceedings of DCC'94", "volume": "", "issue": "", "pages": "244--253", "other_ids": {}, "num": null, "urls": [], "raw_text": "Craig G Nevill-Manning, Ian H Witten, and David L Maulsby. 1994. Compression by induction of hierarchical grammars. In Proceedings of DCC'94, pages 244-253.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Zero syntax: Experiencers and cascades", "authors": [ { "first": "David", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Pesetsky", "middle": [], "last": "", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Michael Pesetsky. 1996. Zero syntax: Experiencers and cascades. MIT Press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Modeling by shortest data description", "authors": [ { "first": "Jorma", "middle": [], "last": "Rissanen", "suffix": "" } ], "year": 1978, "venue": "Automatica", "volume": "14", "issue": "5", "pages": "465--471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jorma Rissanen. 1978. Modeling by shortest data description. 
Automatica, 14(5):465-471.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A computational study of cross-situational techniques for learning word-to-meaning mappings", "authors": [ { "first": "Jeffrey", "middle": [ "Mark" ], "last": "Siskind", "suffix": "" } ], "year": 1996, "venue": "Cognition", "volume": "", "issue": "", "pages": "39--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Mark Siskind. 1996. A computational study of cross-situational techniques for learning word-to-meaning mappings. Cognition, 61(1-2):39-91.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Derivational minimalism", "authors": [ { "first": "Edward", "middle": [ "P" ], "last": "Stabler", "suffix": "" } ], "year": 1997, "venue": "Selected Papers from LACL '96", "volume": "", "issue": "", "pages": "68--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward P. Stabler. 1997. Derivational minimalism. In Christian Retor\u00e9, editor, Selected Papers from LACL '96, pages 68-95. Springer Berlin Heidelberg.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Efficient learning of multiple context-free languages with multidimensional substitutability from positive data", "authors": [ { "first": "Ryo", "middle": [], "last": "Yoshinaka", "suffix": "" } ], "year": 2011, "venue": "Theoretical Computer Science", "volume": "412", "issue": "19", "pages": "1821--1831", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryo Yoshinaka. 2011. Efficient learning of multiple context-free languages with multidimensional substitutability from positive data. 
Theoretical Computer Science, 412(19):1821-1831.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "type_str": "figure", "text": "Mary :: f1.-f2 laugh :: =f1.f3 -s :: =>f3.+f2.t Mary :: f4.-f5 laugh :: =f4.f6 -ed :: =>f6.+f5.t" }, "FIGREF1": { "num": null, "uris": null, "type_str": "figure", "text": "Mary :: d.-k laughs :: =d.+k.t laughed :: =d.+k.t jumps :: =d.+k.t jumped :: =d.+k.t" }, "FIGREF2": { "num": null, "uris": null, "type_str": "figure", "text": "this data set, the algorithm discussed in section 3 can extract the lexical items shown in (8) by collapsing homophonous items. (8) Mary :: d.-k is :: =g.+k.t will :: =v.+k.t be :: =g.v laughs :: =d.+k.t laughing :: =d.g laugh :: =d.v jumps :: =d.+k.t jumping :: =d.g jump :: =d.v" }, "FIGREF4": { "num": null, "uris": null, "type_str": "figure", "text": "This move created two copies each of s, ing, and ε. All of them can be conflated by unifying a single pair of features, V1 and V2," }, "FIGREF5": { "num": null, "uris": null, "type_str": "figure", "text": "Mary :: d.-k s :: =x.+k.t be :: =g.x will :: =v.x be :: =g.v laugh :: =d.V jump :: =d.V ε :: =V.x ing :: =V.g ε :: =V." } } } }