{ "paper_id": "P86-1006", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:13:02.200583Z" }, "title": "COMPUTATIONAL COMPLEXITY OF CURRENT GPSG THEORY", "authors": [ { "first": "Eric", "middle": [ "Sven" ], "last": "Ristad", "suffix": "", "affiliation": { "laboratory": "MIT Artificial Intelligence Lab Thinking Machines Corporation 545 Technology Square", "institution": "", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "An important goal of computational linguistics has been to use linguistic theory to guide the construction of computationally efficient real-world natural language processing systems. At first glance, generalized phrase structure grammar (GPSG) appears to be a blessing on two counts. First, the precise formalisms of GPSG might be a direct and fransparent guide for parser design and implementation. Second, since GPSG has weak context-free generative power and context-free languages can be parsed in O(n ~) by a wide range of algorithms, GPSG parsers would appear to run in polynomial time. This widely-assumed GPSG \"efficient parsability\" result is misleading: here we prove that the universal recognition problem for current GPSG theory is exponential-polynomial time hard, and assuredly intractable. The paper pinpoints sources of complexity (e.g. metarules and the theory of syntactic features) in the current GPSG theory and concludes with some linguistically and computationally motivated restrictions on GPSG.", "pdf_parse": { "paper_id": "P86-1006", "_pdf_hash": "", "abstract": [ { "text": "An important goal of computational linguistics has been to use linguistic theory to guide the construction of computationally efficient real-world natural language processing systems. At first glance, generalized phrase structure grammar (GPSG) appears to be a blessing on two counts. 
First, the precise formalisms of GPSG might be a direct and transparent guide for parser design and implementation. Second, since GPSG has weak context-free generative power and context-free languages can be parsed in O(n³) by a wide range of algorithms, GPSG parsers would appear to run in polynomial time. This widely-assumed GPSG \"efficient parsability\" result is misleading: here we prove that the universal recognition problem for current GPSG theory is exponential-polynomial time hard, and assuredly intractable. The paper pinpoints sources of complexity (e.g. metarules and the theory of syntactic features) in the current GPSG theory and concludes with some linguistically and computationally motivated restrictions on GPSG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "An important goal of computational linguistics has been to use linguistic theory to guide the construction of computationally efficient real-world natural language processing systems. Generalized Phrase Structure Grammar (GPSG) linguistic theory holds out considerable promise as an aid in this task. The precise formalisms of GPSG offer the prospect of a direct and transparent guide for parser design and implementation. Furthermore, and more importantly, GPSG's weak context-free generative power suggests an efficiency advantage for GPSG-based parsers. Since context-free languages can be parsed in polynomial time, it seems plausible that GPSGs can also be parsed in polynomial time. 
This would in turn seem to provide \"the beginnings of an explanation for the obvious, but largely ignored, fact that humans process the utterances they hear very rapidly (Gazdar, 1981:155).\"¹", "cite_spans": [ { "start": 859, "end": 867, "text": "(Gazdar,", "ref_id": null }, { "start": 868, "end": 878, "text": "1981:155)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper I argue that the expectations of the informal complexity argument from weak context-free generative power are not in fact met. I begin by examining the computational complexity of metarules and the feature system of GPSG and show that these systems can lead to computational intractability. ¹See also Joshi, \"Tree Adjoining Grammars\", p.226, in Natural Language Parsing (1985), ed. by D. Dowty, L. Karttunen, and A. Zwicky, Cambridge University Press: Cambridge, and \"Exceptions to the Rule\", Science News 128: 314-315.", "cite_spans": [ { "start": 380, "end": 386, "text": "(1985)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Next I prove that the universal recognition problem for current GPSG theory is Exp-Poly hard, and assuredly intractable.² That is, the problem of determining for an arbitrary GPSG G and input string x whether x is in the language L(G) generated by G, is exponential polynomial time hard. This result puts GPSG-Recognition in a complexity class occupied by few natural problems: GPSG-Recognition is harder than the traveling salesman problem, context-sensitive language recognition, or winning the game of Chess on an n x n board. The complexity classification shows that the fastest recognition algorithm for GPSGs must take exponential time or worse. One role of a computational analysis is to provide formal insights into linguistic theory. 
To this end, this paper pinpoints sources of complexity in the current GPSG theory and concludes with some linguistically and computationally motivated restrictions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A generalized phrase structure grammar contains five language-particular components --immediate dominance (ID) rules, metarules, linear precedence (LP) statements, feature co-occurrence restrictions (FCRs), and feature specification defaults (FSDs)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity of GPSG Components", "sec_num": "2" }, { "text": "and four universal components --a theory of syntactic features, principles of universal feature instantiation, principles of semantic interpretation, and formal relationships among various components of the grammar.³ Syntactic categories are partial functions from features to atomic feature values and syntactic categories. They encode subcategorization, agreement, unbounded dependency, and other significant syntactic information. The set K of syntactic categories is inductively specified by listing the set F of features, the set A of atomic feature values, the function ρ that defines the range of each atomic-valued feature, and a set R of restrictive predicates on categories (FCRs).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "The set of ID rules obtained by taking the finite closure of the metarules on the ID rules is mapped into local phrase structure trees, subject to principles of universal feature instantiation, FSDs, FCRs, and LP statements. Finally, local trees are assembled to form phrase structure trees, which are terminated by lexical elements. ²We use the universal problem to more accurately explore the power of a grammatical formalism (see section 3.1 below for support). 
Ristad(1985) has previously proven that the universal recognition problem for the GPSGs of Gazdar(1981) is NP-hard and likely to be intractable, even under severe metarule restrictions.", "cite_spans": [ { "start": 381, "end": 393, "text": "Ristad(1985)", "ref_id": "BIBREF7" }, { "start": 473, "end": 485, "text": "Gazdar(1981)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "³This work is based on current GPSG theory as presented in Gazdar et al. (1985), hereafter GKPS. The reader is urged to consult that work for a formal presentation and thorough exposition of current GPSG theory.", "cite_spans": [ { "start": 59, "end": 80, "text": "Gazdar et al. (1985)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "To identify sources of complexity in GPSG theory, we consider the isolated complexity of the finite metarule closure operation and the rule-to-tree mapping, using the finite closure membership and category membership problems, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "Informally, the finite closure membership problem is to determine if an ID rule is in the finite closure of a set of metarules M on a set of ID rules R. The category membership problem is to determine if a category C or a legal extension of C is in the set K of all categories based on the function ρ and the sets A, F and R. Note that both problems must be solved by any GPSG-based parsing system when computing the ID rule to local tree mapping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "The major results are that finite closure membership is NP-hard and category membership is PSPACE-hard. 
Barton(1985) has previously shown that the recognition problem for ID/LP grammars is NP-hard. The components of GPSG theory are computationally complex, as is the theory as a whole.", "cite_spans": [ { "start": 103, "end": 115, "text": "Barton(1985)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "Assumptions. In the following problem definitions, we allow syntactic categories to be based on arbitrary sets of features and feature values. In actuality, GPSG syntactic categories are based on fixed sets and a fixed function ρ. As such, the set K of permissible categories is finite, and a large table containing K could, in principle, be given.⁴ We (uncontroversially) generalize to arbitrary sets and an arbitrary function ρ to prevent such a solution while preserving GPSG's theory of syntactic features.⁵ No other modifications to the theory are made.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "An ambiguity in GKPS is how the FCRs actually apply to embedded categories.⁶ Following Ivan Sag (personal communication), I make the natural assumption here that FCRs apply to top-level and embedded categories equally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "⁴This suggestion is of no practical significance, because the actual number of GPSG syntactic categories is extremely large. The total number of categories, given the 25 atomic features and 4 category-valued features, is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "|K| = |K′| ≥ 3²⁵(1 + 3²⁵)⁶⁴", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "> 3¹⁶²⁵ > 10⁷⁷⁵. See page 10 for details. 
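The magnitude claimed in this footnote can be sanity-checked with a short computation. The recurrence below is an illustrative simplification, not GKPS's exact category definition: it merely iterates the idea that a category has a three-valued atomic features (absent, +, -) and up to b embeddable category-valued features per nesting level, using the figures a = 25 and b = 4 cited in the text.

```python
# Back-of-the-envelope category count (a simplification, not the exact
# GKPS formula): with a three-way atomic features and b category-valued
# features, each nesting depth d multiplies the count roughly as
#   K(0) = 3**a,   K(d) = 3**a * (1 + K(d-1))**b
a, b = 25, 4          # figures given in the text for GKPS
k = 3**a
for _ in range(3):    # a few levels of category nesting
    k = 3**a * (1 + k)**b
print(len(str(k)))    # 1014: about a thousand decimal digits already
```

Even this crude recurrence makes the point of the footnote: the category space is far too large to tabulate, so no precomputed table of K is of practical use.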
Many of these categories will be linguistically meaningless, but all GPSGs will generate all of them and then filter some out in consideration of FCRs, FSDs, universal feature instantiation, and the other admissible local trees and lexical entries in the GPSG. While the FCRs in some grammars may reduce the number of categories, FCRs are a language-particular component of the grammar. The vast number of categories cited above is inherent in the GPSG framework.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "⁵Our goal is to identify sources of complexity in GPSG theory. The generalization to arbitrary sets allows a fine-grained study of one component of GPSG theory (the theory of syntactic features) with the tools of computational complexity theory. Similarly, the chess board is uncontroversially generalized to size n × n in order to study the computational complexity of chess.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "⁶A category C that is defined for a feature f, f ∈ (F − Atom) ∩ DOM(C) (e.g. f = SLASH), contains an embedded category C′, where C(f) = C′. GKPS does not explain whether FCRs must be true of C′ as well as C.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "--", "sec_num": null }, { "text": "The complete set of ID rules in a GPSG is the maximal set that can be arrived at by taking each metarule and applying it to the set of rules that have not themselves arisen as a result of the application of that metarule. 
This maximal set is called the finite closure (FC) of a set R of lexical ID rules under a set M of metarules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metarules", "sec_num": "2.1" }, { "text": "The cleanest possible complexity proof for metarule finite closure would fix the GPSG (with the exception of metarules) for a given problem, and then construct metarules dependent on the problem instance that is being reduced. Unfortunately, metarules cannot be cleanly removed from the GPSG system. Metarules take ID rules as input, and produce other ID rules as their output. If we were to separate metarules from their inputs and outputs, there would be nothing left to study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metarules", "sec_num": "2.1" }, { "text": "The best complexity proof for metarules, then, would fix the GPSG modulo the metarules and their input. We ensure the input is not inadvertently performing some computation by requiring the one ID rule R allowed in the reduction to be fully specified, with only one 0-level category on the left-hand side and one unanalyzable terminal symbol on the right-hand side. Furthermore, no FCRs, FSDs, or principles of universal feature instantiation are allowed to apply. These are exceedingly severe constraints. The ID rules generated by this formal system will be the finite closure of the lone ID rule R under the set M of metarules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metarules", "sec_num": "2.1" }, { "text": "The set of ID rules consists of the one ID rule R, whose mother category represents the formula variables and clauses, and a set of metarules M s.t. an extension of the ID rule A is in the finite closure of M over R iff F is satisfiable. 
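The blow-up that drives this reduction can be sketched concretely. The code below is an illustrative model, not GKPS's formalism: ID-rule mothers are modeled as frozensets of (feature, value) pairs, each metarule optionally turns one variable feature on, and finite closure applies every metarule to every rule except its own output. The helper names (finite_closure, make_metarule) are hypothetical.

```python
# Illustrative sketch (not GKPS's formal definition): finite closure
# applies every metarule to every rule not already produced by that
# same metarule, until no new ID rules appear.

def finite_closure(rules, metarules):
    closure = set(rules)
    frontier = set(rules)
    while frontier:
        new_rules = set()
        for m in metarules:
            for r in frontier:
                out = m(r)
                if out is not None and out not in closure:
                    new_rules.add(out)
        closure |= new_rules
        frontier = new_rules
    return closure

# k metarules, each optionally setting one feature yi to 1, mirror the
# truth-assignment metarules of the reduction: one seed rule yields
# 2**k rules in the closure.
def make_metarule(i):
    def m(rule):
        if ('y%d' % i, 1) in rule:
            return None          # a metarule does not reapply to its own output
        return rule | frozenset([('y%d' % i, 1)])
    return m

k = 10
seed = frozenset([('y%d' % i, 0) for i in range(k)])
fc = finite_closure({seed}, [make_metarule(i) for i in range(k)])
print(len(fc))   # 2**10 = 1024 rules from a single ID rule
```

Each metarule at most doubles the rule set, so chaining k independent metarules yields 2^k rules, which is the exponential growth exploited below.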
The metarules generate possible truth assignments for the formula variables, and then compute the truth value of F in the context of those truth assignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metarules", "sec_num": "2.1" }, { "text": "Let w be the string of formula literals in F, and let wi denote the ith symbol in the string w. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Metarules", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "∀i, 1 ≤ i ≤ m: {[yi 0], [STAGE 1]} → W, {[yi 1], [STAGE 1]} → W", "eq_num": "(1)" } ], "section": "The ID rules R,A", "sec_num": "1." }, { "text": "(b) one metarule to stop the assignment generation process", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The ID rules R,A", "sec_num": "1." }, { "text": "{[STAGE 1]} → W (2) {[STAGE 2]} → W (c) |w| metarules to verify assignments ∀i, j, k, 1 (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The ID rules R,A", "sec_num": "1." }, { "text": "The reduction constructs O(|w|) metarules of size log(|w|), and clearly may be performed in polynomial time: the reduction time is essentially the number of symbols needed to write the GPSG down. Note that the strict finite closure membership problem is also NP-hard. One need only add a polynomial number of metarules to \"change\" the feature values of the mother node C to some canonical value when C(STAGE) = 3 --all 0, for example, with the exception of STAGE. Let F = {[yi 0] : 1 ≤ i ≤ m}. Q.E.D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The ID rules R,A", "sec_num": "1." }, { "text": "The major source of intractability is the finite closure operation itself. Informally, each metarule can more than double the number of ID rules, hence by chaining metarules (i.e. 
by using the output of one metarule as the input of the next metarule) finite closure can increase the number of ID rules exponentially.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The ID rules R,A", "sec_num": "1." }, { "text": "Here we show that the complex feature system employed by GPSG leads to computational intractability. The underlying insight for the following complexity proof is the almost direct equivalence between Alternating Turing Machines (ATMs) and syntactic categories in GPSG. The nodes of an ATM computation correspond to 0-level syntactic categories, and the ATM computation tree corresponds to a full, n-level syntactic category. The finite feature closure restriction on categories, which limits the depth of category nesting, will limit the depth of the corresponding ATM computation tree. Finite feature closure constrains us to specifying (at most) a polynomially deep, polynomially branching tree in polynomial time. This is exactly equivalent to a polynomial time ATM computation, and by Chandra and Stockmeyer(1976) , also equivalent to a deterministic polynomial space-bounded Turing Machine computation.", "cite_spans": [ { "start": 789, "end": 817, "text": "Chandra and Stockmeyer(1976)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "As a consequence of the above insight, one would expect the GPSG Category-Membership problem to be PSPACE-hard. 
The actual proof is considerably simpler when framed as a reduction from the Quantified Boolean Formula (QBF) problem, a known PSPACE-complete problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "Let a specification of K be the arbitrary sets of features F, atomic features Atom, atomic feature values A, and feature co-occurrence restrictions R and let ρ be an arbitrary function, all equivalent to those defined in chapter 2 of GKPS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "The category membership problem is: Given a category C and a specification of a set K of syntactic categories, determine if ∃C′ s.t. C′ is an extension of C and C′ ∈ K.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "The QBF problem is: given a quantified boolean formula Ω = Q1y1Q2y2...QmymF(y1, y2, ..., ym), determine whether Ω is true. Given such a formula Ω, we construct an instance P of the Category-Membership problem in polynomial time, such that Ω ∈ QBF if and only if P is true.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "Consider the QBF as a strictly balanced binary tree, where the ith quantifier Qi represents pairs of subtrees < Tt, Tf > such that (1) Tt and Tf each immediately dominate pairs of subtrees representing the quantifiers Qi+1 ... Qm, and (2) the ith variable yi is true in Tt and false in Tf. 
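The quantifier-tree evaluation described here can be mirrored directly in a few lines. The sketch below is illustrative only (names like eval_qbf are hypothetical): quantifiers are consumed left to right, each one expanding into a false daughter and a true daughter, a leaf evaluates the instantiated formula F, and an internal node is true if either (for an existential quantifier) or both (for a universal quantifier) daughters are true.

```python
# Minimal sketch of QBF evaluated as a strictly balanced binary tree:
# each quantifier level doubles the tree, and truth labels propagate
# from the leaves back to the root.

def eval_qbf(quantifiers, formula, assignment=()):
    if not quantifiers:                 # leaf: instantiated formula F
        return formula(assignment)
    q, rest = quantifiers[0], quantifiers[1:]
    daughters = [eval_qbf(rest, formula, assignment + (v,))
                 for v in (False, True)]
    if q == 'exists':                   # true if either daughter is true
        return daughters[0] or daughters[1]
    return daughters[0] and daughters[1]  # 'forall': both daughters true

# Example: forall y1 exists y2 (y1 != y2) is true.
print(eval_qbf(('forall', 'exists'), lambda a: a[0] != a[1]))   # True
```

The reduction encodes exactly this tree in a syntactic category, with FCRs doing the work of the recursive truth labeling.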
All nodes at level i in the whole tree correspond to the quantifier Qi. The leaves of the tree are different instantiations of the formula F, corresponding to the quantifier-determined truth assignments to the m variables. A leaf node is labeled true if the instantiated formula F that it represents is true. An internal node in the tree at level i is labeled true if 1. Qi = \"∃\" and either daughter is labeled true, or 2. Qi = \"∀\" and both daughters are labeled true.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "Otherwise, the node is labeled false.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "Similarly, categories can be understood as trees, where the features in the domain of a category constitute a node in the tree, and a category C immediately dominates all categories C′ such that ∃f ∈ ((F − Atom) ∩ DOM(C))[C(f) = C′].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "In the QBF reduction, the atomic-valued features are used to represent the m variables, the clauses of F, the quantifier the category represents, and the truth label of the category. The category-valued features represent the quantifiers --two category-valued features qk, q′k represent the subtree pairs < Tt, Tf > for the quantifier Qk. FCRs maintain quantifier-imposed variable truth assignments \"down the tree\" and calculate the truth labeling of all leaves, according to F, and internal nodes, according to quantifier meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "Details. 
Let w be the string of formula literals in F, and wi denote the ith symbol in the string w. We specify a set K of permissible categories based on A, F, ρ, and the set of FCRs R s.t. the category [[LABEL 1]] or an extension of it is an element of K iff Ω is true.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "First we define the set of possible 0-level categories, which encode the formula F and truth assignments to the formula variables. The feature wi represents the formula literal wi in w, yj represents the variable yj in Ω, and ci represents the truth value of the ith clause in F. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "[LEVEL m+1] & [ci 0] ⊃ [LABEL 0]; [LEVEL m+1] & [c1 1] & [c2 1] & ... & [c|w|/3 1] ⊃ [LABEL 1]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "The reduction constructs O(|w|) features and O(m²) FCRs of size O(log m) in a simple manner, and consequently may be seen to be polynomial time. Q.E.D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "The primary source of intractability in the theory of syntactic features is the large number of possible syntactic categories (arising from finite feature closure) in combination with the computational power of feature co-occurrence restrictions.⁸ FCRs of the \"disjunctive consequence\" form", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "[f v] ⊃ [f1 v1] ∨ ... 
∨ [fn vn]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "compute the direct analogue of Satisfiability: when used in conjunction with other FCRs, the GPSG effectively must try all 2ⁿ feature-value combinations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Theory of Syntactic Features", "sec_num": "2.2" }, { "text": "Two isolated membership problems for GPSG's component formal devices were considered above in an attempt to isolate sources of complexity in GPSG theory. In this section the recognition problem (RP) for GPSG theory as a whole is considered. I begin by arguing that the linguistically and computationally relevant recognition problem is the universal recognition problem, as opposed to the fixed language recognition problem. I then show that the former problem is exponential-polynomial (Exp-Poly) time-hard. (Σ 1/k! converges to e ≈ 2.7 very rapidly, and a, b = O(|G|); a = 25, b = 4 in GKPS. The smallest category in K will be 1 symbol (the null set), and the largest, maximally-specified category will be of symbol-size log |K| = O(a · b!).)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity of GPSG-Recognition", "sec_num": null }, { "text": "The universal recognition problem is: given a grammar G and input string x, is x ∈ L(G)? Alternately, the recognition problem for a class of grammars may be defined as the family of questions in one unknown. This fixed language recognition problem is: given an input string x, is x ∈ L for some fixed language L? For the fixed language RP, it does not matter which grammar is chosen to generate L --typically, the fastest grammar is picked.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining the Recognition Problem", "sec_num": "3.1" }, { "text": "It seems reasonably clear that the universal RP is of greater linguistic and engineering interest than the fixed language RP. 
The grammars licensed by linguistic theory assign structural descriptions to utterances, which are used to query and update databases, interpreted semantically, translated into other human languages, and so on. The universal recognition problem --unlike the fixed language problem --determines membership with respect to a grammar, and therefore more accurately models the parsing problem, which must use a grammar to assign structural descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining the Recognition Problem", "sec_num": "3.1" }, { "text": "The universal RP also bears most directly on issues of natural language acquisition. The language learner evidently possesses a mechanism for selecting grammars from the class ℒ of learnable natural language grammars on the basis of linguistic inputs. The more fundamental question for linguistic theory, then, is \"what is the recognition complexity of the class ℒ?\". If this problem should prove computationally intractable, then the (potential) tractability of the problem for each language generated by a G in the class is only a partial answer to the linguistic questions raised.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining the Recognition Problem", "sec_num": "3.1" }, { "text": "Finally, complexity considerations favor the universal RP. The goal of a complexity analysis is to characterize the amount of computational resources (e.g. time, space) needed to solve the problem in terms of all computationally relevant inputs on some standard machine model (typically, a multi-tape deterministic Turing machine). We know that both input string length and grammar size and structure affect the complexity of the recognition problem. Hence, excluding either input from complexity consideration would not advance our understanding. 
9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining the Recognition Problem", "sec_num": "3.1" }, { "text": "Linguistics and computer science are primarily interested in the universal recognition problem because both disciplines are concerned with the formal power of a family of grammars. Linguistic competence and performance must be considered in the larger context of efficient language acquisition, while computational considerations demand that the recognition problem be characterized in terms of both input string and grammar size. Excluding grammar size from complexity consideration in order SThis ~consider all relevant inputs ~ methodology is universally assumed in the formal language and computational complexity literature. For example, Hopcraft and Ullman(1979:139) define the context-free grammar recognition problem as: \"Given a CFG G = (V,T,P, $) and a string z in Y', is x in L(G)?.\". Garey and Johnson(1979) is a standard reference work in the field of computational complexity. All 10 automata and language recognition problems covered in the book (pp. 265-271) are universal, i.e. of the form \"Given an instance of a machine/grammar and an input, does the machine/grammar accept the input7 ~ The complexity of these recognition problems is alt#ays calculated in terms of grammar and input size.", "cite_spans": [ { "start": 796, "end": 819, "text": "Garey and Johnson(1979)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Defining the Recognition Problem", "sec_num": "3.1" }, { "text": "to argue that the recognition problem for a family of grammars is tractable is akin to fixing the size of the chess board in order to argue that winning the game of chess is tractable: neither claim advances our scientific understanding of chess or natural language. Let S(n) be a polynomial in n. 
Then, on input M, an S(n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Defining the Recognition Problem", "sec_num": "3.1" }, { "text": "space-bounded one-tape alternating Turing Machine (ATM), and string w, we construct a GPSG G in polynomial time such that w ∈ L(M) iff $0w11w22...wnn$n+1 ∈ L(G).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GPSG-Recognition", "sec_num": "3.2" }, { "text": "By Chandra and Stockmeyer(1976) ,", "cite_spans": [ { "start": 3, "end": 31, "text": "Chandra and Stockmeyer(1976)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "GPSG-Recognition", "sec_num": "3.2" }, { "text": "ASPACE(S(n)) = ∪c>0 DTIME(c^S(n))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ASPACE(S(n)) = ∪c>0 DTIME(c^S(n))", "sec_num": null }, { "text": "where ASPACE(S(n)) is the class of problems solvable in space S(n) on an ATM, and DTIME(F(n)) is the class of problems solvable in time F(n) on a deterministic Turing Machine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ASPACE(S(n)) = ∪c>0 DTIME(c^S(n))", "sec_num": null }, { "text": "As a consequence of this result and our following proof, we have the immediate result that GPSG-Recognition is DTIME(c^S(n))-hard, for all constants c, or Exp-Poly time-hard. An alternating Turing Machine is like a nondeterministic TM, except that some subset of its states will be referred to as universal states, and the remainder as existential states. A nondeterministic TM is an alternating TM with no universal states.¹⁰", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ASPACE(S(n)) = ∪c>0 DTIME(c^S(n))", "sec_num": null }, { "text": "The nodes of the ATM computation tree are represented by syntactic categories in K⁰ --one feature for every tape square, plus three features to encode the ATM tape head positions and the current state. 
The reduction is limited to specifying a polynomial number of features in polynomial time; since these features are used to encode the ATM tape, the reduction may only specify polynomial space-bounded ATM computations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ASPACE(S(n)) = ∪c>0 DTIME(c^S(n))", "sec_num": null }, { "text": "The ID rules encode the ATM NextM() relation, i.e. C → NextM(C) for a universal configuration C. The reduction constructs an ID rule for every combination of possible head position, machine state, and symbol on the scanned tape square. Principles of universal feature instantiation transfer the rest of the instantaneous description (i.e. contents of the tape) from mother to daughters in ID rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ASPACE(S(n)) = ∪c>0 DTIME(c^S(n))", "sec_num": null }, { "text": "¹⁰Our ATM definition is taken from Chandra and Stockmeyer(1976) , with the restriction that the work tapes are one-way infinite, instead of two-way infinite. Without loss of generality, we use a 1-tape ATM, so", "cite_spans": [ { "start": 35, "end": 63, "text": "Chandra and Stockmeyer(1976)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "ASPACE(S(n)) = ∪c>0 DTIME(c^S(n))", "sec_num": null }, { "text": "Let NextM(C) = {C0, C1, ..., Ck}. 
If C is a universal configuration, then we construct an ID rule of the form C → C0, C1, ..., Ck
t is either terminated or locally admissible from some r ∈ R.
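The universal/existential ID-rule construction above mirrors ATM alternation directly: a universal configuration requires every successor to accept (one ID rule with all of NextM(C) as daughters), while an existential configuration requires some successor to accept (k + 1 separate rules). A minimal sketch of that acceptance semantics, with hypothetical names (`successors`, `is_universal` stand in for NextM and U):

```python
# Sketch of the acceptance semantics the ID rules encode (illustrative
# names; a toy evaluator, not the reduction itself).

def accepts(config, successors, is_universal, is_accepting, depth=0, limit=50):
    """Universal configurations need ALL successors to accept;
    existential ones need SOME successor to accept."""
    if is_accepting(config):
        return True
    if depth >= limit:          # recursion guard, since this is only a sketch
        return False
    succ = successors(config)
    if not succ:
        return False            # halted without accepting
    branch = all if is_universal(config) else any
    return branch(accepts(c, successors, is_universal, is_accepting,
                          depth + 1, limit) for c in succ)

# Toy machine: state 0 is universal with successors {1, 2};
# 1 and 2 are existential; 3 is the accepting configuration.
succ = {0: [1, 2], 1: [3], 2: [3, 4], 3: [], 4: []}
assert accepts(0, lambda c: succ[c], lambda c: c == 0, lambda c: c == 3)
```

An admissible tree in the constructed grammar corresponds exactly to an accepting subtree of this evaluation.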
In the reduction we use HEAD features to encode the ATM tape, and thereby use the HFC to transfer the tape contents from one ATM configuration C (represented by the mother) to its immediate successors C0, ..., Ck (the head daughters). The configurations C, C0, ..., Ck have identical tapes, with the critical exception of one tape square. If the HFC enforced absolute agreement between the HEAD features of the mother and head daughters, we would be unable to simulate the PSPACE ATM computation in this manner.
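The loophole exploited here can be sketched in a few lines: HEAD feature values pass from mother to head daughter unless the daughter already carries a conflicting instantiated value. This is a deliberately simplified rendering of the HFC (the function name is hypothetical, and GKPS's actual convention is considerably more intricate):

```python
# Simplified sketch of the Head Feature Convention as the proof uses it:
# copy each of the mother's HEAD features onto the head daughter unless
# the daughter specifies that feature itself.

def hfc_instantiate(mother, daughter, head_features):
    """Return the daughter with the mother's non-conflicting HEAD
    feature values filled in."""
    out = dict(daughter)
    for f in head_features:
        if f in mother and f not in daughter:
            out[f] = mother[f]
    return out

# The mother encodes a tape; the daughter rewrites only square r2, and
# the HFC fills in the untouched squares -- exactly the loophole above.
mother   = {"r1": "a", "r2": "b", "r3": "c", "STATE": "q"}
daughter = {"r2": "X", "STATE": "p"}
child = hfc_instantiate(mother, daughter, head_features={"r1", "r2", "r3"})
assert child == {"r1": "a", "r2": "X", "r3": "c", "STATE": "p"}
```

With absolute agreement (no "unless" clause), square r2 could never change value, and the tape update would be impossible.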
Crucially, grammar size affects recognition time in all known algorithms, and the only grammars directly usable by context-free parsers, i.e. with the same complexity as a CFG, are those composed of context-free productions with atomic nonterminal symbols. For GPSG, this is the set of admissible local trees, and this set is astronomical: O((3^{m·2^m})^{m^{2^m}+1}
it is not composed of context-free productions with unanalyz- 12 As we saw above, the metarule finite closure operation can increase the ID rule grammar size from |R| = O(|G|) to O(m^{2^m}) in a GPSG G of size m. We ignore the effects of ID/LP format on the number of admissible local trees here, and note that if we expanded out all admissible linear precedence possibilities in FC(M,R), the resultant 'ordered' ID rule grammar would be of size O(m^{2^{m^7}}). In the worst case, every symbol in FC(M,R) is underspecified, and every category in K extends every symbol in the FC(M,R) grammar: (3^{m·2^m})^{m^{2^m}} = O(3^{m·2^m·m^{2^m}}), i.e. astronomical. Ristad(1986) argues that the minimal set of admissible local trees in GKPS' GPSG for English is considerably smaller, yet still contains more than 10^{30} local trees.
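To get a feel for how fast this blow-up dominates, one can evaluate the finite-closure bound for small grammar sizes. The sketch below treats the O(m^{2^m}) bound (as read from footnote 12; constants are ignored) purely as an illustration of growth rate, not as a precise count:

```python
# Rough arithmetic on the finite-closure blow-up. The m**(2**m) exponent
# is taken from footnote 12 as reconstructed; this illustrates growth
# rate only, not an exact grammar size.

def finite_closure_size(m):
    """Symbol-size bound for FC(M, R) on a GPSG of size m."""
    return m ** (2 ** m)

sizes = {m: finite_closure_size(m) for m in (2, 3, 4)}
assert sizes[2] == 16
assert sizes[3] == 6561
assert sizes[4] == 4294967296      # already ~4.3e9 at m = 4

# Earley time scales with the SQUARE of grammar size (|G'|**2 * n**3),
# so even the m = 4 grammar contributes a 'constant' factor near 1.8e19.
```

This is why a "compiled" CFG, though weakly equivalent, is useless in practice: the derived grammar size enters the Earley bound as a multiplicative factor on every input.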
It is difficult to respond precisely to the claims made in Pullum(1985), since the abstract is (necessarily) brief and consists of assertions unsupported by factual documentation or clarifying assumptions. able nonterminal symbols. 15 These informal tractability arguments are a particular instance of the more general EP argument and are equally misleading.
FCRs, FSDs, LP statements, and principles of universal feature instantiation (all of which contribute to GPSG's intractability) must still apply to the rules of this expanded grammar.
If complexity theory classifies a problem as intractable, we learn that something more must be said to obtain tractability, and that any efficient compilation step, if it exists at all, must itself be costly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Power and Computational Complexity", "sec_num": "4.1" }, { "text": "17Note that the GPSG we constructed in the preceding reduction will actually accept any input x of length less than or equal to Iwl if and only if the ATM M accepts it using S(]wl) space. We prepare an input string $ for the GPSG by converting it to the string $0xl lx22.., xn nSr~-1 e.g. shades is accepted by the ATM if and only if the string $Oalb2a3d4e5e657 is accepted by the GPSG. Trivial changes in the grammar allows us to permute and \"spread\" the characters of \u2022 across an infinite class of strings in an unbounded number of ways, e.g. $O'~x~i'~2...~zll'yb...?~$a\u00f7l where each ~ is a string over an alphabet which is distinct from the ~i alphabet. Although the flexibility of this construction results in a more complicated GPSG, it argues powerfully against the existence of any efficient compilation procedure for GPSGs. Any efficient compilation procedure must perform more than an exponential polynomial amount of work (GPSG-Recognition takes at least Exp-Poly time) on at least an exponential number of inputs (all inputs that fit in the t w t space of the ATM's read-only tape). More importantly, the required compilation procedure will convert say exponential-polynomial time bounded Turing Machine into a polynomial*time TM for the class inputs whose membership can be determined within a arbitrary (fixed) exp-poly time bound. Simply listing the accepted inputs will not work because both the GPSG and TM may accept an infinite class of inputs. 
Such a compilation procedure would be extremely powerful.", "cite_spans": [ { "start": 545, "end": 574, "text": "$O'~x~i'~2...~zll'yb...?~$a\u00f7l", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Generative Power and Computational Complexity", "sec_num": "4.1" }, { "text": "lSNote that compilation illegitimately assumes that the compilation step", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Power and Computational Complexity", "sec_num": "4.1" }, { "text": "The major complexity result of this paper proves that the fastest algorithm for GPSG-Recognition must take more than exponential time. The immediately preceding section demonstrates exactly how a particular algorithm for GPSG-Recognition (the EP argument) comes to grief: weak context-free generative power does not ensure efficient parsability because a GPSG G is weakly equivalent to a very large CFG G ~, and CFG size affects recognition time. The rebuttal does not suggest that computational complexity arises from representational succinctness, either here or in general.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity and Succinctness", "sec_num": "4.2" }, { "text": "Complexity results characterize the amount of resources needed to solve instances of a problem, while succinctness results measure the space reduction gained by one representation over another, equivalent, representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity and Succinctness", "sec_num": "4.2" }, { "text": "There is no casual connection between computational complexity and representational succinctness, either in practice or principle. In practice, converting one grammar into a more succinct one can either increase or decrease the recognition cost. 
For example, converting an instance of context-free recognition (known to be polynomial time) into an instance of context-sensitive recognition (known to be PSPACE-complete and likely to be intractable) can significantly speed up recognition if the conversion decreases the size of the CFG logarithmically or better. Even more strangely, increasing ambiguity in a CFG can speed up recognition if the succinctness gain is large enough, or slow it down otherwise: unambiguous CFGs can be recognized in linear time, while ambiguous ones require cubic time.
The overall complexity of learning, testing, and speech must be considered. Compilation speeds up the speech component at the expense of greater complexity in the other two components. For this linguistic reason the compilation argument is suspect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity and Succinctness", "sec_num": "4.2" }, { "text": "X~A more extreme example of the unrelatedness of succinctness and complexity is the absolute succinctness with which the dense language ~\" may be represented --whether by a regular expression, CFG, or even Taring machine --yet members of E \u00b0 may be recognized in constant time (i.e. always accept).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity and Succinctness", "sec_num": "4.2" }, { "text": "Tractable problems may involve succinct or nonsuccinct representations, as may intractable problems. The reductions in this paper show that GPSGs are not merely succinct encodings of some context-free grammars; they are inherently complex grammars for some context-free languages. The heart of the matter is that GPSG's formal devices are computationally complex and can encode provably intractable problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity and Succinctness", "sec_num": "4.2" }, { "text": "Relevance of the Result", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.3", "sec_num": null }, { "text": "In this paper, we argued that there is nothing in the GPSG formal framework that guarantees computational tractability: proponents of GPSG must look elsewhere for an explanation of efficient parsability, if one is to be given at all. The crux of the matter is that the complex components of GPSG theory interact in intractable ways, and that weak context-free generative power does not guarantee tractability when grammar size is taken into account. 
A faithful implementation of the GPSG formalisms of GKPS will provably be intractable; the expectations computational linguists might have held in this regard are not fulfilled by current GPSG theory.
Unfortunately, there is insufficient space to discuss these proposed revisions here; the reader is referred to Ristad(1986) for a complete discussion.
\"Alternation,\" 17 th Annual Symposium on Foundations of Computer Science,: 98-108.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Unbounded Dependencies and Coordinate Structure", "authors": [ { "first": "G", "middle": [], "last": "Gazdar", "suffix": "" } ], "year": 1981, "venue": "Linguistic Inquiry", "volume": "12", "issue": "", "pages": "155--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gazdar, G. (1981). \"Unbounded Dependencies and Coordinate Structure,\" Linguistic Inquiry 12: 155-184.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Generalized Phrase Structure Grammar", "authors": [ { "first": "G", "middle": [], "last": "Gazdar", "suffix": "" }, { "first": "E", "middle": [], "last": "Klein", "suffix": "" }, { "first": "G", "middle": [], "last": "Pullum", "suffix": "" }, { "first": "I", "middle": [], "last": "Sag", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gazdar, G., E. Klein, G. Pullum, and I. Sag (1985). Gener- alized Phrase Structure Grammar. Oxford, England: Basil Blackwell.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Computers and Intractability", "authors": [ { "first": "M", "middle": [], "last": "Garey", "suffix": "" }, { "first": "D", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Garey, M, and D. Johnson (1979). Computers and Intractabil- ity. San Francisco: W.H. 
Freeman and Co.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Introduction to Automata Theory, Languages, and Computation", "authors": [ { "first": "J", "middle": [ "E" ], "last": "Hopcroft", "suffix": "" }, { "first": "J", "middle": [ "D" ], "last": "Ullman", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hopcroft, J.E., and J.D. Ullman (1979). Introduction to Au- tomata Theory, Languages, and Computation. Reading, MA: Addison-Wesley.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The Computational Tractability of GPSG", "authors": [ { "first": "G", "middle": [ "K" ], "last": "Pullum", "suffix": "" } ], "year": 1985, "venue": "Abstracts of the 60th Annual Meeting of the Linguistics Society of America", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pullum, G.K. (1985). \"The Computational Tractability of GPSG,\" Abstracts of the 60th Annual Meeting of the Linguistics So- ciety of America, Seattle, WA: 36.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "GPSG-Recognition is NP-hard", "authors": [ { "first": "E", "middle": [ "S" ], "last": "Ristad", "suffix": "" } ], "year": 1985, "venue": "M.I.T. Artificial Intelligence Laboratory", "volume": "837", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ristad, E.S. (1985). \"GPSG-Recognition is NP-hard,\" A.I. Memo No. 837, Cambridge, MA: M.I.T. Artificial Intelli- gence Laboratory.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Complexity of Linguistic Models: A Computational Analysis and Reconstruction of Generalized Phrase Structure Grammar", "authors": [ { "first": "E", "middle": [ "S" ], "last": "Ristad", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ristad, E.S. (1986). 
"Complexity of Linguistic Models: A Computational Analysis and Reconstruction of Generalized Phrase Structure Grammar," S.M. Thesis, MIT Department of Electrical Engineering and Computer Science. (In progress).
Assume that all atomic features are binary: a feature may be +, -, or undefined, and there are 3^a 0-level categories. The b category-valued features may each assume O(3^a) possible values in a 1-level category, so |K^1| = O(3^a (3^a)^b). More generally,
(a) definition of F, Atom, A
F: Atom = {STATE, HEAD0, HEAD1, A}
          ∪ {i : 0 ≤ i ≤ |w|+1}
          ∪ {r_j : 1 ≤ j ≤ S(|w|)}
A = Q ∪ Σ ∪ Γ; as defined earlier
(b) definition of ρ⁰
ρ⁰(A) = {1, 2, 3}
ρ⁰(STATE) = Q; the ATM state set
ρ⁰(HEAD0) = {i : 1 ≤ i ≤ |w|}
ρ⁰(HEAD1) = {j : 1 ≤ j ≤ S(|w|)}
∀f ∈ {i : 0 ≤ i ≤ |w|+1},
    ρ⁰(f) = Σ ∪ {$}; the ATM input alphabet
∀f ∈ {r_j : 1 ≤ j ≤ S(|w|)},
    ρ⁰(f) = Γ; the ATM tape alphabet
(c) definition of HEAD feature set
HEAD = {i : 0 ≤ i ≤ |w|+1} ∪ {r_j : 1 ≤ j ≤ S(|w|)}
(d) FCRs to ensure full specification of all categories except null ones.
∀f . f ∈ Atom, [STATE] ⊃ [f]
Let
Result0_M(i, a, d) =
    [[HEAD0 i+1], [i a], [A 1]]          if d = R
    [[HEAD0 i-1], [i a], [A 1]]          if d = L
Result1_M(j, c, p, d) =
    [[HEAD1 j+1], [r_j c], [STATE p]]    if d = R
    [[HEAD1 j-1], [r_j c], [STATE p]]    if d = L
Trans_M(q, a, b) = {⟨p, c, d1, d2⟩ : ⟨⟨q, a, b⟩, ⟨p, c, d1, d2⟩⟩ ∈ δ}
where
    a  is the read-only (R/O) tape symbol currently being scanned
    b  is the read-write (R/W) tape symbol currently being scanned
    d1 is the R/O tape direction
    d2 is the R/W tape direction

2. Grammatical rules
∀i, j, q, a, b : 1 ≤ i ≤ |w|, 1 ≤ j ≤ S(|w|), q ∈ Q, a ∈ Σ, b ∈ Γ,
if Trans_M(q, a, b) ≠ ∅, construct the following ID rules.
(a) if q ∈ U (universal state)
{[HEAD0 i], [i a], [HEAD1 j], [r_j b], [STATE q], [A 1]} →
    {Result0_M(i, a, d1_k) ∪ Result1_M(j, c_k, p_k, d2_k) :
        ⟨p_k, c_k, d1_k, d2_k⟩ ∈ Trans_M(q, a, b)}
where all categories on the RHS are heads.
(b) otherwise q ∈ Q - U (existential state)
∀⟨p_k, c_k, d1_k, d2_k⟩ ∈ Trans_M(q, a, b),
{[HEAD0 i], [i a], [HEAD1 j], [r_j b], [STATE q], [A 1]} →
    Result0_M(i, a, d1_k) ∪ Result1_M(j, c_k, p_k, d2_k)
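The Result0_M/Result1_M tape-update helpers used in these ID rules can be sketched as ordinary functions from head positions and symbols to partial feature bundles. This is a hypothetical Python rendering (dicts stand in for categories), not the paper's formal notation:

```python
# Sketch of the Result0/Result1 helpers from the ID-rule construction.
# Feature bundles are rendered as dicts; names are illustrative.

def result0(i, a, d):
    """Advance the read-only head from square i; d is 'R' or 'L'.
    Records the scanned input symbol a and the dummy feature A."""
    new_i = i + 1 if d == "R" else i - 1
    return {"HEAD0": new_i, i: a, "A": 1}

def result1(j, c, p, d):
    """Write c on work square r_j, move the R/W head, enter state p."""
    new_j = j + 1 if d == "R" else j - 1
    return {"HEAD1": new_j, ("r", j): c, "STATE": p}

# One daughter of an existential rule: merge the two partial bundles.
daughter = {**result0(3, "a", "R"), **result1(2, "X", "p", "L")}
assert daughter["HEAD0"] == 4 and daughter["HEAD1"] == 1
assert daughter[("r", 2)] == "X" and daughter["STATE"] == "p"
```

Everything not mentioned by these partial bundles (the untouched tape squares) is filled in by the Head Feature Convention, as discussed in the text.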
The GPSG G contains:
1. Feature definitions
", "type_str": "table", "num": null } } } }