{ "paper_id": "A94-1018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:13:43.334999Z" }, "title": "Yet Another Chart-Based Technique for Parsing Ill-Formed Input", "authors": [ { "first": "Tsuneaki", "middle": [], "last": "Kato", "suffix": "", "affiliation": { "laboratory": "NTT Information and Communication Systems Laboratories", "institution": "", "location": { "addrLine": "1-2356 Take, Yokosuka-shi", "postCode": "238-03", "settlement": "Kanagawa", "country": "JAPAN" } }, "email": "kato@nttnly.ntt.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A new chart-based technique for parsing ill-formed input is proposed. This can process sentences with unknown/misspelled words, omitted words or extraneous words. This generalized parsing strategy is, similar to Mellish's, based on an active chart parser, and shares the many advantages of Mellish's technique. It is based on pure syntactic knowledge, it is independent of all grammars, and it does not slow down the original parsing operation if there is no ill-formedness. However, unlike Mellish's technique, it doesn't employ any complicated heuristic parameters. There are two key points. First, instead of using a unified or interleaved process for finding errors and correcting them, we separate the initial error detection stage from the other stages and adopt a version of bi-directional parsing. This effectively prunes the search space. Second, it employs normal top-down parsing, in which each parsing state reflects the global context, instead of topdown chart parsing. This enables the technique to determine the global plausibility of candidates easily, based on an admissible A* search. The proposed strategy could enumerate all possible minimal-penalty solutions in just 4 times the time taken to parse the correct sentences.", "pdf_parse": { "paper_id": "A94-1018", "_pdf_hash": "", "abstract": [ { "text": "A new chart-based technique for parsing ill-formed input is proposed. This can process sentences with unknown/misspelled words, omitted words or extraneous words. This generalized parsing strategy is, similar to Mellish's, based on an active chart parser, and shares the many advantages of Mellish's technique. It is based on pure syntactic knowledge, it is independent of all grammars, and it does not slow down the original parsing operation if there is no ill-formedness. However, unlike Mellish's technique, it doesn't employ any complicated heuristic parameters. There are two key points. First, instead of using a unified or interleaved process for finding errors and correcting them, we separate the initial error detection stage from the other stages and adopt a version of bi-directional parsing. This effectively prunes the search space. Second, it employs normal top-down parsing, in which each parsing state reflects the global context, instead of topdown chart parsing. This enables the technique to determine the global plausibility of candidates easily, based on an admissible A* search. The proposed strategy could enumerate all possible minimal-penalty solutions in just 4 times the time taken to parse the correct sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "It is important that natural language interface systems have the capability of composing the globally most plausible explanation if a given input can not be syntactically parsed. 
This would be useful for handling erroneous inputs from the user and for offsetting grammar and lexicon insufficiency. Also, such a capability could be applied to deal with the ungrammatical sentences and sentence fragments that frequently appear in spoken dialogs (Bear, Dowding and Shriberg, 1992). Several efforts have been made to achieve this objective ((Lang, 1988; Saito and Tomita, 1988), for example). One major decision to be made in designing this capability is whether knowledge other than purely syntactic knowledge is to be used. Other-than-syntactic knowledge includes grammar-specific recovery rules such as meta-rules (Weishedel and Sondheimer, 1983), semantic or pragmatic knowledge which may depend on a particular domain (Carbonell and Hayes, 1983), or the characteristics of the ill-formed utterances observed in human discourse (Hindle, 1983). Although it is obvious that utilizing such knowledge allows us to devise more powerful strategies, we should first determine the effectiveness of using only syntactic knowledge. Moreover, the result can be applied widely, as using syntactic knowledge is the basis of most strategies.", "cite_spans": [ { "start": 444, "end": 478, "text": "(Bear, Dowding and Shriberg, 1992)", "ref_id": "BIBREF0" }, { "start": 545, "end": 557, "text": "(Lang, 1988;", "ref_id": "BIBREF5" }, { "start": 558, "end": 581, "text": "Saito and Tomita, 1988)", "ref_id": "BIBREF11" }, { "start": 822, "end": 854, "text": "(Weishedel and Sondheimer, 1983)", "ref_id": "BIBREF14" }, { "start": 929, "end": 956, "text": "(Carbonell and Hayes, 1983)", "ref_id": "BIBREF1" }, { "start": 1037, "end": 1051, "text": "(Hindle, 1983)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One significant advance in the use of syntactic knowledge was the technique proposed by Mellish (1989).", "cite_spans": [ { "start": 103, "end": 117, "text": "Mellish (1989)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It can handle not only unknown/misspelled words, but also omitted words and extraneous words in sentences. It can deal with such problems, and develop plausible explanations quickly, since it utilizes the full syntactic context by using an active chart parser (Kay, 1980; Gazdar and Mellish, 1989). One problem with his technique is that its performance depends heavily on how the search heuristic, which is implemented as a score calculated from six parameters, is tuned. This heuristic complicates the algorithm significantly. It must be one of the reasons why the performance of the method, as Mellish himself noted, dropped dramatically when the input contained multiple errors.", "cite_spans": [ { "start": 259, "end": 270, "text": "(Kay, 1980;", "ref_id": "BIBREF4" }, { "start": 271, "end": 296, "text": "Gazdar and Mellish, 1989)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper proposes a new technique for parsing inputs that contain simple kinds of ill-formedness. This generalized parsing strategy is, like Mellish's, based on an active chart parser, and so shares many of the advantages of Mellish's technique. It is based on purely syntactic knowledge, it is independent of any particular grammar, and it does not slow down the original parsing operation if there is no ill-formedness. However, unlike Mellish's technique, it does not employ any complicated heuristic parameters. 
There are two key points. First, instead of using a unified or interleaved process for finding errors and correcting them, we separate the initial error detection stage from the other stages and adopt a version of bi-directional parsing, which has been shown to be a useful strategy for fragment parsing in its own right (Satta and Stock, 1989). This effectively prunes the search space and allows the new technique to take full account of the right-side context. Second, it employs normal top-down parsing, in which each parsing state reflects the global context, instead of top-down chart parsing. This enables the technique to determine the global plausibility of candidates easily. The results of preliminary experiments are encouraging. The proposed strategy could enumerate all possible minimal-penalty solutions in just 4 times the time taken to parse the correct sentences. That is, it is almost twice as fast as Mellish's strategy.", "cite_spans": [ { "start": 817, "end": 840, "text": "(Satta and Stock, 1989)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The basic strategy of Mellish's technique is to run a bottom-up parser over the input and then, if this fails to find a complete parse, to run a generalized top-down parser over the resulting chart to hypothesize complete parse candidates. When the input is well-formed, the bottom-up parser (precisely speaking, a left-corner parser without top-down filtering) generates the parse without any overhead. Even if it fails, it is guaranteed to find all complete constituents of all possible parses. Reference to these constituents enables us to avoid repeating existing work and to exploit the full syntactic context rather than just the left-side context of error candidates. The generalized top-down parser attempts to locate minimal errors by refining the set of "needs" that originates from the bottom-up parsing. Each need indicates the absence of an expected constituent. The generalized parser hypothesizes, and so remedies, an error once it has been sufficiently focused on. Next, the parser tries to construct a complete parse by taking account of the hypothesis. In the case of multiple errors, the location and recovery phases are repeated until a complete parse is obtained.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mellish's Technique And Its Problems", "sec_num": "2" }, { "text": "The data structure introduced for representing information about local needs is called the generalized edge. It is an extension of active and inactive edges, and is written as <C from S to E needs cs1 from s1 to e1, cs2 from s2 to e2, ..., csn from sn to en>, where C is a category, the csi are sequences of categories (which will be shown inside square brackets), and S, E, si, and ei are positions in the input. The special symbol "*" denotes a position that remains to be determined. The presence of an edge of this kind in the chart indicates that the parser is attempting to find a phrase of category C that covers the input from position S to E, but that in order to succeed it must still satisfy all the needs listed. Each need specifies a sequence of categories csi that must be found contiguously to occupy the portion from si to ei. 
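To make this representation concrete, the following sketch (not from the original paper; the class and field names are my own, and the undetermined position "*" is modelled as None) shows one way a generalized edge could be encoded:

```python
from dataclasses import dataclass, field
from typing import List, Optional

Position = Optional[int]   # None stands for the undetermined position "*"

@dataclass
class Need:
    """One unsatisfied expectation: the categories in `cats` must be found
    contiguously, occupying the input from `start` to `end`."""
    cats: List[str]
    start: Position
    end: Position

@dataclass
class GeneralizedEdge:
    """<C from S to E needs cs1 from s1 to e1, ..., csn from sn to en>"""
    category: str                       # C
    start: Position                     # S
    end: Position                       # E
    needs: List[Need] = field(default_factory=list)

    @property
    def is_inactive(self) -> bool:
        # An edge with no remaining needs corresponds to an inactive edge:
        # <C from S to E needs nothing>
        return not self.needs

# Example: <VP from 2 to * needs [NP, PP] from 3 to *>
edge = GeneralizedEdge("VP", 2, None, [Need(["NP", "PP"], 3, None)])
print(edge.is_inactive)   # False: the edge still has an unsatisfied need
```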
An edge with an empty need, which corresponds to an inactive edge, is represented as <C from S to E needs nothing>.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mellish's Technique And Its Problems", "sec_num": "2" }, { "text": "The generalized top-down parser that uses the generalized edge as its data structure is governed by six rules: three for locating errors and the other three for recovering from the three kinds of error. The three error-locating rules are the top-down rule, the fundamental rule and the simplification rule. The first is also used in the ordinary top-down chart parser, and the third is just for housekeeping. The second rule, the fundamental rule, directs the parser to combine an active edge with an inactive edge. It was extended from the ordinary rule so that found constituents could be incorporated from either direction. However, the constituents that can be absorbed are limited to those in the first category sequence; that is, to one of the categories belonging to cs1. The application of the six rules is mainly controlled by the scores given to edges; that is, agenda control is employed. The score of a particular edge reflects its global plausibility and is calculated from six parameters, one of which, for example, says that edges that arise from the fundamental rule are preferable to those that arise from the top-down rule.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mellish's Technique And Its Problems", "sec_num": "2" }, { "text": "Although Mellish's technique has many advantages, such as the ability to utilize the right-side context of errors and independence from any specific grammar, it can create a huge number of edges, as it mainly uses the top-down rule for finding errors. That is, refining a set of error candidates toward a pre-terminal category by applying only the top-down rule may create too many alternatives. In addition, since the generalized edges represent just local needs and do not reflect the global needs that created them, it is hard to decide whether they should be expanded. In particular, these problems become critical when parsing ill-formed inputs, since the top-down rule may be applied without any anchoring; pre-terminals cannot be considered as anchors, as pre-terminals may be freely created by the error recovery rules. This argument also applies to the start symbol, as that symbol may be created depending on the constituent hypothesized by the error recovery rules and the fundamental rule. Mellish uses agenda control to prevent the generation of potentially useless edges. For this purpose, the agenda control needs complicated heuristic scoring, which complicates the whole algorithm. Moreover, so that the scoring reflects global plausibility, it must employ a sort of dependency analysis, a mechanism for the propagation of changes and an easily reordered agenda, which clearly contradicts his original idea that edges should reflect only local needs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mellish's Technique And Its Problems", "sec_num": "2" }, { "text": "C1 --> ...Cs1 C Cs2... (in the grammar), where Cs1 is not empty", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bottom-up rule:", "sec_num": null }, { "text": "where if Cs2 is empty then E2 = E, else E2 = *.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bottom-up rule:", "sec_num": null }, { "text": "The first phase of the process is invoked after the failure of left-corner parsing. 
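Before turning to the first phase in detail, here is a rough sketch (my own, and deliberately simplified: positions are concrete integers, the undetermined position "*" is ignored, and only the first need of an edge is considered) of how a fundamental rule extended to work in either direction might absorb a found constituent:

```python
from typing import List, Optional, Tuple

Cat = str
Pos = Optional[int]                    # None would stand for "*"; ignored here
Need = Tuple[List[Cat], Pos, Pos]      # (category sequence, start, end)
Found = Tuple[Cat, int, int]           # an inactive edge: (category, start, end)

def absorb(first_need: Need, found: Found) -> Optional[Need]:
    """Absorb `found` at whichever end of the first need it lines up with:
    either the leftmost category starting at the need's start position, or
    the rightmost category ending at the need's end position.  Returns the
    reduced need, or None if the constituent fits at neither end."""
    cats, s1, e1 = first_need
    cat, s, e = found
    if cats and cats[0] == cat and s1 == s:      # absorb from the left
        return (cats[1:], e, e1)
    if cats and cats[-1] == cat and e1 == e:     # absorb from the right
        return (cats[:-1], s1, s)
    return None

# <S from 0 to 6 needs [NP, VP] from 0 to 6> plus an inactive <VP from 3 to 6>:
print(absorb((["NP", "VP"], 0, 6), ("VP", 3, 6)))   # -> (['NP'], 0, 3)
```

When the category sequence becomes empty, the now-vacuous need can be dropped, which is the kind of bookkeeping the simplification rule appears to perform.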
The bottom-up parsing leaves behind all complete constituents of every possible parse, together with unsatisfied active edges for all error points that lie to the immediate right of sequences of constituents corresponding to rule right-hand sides. Since parsing proceeds left to right, an active edge is generated only when an error point exists to the right of the found constituents. In the first phase, bi-directional bottom-up parsing generates all generalized edges that represent unsatisfied expectations to the right and left of constituents. From some perspectives, the role this phase plays is similar to that of the covered bi-directional phase of the Picky parser (Magerman and Weir, 1992), though the method proposed herein does not employ stochastic information at all. This process can be described by the three rules shown in Figure 1. As can be seen, it is bi-directional bottom-up parsing that uses generalized edges as the data structure. For simplicity, the details for avoiding duplicated edge generation have been omitted. It is worth noting that after this process, the needs listed in each generalized edge indicate that the expected constituents did not exist, whereas, before this process, a need may exist just because an expectation has not yet been checked.", "cite_spans": [ { "start": 733, "end": 758, "text": "(Magerman and Weir, 1992)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 899, "end": 907, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Figure", "sec_num": null }, { "text": "The second phase locates errors and corrects them. The location operation proceeds by refining a need into a more precise one, and it starts from the global need that refers to the start symbol, S, from 0 to n, where n is the length of the given input. In the notation of generalized edges, that need can be represented as <needs [S] from 0 to n>.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure", "sec_num": null }, { "text": "A data structure reflecting global needs directly is used in this phase, so the left part of each generalized edge is redundant and can be omitted. In addition, two values, g and h, are introduced: g denotes how much cost has been", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure", "sec_num": null }, { "text": "expended for the recovery so far, and h is an estimate of how much cost will be needed to reach a solution. Cost reflects solution plausibility; solutions with low plausibility have high costs. Thus, the data structure used in this phase is <needs cs1 from s1 to e1, ..., csn from sn to en, g: G, h: H>.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bi-Directional Parsing Rules", "sec_num": "1." }, { "text": "Here, the number of errors corrected so far is taken as g, and the total number of categories in the needs is used as h. As mentioned above, since the needs listed indicate errors that were detected by the preceding process and remain to be refined, the value of h is always less than or equal to the number of errors that must be corrected to get a solution. That is, the best-first search using g+h as the cost function is an admissible A* search (Rich and Knight, 1991). Needless to say, more sophisticated cost functions can also be used, in which, for example, the cost depends on the kind of error. The rules governing the second phase, which correspond to the search-state transition operators in the context of search problems, are shown in Figure 2. The top-down rule and the refining rule locate errors and the other three rules are for correcting them. 
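Viewed as a search problem, this control regime is a standard best-first search over needs. The following sketch is my own illustration, not code from the paper: the expand function is an abstract stand-in for the rules of Figure 2, and the state encoding is deliberately schematic. It shows the agenda ordered by g+h, with ties broken in favour of the smaller h; since h never overestimates the remaining corrections, the first solution popped has minimal penalty:

```python
import heapq
import itertools
from typing import Callable, Iterable, List, Tuple

# A search state is (needs, g, h): the remaining needs, the number of
# corrections made so far (g), and the number of categories still listed (h).
State = Tuple[tuple, int, int]

def best_first(start: State,
               expand: Callable[[State], Iterable[State]],
               is_solution: Callable[[State], bool],
               max_steps: int = 10000) -> List[State]:
    """Enumerate solutions in order of increasing g+h, breaking ties on h.
    Because h is a lower bound on the corrections still required, this is
    an admissible A* search: the first solution found has minimal penalty."""
    tick = itertools.count()              # unique tie-breaker for the heap
    agenda = [(start[1] + start[2], start[2], next(tick), start)]
    solutions: List[State] = []
    while agenda and max_steps > 0:
        max_steps -= 1
        _, _, _, state = heapq.heappop(agenda)
        if is_solution(state):
            solutions.append(state)
            continue
        for nxt in expand(state):
            heapq.heappush(agenda, (nxt[1] + nxt[2], nxt[2], next(tick), nxt))
    return solutions

# Toy run: the initial need covers the whole input; each expansion "corrects"
# the single remaining error, raising g by one and emptying the needs.
start: State = (("S from 0 to n",), 0, 1)
expand = lambda st: [((), st[1] + 1, 0)] if st[2] else []
print(best_first(start, expand, lambda st: st[2] == 0 and not st[0]))
```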
Most important is the refining rule, which tries to locate errors by using generalized edges in a top-down manner toward pre-terminals. This reduces the frequency of using the top-down rule and prevents an explosion in the number of alternatives.", "cite_spans": [ { "start": 462, "end": 485, "text": "(Rich and Knight, 1991)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 763, "end": 771, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "The Bi-Directional Parsing Rules", "sec_num": "1." }, { "text": "This process starts from <needs [S] from 0 to n, g: 0, h: 1>.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bi-Directional Parsing Rules", "sec_num": "1." }, { "text": "To reach the following need means to get one solution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bi-Directional Parsing Rules", "sec_num": "1." }, { "text": "<needs nothing, g: G, h: 0>.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Bi-Directional Parsing Rules", "sec_num": "1." }, { "text": "The need with the smallest value of g+h is processed first. If two needs have the same value of g+h, the one with the smaller h value dominates. This control strategy guarantees that the solution with minimal cost, that is, the solution with the minimum number of recoveries, is found first. Figure 3 shows an example of this technique in operation. (a) shows the sample grammar adopted, (b) shows the input to be processed, and (c) shows some of the edges left behind after the failure of the original bottom-up parsing. As shown in (d), the first phase generates several edges that indicate unsatisfied expectations to the left of found constituents. The second phase begins with need (e-1). Among others, (e-2) and (e-3) are obtained by applying the refining rule and the top-down rule, respectively. Since (e-2) has the smallest value of g+h, it takes precedence for expansion. The refining rule processes (e-2) and generates (e-4) and (e-7), among others. The solution indicated by (e-6), which says that the fifth word of the input must be a preposition, is generated from (e-4). Another solution, indicated by (e-9), which says that the fifth word of the input must be a conjunctive, is derived from (e-7). That the top-down rule played no role in this example was not incidental. In reality, application of the top-down rule may be meaningful only when all the constituents listed in the RHS of a grammar rule contain errors. In every other case, generalized edges derived from that rule must already have been generated by the first phase. The application of the top-down rule can be restricted to cases involving unary rules, if one assumes that at most one error may exist.", "cite_spans": [], "ref_spans": [ { "start": 286, "end": 294, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Figure 2. The Error Locating and Recovery Rules", "sec_num": null }, { "text": "In order to evaluate the technique described above, some preliminary experiments were conducted. The experiments employed the same framework as used by Mellish, and used a similar-sized grammar: a small ε-free CF-PSG for a fragment of English with 141 rules and 72 categories. Random sentences (10 for each length considered) were generated from the grammar, and then random occurrences of specific types of errors were introduced into these sentences. The errors considered were none, deletion of one word, addition of one known or unknown word, and substitution of one unknown or known word for one word of the sentence. 
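The error-introduction step of this setup can be sketched roughly as follows. This is my own reconstruction for illustration only; the function name, the vocabulary argument and the placeholder token "xyzzy" standing in for an unknown word are assumptions, not details taken from the paper:

```python
import random
from typing import List

def introduce_error(sentence: List[str], kind: str, vocabulary: List[str],
                    rng: random.Random) -> List[str]:
    """Return a copy of `sentence` with one randomly placed error of `kind`:
    'none', 'delete', 'add_known', 'add_unknown', 'subst_known' or 'subst_unknown'."""
    s = list(sentence)
    i = rng.randrange(len(s))
    unknown = "xyzzy"                     # a token the lexicon does not cover
    if kind == "delete":
        del s[i]
    elif kind == "add_known":
        s.insert(i, rng.choice(vocabulary))
    elif kind == "add_unknown":
        s.insert(i, unknown)
    elif kind == "subst_known":
        s[i] = rng.choice(vocabulary)
    elif kind == "subst_unknown":
        s[i] = unknown
    return s                              # 'none' falls through unchanged

rng = random.Random(0)
print(introduce_error("the cat sat on the mat".split(), "delete", ["a", "dog"], rng))
```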
The amount of work done by the parser was calculated using the concept of a "cycle". The parser consumes one cycle for processing each edge. The results are shown in Table 1. The preliminary results show that, for short sentences with one error, enumerating all possible minimum-penalty errors takes about 4 times as long as parsing the correct sentences. This is almost twice the speed of Mellish's strategy. As 75% of the processing is taken up by the initial bi-directional parsing operation, more cycles are needed to get the first solution with the proposed technique than with Mellish's strategy.", "cite_spans": [], "ref_spans": [ { "start": 780, "end": 787, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Preliminary Experiments", "sec_num": "4" }, { "text": "The second phase of the proposed technique is based on ordinary top-down parsing, or tree search, rather than chart parsing. As a consequence, some error location operations may be redundant, as Mellish pointed out. For example, suppose a new grammar rule, N --> N PP, is added to the grammar given in Figure 3. In that case, the following edge, located in the first phase, may cause a redundant error-locating process, as the same search is triggered by (e-4).", "cite_spans": [], "ref_spans": [ { "start": 297, "end": 305, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": ".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "One way of avoiding such redundancies is to use a data structure that reflects just local needs. However, it is true that an effective error location process must take global needs into account. There is a tradeoff between simplicity and the avoidance of duplicated effort. The technique proposed here employs a data structure that directly reflects the global needs. Mellish, on the other hand, utilized a structure that reflected just local needs and tried to put global needs into the heuristic function. The result, at least so far as confirmed by tests, was that pruning allowed the simple method to overcome the drawback of duplicated effort. Moreover, Mellish's dependency control mechanism, introduced to maintain the plausibility scores, means that edges are no longer local. In addition, it can be expected that a standard graph-search strategy for avoiding duplicated search is applicable to the proposed technique.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "Theoretical investigation is needed to confirm how the number of grammar rules and the length of the input affect the amount of computation needed. Furthermore, the algorithm has to be extended in order to incorporate the high-level knowledge that comes from semantics and pragmatics. Stochastic information, such as statistics on category trigrams, would be useful for effective control.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5" }, { "text": "The technique proposed here resolves the above problems as follows. First, some portion of the error location process is separated from, and precedes, the processes that are governed by agenda control, and is achieved by using a version of bi-directional parsing. 
Second, so that the search process can be anchored by the start symbol, a data structure is created that can represent global plausibility. Third, in order to reduce the dependency on the top-down rule, a rule is developed that uses two active edges to locate errors. This process is closer to ordinary top-down parsing than to chart parsing, so global plausibility scoring is accurate and easily calculated. For simplicity of explanation, a simple CF-PSG grammar formalism is assumed throughout this paper, although there are obvious generalizations to other formalisms such as DCG (Pereira and Warren, 1980) or unification-based grammars (Shieber, 1986).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposed Algorithm", "sec_num": "3" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Integrating Multiple Knowledge Sources for Detection and Correction of Repairs in Human-Computer Dialog", "authors": [ { "first": "John", "middle": [], "last": "Bear", "suffix": "" }, { "first": "John", "middle": [], "last": "Dowding", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Shriberg", "suffix": "" } ], "year": 1992, "venue": "Proceedings of 30th ACL", "volume": "", "issue": "", "pages": "56--63", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bear, John, Dowding, John and Shriberg, Elizabeth (1992). Integrating Multiple Knowledge Sources for Detection and Correction of Repairs in Human-Computer Dialog. Proceedings of 30th ACL, 56 -63.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Recovery Strategies for Parsing Extragrammatical Language", "authors": [ { "first": "Jaime", "middle": [ "G" ], "last": "Carbonell", "suffix": "" }, { "first": "Philip", "middle": [ "J" ], "last": "Hayes", "suffix": "" } ], "year": 1983, "venue": "JACL", "volume": "9", "issue": "3-4", "pages": "123--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carbonell, Jaime G. and Hayes, Philip J. (1983). Recovery Strategies for Parsing Extragrammatical Language. JACL, 9 (3-4), 123 -146.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Natural Language Processing in LISP", "authors": [ { "first": "Gerald", "middle": [], "last": "Gazdar", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Mellish", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gazdar, Gerald and Mellish, Chris (1989). Natural Language Processing in LISP. Wokingham: Addison-Wesley.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Deterministic Parsing of Syntactic Non-fluencies", "authors": [ { "first": "Donald", "middle": [], "last": "Hindle", "suffix": "" } ], "year": 1983, "venue": "Proceedings of 21st ACL", "volume": "", "issue": "", "pages": "123--128", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hindle, Donald (1983). Deterministic Parsing of Syntactic Non-fluencies. Proceedings of 21st ACL, 123 -128.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Algorithm Schemata and Data Structures in Syntactic Processing", "authors": [ { "first": "Martin", "middle": [], "last": "Kay", "suffix": "" } ], "year": 1980, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kay, Martin (1980). Algorithm Schemata and Data Structures in Syntactic Processing. 
Research Report CSL-80-12 Xerox PARC.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Parsing Incomplete Sentences", "authors": [ { "first": "Bernard", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1988, "venue": "Proceedings of COLING", "volume": "88", "issue": "", "pages": "365--371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lang, Bernard (1988). Parsing Incomplete Sentences. Proceedings of COLING 88, 365 -371.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Efficiency, Robustness and Accuracy in Picky Chart Parsing", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Magerman", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Weir", "suffix": "" } ], "year": 1992, "venue": "Proceedings of 3Oth ACL", "volume": "", "issue": "", "pages": "40--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Magerman, David M. and Weir, Carl (1992). Efficiency, Robustness and Accuracy in Picky Chart Parsing. Proceedings of 3Oth ACL, 40 -47.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Some Chart-Based Techniques for Parsing Ill-Formed Input", "authors": [ { "first": "Chris", "middle": [ "S" ], "last": "Mellish", "suffix": "" } ], "year": 1989, "venue": "Proceedings of 27th ACL", "volume": "", "issue": "", "pages": "102--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mellish, Chris S. (1989). Some Chart-Based Techniques for Parsing Ill-Formed Input. Proceedings of 27th ACL, 102-109.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Definite Clause Grammars for Language Analysis -A Survey of the Formalism and a Comparison with Augmented Transition Networks", "authors": [], "year": null, "venue": "Artificial Intelligence", "volume": "13", "issue": "3", "pages": "231--278", "other_ids": {}, "num": null, "urls": [], "raw_text": "Definite Clause Grammars for Language Analysis -A Survey of the Formalism and a Comparison with Augmented Transition Networks. Artificial Intelligence, 13 (3), 231 -278.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Artificial Intelligence", "authors": [ { "first": "Elaine", "middle": [], "last": "Rich", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rich, Elaine and Knight, Kevin (1991). Artificial Intelligence (2nd ed.). New York: McGraw-Hill.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Parsing Noisy Sentences", "authors": [ { "first": "Hiroaki", "middle": [], "last": "Saito", "suffix": "" }, { "first": "Masaru", "middle": [], "last": "Tomita", "suffix": "" } ], "year": 1988, "venue": "Proceedings of COLING", "volume": "88", "issue": "", "pages": "561--566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saito, Hiroaki and Tomita, Masaru (1988). Parsing Noisy Sentences. Proceedings of COLING 88, 561 -566.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Formal Properties and Implementation of Bidirectional Charts", "authors": [ { "first": "Giorgio", "middle": [], "last": "Satta", "suffix": "" }, { "first": "Oliviero", "middle": [], "last": "Stock", "suffix": "" } ], "year": 1989, "venue": "Proceedings oflJCAl", "volume": "89", "issue": "", "pages": "1480--1485", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satta, Giorgio and Stock, Oliviero (1989). Formal Properties and Implementation of Bidirectional Charts. 
Proceedings oflJCAl 89, 1480 -1485.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "An Introduction to Unification-Based Approaches to Grammar", "authors": [ { "first": "Stuart", "middle": [ "M" ], "last": "Shieber", "suffix": "" } ], "year": 1986, "venue": "Stanford: CSLI Lecture Notes", "volume": "4", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shieber, Stuart M. (1986). An Introduction to Unification-Based Approaches to Grammar. Stanford: CSLI Lecture Notes 4.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Meta-Rules as a Basis for Processing Ill-Formed Input", "authors": [ { "first": "Ralph", "middle": [ "M" ], "last": "Weishedel", "suffix": "" }, { "first": "Norman", "middle": [ "K" ], "last": "Sondheimer", "suffix": "" } ], "year": 1983, "venue": "JACL", "volume": "9", "issue": "3-4", "pages": "161--177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weishedel, Ralph M. and Sondheimer, Norman K. (1983). Meta-Rules as a Basis for Processing Ill- Formed Input. JACL, 9 (3-4), 161 -177.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Empty category rule: ", "num": null, "type_str": "figure", "uris": null }, "FIGREF1": { "text": "* needs [VP] from 2 to *> ...(c-2) ...(c-3) ...(c-4) ...(c-5) (d) Examples edges generated in the bi-directional parsing: ...(d-l) Bottom up rule, (c-l), (a-2) ...(d-2) Fundamental rule, (c-l), (c-5) (e) Focusing on and recovering from errors: ...(e-l) Initial needs ...(e-2) Refining rule, (e-l), (c-2) ...(e-3) Top-down rule, (e-l), (a-l) ...(e-4) Refining rule, (e-2), (c-3) ...(e-5) Refining rule, (e-4), (d-l) ...(e-6) Unknown word rule, (e-5) The fifth word, \"an\", is hypothesized to be an unknown preposition (P) ...(e-7) Refining rule, (e-2), (c-4) ...(e-8) Refining rule, (e-7), (d-2) ...(e-9) Unknown word rule, (e-8) The fifth word, \"an\", hypothesized to be an unknown conjunctive (C)", "num": null, "type_str": "figure", "uris": null }, "FIGREF2": { "text": "An Example of statistics in the table are described as follows. BU cycles is the number of cycles taken to exhaust the chart in the initial bottom-up parsing. BD cycles is the number of cycles required for bi-directional bottom-up parsing in the first phase. #solns is the number of different solutions and represents descriptions of possible errors. First~Last is the number of cycles required for error location and recovery to find the first / last solution. LR cycles is the number of cycles in the error locating and recovery phase required to exhaust all possibilities of sets of errors with the same penalty as the first solution.", "num": null, "type_str": "figure", "uris": null }, "TABREF0": { "num": null, "text": " where s 1 < S1 or Sl-*, E1 < el or el -*. Csi+ 1 from s to ei+l .... > . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .", "type_str": "table", "html": null, "content": "
Figure 1. The Bi-Directional Parsing Rules (bottom-up rule, fundamental rule, simplification rule), stated over generalized edges of the form <C from S to E needs cs1 from s1 to e1, ...>. The full rule statements were not recovered from the source; a surviving fragment of the simplification rule reads "<C from S to E needs ..., Csi-1 from si-1 to s, [] from s to s, Csi+1 from s to ei+1, ...>", i.e. a need with an empty category sequence covering no input is dropped.
" }, "TABREF1": { "num": null, "text": "Top-down rule: C1 --> ...RHS (in grammar) Cl...Csl] from Sl to el ..... g: G, h: H> where C1 is a pre-terminal where Sl < SI [CI...Csl] from Sl to el ..... g: G, h: H> where C1 is a pre-terminal where the edge, does not exist in the chart", "type_str": "table", "html": null, "content": "
Figure 2. The Error Locating and Recovery Rules: the top-down rule and the refining rule (for locating errors) and the garbage rule, the unknown word rule and the empty category rule (for recovery). Only the refining rule was recovered legibly from the source. Refining rule: given
<needs [...Csl1, C1, Csl2...] from s1 to e1, ..., g: G, h: H>
and an edge
<C1 from S to E needs Cs1 from S1 to E1, ..., Csn from Sn to En>,
produce
<needs Csl1 from s1 to S, Cs1 from S1 to E1, ..., Csn from Sn to En, Csl2 from E to e1, ..., g: G, h: H + (sum of the lengths of the Csi) - 1>.
The result must be well-formed, that is, s1 < S1 or s1 = * or S1 = *, and so on.
Fragments of the top-down rule and of the garbage and unknown word rules also survive (e.g. "C1 --> ...RHS (in grammar)", "where C1 is a pre-terminal"), but their full statements were not recovered.
" }, "TABREF2": { "num": null, "text": "", "type_str": "table", "html": null, "content": "
Error                   | Length of original | BU cycles | BD cycles | #solns | First | Last | LR cycles
None                    | 6                  | 70        |           | 1.3    |       |      |
                        | 9                  | 114       |           | 1.4    |       |      |
                        | 12                 | 170       |           | 2.0    |       |      |
Delete one word         | 6                  | 42        | 132       | 6.0    | 8     | 24   | 31
                        | 9                  | 79        | 255       | 4.5    | 19    | 32   | 43
                        | 12                 | 111       | 378       | 6.2    | 25    | 43   | 57
Add unknown word        | 6                  | 60        | 191       | 3.0    | 14    | 20   | 28
                        | 9                  | 99        | 322       | 3.7    | 25    | 37   | 46
                        | 12                 | 147       | 534       | 2.6    | 46    | 61   | 72
Add known word          | 6                  | (remaining figures not recovered from the source)
Substitute unknown word |                    | (figures not recovered from the source)
Substitute known word   |                    | (figures not recovered from the source)
" } } } }