{
"paper_id": "1991",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:33:57.965331Z"
},
"title": "SLOW AND FAST PARALLEL RECOGNITION",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "De Vreught",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Delft University of Te",
"location": {
"addrLine": "Julianalaan 132",
"postCode": "2628 BL",
"settlement": "Delft",
"country": "The Netherlands"
}
},
"email": ""
},
{
"first": "Job",
"middle": [],
"last": "Honig",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Delft University of Te",
"location": {
"addrLine": "Julianalaan 132",
"postCode": "2628 BL",
"settlement": "Delft",
"country": "The Netherlands"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In the first part of this paper a slow paral lel recognizer is described for general CFG's. The recognizer runs in 0(n 3 / p(n)) time with p (n) = O(n 2) processors. It generalizes the items of the Earley algorithm to double dot ted items, which are more suited to parallel parsing. In the second part a fast parallel recognizer is given for general CFG 's. The recognizer runs in O(log n) time using 0(n 6) processors. It is a generalisation of the Gib bons and Rytter algorithm for grammars in CNF.",
"pdf_parse": {
"paper_id": "1991",
"_pdf_hash": "",
"abstract": [
{
"text": "In the first part of this paper a slow paral lel recognizer is described for general CFG's. The recognizer runs in 0(n 3 / p(n)) time with p (n) = O(n 2) processors. It generalizes the items of the Earley algorithm to double dot ted items, which are more suited to parallel parsing. In the second part a fast parallel recognizer is given for general CFG 's. The recognizer runs in O(log n) time using 0(n 6) processors. It is a generalisation of the Gib bons and Rytter algorithm for grammars in CNF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The subject of context-free parsing is well studied, e.g. see (Aho and Ullman, 1972; 1973; Harrison, 1978) . Nowadays, research on the subject has shifted to parallel context free parsing ( op den Akker, Alblas, Nijholt, and Oude Luttighuis, 1989) . Two areas of interest can be distinguished: slow and fast parallel parsing. We call a parallel algorithm fast when it does its job in 'polylogarithmic time. This is in contrast to the sequential case, in which algorithms are called fast when they run in polynomial time. Obtaining a fast parallel algorithm is often quite simple: when the fast sequential algorithm is highly parallelizable, using an exponential number of processors is sufficient. This is not very realistic, however.",
"cite_spans": [
{
"start": 62,
"end": 84,
"text": "(Aho and Ullman, 1972;",
"ref_id": null
},
{
"start": 85,
"end": 90,
"text": "1973;",
"ref_id": null
},
{
"start": 91,
"end": 106,
"text": "Harrison, 1978)",
"ref_id": null
},
{
"start": 212,
"end": 247,
"text": "Nijholt, and Oude Luttighuis, 1989)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "A parallel algorithm is called f ea.si ble only when it uses a polynomial number of proces sors. Note that when a fe asible slow par allel algorithm runs in polynomial time, it",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "\u2022using initials: J.P.M. de Vreught and H.J. Honig.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "can be simulated by a fast sequential algo rithm. Therefore in practice we often see that slow parallel is fast enough; fast parallel al gorithms often achieve their speed because of their huge number of processors and large amounts of storage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "Several authors have studied a\ufffdgorithms for slow parallel recognition. Most of these. al gorithms are variants of the Cocke-Younger Kasami ( CYK) algorithm and the Earley al gorithm. In the first part of this paper an other slow parallel recognizer is given ( de Vreught and Honig, 1989; 19\ufffd0b )_-Its new fe ature is that it uses double dotted it\ufffdms, which are more natural for parallel parsing; these items make it easy to do error determi nation, a fe ature that is shared with niost parallel bottom up algorithms. Although there are some similarities between the three algorithms, they should not be regarded as variants of each other since they all fill their respective matrices with different 'items' and for entirely different reasons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "When compared to a. parallel versio\ufffd of the Earley algorithm, which would have to be bottom up, our algorithm generates far less i terns on the principal diagonal of the recognition matrix. A detailed comparison of the items required by the given algorithm and the Earley algorithm will be necessary to show the strengths \u2022 or weaknesses of both approaches to parallel parsing. The si zes of the item sets in relation to particular classes of grammars is still under research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "The subject of fast parallel pars\ufffdng is rel atively new. Amongst the first to give a fast parallel recognizer were Gibbons and Rytter (1988). Their recognizer requires a grammar in CNF; it can be regarded as the fast par-allel version of the slow parallel CYK algo rithm. The speeded up version is obtained by also examining the consequences of incom plete items. When an incomplete item gets completed, we can also complete the conse quences immediately. The reason for the al gorithm being fast is based on the fact that for every skewed tree (with n internal nodes) of height 0( n) describing the composition of a certain item, there exists a reasonably well balanced one of height O(log n) that uses both complete and incomplete items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "In the second part of the paper a fas_ t par allel recognizer for general CFG 's is given ( de Vreught and Honig, 1990a). In spite of the fact that any CFG can be transformed into CNF in 0(1) time, usi\ufffdg CNF is undesirable in practice ( especially in natural language processing). The fast parallel recognizer does not need to transform the grammar. The fast parallel recognizer can be.regarded as the fast parallel version of the slow paraliel recognizer described in the first part. The fa st parallel recognizer is based on the Gibbons and J;lyt ter algorithm for grammars in CNF (Gibbons and Rytter, 1988). The paper is concluded with some final remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1"
},
{
"text": "We start by sketching the ideas behind the slow parallel recognizer. Then we will give an inductive relation which plays a central role in our algorithms. Finally we will present the slow parallel recognizer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE SLOW PARALLEL RECOGNIZER",
"sec_num": "2"
},
{
"text": "Let a1 \u2022\u2022\u2022 a n be the string to be recognized. We are going to build an upper triangular matrix U as shown below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMAL DESCRIPTION",
"sec_num": "2.1"
},
{
"text": "In each cell Uij we enter items of the form A --+ a \u2022/3 \u2022, such that A --+ a{3 1 is a produc tion and /3 \u21d2 * ai+I ... aj . We will also insist 128 that if f3 = A then a, = A: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMAL DESCRIPTION",
"sec_num": "2.1"
},
{
"text": "A \ufffd a f3 , \ufffd i i J Suppose B --+ \u2022 /3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMAL DESCRIPTION",
"sec_num": "2.1"
},
{
"text": "A \ufffd a B , /3 \ufffd i i J",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 E U ij and let A --+ aB, be a production. In that case we can assert that A--+ a \u2022 B\u2022, E Uij :",
"sec_num": null
},
{
"text": "s ! a \ufffd a o \u2022 a1 ... a n \u2022 a n+l i i 0 n S --+ \u2022 a \u2022 E U On",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "This assertion follows from the application of the inclusion operation to B --+ \u2022 /3 \u2022 E Uij \u2022 Another operation is concatenation. If",
"sec_num": null
},
{
"text": "In the following, we will give a relation U defining the item sets constructed dur ing the recognition process. We do this by identifying the matrix U with the relation U This item is a base item.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "This assertion follows from the application of the inclusion operation to B --+ \u2022 /3 \u2022 E Uij \u2022 Another operation is concatenation. If",
"sec_num": null
},
{
"text": "such that A --+ -a\u2022/3\u2022, E Uij iff (i,j,A --+ o:\u2022/3\u2022, ) EU .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "This assertion follows from the application of the inclusion operation to B --+ \u2022 /3 \u2022 E Uij \u2022 Another operation is concatenation. If",
"sec_num": null
},
{
"text": "Let G = ( V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE RELATION",
"sec_num": "2.2"
},
{
"text": "This operation is called inclusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 If A--+ aB; E P and (i,j, B--+ \u2022/3\u2022 ) E U' then (i,j,A --+ a\u2022B \u2022, ) E U'.",
"sec_num": null
},
{
"text": "\u2022 If (i, k,A --+ o:\u2022/31 \u2022/32, ) E U' and (k,j,A --+ af31 \u2022f32 \u2022, ) E U' then (i,j,A --+ o:\u2022/31/32 \u2022 ,) EU ' .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 If A--+ aB; E P and (i,j, B--+ \u2022/3\u2022 ) E U' then (i,j,A --+ a\u2022B \u2022, ) E U'.",
"sec_num": null
},
{
"text": "This operation is called concatenation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 If A--+ aB; E P and (i,j, B--+ \u2022/3\u2022 ) E U' then (i,j,A --+ a\u2022B \u2022, ) E U'.",
"sec_num": null
},
{
"text": "\u2022 Nothing is in U' except those elements which must be in U' by applying the pre ceding rules finitely often.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 If A--+ aB; E P and (i,j, B--+ \u2022/3\u2022 ) E U' then (i,j,A --+ a\u2022B \u2022, ) E U'.",
"sec_num": null
},
{
"text": "It can be proved that U = U'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u2022 If A--+ aB; E P and (i,j, B--+ \u2022/3\u2022 ) E U' then (i,j,A --+ a\u2022B \u2022, ) E U'.",
"sec_num": null
},
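The inductive definition of U' above (base items, then the inclusion and concatenation rules applied until nothing changes) can be sketched as a sequential fixpoint computation. The grammar and the `recognize` helper below are hypothetical illustrations, not the paper's example; items (i, j, A → α•β•γ) are encoded as tuples of symbol tuples.

```python
# Sketch of definition 2.2.2 as a naive fixpoint: base items, then the
# inclusion and concatenation rules until closure. Hypothetical grammar.

def recognize(word, productions, start):
    """Items (i, j, A, alpha, beta, gamma) encode A -> alpha . beta . gamma
    with beta =>* a_{i+1} ... a_j; alpha, beta, gamma are symbol tuples."""
    n = len(word)
    U = set()
    for A, rhs in productions:
        if rhs == ():                          # base items for A -> lambda
            U |= {(j, j, A, (), (), ()) for j in range(n + 1)}
        for t, sym in enumerate(rhs):          # base items A -> alpha.a_j.gamma
            for j in range(1, n + 1):
                if sym == word[j - 1]:
                    U.add((j - 1, j, A, rhs[:t], (sym,), rhs[t + 1:]))
    while True:
        new = set()
        for (i, j, A, alpha, beta, gamma) in U:
            # Inclusion: a complete item B -> .beta. licenses C -> alpha'.B.gamma'.
            if alpha == () and gamma == ():
                for C, rhs in productions:
                    for t, sym in enumerate(rhs):
                        if sym == A:
                            new.add((i, j, C, rhs[:t], (A,), rhs[t + 1:]))
            # Concatenation: join with an adjacent item of the same production.
            for (i2, j2, A2, alpha2, beta2, gamma2) in U:
                if (j2 == i and A2 == A and alpha2 + beta2 == alpha
                        and gamma2 == beta + gamma):
                    new.add((i2, j, A, alpha2, beta2 + beta, gamma))
        if new <= U:
            break
        U |= new
    return any((0, n, A, (), rhs, ()) in U
               for A, rhs in productions if A == start)

grammar = [("S", ("A", "S")), ("S", ("b",)), ("A", ("a",))]
print(recognize("aab", grammar, "S"))   # True
print(recognize("ba", grammar, "S"))    # False
```

The closure pass here is sequential; the paper's observation is that all concatenations filling one diagonal of the matrix are independent and can run in parallel.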
{
"text": "We will now present the recognition . algo rithm ( de Vreught and Honig, 1989). In the algorithm mode is ei ther sequence or par allel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE RECOGNIZER",
"sec_num": "2.3"
},
{
"text": "Recognizer ( ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE RECOGNIZER",
"sec_num": "2.3"
},
{
"text": "In this section we will sketch the ideas behind the fast algorithm. The proof that the rec-ognizer is fast uses a pebble game, described in ( Gibbons and Rytter, 1988), and critically depends on the fact that the 'minimal compo sition trees' are linear in size (with respect to the length of the string to be recognized). In stead of determining U directly we will com pute its extension U, on which the fast par allel recognizer is based. Finally we will de scribe the recognizer for a general CFG. The algorithms is based on the fast parallel Gib bons and Rytter recognizer for CFG 's in CNF ( Gibbons and Rytter, 1988).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE FA ST PARALLEL RECOGNIZER",
"sec_num": "3"
},
{
"text": "showing why x E U. Sometimes an item x can be justified in more than -one way. We will consider justifications one at a time . . A complete justification of an item x in U will be called a composition for x; such a com position can be represented by a composition tree T x . The nodes in T x are labelled with the items mentioned in the antecedents of the rules of definition 2.2.2 that are applied; the root is labelled x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMPOSITION TREES Definition 2.2.2 offers a way of justifying the presence of an item x in U. A justification is a sequence of rules corresponding to a proof",
"sec_num": "3.1"
},
{
"text": "Example 3.1.1 Suppose w is the result of an inclusion of x, x is the result of a con catenation of y and z, and y and z are base items. The composition tree T w for w is as given below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMPOSITION TREES Definition 2.2.2 offers a way of justifying the presence of an item x in U. A justification is a sequence of rules corresponding to a proof",
"sec_num": "3.1"
},
{
"text": "T w : w ! X \ufffd y z",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COMPOSITION TREES Definition 2.2.2 offers a way of justifying the presence of an item x in U. A justification is a sequence of rules corresponding to a proof",
"sec_num": "3.1"
},
{
"text": "We will speed up the slow parallel algorithm that computes relation U to a fast parallel algorithm computing U by using a relation denoted by U (given in section 3.4 ). The presence of each item x in U can be justified by means of a composition tree T x . In T x all I As an immediate consequence we have that each subtree of T x is a composition tree too. We will represent T x ( or to be more exact: the existence of T x ) as in the figure below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INFORMAL DESCRIPTION",
"sec_num": "3.2"
},
{
"text": "\u2022 Suppose T y exists. Thus we assume y E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A-+ a\u2022B\u2022 B-+ A\u2022c\u2022c B-+ A\u2022cc \u2022 B -+ Ac\u2022c\u2022 B-+ A\u2022c\u2022c B-+ Ac\u2022c\u2022",
"sec_num": null
},
{
"text": "\u2022u.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A-+ a\u2022B\u2022 B-+ A\u2022c\u2022c B-+ A\u2022cc \u2022 B -+ Ac\u2022c\u2022 B-+ A\u2022c\u2022c B-+ Ac\u2022c\u2022",
"sec_num": null
},
{
"text": "Let us see what the consequences of this assumption are. Suppose we can derive T x for item x from T y :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A-+ a\u2022B\u2022 B-+ A\u2022c\u2022c B-+ A\u2022cc \u2022 B -+ Ac\u2022c\u2022 B-+ A\u2022c\u2022c B-+ Ac\u2022c\u2022",
"sec_num": null
},
{
"text": "Assume we don't no\ufffd wet her or not y ac tually is in U. Instead . of saying that we have determined T x , we say that we have deter mined T x except for the part T y : we have the partial composition tree T x +y ( or better: its existence) represented as given below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tx&",
"sec_num": null
},
{
"text": "Tx-\ufffd Note that T x +y might exist whilst T y does not (because y (j_ U). By using these partial composition trees, we draw concl usions from facts yet to be established. This makes the algorithm for the recognizer fast; the proof of this is based on Rytter's pebble game ( Gib bons and Rytter, 1988) .",
"cite_spans": [
{
"start": 271,
"end": 299,
"text": "( Gib bons and Rytter, 1988)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tx&",
"sec_num": null
},
{
"text": "For each base item x (in U), we can assert the existence of a composition tree T x :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tx&",
"sec_num": null
},
{
"text": "Suppose x can be obtained from y by means of an inclusion operation. In that case we can assert the partial composition tree T x +y : Now suppose that x can be obtained from y and z by means of a concatenation opera tion and assume that T y \u2022 exists ( the case that T z exists, is handled analogously). In that case we can assert the partial composition tree T x +z :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T\ufffd",
"sec_num": null
},
{
"text": "The rules for the inclusion and concate nation operations are called activation rules ( the names of all rules are borrowed from the pebble game).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T\ufffd",
"sec_num": null
},
{
"text": "The square rule ( a misnomer) merges two partial composition trees T x +y and T y +z to obtain the partial composition tree T x+-z :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T\ufffd",
"sec_num": null
},
{
"text": "Ty-\ufffd ---------________ J",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "T\ufffd",
"sec_num": null
},
{
"text": "The final rule is the pebble rule, which merges a partial composition tree T x +y and a composition tree T y to obtain the composi tion tree T x : ---------------..... ",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 167,
"text": "---------------.....",
"ref_id": null
}
],
"eq_spans": [],
"section": "T\ufffd",
"sec_num": null
},
{
"text": "When we would define a composition tree for if in the same way as we did for U, we would find that for an arbi trary U composition tree T x there exists a reason ably well balanced U-composition tree T x , w hi eh also asserts that the i tern x is in U. It can be shown that if the activation rule, the square rule, and the pebble rule are iterated O(log n) times, we have found the existence of at least one composition tree T x for every x in U ( and for only those). Therefore we can say that we can compute U in O(log n) time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ". J",
"sec_num": null
},
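The effect of iterating the square and pebble rules can be illustrated abstractly. The chain of implications below is a hypothetical stand-in for the partial composition trees T_{x←y} (in the real recognizer they come from the activation rules); the point is that the number of synchronous rounds grows logarithmically in the chain length rather than linearly.

```python
# Toy illustration of the square and pebble rules of the pebble game.
# Facts are established items; a pair (x, y) stands for a partial
# composition tree T_{x<-y}: x is established once y is. The chain of
# dependencies below is a hypothetical input, not data from the paper.

def rounds_to_pebble(base, implications, target):
    """Iterate square + pebble in synchronous rounds; count the rounds."""
    facts = set(base)
    imps = set(implications)
    rounds = 0
    while target not in facts:
        rounds += 1
        # Square rule: T_{x<-y} and T_{y<-z} merge into T_{x<-z}.
        by_src = {}
        for x, y in imps:
            by_src.setdefault(x, set()).add(y)
        imps |= {(x, z) for x, ys in by_src.items()
                 for y in ys for z in by_src.get(y, ())}
        # Pebble rule: T_{x<-y} plus an established y establishes x.
        facts |= {x for x, y in imps if y in facts}
    return rounds

m = 32
chain = [(k, k - 1) for k in range(1, m + 1)]   # x_k depends on x_{k-1}
print(rounds_to_pebble({0}, chain, m))          # O(log m) rounds, not m
```

Because squaring doubles the length an implication can span, the reachable prefix of facts roughly doubles every round, which is the source of the O(log n) bound claimed above.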
{
"text": "As a notational shortcut we will speak of an item x in Uij , by which we mean that x EU and that x is of the form ( i, j, A -+ a\ufffd /3 \u2022, ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE MINIMAL COMPOSITION SIZE",
"sec_num": "3.3"
},
{
"text": "The composition size will be defined as the number of operations in the composition tree. We call a composition tree minimal iff its composition size is minimal. In this section we will argue why the minimal composition size for item x in Uij is linear in ji + 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE MINIMAL COMPOSITION SIZE",
"sec_num": "3.3"
},
{
"text": "There are two cases to consider:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE MINIMAL COMPOSITION SIZE",
"sec_num": "3.3"
},
{
"text": "\u2022 A composition tree which has an item appearing twice as a label on a path ( such a tree is called a 'cyclic ' 1 composi tion tree) is not mi nimal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE MINIMAL COMPOSITION SIZE",
"sec_num": "3.3"
},
{
"text": "\u2022 An 'acyclic' composition tree has a li\u00b5ear composition si ze.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE MINIMAL COMPOSITION SIZE",
"sec_num": "3.3"
},
{
"text": "1 A misnomer on our part.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE MINIMAL COMPOSITION SIZE",
"sec_num": "3.3"
},
{
"text": "Assume that for item x in Uij we have found a cyclic composition tree T x , So on a certain path in T x we must have a certain item y in U pq that appears twice as a label ( the non trivial path between those nodes is called a 'cycle'):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "132",
"sec_num": null
},
{
"text": "U pq",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "132",
"sec_num": null
},
{
"text": "It is clear that when the part between the up per y and the lower y is removed from T x , the number of operations in Tx ' is less than the number in T x , So after removing a cycle, we allways get a smaller composition tree. Thus the minimal composition tree is a member of the set of the acyclic composition trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cycle removed",
"sec_num": null
},
{
"text": "We will now argue that any acyclic compo sition tree has a composition size bounded by a function linear in the length of the string to be recognized. Since we don't need a tight upper bound, we will not use an actual\u2022 com position. Instead, we will assume that in ev ery step on our way the worst case occurs. This may lead to a 'case' that is worse than the actual worst case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cycle removed",
"sec_num": null
},
{
"text": "We will assume that every internal node is the result of a concatenation. Suppose x is the result of an inclusion of y: in that case T x contains one more operation than T y , But when x is the result of a concatenation of y and z, then T x contains one more operation than T y and T z together. Thus a concate nation can only lead to more ( and never to fewer) operations than an inclusion. We will assume that the compositions are acyclic. The next case is an item x in V j-l,j, see figure 2(b ). We know that there must exist a path from x to a base item y in Vj-l,j\u2022 All nodes on that path are in V j -l,j and the path is bounded in length by 0(M). Any internal node on that path has one son in Vj -1,j and one son in either Vj -1,j -1 or V jj (if the node corresponds to an inclusion, this last son does not exist). Here too, it can be shown that only 0(1) operations are possible for item x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cycle removed",
"sec_num": null
},
{
"text": "The last case will be item x in Vij with i + 1 < j, see figure 2( c ). This is essentially like the previous case, but y is not a base item anymore. In this case y is the result of a concatenation of an item in Vik and an item in U kj with i < k < j. So instead we get 0(1) operations plus the number of op erations needed for the item in Vik and the item in V k j \u2022 These considerations lead to a difference equation, the solution of which shows that the number of operations for x is 0( n ), see ( de Vreught and Honig, 1990a) .",
"cite_spans": [
{
"start": 498,
"end": 528,
"text": "( de Vreught and Honig, 1990a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cycle removed",
"sec_num": null
},
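The difference-equation bound can be checked numerically. The recurrence below is a sketch of the counting argument: an item spanning m positions costs a constant plus the costs of two sub-items whose spans add up to m. The constant C is an assumed per-step overhead for illustration, not a value from the paper.

```python
# Worst case of the recurrence ops(m) = C + max_k (ops(k) + ops(m - k)):
# the solution is (2m - 1) * C, i.e. linear in the span length m.
from functools import lru_cache

C = 3  # assumed constant cost per concatenation/inclusion step

@lru_cache(maxsize=None)
def ops(m):
    """Worst-case operation count for an item spanning m positions."""
    if m <= 1:
        return C
    return C + max(ops(k) + ops(m - k) for k in range(1, m))

print([ops(m) for m in (1, 2, 4, 8, 16)])   # [3, 9, 21, 45, 93] = (2m-1)*C
```

The maximum over the split point k is constant (any split gives (2m-2)C for the two halves), which is why the bound stays linear rather than exponential.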
{
"text": "Definition 3.4.1 The relation U over :I U :1 2 is defined as follows :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE EXTENDED RELATION",
"sec_num": "3.4"
},
{
"text": "\u2022 If A \ufffd A E P then (j, j, A \ufffd ) E U for any j E { 0, ... , n}. This rule is used for the initialization.",
"cite_spans": [
{
"start": 47,
"end": 63,
"text": "E { 0, ... , n}.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "THE EXTENDED RELATION",
"sec_num": "3.4"
}
],
"back_matter": [
{
"text": "\u2022 If A \ufffd aan E P then (j -l,j,A \ufffd a\u2022 a r 1 ) E U for any j E { 1, ... , n}. This rule is used for the initialization.\u2022 If A \ufffd aB, E P and B \ufffd /3 E P thenThis rule is called the activation rule for the inclusion operation.\u2022 If ( i, k , A \ufffd a \u2022 /31 \u2022 /32 1 ) E U then ( i, j, A \ufffd a\u2022 /31/32 \u2022, )f--( k, j, A \ufffd a/31 \u2022/32 \u2022, ) EU with k \ufffd j \ufffd n.This rule is called an activation rule for the concatenation operation.\u2022 If (k,j, A \ufffd a/31 \u2022/32 \u2022, ) E U then (i,j, A \ufffd a\u2022/31 /32 \u2022, )f--(i, k, A \ufffdThis rule is called an activation rule for the concatenation operation .\u2022 If x f--y E fJ and y f-This rule is called the square rule.\u2022 If x f--y E fJ and y E fJ then x E U.This rule is called the pebble rule.\u2022 Nothing is in fJ except those elements which must be in U by applying the pre ceding rules finitely often.It can be shown that U = U n :I.",
"cite_spans": [
{
"start": 59,
"end": 75,
"text": "E { 1, ... , n}.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "We will present the fast parallel recognizer ( de Vreught and Honig, 1990a).Recognizer( n ) : iJ := 0 for all i 1 , i 2 such that O \ufffd i 1 \ufffd i 2 \ufffd n in parallel do Initialization( i 1 ) Activatelnclusion(i 1 , i 2) while U still changes do for all i 1 , ... , i6 such that 0 \ufffd i 1 \ufffd ... \ufffd i6 \ufffd n in parallel do ActivateConcatenation( i1 , ... , i 3) Square( i 1 , ... , i6) Square( i 1 , ... , i6) Pebble( i 1 , ... , i4) return Test( n) lnitialization(j): It can be shown ( de Vreught and Honig, 1990a) that the algorithm will compute the relation U on a CRCW-PRAM with p(n) = 0(n 6 ) processors in T(n) = O(log n) time using S( n) = 0( n 4 ) space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE RECOGNIZER",
"sec_num": "3.5"
},
{
"text": "The slow parallel recognizer is based on a rel atively simple idea. In spite of several sim ilarities, it is not a variant of the ' Cocke Younger-Kasami (CYK) algorithm or the Earley algorithm (Aho and Ullman, 1972; Harrison, 1978; Earley, 1970) ; the algebraic definitions specifying. the. algorithms all differ considerably, and therefore these algorithms all enter their 'items' into their respective matrices for different reasons. Just as for the given algorithm, there exist slow paral lel versions of the CYK algorithm and of the Earley algorithm (Nijholt, 1990; Chiang and Fu , 1984) .The topic of fast parallel recognizing and parsing is still young and little research on the subject has been conducted. One of the first publications of a fast parallel recognizer is (Brent and Goldschlager, 1984 ) . Far better known are the results of Gibbons and Rytter. , 1988) . Unfortunately, CNF is undesirable for many purposes. This is why we have developed a new fast parallel rec ognizer that leaves the grammar unchanged. Another recognizer with the same property can be found in (Sikkel and Nijholt, 1991 ).",
"cite_spans": [
{
"start": 193,
"end": 215,
"text": "(Aho and Ullman, 1972;",
"ref_id": null
},
{
"start": 216,
"end": 231,
"text": "Harrison, 1978;",
"ref_id": null
},
{
"start": 232,
"end": 245,
"text": "Earley, 1970)",
"ref_id": null
},
{
"start": 554,
"end": 569,
"text": "(Nijholt, 1990;",
"ref_id": null
},
{
"start": 570,
"end": 591,
"text": "Chiang and Fu , 1984)",
"ref_id": "BIBREF7"
},
{
"start": 777,
"end": 808,
"text": "(Brent and Goldschlager, 1984 )",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 867,
"end": 874,
"text": ", 1988)",
"ref_id": null
}
],
"eq_spans": [],
"section": "FINAL REMARKS",
"sec_num": "4"
},
{
"text": "Although not given in this paper there also exist parallel parsers which can be used in conjunction with the parallel recognizers. For the slow parallel recognizer there exists a slow parallel parser that can do its job with 0( n) processors in 0( n log n) time ( de Vreught and Honig, 1990b ). When the grammar is acyclic, there exists a fast parallel parser run ning with 0( n 6 ) processors in O(log n) time ( de Vreught and Honig, 1990a).Since the subject of fast parallel parsing is so young, there are many open questions, some of which will probably be solved in the near future. For instance, at this moment it is not yet known whether or not fast parsing of general CFG's is possible without trans forming the grammar ( we suspect that it is). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "They have described a fast parallel recognizer and parser for grammars in CNF (Gibbons and Rytter",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "The Theory of Parsing, Tra nslation and Compiling, Vo lume I: Parsing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The Theory of Parsing, Tra nslation and Compiling, Vo lume I: Parsing, Prentice Hall, Englewood Cliffs, NJ.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Theory of Parsing, Tra nslation and Compiling, Vo lume II: Compiling",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The Theory of Parsing, Tra nslation and Compiling, Vo lume II: Compiling, Prentice Hall, Englewood Cliffs, NJ.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Rieks op den",
"authors": [
{
"first": "",
"middle": [],
"last": "Akker",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akker, Rieks op den; Alblas, Henk;",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Parallel Algorithm for Context Free Parsing",
"authors": [
{
"first": "Richard",
"middle": [
"P"
],
"last": "Brent",
"suffix": ""
},
{
"first": "Leslie",
"middle": [
"M"
],
"last": "Goldschlager",
"suffix": ""
}
],
"year": 1984,
"venue": "Austral. Co mput. Sci. Co mm",
"volume": "6",
"issue": "",
"pages": "7--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brent, Richard P. and Goldschlager, Leslie M. 1984 A Parallel Algorithm for Context Free Parsing, Austral. Co mput. Sci. Co mm. 6: 7-1 -7-10.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Par allel Parsing Algorithms and VLSI Imple mentations for Syntactic\u2022 Pattern Recogni",
"authors": [
{
"first": "Y",
"middle": [
"T"
],
"last": "Chiang",
"suffix": ""
},
{
"first": "King",
"middle": [
"S"
],
"last": "Fu",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiang, Y.T. and Fu , King S. 1984 Par allel Parsing Algorithms and VLSI Imple mentations for Syntactic\u2022 Pattern Recogni-",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "When the entries of matrix U are closed with respect to the operations, we look for an item S--+ \u2022a\u2022 E U on where Sis the start symbol of the grammar:",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": ", \ufffd, P, S) be the CFG in question and let x = a 1 \u2022\u2022 , a n the string to be recog nized. Furthermore, let \u2022 (/. V and let A be the empty string. Finally, let .J = {O, ... , n } 2 x { A --+ a\u2022 /3 \u2022, I A --+ a{); E P}. Vreught and Honig, 1989) some vari ants of U are examined; for instance, one of them takes context into account. The dis advantage of definition 2.2.1 is that it is not immediately clear how to determine whether or not an item is in the relation. For this purpose we need an inductive definition. Definition 2.2.2 The relation U' over .J is defined as follows: \u2022 If A --+AEPthen (j, j,A --+ \u2022\u2022 )EU ' for any j E { 0, ... , n}. This item is a base item.\u2022 If A --+ aan E P then (jl ,j,A --+ a\u2022aj \u2022i ) EU ' for any j E { 1, ... ,n } .",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "j + k,j + i) while U i ,i +i still changes do Concateri ator(j, j, j + i) Concatenator(j, j + i, j + i) Includer(j, j + i) i, k, j): for all A--+ 0:\u2022/31 \u2022/32, E uik with I /31 I = 1 do for all A --+ o:/31 \u2022/32 \u2022, n) = 0( n) processors can fill the ma trix in T(n) = 0(n 3 /p(n)) time. The concatenations done in the loop over k in Recognizer can also be done independently of each other. However, in that case the ar chitecture must allow parallel writing in cell U j , j +i\u2022 Thus when mode = parallel, it can be shown that a CRCW-PRAM (Concur rent Read Concurrent Wri te -Parallel RAM) ( Quinn, 1987) with p( n) = 0( n 2 ) processors can fill the matrix in T ( n) = 0( n 3 / p( n)) time. In both cases the space complexity is dominated by the matrix: S( n) = 0( Consider the string aabcc and the CFG G = (V, E, P, A): \u2022 V = {A, B} u \ufffd \u2022 E = {a, b, c} \u2022 P contains the following productions:",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "Matrix U for aabcc nodes are labelled with items in U. The root is labelled x. The other nodes are labelled by the items mentioned in the antecedents of the rules of the inductive definition of U.",
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "We define M = l{A -+ a \u2022/3 \u2022, I A-+ a/3, E P} I ; M is an upper bound for the number of i terns in any U ij . Let us focus on an i tern x in Ujj , see figure 2(a). We know that item x has an acyclic composition , so T x is bounded in height by O(M). Since a completely bal anced tree has the maximum number of op erations, we have an exponential number of V j -1, j y A simplified partial subtree of an acyclic T x operations in M. However, this number is in dependent of n. Thus there exist only 0(1) many operations in such a composition.",
"uris": null
}
}
}
}