{ "paper_id": "C02-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:18:45.960925Z" }, "title": "A Chart-Parsing Algorithm for Efficient Semantic Analysis", "authors": [ { "first": "Pascal", "middle": [], "last": "Vaillant", "suffix": "", "affiliation": {}, "email": "vaillant@tsi.enst.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In some contexts, well-formed natural language cannot be expected as input to information or communication systems. In these contexts, the use of grammar-independent input (sequences of uninflected semantic units like, e.g., language-independent icons) can be an answer to the users' needs. However, this requires that an intelligent system should be able to interpret this input with reasonable accuracy and in reasonable time. Here we propose a method allowing a purely semantic-based analysis of sequences of semantic units. It uses an algorithm inspired by the idea of \"chart parsing\" known in Natural Language Processing, which stores intermediate parsing results in order to bring the calculation time down.", "pdf_parse": { "paper_id": "C02-1023", "_pdf_hash": "", "abstract": [ { "text": "In some contexts, well-formed natural language cannot be expected as input to information or communication systems. In these contexts, the use of grammar-independent input (sequences of uninflected semantic units like, e.g., language-independent icons) can be an answer to the users' needs. However, this requires that an intelligent system should be able to interpret this input with reasonable accuracy and in reasonable time. Here we propose a method allowing a purely semantic-based analysis of sequences of semantic units. 
It uses an algorithm inspired by the idea of \"chart parsing\" known in Natural Language Processing, which stores intermediate parsing results in order to bring the calculation time down.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As the mass of international communication and exchange increases, icons as a means to cross the language barriers have come through in some specific contexts of use, where language independent symbols are needed (e.g. on some machine command buttons). The renewed interest in iconic communication has given rise to important work in the field of Design (Aicher and Krampen, 1996; Dreyfuss, 1984; Ota, 1993) , on reference books on the history and development of the matter (Frutiger, 1991; Liungman, 1995; Sassoon and Gaur, 1997) , as well as newer studies in the fields of Human-Computer Interaction and Digital Media (Yazdani and Barker, 2000) or Semiotics (Vaillant, 1999) .", "cite_spans": [ { "start": 354, "end": 380, "text": "(Aicher and Krampen, 1996;", "ref_id": "BIBREF0" }, { "start": 381, "end": 396, "text": "Dreyfuss, 1984;", "ref_id": "BIBREF2" }, { "start": 397, "end": 407, "text": "Ota, 1993)", "ref_id": null }, { "start": 474, "end": 490, "text": "(Frutiger, 1991;", "ref_id": "BIBREF5" }, { "start": 491, "end": 506, "text": "Liungman, 1995;", "ref_id": null }, { "start": 507, "end": 530, "text": "Sassoon and Gaur, 1997)", "ref_id": null }, { "start": 620, "end": 646, "text": "(Yazdani and Barker, 2000)", "ref_id": "BIBREF14" }, { "start": 660, "end": 676, "text": "(Vaillant, 1999)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "We are here particularly interested in the field of Information Technology. Icons are now used in nearly all possible areas of human-computer interaction, even office software or operating systems. 
However, there are contexts where richer information has to be managed, for instance: Alternative & Augmentative Communication systems designed for the needs of speech or language impaired people, to help them communicate (with icon languages like Minspeak, Bliss, Commun-I-Mage); Second Language Learning systems where learners have a desire to communicate by themselves, but do not master the structures of the target language yet; Cross-Language Information Retrieval systems, with a visual symbolic input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "In these contexts, the use of icons has many advantages: it makes no assumption about the language competences of the users, allowing impaired users, or users from a different linguistic background (which may not include a good command of one of the major languages involved in research on natural language processing), to access the systems; it may trigger a communication-motivated, implicit learning process, which helps the users to gradually improve their level of literacy in the target language. However, icons suffer from a lack of expressive power to convey ideas, namely, the expression of abstract relations between concepts still requires the use of linguistic communication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "An approach to tackle this limitation is to try to \"analyse\" sequences of icons in the way natural language sentences are parsed. However, icons do not give grammatical information as clues to automatic parsers. Hence, we have defined a method to interpret sequences of icons by implementing the use of \"natural\" semantic knowledge. 
This method makes it possible to build knowledge networks from icons as is usually done from text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "The analysis method that will be presented here is logically equivalent to the parsing of a dependency grammar with no locality constraints. Therefore, the complexity of a fully recursive parsing method grows more than exponentially with the length of the input. This makes the reaction time of the system too long to be acceptable in normal use. We have now defined a new parsing algorithm which stores intermediate results in \"charts\", in the way chart parsers (Earley, 1970) do for natural language.", "cite_spans": [ { "start": 463, "end": 477, "text": "(Earley, 1970)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": null }, { "text": "Assigning a signification to a sequence of information items implies building conceptual relations between them. Human linguistic competence consists in manipulating these dependency relations: when we say that the cat drinks the milk, for example, we perceive that there are well-defined conceptual connections between 'cat', 'drink', and 'milk'-that 'cat' and 'milk' play given roles in a given process. Symbolic formalisms in AI (Sowa, 1984) reflect this approach. 
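The dependency relations evoked above can be sketched as data. The snippet below is purely illustrative (it is not the paper's code), and the role names "agent" and "object" are assumptions; it only shows how the conceptual connections of "the cat drinks the milk" can be held as labelled triples.

```python
# Toy sketch (not from the paper): the conceptual relations behind
# "the cat drinks the milk", encoded as (predicate, role) -> filler triples.
# The role labels "agent" and "object" are illustrative assumptions.
dependencies = {
    ("drink", "agent"): "cat",
    ("drink", "object"): "milk",
}

def roles_of(predicate, deps):
    """Collect the role -> filler map of one predicate."""
    return {role: filler for (p, role), filler in deps.items() if p == predicate}

print(roles_of("drink", dependencies))  # {'agent': 'cat', 'object': 'milk'}
```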
Linguistic theories have also been developed specifically to account for these phenomena (Tesni\u00e8re, 1959; Kunze, 1975; Mel'\u010duk, 1988) , and to describe the transition between semantics and various levels of syntactic description: from deep syntactic structures which actually reflect the semantic contents, to the surface structure whereby messages are put into natural language.", "cite_spans": [ { "start": 432, "end": 444, "text": "(Sowa, 1984)", "ref_id": "BIBREF11" }, { "start": 561, "end": 577, "text": "(Tesni\u00e8re, 1959;", "ref_id": "BIBREF12" }, { "start": 578, "end": 590, "text": "Kunze, 1975;", "ref_id": "BIBREF6" }, { "start": 591, "end": 605, "text": "Mel'\u010duk, 1988)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Description of the problem", "sec_num": "1" }, { "text": "Human natural language reflects these conceptual relations in its messages through a series of linguistic clues. These clues, depending on the particular languages, can consist mainly of word ordering in sentence patterns (\"syntactical\" clues, e.g. in English, Chinese, or Creole), of word inflection or suffixation (\"morphological\" clues, e.g. in Russian, Turkish), or of a blend of both (e.g. in German). Parsers are systems designed to analyse natural language input, on the basis of such clues, and to yield a representation of its informational contents. In contexts where icons have to be used to convey complex meanings, the problem is that morphological clues are of course not available, while at the same time we cannot rely on a precise sentence pattern.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Description of the problem", "sec_num": "1" }, { "text": "We would thus have to use a parser based on computing the dependencies, such as some which have been written to cope with variable-word-order languages (Covington, 1990) . 
However, since no morphological clue is available either to tell that an icon is, e.g., accusative or dative, we have to rely on semantic knowledge to guide role assignment. In other words, an icon parser has to know that drinking is something generally done by living beings on liquid objects.", "cite_spans": [ { "start": 153, "end": 170, "text": "(Covington, 1990)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Description of the problem", "sec_num": "1" }, { "text": "The icon parser we propose performs semantic analysis of input sequences of icons by the use of an algorithm based on best-unification: when an icon in the input sequence has a \"predicative\" structure (it may become the head of at least one dependency relation to another node, labeled \"actor\"), the other icons around it are checked for compatibility. Compatibility is measured as a unification score between two sets of feature structures: the intrinsic semantic features of the candidate actor, and the \"extrinsic\" semantic features of the predicative icon attached to a particular semantic role (i.e. the properties \"expected\" from, say, the agent of kiss, the direct object of drink, or the concept qualified by the adjective fierce).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "The result yielded by the semantic parser is the graph that maximizes the sum of the compatibilities of all its dependency relations. It constitutes, with no particular contextual expectations, and given the state of world knowledge stored in the iconic database in the form of semantic features, the \"best\" interpretation of the users' input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "The input is a sequence of icons i_1, i_2, . . . i_N, each of which has a set of intrinsic features: Sem(i_k) (where Sem(i_k) is a set of simple Attribute-Value semantic features, used to represent intrinsic features of the concept, like {<a_1, v_1>, <a_2, v_2>, . . .} for Daddy).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "Some of the symbols also have selectional features, which, if grouped by case type, form a case structure: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "Sel(i) = {<c_1, S_1>, <c_2, S_2>, . . . , <c_n, S_n>}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "where each S_j = Sel(i, c_j) is the set of selectional features attached to the case c_j of icon i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "For example, we can write: Sel(write, agent) = { . . . } The semantic compatibility is the value we seek to maximize to determine the best assignments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "1. At the feature level (compatibility between two features), it is defined so as to \"match\" extrinsic and intrinsic features. This actually includes a somewhat complex definition, taking into account the modelling of conceptual inheritance between semantic features; but for the sake of simplicity in this presentation, we may assume that the semantic compatibility at the semantic feature level is defined as in Eq. 1, which would be the case for a \"flat\" ontology 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "2. At the feature structure level, i.e.
where the semantic contents of icons are defined, semantic compatibility is calculated between two homogeneous sets of Attribute-Value couples: on one side the selectional features attached to a given case slot of the predicate icon (stripped here of the case type), on the other side the intrinsic features of the candidate icon.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "The basic idea here is to define the compatibility as the sum of matchings in the two sets of attribute-value pairs, relative to the number of features being compared against. It should be noted that semantic compatibility is not a symmetric norm: it has to measure how well the candidate actor fills the expectations of a given predicative concept with respect to one of its particular cases. Hence there is a filtering set (S) and a filtered set (Sem(i)), and it is the cardinality of the filtering set which is used as denominator: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\\\sigma(\\\\mathrm{Sem}(i), S) = \\\\frac{1}{|S|} \\\\sum_{\\\\langle a_{2j}, v_{2j} \\\\rangle \\\\in S} \\\\sum_{\\\\langle a_{1i}, v_{1i} \\\\rangle \\\\in \\\\mathrm{Sem}(i)} s(\\\\langle a_{1i}, v_{1i} \\\\rangle, \\\\langle a_{2j}, v_{2j} \\\\rangle)", "eq_num": "(2)" } ], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "(where <a_1i, v_1i> and <a_2j, v_2j> denote the features of Sem(i) and S", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": ", respectively). 
A threshold of acceptability is used to weed out improbable associations without wasting time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "Even with no grammar rules, though, it is necessary to take into account the distance between two icons in the sequence, which makes it more likely that the actor of a given predicate should be just before or just after it, than four icons further, out of its context. Hence we also introduce a \"fading\" function, to weight the virtual semantic compatibility of a candidate actor to a predicate, by its actual distance to the predicate in the sequence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v(p, c_j, i_k) = \\\\Phi(d(p, i_k)) \\\\cdot \\\\sigma(\\\\mathrm{Sem}(i_k), \\\\mathrm{Sel}(p, c_j))", "eq_num": "(3)" } ], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": ": v(p, c_j, i_k)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "is the value of the assignment of candidate icon", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "i_k as filler of the role c_j of predicate p; \u03a6 is the fading function (decreasing from 1 to 0 as the distance d between the two icons increases); and \u03c3(Sem(i_k), Sel(p, c_j)) the (virtual) semantic compatibility of the intrinsic features of i_k to the selectional features of p for the case c_j, with no consideration of distance (as defined in Eq. 
2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "3. Eventually a global assignment of actors (chosen among those present in the context) to the case slots of the predicate, has to be determined. An assignment is a mapping from the set of icons (other than the predicate being considered) to the set of cases of the predicate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "The semantic compatibility of this global assignment is defined as the sum of the values (as defined in Eq. 3) of the individual case-filler allotments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "4. For a sequence of icons containing more than one predicative symbol, the calculation of the assignments is done for each of them. A global interpretation of the sequence is a set of assignments for every predicate in the sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The semantic analysis method", "sec_num": "2" }, { "text": "In earlier work, this principle was implemented by a recursive algorithm (purely declarative PROLOG). Now, for a sequence of N concepts, and supposing we have on average v (valency) roles to fill for every predicate, let us evaluate the time we need to compute the possible interpretations of the sequence, when we are in the worst case, i.e. the icons are all predicates. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity of a recursive algorithm", "sec_num": "3" }, { "text": "s(<a_1, v_1>, <a_2, v_2>) = 0 if a_1 \u2260 a_2 ; s(<a, v_1>, <a, v_2>) = 1 if v_1 = v_2 (Eq. 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity of a recursive algorithm", "sec_num": "3" }, { "text": "\u03a3_{x=1..N} (N-1)^{vx}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity of a recursive algorithm", "sec_num": "3" }, { "text": "Lastly, the final scoring of every interpretation involves summing the scores of the assignments, which takes up N elementary (binary) sums. This sum is computed every time an interpretation is set, i.e. every time the system reaches a leaf of the choice tree, i.e. every time an assignment for the N-th icon is reached, that is (N-1)^{vN} times. So, there is an additional computing time which also is a function of N, namely, expressed in number of elementary sums:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity of a recursive algorithm", "sec_num": "3" }, { "text": "N (N-1)^{vN}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity of a recursive algorithm", "sec_num": "3" }, { "text": "Hence, if we label r the ratio of the computing time used to compute the score of a role/filler allotment to the computing time of an elementary binary sum 2 , the number of elementary operations involved in computing the scores of the interpretations of the whole sequence is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity of a recursive algorithm", "sec_num": "3" }, { "text": "t_1 = N (N-1)^{vN} + r \u03a3_{x=1..N} v (N-1)^{vx} (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity of a recursive algorithm", "sec_num": "3" }, { "text": "To avoid this major impediment, we define a new algorithm which stores the results of the low-level operations uselessly recomputed at every backtrack: (a) the score of every role/filler couple is stored in a compatibility table; (b) the score of every assignment in an assignments table; (c) the score of every interpretation in an interpretations table. With this system, at level (b) (calculation of the values of 
assignments), the values of the role/filler couples are re-used from the compatibility table, and are not recomputed many times. In the same way, at level (c), the computation of the interpretations' values by adding the assignments' values does not recompute the assignments' values at every step, but simply uses the values stored in the assignments table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The chart algorithm", "sec_num": "4" }, { "text": "Furthermore, the system has been improved for the cases where only partial modifications are done to the graph, e.g. when the users want to perform an incremental generation, by generating the graph again at every new icon added to the end of the sequence; or when they want to delete one of the icons of the sequence only, optionally replacing it with another one. In these cases, a great part of the information remains unchanged. To take this property into account, the system stores the current sequence and the charts resulting from the parse in memory, allowing them to be only partially replaced afterwards.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The chart algorithm", "sec_num": "4" }, { "text": "Finally, we have implemented three basic interface functions to be performed by the parser. The first one implements a full parse, the second partially re-parses a sequence where new icons have been added, the third partially re-parses a sequence where icons have been removed. The three functions can be described as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The chart algorithm", "sec_num": "4" }, { "text": "1. Spot the icons in the new sequence which are potential predicates (which have a valency frame). 2. For every predicate and every role, compute the compatibility score of every other icon in the sequence and store it in compatibility_table. 3. Go through the sequence and identify the set of possible assignments for each predicate. For every assignment, compute its score using the values stored in compatibility_table, and multiplying by the fading coefficients \u03a6(1), \u03a6(2), . . . 
Store the values found in assignments_table (Tab. 1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parsing from scratch:", "sec_num": null }, { "text": "tion (1 interpretation is 1 sequence of assignments). Store them along with their values in interpretations_table.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculate the list of all the possible interpreta-", "sec_num": "4." }, { "text": "Add a list of icons to the currently stored sequence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculate the list of all the possible interpreta-", "sec_num": "4." }, { "text": "1. Add the icons of list of icons to the currently stored sequence. 2. Compute the entries of compatibility_table and assignments_table involving the new icons, and add them to the tables. 3. Recompute the table of interpretations. Remove a list of icons from the currently stored sequence:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculate the list of all the possible interpreta-", "sec_num": "4." }, { "text": "1. Remove the icons of list of icons from the sequence stored in memory.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculate the list of all the possible interpreta-", "sec_num": "4." }, { "text": "2. Remove the entries of compatibility_table or assignments_table involving at least one of the icons of list of icons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculate the list of all the possible interpreta-", "sec_num": "4." }, { "text": "3. Recompute the table of interpretations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Calculate the list of all the possible interpreta-", "sec_num": "4."
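The table bookkeeping described in this section can be sketched as follows. This is an illustrative reconstruction, not the original declarative PROLOG implementation: the feature representation (plain sets), the fading law, the absence of a threshold, and all names (parse, sigma, fading) are simplifying assumptions.

```python
# Illustrative reconstruction of the chart-style tables (not the paper's code).
# Icons carry intrinsic feature sets; predicates carry selectional feature
# sets per role. Fading law and feature representation are assumptions.
from itertools import product

def sigma(sem, sel):
    """Eq. 2-style compatibility: matches in ratio to the filtering set |sel|."""
    if not sel:
        return 0.0
    return sum(1.0 for f in sel if f in sem) / len(sel)

def fading(d):
    """Assumed fading function: 1 at distance 0, decreasing toward 0."""
    return 1.0 / (1.0 + d)

def parse(icons, sems, frames):
    """icons: list of icon names; sems: name -> intrinsic feature set;
    frames: predicate name -> {role: selectional feature set}.
    Returns the best role assignment for every predicate."""
    # (a) compatibility table: (predicate, role, candidate) -> faded score
    compat = {}
    for i, p in enumerate(icons):
        for role, sel in frames.get(p, {}).items():
            for j, c in enumerate(icons):
                if i != j:
                    compat[(p, role, c)] = fading(abs(i - j)) * sigma(sems[c], sel)
    # (b) assignments table: predicate -> [(role->filler map, summed score)]
    assigns = {}
    for p in frames:
        roles = list(frames[p])
        candidates = [c for c in icons if c != p]
        assigns[p] = [
            (dict(zip(roles, combo)),
             sum(compat[(p, r, c)] for r, c in zip(roles, combo)))
            for combo in product(candidates, repeat=len(roles))
        ]
    # (c) interpretations: one stored assignment per predicate, scores summed
    preds = list(assigns)
    best = max(
        (dict(zip(preds, choice)) for choice in product(*(assigns[p] for p in preds))),
        key=lambda interp: sum(score for (_, score) in interp.values()),
    )
    return {p: a for p, (a, _) in best.items()}
```

At levels (b) and (c) only stored scores are added up; no role/filler compatibility is ever recomputed, which is the point of the chart-style tables.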
}, { "text": "First, let us evaluate the complexity of the algorithm presented in Section 4 assuming that only the first interface function is used (parsing from scratch every time a new icon is added to the sequence).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity of the chart algorithm", "sec_num": "5" }, { "text": "In the worst case: the icons are all predicates; no possible role/filler allotment in the whole sequence is below the threshold of acceptability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity of the chart algorithm", "sec_num": "5" }, { "text": "For every predicate, every combination between one single role and one single other icon in the sequence is evaluated: these N \u00b7 v \u00b7 (N-1) role/filler scores fill the compatibility table, from which the assignments table is then computed. After the assignments table has been filled, its values are used to compute the score of the possible interpretations of the sentence. The computation of the score of every single interpretation is simply a sum of scores of assignments: since there possibly are N predicates, there might be up to N figures to sum to compute the score of an interpretation. An interpretation is an element of the cartesian product of the sets of all possible assignments for every predicate. Since every one of these sets has (N-1)^v elements, there is a total number of ((N-1)^v)^N = (N-1)^{vN} interpretations to compute. As each computation might involve N elementary sums (there are N figures to sum up), we may conclude that the time to fill the interpretations table is in a relation to N which may be written: N (N-1)^{vN} . In the end, the calculation time is not the product, but the sum, of the times used to fill each of the tables. 
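The worst-case counts used in this section can be checked numerically. The sketch below only restates the combinatorics above (every icon a predicate with v roles, every other icon an acceptable filler); the concrete values N = 5, v = 2 are illustrative assumptions.

```python
# Worst-case counts for the chart algorithm's tables, assuming all N icons
# are predicates with v roles each and every other icon may fill any role.
def table_sizes(N, v):
    role_filler_scores = N * v * (N - 1)      # compatibility table entries
    assignments_per_predicate = (N - 1) ** v  # one filler choice per role
    interpretations = assignments_per_predicate ** N  # = (N-1)**(v*N)
    return role_filler_scores, assignments_per_predicate, interpretations

print(table_sizes(5, 2))  # (40, 16, 1048576)
```

The first two tables grow polynomially in N; only the interpretations table keeps the hyperexponential factor.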
So, if we label r and r\u2032 two constants, representing, respectively, the ratio of the computing time used to get the score of an elementary role/filler allotment to the computing time of an elementary binary addition, and the ratio of the computing time used to get the score of an assignment from the scores of the role/filler allotments (adding them up, multiplied by values of the \u03a6 function), to the computing time of an elementary binary addition, the total computing time for calculating the scores of all possible interpretations of the sentence is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complexity of the chart algorithm", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t_2 = r\\\\,N\\\\,v\\\\,(N-1) + r'\\\\,N\\\\,(N-1)^{v} + N\\\\,(N-1)^{vN}", "eq_num": "(6)" } ], "section": "Complexity of the chart algorithm", "sec_num": "5" }, { "text": "We have presented a new algorithm for a completely semantic parse of a sequence of symbols in a graph-based formalism. The new algorithm has a temporal complexity like in Eq. 6, to be compared to the complexity of a purely recursive algorithm, like in Eq. 5. In the worst case, the second function is still dominated by a function which grows hyperexponentially in relation to N: the number of possible interpretations multiplied by the time used to sum up the score of an interpretation 3 . In practice, the values of the parameters r and r\u2032 are fairly large, so this member is still small during the first steps, but it grows very quickly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "As for the other member of the function, it is hyperexponential in the case of Eq. 5, whereas it is of order N (N-1)^{v}, i.e. it is O(N^{v+1}), in the case of Eq. 
6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Practically, to make the semantic parsing algorithm acceptable, the problem of the hyperexponential growth of the number of interpretations has to be eliminated at some point. In the system we have implemented, a threshold mechanism allows the system to reject, for every predicate, the unlikely assignments. In practice this leaves only a small maximum number of assignments in the assignments table for every predicate (typically 3). This means that the number of interpretations is no longer of the order of (N-1)^{vN}, but \"only\" of 3^N: it becomes \"simply\" exponential. This implementation mechanism makes the practical computing time acceptable when running on an average computer for input sequences of no more than approximately 15 symbols.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "In order to give a comprehensive solution to the problem, future developments will try to develop heuristics to find the best solutions without having to compute the whole list of all possible interpretations and sort it by decreasing value of semantic compatibility. 
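The effect of the threshold mechanism described above can be made concrete with a small count (illustrative only; the choice v = 2 and the cap of 3 kept assignments per predicate follow the "typically 3" remark, but are assumptions for the example):

```python
# Effect of the per-predicate threshold: keeping at most `kept` assignments
# per predicate drops the interpretation count from (N-1)**(v*N) to kept**N.
def interpretation_counts(N, v, kept=3):
    unpruned = (N - 1) ** (v * N)
    pruned = kept ** N
    return unpruned, pruned

unpruned, pruned = interpretation_counts(15, 2)
print(f"unpruned: {unpruned:.3e}, pruned: {pruned}")
```

For the 15-symbol sequences mentioned above, the pruned count (3^15 = 14,348,907) is what makes the computation tractable on an average machine.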
For example, by trying to explore the search space (of all possible interpreta- 3 Namely,", "cite_spans": [ { "start": 352, "end": 353, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "N \u00b7 (N-1)^{vN}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "tions) from maximum values of the assignments, it may be possible to generate only the 10 or 20 best interpretations without having to score all of them to start with.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The difference in computing time may be neglected in the following reasoning, since the actual formula taking into account inheritance involves a maximum number of computing steps depending on the depth of the semantic features ontology, which does not vary during the processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Zeichensysteme der visuellen Kommunikation", "authors": [ { "first": "Otl", "middle": [], "last": "Aicher", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Krampen", "suffix": "" } ], "year": 1996, "venue": "Ernst & Sohn", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Otl Aicher and Martin Krampen. 1996. Zeichensysteme der visuellen Kommunikation. 
Ernst & Sohn, Berlin (F.R.G.), second edition.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A dependency parser for variable-word-order languages", "authors": [ { "first": "Michael", "middle": [], "last": "Covington", "suffix": "" } ], "year": 1990, "venue": "Retrieved October 1999 from the URL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Covington. 1990. A dependency parser for variable-word-order languages. Research Report AI-1990-01, University of Georgia, Artificial Intelligence Programs, Athens, Georgia (U.S.A.). Retrieved October 1999 from the URL: http://www.ai.uga.edu/~mc/ai199001.ps.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Symbol Sourcebook. Van Nostrand Reinhold", "authors": [ { "first": "Henry", "middle": [], "last": "Dreyfuss", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henry Dreyfuss. 1984. Symbol Sourcebook. Van Nostrand Reinhold, New York (U.S.A.), second edition.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An efficient context-free parsing algorithm", "authors": [ { "first": "Jay", "middle": [], "last": "Earley", "suffix": "" } ], "year": 1970, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jay Earley. 1970. An efficient context-free parsing algorithm. In Karen Sparck-Jones, Barbara J.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Readings in Natural Language Processing", "authors": [ { "first": "Bonnie", "middle": [ "Lynn" ], "last": "Grosz", "suffix": "" }, { "first": "", "middle": [], "last": "Webber", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "25--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grosz, and Bonnie Lynn Webber, editors, Readings in Natural Language Processing, pages 25-33. 
Morgan Kaufmann, Los Altos, California (U.S.A.).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Der Mensch und seine Zeichen", "authors": [ { "first": "Adrian", "middle": [], "last": "Frutiger", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adrian Frutiger. 1991. Der Mensch und seine Zeichen. Fourier, Wiesbaden (F.R.G.).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Abh\u00e4ngigkeitsgrammatik. Studia Grammatica XII", "authors": [ { "first": "J\u00fcrgen", "middle": [], "last": "Kunze", "suffix": "" } ], "year": 1975, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00fcrgen Kunze. 1975. Abh\u00e4ngigkeitsgrammatik. Studia Grammatica XII. Akademie-Verlag, Berlin (G.D.R.).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Dependency syntax: theory and practice. SUNY series in linguistics", "authors": [ { "first": "Igor", "middle": [], "last": "", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Igor' Aleksandrovi\u010d Mel'\u010duk. 1988. Dependency syntax: theory and practice. SUNY series in linguistics. State University of New York Press, Albany, New York (U.S.A.).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Conceptual structures: information processing in mind and machine", "authors": [ { "first": "John", "middle": [], "last": "Sowa", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Sowa. 1984. Conceptual structures: information processing in mind and machine. 
Addison-Wesley, New York (U.S.A.).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "\u00c9l\u00e9ments de syntaxe structurale", "authors": [ { "first": "Lucien", "middle": [], "last": "Tesni\u00e8re", "suffix": "" } ], "year": 1959, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lucien Tesni\u00e8re. 1959. \u00c9l\u00e9ments de syntaxe structurale. Klincksieck, Paris (France). Republished 1988.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "S\u00e9miotique des langages d'ic\u00f4nes. Slatkine", "authors": [ { "first": "Pascal", "middle": [], "last": "Vaillant", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pascal Vaillant. 1999. S\u00e9miotique des langages d'ic\u00f4nes. Slatkine, Geneva (Switzerland).", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Iconic Communication", "authors": [ { "first": "Masoud", "middle": [], "last": "Yazdani", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Barker", "suffix": "" } ], "year": 2000, "venue": "Intellect", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masoud Yazdani and Philip Barker. 2000. Iconic Communication. Intellect, Bristol, England (U.K.).", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "of the semantic compatibility at the feature structure level, defined in Eq. 2, roughly involves computations of the semantic compatibility at the feature level, defined in Eq. 1 (the number depending on the average number of selectional features for a given role on a given predicate, and on the average number of intrinsic features of the entries in the semantic lexicon), which itself involves a sequence of elementary operations (comparisons, floating-point multiplications). It does not depend on the number of icons in the sequence. a. 
The low-level role/filler compatibility values, in a chart called 'compatibility_table'. The values stored here correspond to the values defined in Eq. 2. b. The value of every assignment, in 'assignments_table'. The values stored here correspond to assignments of multiple case slots of a predicate, as defined in point 3 of Section 2; they are the sum of the values stored at level (a), multiplied by a fading function of the distance between the icons involved. c. The value of the interpretations of the sentence, in 'interpretations_table'. The values stored here correspond to global interpretations of the sentence, as defined in point 4 of Section 2.", "uris": null, "num": null }, "TABREF1": { "type_str": "table", "content": "
where each C_i is a case type such as agent, object, goal..., and each S_i is a set of simple Attribute-Value semantic features. Every couple (C_i, S_i) present in the case structure means that S_i is a set of Attribute-Value couples which are attached to the predicate as selectional features for the case C_i.", "text": "Attribute-Value semantic features are used to determine what features are expected from a given case filler (e.g. a feature that the agent of the verb write should possess).", "num": null, "html": null } } } }