{ "paper_id": "C96-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:50:57.904029Z" }, "title": "Parsing spoken language without syntax", "authors": [ { "first": "Jean-Yves", "middle": [], "last": "Antoine", "suffix": "", "affiliation": { "laboratory": "", "institution": "CLIPS-IMAG", "location": { "addrLine": "BP 53 --F-38040 GRENOBLE Cedex 9", "country": "FRANCE" } }, "email": "antoine@imag.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Parsing spontaneous speech is a difficult task because of the ungrammatical nature of most spoken utterances. To overpass this problem, we propose in this paper to handle the spoken language without considering syntax. We describe thus a microsemantic parser which is uniquely based on an associative network of semantic priming. Experimental results on spontaneous speech show that this parser stands for a robust alternative to standard ones.", "pdf_parse": { "paper_id": "C96-1010", "_pdf_hash": "", "abstract": [ { "text": "Parsing spontaneous speech is a difficult task because of the ungrammatical nature of most spoken utterances. To overpass this problem, we propose in this paper to handle the spoken language without considering syntax. We describe thus a microsemantic parser which is uniquely based on an associative network of semantic priming. Experimental results on spontaneous speech show that this parser stands for a robust alternative to standard ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The need of a robust parsing of spontaneous speech is a more and more essential as spoken human -machine communication meets a really impressive development. Now, the extreme structural variability of the spoken language balks seriously the attainment of such an objective. Because of its dynamic and uncontrolled nature, spontaneous speech presents indeed a high rate of ungrammatical constructions (hesitations, repetitious, a.s.o...). As a result, spontaneous speech catch rapidly out most syntactic parsers, in spite of the frequent addition of some ad hoc corrective methods [Seneff 92 ]. Most speech systems exclude therefore a complete syntactic parsing of the sentence. They on the contrary restrict the analysis to a simple keywords extraction [Appelt 92 ]. This selective approach led to significant results in some restricted applications (ATIS...). It seems however unlikely that it is appropriate for higher level tasks, which involve a more complex communication between the user and the computer. Thus, neither the syntactic methods nor the selective approaches can fully satisfy the constraints of robustness and of exhaustivity spoken human-machine communication needs. This paper presents a detailed semantic parser which masters most spoken utterances. In a first part, we describe the semantic knowledge our parser relies on. We then detail its implementation. Experimental results, which suggest the suitability of this model, are finally provided.", "cite_spans": [ { "start": 580, "end": 590, "text": "[Seneff 92", "ref_id": null }, { "start": 753, "end": 763, "text": "[Appelt 92", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "i. Introduction", "sec_num": null }, { "text": "Most syntactic formalisms (LFG [Bresnan 82], HPSG ]Pollard 87], TAG [Joshi 87]) give a major importance to subcategorization, which accounts for the grammatical dependencies inside the sentence. 
We consider, on the contrary, that subcategorization issues from lexical semantic knowledge, which we will hereafter call microsemantics [Rastier 94]. Our parser thus aims at building a microsemantic structure (figure 1) which fully describes the meaning dependencies inside the sentence. The corresponding relations are labeled by several microsemantic cases (Table 1) which are only intended to cover the system's application field (computer-aided drawing).", "cite_spans": [ { "start": 68, "end": 79, "text": "[Joshi 87])", "ref_id": null }, { "start": 322, "end": 333, "text": "[Rastier 94", "ref_id": null } ], "ref_spans": [ { "start": 546, "end": 555, "text": "(Table 1)", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Microsemantics", "sec_num": "2." }, { "text": "The microsemantic parser achieves a fully lexicalized analysis. It relies on a microsemantic lexicon in which every entry represents a particular lexeme (1). Each lexeme is described by the following feature structure: PRED lexeme identifier, MORPH morphological realizations, SEM semantic domain, SUBCAT subcategorization frame. (1) Lexeme = lexical unit of meaning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Microsemantics", "sec_num": "2." }, { "text": "Pred = 'to draw', Morph = {'draw', 'draws', 'drew', 'drawn'}. The microsemantic subcategorization frames describe the meaning dependencies the lexeme dominates. Their arguments are not ordered. The optional arguments are given in brackets, as opposed to the compulsory ones. Finally, adverbial phrases are not subcategorized.", "cite_spans": [ { "start": 18, "end": 59, "text": "Morph = {' draw',' draws',' drew',' drawn", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Example: to draw", "sec_num": null }, { "text": "Any speech recognition system involves a high perplexity, which requires the definition of top-down parsing constraints. This is why we based the microsemantic parsing on a priming process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic Priming", "sec_num": "3." }, { "text": "Semantic priming is a predictive process in which some already uttered words (priming words) call some other words (primed words) through various meaning associations. It serves a double goal:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming process", "sec_num": "3.1." }, { "text": "\u2022 It constrains the speech recognition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming process", "sec_num": "3.1." }, { "text": "\u2022 It characterizes the meaning dependencies inside the sentence. Each priming step involves two successive processes. First, the contextual adaptation favors the priming words which are consistent with the semantic context. The latter is roughly modeled by two semantic fields: the task domain and the computing domain. Then, the relational priming identifies the lexemes which share a microsemantic relation with one of the already uttered words. These relations derive directly from the subcategorization frames of these priming words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming process", "sec_num": "3.1." }, { "text": "The priming process is carried out by an associative multi-layered network (figure 2) which results from the compilation of the lexicon. Each cell of the network corresponds to a specific lexeme. The inputs represent the priming words.
Their activities are propagated up to the output layer, which corresponds to the primed words. An additional layer (structural layer S) furthermore handles coordinations and prepositions. We will now describe the propagation of the priming activities. Let us consider:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming network", "sec_num": "3.2." }, { "text": "\u2022 t : the current step of analysis \u2022 a_i^j(t) : the activity of the cell j of the layer i at step t (i ∈ {1, 2, 3, 4, 5, 6, S}) \u2022 ω_{i,j}^{k,l}(t) : the synaptic weight between the cell k of the layer i and the cell l of the layer j.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming network", "sec_num": "3.2." }, { "text": "Temporal forgetting -- At first, the input activities are slightly modulated by a process of temporal forgetting: a_1^i(t) = a_max if i corresponds to the current word; a_1^i(t) = a_max if i corresponds to the primer of this word; a_1^i(t) = Max(0, a_1^i(t-1) - Δ_forget) otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming network", "sec_num": "3.2." }, { "text": "Although it favors the most recent lexemes, this process does not prevent long-distance primings. Contextual adaptation -- Each cell of the second layer represents a particular semantic field. Its activity depends on the semantic affiliations of the priming words:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming network", "sec_num": "3.2." }, { "text": "a_2^j(t) = Σ_i ω_{1,2}^{i,j}(t) · a_1^i(t)   (1)   with: ω_{1,2}^{i,j}(t) = ω_max if i belongs to j; ω_{1,2}^{i,j}(t) = -ω_max otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming network", "sec_num": "3.2." }, { "text": "Then, these contextual cells modulate the initial priming activities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming network", "sec_num": "3.2." }, { "text": "a_3^j(t) = a_1^j(t) + Σ_i ω_{2,3}^{i,j}(t) · a_2^i(t), with: ω_{2,3}^{i,j}(t) = Δ_context if j belongs to i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming network", "sec_num": "3.2." }, { "text": "ω_{2,3}^{i,j}(t) = -Δ_context otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming network", "sec_num": "3.2." }, { "text": "The priming words which are consistent with the current semantic context are therefore favored.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming network", "sec_num": "3.2." }, { "text": "The inner synaptic weights of the case-based sub-networks represent the relations between the priming and the primed words:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Priming network", "sec_num": "3.2." }, { "text": "ω^{i,j}(t) = ω_max if i and j share a microsemantic relation which corresponds to the case considered.", "num": null, "type_str": "figure" }, "FIGREF5": { "uris": null, "text": "Interrogations - Three interrogative forms are found in French: subject inversion (f1), est-ce-que questions (f2) and intonative questions (f3). (f1) déplaçons-nous le carré ? (f2) est-ce que nous déplaçons le carré ? (f3) nous déplaçons le carré ?", "num": null, "type_str": "figure" }, "FIGREF6": { "uris": null, "text": "(11) *Select the device ... the right device. (12) *Close the display ... the window.", "num": null, "type_str": "figure" }, "FIGREF7": { "uris": null, "text": "The left door on the right too.", "num": null, "type_str": "figure" }, "FIGREF8": { "uris": null, "text": "line ... that's it ...
on the right..", "num": null, "type_str": "figure" }, "TABREF0": { "text": "Some examples of microsemantic cases.", "content": "
Label  Semantic case
DET    determiner
AGT    agent
ATT    attribute
OBJ    object / theme
LOC    location / destination
OWN    meronomy / ownership
MOD    modality
INS    instrument
COO    coordination
TAG    case marker (preposition)
REF    anaphoric reference
", "type_str": "table", "html": null, "num": null }, "TABREF3": { "text": "", "content": "
.\" Average robustness of the LFG and the
microsemantic. Accuracy rate = number of correct
analyses /number of tested utterances.
Parser     corpus 1  corpus 2  corpus 3  mean   std. dev.
LFG        0.408     0.401     0.767     0.525  0.170
Semantics  0.853     0.785     0.866     0.835  0.036
", "type_str": "table", "html": null, "num": null }, "TABREF4": { "text": "Number of parallel hypothetic structuresl according to utterances' length", "content": "
Length    LFG parser  Microsemantic
4 words   1.5         2.5
6 words   1.5         3.5
8 words   2           8
10 words  2           12.5
12 words  1.25        19.75
", "type_str": "table", "html": null, "num": null } } } }