{ "paper_id": "A97-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:14:36.764513Z" }, "title": "High Performance Segmentation of Spontaneous Speech Using Part of Speech and Trigger Word Information", "authors": [ { "first": "Marsal", "middle": [], "last": "Gavaldh", "suffix": "", "affiliation": { "laboratory": "Interactive Systems Labs. Interactive Systems Labs", "institution": "Language Technologies Institute", "location": {} }, "email": "marsal@cs@cmu.edu" }, { "first": "Klaus", "middle": [], "last": "Zechner", "suffix": "", "affiliation": { "laboratory": "Interactive Systems Labs. Interactive Systems Labs", "institution": "Language Technologies Institute", "location": {} }, "email": "" }, { "first": "Comp", "middle": [ "Ling" ], "last": "Program", "suffix": "", "affiliation": { "laboratory": "", "institution": "Carnegie Mellon University Pittsburgh", "location": { "postCode": "15213, 15213", "settlement": "Pittsburgh", "region": "PA, PA", "country": "USA, USA" } }, "email": "" }, { "first": "Gregory", "middle": [], "last": "Aist", "suffix": "", "affiliation": { "laboratory": "", "institution": "Mellon University Pittsburgh", "location": { "postCode": "15213", "region": "PA", "country": "USA" } }, "email": "aist@andrew@cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe and experimentally evaluate an efficient method for automatically determining small clause boundaries in spontaneous speech. Our method applies an artificial neural network to information about part of speech and trigger words. We find that with a limited amount of data (less than 2500 words for the training set), a small sliding context window (+/-3 tokens) and only two hidden units, the neural net performs extremely well on this task: less than 5% error rate and F-score (combined precision and recall) of over .85 on unseen data. 
These results prove to be better than those reported earlier using different approaches.", "pdf_parse": { "paper_id": "A97-1003", "_pdf_hash": "", "abstract": [ { "text": "We describe and experimentally evaluate an efficient method for automatically determining small clause boundaries in spontaneous speech. Our method applies an artificial neural network to information about part of speech and trigger words. We find that with a limited amount of data (less than 2500 words for the training set), a small sliding context window (+/-3 tokens) and only two hidden units, the neural net performs extremely well on this task: less than 5% error rate and F-score (combined precision and recall) of over .85 on unseen data. These results prove to be better than those reported earlier using different approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In the area of machine translation, one important interface is that between the speech recognizer and the parser. In the case of human-to-human dialogues, the speech recognizer's output is a sequence of turns (a contiguous segment of a single speaker's utterance) which in turn can consist of multiple clauses. Lavie et al. (1996) discuss that using smaller units rather than whole turns can greatly facilitate the task of the parser since it reduces the complexity of its input.", "cite_spans": [ { "start": 311, "end": 330, "text": "Lavie et al. (1996)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The problem is thus how to correctly segment an utterance into clauses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The segmentation procedure described in Lavie et al. 
(1996) uses a combination of acoustic information, statistical calculation of boundary-trigrams, some highly indicative keywords and also some heuristics from the parser itself. Stolcke and Shriberg (1996) studied the relevance of several word-level features for segmentation performance on the Switchboard corpus (see Godfrey et al. (1992) ). Their best results were achieved by using part of speech n-grams, enhanced by a couple of trigger words and biases.", "cite_spans": [ { "start": 40, "end": 59, "text": "Lavie et al. (1996)", "ref_id": "BIBREF1" }, { "start": 231, "end": 258, "text": "Stolcke and Shriberg (1996)", "ref_id": "BIBREF3" }, { "start": 372, "end": 393, "text": "Godfrey et al. (1992)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Another, more acoustics-based approach for turn segmentation is reported in Takagi and Itahashi (1996) . Palmer and Hearst (1994) used a neural network to find sentence boundaries in running text, i.e. to determine whether a period indicates end of sentence or end of abbreviation. The input to their network is a window of words centered around a period, where each word is encoded as a vector of 20 reals: 18 values corresponding to the word's probabilistic membership to each of 18 classes and 2 values representing whether the word is capitalized and whether it follows a punctuation mark. 
Their best result of 98.5% accuracy was achieved with a context of 6 words and 2 hidden units.", "cite_spans": [ { "start": 76, "end": 102, "text": "Takagi and Itahashi (1996)", "ref_id": "BIBREF4" }, { "start": 105, "end": 129, "text": "Palmer and Hearst (1994)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper we bring their idea to the realm of speech and investigate the performance of a neural network on the task of turn segmentation using parts of speech, indicative keywords, or both of these features to hypothesize segment boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For our experiments we took as data the first 1000 turns (roughly 12000 words or 12 full dialogues) of transcripts from the Switchboard corpus in a version that is already annotated for parts of speech (e.g. noun, adjective, personal pronoun, etc.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preparation", "sec_num": "2" }, { "text": "The definition of a small clause which we wanted the neural network to learn the boundaries of is as follows: Any finite clause that contains an inflected verbal form and a subject (or at least either of them, if not possible otherwise). However, common phrases such as good bye, and stuff like that, etc. are also considered small clauses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preparation", "sec_num": "2" }, { "text": "Preprocessing the data involved (i) expansion of some contracted forms (e.g. 
I'm → I am), (ii) correction of frequent tagging errors, and (iii) generation of segment boundary candidates using some simple heuristics to speed up manual editing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preparation", "sec_num": "2" }, { "text": "Thus we obtained a total of 1669 segment boundaries, which means that on average there is a segment boundary after approximately every seventh token (i.e. 14% of the text).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data preparation", "sec_num": "2" }, { "text": "3.1 Features The transcripts are tagged with part of speech (POS) data from a set of 39 tags and were processed to extract trigger words, i.e. words that frequently occur near small clause boundaries. Two scores were assigned to each word w in the transcript according to the following formulae:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features and input encoding", "sec_num": "3" }, { "text": "score_pre(w) = C_pre(w) · p̂(<b> after w)    score_post(w) = C_post(w) · p̂(<b> before w)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features and input encoding", "sec_num": "3" }, { "text": "where C_pre/C_post is the number of times w occurred as the word before/after a boundary, and p̂ is the Bayesian estimate for the probability that a boundary occurs after/before w.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features and input encoding", "sec_num": "3" }, { "text": "This score is thus high for words that are likely (based on p̂) and reliable (based on C) predictors of small clause boundaries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features and input encoding", "sec_num": "3" }, { "text": "The pre- and post-boundary trigger words were then merged and the top 30 selected to be used as features for the neural network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Features and input encoding", "sec_num": "3" }, { "text": "The information generated 
for each word consisted of a data label (a unique tracking number, the actual word, and its part of speech), a vector of real values x1, ..., xc and a label ('+' or '-') indicating whether a segment boundary had preceded the word in the original segmented corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input encoding", "sec_num": "3.2" }, { "text": "The real numbers x1, ..., xc are the values given as input to the first layer of the network. We tested three different encodings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input encoding", "sec_num": "3.2" }, { "text": "1. Boolean encoding of POS: xi (1 ≤ i ≤ c = 39)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input encoding", "sec_num": "3.2" }, { "text": "is set to 0.9 if the word's part of speech is the i-th part of speech, and to 0.1 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input encoding", "sec_num": "3.2" }, { "text": "xi (1 ≤ i ≤ c = 30) is set to 0.9 if the word is the i-th trigger, and to 0.1 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boolean encoding of triggers:", "sec_num": "2." }, { "text": "3. Concatenation of boolean POS and trigger encodings (c = 39 + 30 = 69).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boolean encoding of triggers:", "sec_num": "2." }, { "text": "We use a fully connected feed-forward three-layer (input, hidden, and output) artificial neural network and the standard backpropagation algorithm to train it (with learning rate 0.3 and momentum 0.3). Given a window size of W and c features per encoded word, the input layer is dimensioned to c × W units, that is W blocks of c units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The neural network", "sec_num": "4" }, { "text": "The number of hidden units (h) ranged in our experiments from 1 to 25. 
As for the output layer, in all the experiments it was fixed to a single output unit which indicates the presence or absence of a segment boundary just before the word currently at the middle of the window. The actual threshold to decide between segment boundary and no segment boundary is the parameter θ, which we varied from 0.1 to 0.9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The neural network", "sec_num": "4" }, { "text": "The data was presented to the network by simulating a sliding window over the sequence of encoded words, that is by feeding the input layer with the c × W encodings of, say, words w_i ... w_{i+W-1} and then, as the next input to the network, shifting the values one block (c units) to the left, thereby admitting from the right the c values corresponding to the encoding of w_{i+W}. Note that at the beginning of each speaker turn or utterance the first c × (W/2 - 1) input units need to be padded with a \"dummy\" value, so that the first word can be placed just before the middle of the window. Symmetrically, at the end of each turn, the last c × (W/2 - 1) input units are also padded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The neural network", "sec_num": "4" }, { "text": "Results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5", "sec_num": null }, { "text": "We created two data sets for our experiments, all from randomly chosen turns from the original data: (i) the \"small\" data set (a 20:20:60(%) split between training, validation, and test sets), and (ii) the \"large\" data set (a 60:20:20(%) split). First, we ran 180 experiments on the \"small\" data set, exhaustively exploring the space defined by varying the following parameters: \u2022 encoding scheme: POS only, triggers only, POS and triggers. 
\u2022 window size: W ∈ {2, 4, 6, 8}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "and discussion", "sec_num": null }, { "text": "\u2022 number of hidden units: h ∈ {2, 10, 25}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "and discussion", "sec_num": null }, { "text": "\u2022 output threshold: θ ∈ { 0.1, 0.3, 0.5, 0.7, 0.9 } Precision (number of correct boundaries found by the neural network divided by total number of boundaries found by the neural network), recall (number of correct boundaries found by the neural network divided by true number of boundaries in the data) and F-score (defined as 2 × precision × recall / (precision + recall)) were computed for the training, validation, and test sets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "and discussion", "sec_num": null }, { "text": "To be fair, we chose to take the epoch with the maximum F-score on the validation set as the best configuration of the net, and we report results from the test set only. Figure 1 shows a typical training/learning curve of a neural network.", "cite_spans": [], "ref_spans": [ { "start": 170, "end": 178, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "and discussion", "sec_num": null }, { "text": "The best performance was obtained using a net with 2 hidden units, a window size of 6 and the output unit threshold set to 0.7. The following results were achieved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "and discussion", "sec_num": null }, { "text": "precision 0.845, recall 0.860, F-score 0.852", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "and discussion", "sec_num": null }, { "text": "Some general trends are observed:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "and discussion", "sec_num": null }, { "text": "\u2022 As the window size gets larger, the performance increases, but it seems to peak at around size 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "and discussion", "sec_num": null }, { "text": "\u2022 Fewer hidden units yield better results; generally we get the best results for just two hidden units.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "and discussion", "sec_num": null }, { "text": "\u2022 The global performance as measured by the proportion of correct classifications (i.e. both '+' and '-') increases as the F-score increases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "and discussion", "sec_num": null }, { "text": "\u2022 High performance (correct classifications >95%, F-score >0.85) is easily achieved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "and discussion", "sec_num": null }, { "text": "\u2022 The optimal threshold for a high F-score lies in the 0.5-0.7 range. A '<b>' indicates a small clause boundary. A '*' indicates the location of the error.", "content": "
Type | Harmful? | Reason | Context
false positive | no | trigger word | to work <b> and* when I had
false positive | yes | non-clausal and | work off * and on
false negative | yes | speech repair | <b> but * and they are
false positive | ? | trigger word | he you know * gets to a certain
false positive | yes | non-clausal and | if you like trip * and fall or something
false negative | yes | speech repair | <b> we * that's been
false positive | no | CORRECT | <b> but i think * its relevance
false negative | no | CORRECT | <b> and she * she was
false negative | yes | embedded relative clause | into nursing homes * die very quickly
false positive | no | trigger word | wait lists * and all
Table 1: Sample of misclassifications (on unseen data, net with encoding of POS and triggers, W = 6, h = 2, θ = 0.7).
", "type_str": "table", "html": null } } } }