{
"paper_id": "H89-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:32:26.272496Z"
},
"title": "INTRODUCTION TO COMPUTATIONAL LINGUISTICS",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We want to divide the sentence up into phrases, and the phrases up into smaller consituents, until we reach individual words; you may have learned to \"diagram\" a sentence in this way. We can represent this structure by a",
"pdf_parse": {
"paper_id": "H89-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "We want to divide the sentence up into phrases, and the phrases up into smaller consituents, until we reach individual words; you may have learned to \"diagram\" a sentence in this way. We can represent this structure by a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The ability to automatically analyze and understand natural language opens up a wide variety of applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why study natural language processing?",
"sec_num": "1.1."
},
{
"text": "One of the first was machine translation; after many years of a relatively low level of effort, this area has seen a strong resurgence in the 1980's. Another application area is information retrieval; since much of the world's store of information is in written texts, systems which could understand these texts and extract information on request would have great value. The general area which has seen the greatest activity is man-machine interfaces, and in particular \"question-answering systems\" (natural language interfaces for data base retrieval). Current systems are still quite primitive, but such interfaces should make computer systems much more accessible to computer-naive users in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why study natural language processing?",
"sec_num": "1.1."
},
{
"text": "In addition, work on the processing of natural language has provided new insights into language itself. It has encouraged the use of explicit procedural models and a wholistic view of the language faculty, including in particular the interaction of language and knowledge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why study natural language processing?",
"sec_num": "1.1."
},
{
"text": "When used in concert with speech recognition, natural language processing has two roles to play. First, it can provide a rich set of expectations to aid the recognizer in identifying words. Second, for most functions (except dietation) we want a natural language system in order to do something in response to our utterance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why study natural language processing?",
"sec_num": "1.1."
},
{
"text": "Our objectives in this very brief introduction are twofold. First, we want to describe how a combination of relatively simple mechanisms can provide us with a rudimentary natural language understanding ability. This should give you a good idea of how some of the systems now seeing the commercial light of day operate. Second, we want to point out in what respects these mechanisms only \"scratch the surface\" of our natural language abilities: how much more research needs to be done to develop a truly \"natural\" language facility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our objectives",
"sec_num": "1.2."
},
{
"text": "Our brief tour of natural language processing will be organized in three parts: syntax analysis (determining the structure of a sentence and the relationships between its words); semantic analysis (translating a sentence into a formal or readily interpretable language), and discourse analysis (identifying the relationships between sentences and the information implicit in a text).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An Outline",
"sec_num": "1.3."
},
{
"text": "This tutorial is organized roughly along the lines of my book, Computational Linguistics: An Introduction (Cambridge University Press, 1986) . I have necessarily hit only a few of the highlights, and have been sometimes forced to oversimplify some issues. The tutorial is split into short sections corresponding, for the most part, to individual foils of the presentation. (Some of the terms will be explained below.)",
"cite_spans": [
{
"start": 106,
"end": 140,
"text": "(Cambridge University Press, 1986)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An Outline",
"sec_num": "1.3."
},
{
"text": "We want to characterize the language by a set of rules, independent of the procedure we will use for analyzing the sentence. Such a set of rules is called a formal grammar. A formal grammar determines a set of grammatical sentences, and assigns a structure to these sentences. Our challenge is to develop a formal grammar which matches the intuitions of grammaticality of speakers of the language, and which assigns structures which are useful in determining the meaning of the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal grammar",
"sec_num": "2.1.1."
},
{
"text": "Most computationaUy oriented grammars are based on, or are extensions of, context-free phrase structure rules. Each such rule describes one type of sentence constituent, specifying how it is composed from words or other sentence constituents. For example, sentence ~ subject verb object says that a sentence is composed of a subject followed by a verb followed by an object. Similarly, subject ~ *noun I *adjective *noun says that a subject is composed either of a noun or of an adjective followed by a noun. Alternatives are separated by \"1\". Symbols designating classes of words are prefixed by \"*\" The brackets in these rules indicate optional elements (that an article and adjective are optional before a noun, for example). The symbol \"np\" stands for \"noun phrase\", and \"det\" for determiner (an article, such as \"a\" or \"the\", or a quantifier, such as \"some\" or \"every\"). This simple grammar can generate a wide variety of sentences, such as Cats eat fish. The young cat under the car drinks milk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phrase structure rules",
"sec_num": "2.1.2."
},
{
"text": "Given this grammar, the sentence \"The young cat under the car drinks milk.\" would be assigned the structure: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Sample Parse",
"sec_num": "2.1.4."
},
{
"text": "Parsers are usually classified as top-down or bottom-up. These terms refer to the direction in which the parse tree is built.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top-down vs. Bottom-up",
"sec_num": "2.2.1."
},
{
"text": "A top-down parser starts with the sentence node. It thinks as follows: I'm trying to decide if these words are a sentence. A sentence is defined (in the grammar) as a subject, a verb, and an object. Let me first look for a subject. A subject is a noun-phrase, so let me look for a noun phrase. A noun-phrase may begin with a determiner --is there a determiner here as the first word? Yes, let me look next for an adjective and then a noun; if I find all three, I've succeeded in finding a noun-phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top-down vs. Bottom-up",
"sec_num": "2.2.1."
},
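To make the top-down strategy concrete, here is a minimal sketch (not code from the paper) of a recursive-descent recognizer for a toy grammar in the spirit of the one above; the lexicon, the simplification of subject and object to plain noun phrases, and the function names are assumptions introduced for illustration.

```python
# A minimal top-down (recursive-descent) recognizer for a toy grammar.
# Illustrative sketch only; the grammar and lexicon are invented for this example.

LEXICON = {"the": "det", "young": "adjective", "cat": "noun",
           "cats": "noun", "milk": "noun", "fish": "noun",
           "drinks": "verb", "eat": "verb"}

def parse_np(words, i):
    """noun-phrase -> [det] [adjective] noun ; returns the next position or None."""
    if i < len(words) and LEXICON.get(words[i]) == "det":
        i += 1
    if i < len(words) and LEXICON.get(words[i]) == "adjective":
        i += 1
    if i < len(words) and LEXICON.get(words[i]) == "noun":
        return i + 1
    return None

def parse_sentence(words):
    """sentence -> np verb np ; succeeds only if all words are consumed."""
    i = parse_np(words, 0)
    if i is None or i >= len(words) or LEXICON.get(words[i]) != "verb":
        return False
    j = parse_np(words, i + 1)
    return j == len(words)

print(parse_sentence("the young cat drinks milk".split()))  # True
print(parse_sentence("cats eat fish".split()))              # True
print(parse_sentence("the cat milk".split()))               # False
```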
{
"text": "A bottom-up parser starts with the words, and builds a tree upwards. It thinks as follows: Here's an article, followed by an adjective, followed by a noun. Are there any constituents made up of these three word classes. Yes --it's a noun-phrase. This noun-phrase could be a subject; it could also be an object. Etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Top-down vs. Bottom-up",
"sec_num": "2.2.1."
},
{
"text": "The basic strategy for bottom-up parsing is quite simple. Starting with the sentence words, we look for sequences of words or constituents which we can link together to form a larger constituent. We repeat this process until we cannot build any more constituents. We then look for any constituents named \"sentence\" which cover all the words of the sentence. These constituents are the parses of the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bottom-up Algorithm",
"sec_num": "2.2.2."
},
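Below is a rough sketch of the bottom-up strategy just described, phrased as a chart of constituents that is grown to a fixed point and then inspected for a \"sentence\" covering all the words. The toy grammar, lexicon, and the sentence "cats like soft beds" are assumptions chosen only to echo the "soft"/"beds" example; they are not taken from the original.

```python
# Bottom-up constituent building, sketched as a chart grown to a fixed point.
# Grammar and lexicon are illustrative assumptions only.

LEXICON = {"soft": {"adjective"}, "beds": {"noun", "verb"}, "cats": {"noun"},
           "like": {"verb"}}

UNARY = {("noun",): "np"}                       # np -> noun
BINARY = {("adjective", "np"): "np",            # np -> adjective np
          ("verb", "np"): "vp",                 # vp -> verb np
          ("np", "vp"): "sentence"}             # sentence -> np vp

def bottom_up_parse(words):
    # chart holds items (label, start, end); start with the word classes
    chart = {(c, i, i + 1) for i, w in enumerate(words) for c in LEXICON[w]}
    changed = True
    while changed:
        changed = False
        for (a, i, j) in list(chart):
            # unary rules
            if (a,) in UNARY and (UNARY[(a,)], i, j) not in chart:
                chart.add((UNARY[(a,)], i, j)); changed = True
            # binary rules: combine with an adjacent constituent to the right
            for (b, j2, k) in list(chart):
                if j2 == j and (a, b) in BINARY:
                    item = (BINARY[(a, b)], i, k)
                    if item not in chart:
                        chart.add(item); changed = True
    return chart

chart = bottom_up_parse("cats like soft beds".split())
# a full parse is any "sentence" constituent covering all the words
print(("sentence", 0, 4) in chart)   # True
# dead ends (built but unused), e.g. "beds" analysed as a verb, remain in the chart
print(("verb", 3, 4) in chart)       # True
```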
{
"text": "To show how this works, let us use an extremely simple grammar: in the order given in parentheses after the node names. In this example there are two nodes which are created but not used in any larger constituent: VERB (5), corresponding to the possible usage of \"beds\" as a verb (as in, \"He beds down for the night.\"), and NP (8), corresponding to the analysis of the word \"beds\" as a complete noun phrase (without the modifier \"soft\"). For a larger grammar, there would be many more such \"dead ends\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An example of Bottom-up Analysis",
"sec_num": "2.2.3."
},
{
"text": "The grammar given above (section 2.1.3) is so restrictive that it avoids a number of basic issues. Therefore, before we proceed further we will indicate how the grammar can be extended a little bit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Elaborating the grammar",
"sec_num": "2.3."
},
{
"text": "The progressive tense uses a form of \"be\" plus the present participle of the verb: \"The cat is sleeping.\", \"Mary is eating corn.\". There are several ways of extending the grammar to handle such forms. One possibility is to consider \"is\" to be the main verb, and the present participle plus its object to be (together) the object of the sentence, so that \"The cat is eating fish.\" would be analyzed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Progressive Tense",
"sec_num": "2.3.1."
},
{
"text": "SENTENCE SUBJECT VERB OBJECT i i / \\ NP is VING OBJECT / \\ L i DET NOUN eating NP I I I The eat NOUN I fish",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Progressive Tense",
"sec_num": "2.3.1."
},
{
"text": "We can include such structures in our grammar by adding the rule object ~ *ring object",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Progressive Tense",
"sec_num": "2.3.1."
},
{
"text": "where \"*ving\" is our name for present participles (verbs ending in \"ing\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Progressive Tense",
"sec_num": "2.3.1."
},
{
"text": "In comparing a passive sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Passive",
"sec_num": "2.3.2."
},
{
"text": "The cake was baked by Sam.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Passive",
"sec_num": "2.3.2."
},
{
"text": "with the corresponding active sentence Sam baked the cake.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Passive",
"sec_num": "2.3.2."
},
{
"text": "we see that three things have happened: the object of the active sentence has moved into subject position, the subject has moved into a \"by\" phrase, and the verb has changed to a form of \"be\" + the past participle. The net effect is that we still have a noun phrase in the subject, but we now have a \"by\" phrase where the object used to be. We can therefore analyze passives by treating \"be\" as the main verb (as we did for progressives) and adding the production object ~ *ven \"by\" np to our small grammar (where \"*yen\" is our name for past participles, which usually end in \"en\" or \"ed\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Passive",
"sec_num": "2.3.2."
},
{
"text": "Consider the following relative clauses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relative Clauses",
"sec_num": "2.3.3."
},
{
"text": "The man whom I met comes from Philadelphia. The man who opened the door comes from Detroit. The man whom I sold the book to comes from Miami.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relative Clauses",
"sec_num": "2.3.3."
},
{
"text": "Can we give a unified account of these different structures? In each case the phrase following \"who\" or \"whom\" is itself a full sentence from which a single noun phrase has been omitted. We can make this explicit by putting back the omitted words, enclosed in brackets: ---> prep-phrase I \"who\" sentence I \"whom\" sentence Note the recursive structure: the entire sentence may contain a smaller sentence structure within it. To handle the omission within the relative clause, we must allow exactly one np within the relative clause to take the value null. This requirement is not readily stated within our phrase-structure rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relative Clauses",
"sec_num": "2.3.3."
},
{
"text": "The rules given above can be used to generate quite a variety of sentences; unfortunately, they can also generate quite a variety of ungrammatical sentences, such as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2,4. Syntactic Constraints",
"sec_num": null
},
{
"text": "The cats eats fish. The cat sleeps fish. The cat has sleeping. The cat which cat eats fish is sleeping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2,4. Syntactic Constraints",
"sec_num": null
},
{
"text": "These sentences violate particular syntactic constraints; we shall consider some of these constraints in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2,4. Syntactic Constraints",
"sec_num": null
},
{
"text": "Here are a few of the constraints not captured by our rules: number agreement. The subject and verb must agree in number (\"Cats sleep.\" but not \"Cats sleeps.\"). Also, the determiner and noun must agree in number (\"A cat ...\" but not \"A cats ...\"). count noun. A singular noun representing a countable entity must be preceded by a determiner (\"The cat is eating.\" but not \"Cat is eating.\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Some Constraints",
"sec_num": "2.4.1."
},
{
"text": "Only certain types of objects may appear with certain verbs (\"The cat sleeps.\" but not \"The cat sleeps fish.\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "subcategorization.",
"sec_num": null
},
{
"text": "omission. Exactly one np should be null within a relative clause.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "subcategorization.",
"sec_num": null
},
{
"text": "One's first reaction, when seeing all these constraints, is \"Why bother to enforce them?\". After all, we expect our input to be reasonably well-formed sentences, not gibberish like \"My cat sleeps fish.\", so it seems safe to assume that these constraints will be satisfied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why enforce the constraints?",
"sec_num": "2.4.2."
},
{
"text": "However, ff we try parsing some sentences with a simple grammar which doesn't check these constraints, we will discover that we get quite a few parses, even for rather innocuous sentences. For example,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why enforce the constraints?",
"sec_num": "2.4.2."
},
{
"text": "The bird can fly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why enforce the constraints?",
"sec_num": "2.4.2."
},
{
"text": "will get (in addition to the correct parse) two other parses: one parse analogous to \"The workmen can tomatoes.\", the other to \"The trash can flies [through the air]\". Both of these parses violate number agreement (the first violates the count noun constraint as well), so a grammar which checked these constraints would give us only the correct parse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why enforce the constraints?",
"sec_num": "2.4.2."
},
{
"text": "In addition, for speech recognition we would like to have as many constraints as possible in order to narrow the range of possible words we should expect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why enforce the constraints?",
"sec_num": "2.4.2."
},
{
"text": "Most of the constraints we have mentioned are not easily captured by extending the phrase structure rules. To add subject-verb number agreement to our simple grammar, for example, we would have to double the sentence, subject, and np rules: sentence singular-subject plural-subject singular-np plural-np singular-subject *singular-verb object I plural-subject *plural-verb object singular-np ---> plural-np ---> Some systems get around this problem by associating features (such as syntactic number) with the nodes of the parse tree, and extending the rule formalism to assign and test these features. Other systems allow the grammar writer to write procedures which enforce the constraints by checking the properties of the words. Such systems are called augmented context-free grammar systems. ATNs (augmented transition networks) are a form of augmented context-free grammar in which the phrase structure component is represented by networks rather than by rules as shown above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "How to enforce the constraints",
"sec_num": "2.4.3."
},
{
"text": "An augmented context-free grammar consists of a set of context-free phrase structure rules, such as those we gave above, plus a set of procedures which enforce grammatical constraints. These procedures are associated with particular definitions in the grammar; for example, the procedure for subject-verb number agreement would be associated with the rule for sentence. When this rule is used to build a new node of the parse tree, the agreement procedure is invoked. It checks the syntactic number features of the verb and the head (main noun) of the subject; if they don't match, the procedure fails and the node is discarded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressing constraints as procedures",
"sec_num": "2.4.4."
},
{
"text": "Different systems use different languages for expressing these restrictions. Most ATNs use LISP, and provide special predicates for testing and recording features. The systems at NYU (the Linguistic String Parser and PRO-TEUS Parser) use a language called Restriction Language designed for stating these restrictions. For example, in Restriction Language a simple number agreement restriction might be written WAGREE = IN SENTENCE: BOTH IF THE VERB IS PLURAL THEN THE CORE OF THE SUBJECT IS PLURAL AND IF THE VERB IS SINGULAR THEN THE CORE OF THE SUBJECT IS SINGULAR. (the \"CORE\" is the main noun of a noun phrase).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A grammatical restriction",
"sec_num": "2.4.5."
},
{
"text": "As we noted above, an alternative approach is to associate features with the nodes of the parse tree and extend the rule formalism to assign and test these features. For example, we can introduce the feature number with values singular and plural, and associate it with nouns, verbs, noun phrases, and subject nodes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressing constraints using features",
"sec_num": "2.4.6."
},
{
"text": "sentence --> subject<number> *verb<number> object subject<number> ~ np<number> np<number> ---> [*det] [*adjective] *noun<number> [*prep-phrase]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressing constraints using features",
"sec_num": "2.4.6."
},
{
"text": "If the feature marker <number> appears at two places in a single production, the values of the feature at the two places must be equal. Thus the number feature of the np must be the same as that of the noun, the number of the subject the same as that of the np it dominates, and the number of the subject and verb in a sentence must be equal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressing constraints using features",
"sec_num": "2.4.6."
},
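As a minimal sketch of this feature-checking idea (not from the paper; it assumes a toy lexicon and a simplified rule in which the subject is just a noun), a constituent is built only if every place the <number> feature is mentioned carries the same value:

```python
# Sketch of feature checking: a sentence node is built only when the
# <number> feature agrees everywhere the rule mentions it.
# The tiny lexicon and rule encoding are assumptions for illustration.

LEXICON = {"cat":  {"class": "noun", "number": "singular"},
           "cats": {"class": "noun", "number": "plural"},
           "sleeps": {"class": "verb", "number": "singular"},
           "sleep":  {"class": "verb", "number": "plural"}}

def agree(*constituents):
    """Return the shared number value, or None if the values conflict."""
    values = {c["number"] for c in constituents}
    return values.pop() if len(values) == 1 else None

def build_sentence(subject_word, verb_word):
    subject = LEXICON[subject_word]   # np<number> -> *noun<number>, simplified
    verb = LEXICON[verb_word]
    number = agree(subject, verb)     # sentence -> subject<number> *verb<number> ...
    if number is None:
        return None                   # agreement fails: the node is discarded
    return {"class": "sentence", "number": number}

print(build_sentence("cats", "sleep"))    # accepted
print(build_sentence("cats", "sleeps"))   # None: "Cats sleeps." is rejected
```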
{
"text": "A basic function of syntactic analysis is to establish the relationships among the constituents in a sentence. In the sentence Sam baked a cake. the subject is the thing doing the baking and the object is the thing being baked. If we look at closely related syntactic forms, however, we see that this relation no longer holds. For example, in the passive A cake was baked by Sam. the subject is now the thing that was baked, while the thing doing the baking has moved to a \"by\" phrase. progressive form, Sam is baking a cake.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic variety",
"sec_num": "2.5.1."
},
{
"text": "\"is\" has now become the verb constituent, and the \"main verb\" is within the object constituent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic variety",
"sec_num": "2.5.1."
},
{
"text": "This syntactic variety obscures the common functional relationships among the act of baking, Sam (the baker), and the cake (the think baked) in these three sentences. We can clarify these relationships by reducing all of these forms to a standard form, such as the simple active sentence. This process of syntactic regularization is a part of most natural language systems. It simpfifies the next stage of processing --semantic analysis --which relies heavily on the functional relationships.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the variety",
"sec_num": "2.5.2."
},
{
"text": "In some systems, regularization is done by transformations which act on the parse tree, lransforming, for example, the passive sentence structure: In other systems (in most ATNs, for example), the regularized structure is built up incrementally during parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reducing the variety",
"sec_num": "2.5.2."
},
{
"text": "So far we have focussed on determining the structure of natural language sentences. This,however, is rarely our final objective. Rather, we are concerned with understanding what the sentences mean, or performing some action in response to a received sentence. We will begin by looking at the question of meaning at a slightly abstract level, and then in a short while will connect this up with an application using natural language input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SEMANTIC ANALYSIS",
"sec_num": "3."
},
{
"text": "What does it mean to understand a sentence? One answer, for declarative sentences at least, is to say that we understand a sentence if We can determine, under any given set of circumstances, whether it is true or not. The usual approach to this is to select a formal language for which the rules of evaluation are simple, and translate the natural language sentences into this language. We shall use predicate logic with restricted quantifiers for this task, and shall call the representation of a sentence in predicate logic its logicalforra.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Representation",
"sec_num": "3.1."
},
{
"text": "Our world will be described in terms'of a set of objects and a set of predicates. The predicates are functions whose arguments are objects and whose value is true or false. For example, we could have a \"microworld\" inhabited by Tom, Dick, Harry, and Jane. We can have predicates like \"male\" (which takes one argument) and \"fatherof\" (which takes two arguments). The current state of the world can be described by listing, for each predicate, the values of the arguments for which it is true. For example, male(Torn) male(Dick) male(Harry) father-of(Tom ,Dick) father-of(Tom jane)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicates",
"sec_num": "3.1.1."
},
{
"text": "We would then analyze a sentence such as Tom is the father of Jane.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicates",
"sec_num": "3.1.1."
},
{
"text": "by translating it into a predicate with arguments father-of(Tom,Jane) and then seeing if the value of the predicate for those arguments is true.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicates",
"sec_num": "3.1.1."
},
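A minimal sketch of this microworld and the truth-checking step, assuming (purely for illustration) that each predicate is stored as the set of argument tuples for which it is true:

```python
# The microworld sketched above, encoded as sets of argument tuples.
# Illustrative assumption only; not an encoding from the paper.

WORLD = {
    "male": {("Tom",), ("Dick",), ("Harry",)},
    "father-of": {("Tom", "Dick"), ("Tom", "Jane")},
}

def holds(predicate, *args):
    """A predicate applied to arguments is true iff that tuple is listed."""
    return args in WORLD.get(predicate, set())

# "Tom is the father of Jane."  ->  father-of(Tom,Jane)
print(holds("father-of", "Tom", "Jane"))   # True
print(holds("male", "Jane"))               # False
```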
{
"text": "With predicates alone, we are very restricted in the range of sentences we are able to translate. In order to handle sentences such as Everyone is mortal. we need to introduce quantifiers into our formalism. We will introduce two quantifiers: existential quantifiers and universal quantifiers. A formula with a universal quantifier, such as x) P(x) says that P is true for every object in our world. A formula with an existential quantifier, such as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers",
"sec_num": "3.1.2."
},
{
"text": "(3 x) V(x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers",
"sec_num": "3.1.2."
},
{
"text": "says that there is some object for which the predicate P is true. Thus in a world in which the only objects are people, the sentence Everyone is mortal would be translated to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers",
"sec_num": "3.1.2."
},
{
"text": "(V x) mortal(x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers",
"sec_num": "3.1.2."
},
{
"text": "Of course, we don't live in a world in which the only objects are people, so we will introduce restricted quantifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Quantifiers",
"sec_num": "3.1.3."
},
{
"text": "is true if, is true if, mmslate to (V x : Q(x)) P(x) for every object for which Q is true, P is true too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Quantifiers",
"sec_num": "3.1.3."
},
{
"text": "(3 x : Q(x)) P(x) for some object for which Q is true, P is true too. Then, in a world with people and other things, we would Everyone is mortal",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Quantifiers",
"sec_num": "3.1.3."
},
{
"text": "(V x : person(x)) mortal(x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Restricted Quantifiers",
"sec_num": "3.1.3."
},
{
"text": "Having defined a semantic representation, we should now sketch a procedure for mapping our (regularized) syntactic structures into this representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifier Analysis",
"sec_num": "3.2."
},
{
"text": "The simplest sentences are those involving only constants (names). For example, Mary loves Tom could be translated into loves(Mary,Tom)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simple sentences",
"sec_num": "3.2.1."
},
{
"text": "Thus, the verb is translated into a predicate and the subject and object are translated into arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simple sentences",
"sec_num": "3.2.1."
},
{
"text": "When the subject or object involves an English quantifier, we have to translate the noun phrase into a quantifier governing one of the arguments of the predicate. For example, Every student loves Mary. could be translated to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers",
"sec_num": "3.2.2."
},
{
"text": "(V x : student(x)) loves(x,Mary)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers",
"sec_num": "3.2.2."
},
{
"text": "Note that the head of the noun phrase translates into a restriction on the quantifier. If both subject and object have quantifiers, we will end up with two in the logical form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers",
"sec_num": "3.2.2."
},
{
"text": "Every student has a terminal. becomes (V x : student(x)) (3 y : terminal(y)) has(x,y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantifiers",
"sec_num": "3.2.2."
},
{
"text": "If the noun phrase has modifiers, these will translate into further restrictions on the quantifier: Every student has a red notebook. becomes (V x : student(x)) (3 y : notebook(y) & red (y)) has(x,y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Noun phrase modifiers",
"sec_num": "3.2.3."
},
{
"text": "If the noun phrase modifier is a relative clause, the approach is the same --it translates into a restriction on the quantifier --but the formulas become more complicated. For example, Every student who has a terminal has a modem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relative clauses",
"sec_num": "3.2.4."
},
{
"text": "would translate into",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relative clauses",
"sec_num": "3.2.4."
},
{
"text": "(V x : student(x) & (3 t : terminal(t)) has(x,0) (3 m : modem(m)) has(x,m))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relative clauses",
"sec_num": "3.2.4."
},
{
"text": "To do this translation, our procedure must operate recursively: we first translate the relative clause \"who has a terminal\", in much the same way as we would a simple sentence. This produces the second line of the logical form above. We then use this as a quantifier restriction in creating the translation of the entire sentence. This recursive translation procedure parallels the recursive syntactic structure we introduced for relative clauses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relative clauses",
"sec_num": "3.2.4."
},
{
"text": "The semantic representation and quantifier analysis procedures may be quite elegant, but they may also seem quite useless. After all, no one is likely to pay us a lot of money for a program which prints out logical forms; they want a natu~l language system to do something. The \"best-sellers\" among natural language programs these days are \"question-answering systems\": natural language interfaces for data base retrieval. We will therefore consider how our semantic analyzer can be readily transformed into a simple question-answering system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Base Retrieval",
"sec_num": "3.3."
},
{
"text": "Let's suppose we have a relational data base. The relations in the data base can be viewed as predicates: the predicate P(a,b,c) is true if the relation P has a row with values <a,b,c>. Thus if we have a query such as Is Frank employed by NYU?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicates and Relations",
"sec_num": "3.3.1."
},
{
"text": "we would generate its logical form, employ (NYU,Frank) and then, treating this as a data base query on relation \"employ\", see whether it is true or false and then respond \"yes\" or \"no\".",
"cite_spans": [
{
"start": 43,
"end": 54,
"text": "(NYU,Frank)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predicates and Relations",
"sec_num": "3.3.1."
},
{
"text": "Quantifiers have a very direct procedural interpretation: to evaluate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Quantifiers",
"sec_num": "3.3.2."
},
{
"text": "x: R(x)) P(x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Quantifiers",
"sec_num": "3.3.2."
},
{
"text": "we iterate over all the objects in our world, and for those for which R is true, check that P is true. Similarly, for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreting Quantifiers",
"sec_num": "3.3.2."
},
{
"text": "we iterate over all the objects in our world, and look for one for which both R and P are true. For a large data base, of course, this will be inefficient, but --depending on the data base query language --there will typically be more efficient approaches. The existential quantifier shown, for example, may be realized as a join of P and R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(3 x : R(x)) P(x)",
"sec_num": null
},
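A minimal sketch of this direct procedural interpretation, assuming an invented microworld of students and terminals; forall and exists mirror the restricted quantifiers (V x : R(x)) P(x) and (3 x : R(x)) P(x):

```python
# Direct procedural interpretation of restricted quantifiers, as described:
# iterate over the objects of the world, testing the restriction and the body.
# The objects and predicates here are invented for illustration.

OBJECTS = ["Ann", "Bob", "terminal-1", "modem-1"]
STUDENT = {"Ann", "Bob"}
TERMINAL = {"terminal-1"}
HAS = {("Ann", "terminal-1"), ("Bob", "terminal-1")}

def forall(restriction, body):
    """(V x : R(x)) P(x): P must hold for every object satisfying R."""
    return all(body(x) for x in OBJECTS if restriction(x))

def exists(restriction, body):
    """(3 x : R(x)) P(x): P must hold for some object satisfying R."""
    return any(body(x) for x in OBJECTS if restriction(x))

# "Every student has a terminal."
#   (V x : student(x)) (3 y : terminal(y)) has(x,y)
print(forall(lambda x: x in STUDENT,
             lambda x: exists(lambda y: y in TERMINAL,
                              lambda y: (x, y) in HAS)))   # True
```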
{
"text": "A wh-question (one beginning with \"who\", \"what\", or \"which\") can be interpreted as a request to determine the set of values for which a formula is true. For example, Which students own a typewriter? can be interpreted as (find the set of x : student(x)) (3 y : typewriter(y)) own(x,y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wit Questions",
"sec_num": "3.3.3."
},
{
"text": "In data base terms, this means returning the set of values of one attribute of the relation, rather than just a \"yes\" or \"no\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wit Questions",
"sec_num": "3.3.3."
},
{
"text": "In doing our semantic analysis, we have assumed that each verb corresponds to a predicate (or relation) of the same name. However, one of the benefits of a natural language interface lies in the ability to refer to the same relationship in several ways, and the ability to refer succinctly to complex relationships (which may not be directly recorded in the data base). For example, using an employment data base, we would want the system to accept either How many people work for XYZ? or How many people are employed by XYZ?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verbs",
"sec_num": "3.3.4."
},
{
"text": "\"Iaais indicates that, in general, several verbs may be translated into a single predicate. If the system has historical data on individual hirings and firings, we would probably want to allow How many people were rehired last year?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verbs",
"sec_num": "3.3.4."
},
{
"text": "and have \"rehire\" translate into a complicated condition involving a firing and a subsequent hiring.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verbs",
"sec_num": "3.3.4."
},
{
"text": "Putting all the pieces together, even a simple question-answering system will have the following stages:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The structure of a question-answering system",
"sec_num": "3.3.5."
},
{
"text": "Translation to Data Base Query Retrieval If the system does anything fancier, such as analyzing pronouns (to be discussed below), this will normally be done in terms of the logical form, before the translation to a data base query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing Syntactic Regularization Translation to Logical Form",
"sec_num": null
},
{
"text": "The last few sections might suggest that we know all there is to about building good question-answering systems. In fact, current systems are still very rudimentary --not at all a full natural language interface. Because such systems rarely incorporate any deep model of what is in the data base, or what the user's interest might be in querying the data base, they are very limited in their responses. Few systems allow questions about what type of information is in the data base (\"What do you know about company XYZ?\"). Many systems interpret questions literally, even if that is clearly not their intent (\"Do you have any record of Joe's salary?\" --\"Yes.\"). And no system provides helpful feedback for questions which fall outside the semantic model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "What's missing",
"sec_num": "3.3.6."
},
{
"text": "In our discussion of syntactic analysis we pointed out that not all the sentences generated by our phrase structure rules were grammatical. To account for this we introduced various syntactic constraints into our grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic constraints",
"sec_num": "3.4."
},
{
"text": "In semantic analysis, our aim is to decide whether a sentence is true. However, some of the sentences which are grammatical are so nonsensical that we might be reluctant to identify them as true or false; for instance, The closet likes scrambled eggs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic constraints",
"sec_num": "3.4."
},
{
"text": "The road is wearing a brown hat.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "or",
"sec_num": null
},
{
"text": "If we want to build a practical natural language system, why would we be interested in identifying such nonsensical sentences (rather than, say, just considering them to be false)? Surely we don't expect them to appear as input. Our answer is much the same as it was for grammatical constraints: constraints which in one case separate sensible from nonsensical sentences may in other cases separate correct and incorrect readings of a sensible sentence. Consider for example I passed a man on the road wearing a brown hat.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "or",
"sec_num": null
},
{
"text": "Syntactically, this sentence is ambiguous: is the man or the road wearing the hat? If we have a constraint that roads don't wear hats, we could block the incorrect syntactic analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "or",
"sec_num": null
},
{
"text": "How should we characterize and organize these facts about sensible and nonsensical sentences?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate domains",
"sec_num": "3.4.1."
},
{
"text": "First, we can observe that these facts are best stated as constraints on semantic structure, not on syntactic form. If",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate domains",
"sec_num": "3.4.1."
},
{
"text": "The road is wearing a brown hat. doesn't make any sense, then neither does A brown hat was worn by the road.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate domains",
"sec_num": "3.4.1."
},
{
"text": "or any other form of the sentence. So, if we translate the verb \"wear\" into a predicate \"wear\", we can state this as a constraint on the predicate: that wear(road,hat) is a nonsensical combination of predicate and arguments. More generally, we will want to assert that this predicate is meaningful only for certain values of the arguments; we call this the domain of the predicate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate domains",
"sec_num": "3.4.1."
},
{
"text": "For any realistic subject area, enumerating separately the domain of each predicate would be an overwhelming task. However, many predicates share domains; these domains correspond in many cases to generally recognized \"semantic classes\". For example, we might say that the domain for the first argument of wear is the set of animals (including people):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate domains",
"sec_num": "3.4.1."
},
{
"text": "The man wore a hat. The horse is wearing a saddle and horseshoes. The dog is wearing a sweater.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate domains",
"sec_num": "3.4.1."
},
{
"text": "This set is also the predicate domain for many predicates associated with animal functions: seeing, breathing, sleeping, eating, chewing, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate domains",
"sec_num": "3.4.1."
},
{
"text": "Thus, we would specify these semantic constraints by defining a set of semantic classes (typically as a hierarchy of broader and finer classes) and then specifying the predicate domain of each predicate in terms of these classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate domains",
"sec_num": "3.4.1."
},
{
"text": "The use of predicate domains is particularly successful in dealing with texts in clearly restricted subject areas, such as technical and scientific reports. It is less successful in dealing with texts which range over a broad area, such as fiction or newspaper stories. Such texts involve many different types of objects, making a classification difficult, and may include metaphorical and imaginary usages, which blur the lines of predicate domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations of predicate domains",
"sec_num": "3.4.2."
},
{
"text": "It is also important to recognize that these semantic classes provide only a relatively general constraints. In some cases we will need much more detailed information about what is possible or impossible in order to understand a sentence correctly. Consider for example the two sentences I left the toaster in the kitchen on the floor. I left the toaster in the kitchen on the first floor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations of predicate domains",
"sec_num": "3.4.2."
},
{
"text": "\"on the floor\" indicates where in the kitchen you left the toaster; \"on the first floor\" indicates where the kitchen is.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations of predicate domains",
"sec_num": "3.4.2."
},
{
"text": "In formal and programming languages, when we want to refer to something more than once we generally have to give the object a name and refer to it by name. Natural language provides a much more flexible means for referring to entities previously mentioned in a text. Such references are called anaphoric references. The most familiar form of anaphoric reference is the pronoun: I bought an ice cream cone. It was delicious.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anaphora",
"sec_num": "3.5."
},
{
"text": "The previous noun phrase to which the pronoun refers is called the antecedent. Noun phrases with \"the\" are also frequently used anaphorically: I bought an ice cream cone and a hot dog. The cone was delicious.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Anaphora",
"sec_num": "3.5."
},
{
"text": "The simplest cases of anaphora can be accounted for quite straightforwardly using the notions of predicate domains and semantic classes which we just introduced. We begin by translating the sentence with a pronoun into logical form, treating the pronoun just as we would other noun phrases. We then determine the predicate domain for the argument position occupied by the pronoun, look for the most recently mentioned noun phrase belonging to that semantic class, and consider that the antecedent of the pronoun. A translation of the antecedent then replaces the pronoun in the logical form.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using predicate domains",
"sec_num": "3.5.1."
},
{
"text": "For example, if we read Sam bought a hat from the store on Wednesday. Ted wore it on Thursday.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using predicate domains",
"sec_num": "3.5.1."
},
{
"text": "we might translate the second sentence to wear(Ted,it,Thursday)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using predicate domains",
"sec_num": "3.5.1."
},
{
"text": "The predicate domain for the second argument of \"wear\" is \"clothing\", so we would look for the most recently mentioned noun phrase in that class. We find \"hat\", identify it as the antecedent, and then, roughly speaking, replace \"it\" by \"the hat\" in the logical form. (Strictly speaking, we will replace \"it\" by the logical representation of \"the hat which Sam bought from the store on Wednesday\".)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using predicate domains",
"sec_num": "3.5.1."
},
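A minimal sketch of this antecedent-finding heuristic, assuming invented semantic classes and predicate domains; the discourse is the Sam/hat example just discussed, and the encoding is an illustration, not the paper's own:

```python
# Sketch of the pronoun-resolution heuristic described above: take the
# predicate domain of the argument slot filled by the pronoun and choose the
# most recently mentioned noun phrase in that semantic class.
# Classes, domains, and the mention list are illustrative assumptions.

SEMANTIC_CLASS = {"Sam": "person", "hat": "clothing",
                  "store": "place", "Wednesday": "time", "Ted": "person"}

# domain of each argument position of the predicate "wear"
DOMAINS = {"wear": ["person", "clothing", "time"]}

def resolve_pronoun(predicate, arg_position, mentions):
    """mentions: noun phrases in order of mention; return the antecedent."""
    wanted = DOMAINS[predicate][arg_position]
    for candidate in reversed(mentions):          # most recent first
        if SEMANTIC_CLASS.get(candidate) == wanted:
            return candidate
    return None

# "Sam bought a hat from the store on Wednesday. Ted wore it on Thursday."
mentions = ["Sam", "hat", "store", "Wednesday", "Ted"]
print(resolve_pronoun("wear", 1, mentions))       # 'hat'
```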
{
"text": "This approach works quite well, but --as in the case of syntactic ambiguity --there are cases where more detailed information is needed. Winograd created an oft-repeated pair of sentences to illustrate this:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "When domains aren't enough",
"sec_num": "3.5.2."
},
{
"text": "The city council refused to grant the women a parade permit because they advocated violence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "When domains aren't enough",
"sec_num": "3.5.2."
},
{
"text": "The city council refused to grant the women a parade permit because they feared violence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "When domains aren't enough",
"sec_num": "3.5.2."
},
{
"text": "In one case \"they\" refers to the council, in the other case to the women; making the proper choice requires rather detailed reasoning about the concerns of the city council.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "When domains aren't enough",
"sec_num": "3.5.2."
},
{
"text": "In many cases a definite noun phrase (\"the ...\") refers not to something explicitly mentioned previously, but rather to something related to a previously mentioned object or activity. This is termed contextual reference. Thus in I bought an apartment with a small kitchen. The stove is in the middle, the dishwasher underneath, and the refrigerator on the ceiling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual reference",
"sec_num": "3.5.3."
},
{
"text": "we understand \"stove\", \"dishwasher\", and \"refrigerator\" because we expect them as part of a kitchen. We would be perplexed if the second sentence said \"The bulldozer is in the middle, the parakeet underneath, and the bed on the ceiling.\". Such references, therefore, rely on a quite detailed knowledge of the structure of objects and actions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextual reference",
"sec_num": "3.5.3."
},
{
"text": "In the last section we saw examples where substantial world knowledge was needed to correctly understand some natural language input. We noted that constraints based on predicate domains were not sufficient for resolving many syntactic ambiguities. For example, if you told a robot to Broil the steak on the top shelf of the refrigerator. it might start by building a small fire in your refrigerator. Similarly, when cooking was finished, if you told it to Put the chicken on the table and cut off the legs. you might get the response \"What legs?\" or, even worse, a shorter table. To determine that \"legs\" might refer to \"chicken\", our robot must know about the structure of things (in this case, chickens); to determine that \"legs\" does refer to \"chicken\", it must know about appropriate actions (in this case, in serving food).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Syntactic ambiguity and contextual reference",
"sec_num": "4.1.1."
},
{
"text": "The need for world knowledge is particularly acute in analyzing multi-sentence texts. We have the problems mentioned above, such as syntactic ambiguity and contextual reference. In addition, we have the task of deciding how the sentences are related. For example, we wouldn't say that we had understood the following passage John threw a cream pie. Mary ducked, and Tom got hit smack in the face.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing texts",
"sec_num": "4.1.2."
},
{
"text": "unless we recognized the relation between the events involved. In any text there will be one or more relations -cause, time, sequence, elaboration, etc. --tying the sentences together. These relations will rarely be explicit, yet we must identify them in order to properly understand the text. We must therefore rely on extensive background knowledge to infer these relations from the facts explicitly presented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing texts",
"sec_num": "4.1.2."
},
{
"text": "It's all well and good to say that we need to incorporate a lot of world knowledge in our system, but this doesn't tell us what to do in constructing a language processing system. How should we collect this knowledge, how should we organize it, and how should we use it in the language analysis? The answers to these questions are as yet poorly understood. They have been addressed only for certain types of knowledge, relevant to the understanding of certain limited classes of texts. We examine in this section two types of knowledge which have been applied to language analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Organizing world knowledge",
"sec_num": "4.1.3."
},
{
"text": "The first type of text we will consider are narratives about stereotyped sequences of events. We are all familiar with such sequences. As we are growing up we learn, in some detail, what to do in certain social situations: eating in a restaurant, buying food in a supermarket, visiting a doctor's office, taking a plane trip.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing narrative",
"sec_num": "4.2."
},
{
"text": "Because we share knowledge of these sequences, we don't have to provide all the details when describing such an event; we just have to present the highlights or unexpected events. The listener should be able to fill in the gaps, and tie together the explicitly mentioned events, using the shared knowledge. For example, if you hear I went to Clancy's restaurant yesterday. The soup was cold and the steak was tough, so I left the waiter a small tip and vowed never to go back. you can understand phrases like \"the soup\", \"the waiter\", and \"a tip\", and the reason for the small tip, from your knowledge of restaurants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hitting the highlights",
"sec_num": "4.2.1."
},
{
"text": "How should we record our knowledge of such stylized sequences? Schank suggested a slructure called the script, which is a kind of flowchart. This flowchart involves acWrs (basically, people) and props (objects); it consists of a series of primitive actions performed by the actors. For example, a restaurant script (the original example) would include such actors as the customer and waiter, and such props as the food, the check, and the tip. It might include steps such as customer enters restaurant customer goes to table waiter comes to table customer gives order to waiter waiter brings food to customer customer eats food waiter brings check to customer customer gives money to waiter waiter brings change to customer customer leaves tip customer leaves restaurant (in an actual script, these would be more detailed and in a formal representation). It could also include various conditional information, such as the relation between the food served and the tip. An actual story, such as the one above, can then be matched against items within the script. Once this matching is done, the script provides the desired connections to tie together the sentences and resolve the contextual references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The script",
"sec_num": "4.2.2."
},
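A very small sketch of how a story's explicitly mentioned events might be matched against such a script to recover the steps in between; the step strings and the match-by-name scheme are assumptions for illustration only, far simpler than a real script matcher:

```python
# Sketch of script matching: the story mentions only a few events, and the
# script supplies the steps in between. Step names are illustrative only.

RESTAURANT_SCRIPT = [
    "customer enters restaurant", "customer goes to table",
    "waiter comes to table", "customer gives order to waiter",
    "waiter brings food to customer", "customer eats food",
    "waiter brings check to customer", "customer gives money to waiter",
    "customer leaves tip", "customer leaves restaurant",
]

def fill_in(story_events):
    """Return the script span from the first to the last mentioned event."""
    positions = [RESTAURANT_SCRIPT.index(e) for e in story_events]
    return RESTAURANT_SCRIPT[min(positions): max(positions) + 1]

# "I went to Clancy's restaurant ... I left the waiter a small tip ..."
story = ["customer enters restaurant", "customer leaves tip"]
for step in fill_in(story):
    print(step)
```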
{
"text": "The second type of text we shall consider is reports about pieces of equipment --your car, your radio, or your personal computer. Here too we see the effect of shared knowledge --knowledge in this case about the structure and function of objects rather than actions. Suppose we hear",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing text about equipment",
"sec_num": "4.3."
},
{
"text": "The car started to overheat. We opened the hood and saw that the fan belt was broken.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing text about equipment",
"sec_num": "4.3."
},
{
"text": "Then --if we know something about cars --we recognize that the broken belt probably caused the overheating, and that opening the hood let us see the broken belt but probably didn't cause the belt to break.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing text about equipment",
"sec_num": "4.3."
},
{
"text": "As with other texts, making these connections is an essential part of understanding the texts. In this case the relevant background knowledge is a simple model of the car: what the components of the car are, what the function of each component is, and how these components interact in the operation of the car. In addition, we require a map-' ping which relates the predicates of our logical form (\"break\", \"overheat\") to states of the model. Then, once the individual sentences of the report have been analyzed into logical form, discourse analysis can use the model to identify the implicit causal relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analyzing text about equipment",
"sec_num": "4.3."
},
{
"text": "To organize our quick trip through computational linguistics, we have divided the problems and techniques into three areas: syntactic analysis, semantic analysis, and integration with \"world knowledge\". Some such subdivision of the problem is essential if we are to successfully address the myriad problems of natural language. This division also corresponds roughly to the stages of processing in many current natural language systems. This division in processing is more problematic: it reflects in part our current difficulty in developing an integrated framework and analysis procedure which will allow us to apply all these constraints --syntax, semantics, and world knowledge --in a uniform fashion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "5."
},
{
"text": "During the course of this tutorial I hope to do two things.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial ~ntroductlon",
"sec_num": null
},
{
"text": "The first is to present some of our current knowledge about the production, transmission, and perception of speech! and second is to give you some idea of how, on the basis of this knowledge~ speech researchers try to extract information from speech automatically. While doing these things I will be inflicting \u2022 good deal of jargon on you.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial ~ntroductlon",
"sec_num": null
},
{
"text": "Speech researchers are unable to communicate with each other without this jargon, and you are going to have to learn some of it so they can also communicate with you.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial ~ntroductlon",
"sec_num": null
},
{
"text": "The signal the speech scientist has to deal with is \u2022n acoustic w~vefprm in the air, a sequence of sounds we produce in the region known as the vocal tract.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mechanics of speech production",
"sec_num": null
},
{
"text": "An interesting fact about speech production is that all the organs used for speech evolved for other purposes. The lungs, which produce the air stream that powers the making of sound, are actually provided to keep you alive by bringing oxygen to your blood. Here is \u2022 picture of the lungs making sound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mechanics of speech production",
"sec_num": null
},
{
"text": "The airstream passes up through the larynx, which is a valve you close when you eat, to keep food from dropping into the lungs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 1 --lungs)",
"sec_num": null
},
{
"text": "It is in the middle of your neck, and is larger in men than in women --enough larger that it causes a protrusion in \u2022 man's neck called the Adam's apple.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 1 --lungs)",
"sec_num": null
},
{
"text": "The larynx contains two horizontal, opposed curtains, or folds of flesh supported by cartilage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 1 --lungs)",
"sec_num": null
},
{
"text": "Here is \u2022 drawing of the larynx looking down from the top.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 1 --lungs)",
"sec_num": null
},
{
"text": "Muscles can open and close the opening between the folds; this opening is called the olottls.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 2 --larynx)",
"sec_num": null
},
{
"text": "When the folds are closed, or pulled together, you can still force air up through the larynx. When you do, the folds are forced apart; then they are sucked together again by the air stream! they slap together; the air stream forcesthem apart again; they close again; etc. The action is that of what is called a relaxation oscillator. This procedure is often described as vibration of the yocal cords. The implication is that there is a harp-like structure in your throat.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 2 --larynx)",
"sec_num": null
},
{
"text": "I have done considerable research looking for the origin of this misconception, and I have never found it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 2 --larynx)",
"sec_num": null
},
{
"text": "The earliest speech literature speaks of cords, although it was known anatomically that there were none.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 2 --larynx)",
"sec_num": null
},
{
"text": "Here is a typical time waveform of airflow through the glottis when the larynx is \"closed\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 3 --glottal pulses)",
"sec_num": null
},
{
"text": "It is not at all sinusoidal, as would be expected from a vibrating strin 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 3 --glottal pulses)",
"sec_num": null
},
{
"text": "It is made up of puffs or bursts of air, relatively short compared to to the cycle time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 3 --glottal pulses)",
"sec_num": null
},
{
"text": "In males the mass and tensions are such that the puffs occur between 50 and 200 times a second, depending on muscular effort.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 3 --glottal pulses)",
"sec_num": null
},
{
"text": "In women the larynx is smaller and less muscular, and the rate is likely to be between 150 and 350 times a second.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 3 --glottal pulses)",
"sec_num": null
},
{
"text": "If you could listen to this signal it would sound like a buzz, and it is often called just that.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 3 --glottal pulses)",
"sec_num": null
},
{
"text": "The slide also shows the fourler spectrum of thls sequence of Pulses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The spectrum of \u2022 slgnal shows how much energy is present at each frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The abscissa here is frequency, and the ordinate zs amp lltude, expressed In declbels, or db.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Decibels are \u2022 logarlthmlc scale, used when a variable has such a large range that it is best to express its behavior by showing the ratios, rather than the differences, between ira large and small values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The number of db between two values is I0 times the logarithm of the ratio between those values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Numbers that are the same size are 0 db apart! if their ratlo ~s 2, they are about 3 db apart! 30 d~ means a ratio of 1000! 50 db means a ratio of 100000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
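{
"text": "As a quick check of this arithmetic, here is a minimal Python sketch using the 10 times log10 definition just given (an illustration, not part of the original tutorial):\n\nimport math\n\ndef db_apart(value_a, value_b):\n    # number of dB between two values: 10 times the log10 of their ratio\n    return 10.0 * math.log10(value_a / value_b)\n\nprint(db_apart(2.0, 1.0))       # about 3 dB\nprint(db_apart(1000.0, 1.0))    # 30 dB\nprint(db_apart(100000.0, 1.0))  # 50 dB",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},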
{
"text": "The rapid falloff w:th increasing frequency is an important feature of the buzz, and will be referred to often.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "When speech sounds are produced like this, with the vocal folds held together so that a buzz comes outp they are csll voiced, or vocalized. The pulses are called Ditch pulses, or glottal pulses, and the frequency of the buzz is called Ditch, or fundamental freouencY, or the fundamental. {Purists reserve the word \"pitch\" to describe a psychological phenomenonl a feature of perception, not production.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The time from one of these pulses to the next is called the Ditch Deriod.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The \"pitch\" pattern of a sound (or of a word or a sentence} is called the iptonation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The term \"pitch period\" is, unfortunately, used in two different senses. It can denote an amount of time --the number of milliseconds between one pulse and the next --or it can mean the event that begins at the leading edge of a certain pitch pulse and ends at the leading edge of the next pitch pulse. I try to use \"pitch period\" for a length of time, and \"Ditch epqch\" 4or the event the starts at the onset of one glottal pulse and last for one pitch period.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "There is another way of producing sound in speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The stream of air passing up through the vocal tract can be confined by a constriction so that it breaks into turbulent floNt producing hiss, or 4rice, ion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Speech sounds produced in this way are called fricatives.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Here is a waveform of a fricative lound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Frication can occur in various places in the vocal tract, all the way from the larynx to the lips.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 4 --Fricative)",
"sec_num": null
},
{
"text": "Note that frication (hiss) and voicing (buzz) are not mutually exclusive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 4 --Fricative)",
"sec_num": null
},
{
"text": "The stream of air can also be stopped, again at any point in the vocal tract.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 4 --Fricative)",
"sec_num": null
},
{
"text": "Speech sounds that involve a stoppage of airflow are called, unimaginatively, sto~.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 4 --Fricative)",
"sec_num": null
},
{
"text": "Whatever the source of the sound, it is called the eMcltati~n~ as it is thought of as exciting the vocal tract.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 4 --Fricative)",
"sec_num": null
},
{
"text": "The many organs that make up the vocal tract then modulate this excitation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 4 --Fricative)",
"sec_num": null
},
{
"text": "Here is a side view of the vocal tract.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 5 --x-section of vocal tract)",
"sec_num": null
},
{
"text": "The parts that can be adjusted to modulate the sound range from the velum (a flap at the back that determines whether the nasal passage will be coupled into the vocal tract) to the teeth and llps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 5 --x-section of vocal tract)",
"sec_num": null
},
{
"text": "The most Important modulatlng organ Is the tonQue, orlglnally provlded for movlng food back Into the esophagus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "It can be humped high, creating a narrow passage, or lald low, maklng a large passage, and the hump can be at the back of the mouth or at the front. Also the tip can be curled back, or retroflexed. The amount of ja~ openlng also has an ef4ect on the speech sound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "Here is \u2022 waveform of a typ;cal volced sound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "Note that it is much more complicated than the glottal waveform. This is the effect of modulation by the vocal tract.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "(Slide 6 --voiced speech)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "And here is speech that is both voiced and fricated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "(Slide 7 --voiced fricative)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "Here is a graphic representation of this source-modulation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "(Slide 8 --buzz-output)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "The upper llne shows the sequence of pulses, and what happens when they have passed through the vocal tract.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "The lower line is the frequency-domain version of this process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "A pulse train has a spectrum consisting of lines at multiples of the fundamental, as shown at the left.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "The vocal tract passes different frequencies with different amounts of attenuationl the function that describes this process is called the transfer function, shown in the middle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
{
"text": "The output signal has a spectrum that is the product of the first two.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9peach Tutorial",
"sec_num": null
},
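{
"text": "The same source-modulation picture can be sketched numerically. The following Python fragment is only an illustration, with made-up values (a 100 hz buzz and formant-like peaks at 500, 1500, and 2500 hz): it builds a line spectrum for the excitation, a transfer function for the vocal tract, and forms the output spectrum as their product.\n\nimport numpy as np\n\nfreqs = np.arange(0, 5000, dtype=float)   # frequency axis, 1 hz steps up to 5000 hz\n\n# excitation: harmonics of a 100 hz buzz, falling off with frequency\nf0 = 100\nexcitation = np.zeros_like(freqs)\nharmonics = np.arange(f0, 5000, f0)\nexcitation[harmonics.astype(int)] = 1.0 / (harmonics / f0) ** 2\n\n# transfer function: broad peaks near assumed formant frequencies\ntransfer = np.zeros_like(freqs)\nfor formant in (500.0, 1500.0, 2500.0):\n    transfer += 1.0 / (1.0 + ((freqs - formant) / 100.0) ** 2)\n\n# the output spectrum is the product of the two\noutput_spectrum = excitation * transfer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},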
{
"text": "Phoneticians are concerned with describing and making taxonomies for speech sounds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Articula~ory Phone, Sos and the Sounds of Speech",
"sec_num": null
},
{
"text": "The study of how the sounds of speech are produced is called articulatory phonetics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Articula~ory Phone, Sos and the Sounds of Speech",
"sec_num": null
},
{
"text": "Sounds can be divided into crude categories, such as voiced or unvoiced, and into fine categories such as exactly where in the mouth fricetion takes place.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Articula~ory Phone, Sos and the Sounds of Speech",
"sec_num": null
},
{
"text": "Here is a fairly coarse categorization of sounds according to how they are produced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Articula~ory Phone, Sos and the Sounds of Speech",
"sec_num": null
},
{
"text": "These categories can be lumped together in certain ways to produce certain important cruder categories. For example, if \u2022 sound is not stopped, it is called a c~ntinuant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 8a --sound categories)",
"sec_num": null
},
{
"text": "A continuant with no frication is called a sonorant. If it has turbulence, whether or not it is voiced, it is called a fFicative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 8a --sound categories)",
"sec_num": null
},
{
"text": "A vow~l is a sonor\u2022nt in which there is no obstruction in the vocal tract (unlike /L/, for example, in which air is forced to flow around the tongue). Vowels occupy a major fraction of the total time in speech, and an even larger fraction of the total energy; and every word has at least one vowel in it. The tongue is the major determiner of vowel quality. (Slide ? --vowel trapezoid) This shows tongue position for the vowels of English.",
"cite_spans": [
{
"start": 358,
"end": 385,
"text": "(Slide ? --vowel trapezoid)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 8a --sound categories)",
"sec_num": null
},
{
"text": "Left is the front of the mouthp and up means high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 8a --sound categories)",
"sec_num": null
},
{
"text": "Vowels at the left are produced by pushing the tongue hump forward, and vowels at the right by pulling the tongue back.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 8a --sound categories)",
"sec_num": null
},
{
"text": "Vowels at the top have a high tongue hump, and vowels at the bottom have a fairly flat tongue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 8a --sound categories)",
"sec_num": null
},
{
"text": "So far we have looked at sounds only in terms of how they are produced. Most of the work In phonetlcs has to do with dlviding up sounds according to how they are Der~elved, and how they are used in the spoken language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "There zs an infinity of produceable sounds, but we seem to perceive them as a small number of classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "Phoneticzans classlfy percesved sounds in a number of ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "One is the 0honeme. A phoneme can be thought o4 as a linguistic sound class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "Two sounds A and B \"belong\" to different phonemes if there is a pair of words WI and W2, identical except in one place, such that putting sound A in that place and putting sound B in that place make WI and W2 have different meanings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "For example, the words \"hit\" and \"heat\",differ only in the sound between the /H/ and the /T/, and they have different meanings. Therefore the sound you hear as /IH/ and the sound you hear as /EE/ must belong to dif4erent phonemic classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "Phoneticians divide sounds up in this way for all languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "There is not universal agreement as to how to do the dividing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "But by any method, most languages have about 45 phonemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "Here is a list 9f one version of the phonemes o~ English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "{Slide 10 --phonemes)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "The symbols in the left column are the classic ones, standard among phoneticians, but cannot all be produced on a typewriter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "The second and third columns are symbols that can be typed, and were developed during the large ARPA-sponsored speech research project of the early 70\"s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "The two-character set has stuck, and is becoming the phonetic symbol set of the computer age.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "Note that phonemes are language-dependent. In English we don't care ~hether a vowel is nasalized or not --you can say \"ah\" or you can say \"anh\", with the velum up or down, and the meaning is not changed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "In French, the situation is different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "The words \"a\" and \"in\" in French differ only in nasalization, and have different meanings! nasalization is a phonemic feature of the language. In some languages, for example Chinese, intonatic~ --the pitch pattern of a sound --is phonemic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "Note also that sounds need not be produced similarly in order to be in the same phonemic class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "There are two very different ways of producing the phoneme /L/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "One uses the tip of the tongue, and the other, a rarer Version, uses the back of the tongue.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "Both are recognized as /L/ in a word like \"lift\". Thus a phonemic class can be thought of as a linguistic behavior class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "Sounds this different within a phonemic class are rare.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "But within any phonemic class, there are many different sounds. Infinitely many.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "But they appear to fall into subclasses in a definable way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "In English, the sound /P/ in \"pin\" has aspiration, a puff of air, after the stop, while the /P/ in \"spin\" does not have aspiration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "Both are assigned to the phonemic class /P/. They are called all.,hones of the phoneme /P/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "The many --but not infinite -classes that are formed by dividing phonemes up into all.phones are called phones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "Phones are the fundamental units oh speech, as produced and as perceived, the fine structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "The study of what phones occur in a language, and how they are produced, is called Dhonolcx;v.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "All the phones of known languages can be represented with a very large set of symbols known as the International Phonetic Alphabet, using diacritical marks, subscripts, etc. for fine shades of difference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SS Speech Tutorlal",
"sec_num": null
},
{
"text": "A curious fact about phonemes: some of them represent speech acts that are steady-state, for example /AA/, the vowel ~n \"ma\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Some represent sounds that change slowly, llke the g|Ide /Y/ in \"you\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "And some are sounds that change suddenly, like the /P/ in \"pln\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The unlts that have proved to be useful llnguistically are not at all homogeneous from the standpoint of production.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "An automatic speech recognizer must deal with the acoustic waveform generated (in the air) by the talker! this is the only signal available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoust~ Phoqet;cs",
"sec_num": null
},
{
"text": "It is obviously sufficient, since it is all the listener has, and he can understand the word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoust~ Phoqet;cs",
"sec_num": null
},
{
"text": "The study of what acoustic features are present in the speech waveform, and how they are used by listeners to identify phones or phonemes, is called Acoustic Phonetics. Much of the work in machine recognition is based on what acoustic phoneticians have discovered about the so-called ~foustic cues, the f~atuFe~ that humans use to identify speech sounds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoust~ Phoqet;cs",
"sec_num": null
},
{
"text": "Acoustic phoneticians are very interested in how the ear analyzes sound, as it must h&ve a bearing on how we perceive speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoust~ Phoqet;cs",
"sec_num": null
},
{
"text": "Much theorizing in speech perception is done in terms of current models of what the ear is telling the brain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoust~ Phoqet;cs",
"sec_num": null
},
{
"text": "Various models of the ear have been proposed as a result of surgical observation and psycho-physical experiments; some characteristics are common to all models and accepted by all speech researchers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoust~ Phoqet;cs",
"sec_num": null
},
{
"text": "1) The ear does a fine frequency analysis of incoming signals and also resolves signals in time very accurately. (Unlike analyzers built by acousticians and electrical engineers, which cannot do both simultaneously.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoust~ Phoqet;cs",
"sec_num": null
},
{
"text": "2) Low frequencies are \"resolved\" better than high frequencies! in fact, resolution depends on the difference of frequencies up to about 1000 hz, and on the ratio above 1000 hz.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoust~ Phoqet;cs",
"sec_num": null
},
{
"text": "Thus, the relationship of the \"natural\" scale to the frequency in herz is linear up to 1000 hz, and logarithmic thereafter. This natural scale is called the ~e) scale by some and the bark scale by others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoust~ Phoqet;cs",
"sec_num": null
},
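{
"text": "A piecewise scale of this kind is easy to write down. The sketch below is one possible concrete choice of constants (linear up to 1000 hz, logarithmic above); it is meant only to illustrate the shape of the scale, not the exact formula used in any particular system.\n\nimport math\n\ndef natural_scale(freq_hz):\n    # linear below 1000 hz, logarithmic above, as described in the text\n    if freq_hz <= 1000.0:\n        return float(freq_hz)\n    return 1000.0 * (1.0 + math.log(freq_hz / 1000.0))\n\nprint(natural_scale(500))    # 500.0\nprint(natural_scale(1000))   # 1000.0\nprint(natural_scale(4000))   # about 2386, compressed relative to the linear part",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic Phonetics",
"sec_num": null
},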
{
"text": "Here is a graph of the mel scale plotted against the frequency in herz.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acoust~ Phoqet;cs",
"sec_num": null
},
{
"text": "One of the moet important concepts in scouetic phonet|cs is the formant. Formants were discovered by a 1?th century phonetician who had time on his hands (he was ill) and very acute hearing. He noticed, while listening to his own voiced sounds, that besides hearing the buzz of the glottal waveform he could hear several high-pitched notes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(81ide lOa --mel scale)",
"sec_num": null
},
{
"text": "The pitch of these notes was consistent for a given vowel, and differed from vowel to vowel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(81ide lOa --mel scale)",
"sec_num": null
},
{
"text": "The high-pitched signals came to be known as formants --they seemed to characterize the vowel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(81ide lOa --mel scale)",
"sec_num": null
},
{
"text": "It is as if the vowel were really a sum of high-frequency periodic signals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(81ide lOa --mel scale)",
"sec_num": null
},
{
"text": "(Slide 11 --narrow-band spectrum of /AA/) If we make a fourier spectrum, a frequency analysis, of a vowel, we can see these formants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(81ide lOa --mel scale)",
"sec_num": null
},
{
"text": "Here is a spectrum of the vowel /AA/. There are lines in the spectrum at multiples of the pitch frequency. Riding on top of these lines you can see large lumps in the spectrum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(81ide lOa --mel scale)",
"sec_num": null
},
{
"text": "These are the result of modulation of the glottal signal by the vocal tract, or ringing of the so-called resonances of the vocal tract.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(81ide lOa --mel scale)",
"sec_num": null
},
{
"text": "The vocal tract is alloying more energy to get through at the frequencies near these lumps that at other frequencies. It is these lumps, or formants that you can hear as notes if you have very good hearing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(81ide lOa --mel scale)",
"sec_num": null
},
{
"text": "Speech Tutorial (Sllde 12 --wide band spectrum of /~/)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "The lumps are more apparent If we make a wlde-band s~pctrum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "That is, we do a frequency analysis using a f11ter that Is so wide that it cannot resolve the lines, but can only follow the general shape of the spectrum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "Here Is the wide-band spectrum of the vowel /AA/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "You can see that it follows roughly the tops of the lines of the narrow-band spectrum (Slide 13 --spectrum of /IY/)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "If we compare the spectrum of another vowel, the vowel /IY/, with that of /AA/, we see that they are different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "The lowest lump in /IY/ is lower than that for /AA/, and the second lump is much higher.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "It turns out that vowels can be characterized by the positions of these lumps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "The lowest-frequency formant is tradition\u2022fly called the first fQrmant, abbreviated E.J~-Voiced sounds usually have 4tom 3 to 5 formants within the first 5000hz (telephone bandwidth). FI generally falls in the range 250-1100 hz, F2 in 1000-2200 hz, and F3 in 2000-3500 hz.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "The fundamental frequency, or pitch, is often abbreviated E.Q., even though it has nothing to do with formants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "Phoneticians have learned \u2022 greet deal by studying displays such as these. But by far the most-used display is the so-called sonooram.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "(This is actually a proprietary name --the generic name is sound sDectrooram.) This is the picture you get if you do a lot of spectra closely spaced in time, and plot the amplitudes as a function of two variables, ;requency and time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "The common way to display this function is with time along the x-axis, frequency along the y-axis, and amplitude as blackness --the higher the amplitude at a given frequency and time, the blacker the mark on the paper. the spectrum late in the epoch is different from the spectrum early in the epoch, which makes for a stripy look.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
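{
"text": "In numerical terms, a spectrogram is just a stack of short-time spectra. A minimal Python sketch (assuming numpy, a 10000 hz sampling rate, and arbitrarily chosen frame and hop sizes) looks like this:\n\nimport numpy as np\n\ndef spectrogram(signal, frame_len=256, hop=100):\n    # slice the signal into overlapping frames and take the magnitude\n    # spectrum of each; rows are time, columns are frequency\n    window = np.hamming(frame_len)\n    frames = []\n    for start in range(0, len(signal) - frame_len, hop):\n        frame = signal[start:start + frame_len] * window\n        frames.append(np.abs(np.fft.rfft(frame)))\n    return np.array(frames)\n\n# at 10000 samples per second, hop=100 gives one spectrum per centisecond",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},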
{
"text": "The pitch bars are no longer resolved~ because the filter is broad, but the formants are now more obvious.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "For this reason, phoneticians traditionally use this representation to display and study speech. It is still easy to tell voiced speech from unvoiced --the unvoiced portions have no striations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S?",
"sec_num": null
},
{
"text": "The connection between formant position and perceived sound is central to acoustic phonetics. Most of the information is found in the first two formants! F3 and higher seem to be fairly constant over all sounds for a given talker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Imoortapc~ of Formants (Slide 17 --Peterson-Barney)",
"sec_num": null
},
{
"text": "A famous study of FI and F2 as indicators of vowel identlty was done by Paterson and Barneyl it Is Illustrated in thls picture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "From spectrograms they measured formant posltlons for ten vowels spoken by a large number o4 people.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "The picture shows F1 and F2 4or every one of these tokens. F1 is plotted in the x-dlrectlon, and F2 In the y-direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "The Identity of the vowel is shown by the symbol plotted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "The symbols cluster, and the regions where they fall are indicated by a surrounding balloon and a phonetic symbol to show what vowel (mainly) 4oils in that region.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "There is no doubt that vowel identity and formant frequency ere related.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "(Slide 18 --vowel vs. The relation between vowel identity and frequency of FI and F2 is summed up in this diagram, which also shows what the vocal tract is doing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "In this regard it is instructive to compare two earlier pictures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "If we superimpose the vowel trapezoid, which shows tongue position, on the Paterson-Barney plotp which shows FI-F2, we see that they are related.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "F2 is correlated with frontness, and F1 is anti-correlated with tongue height.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "(Slide LIP --stop and glide formants) You shouldn't think that only vowels are characterized by their formants. For example, these highly stylized spectrograms show what formants look like for stops and glides. All sounds pictured end with the phoneme /EH/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "The upper left is the nonsense word /BEH/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "The lower left is /GEH/. The /EH/ part is the same, but the beginning has a so-called formant transition that is characteristic of the stop that precedes the vowel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "The second formant in /BEH/ starts low, and the second formant in /GEH/ starts high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "The 4th from the left in the top row is /WEH/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "It differs from /BEH/ in that the formant transitions are slower.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "Thus stops and glides can both be characterized by the behavior of the formants of the nearby vowels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "Since the 1930s speech researchers have been trying to characterize and categorize speech sounds automatically, and re|Imbly, using a small number of parameters. There have been two main motivations for this work: one is the need to transmit speech economically, which has led to the development of vocoders . speech compression devices in wide use today! and the other is the desire to recognize words automatically. These two have developed together, and many researchers have made contributions to both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral Characterization of So~tch Sounds",
"sec_num": null
},
{
"text": "The fourier spectrum9 the version with broad peaks and no pitch barsm has played the principal part in both speech recognition and speech compression. In English and other non-tonal languages the pitch bars carry no linguistic information, so the shape of the broad spectrum, especially the positions of the main peakst is a good clue to the identity of a contlnuant speech sound. And the spectrum changes fairly slowly, because the articulators in the vocal tract are sluggish; you don't have to derive the spectrum very often to capturQ changes in the speech sound --100 times a second is plenty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral Characterization of So~tch Sounds",
"sec_num": null
},
{
"text": "(You might thlnk that volc:ng would not be captured by the broad spectrum. The narrow-band spectrum does capture this in~ormatlon, since It has lines, or pitch bars, for volced speech and none for unvoiced speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "But the broad-band spectrum also contalns voicing Information, in its general shape.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "In voiced speech, the spectrum of the excltation, the glottal pulses, falls off rapidly with frequency! therefore the output spectrum, which is the product of the glottal spectrum and the transfer function of the vocal tract, also falls off rapidly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "Unvoiced speech has for its excitation something like white noise, which has a flat spectrum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "Hence the broad spectrum of an unvoiced sound is much flatter than the spectrum of an unvoiced sound. )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "We have already seen, in the Peterson-Barney diagram, that the formant positions for a given sound can vary considerably from person to person; they are not even consistent for a single individual (although much more constrained than for the whole population). This unfortunate variabill~v will be dealt with in more detail later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "Focusing attention for the moment on a single individual, whose formants will be tightly localized, it would seem a simple matter ~o find the formants and use their position to characterize the sound. Many a researcher has blunted his spear trying to do Just that; the lumps in the spectrum have proved remarkably hard to find automatically. And if, for example, you miss the first formant entirely, and identify the second as the first, you will almost surely mis-identify the sound. An advantage of using the whole spectrum to characterize the sound, rather than Just the formant positions, is that it fails gracefully.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "Practically every speech recognition system, then, transforms incoming speech into a sequence of spectra, and discards all other informatlon that may have been present in the speech Waveform.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "The vocal tract is regarded as a machine that churns out spectra, one spectrum every 1/100 of a second. On the question of how actually to represent that spectrum as a sequence of numbers, there is no such unanimity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "To begin with, some systems are based on a linear spectrum: they divide up the spectrum into k equal intervals, find the amount of energy in each interval, and represent the spectrum as a sequence of those k numbers. Others are based on the oel scale mentioned above (in describing the action Of the ear); the spectrum is divided up linearly in the range 0-I000 hz, and logarithmically from there on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
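{
"text": "A linear k-band version of this representation takes only a few lines of Python; a mel-style version would space the band edges linearly up to 1000 hz and logarithmically above, rather than equally. This is an illustrative sketch, with k chosen arbitrarily:\n\nimport numpy as np\n\ndef band_energies(magnitude_spectrum, k=16):\n    # divide the spectrum into k equal-width intervals and sum the\n    # energy in each, giving a k-number description of the frame\n    bands = np.array_split(magnitude_spectrum ** 2, k)\n    return np.array([band.sum() for band in bands])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},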
{
"text": "(A parenthetical note: while most recognition systems are based on the spectrum, since it seems to characterize speech sounds well, some systems are based on more complicated functions of the incoming signal, functions that transfom the acoustic signal as we think the ear does before sending it on to the brain. The idea is that the ear-brain combination is a splendid recognizer, so if the ear does it, it must be a good thing to do.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "There is evidence that this is a good direction for recognition research to go in, but opinion is still divided. To keep it simple, I will talk about systems as if they were all spectrum-based.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "In fact, except that speech is represented differently, the ear-model systems are just like the spectrum-based systems.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "The time neededto calculate the spectrum is of concern to system builders. Fortunately, just when computer speech recognition research began in earnest~ a new method of computing the fourier spectrum was invented, known as the Fast Fourie r Transform, or FFT. The FFT is most easily and rapidly computed if its size is a power of 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "It produces a spectrum that covers the frequency range from 0 to half the rate at which the mpeech was sampled --5000 hz, if sampling is done 10000 times a second.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
{
"text": "Thus many recognition procedures are based on a spectrum that divides the frequency range 0-5000 into 64, 128 or 256 intervals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
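{
"text": "For a single frame, the computation can be sketched in Python (assuming numpy and a 10000 hz sampling rate; the 256-point frame length is a power of 2, as the FFT prefers):\n\nimport numpy as np\n\nfs = 10000                                         # assumed sampling rate, hz\nframe = np.random.randn(256)                       # stand-in for one 25.6 ms frame of speech\n\nspectrum = np.abs(np.fft.rfft(frame))              # 129 values covering 0 to fs/2 = 5000 hz\nfreq_axis = np.fft.rfftfreq(len(frame), 1.0 / fs)  # the matching frequency axis",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},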
{
"text": "Linear PFedi~t~ve Codlnq A development of the early 70\"s was the appllcatlon to speech of Linear Predlctiye Codi~Q, or LPC. Thzs process0 in use at that tlme by seismologists, makes use of the notion that if we make certaln acoustic assumptlons about the machine that Is producing the speech soundam the spectrum is very easy to calculate --or more properly, to approximate. LPC is based on the observation that the vocal tract is a tubep closed (most of the time) at the glottal end and open at the lips end.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutor lal",
"sec_num": null
},
{
"text": "Such a tube can be approximated by a set of coaxial cylindermj all the same length, but of varying cross-sectional areap llke this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutor lal",
"sec_num": null
},
{
"text": "(Slide 20 --Acoustic tube) Nobody thinks the vocal tract actually looks like this, but acousticians assure us that the sound coming out of a vocal tract can be duplicated by introducing a buzz or hiss, as appropriate, into such a tube. For reasons not detailed here, there should be 10 sections, each 1.7 cm long; specifying the ares of ~ach of these 10 sections completely specifies the quality of the output sound --that is, every sound is characterized by 3ust 10 numbers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutor lal",
"sec_num": null
},
{
"text": "The LPC process can then be thought of as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutor lal",
"sec_num": null
},
{
"text": "Form the fourier spectrum of the current centisecond of speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutor lal",
"sec_num": null
},
{
"text": "Now generate a sound with the acoustic tube, and form its fourier spectrum, and compare that spectrum with the spectrum of the speech sound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutor lal",
"sec_num": null
},
{
"text": "Do this again and again, varying the areas of the tube sections, until the spectrum of the otput of the tube is maximally like the spectrum of the speech sound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutor lal",
"sec_num": null
},
{
"text": "Characterize the speech sound by the 10 cross-sectional areas of this \"best\" tube.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutor lal",
"sec_num": null
},
{
"text": "(The LPC algorithm is a clever method for carrying out in a very short time what seems like a wander through an infinite 10-dimensional space.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutor lal",
"sec_num": null
},
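{
"text": "In practice the search is not carried out literally; the standard route is the autocorrelation (Levinson-Durbin) recursion, which yields the predictor and reflection coefficients directly. The Python sketch below is illustrative, assuming numpy and a 10th-order model:\n\nimport numpy as np\n\ndef lpc_coefficients(frame, order=10):\n    # Levinson-Durbin recursion on the frame autocorrelation\n    n = len(frame)\n    r = np.correlate(frame, frame, mode='full')[n - 1:n + order]\n    a = [1.0]\n    err = r[0]\n    for i in range(1, order + 1):\n        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))\n        k = -acc / err                     # the i-th reflection coefficient\n        a = [a[j] + k * a[i - j] if 0 < j < i else a[j] for j in range(i)] + [k]\n        err *= 1.0 - k * k\n    return np.array(a)\n\ndef lpc_spectrum(a, n_points=256):\n    # the smooth LPC spectrum is the magnitude of 1 / A(z) around the unit circle\n    return 1.0 / np.abs(np.fft.rfft(a, n_points))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},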
{
"text": "The spectrum of the output of this \"best\" tube is called the LPC spectrum. It is in many cases remarkably like the spectrum of the speech sound. Here is a comparison of the LPC spectrum of the sound /AA/ with the fourier spectrum of that sound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutor lal",
"sec_num": null
},
{
"text": "The LPC spectrum has captured the essential shape, especially the formantsl and remember that this spectrum is specified by Just 10 numbers, the areas of the 10 sections Qf the tube. Some systems based on LPC use the 10 areas to characterize the current speech sound, others use the LPC spectrum, still others use certain functions of the 10 areas such as the ratios between successive areas, logarithms of those ratios, or another function called the Feflection coefficients.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutor lal",
"sec_num": null
},
{
"text": "The spectrum is used because it seems to retain the linguistic information and discard the non-linguistic information. In many systems the recognition algorithm operates not on the original spectrum (whether mel or linear), but on some function of the spectrum that is thought to do an even better job of emphasizing information useful in recognition and discarding the \"noise\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Cepstrum",
"sec_num": null
},
{
"text": "One such functlon Is called the ceDstrum (a coined word).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The cepstrum lS the spectFu~ Q~ the spectrum ~actually the spectrum of the logarlthm of the spectrum).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The reasonlng behlnd the use of the cepstrum zs the following. A spectrum is \u2022 frequency analysis --i~ ;s a functlon that tells you what frequencles are present in \u2022 signal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The spectrum of speech is itself a signal, a signal whose shape corresponds to the sound, or phone, being produced. A frequency analysis of that signal, Of the spectrum, might capture the salient features of that shape, and thus characterize the phone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The more \"~oeffi~i~nts\" one uses in forming the cepstrum, the finer the frequency analysis (of the speech spectrum) that will result. Cepstrum-based recognition systems generally use about 15 cepstral coefficients to represent the speech spectrum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
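{
"text": "One common formulation, sketched in Python (assuming numpy; taking the inverse transform of the log magnitude spectrum is the usual way of computing the real cepstrum):\n\nimport numpy as np\n\ndef cepstral_coefficients(frame, n_coeffs=15):\n    # spectrum of the log of the (magnitude) spectrum; keep the first\n    # few coefficients, which describe its broad shape\n    spectrum = np.abs(np.fft.rfft(frame)) + 1e-10   # small offset avoids log(0)\n    cepstrum = np.fft.irfft(np.log(spectrum))\n    return cepstrum[:n_coeffs]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},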
{
"text": "The cepstrum is very much in vogue at present.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The stage is now set. Speech has been transformed into a sequence of vectors, One each centisecond, vectors which are thought to be similar for similar sounds, and different for different sounds. For concreteness~ let us assume that the vectors are 15 long.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recoqnitiop AlqoFithms",
"sec_num": null
},
{
"text": "A word half a second long will be represented by about 50 of these 15-1ong vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recoqnitiop AlqoFithms",
"sec_num": null
},
{
"text": "How do we now decide what word it is? We need a recognition algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recoqnitiop AlqoFithms",
"sec_num": null
},
{
"text": "There are two schools of thought about how a recognition algorithm should be created. One school says, the way to proceed from here is to study the characteristics of these sequences of vectors, to see what common behavior they have --what is Invariant --over all tokens of a particular sound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recoqnitiop AlqoFithms",
"sec_num": null
},
{
"text": "On the basis of such study rules can then be developed, involving features (such as formant behavior) of the vectors in the sequence, rules that hold whenever that sound is spoken. (This is very much what acoustic phoneticians have been trying to do for many years, largely through the study of sound spectrograms --the spectrogram is, of course, a visual display of a sequence of spectral vectors.) The recognition system can then test an incoming utterance, to see if it is some particular wordj by determ|ning (according to the rules) what the incoming sounds aree and then looking at the pronunciation of the word to see if the sounds are appropriate to that word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recoqnitiop AlqoFithms",
"sec_num": null
},
{
"text": "Systems that operate in this way are called acoustic-phonetlc, or feature-Rased systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recoqnitiop AlqoFithms",
"sec_num": null
},
{
"text": "The other school says no, we are not clever enough to develop rules that will tell us what sounds \u2022re being producedl we must let the speech speak for itself, by \"training\" the recognizer automatically. There are three current methods (not entirely mutually exclusive) for doing this: Template matching; Statistical modeling; Neural nets (which I will not discuss).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 23 --schools of thought)",
"sec_num": null
},
{
"text": "For a long time the speech community has been struggling to produce a successful feature-based system, and some day they will.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 23 --schools of thought)",
"sec_num": null
},
{
"text": "At present, however, the other approaches, particularly, the statistical, are way out in front. We are just too ignorant to extract and codify the important clues to sound or word identity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 23 --schools of thought)",
"sec_num": null
},
{
"text": "In the rest of this tutorial, then, I will not talk about feature-based systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(Slide 23 --schools of thought)",
"sec_num": null
},
{
"text": "The templ~te-matc~anq system",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The simplest sytem Is one in whlch there zs a vocabulary of some number of words (50 is a common number for thls kind of system) and the system Is \"trained\" by having a talker, the person who w~ll be using the system0 speak each of the 50 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The system stores each word as a sequence of vectors. The stored sequence is called a template.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Recognltlon is done by convertlng an incoming word to a sequence of vectorsD and comparing that sequence to each of the templates in turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Whoever matches best against the new sequence is the winner.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Implicit in this description is the ability to ~Ind the beginnings and endings of incoming words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "In running speech~ thls is a very hard thing to do. Various systems get around this difficulty in various ways --one way is to have the user push a button before and after each word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Another is to have the user pause between words~ and use the silences to demarcate the words. Both these allpw the system to be what is called an isolated word recognizer --the problem of breaking up a sentence into words does not have to be faced. Later I will discuss what template-based systems do when the input is so-called connected speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The system must have a quantitative way of comparing two sequences~ often called a Scorinq alqorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Let us assume that the sequences are exactly the same length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The algorithm then has two parts: first, we must say how to score an element (vector) in one sequence against its opposite number in the other sequence. This is commonly done by taking the cross-product (or equivalentlyt the sum of squared differences).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Secondp given scores for individual elements of the sequence, we must compute a score for the entire sequence;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "a common way is just to add the cross-products.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
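{
"text": "For two equal-length sequences the whole scoring algorithm fits in a few lines of Python (a sketch, using the sum-of-squared-differences frame score; lower totals mean better matches):\n\nimport numpy as np\n\ndef sequence_score(template, incoming):\n    # per-frame score: sum of squared differences between the two vectors;\n    # sequence score: sum of the per-frame scores\n    assert len(template) == len(incoming)\n    return sum(float(np.sum((t - x) ** 2)) for t, x in zip(template, incoming))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},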
{
"text": "If the sequences are not the same length, we cannot compare them element by element.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Fortunately there is an algorithm, called Dynamic Time WarpinQ (DTW) that allows us to compare sequences of different lengths.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "It does this-by stretching and/or compressing the sequences so that they are the same lengthp and furthermore the places where they are the most similar are lined up with each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
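{
"text": "The core of DTW is a simple dynamic-programming recurrence. The Python sketch below is illustrative (unconstrained warping, squared-difference frame distance); it returns the cost of the best alignment between a template and an incoming sequence:\n\nimport numpy as np\n\ndef dtw_score(template, incoming):\n    # cost[i][j] = cost of the best alignment of the first i template frames\n    # with the first j incoming frames\n    n, m = len(template), len(incoming)\n    cost = np.full((n + 1, m + 1), np.inf)\n    cost[0, 0] = 0.0\n    for i in range(1, n + 1):\n        for j in range(1, m + 1):\n            d = float(np.sum((template[i - 1] - incoming[j - 1]) ** 2))\n            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])\n    return cost[n, m]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},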
{
"text": "DTW has been astonishingly successful in eliminating the timing problem in,template matching word recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "A more sophisticated version of DTWp called a l~vel-~uildinq algorithm, also allows template matching to be used even on connected speech0 where the beginnings and ends of words are impossible to find.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Effectively, DTW matches every sequence of words from the vocabulary with the incoming utterance, trying every possible time-warp of each word, and chooses the sequence of words that scores the best. This is a good place to point out that working speech recognition systems do more than just analyze the acoustic signal to determine what words have been spoken.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Every system is designed to work in some task domain~ and every such domain has restrictions that limits or at least change the probability of, the words that can appear at any particular place in a sentence. These restrictions include qram~r~ semantics, subject matter, and what is called praqmatics --the particular situation or mode the talker is in ~such as the what he said in his previous sentence).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "All these sources of kno~ledqe are built into the system to the greatest possible degreew and have a tremendous effect on the ability of the system to recognize words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "The system as so far described Is still sultable for only one tall, er --It was trained by only one talker, and one talker's templates may not be sultable for recognizing another talker's words: as the Peterson-Barney diagram shows, the two talkers may have their formants in different places even when they are maklng exactly the same sound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Template-based systems attack this problem in two ways: one is to gather several (4 to b) templates of each word from each of several talkers, and use them all during recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
{
"text": "Another is to gather such templates, and then make an average template, or a few average templatesl perhaps one for men and one for women. These are the current state-of-the-art methods of copihg with the spectral variabllity problem in template-based speech recognit%on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
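{
"text": "A minimal sketch of the 'average template' idea, under the simplifying assumption that each recorded token is first linearly resampled to a common number of frames before averaging; the tutorial does not say how the averaging is done, so this is only one plausible variant, and the names are hypothetical.

import numpy as np

def average_template(tokens, n_frames=30):
    # tokens: list of arrays, each of shape (length_i, n_features), e.g. several talkers saying one word
    resampled = []
    for t in tokens:
        t = np.asarray(t, dtype=float)
        idx = np.linspace(0, len(t) - 1, n_frames)
        # interpolate each feature dimension onto a common time axis
        cols = [np.interp(idx, np.arange(len(t)), t[:, k]) for k in range(t.shape[1])]
        resampled.append(np.stack(cols, axis=1))
    return np.mean(resampled, axis=0)

One such averaged template could be built per word, or one per group (e.g. men and women), as the text suggests.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},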
{
"text": "A variant, or refinement, of the template-matching system is a system in which phone~ or, phonemEs , rather than words, are the units that are templated. An obvious attraction of this idea is that no matter what the vocabulary, the phonemes are fixed --there is only a small finite number of them to \"learn\". If we had a template for every phoneme, and could recognize phonemes as they occurred, we would be just where the acoustic phoneticians wanted to be --we could search the vocabulary to see what word (when pronounced) best matches the incoming string of phonemes. With a reasonable size of vocabulary, we could even tolerate a fair amount of garbling, or at least uncertainty of identification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phone and Phoneme Templates",
"sec_num": null
},
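{
"text": "If phoneme-sized units could be recognized reliably, the vocabulary search imagined here could be as simple as comparing the recognized phoneme string against each word's dictionary pronunciation with an edit distance, which is also what would let the system tolerate a fair amount of garbling. A sketch (all names hypothetical; not from the original tutorial):

def edit_distance(a, b):
    # classic Levenshtein distance between two phoneme strings (lists of symbols)
    n, m = len(a), len(b)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j - 1] + sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[n][m]

def lookup_word(recognized_phonemes, pronouncing_dict):
    # pronouncing_dict: word -> list of phonemes for its pronunciation
    return min(pronouncing_dict, key=lambda w: edit_distance(recognized_phonemes, pronouncing_dict[w]))
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phone and Phoneme Templates",
"sec_num": null
},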
{
"text": "But e~en for a single speaker, this approach has not been fruitful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phone and Phoneme Templates",
"sec_num": null
},
{
"text": "Phones and phonemes undergo radical changes in pronunciation due to context (coarticulation effects), speed, loudness, and other effects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phone and Phoneme Templates",
"sec_num": null
},
{
"text": "Many of the rules governing these changes (\"phonological rules\"} are known, but even so, units this small are currently too hard to identify reliably.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phone and Phoneme Templates",
"sec_num": null
},
{
"text": "It has been suggested (often) that a template-matching system based on the syllable would be a proper compromise between words and phones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phone and Phoneme Templates",
"sec_num": null
},
{
"text": "Againp no matter what the vocabu~ary, the number of syllables is finite --but very large (several thousand) in English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phone and Phoneme Templates",
"sec_num": null
},
{
"text": "Both the difficulty of distinguishing among So many items, and the fact that a lot of memory is needed to store so many templates, have discouraged Western researchers from pursuing this approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phone and Phoneme Templates",
"sec_num": null
},
{
"text": "It is more suitable to Japanese, however, which has about 1OO syllables --and in fact is being tried in Japan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phone and Phoneme Templates",
"sec_num": null
},
{
"text": "The philosophy behind the statistical model system is the following. We are too ignorant to specify rules for determining, from a spectrum or sequence of spectra( what sound is generated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Modelinq --Today's Leader",
"sec_num": null
},
{
"text": "Word templates are impractical because the variability in pronunciation is so great --it would take too many of them to cover all the possibilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Modelinq --Today's Leader",
"sec_num": null
},
{
"text": "Phoneme templates have not worked. But somehow the brain learns to identify a sound after exposure to many examples of that sound (and other sounds).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Modelinq --Today's Leader",
"sec_num": null
},
{
"text": "Therefore the sequence of spectra representing that sound has statistical properties that serve to distinguish it from other sounds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Modelinq --Today's Leader",
"sec_num": null
},
{
"text": "We will try to get at the statistical properties of the sounds by imagining that they were produced by a very simple machine; we will assume a form for that machine, and then estimate its parameters by statistical estimation from a large amount of actual speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Modelinq --Today's Leader",
"sec_num": null
}
],
"back_matter": [
{
"text": "The currently most popular (and successful) statistical model is the H1ddpn Markov Model, or H_.~. By way of Introductlon to statlstlcal modellng, and before describing a HMM, I will descr;be a s:mpler statistlcal model. To fi>: Ideas, let us assume that Engllsh has 99 phones, labeled PI, P2, ... , PQg. And assume that we know how to seqment speech into individual phones.Imagine that there ;s a large urn full o4 phones, spoken by lots of people. Different phones are present In dlfferent numbers --lots of some, not so many of others.Our model of speech productlon will be: Speech is generated by drawing phones (independently) out of the urn and concatenating them.We want to build a recognizer based on this model.We must first find out what each of the phones looks like in a statistical sense.To do this, we collect a lot of speech under controlled circumstances; since we know exactly what was said, we can divide it up into known phones.We then bring together all examples of a given phone (reduced to sequences of 15-long vectors)! if the average phone is 10 vectors long, and there are 200 samples of a given phone, there are 2000 vectors all belong;ng to that phone.We compute the mean and variance of those 2000 vectors --that mean and variance will be our statistical model for that phone. If we do this for all 99 phones, we will have 99 means and variances --a statistical description of the imaginary machine that produces speech.We have trained the model. Now we can use this model to identify an incoming word. We segment the word into phones --but we don't yet know what phones they are. A typical phone is perhaps 10 vectors long.There is a standard statistical technique ;or then calculating the probability that those 10 vectors all belong to P1, to P2, etc. That is, we can attach 99 probabilities to the incoming phone, one for each phone in the model.Hopefully, one of them is very large and the rest are very small. At any rate, our choice for the incoming phone is the model phone with the largest probability.When the word is all in, the recognizer has produced a sequence of phone labels That sequence can then be compared with the words in the vocabulary to see what word the recognizer thinks is most likely.Models like this have been used with some success.Such a model is very like the model where there is a template, or several templates, for each phone, but here the \"template\" is statistical.A real problem with such a model is that we must be able to segment the training set into phones, in order to collect together all examples of each phonel and then we must be able to segment the unknown, incoming word Into phones.The building of the training set is painful, and the segmentation of unknown words is very hard. and is a major source of error in such a recognition system.The Hidden Markov Model, as used ~n speech recognition, overcomes these difficulties.We imagine quite a different machine as the speech generator.The machine has two parts. The first part determines what state the machine is in.We are free to imaglne how many \"states\" the machine has --to fix ideas, let us say there are 5 of them, labeled $1 to $5.This part of the machine has a centlsecond clock and a probabilistic chanqe rule; the machine starts in state SI, and every centisecond it consults the rule to see if it is time for a change.If so, it changes to state $2. 
It continues this process until it has gone through $3~ $4~ and $5.When it leaves $5, it stops.A common varlatlon of thls math:he allows the machine to sklp a state~ when leavlng S3t for example It may go e:ther to $4 (w~th a certazn translt:on 0robablli~y P34) or to $5 ~w~th probabillty P35).P34 and P~5 must of course add to I. Such a machine can be d:agrammed like this~ (Slide 24 --flnite state machine~The clrcles are the state~.The arrows show what change of state can occur at each tick of the clock.The re~entrant arrows show what happens when the change rule does not call for a change --the machine stays in the same state. The other arrows show what happens when the state changes --the machine can advance either one or two states, and the labels on the advance arrows are the probabilities of the two advances.So far no \"speech\" has been generated. That happens in the second part of the machiner which works as follow~.There are 5 urns of sounds, labeled UI to U5, one urn for each state of the first part of the machine. (You can think of the contents of the urns as phones, although they don't have to be phones.} Since we represent sounds as 15-dimensional vectors, what is actually in urn U3, for example, is a collection of vectors with a certain mean M3 and variance V3.To make the model mathematically tractable, we assume that the vectors are Normally distributed.The second part of the machine operates off the same centisecond clock as the first part. At each tick, it selects at random a sound (vector) from the urn corresponding to the state of the first part of the machine. If the first part is in state $2, the second part makes a random drawing from urn U2.The vector that is drawn is defined to be the \"speech\" that is put out by the machine at the current clock tick.A \"word\" then is a succession of vectors, first some from U1, then some from U2, U3, U4, and U5 (except that an urn may be skippedl.The \"Hidden\" in this model refer's to the fact that the State of the first part of the machine, and therefore the identity of the urn that is drawn from by the second part, is hidden from us; we see only the vectors that are drawn from whatever urn it was.The \"Mar kov\" is a mathematical term having to do with how successor states are (probabilistically} chosen by the first part of the machine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorlal",
"sec_num": null
},
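{
"text": "A small numerical sketch of the 'simpler statistical model' just described, assuming diagonal-covariance Gaussians over the 15-dimensional vectors (the function names are hypothetical; the 99-phone inventory is the text's illustration): training computes a mean and variance per phone, and classification picks the phone whose Gaussian assigns the incoming vectors the highest log-likelihood.

import numpy as np

def train_phone_models(examples_by_phone):
    # examples_by_phone: dict mapping a phone label to an array of shape (n_vectors, 15)
    models = {}
    for phone, vectors in examples_by_phone.items():
        vectors = np.asarray(vectors, dtype=float)
        # small floor on the variance avoids division by zero for rarely varying dimensions
        models[phone] = (vectors.mean(axis=0), vectors.var(axis=0) + 1e-6)
    return models

def classify_phone(segment, models):
    # segment: array of shape (n_frames, 15) -- the vectors of one unknown phone
    segment = np.asarray(segment, dtype=float)
    def log_likelihood(mean, var):
        # sum of per-frame diagonal-Gaussian log densities
        return np.sum(-0.5 * (np.log(2 * np.pi * var) + (segment - mean) ** 2 / var))
    return max(models, key=lambda p: log_likelihood(*models[p]))

This is the sense in which the 'template' is statistical: each phone is summarized by a mean and variance rather than by stored example vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},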
{
"text": "It is this. Suppose we assign some probabilites to the 5 transltlons of part one (including the probability of no transition at all), and to the 5 means and variances of part two. And suppose we collect a large number of tokens of some word~ which now do not need to be segmented as they were in the phone model described earlier.In the jargon of HMM, this collection of tokens is called the O~seryatictJ~S.Then there is a statistical technique for calculating the probability, given the Observations, that the parameters we have assigned to the machine are correct.Further, there is an algorithm that allows us to do a wonderful thing.Based on the Observations, and on the current parameters of the machine, one application of the algorithm will produce a new set of parameters that is guaranteed to be more likely than the set we started with. If we apply this algorithm repeatedly we will \u00b0'climb\" to a set of parameters for our machine that is ma~imally likely to be corrects given the Observations.This set of parameters --transition probabilities and 5 means and variances --is our statistical model for the word we collected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Now what is the point of such a model?",
"sec_num": null
},
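{
"text": "The re-estimation procedure described here is the Baum-Welch (EM) algorithm. A full implementation is beyond a sketch, but, assuming the third-party hmmlearn package (not mentioned in the original), training one 5-state word-model from many unsegmented tokens of a word could look roughly like this; note that hmmlearn's default model is not restricted to the left-to-right, skip-one topology described above, so a faithful version would also constrain the transition matrix.

import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_word_model(tokens, n_states=5):
    # tokens: list of arrays, each (length_i, 15) -- many spoken examples of one word (the Observations)
    X = np.concatenate(tokens)           # concatenated observation vectors
    lengths = [len(t) for t in tokens]   # tells the library where each token begins and ends
    model = GaussianHMM(n_components=n_states, covariance_type='diag', n_iter=25)
    model.fit(X, lengths)                # repeated re-estimation: each pass climbs to more likely parameters
    return model
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},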
{
"text": "Note that no segmentation, and no Identlficatlon of phones, was necessary. This process is repeated for each word in the vocabulary --all we need is many tokens of each word to properly train the model.The set of all word-models, one for each word in the vocabulary, is our statistzcal model of speech.Recognltion proceeds just as zt did for the previous statistical model. each word-model we calculate the probability that the incoming word was produced by that model; if there are 50 words In the vocabulary, we get 50 probability estimates, one for each word-model.Hopefully the one with the highest probability is the word that was actually spoken.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},
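{
"text": "Recognition with such word-models is then a matter of scoring the incoming token against every model and taking the most probable one. A sketch under the same assumptions as the training example (word_models would map each vocabulary word to a model trained as above):

def recognize(token, word_models):
    # token: array of shape (length, 15); model.score returns the log-likelihood of the token under that word-model
    return max(word_models, key=lambda w: word_models[w].score(token))
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Tutorial",
"sec_num": null
},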
{
"text": "An advantage of the HMM, as has been noted, is that we need not do any segmentation of the training collection, or of the incoming word.There are disadvantages, too.For one thing, we must assume that every word, of whatever length, has the same number of states.For another, we can never know what the states really mean; often, if you look at the implicit segmentation (change of state) you will recognize linguistic classes, but not always. This is not a mathematical dlsadvantage, but it leaves the user somewhat unsatisfied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For",
"sec_num": null
},
{
"text": "You will be hearing about many recognition systems, all different but all having the same general form.You could probably draw this diagram yourself by now, but here is the generic Speech Recognition System.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Form of a Speech Recoqnition System",
"sec_num": null
},
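{
"text": "The generic form referred to here can be summarized, very roughly, as a front end that turns the waveform into spectral vectors, an acoustic matcher such as the template or HMM schemes sketched earlier, and the knowledge sources (grammar, semantics, subject matter, pragmatics) that constrain or re-weight the word hypotheses. A skeletal rendering, with every name hypothetical and no claim to match the slide:

def generic_recognizer(waveform, front_end, acoustic_matcher, knowledge_sources):
    # front_end: waveform -> sequence of spectral vectors (e.g. one 15-dimensional vector per centisecond)
    # acoustic_matcher: vectors -> dict of candidate words (or word sequences) with acoustic scores
    # knowledge_sources: grammar / semantics / pragmatics, each re-weighting or pruning the candidates
    vectors = front_end(waveform)
    scores = acoustic_matcher(vectors)
    for constrain in knowledge_sources:
        scores = constrain(scores)
    return max(scores, key=scores.get)
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Form of a Speech Recognition System",
"sec_num": null
},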
{
"text": "You cannot be allowed to proceed from this tutorial to the rest of this meeting thinking that these systems have an easy time of it.If every phone, or other sound unit, gave rise to just one spectrum, speech recognition would be simple and successful.Unfortunately there is tremendous variability in how spectra can I o ok for a given sound, and in how a sequence of spectra can look for a given word.Here are some of the sources Of variability that plague builders of speech recognition systems:(Slide 26 --Sources of variability) Size, sex, and age of the talker --men, women, children and the aged have very different spectra for a given vowel, and even within one of these groups there is considerable varlation;Dialect --can have a gross effect on certain sounds; Loudness, emotion, vocal effort --all affect formant size and position; Coarticulation --phoneme pronunciation depends on what its neighbors ares Speech rate, loudness, health --affect pronunciation and sound qualityS Channel --the transmission path between talker and listener (or recognition device) --often changes the gross shape or tilt of the spectrum (but usually doesn't affect formant positions greatly) S Noise --masks the small ripples in the spectrum, and even some big ones.Human listeners do just fine in spite of all this variability, but we are a long way from understanding how they do it, and a long way from building speech recognizers that can really cope with it. ~Tg. , L7 (D',u-;irt,~.l ~im, ,/~ ~ol ",
"cite_spans": [
{
"start": 1461,
"end": 1491,
"text": "L7 (D',u-;irt,~.l ~im, ,/~ ~ol",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sources of Varia~ilit~",
"sec_num": null
}
],
"bib_entries": {},
"ref_entries": {
"FIGREF1": {
"text": "Slide 15 --narrow-band spectrogram) Here is a sound spectrogram of speech analyzed with \u2022 narrow filter! this is tradition\u2022fly called \u2022 narrow-band sDectroaram. The lines that run roughly horlzontally are the harmonics of the fundamental; they are called mirth bar~. Where there is no voicing there are, of course, no pitch bars. (Slide 16 --wide-band spectrogram) If we use \u2022 broad analyzing filter, the picture looks like this. The vertical striation ~ correspond to individual pitch epochs;",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Slide 21 --LPC spectrum of /AA/) Here are the fourier spectrum and LPC spectrum of the vowel /EE/. (Slide 22 --LPC spectrum of /EE/)",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td/><td/><td>SENTENCE</td><td/></tr><tr><td/><td>SUBJECT</td><td>VERB</td><td>OBJECT</td></tr><tr><td/><td>I</td><td>I</td><td>I</td></tr><tr><td/><td>NP</td><td>drinks</td><td>NP</td></tr><tr><td/><td>/,\\</td><td/><td/></tr><tr><td>DET</td><td>ADJECHVE</td><td>NOUN</td><td>NOUN</td></tr><tr><td>I</td><td>I</td><td>I</td><td>I</td></tr><tr><td>The</td><td>young</td><td>cat</td><td>milk</td></tr></table>",
"type_str": "table",
"num": null,
"text": "The first step in trying to figure out what a sentence means is trying to analyze the structure of the sentence: what the subject and object of the verb are, what words are modifying other words. labeled bracketing of the sentence: [SUBJECT [NP [DET The] [~JEcnvE young] [NOUN cat]]] [VERB drinks] [oBJECT [NP [NOUN milk]]] or equivalently, by a tree diagram:",
"html": null
},
"TABREF2": {
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"2\">SENTENCE</td></tr><tr><td/><td/><td>SUBIECr</td><td/><td/><td>VERB</td><td>NP</td></tr><tr><td/><td/><td>I</td><td/><td/><td>I</td><td>I</td></tr><tr><td/><td/><td>NP</td><td/><td/><td>drinks</td><td>NOUN</td></tr><tr><td>DET</td><td>ADJECTIVE</td><td>NOUN</td><td colspan=\"2\">PREP-PHRASE</td><td>milk</td></tr><tr><td>I</td><td>I</td><td>I</td><td>/</td><td>\\</td></tr><tr><td>The</td><td>young</td><td>eat</td><td>PREP</td><td/></tr><tr><td/><td/><td/><td>,</td><td>/</td><td>\\</td></tr><tr><td/><td/><td/><td>undo</td><td>DET</td><td>NOUN</td></tr><tr><td/><td/><td/><td/><td>I</td><td>I</td></tr><tr><td/><td/><td/><td/><td>the</td><td>car</td></tr><tr><td>2.2. Parsing</td><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"num": null,
"text": "Parsing means analyzing a sentence with respect to a grammar: determining if the sentence is grammatical, and what the structure of the sentence is.",
"html": null
},
"TABREF4": {
"content": "<table><tr><td>np</td><td>~ [*det] [*adjective] *noun [noun-modifier]</td></tr><tr><td>noun-modifier</td><td/></tr><tr><td colspan=\"2\">In terms of our grammar, we could modify the noun-phrase rule as follows:</td></tr></table>",
"type_str": "table",
"num": null,
"text": "man whom I met [the man] comes from Philadelphia. The man who [the man] opened the door comes from Detroit. The man whom I sold the book to [the man] comes from Miami.",
"html": null
}
}
}
}