{
"paper_id": "Y08-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:38:09.925998Z"
},
"title": "Unsupervised Approach for Dialogue Act Classification",
"authors": [
{
"first": "Kiyonori",
"middle": [],
"last": "Ohtake",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Institute of Information and Communications Technology (NICT)/ Advanced Telecommunications Research Institute International (ATR)",
"location": {
"addrLine": "2-2-2 Hikaridai, Keihanna Science City",
"postCode": "619-0288",
"country": "JAPAN"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents an unsupervised approach for dialogue act (DA) classification. We used a latent variable model to compress the dimensions of the feature vector. We introduced a paraphraser to reduce the variety of expressions and to solve the pragmatic problem for DA classification. The paraphraser seemed to work well on some DA classifications in the unsupervised approach. The results obtained by the unsupervised approach were compared with the manually annotated labels. A preliminary experiment for semi-supervised tagging was also carried out, and we discuss these results.",
"pdf_parse": {
"paper_id": "Y08-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents an unsupervised approach for dialogue act (DA) classification. We used a latent variable model to compress the dimensions of the feature vector. We introduced a paraphraser to reduce the variety of expressions and to solve the pragmatic problem for DA classification. The paraphraser seemed to work well on some DA classifications in the unsupervised approach. The results obtained by the unsupervised approach were compared with the manually annotated labels. A preliminary experiment for semi-supervised tagging was also carried out, and we discuss these results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recognizing the intentions of a user in a dialogue system is very important. So far, many methods have been developed to infer a user's intention in a dialog situation. To infer the user's intention in an utterance, the utterance can be categorized into given classes. Therefore, many studies have designed the classes called dialogue act (DA) labels that approximate a speaker's intention. They annotated the labels on a corpus to analyze the phenomena for DA interaction or to develop a DA tagger in order to infer the DA label from a speech segment (e.g. some utterances, an utterance, or a part of an utterance). Most studies on DA taggers were based on a supervised method (e.g., (Stolcke et al., 2000; Tanaka and Yokoo, 1999) ). The labels used in a DA tagger have to be predefined, and supervised methods require a corpus that is manually annotated by the labels. On the other hand, it is difficult to design a tag set (labels) that can be used to annotate a corpus because the design of a tag set depends on the domain and the task. Therefore, we have to redesign the tag set and construct a corpus annotated with a new tag set if we apply our system to different domains or tasks. In addition, designing a tag set that can be used in any domain or task is very difficult. However, we have to annotate DA tags on a corpus, because many applications require predefined DA tags. This paper discusses an unsupervised approach to infer the user's intention in a situation by using a dialog system. Unsupervised approach may not achieve highly accurate results when compared to the supervised approach. However, in any domain or task, the unsupervised approach can yield human DA annotators with machine judgments of the DA classification that may be useful to keep the consistency of DA annotation results for a corpus.",
"cite_spans": [
{
"start": 685,
"end": 707,
"text": "(Stolcke et al., 2000;",
"ref_id": "BIBREF9"
},
{
"start": 708,
"end": 731,
"text": "Tanaka and Yokoo, 1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In addition, annotating a corpus with given labels is very time-consuming. An unsupervised method is independent of annotation and designing the tag set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In order to achieve an unsupervised method, we need an unsupervised clustering method. So far, many clustering methods have been proposed and discussed for applications in natural language processing (NLP), such as works by Zhao and Karypis (Zhao and Karypis, 2005) . However, an utterance is very short against a document that is used in a common NLP application. In addition, the feature space that is used to express any natural language expression is extremely large and an utterance is expressed by a very sparse vector in the feature space. Therefore, it is very important to handle a sparse feature vector of an utterance in the huge feature space.",
"cite_spans": [
{
"start": 224,
"end": 265,
"text": "Zhao and Karypis (Zhao and Karypis, 2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Here, we construct a dialogue system to make an itinerary of one-day sightseeing tour and also develop a dialog corpus for this system. The corpus consists of 100 dialogues between a professional tour guide and a tourist. Each dialog is almost 30-min long. An annotated corpus with DA is needed to construct our dialogue system. Therefore, we have started to design a DA tag set and annotate the DA tags on the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DA Annotation",
"sec_num": "2."
},
{
"text": "However, there are several problems that make it difficult for us to maintain consistency in the annotation as follows: (a) segmentation, (b) pragmatics, and (c) multifunctionality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DA Annotation",
"sec_num": "2."
},
{
"text": "Sometimes, utterances are fragmental, and it is difficult to recognize an appropriate boundary of an utterance for a DA tag. Hinarejos et al. reported that the correct segmentation for DA is very important for obtaining an accurate result in DA tagging (Hinarejos et al., 2006) .",
"cite_spans": [
{
"start": 125,
"end": 141,
"text": "Hinarejos et al.",
"ref_id": null
},
{
"start": 253,
"end": 277,
"text": "(Hinarejos et al., 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DA Annotation",
"sec_num": "2."
},
{
"text": "There is a pragmatic problem in the annotation of DA tags. For example, the utterance \"Do you know what time it is?\" can be recognized as a yes/no question from the surface information, but the speaker's intention is a request such as \"Please tell me the time.\" In addition, utterances are generally multifunctional. This problem is closely related to the design of the DA tag set. So far, many DA tag sets have been proposed and used to annotate corpora. Some of them have several layers (e.g., DAMSL (Allen and Core, 1997) ) and dimensions.",
"cite_spans": [
{
"start": 502,
"end": 524,
"text": "(Allen and Core, 1997)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DA Annotation",
"sec_num": "2."
},
{
"text": "In this paper, we focus on the pragmatic problem in the DA annotation. We try to resolve the pragmatics problem by paraphrasing. If a euphemism is paraphrased into a straightforward expression, the dialogue system can easily understand the expression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DA Annotation",
"sec_num": "2."
},
{
"text": "In this section, we describe an unsupervised approach to classify an utterance. The overview of the unsupervised approach is as follows: 1. Construct a feature vector from an utterance. 2. Reduce the dimensions of the feature space using a latent variable model. 3. Classify the vector whose dimension was reduced using an unsupervised classification method. After constructing the feature vector, we use a latent variable model to reduce the dimension of the feature space. Then, we use an unsupervised classification method to classify the vector that produced by using a latent variable model. Finally, we find the class to which the utterance belongs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised method for DA annotation",
"sec_num": "3."
},
{
"text": "We also introduce a rule-based paraphraser to reduce the variety of expressions because a different expression is treated to be completely different in a latent variable model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised method for DA annotation",
"sec_num": "3."
},
{
"text": "Several unsupervised text modeling methods, such as PLSI (probabilistic latent semantic indexing (Hofmann, 1999) ) and LDA (latent Dirichlet allocation (Blei et al., 2003) ), are available to model a text based on the features of words and their frequencies. In general, the latent variables indicate the topics of each segment (some sentences for text or some utterances for speech), and we can use the topic information indicated by the latent variables of the model as a compact surrogate expression for a given feature vector of an utterance. In other words, we can use these models to reduce the dimension of the feature space. Once the model parameters are learned from a corpus, we can infer the topic of a given utterance. If we constructed a latent variable model with k latent variables, we get a k -dimensional vector. This vector is called a topic vector. We used PLSI-a latent variable model-for general co-occurrence data that associates an unobserved topic variable with each observation, i.e. with each occurrence",
"cite_spans": [
{
"start": 97,
"end": 112,
"text": "(Hofmann, 1999)",
"ref_id": "BIBREF4"
},
{
"start": 152,
"end": 171,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent variable models",
"sec_num": "3.1."
},
{
"text": "of word in document } , , { 1 k z z Z L = \u2208 , { 1 w W L = } , , { 1 N d d D d L z } , M w w\u2208 = \u2208 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent variable models",
"sec_num": "3.1."
},
{
"text": "The probability of a topic under the document ( ) is approximated by the following formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent variable models",
"sec_num": "3.1."
},
{
"text": ") | ( d z P \u220f \u2211 \u2208d w z w P z P ) | ( ) ( 2 w z w P w d n ) | ( ) , ( ) , ( w d n , (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent variable models",
"sec_num": "3.1."
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent variable models",
"sec_num": "3.1."
},
{
"text": "indicates the frequency of word in the document d . The details about how to introduce Equation (1) have been previously shown (Ohtake, 2005) . In that paper, Ohtake used PLSI and LDA to evaluate whether a paraphrasing pair is contextually independent or not, as well as if there was not a big difference in the performances between them. Therefore, we use PLSI because it is simpler and faster than LDA. w",
"cite_spans": [
{
"start": 127,
"end": 141,
"text": "(Ohtake, 2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent variable models",
"sec_num": "3.1."
},
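The topic-vector computation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the trained PLSI parameters P(z) and P(w|z) are given as NumPy arrays, and it scores each topic by the product over words of [P(z) P(w|z)]^{n(d,w)}, renormalized over topics (computed in log space for numerical stability).

```python
import numpy as np

def topic_vector(word_counts, p_z, p_w_given_z):
    """Fold a new utterance into a trained PLSI model.

    word_counts: dict mapping word id -> n(d, w)
    p_z: topic priors P(z), shape (k,)
    p_w_given_z: P(w|z), shape (k, V)
    Returns a k-dimensional topic vector approximating P(z|d).
    """
    log_score = np.zeros_like(p_z)
    for w, n in word_counts.items():
        # each word contributes n(d, w) * log(P(z) P(w|z)) to every topic
        log_score += n * np.log(p_z * p_w_given_z[:, w])
    log_score -= log_score.max()      # avoid underflow before exponentiating
    score = np.exp(log_score)
    return score / score.sum()        # normalize over the k topics
```

The resulting k-dimensional vector is the compact surrogate expression used in place of the sparse word-frequency vector.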
{
"text": "There are several unsupervised clustering methods. We used the K-means clustering algorithm (e.g., (Duda et al., 2000) ) that is very simple because, at the moment, a highly sophisticated method in which analyzing the tendency of the results by an unsupervised approach and manually annotated labels is not necessary. In addition, we have to investigate whether a topic vector reasonably expresses a DA before using a sophisticated clustering method.",
"cite_spans": [
{
"start": 99,
"end": 118,
"text": "(Duda et al., 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised clustering method",
"sec_num": "3.2."
},
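As a sketch of this step, a bare-bones K-means over topic vectors might look like the following. This is a textbook implementation assuming Euclidean distance and a fixed iteration count, not the particular tool used in the experiments.

```python
import numpy as np

def kmeans(vectors, k, iters=50, seed=0):
    """Minimal K-means for clustering k-dimensional topic vectors."""
    rng = np.random.default_rng(seed)
    # initialize centers with k distinct data points
    centers = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        # assign each vector to the nearest center (squared Euclidean distance)
        labels = np.argmin(((vectors[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # move each center to the mean of its members (keep old center if empty)
        for j in range(k):
            if (labels == j).any():
                centers[j] = vectors[labels == j].mean(axis=0)
    return labels, centers
```

In the experiments below, k is set to 16 to match the number of DA tags actually used in the annotated test set.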
{
"text": "The use of a wide variety of expressions that conveys the same information is natural. However, a different expression is treated to be completely different in a feature space. Therefore, paraphrasing techniques seem to be promising approaches to understand the variety of expressions. In particular, in Japanese, the ending of a sentence or utterance has many expressions even though they convey the same meaning. These expressions are related to the Japanese honorific system, and in most cases the difference in the expression does not affect the DA classification. We construct a rule-based paraphraser that is very similar to the paraphraser proposed by Ohtake and Yamamoto (Ohtake and Yamamoto, 2001) , and most of the rules in the honorific system were derived from their paraphraser. The paraphraser was carefully designed to be free from errors and developed to paraphrase a variety of expressions that convey the same meaning into a standard expression.",
"cite_spans": [
{
"start": 679,
"end": 706,
"text": "(Ohtake and Yamamoto, 2001)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrasing to reduce variety of expressions",
"sec_num": "3.3."
},
{
"text": "The rules of the paraphraser are based on a morphological analysis. We can use regular expressions for pattern matching in a rule and we can conjugate any morphemes that have conjugation to fit in its context. Therefore, a small number of rules cover a large number of targets that need to be paraphrased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Paraphrasing to reduce variety of expressions",
"sec_num": "3.3."
},
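As an illustration of such rules, a minimal sketch follows. The actual rules operate on Japanese morphological analyses and handle conjugation; the romanized patterns and replacements below are hypothetical examples, not the authors' rule set.

```python
import re

# Hypothetical rules: map varied polite request endings (romanized Japanese,
# for illustration only) to a single standard form, as the paraphraser does.
RULES = [
    (re.compile(r"itadakemasu ka$"), "kudasai"),
    (re.compile(r"moraemasu ka$"), "kudasai"),
]

def paraphrase(utterance):
    """Apply the first matching rule; return the utterance unchanged otherwise."""
    for pattern, replacement in RULES:
        if pattern.search(utterance):
            return pattern.sub(replacement, utterance)
    return utterance
```

Because the patterns anchor on the utterance ending, one rule normalizes every request that ends with that honorific form, which is why few rules cover many targets.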
{
"text": "In this section, we describe our experiments and introduce the data set. We also mention the features that were used in the construction of the PLSI models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4."
},
{
"text": "We used the ATR Dialogue Database (Morimoto et al., 1994) . This database consists of 1,983 dialogues (83,052 utterances) in traveling situations. We used manually transcribed Japanese texts in the database. In the transcribed texts, fillers and disfluencies are tagged with a marker. In order to use precisely analyzed results, we eliminated the fillers and disfluencies in the transcribed texts by a morphological analyzer that was used to obtain morphemes as units like words.",
"cite_spans": [
{
"start": 34,
"end": 57,
"text": "(Morimoto et al., 1994)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data set",
"sec_num": "4.1."
},
{
"text": "We annotated 13 dialogues (489 utterances) with DA tags used in the paper by Tanaka and Yokoo (Tanaka and Yokoo, 1999) to evaluate the unsupervised classification. The remainder of the data, namely 1,970 dialogues (82,563 utterances), were used to estimate the parameters of the PLSI model. The original DA tag set that consisted of 26 tags was designed to annotate the dialogue segments that were shorter than an utterance. Therefore, there were multi-labeled utterances in our annotation results because in some cases, a person utters several things in a single utterance. For example, when a person is asked a YES or NO question (YN-QUESTION) , the person who answers might say \"Yes, I will\u2026(YES, INFORM).\" In this case, we treated the last DA tag as the labeled tag of the utterance. In the annotated dialogues, 16 tags were actually used.",
"cite_spans": [
{
"start": 77,
"end": 118,
"text": "Tanaka and Yokoo (Tanaka and Yokoo, 1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 632,
"end": 645,
"text": "(YN-QUESTION)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data set",
"sec_num": "4.1."
},
{
"text": "We used uni-gram and bi-gram word frequencies. In this paper, a word is considered as a morpheme 1 in Japanese. An element of the feature consists of a pair of morpheme's basic form and POS (part of speech). However, numbers and proper names are generalized by eliminating this basic form. In other words, the features of the numbers and proper names are recognized by only by their POS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features for PLSI",
"sec_num": "4.2."
},
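A minimal sketch of this feature construction follows. The POS tag names and the pair/bigram separators are our own illustrative choices, not the actual tag set of the morphological analyzer.

```python
def extract_features(morphemes, generalized_pos=("NUMBER", "PROPER-NOUN")):
    """Build uni-gram and bi-gram features from (base_form, POS) pairs.

    Numbers and proper names keep only their POS, as described in the text;
    every other morpheme is represented by its base_form/POS pair.
    """
    units = [pos if pos in generalized_pos else f"{base}/{pos}"
             for base, pos in morphemes]
    bigrams = [f"{a} {b}" for a, b in zip(units, units[1:])]
    return units + bigrams
```

Generalizing numbers and proper names keeps the vocabulary (and hence the PLSI parameter space) from being inflated by tokens that rarely recur.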
{
"text": "In general, PLSI requires words and their frequencies in order to construct a model from a corpus. However, Serafin et al. showed that adding extra features works well with latent semantic analysis in the DA classification (Serafin et al., 2004) . The PLSI model can be regarded as a probabilistic version of a latent semantic analysis. Therefore, we can expect the same effect on PLSI, and we introduced the uni-gram and bi-gram features. The segment for a unit of a document consists of the utterance and its previous utterance. The dialogues in the database are conversations between two people such as a customer and a clerk.",
"cite_spans": [
{
"start": 223,
"end": 245,
"text": "(Serafin et al., 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features for PLSI",
"sec_num": "4.2."
},
{
"text": "We constructed PLSI models 2 on the number of latent variables, namely 10, 50, 100, 200, and 300, in order to determine the number of latent variables. The parameter for tempered EM (TEM)-a technique used to ease the over-fitting problem-was set to 0.9 (we use this value in all of the experiments in this study) because this value exhibited the best performance in the preliminary experiments. We formulated topic vectors from the evaluation dialogue set, and we prepared the average vectors for each DA label from these topic vectors. Finally, we compared each average vector with the others according to their cosine values, and we averaged the cosine values. Therefore, these numbers indicate the distinguishing ability of topic vectors, where a smaller number is better. The average values for each number of latent variables (10, 50, 100, 200, and 300) with all the DA labels are as follows: 0.607, 0.334, 0.288, 0.290 and 0.275, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of variables and performance on differentiation",
"sec_num": "4.3."
},
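The distinguishing-ability score described above can be sketched as follows, assuming the per-label average topic vectors are supplied as the rows of a matrix; a lower average pairwise cosine means the topic vectors separate the DA labels better.

```python
import numpy as np

def mean_pairwise_cosine(avg_vectors):
    """Average cosine similarity over all pairs of per-label average vectors."""
    V = np.asarray(avg_vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)   # unit-normalize rows
    sims = V @ V.T                                     # all pairwise cosines
    iu = np.triu_indices(len(V), k=1)                  # each unordered pair once
    return float(sims[iu].mean())
```

Applied to the models above, this is the statistic that drops from 0.607 at 10 latent variables to roughly 0.28 at 100 and beyond.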
{
"text": "We applied the rule-based paraphraser to the data set (83,052 utterances), and all of the 56,027 utterances were paraphrased.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of paraphrasing",
"sec_num": "4.4."
},
{
"text": "First, we show the result of an unsupervised clustering result with manually annotated labels using a non-paraphrased corpus. We constructed the PLSI model with 100 latent variables from the learning corpus that was not paraphrased. The test set was fed to the PLSI model, yielding the topic vectors. Then, we used the K-means clustering method with 16 clusters because the size of the tag set that was used to annotate the test set is 16. The result is shown in the \"without paraphraser\" column of Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 499,
"end": 506,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Impact of paraphrasing",
"sec_num": "4.4."
},
{
"text": "Second, we show the result of an unsupervised clustering result with manually annotated labels using a paraphrased corpus. The result is shown in the \"with paraphraser\" column of Table 1 . Note that the cluster IDs found in the columns, \"with paraphraser\" and \"without paraphraser\" are independent of each other. ",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 186,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Impact of paraphrasing",
"sec_num": "4.4."
},
{
"text": "We carried out a very small experiment for the semi-supervised approach. The experiment is small because the amount of annotated data is very small. We have only 13 annotated dialogues. We used 12 dialogues to construct the average vectors for each label, where a withheld dialogue (32 utterances) was used as the test data. The method to classify an utterance is very simple. From a learning set, we construct the average vectors for each label. Then, an utterance is given to construct a topic vector using PLSI with 100 latent variables and the average vector closest to the topic vector is calculated. Finally, the label of the average vector is inferred from the utterance's classification. The accuracies of the results are 37.5% (12/32) without paraphrasing and 21.9% (7/32) with paraphrasing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised approach-preliminary experiment",
"sec_num": "4.5."
},
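The classification step just described can be sketched as follows. This is a nearest-centroid sketch under our own naming, assuming topic vectors are NumPy arrays and cosine similarity is the closeness measure.

```python
import numpy as np
from collections import defaultdict

def train_label_centroids(topic_vectors, labels):
    """Average the topic vectors of each DA label (the 'average vectors')."""
    groups = defaultdict(list)
    for v, y in zip(topic_vectors, labels):
        groups[y].append(v)
    return {y: np.mean(vs, axis=0) for y, vs in groups.items()}

def classify(topic_vector, centroids):
    """Assign the label whose average vector is closest by cosine similarity."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(centroids, key=lambda y: cos(topic_vector, centroids[y]))
```

The approach is semi-supervised in the sense that the PLSI model is trained on the large unlabeled corpus, while only the 12 annotated dialogues supply the label centroids.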
{
"text": "When using the latent variable model, the number of latent variables is an issue. In our experiment, there was not a considerable difference between the result using 100 latent variables and the results using more than 100 latent variables; therefore, 100 latent variables seem sufficient for our experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "We compared the unsupervised approach and manually annotated labels. It is difficult to conclude whether the unsupervised approach works well or not. There were some cases in which the label and cluster have a strong correlation. For example, the label \"THANK\" and cluster ID A and the label \"WH-Q\" and ID F in the \"without paraphraser\" column of Table 1 indicate very good cases. On the other hand, the original label \"INFORM\" indicated a miscellaneous category, and there were so many utterances labeled \"INFORM.\" Thus, utterances labeled \"INFORM\" were classified into many clusters. We paraphrased our data set to reduce the variety of expressions. From Table 1 , we find a very clear tendency in the result of the label \"ACT-REQ (ACTION-REQUEST)\" that was used to label utterances asking someone to perform a certain task. In Japanese, there is a large variety of expressions to this end. Without paraphrasing, these expressions are treated differently. On the contrary, we treated them as the same expression when they were paraphrased into a single expression. Therefore, paraphrasing works quite well on utterances labeled \"ACT-REQ.\"",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 354,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 657,
"end": 664,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "We have to consider the number of variables in a latent variable model and the number of clusters in an unsupervised clustering method. In our experiment, the number of cluster used was the same as the number of labels that were used in the learning corpus. However, if we used more clusters, we might be able to classify a large cluster into proper sub-clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "On the other hand, there were some clusters having many elements that correspond to many manually labeled tags. For example, cluster ID N in the \"without paraphraser\" column of Table 1 was related to many labels. From the observation of the test set, the phrase \"onegai shimasu (please)\" seems to be strongly related to this cluster. This phrase is frequently used in Japanese when requesting someone to perform a particular task. Ten utterances labeled \"ACT-REQ\" were classified in this cluster. However, this phrase is too common to use as a feature. Meanwhile, cluster ID o in the \"with paraphraser\" column of Table 1 was also related to many labels. In these cases, the expressions of number seem to be related to this cluster. We have to consider what feature is effective for DA classification.",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 185,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 614,
"end": 621,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "We carried out a very small preliminary experiment using a semi-supervised approach. The size of the learning data for the semi-supervised method was too small to evaluate the method. In addition, the accuracies were quite low-37.5% without paraphrasing and 21.9% with paraphrasing. Contrary to our expectations, the result with paraphrasing was worse than that without paraphrasing. The observation results suggested several points. First, some labels did not match their utterances after paraphrasing. The expressions used in the utterances were drastically changed by the paraphraser and the annotated labels had become inappropriate for the paraphrased utterances. Thus, we should control such paraphrasing. When we re-annotated the paraphrased test set, the accuracy increased from 21.9% to 31.3%. Second, paraphrasing caused a side effect. Reducing the variety of expressions constricted the features used by the PLSI. The paraphraser was not designed for DA classification. Some phrases should not be paraphrased and we should retain the original expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5."
},
{
"text": "This paper discussed an unsupervised approach for DA classification using a rule-based paraphraser and a latent semantic model. In the experiments, a PLSI model with 100 latent variables was found to be efficient with respect to its distinguishing ability. At the moment, on the other hand, we are unsure whether the unsupervised approach is promising when comparing the results obtained by the unsupervised approach with the manually labeled results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "The introduction of a paraphraser that reduces the variety of expressions showed good results. In particular, in Japanese, there are many euphemisms for asking someone to perform a particular task. The paraphraser paraphrased such expressions effectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "Several points remain for our future work as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "A further analysis of the classification results would be useful. In particular, we have to investigate whether the compressed feature space produced by PLSI is really effective for DA classification or not. Introducing other features would be effective. We only used uni-gram and bi-gram morphemes. Introducing tri-gram morphemes or other features such as dependency relationships may be effective. Tuning the paraphraser is required. The paraphraser was not tuned for DA classification. The paraphraser was designed to be generic. In addition, we will apply this unsupervised method to the corpus that is now under development for DA annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "22nd Pacific Asia Conference on Language, Information and Computation, pages 445-451",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used a morphological analyzer available at http://mecab.sourceforge.net/ 2 We used the package available at http://chasen.org/~taku/software/plsi/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Draft of DAMSL: Dialog act markup in several layers",
"authors": [
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Core",
"suffix": ""
}
],
"year": 1997,
"venue": "Discourse Research Initiative",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allen, James and Mark Core. 1997. Draft of DAMSL: Dialog act markup in several layers. Technical Report, Discourse Research Initiative.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Ng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blei, David M., Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Segmented and unsegmented dialogue-act annotation with statistical dialogue models",
"authors": [
{
"first": "Carlos",
"middle": [
"D"
],
"last": "Hinarejos",
"suffix": ""
},
{
"first": "Ram\u00f3n",
"middle": [],
"last": "Mart\u00ednez",
"suffix": ""
},
{
"first": "Jos\u00e9 Miguel",
"middle": [],
"last": "Granell",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bened\u00ed",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL",
"volume": "",
"issue": "",
"pages": "563--570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinarejos, Carlos D. Mart\u00ednez, Ram\u00f3n Granell, and Jos\u00e9 Miguel Bened\u00ed. 2006. Segmented and unsegmented dialogue-act annotation with statistical dialogue models. In Proceedings of the COLING/ACL 2006, pp. 563-570.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Probabilistic Latent Semantic Indexing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 22 nd Annual ACM Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "50--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hofmann, Thomas. 1999. Probabilistic Latent Semantic Indexing. In Proceedings of the 22 nd Annual ACM Conference on Research and Development in Information Retrieval, pp. 50- 57.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A speech and language database for speech translation research",
"authors": [
{
"first": "Tsuyoshi",
"middle": [],
"last": "Morimoto",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Uratani",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Furuse",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sobashima",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sagisaka",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Higuchi",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yamazaki",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of ICSLP '94",
"volume": "",
"issue": "",
"pages": "1791--1794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morimoto, Tsuyoshi, N. Uratani, T. Takezawa, O. Furuse, Y. Sobashima, H. Iida, A. Nakamura, Y. Sagisaka, N. Higuchi and Y. Yamazaki. 1994. A speech and language database for speech translation research. In Proceedings of ICSLP '94, pp. 1791-1794.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Paraphrasing honorifics",
"authors": [
{
"first": "Kiyonori",
"middle": [],
"last": "Ohtake",
"suffix": ""
},
{
"first": "Kazuhide",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2001,
"venue": "Workshop Proceedings of Automatic Paraphrasing: Theories and Applications (NLPRS2001 Postconference Workshop)",
"volume": "",
"issue": "",
"pages": "13--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ohtake, Kiyonori and Kazuhide Yamamoto. 2001. Paraphrasing honorifics. In Workshop Proceedings of Automatic Paraphrasing: Theories and Applications (NLPRS2001 Post- conference Workshop), pp. 13-20.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Evaluating contextual dependency of paraphrases using a latent variable model",
"authors": [
{
"first": "Kiyonori",
"middle": [],
"last": "Ohtake",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005), held in conjunction with IJCNLP 2005",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ohtake, Kiyonori. 2005. Evaluating contextual dependency of paraphrases using a latent variable model. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), held in conjunction with IJCNLP 2005, pp. 65-72.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "FLSA: Extending latent semantic analysis with features for dialogue act classification",
"authors": [
{
"first": "Riccardo",
"middle": [],
"last": "Serafin",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Di Eugenio",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04)",
"volume": "",
"issue": "",
"pages": "692--699",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Serafin, Riccardo and Barbara Di Eugenio. 2004. FLSA: Extending latent semantic analysis with features for dialogue act classification. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), pp. 692-699.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Van Ess-Dykema",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Meteer",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "3",
"pages": "339--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, Andreas, K. Ries, N. Coccaro, E. Shriberg, R. Bates, D. Jurafsky, P. Taylor, R. Martin, C. Van Ess-Dykema, and M. Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339-373.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An efficient statistical speech act type tagging system for speech translation systems",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "Akio",
"middle": [],
"last": "Yokoo",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Thirty Seventh Annual Meeting of the Association for Computational Linguistics (ACL'99)",
"volume": "",
"issue": "",
"pages": "381--388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanaka, Hideki and Akio Yokoo. 1999. An efficient statistical speech act type tagging system for speech translation systems. In Proceedings of the Thirty Seventh Annual Meeting of the Association for Computational Linguistics (ACL'99), pp. 381-388.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Hierarchical clustering algorithms for document datasets",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Karypis",
"suffix": ""
}
],
"year": 2005,
"venue": "Data Mining and Knowledge Discovery",
"volume": "10",
"issue": "",
"pages": "141--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhao, Ying and George Karypis. 2005. Hierarchical clustering algorithms for document datasets. Data Mining and Knowledge Discovery, 10:141-168.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": ", C:11, D:10, E:6, F:2, G:2, H:35, I:7, J:12, K:7, L:14, M:26, N:27, O:12 a:11, b:2, c:1, d: h:13, i:10, j:3",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"content": "<table><tr><td/><td>without paraphraser</td><td>with</td><td>paraphraser</td></tr><tr><td>manual labels (freq.)</td><td>(cluster ID: its frequency)</td><td colspan=\"2\">(cluster ID</td></tr><tr><td>ACK (68)</td><td>B:7, G:24</td><td/></tr></table>",
"type_str": "table",
"html": null,
"text": "Unsupervised clustering result with/without paraphrasing and manual labels",
"num": null
}
}
}
}