{ "paper_id": "D08-1040", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:30:32.411634Z" }, "title": "A Casual Conversation System Using Modality and Word Associations Retrieved from the Web", "authors": [ { "first": "Shinsuke", "middle": [], "last": "Higuchi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hokkaido University", "location": { "postCode": "060-0814", "settlement": "Sapporo", "country": "Japan" } }, "email": "" }, { "first": "Rafal", "middle": [], "last": "Rzepka", "suffix": "", "affiliation": { "laboratory": "", "institution": "Hokkaido University", "location": { "postCode": "060-0814", "settlement": "Sapporo", "country": "Japan" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper we present a textual dialogue system that uses word associations retrieved from the Web to create propositions. We also show experiment results for the role of modality generation. The proposed system automatically extracts sets of words related to a conversation topic set freely by a user. After the extraction process, it generates an utterance, adds a modality and verifies the semantic reliability of the proposed sentence. We evaluate word associations extracted form the Web, and the results of adding modality. Over 80% of the extracted word associations were evaluated as correct. Adding modality improved the system significantly for all evaluation criteria. We also show how our system can be used as a simple and expandable platform for almost any kind of experiment with human-computer textual conversation in Japanese. Two examples with affect analysis and humor generation are given.", "pdf_parse": { "paper_id": "D08-1040", "_pdf_hash": "", "abstract": [ { "text": "In this paper we present a textual dialogue system that uses word associations retrieved from the Web to create propositions. We also show experiment results for the role of modality generation. The proposed system automatically extracts sets of words related to a conversation topic set freely by a user. After the extraction process, it generates an utterance, adds a modality and verifies the semantic reliability of the proposed sentence. We evaluate word associations extracted form the Web, and the results of adding modality. Over 80% of the extracted word associations were evaluated as correct. Adding modality improved the system significantly for all evaluation criteria. We also show how our system can be used as a simple and expandable platform for almost any kind of experiment with human-computer textual conversation in Japanese. Two examples with affect analysis and humor generation are given.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many task-oriented dialogue systems (Liu et al., 2003; Reitter et al., 2006) have been developped. Research on non-task-oriented dialogue systems like casual conversation dialogue systems (\"chatbots\") is on the other hand not very common, perhaps due to the many amateurs who try to build naturally talking systems using sometimes very clever, but rather unscientific methods although there are systems with chatting abilities as (Bickmore and Cassell, 2001) but concentrate on applying strategies to casual conversation rather than their automatic generation of those conversations. 
However, we believe that the main reason is that an unrestricted domain is disproportionately difficult compared to the possible use such a system could have. It is for example very hard to predict the contents and topics of user utterances, and therefore it is almost impossible to prepare conversational scenarios. Furthermore, scenarios need more or less specific goals to be useful. However in our opinion, sooner or later non-task-oriented dialogue systems will have to be combined with task oriented systems and used after recognizing that the user's utterance does not belong to a given task. This would lead to more natural interfaces for e.g. information kiosks or automatic guides placed in public places where anyone can talk to them about anything (Gustafson and Bell, 2000; Kopp et al., 2005) regardless of the role the developers intended. For this reason we have also started implementing emotiveness recognition and joke generation modules that are presented later in the paper.", "cite_spans": [ { "start": 36, "end": 54, "text": "(Liu et al., 2003;", "ref_id": "BIBREF0" }, { "start": 55, "end": 76, "text": "Reitter et al., 2006)", "ref_id": "BIBREF1" }, { "start": 430, "end": 458, "text": "(Bickmore and Cassell, 2001)", "ref_id": "BIBREF3" }, { "start": 1344, "end": 1370, "text": "(Gustafson and Bell, 2000;", "ref_id": "BIBREF4" }, { "start": 1371, "end": 1389, "text": "Kopp et al., 2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Well-known examples of non-task-oriented dialogue systems are ELIZA (Weizenbaum, 1966) and A.L.I.C.E 1 , though the former was built to parody a Rogerian therapist which can be regarded as a task. Both systems and their countless imitators 2 use a lot of rules coded by hand. ELIZA is able to make a response to any input, but these responses are only information requests without providing any new information to the user. In the case of A.L.I.C.E, the knowledge resource is limited to the existing database. Creating such databases is costly and a programmer must learn the AIML mark-up language to build it. Although there have been attempts at updating AIML databases automatically (Pietro et al., 2005) , the scale was rather limited.", "cite_spans": [ { "start": 68, "end": 86, "text": "(Weizenbaum, 1966)", "ref_id": "BIBREF6" }, { "start": 686, "end": 707, "text": "(Pietro et al., 2005)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As mentioned above, these examples and many other \"chatbots\" need hand-crafted rules, and are thus often ignored by computer scientists and rarely become a research topic. However, they have proved to be useful for e-learning (Pietro et al., 2005) and machine learning (Araki and Kuroda, 2006) support.", "cite_spans": [ { "start": 226, "end": 247, "text": "(Pietro et al., 2005)", "ref_id": "BIBREF7" }, { "start": 269, "end": 293, "text": "(Araki and Kuroda, 2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Building a system using automatic methods, like we do, seems to be the most realistic way for unrestricted domains. 
Considering the large cost of developing a program that can talk about any topic, it is appealing to turn to the huge and cheap textual source that is the Internet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "At this very moment millions of people (Kumar et al, 2003) are updating their blogs and writing articles on every possible topic. These are available on the Web, which we can access at any time, and the search engines that index them grow faster and more efficient. Thus, the Web is well suited to extracting word associations triggered by words from user utterances made in a topic-free dialogue system. We present a system making use of this type of information. It automatically extracts word association lists using all keywords in a given utterance without choosing a specific one (as most other systems, which ignore the context, do) and then generates a reply using only the single strongest association from the noun, verb and adjective association groups. Modality is then added to the reply, and then it is output.", "cite_spans": [ { "start": 39, "end": 58, "text": "(Kumar et al, 2003)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our system is built upon the idea that human utterances consist of a proposition and a modality (Nitta et al., 1989). In this paper we present an algorithm for extracting word associations from the Web and a method for adding modality to statements. We evaluate both the word associations and the use of modality. We also suggest some possible future extensions of the system and show a small experiment with adding humor to the system.", "cite_spans": [ { "start": 96, "end": 116, "text": "(Nitta et al., 1989)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The system described in this paper works for Japanese and uses text as input and output. Although the final goal of our research is to help develop freely talking car navigation systems whose chatting abilities can, for example, help drivers avoid drowsiness, in this part of the development we concentrate on proposition generation and modality processing. Therefore, we work only with text for now. We plan to combine this project with research on in-car voice recognition and generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this section, we present a method for automatic extraction of word associations based on keywords from user utterances. We use the Google 3 search engine snippets to extract word associations in real time without using earlier prepared resources, such as off-line databases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Word Associations", "sec_num": "2" }, { "text": "In the first step, the system analyzes user utterances using the morphological analyzer MeCab 4 in order to spot query keywords for extracting word association lists. We define nouns, verbs, adjectives, and unknown words as query keywords. The reason we chose these word classes is that they can be treated as important and, to some extent, describe the context. We define a noun as the longest set of nouns in a compound noun. For example, the compound noun shizen gengo shori 5 (natural language processing) is treated by MeCab as three words: (shizen -natural), (gengo -language) and (shori -processing). 
Our system, however, treats it as one noun. In the next step, the system uses these keywords as query words for the Google search engine. The system extracts the nouns from the search results and sorts them in frequency order. This process is based on the idea that words which co-occur frequently with the input words are of high relevance to them. The number of extracted snippets is 500. This value was set experimentally, taking the processing time and output quality into account. The top ten words of a list are treated as word associations; see Table 1 for an example. ", "cite_spans": [], "ref_spans": [ { "start": 1176, "end": 1183, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Extracting Word Associations from the Web", "sec_num": "2.1" }, { "text": "We asked volunteers to use our system and to evaluate the correctness of word lists generated by the system. First, a participant freely inputs an utterance, for which the system retrieves ten association words. Next, the participant rated these words using a scale of one to three, with 3 meaning \"perfectly correct\", 2 -\"partially correct\" and 1 -\"incorrect\". In this experiment we consider words that receive a 2 or 3 as usable. The reason associations rated 2 or 3 are considered usable is that the definition of a good word association is difficult to specify here. In topic-free conversations we have observed that such associations still have an effect on the context. Three volunteers repeated the experiment ten times, so the final number of evaluated words was 300. Table 2 shows the results for the top 10 words, sorted by the frequency of appearance. Table 3 shows the results for the top 5 words. What constitutes a correct word association was left to each volunteer to decide subjectively, since in a casual conversation setting associations are hard to define strictly. As shown in Table 2, approximately 77% of the word associations were judged as usable, but there were individual differences between the evaluators. This shows that the definition of word associations is different for each participant. Table 3 shows that approximately 80% of the word associations were judged as usable. It is thus highly likely that the top words from the frequency lists are correct associations. The results show that automatic extraction of word associations using a Web search engine is feasible. The main reason for extracting word associations from the Web is that, thanks to this method, the system can handle new information, proper names, technical terms and so on, using only the snippets from the search engine. The word association extraction takes no more than a few seconds. 
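The extraction step described above can be summarized in a short sketch. This is only an illustration, not the authors' implementation: the helper names fetch_snippets (any search-engine snippet retrieval, Google in the paper) and extract_nouns (e.g. MeCab-based noun extraction) are hypothetical assumptions.

```python
from collections import Counter

SNIPPET_LIMIT = 500   # number of snippets used in the paper (set experimentally)
TOP_N = 10            # top-frequency nouns treated as word associations

def word_associations(keywords, fetch_snippets, extract_nouns):
    """Return the TOP_N nouns that co-occur most often with the query keywords.

    fetch_snippets(query, limit) -> list of snippet strings (hypothetical search helper)
    extract_nouns(text)          -> list of nouns (e.g. obtained with MeCab)
    """
    snippets = fetch_snippets(" ".join(keywords), limit=SNIPPET_LIMIT)
    counts = Counter()
    for snippet in snippets:
        counts.update(extract_nouns(snippet))
    # Words co-occurring frequently with the input are assumed to be relevant to it.
    return [word for word, _ in counts.most_common(TOP_N)]
```

For the utterance in Table 1, such a sketch would return words like yuki or fuyu, assuming the helpers behave as described.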
For the evaluation we used only nouns, but we expect that word associations for verbs and adjectives, although these are often more abstract than nouns, will further improve the results.", "cite_spans": [], "ref_spans": [ { "start": 796, "end": 803, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 882, "end": 889, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 1115, "end": 1122, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 1337, "end": 1344, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "2.2" }, { "text": "The system generates replies in the following way:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Description of the System", "sec_num": "3" }, { "text": "\u2022 extraction of keywords from the user utterance", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Description of the System", "sec_num": "3" }, { "text": "\u2022 extraction of word associations from the Web", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Description of the System", "sec_num": "3" }, { "text": "\u2022 generation of a sentence proposition using the extracted associations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Description of the System", "sec_num": "3" }, { "text": "\u2022 addition of modality to the sentence proposition", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "General Description of the System", "sec_num": "3" }, { "text": "The system applies morphological analysis to the user utterances in the same way as described in section 2.1 and extracts keywords based on part of speech. Figure 1 shows the system flow.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Keywords from User Utterances", "sec_num": "3.1" }, { "text": "The system performs a Google search using the extracted keywords as a query. The system sorts the results obtained from the query by their frequency, as in section 2.1. In section 2.1 only nouns were extracted, but here we also extract verbs and adjectives. After sorting all words into adjective, verb and noun lists, the system uses the ones with the highest frequency as word associations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Word Associations from the Web", "sec_num": "3.2" },
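The keyword extraction in sections 2.1 and 3.1 can be sketched as follows. This is a minimal illustration under stated assumptions: tokenize stands for a MeCab-style tokenizer returning (surface, part-of-speech) pairs and is a hypothetical helper; consecutive nouns are merged into one compound noun, as described above.

```python
CONTENT_POS = {"noun", "verb", "adjective", "unknown"}

def extract_keywords(utterance, tokenize):
    """Return query keywords: nouns, verbs, adjectives and unknown words.

    tokenize(text) -> list of (surface, pos) pairs, e.g. produced with MeCab.
    Consecutive nouns are merged, so a compound such as 'shizen gengo shori'
    is treated as a single keyword.
    """
    keywords, noun_buffer = [], []
    for surface, pos in tokenize(utterance):
        if pos == "noun":
            noun_buffer.append(surface)          # keep building the compound noun
            continue
        if noun_buffer:                          # the compound noun has ended
            keywords.append("".join(noun_buffer))
            noun_buffer = []
        if pos in CONTENT_POS:                   # verbs, adjectives, unknown words
            keywords.append(surface)
    if noun_buffer:
        keywords.append("".join(noun_buffer))
    return keywords
```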
{ "text": "Using the associations, the system generates the proposition of a sentence to be used as a reply to the user input. A proposition is an expression representing an objective statement. The proposition is generated by applying associations to a proposition template like [(noun) (topic indicating particle wa) (adjective)]. We prepared 8 proposition templates manually (see Table 4). The templates were chosen subjectively after examining statistics from IRC 6 chat logs. Our criteria for choosing templates from the chat logs were that they should belong to the 20 most frequent modality patterns and be flexible enough to fit a range of grammatical constructions; for example, in English, \"isn't it\" cannot follow verbs, while \"I guess\" can follow nouns, adjectives, and verbs. The proposition templates are applied in a predetermined order: for example, first the template \"(noun) (wa) (adjective)\" is used; next the template \"(noun) (ga) (adjective)\" is used. However, since the generated proposition is not always a natural statement, the system uses exact-match searches of the whole phrases in a search engine to check the naturalness of each proposition. If the frequency of occurrence of the proposition is low, it is defined as unnatural and deleted. This processing is based on the idea that phrases existing on the Web in large numbers are most probably correct grammatically and semantically. If an unnatural proposition is generated, the system generates another proposition in the same way. In this experiment the system used propositions for which the hit number exceeded 1,000 hits using Google. Thus, the processing proceeds as follows. The system first selects the top noun, top verb, and top adjective word associations. These are applied to the templates in a predetermined order. If a generated proposition is judged as valid (using Google, occurrence on the Web indicates validity), it is used. If not, another template is tried until a valid proposition is found. The reason for not trying every possible combination of associated words is the prohibitively long processing time. ", "cite_spans": [], "ref_spans": [ { "start": 372, "end": 379, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Generation of Proposition Using Word Associations", "sec_num": "3.3" }, { "text": "Finally, the system adds modality to the generated proposition. By modality we mean a set of grammatical and pragmatic rules used to express subjective judgments and attitudes. In our system, modality is realized through adverbs at the end of a sentence, which is common in Japanese (Nitta et al., 1989). In our system, a pair of a sentence-head expression and a sentence-end auxiliary verb is defined as \"modality\".", "cite_spans": [ { "start": 277, "end": 296, "text": "(Nitta et al., 1989", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Adding Modality to the Propositions", "sec_num": "3.4" }, { "text": "There is no standard definition of what constitutes modality in Japanese. In this paper the modality of casual conversation is classified into questions and informative expressions. Questions are expressions that request information from the user. Informative expressions are expressions that transmit information to the user. Patterns for these modalities are extracted automatically from IRC chat logs (100,000 utterances) in advance. 
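Before turning to how the modality patterns themselves are mined, the proposition generation and naturalness check of section 3.3 can be illustrated with a minimal sketch. It is not the authors' code: hit_count stands for any exact-phrase search-engine count (Google in the paper) and is a hypothetical helper, and the template list is abbreviated.

```python
# Abbreviated proposition templates in the spirit of Table 4; the slots are filled
# with the strongest noun/verb/adjective associations in a predetermined order.
TEMPLATES = [
    "{noun}wa{adjective}",   # (noun) (wa) (adjective)
    "{noun}ga{adjective}",   # (noun) (ga) (adjective)
    "{noun}ga{verb}",        # (noun) (ga) (verb)
    "{noun}wa{verb}",        # (noun) (wa) (verb)
]

MIN_HITS = 1000  # propositions with fewer exact-match hits are treated as unnatural

def generate_proposition(noun, verb, adjective, hit_count):
    """Return the first template instantiation that looks natural on the Web.

    hit_count(phrase) -> number of exact-match search-engine hits (assumed helper).
    """
    for template in TEMPLATES:
        proposition = template.format(noun=noun, verb=verb, adjective=adjective)
        if hit_count(proposition) >= MIN_HITS:
            return proposition
    return None  # no template produced a natural proposition
```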
Modality patterns are extracted in these ways:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Modality", "sec_num": "3.4.1" }, { "text": "\u2022 pairs of grammatical particles and auxiliary verbs placed at the end of sentences are defined as ending patterns", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Modality", "sec_num": "3.4.1" }, { "text": "\u2022 sentences with question marks are defined as questions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Modality", "sec_num": "3.4.1" }, { "text": "\u2022 adverbs, emotive words, and connectives at the beginning of sentences are defined as informative expressions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Modality", "sec_num": "3.4.1" }, { "text": "\u2022 candidate patterns thus obtained are sorted by frequency", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Modality", "sec_num": "3.4.1" }, { "text": "First the system extracts sentence ending patterns from the IRC chat logs. If an expression contains question marks, it is classified as a question. Next, the system extracts adverbs, emotive words, and connectives from the beginning and end of sentences in the IRC logs. These pairs (beginning and end) of expressions are classified as \"informative expressions\". For example, the question expression \"desu-ka?\" is extracted from a human utterance like \"Kyou-wa samui desu-ka?\" (Is it cold today?). An informative expression \"maa *** kedo\" is extracted from a human utterance such as \"Maa sore-wa ureshii kedo\" (Well, I'm glad, but you know...). 685 patterns were obtained for informative expressions. 550 of these informative expression patterns (80%) were considered correct by the authors. For questions 396 patterns were obtained, and 292 patterns (73%) were evaluated as correct. We sorted these candidates in frequency order. The words appearing at the top of the list were correct, but even the ones appearing only once were still deemed usable. For example, the question expression \"janakatta deshita-kke?\" is a correct expression, but appeared only once in the 100,000 utterances. Hence, we confirmed that chat logs include various modality expressions, and only a few of them are incorrect. Tables 5 and 6 show some examples of modality patterns. ", "cite_spans": [], "ref_spans": [ { "start": 1291, "end": 1305, "text": "Tables 5 and 6", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Extracting Modality", "sec_num": "3.4.1" }, { "text": "The system adds the modality from section 3.4.1 to the proposition from section 3.3 to generate the system output. This process is based on the idea that a human utterance consists of a proposition and a modality. A modality pattern is selected randomly. For example, if the system generates the proposition \"fuyu wa samui (Winter is cold.)\" and selects the modality \"iyaa ... desu-yo (Ooh ... isn't it?)\", the generated output will be \"iyaa, fuyu-wa samui desu-yo (Winter is cold, you know)\". However, there is a possibility that the system generates unnatural output like \"fuyu-wa samui dayo-ne (Winter is cold, aren't it?)\", depending on the pair of proposition and modality. To address this problem, the system uses the Google search engine to filter out unnatural output. The system performs a phrase search on the end of the sentence. If the number of search hits is higher than the threshold, the output is judged as correct. If the number of search hits is lower than the threshold, the output is judged as incorrect and discarded, and a new reply is generated. Here, we experimentally set the threshold to 100 hits.
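A minimal sketch of this modality step, under the same assumptions as before (hit_count is a hypothetical exact-phrase Web-count helper; modality_patterns stands for the sentence-head and sentence-ending pairs mined from the IRC logs):

```python
import random

HIT_THRESHOLD = 100  # outputs whose sentence ending is rarer than this are rejected

def add_modality(proposition, modality_patterns, hit_count, max_tries=10):
    """Wrap a proposition with a randomly chosen (head, ending) modality pair.

    modality_patterns: list of (sentence_head, sentence_ending) pairs such as
    ("iyaa, ", " desu-yo"), mined from IRC chat logs.
    hit_count(phrase) -> exact-match search-engine hits (assumed helper).
    """
    for _ in range(max_tries):
        head, ending = random.choice(modality_patterns)
        # Approximate the paper's check: search for the end of the candidate
        # sentence (tail of the proposition plus the ending pattern).
        if hit_count(proposition[-2:] + ending) >= HIT_THRESHOLD:
            return f"{head}{proposition}{ending}"
    return proposition  # fall back to the bare proposition
```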
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adding Modality", "sec_num": "3.4.2" }, { "text": "We used system \u03b1, generating only the proposition, and system \u03b2, generating both the proposition and modality. Five participants used each system for conversations of 10 turns and evaluated the conversations on a 5-point scale. Evaluation criteria were \"will to continue the conversation\" (A), \"grammatical naturalness of dialogues\" (B), \"semantic naturalness of dialogues\" (C), \"vocabulary richness\" (D), \"knowledge richness\" (E), and \"humanity of the system\" (F). Table 7 shows average scores for the evaluations of each system. System \u03b2, which uses modality, scored much higher than system \u03b1. Table 8 shows examples of actual dialogue. In the evaluation, the participants expressed the opinion that an utterance like (xx wa yy) alone is unnatural, while using a modality like (maa) (\"well\") or (moo) (\"anyway\") is very natural. Thus we can say that the modality expressions make the utterances of the system seem more natural.", "cite_spans": [], "ref_spans": [ { "start": 462, "end": 469, "text": "Table 7", "ref_id": "TABREF7" }, { "start": 589, "end": 597, "text": "Table 8", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "4" }, { "text": "The simplicity, the real-time processing capability and the promising results showing that users do not get bored quickly encouraged us to perform trials with other ongoing projects and to experiment with the system as a platform for adding various modules and algorithms. By using our system it is possible to test whether a new idea supports or improves human-computer interaction. Here we will briefly describe two such trials: one on guessing emotive values of utterances and one on improving the system's overall evaluation by adding a pun generator. An example dialogue with system \u03b2 (Table 8) follows. user: Nanika suki-na tabemono aru? (What food do you like?) system: Maa, tabemono-wa oishii desu.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The System as a Platform", "sec_num": "5" }, { "text": "(Well, food tastes good.) user: Saikin-wa osake-mo suki-ni natte-kita-n-da. (Recently, I have started to like alcohol, too.) system: Demo, sake-wa yowai-no-yo-ne.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The System as a Platform", "sec_num": "5" }, { "text": "(But, I can't drink much.) user: Yowai-n-da... Sugu yochau-no? (Really...? Do you get drunk quickly?) system: Maa, sore-wa sugoi-wa-ne.", "cite_spans": [ { "start": 47, "end": 62, "text": "Sugu yochau-no?", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The System as a Platform", "sec_num": "5" }, { "text": "(Whoa, that's amazing.) user: Boku-wa wari-to osake-wa tsuyoi hou-nan-da. (I have a rather high tolerance for alcohol.) system: Iya, sore-wa tsuyoi-n-de.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The System as a Platform", "sec_num": "5" }, { "text": "(Well, it is strong, I can say.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The System as a Platform", "sec_num": "5" }, { "text": "user: Kyou-wa sukoshi hadazamui-yo-ne. Natsu-nanoni. (It is a little cold today. Even though it is summer.) system: Asa-wa hadazamui-toka?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The System as a Platform", "sec_num": "5" }, { "text": "(Isn't it chilly in the morning?) 
user: Asa-kara hadazamukatta-ne.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The System as a Platform", "sec_num": "5" }, { "text": "(It has been chilly since morning.) system: Aa kyou-wa hayai-na.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The System as a Platform", "sec_num": "5" }, { "text": "(Ah, it's early.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The System as a Platform", "sec_num": "5" }, { "text": "Ptaszynski et al. have developed a method for affect analysis of Japanese text. Their method is based on cross-referencing lexical emotive elements with emotive expressions appearing in text. In the process of analysis, first a general emotive context is determined, and then the specific types of emotional states conveyed in an utterance are extracted. They support this method with a", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Testing Affect Analysis", "sec_num": "5.1" }, { "text": "Web-mining technique to improve the performance of the emotional state type extraction. A system constructed on the basis of their method achieved human-level performance in determining the emotiveness of utterances, and 65% of human-level performance in extracting the specific types of emotions. Also, the supporting Web mining technique improved the performance of the emotional state type extraction to 85% of the human level (Shi et al, 2008). As these are very promising figures, we are currently implementing their ideas in our system and testing how emotion recognition can influence speech act analysis and the automatic choice of proper modality.", "cite_spans": [ { "start": 430, "end": 447, "text": "(Shi et al, 2008)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Testing Affect Analysis", "sec_num": null }, { "text": "In this trial, we conducted an experiment showing that humor can improve a non-task-oriented conversational system's overall performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Improving the System Using Humor", "sec_num": "5.2" }, { "text": "By using a simplified version of Dybala's PUNDA system, a pun generator was added to our baseline system. The PUNDA algorithm consists of two parts: a Candidate Selection Algorithm and a Sentence Integration Engine. The former generates a candidate for a pun by analyzing an input utterance and selecting words or phrases that could be transformed into a pun by one of four generation patterns: homophony, initial mora addition, internal mora addition or final mora addition. The latter part generates a sentence including the candidate extracted in the previous step. 
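The candidate-selection idea can be illustrated with a rough sketch (the integration step is described next). This is only an illustration: the mora list is a tiny subset and known_pun_words is a hypothetical stand-in for PUNDA's pun lexicon.

```python
MORA = ["ka", "ki", "ku", "ke", "ko", "sa", "shi", "su", "ta", "te", "to"]  # tiny subset

def pun_candidates(base_phrase, known_pun_words):
    """Return phrases that could serve as pun candidates for base_phrase.

    Four simplified patterns are tried: homophony (the phrase itself) and
    mora addition at the beginning, inside, or at the end of the phrase.
    """
    candidates = set()
    if base_phrase in known_pun_words:                         # homophony
        candidates.add(base_phrase)
    for mora in MORA:
        variants = [mora + base_phrase, base_phrase + mora]    # initial / final addition
        variants += [base_phrase[:i] + mora + base_phrase[i:]  # internal addition
                     for i in range(1, len(base_phrase))]
        candidates.update(v for v in variants if v in known_pun_words)
    return sorted(candidates)
```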
To make the system's response more related to the user's input, each sentence that included a joke started with the pattern \"[base phrase] to ieba\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementing PUNDA system", "sec_num": "5.2.1" }, { "text": "(\"Speaking of [base phrase]\"). The remaining part of the sentence was extracted from the Web: the candidate was used as a query word, and a list of sentences including this word was retrieved. Then the shortest sentence containing an exclamation mark was selected, as most jokes convey some emotion. When the candidate list was empty, the system selected one random pun from a pun database.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Implementing PUNDA system", "sec_num": "5.2.1" }, { "text": "In the first experiment, 5 participants were asked to perform a 10-turn dialogue with the two systems. After using both systems (baseline and humor-equipped), users were asked to evaluate both systems' performances by answering the following questions: A) Do you want to continue the dialogue?; B) Were the system's utterances grammatically natural?; C) Were the system's utterances semantically natural?; D) Was the system's vocabulary rich?; E) Did you get an impression that the system possesses any knowledge?; F) Did you get an impression that the system was human-like?; G) Do you think the system tried to make the dialogue funnier and more interesting? and H) Did you find the system's utterances interesting and funny? Answers were given on a 5-point scale and the results are shown in Table 9.", "cite_spans": [], "ref_spans": [ { "start": 787, "end": 794, "text": "Table 9", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Experiment results", "sec_num": "5.2.2" }, { "text": "A third-person evaluation experiment was also performed, and again the humor-equipped system scored higher than the non-humor one. The question asked in this evaluation was: \"Which dialogue do you find most interesting and funny?\". Evaluators could choose between 3 options: Dialogue 1 (baseline system, first 3 turns), Dialogue 2 (humor-equipped system, first 3 turns with the system's third response replaced by the pun generator's output) and Dialogue 3 (the first 3 turns of the baseline system with joking ability). Dialogue 1 and Dialogue 2 have the same input. Among 25 evaluators, only 5 (20%) responded that Dialogue 1 was most interesting and funny. 10 chose Dialogue 2 and the other 10 chose Dialogue 3 (40% each). This means that each of the humor-equipped dialogues was chosen twice as often as the non-humor dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment results", "sec_num": "5.2.2" }, { "text": "Our system can also be disassembled into a set of flexible tools which help students experiment with dialogue processing. By using the simple web-mining techniques we described, this dialogue engine is capable of automatically retrieving associations which can be used to produce a whole range of utterances; for example, by using the bottom, not the top, of the association list, one can examine how interesting or provocative the dialogue becomes. As the system has a CGI interface, the experiments are easy, and any new feature (for instance, a speech act choice menu) can easily be added. Such a toolkit gives students an opportunity to experiment on a given aspect of dialogue processing without the need to build a conversation system from scratch. 
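As a small example of the kind of variation this toolkit allows - a sketch only, with a hypothetical helper name - switching from the top to the bottom of the association list is a one-line change:

```python
def pick_association(ranked_words, provocative=False):
    """Choose which association feeds the proposition templates.

    ranked_words: associations sorted from most to least frequent.
    With provocative=True the least frequent (more surprising) word is used,
    the kind of variation students can experiment with.
    """
    if not ranked_words:
        return None
    return ranked_words[-1] if provocative else ranked_words[0]
```

For the associations in Table 1, for instance, this would return heya instead of yuki.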
There is also no need for laborious knowledge input and, because such an open-domain-oriented system generates new \"on topic\" utterances, experiment subjects do not get bored quickly, which is always a problem when collecting conversation logs of human-machine interaction. A programmer can also freely choose between thousands of IRC log utterances and Internet resources for statistical trials, grammar pattern retrieval, and speech act analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Toolkit for Conversation-Related Experiments", "sec_num": "5.3" }, { "text": "In this research we investigated whether word associations extracted automatically from the Web are reasonable (semantically on topic) and whether they can be successfully used in non-task-oriented dialogue systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "We also implemented a system based on such an extraction module. It is able to automatically generate real-time responses to user utterances by generating a proposition and adding modality retrieved from IRC chat logs. We conducted evaluation experiments on the overall influence of the modality usage, and it improved the system. Therefore we showed that it is possible to construct a dialogue system that automatically generates understandable on-topic utterances without the need to create vast amounts of rules and data beforehand. We also confirmed that our system can be used as an experimental platform that other researchers can easily use to test their algorithms with a more unpredictable (and less boring) \"chatbot\", an important factor for long, tiring sessions of human-computer conversation. Currently there are several projects which use the system described here as a platform for experiments, and we introduced two of them - on joke generation and affect analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "There is still a lot of work left to be done. It is necessary for a non-task-oriented dialogue system to obtain not only word associations, but also different kinds of knowledge - of the user's preferences or of the dialogue itself - for example, conversational strategies. At this moment the system generates utterances by applying word associations to the proposition templates and adding modality. We also need to consider semantics, speech acts and context more deeply to create a more advanced system. Finally, the system needs to recognize not only keywords, but also the user's modality. We assume that the affect recognition mentioned above will help us to achieve this goal in the near future, and this is our next step. By opening the system's code and giving others the opportunity to add their own modules and changes, we hope to solve the remaining problems. In this paper we focus on the impact of adding modality to a system. Comparing the system to Japanese versions of ELIZA (already available) and ALICE (not available in Japanese yet) is also one of our next steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Wallace, R. The Anatomy of A.L.I.C.E. 
http://www.alicebot.org/anatomy.html.2 Many of them have been quite successful in the Loebner Prize and the Chatterbox Challenge (competitions only for English-speaking bots) but explanations of their algorithms are not available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Google, http://www.google.co.jp/ 4 MeCab: Yet Another Part-of-Speech and Morphological Analyzer, http://mecab.sourceforge.jp/ 5 All Japanese transcriptions will be written in italics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Internet Relay Chat Protocol, http://www.irchelp.org/irchelp/rfc/rfc.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was partially supported by the Research Grant from the Nissan Science Foundation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The method of building expectation model in task-oriented dialogue systems and its realization algorithms", "authors": [ { "first": "Bei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Limin", "middle": [], "last": "Du", "suffix": "" }, { "first": "Shuiyuan", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2003, "venue": "Proceedings of Natural Language Processing and Knowledge Engineering", "volume": "", "issue": "", "pages": "174--179", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bei Liu, Limin Du, Shuiyuan Yu. 2003 The method of building expectation model in task-oriented dia- logue systems and its realization algorithms. Proceed- ings of Natural Language Processing and Knowledge Engineering:174-179", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Priming of syntactic rules in task-oriented dialogue and spontaneous conversation", "authors": [ { "first": "David", "middle": [], "last": "Reitter", "suffix": "" }, { "first": "Johanna", "middle": [ "D" ], "last": "Moore", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Keller", "suffix": "" } ], "year": 2006, "venue": "Proc. 28th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Reitter, Johanna D. Moore, and Frank Keller. 2006. Priming of syntactic rules in task-oriented di- alogue and spontaneous conversation. In Proc. 28th", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Annual Conference of the Cognitive Science Society (CogSci)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Conference of the Cognitive Science Society (CogSci), Vancouver, Canada.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Relational Agents: A Model and Implementation of Building User Trust", "authors": [ { "first": "Timothy", "middle": [], "last": "Bickmore", "suffix": "" }, { "first": "Justine", "middle": [], "last": "Cassell", "suffix": "" } ], "year": 2001, "venue": "Proceedings of Human Factors Computing Systems (SIGCHI'01", "volume": "", "issue": "", "pages": "396--403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Timothy Bickmore and Justine Cassell. 2001 Relational Agents: A Model and Implementation of Building User Trust. 
Proceedings of Human Factors Computing Systems (SIGCHI'01): 396-403.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Speech technology on trial: Experiences from the August system", "authors": [ { "first": "Joakim", "middle": [], "last": "Gustafson", "suffix": "" }, { "first": "Linda", "middle": [], "last": "Bell", "suffix": "" } ], "year": 2000, "venue": "Natural Language Engineering", "volume": "1", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joakim Gustafson and Linda Bell. 2000. Speech technol- ogy on trial: Experiences from the August system. In Natural Language Engineering, 1(1):1-15.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A Conversational Agent as Museum Guide-Design and Evaluation of a Real-World Application. Intelligent Virtual Agents", "authors": [ { "first": "Stefan", "middle": [], "last": "Kopp", "suffix": "" }, { "first": "Lars", "middle": [], "last": "Gesellensetter", "suffix": "" }, { "first": "Nicole", "middle": [ "C" ], "last": "Kramer", "suffix": "" }, { "first": "Ipke", "middle": [], "last": "Wachsmuth", "suffix": "" } ], "year": 2005, "venue": "LNAI", "volume": "3661", "issue": "", "pages": "329--343", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Kopp, Lars Gesellensetter, Nicole C. Kramer, and Ipke Wachsmuth. 2005. A Conversational Agent as Museum Guide-Design and Evaluation of a Real- World Application. Intelligent Virtual Agents, LNAI 3661:329-343.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "ELIZA -computer program for the study of natural language communication between man and machine", "authors": [ { "first": "Joseph", "middle": [], "last": "Weizenbaum", "suffix": "" } ], "year": 1966, "venue": "Commun. ACM", "volume": "9", "issue": "1", "pages": "36--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Weizenbaum. 1966. ELIZA -computer pro- gram for the study of natural language communica- tion between man and machine. Commun. ACM, vol.9, no.1:36-45.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Automatic Update of AIML Knowledge Base in E-Learning Environment", "authors": [ { "first": "Maurizio", "middle": [], "last": "Orlando De Pietro", "suffix": "" }, { "first": "Giovanni", "middle": [], "last": "De Rose", "suffix": "" }, { "first": "", "middle": [], "last": "Frontera", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Computers and Advanced Technology in Education", "volume": "", "issue": "", "pages": "29--31", "other_ids": {}, "num": null, "urls": [], "raw_text": "Orlando De Pietro, Maurizio De Rose, and Giovanni Frontera. 2005. Automatic Update of AIML Knowl- edge Base in E-Learning Environment. In Proceedings of Computers and Advanced Technology in Education. , Oranjestad, Aruba, August:29-31.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Generality of a Spoken Dialogue System Using SeGA-IL for Different Languages", "authors": [ { "first": "Kenji", "middle": [], "last": "Araki", "suffix": "" }, { "first": "Michitomo", "middle": [], "last": "Kuroda", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the IASTED International Conference COMPUTER INTELLIGENCE", "volume": "", "issue": "", "pages": "70--75", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kenji Araki and Michitomo Kuroda. 2006. 
Gener- ality of a Spoken Dialogue System Using SeGA- IL for Different Languages, Proceedings of the IASTED International Conference COMPUTER INTELLIGENCE:70-75.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "On the Bursty Evolution of Blogspace", "authors": [ { "first": "Ravi", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Jasmine", "middle": [], "last": "Novak", "suffix": "" }, { "first": "Prabhakar", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Tomkins", "suffix": "" } ], "year": 2003, "venue": "Proceedings of The Twelfth International World Wide Web Conference", "volume": "", "issue": "", "pages": "568--257", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ravi Kumar, Jasmine Novak, Prabhakar Raghavan, and Andrew Tomkins. 2003. On the Bursty Evolution of Blogspace. Proceedings of The Twelfth International World Wide Web Conference:568-257", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Japanese modality(Nihongo no modality) Kuroshio", "authors": [ { "first": "Yoshio", "middle": [], "last": "Nitta", "suffix": "" }, { "first": "Takashi", "middle": [], "last": "Masuoka", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoshio Nitta and Takashi Masuoka, Japanese modal- ity(Nihongo no modality) Kuroshio.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Double Standpoint Evaluation Method for Affect Analysis System", "authors": [ { "first": "Michal", "middle": [], "last": "Ptaszynski", "suffix": "" }, { "first": "Pawel", "middle": [], "last": "Dybala", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Rzepka", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Araki", "suffix": "" } ], "year": 2008, "venue": "The 22nd Annual Conference of Japanese Society for Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michal Ptaszynski, Pawel Dybala, Rafal Rzepka, and Kenji Araki. 2008. Double Standpoint Evaluation Method for Affect Analysis System. The 22nd Annual Conference of Japanese Society for Artificial Intelli- gence (JSAI 2008).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Emotive Information Discovery from User Textual Input Using Causal Associations from the Internet", "authors": [ { "first": "Wenhan", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Rzepka", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Araki", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 7th Forum of Information Technology", "volume": "", "issue": "", "pages": "267--268", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenhan Shi, Rafal Rzepka and Kenji Araki. 2008. 
Emo- tive Information Discovery from User Textual In- put Using Causal Associations from the Internet (in Japanese)\", Proceedings of the 7th Forum of Informa- tion Technology(Vol2):267-268", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Extracting Dajare Candidates from the Web -Japanese Puns Generating System as a Part of Humor Processing Research", "authors": [ { "first": "Pawel", "middle": [], "last": "Dybala", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Ptaszynski", "suffix": "" }, { "first": "Rafal", "middle": [], "last": "Rzepka", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Araki", "suffix": "" } ], "year": 2008, "venue": "Proceedings of LIBM'08 First International Workshop on Laughter in Interaction and Body Movement", "volume": "", "issue": "", "pages": "46--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pawel Dybala, Michal Ptaszynski, Rafal Rzepka and Kenji Araki. 2008. Extracting Dajare Candidates from the Web -Japanese Puns Generating System as a Part of Humor Processing Research. Proceedings of LIBM'08 First International Workshop on Laughter in Interaction and Body Movement:46-51.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "System flow", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "html": null, "num": null, "type_str": "table", "content": "
Sapporo wa samui. (Sapporo city is cold.)
Association frequency ranking:
1  yuki (snow)  52
2  fuyu (winter)  50
3  kion (temperature)  16
4  jiki (season)  12
5  Tokyo (Tokyo)  12
6  tenki (weather)  11
7  chiiki (area)  10
8  heya (room)  10
", "text": "Examples of noun associations triggered by a user utterance" }, "TABREF1": { "html": null, "num": null, "type_str": "table", "content": "
Top 10 word associations
score  participant (A B C)  total
3  40 52 57  149
2  37 17 27  81
1  23 31 16  70
usability (%)  77 69 84  77
", "text": "" }, "TABREF2": { "html": null, "num": null, "type_str": "table", "content": "
score  participant A B C  total
3  20 29 36  85
2  17 9 10  36
1  13 12 4  29
usability (%)  74 76 92  81
", "text": "Top 5 word associations" }, "TABREF3": { "html": null, "num": null, "type_str": "table", "content": "
: Proposition templates
(noun) (wa) (adjective)
(noun) (ga) (adjective)
(noun) (ga) (verb)
(noun) (wa) (verb)
(so-re) (wa) (verb)
(noun)
(adjective)
(verb)
", "text": "" }, "TABREF4": { "html": null, "num": null, "type_str": "table", "content": "
informative expression  frequency
maa -kedo  21
(Well, it can be said -but -)
maa -dana  16
(Well, it can be said -)
maa -desu-ga  16
(Well, it appears that -)
soko-de -desu-yo  15
(Here, it is said that -)
maa -da-ga  14
(Well, it can be said -but -)
maa -desu-yo  12
(Well, it is that -)
", "text": "Examples of informative expression modality" }, "TABREF5": { "html": null, "num": null, "type_str": "table", "content": "
question  frequency
...desuka?  232
(Is it that ... ?)
...kana?  90
(Maybe ... ?)
...da-kke?  87
(Is it right that ... ?)
...masu-ka?  69
(Is it that ... ?)
...nano?  68
(Is it that ... ?)
...toka?  55
( ... , isn't it ?)
", "text": "Examples of question modality sentence endings" }, "TABREF6": { "html": null, "num": null, "type_str": "table", "content": "", "text": "Examples of dialogues with system \u03b2 user: Nanika suki-na tabemono aru?" }, "TABREF7": { "html": null, "num": null, "type_str": "table", "content": "
", "text": "Evaluation Results" }, "TABREF8": { "html": null, "num": null, "type_str": "table", "content": "
Evaluation Criteria  A B C D E F G H
Baseline System  3.0 2.2 2.4 2.4 2.0 2.8 2.2 2.8
With pun generator  3.2 3.0 2.8 2.8 2.2 3.0 3.4 3.6
", "text": "Results of humor experiments" } } } }