| { | |
| "paper_id": "Y04-1027", | |
| "header": { | |
| "generated_with": "S2ORC 1.0.0", | |
| "date_generated": "2023-01-19T13:34:48.819005Z" | |
| }, | |
| "title": "Extraction of Cognition Results of Travel Routes with a Thesaurus", | |
| "authors": [ | |
| { | |
| "first": "Kazutaka", | |
| "middle": [], | |
| "last": "Takao", | |
| "suffix": "", | |
| "affiliation": { | |
| "laboratory": "", | |
| "institution": "Kobe University Kobe University", | |
| "location": { | |
| "addrLine": "1-1 Rokkodai-cho, Nada-ku, Kobe 1-1 Rokkodai-cho, Nada-ku", | |
| "postCode": "657-8501, 657-8501", | |
| "settlement": "Kobe", | |
| "country": "Japan, Japan" | |
| } | |
| }, | |
| "email": "" | |
| }, | |
| { | |
| "first": "Yasuo", | |
| "middle": [], | |
| "last": "Asakura", | |
| "suffix": "", | |
| "affiliation": { | |
| "laboratory": "", | |
| "institution": "Kobe University Kobe University", | |
| "location": { | |
| "addrLine": "1-1 Rokkodai-cho, Nada-ku, Kobe 1-1 Rokkodai-cho, Nada-ku", | |
| "postCode": "657-8501, 657-8501", | |
| "settlement": "Kobe", | |
| "country": "Japan, Japan" | |
| } | |
| }, | |
| "email": "asakura@kobe-u.ac.jp" | |
| } | |
| ], | |
| "year": "", | |
| "venue": null, | |
| "identifiers": {}, | |
| "abstract": "We are attempting to model travel route choice behaviour with language to describe the thinking process of travelers because words can directly and clearly reflect their psychological states from a bottom-up viewpoint. This paper shows a method that extracts impressions and feelings, i.e., cognition results of travel routes, out of open-ended questionnaire texts with a thesaurus. Complex words are also allowed as cognition results. Additional considerations and training contents are also reported. Finally, an experiment on the extraction of cognition results from unseen texts is reported.", | |
| "pdf_parse": { | |
| "paper_id": "Y04-1027", | |
| "_pdf_hash": "", | |
| "abstract": [ | |
| { | |
| "text": "We are attempting to model travel route choice behaviour with language to describe the thinking process of travelers because words can directly and clearly reflect their psychological states from a bottom-up viewpoint. This paper shows a method that extracts impressions and feelings, i.e., cognition results of travel routes, out of open-ended questionnaire texts with a thesaurus. Complex words are also allowed as cognition results. Additional considerations and training contents are also reported. Finally, an experiment on the extraction of cognition results from unseen texts is reported.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Abstract", | |
| "sec_num": null | |
| } | |
| ], | |
| "body_text": [ | |
| { | |
| "text": "We are attempting to linguistically model spatial cognition and travel route choice behaviour. In the field of travel engineering, a unit of travel called a \"trip\" is expressed as movement from an origin to a destination; route choice behaviour is expressed as choice from the set of alternatives in each trip. Existing city and traffic infrastructure planning are mainly framed according to economical effects and demand estimations. Many studies have approached route choice behaviour from such top-down standpoints. These studies are more interested in the results of travel behaviour rather than the psychological state. Many have used numerical equations to explain behaviour quantitatively, and psychological factors are often expressed with internal variables.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Background", | |
| "sec_num": "1.1" | |
| }, | |
| { | |
| "text": "On the other hand, when developing new products, a company must analyze customers' awareness from a bottom-up viewpoint. In the same way, it is also important for traffic infrastructure planning to analyze from a bottom-up viewpoint, observing travelers' psychological states when making choices. We expect to handle psychological factors more clearly with words. Psychological states of route choice behaviour can be explained by words with analyzing open-ended questionnaire texts.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Background", | |
| "sec_num": "1.1" | |
| }, | |
| { | |
| "text": "Route choice behaviour \"recognizes\" the characteristics of the alternatives (i.e. travel routes) in a choice set and \"chooses\" one route. Therefore, route choice behaviour can be expressed as a two-stage process that places the \"cognition result\" between the travel route and the behaviour as shown in Figure 1 .", | |
| "cite_spans": [], | |
| "ref_spans": [ | |
| { | |
| "start": 302, | |
| "end": 310, | |
| "text": "Figure 1", | |
| "ref_id": "FIGREF0" | |
| } | |
| ], | |
| "eq_spans": [], | |
| "section": "Modeling of Travel Route Choice Behaviour", | |
| "sec_num": "1.2" | |
| }, | |
| { | |
| "text": "The first stage recognizes the characteristics, feelings, and impressions of each route in the choice set, i.e., cognition results. The second stage chooses a route by evaluating cognition results. These stages are analyzed with open-ended questionnaire texts. This paper describes the first stage, i.e., the \"recognize\" stage whose objective is to extract enough cognition results to handle the travel route choosing process in the second stage. This paper introduces a process that uses a thesaurus to extract cognition results from open-ended questionnaire texts. ", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Modeling of Travel Route Choice Behaviour", | |
| "sec_num": "1.2" | |
| }, | |
| { | |
| "text": "Extracted cognition results are handled in the second stage. Therefore, we have to see the second stage to clarify the requirements of the first stage. The second stage is expressed by such decision-making models as Tversky's Elimination-By-Aspects (EBA) model (1972) . In our study an \"aspect\" is a feature of a situation that shares several alternatives, such as \"bright\" and \"comfortable.\" The cognition results correspond to aspects. In the EBA model, a decision is made by eliminating alternatives and judging whether each alternative includes the aspect in question. For example, when a traveler wants to choose a \"comfortable\" route, the routes that do not have the cognition result \"comfortable\" are eliminated from the choice set. If more than one route remains, the traveler considers another aspect in the next priority order. In this way, a final route is chosen by eliminating routes according to the priority order of aspects. The structure of decision making can be plainly analyzed from open-ended questionnaire texts because the words that represent such aspects as \"comfortable\" appear in them.", | |
| "cite_spans": [ | |
| { | |
| "start": 261, | |
| "end": 267, | |
| "text": "(1972)", | |
| "ref_id": null | |
| } | |
| ], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Aspect in EBA Model", | |
| "sec_num": "1.3" | |
| }, | |
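| { | |
| "text": "To make the elimination process concrete, the following is a minimal Python sketch of elimination by aspects over cognition results; the route names, aspects, and priority order are hypothetical illustrations, not data from this study.\n\ndef eliminate_by_aspects(routes, priority):\n    # routes: dict mapping route name -> set of cognition results (aspects)\n    # priority: aspects ordered from most to least important\n    remaining = set(routes)\n    for aspect in priority:\n        kept = {r for r in remaining if aspect in routes[r]}\n        if kept:                 # eliminate routes that lack the aspect\n            remaining = kept\n        if len(remaining) == 1:  # a single route is left: choose it\n            break\n    return remaining\n\nroutes = {'subway': {'fast', 'certain'}, 'bus': {'cheap'}, 'bicycle': {'cheap', 'comfortable'}}\nprint(eliminate_by_aspects(routes, ['comfortable', 'cheap', 'fast']))  # -> {'bicycle'}", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Aspect in EBA Model", | |
| "sec_num": "1.3" | |
| }, | |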
| { | |
| "text": "Consequently, the requirements of this paper are as follows. First, cognition results should be extracted as expressions of aspects. Therefore, we have to handle the meanings of words by considering what the questionnaire texts want to say rather than simply handling the words grammatically. Second, cognition results should be categorized easily because aspects indicate the categories of the meaning of words rather than external appearance.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Aspect in EBA Model", | |
| "sec_num": "1.3" | |
| }, | |
| { | |
| "text": "Note that the word \"aspect\" in this paper reflects the meaning used in the EBA model rather than the normal grammatical meaning.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Aspect in EBA Model", | |
| "sec_num": "1.3" | |
| }, | |
| { | |
| "text": "Some studies have attempted to analyze human psychological states from open-ended questionnaire texts. Inui (2004) analyzed open-ended questionnaire texts about traffic infrastructure planning, focusing on the intention behind answers. This paper, however, concentrates on analyzing how people feel. Tateishi et al. (2002) and Kobayashi et al. (2004) extracted evaluations of specific products from the large quantity of language resources that exist on the Web. They easily found language resources using product names as keywords. However, evaluations of travel behaviour are difficult to collect from the Web because the awareness of habitual activity tends to be latent, and moreover, it is difficult to distinguish whether it expresses travel behaviour because it is lacking specific keywords. Hence, large language resources do not always exist in this research. Moreover, evaluations of aspects are analyzed from texts as priority orders rather than their method that used a ", | |
| "cite_spans": [ | |
| { | |
| "start": 103, | |
| "end": 114, | |
| "text": "Inui (2004)", | |
| "ref_id": "BIBREF2" | |
| }, | |
| { | |
| "start": 300, | |
| "end": 322, | |
| "text": "Tateishi et al. (2002)", | |
| "ref_id": "BIBREF11" | |
| }, | |
| { | |
| "start": 327, | |
| "end": 350, | |
| "text": "Kobayashi et al. (2004)", | |
| "ref_id": "BIBREF3" | |
| } | |
| ], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Related Studies", | |
| "sec_num": "1.4" | |
| }, | |
| { | |
| "text": "Using a thesaurus to extract cognition results from open-ended questionnaire texts is convenient for the following reasons. First, we can use the wealth of vocabulary covered in the thesaurus even if it does not appear in training texts. This is advantageous for obtaining high vocabulary coverage without collecting large amounts of training texts. Second, extracted cognition results can easily correspond to categories of the thesaurus to be classified into aspects. We suggested a framework of extracting cognition results from open-ended texts with a thesaurus (See Asakura (2004-a) or (2004-d) for details). After determining the relationship between the thesaurus categories and cognition results, we can extract cognition results from unseen texts using the relationship as a template. The method of extraction is as follows:", | |
| "cite_spans": [ | |
| { | |
| "start": 571, | |
| "end": 599, | |
| "text": "Asakura (2004-a) or (2004-d)", | |
| "ref_id": null | |
| } | |
| ], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |
| { | |
| "text": "Step 1: Invest the semantic code of each morpheme by looking up the thesaurus.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |
| { | |
| "text": "\u2022 Step 2: Extract cognition results using the relationship. We used the Kadokawa New Thesaurus released by Oono and Hamanishi (1989) as a thesaurus, Chasen as a morphological analyzer, and CaboCha as a dependency structure analyzer. The classification of a Kadokawa thesaurus is indicated with semantic codes. An example is shown in Figure 2 . The semantic code of \"bright\" is 141a. If we know the relationship that code 141a expresses cognition results, we can judge \"bright\" as a cognition result. However, the following problems were found by simply applying the above steps: (a) One morpheme is sometimes too short to understand as a cognition result, and hence thin meaning words were extracted. For example, the cognition result \"time\" can be extracted from the sentence \"Waiting takes time.\" However, \"time\" is too short to be understood as an aspect.", | |
| "cite_spans": [ | |
| { | |
| "start": 107, | |
| "end": 132, | |
| "text": "Oono and Hamanishi (1989)", | |
| "ref_id": "BIBREF4" | |
| } | |
| ], | |
| "ref_spans": [ | |
| { | |
| "start": 333, | |
| "end": 341, | |
| "text": "Figure 2", | |
| "ref_id": "FIGREF1" | |
| } | |
| ], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |
| { | |
| "text": "(b) Incorrect cognition results were extracted from some expressions. For example, \"sit\" was extracted as a cognition result from the sentence: \"I feel anxious about traffic jams even if I can sit.\" But this is wrong. The words in the expression \"even if\" should be ignored.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |
| { | |
| "text": "Therefore, the following considerations were added to the steps. First, we allowed complex words that consist of multi-morphemes as cognition results. Second, a judgment step of the expression patterns is added as follows:", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |
| { | |
| "text": "Step 3: Judge the expression patterns to filter out incorrect cognition results.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |
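| { | |
| "text": "To illustrate Steps 1-3, the following is a minimal Python sketch of the extraction loop under simplifying assumptions: morphological analysis is replaced by a pre-segmented word list rather than ChaSen/CaboCha output, and the tiny thesaurus, relationship set, and ignore patterns are toy stand-ins for the trained resources (the codes 141a, 009b, and 958 are taken from Figure 2).\n\nTHESAURUS = {'bright': '141a', 'night': '009b', 'street light': '958'}\nCOGNITION_CODES = {'141a'}      # Step 2: codes related to cognition results\nIGNORE_PATTERNS = ['even if']   # Step 3: expressions whose words should be ignored\n\ndef extract_cognition_results(words, sentence):\n    # Crude Step 3: skip the whole sentence when an ignore pattern appears\n    # (the paper ignores only the words inside such expressions).\n    if any(p in sentence for p in IGNORE_PATTERNS):\n        return []\n    results = []\n    for word in words:\n        code = THESAURUS.get(word)     # Step 1: look up the semantic code\n        if code in COGNITION_CODES:    # Step 2: apply the relationship\n            results.append((word, code))\n    return results\n\nprint(extract_cognition_results(['bright', 'night', 'street light'], 'It is bright even at night because of the street lights.'))  # -> [('bright', '141a')]", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |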
| { | |
| "text": "Cognition results are clarified by the allowance of complex words. On the other hand, the following demerits are anticipated:", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |
| { | |
| "text": "[Figure 2: Extraction with a Thesaurus. Example sentence: \"It is bright even at night because of the street lights.\" Step 1 (thesaurus lookup): 141a <light/bright>, 009b <night>, 958 <lamplight>. Step 2 (relationship): code 141a is judged to express a cognition result.]", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |
| { | |
| "text": "Step 2: Relationship cognition result PACLIC 18, December 8th-10th, 2004, Waseda University, Tokyo (a) Complex words are usually not covered in a thesaurus. Hence, we have to collect the vocabularies ourselves.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |
| { | |
| "text": "(b) The standard length of morphemes becomes ambiguous. Hence, we allowed the following complex words as cognition results: words whose synonyms or antonyms are covered in the thesaurus.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |
| { | |
| "text": "Example 1, synonyms (Japanese words are romanized): Covered in the thesaurus: kokochiyoi (comfortable). Allowed as a complex word: kimochi ga yoi ((feeling) (particle) (good)).", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |
| { | |
| "text": "Example 2, antonyms: Covered in the thesaurus: teiji (scheduled time). Allowed as a complex word: touchaku-jikan ga fuantei ((arrival time) (particle) (unsettled)).", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Extraction Process", | |
| "sec_num": "2" | |
| }, | |
| { | |
| "text": "Thus, we have to train the following contents:", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "touchaku-jikan ga fuantei", | |
| "sec_num": null | |
| }, | |
| { | |
| "text": "(a) Vocabulary knowledge for Step 1. (b) Relationship between the semantic codes and cognition results for Step 2. (c) Expression patterns to avoid extracting incorrect cognition words for Step 3.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "touchaku-jikan ga fuantei", | |
| "sec_num": null | |
| }, | |
| { | |
| "text": "We need to collect linguistic resources that include feelings and impressions. Newspaper texts are widely used in natural language processing (NLP), however, they are inappropriate for this study because the articles are usually written objectively without subjective views. Therefore, we must collect the data set ourselves. Moreover, subjects are required to write actively what they feel about travel routes. Many existing studies that analyze travel behaviour have used observation equipments to collect numerical data. In such cases, the data is collected automatically even if the subjects are passive. On the other hand, subjects are required to be active in our study. Therefore, we must realize that the amount of linguistic resources in our case is not as large as other NLP tasks, such as machine translations, because of the difficulty of collection.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Data Set Preparation", | |
| "sec_num": "3" | |
| }, | |
| { | |
| "text": "We conducted a questionnaire about going from a certain place to Kyoto City Hall (See Takao and Asakura (2004-b) for details). Four routes were shown as the choice set; bicycle, subway, bus, and taxi. Scenarios were presented in which such spatial conditions as season, weather, and time were different. The subjects were asked to write freely what they thought about each route and each scenario in open-ended texts. This is a questionnaire about transport mode choice in the narrow meaning. However, it can be treated within the travel route choice in the wide meaning because its process is also expressed as \"recognize\" and \"choose.\"", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Data Set Preparation", | |
| "sec_num": "3" | |
| }, | |
| { | |
| "text": "1209 valid sentences are collected. 200 sentences were separated at random as the test set for unseen texts. 1009 sentences were used for training.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Data Set Preparation", | |
| "sec_num": "3" | |
| }, | |
| { | |
| "text": "Trained Contents", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "4", | |
| "sec_num": null | |
| }, | |
| { | |
| "text": "Limiting the semantic codes of wide meaning words into a travel route choice task is important to filter out extra noise. For example, \" \" has meanings that include \"hang,\" \"lack,\" \"run,\" kakeru \"spend\" etc. If this word is used in the expression \" (baggage) (particle) ,\" the kaban wo kakeru semantic code should be limited to \"hang\"; otherwise, such an inaccurate cognition result as \"spend\" would be extracted.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Limitation of Semantic Code(s)", | |
| "sec_num": "4.1.1" | |
| }, | |
| { | |
| "text": "We collected the vocabularies of uncovered words from training texts whose semantic codes were also trained by hand. This work is necessary for handling travel behaviour because some basic words are not covered in the thesaurus.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Uncovered Words", | |
| "sec_num": "4.1.2" | |
| }, | |
| { | |
| "text": "Examples:", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Uncovered Words", | |
| "sec_num": "4.1.2" | |
| }, | |
| { | |
| "text": "(every 5th and 10th day), (door-to-door), (mimetic word; fast), gotoobi doatsuudoa byuun (access), (annoying) akusesu iratsuku", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Uncovered Words", | |
| "sec_num": "4.1.2" | |
| }, | |
| { | |
| "text": "A collection of complex words means the combination of nouns, verbs, particles, etc. Therefore, many expressions have the same meaning. For example, the following words that mean \"obviousness of travel time\" are collected from the training texts as shown in Table 1 . Sometimes, cognition results do not appear directly. In everyday conversations, feeling or impression words are often hidden by common sense and tacit understandings. In this paper, indirect expressions are treated as cognition results by interpreting them with common sense into direct expressions covered in the thesaurus and estimating those semantic codes. Some examples are shown in Table 2 . ", | |
| "cite_spans": [], | |
| "ref_spans": [ | |
| { | |
| "start": 258, | |
| "end": 265, | |
| "text": "Table 1", | |
| "ref_id": "TABREF1" | |
| }, | |
| { | |
| "start": 656, | |
| "end": 663, | |
| "text": "Table 2", | |
| "ref_id": "TABREF2" | |
| } | |
| ], | |
| "eq_spans": [], | |
| "section": "Complex Words", | |
| "sec_num": "4.1.3" | |
| }, | |
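| { | |
| "text": "As a minimal sketch of how the indirect expressions in Table 2 could be normalized before thesaurus lookup, the following Python dictionary holds only the two published examples; the function name and data structure are our own illustration, not the authors' implementation.\n\n# Interpret indirect expressions into direct ones covered in the thesaurus (cf. Table 2).\nINDIRECT_TO_DIRECT = {\n    'nimotsu ga heru': ('migaru', '693'),   # baggage decreases -> agile -> 693 <easy>\n    'kyori ga nagai': ('tooi', '108a'),     # distance is long -> far -> 108a <far>\n}\n\ndef interpret_indirect(phrase):\n    # Returns (direct expression, semantic code) or None if no interpretation is trained.\n    return INDIRECT_TO_DIRECT.get(phrase)\n\nprint(interpret_indirect('kyori ga nagai'))  # -> ('tooi', '108a')", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Complex Words", | |
| "sec_num": "4.1.3" | |
| }, | |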
| { | |
| "text": "The Kadokawa thesaurus consists of 2814 categories. 109 categories were judged as the categories of cognition results. Table 3 shows the 10 topmost categories that appeared frequently in the training texts. Their total number of appearance and examples are also shown. ", | |
| "cite_spans": [], | |
| "ref_spans": [ | |
| { | |
| "start": 119, | |
| "end": 126, | |
| "text": "Table 3", | |
| "ref_id": "TABREF3" | |
| } | |
| ], | |
| "eq_spans": [], | |
| "section": "Relationship", | |
| "sec_num": "4.2" | |
| }, | |
| { | |
| "text": "Knowledge of special expressions helps avoid extracting incorrect cognition results. For example,", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Expression Patterns", | |
| "sec_num": "4.3" | |
| }, | |
| { | |
| "text": "(The subway would have been faster.) Chikatetsu no hou ga hayakat ta to koukai suru.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Expression Patterns", | |
| "sec_num": "4.3" | |
| }, | |
| { | |
| "text": "In this sentence, \" (fast)\" expresses another route, not the focused route. Therefore, the hayakat words in the expression \" \" should be ignored. no hou ga ... ta Another example follows:", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Expression Patterns", | |
| "sec_num": "4.3" | |
| }, | |
| { | |
| "text": "(It is certainly fast.) Kakujitsu ni hayai.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Expression Patterns", | |
| "sec_num": "4.3" | |
| }, | |
| { | |
| "text": "hayai \" (certainly)\" is not a cognition result, i.e., certainty is not recognized. Rather, \" (fast)\" is recognized, and \" \" only modifies \" .\" Therefore, modifiers that express the kakujitsu hayai degree of cognition results should be ignored. We collected 14 expression patterns from the training texts.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Kakujitsu", | |
| "sec_num": null | |
| }, | |
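| { | |
| "text": "A minimal Python sketch of how the two patterns above could be applied as a Step 3 filter; the regular expression and the modifier list are rough approximations of the 14 trained patterns, which are not reproduced here, and dropping the whole sentence is a simplification of ignoring only the words inside a pattern.\n\nimport re\n\nCOMPARISON = re.compile('no hou ga .* ta')  # '... no hou ga ... ta' refers to another route\nDEGREE_MODIFIERS = ['kakujitsu ni ']        # degree modifiers such as 'certainly'\n\ndef filter_expression_patterns(sentence):\n    sentence = sentence.lower()\n    if COMPARISON.search(sentence):\n        return ''                             # drop words governed by the comparison pattern\n    for mod in DEGREE_MODIFIERS:\n        sentence = sentence.replace(mod, '')  # keep only the modified cognition result\n    return sentence\n\nprint(filter_expression_patterns('Kakujitsu ni hayai.'))                              # -> 'hayai.'\nprint(filter_expression_patterns('Chikatetsu no hou ga hayakat ta to koukai suru.'))  # -> ''", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Expression Patterns", | |
| "sec_num": "4.3" | |
| }, | |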
| { | |
| "text": "We carried out an experiment that extracted cognition results from unseen texts. 117 sentences different from training texts were selected out of the separated 200 sentences. The overlap ratio of the same sentences is high because some sentences consisted of only one word, for example \"hot,\" and because some cognition results are common regardless of scenario. Only different sentences were chosen for this experiment to see the availability of extraction from absolutely unseen texts.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Experiment on Unseen Texts", | |
| "sec_num": "5" | |
| }, | |
| { | |
| "text": "The results are as follows: Table 4 shows the number of extracted or missing cognition results. The causes of the faults are also shown. If we treat (B) as faults, recall is 86.9% and precision is 88.8%. Only one fault is caused by insufficient training of uncovered words. Many of the unseen simple words are covered in the thesaurus. Therefore, even if they do not appear in the training texts, cognition results have been correctly extracted by referring to the thesaurus. This means that the proposed method of using the thesaurus is effective. On the other hand, many of the faults are caused by insufficient training of complex words because these vocabularies are collected only by training texts. Finally, note that no faults are caused by insufficient training of relationships.", | |
| "cite_spans": [], | |
| "ref_spans": [ | |
| { | |
| "start": 28, | |
| "end": 35, | |
| "text": "Table 4", | |
| "ref_id": "TABREF4" | |
| } | |
| ], | |
| "eq_spans": [], | |
| "section": "Experiment on Unseen Texts", | |
| "sec_num": "5" | |
| }, | |
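| { | |
| "text": "For reference, the reported figures are consistent with computing recall and precision from the totals in Table 4 as follows, treating (B) as faults; the formulas are our reading of the numbers rather than an explicit statement in the paper.\n\n# Totals from Table 4: (A) extracted correctly, (B) partly extracted, (C) extra noise, (M) missing.\nA, B, C, M = 119, 9, 6, 9\nrecall = A / (A + B + M)     # correct / cognition results that should have been extracted\nprecision = A / (A + B + C)  # correct / extractions actually produced\nprint(f'recall = {recall:.1%}, precision = {precision:.1%}')  # -> recall = 86.9%, precision = 88.8%", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Experiment on Unseen Texts", | |
| "sec_num": "5" | |
| }, | |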
| { | |
| "text": "The problem of complex words collection remains. Because there are various expressions of complex words, it is inefficient to collect them from training texts, i.e., to wait passively for their appearance in the corpus. Rather, it may be more efficient to let some subjects write complex words of the same meaning actively by showing keywords. Moreover, since this task is similar to collecting paraphrasing words, such ideas as Takao et al. (2002) are useful. Furthermore, it will be efficient to train as mappings from combinations of semantic codes with particles to semantic codes of cognition results. For example, \"", | |
| "cite_spans": [ | |
| { | |
| "start": 429, | |
| "end": 448, | |
| "text": "Takao et al. (2002)", | |
| "ref_id": "BIBREF9" | |
| } | |
| ], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Complex Words Collection", | |
| "sec_num": "6.1" | |
| }, | |
| { | |
| "text": "\" and \" \" in Table 1 yomeru wakaru have the same semantic code 413a <cognition>, hence they can be trained with code 413a rather than with each vocabulary.", | |
| "cite_spans": [], | |
| "ref_spans": [ | |
| { | |
| "start": 13, | |
| "end": 28, | |
| "text": "Table 1 yomeru", | |
| "ref_id": "TABREF1" | |
| } | |
| ], | |
| "eq_spans": [], | |
| "section": "Complex Words Collection", | |
| "sec_num": "6.1" | |
| }, | |
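| { | |
| "text": "A minimal sketch of the proposed mapping from combinations of semantic codes joined by a particle to the semantic code of a cognition result; 413a <cognition> is taken from the text, while the code for jikan (time) and the target category code are placeholders because the paper does not list them.\n\nTIME_CODE = 'xxx'    # placeholder for the semantic code of 'jikan' (time)\nTARGET_CODE = 'yyy'  # placeholder for the cognition-result category code\n\n# One trained combination covers every complex word whose parts carry these codes,\n# e.g. 'jikan ga yomeru' and 'jikan ga wakaru', so 'yomeru' and 'wakaru' need not\n# be stored as separate vocabulary items.\nCOMBINATION_MAP = {(TIME_CODE, 'ga', '413a'): TARGET_CODE}\n\ndef map_combination(code1, particle, code2):\n    return COMBINATION_MAP.get((code1, particle, code2))\n\nprint(map_combination(TIME_CODE, 'ga', '413a'))  # -> 'yyy'", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Complex Words Collection", | |
| "sec_num": "6.1" | |
| }, | |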
| { | |
| "text": "The denial of cognition results should be extracted to be understood properly as an aspect. It can be expressed as a modifier of cognition result categories. In some cases, cognition results were denied directly; however, in other cases, the action verb is denied, and the cognition result modifies the verb. In both cases, the extraction should be in the form of the \"denial of the cognition result.\" Details will be reported in another paper.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Denial", | |
| "sec_num": "6.2" | |
| }, | |
| { | |
| "text": "\"It is not hot\" not hot denial of 146a <hot> \"I cannot arrive there easily\" not easily denial of 693 <easy>", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Example:", | |
| "sec_num": null | |
| }, | |
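| { | |
| "text": "A minimal sketch of producing output of the form denial of <code> for the two examples above; the negation cues and the function are illustrative only, since the detailed treatment is deferred to another paper.\n\nCOGNITION_WORDS = {'hot': '146a', 'easily': '693'}  # codes taken from the examples above\n\ndef extract_with_denial(words):\n    results = []\n    negated = 'not' in words or 'cannot' in words   # crude negation cue on English glosses\n    for w in words:\n        code = COGNITION_WORDS.get(w)\n        if code:\n            results.append('denial of ' + code if negated else code)\n    return results\n\nprint(extract_with_denial(['It', 'is', 'not', 'hot']))                    # -> ['denial of 146a']\nprint(extract_with_denial(['I', 'cannot', 'arrive', 'there', 'easily']))  # -> ['denial of 693']", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Denial", | |
| "sec_num": "6.2" | |
| }, | |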
| { | |
| "text": "This paper shows the extraction of cognition results. In the same way, such static attributes of travel routes as facilities and the dynamic conditions of travel space such as weather, season, time, etc. can be also extracted. Thus, the causality of generating cognition results can be extracted as rules. For example, the following rule, extracted from the text shown in Figure 2 , is useful for travel infrastructure planners to give hints for modifying travel spaces.", | |
| "cite_spans": [], | |
| "ref_spans": [ | |
| { | |
| "start": 372, | |
| "end": 380, | |
| "text": "Figure 2", | |
| "ref_id": "FIGREF1" | |
| } | |
| ], | |
| "eq_spans": [], | |
| "section": "Causality of Cognition Results", | |
| "sec_num": "6.3" | |
| }, | |
| { | |
| "text": "Street light & night bright.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Causality of Cognition Results", | |
| "sec_num": "6.3" | |
| }, | |
| { | |
| "text": "We extracted cognition results from open-ended questionnaire texts. The accomplishments of this paper are briefly summarized as follows:", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Conclusion", | |
| "sec_num": "7" | |
| }, | |
| { | |
| "text": "(1) We extracted cognition results of travel routes from open-ended questionnaire texts with a thesaurus with high accuracy.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Conclusion", | |
| "sec_num": "7" | |
| }, | |
| { | |
| "text": "(2) Not only the covered words in the thesaurus, complex words are also allowed as cognition results.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Conclusion", | |
| "sec_num": "7" | |
| }, | |
| { | |
| "text": "(3) The collection of complex words was insufficient because they are not covered in existing thesauri.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Conclusion", | |
| "sec_num": "7" | |
| }, | |
| { | |
| "text": "The results of this paper were used to analyze the second stage, i.e., the \"choose\" stage. See Takao and Asakura (2004-c) for details.", | |
| "cite_spans": [ | |
| { | |
| "start": 95, | |
| "end": 121, | |
| "text": "Takao and Asakura (2004-c)", | |
| "ref_id": null | |
| } | |
| ], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Conclusion", | |
| "sec_num": "7" | |
| } | |
| ], | |
| "back_matter": [ | |
| { | |
| "text": "Our thanks are extended to everyone at the Institute of System Science Research for their kind cooperation with the questionnaire survey.", | |
| "cite_spans": [], | |
| "ref_spans": [], | |
| "eq_spans": [], | |
| "section": "Acknowledgements", | |
| "sec_num": null | |
| } | |
| ], | |
| "bib_entries": { | |
| "BIBREF0": { | |
| "ref_id": "b0", | |
| "title": "CaboCha", | |
| "authors": [], | |
| "year": null, | |
| "venue": "", | |
| "volume": "", | |
| "issue": "", | |
| "pages": "", | |
| "other_ids": {}, | |
| "num": null, | |
| "urls": [], | |
| "raw_text": "CaboCha. http://chasen.org/~taku/software/cabocha/.", | |
| "links": null | |
| }, | |
| "BIBREF1": { | |
| "ref_id": "b1", | |
| "title": "ChaSen", | |
| "authors": [], | |
| "year": null, | |
| "venue": "", | |
| "volume": "", | |
| "issue": "", | |
| "pages": "", | |
| "other_ids": {}, | |
| "num": null, | |
| "urls": [], | |
| "raw_text": "ChaSen. http://chasen.naist.jp/hiki/ChaSen/.", | |
| "links": null | |
| }, | |
| "BIBREF2": { | |
| "ref_id": "b2", | |
| "title": "A study on extraction of intention of open-ended questionnaire texts and automatic classification -mainly about request intention", | |
| "authors": [ | |
| { | |
| "first": "H", | |
| "middle": [], | |
| "last": "Inui", | |
| "suffix": "" | |
| } | |
| ], | |
| "year": 2004, | |
| "venue": "", | |
| "volume": "", | |
| "issue": "", | |
| "pages": "", | |
| "other_ids": {}, | |
| "num": null, | |
| "urls": [], | |
| "raw_text": "Inui, H. 2004. A study on extraction of intention of open-ended questionnaire texts and automatic classification -mainly about request intention -. Ph.D. Thesis, Kobe University, Japan (in Japanese).", | |
| "links": null | |
| }, | |
| "BIBREF3": { | |
| "ref_id": "b3", | |
| "title": "Collecting evaluative Proceedings of the 1st International Joint Conference on Natural expressions for opinion extraction. in", | |
| "authors": [ | |
| { | |
| "first": "N", | |
| "middle": [], | |
| "last": "Kobayashi", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "K", | |
| "middle": [], | |
| "last": "Inui", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "Y", | |
| "middle": [], | |
| "last": "Matsumoto", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "K", | |
| "middle": [], | |
| "last": "Tateishi", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "T", | |
| "middle": [], | |
| "last": "Fukushima", | |
| "suffix": "" | |
| } | |
| ], | |
| "year": 2004, | |
| "venue": "", | |
| "volume": "", | |
| "issue": "", | |
| "pages": "584--589", | |
| "other_ids": {}, | |
| "num": null, | |
| "urls": [], | |
| "raw_text": "Kobayashi, N., Inui, K., Matsumoto, Y., Tateishi, K., and Fukushima, T. 2004. Collecting evaluative Proceedings of the 1st International Joint Conference on Natural expressions for opinion extraction. in , pp. 584-589. Language Processing (IJCNLP-04)", | |
| "links": null | |
| }, | |
| "BIBREF4": { | |
| "ref_id": "b4", | |
| "title": "Kadokawa Shoten Kadokawa New Thesaurus CD-ROM Version Publishing / Fujitsu", | |
| "authors": [ | |
| { | |
| "first": "S", | |
| "middle": [], | |
| "last": "Oono", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "M", | |
| "middle": [], | |
| "last": "Hamanishi", | |
| "suffix": "" | |
| } | |
| ], | |
| "year": 1989, | |
| "venue": "", | |
| "volume": "", | |
| "issue": "", | |
| "pages": "", | |
| "other_ids": {}, | |
| "num": null, | |
| "urls": [], | |
| "raw_text": "Oono, S. and Hamanishi, M. 1989. , Kadokawa Shoten Kadokawa New Thesaurus CD-ROM Version Publishing / Fujitsu, Japan.", | |
| "links": null | |
| }, | |
| "BIBREF5": { | |
| "ref_id": "b5", | |
| "title": "Extraction of cognition rules of travel routes with a thesaurus", | |
| "authors": [ | |
| { | |
| "first": "K", | |
| "middle": [], | |
| "last": "Takao", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "Y", | |
| "middle": [], | |
| "last": "Asakura", | |
| "suffix": "" | |
| } | |
| ], | |
| "year": 2004, | |
| "venue": "Proceedings of The Tenth Annual Meeting of The Association for Natural Language Processing", | |
| "volume": "", | |
| "issue": "", | |
| "pages": "103--106", | |
| "other_ids": {}, | |
| "num": null, | |
| "urls": [], | |
| "raw_text": "Takao, K. and Asakura, Y. 2004-a. Extraction of cognition rules of travel routes with a thesaurus. in , pp. Proceedings of The Tenth Annual Meeting of The Association for Natural Language Processing 103-106 (in Japanese).", | |
| "links": null | |
| }, | |
| "BIBREF6": { | |
| "ref_id": "b6", | |
| "title": "2004-b. Data collection for modeling spatial cognition and route choice Proceedings of the 24th Conference of Japan Society of Traffic Engineering behaviour linguistically", | |
| "authors": [ | |
| { | |
| "first": "K", | |
| "middle": [], | |
| "last": "Takao", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "Y", | |
| "middle": [], | |
| "last": "Asakura", | |
| "suffix": "" | |
| } | |
| ], | |
| "year": null, | |
| "venue": "", | |
| "volume": "", | |
| "issue": "", | |
| "pages": "", | |
| "other_ids": {}, | |
| "num": null, | |
| "urls": [], | |
| "raw_text": "Takao, K. and Asakura, Y. 2004-b. Data collection for modeling spatial cognition and route choice Proceedings of the 24th Conference of Japan Society of Traffic Engineering behaviour linguistically. in (to appear, in Japanese).", | |
| "links": null | |
| }, | |
| "BIBREF7": { | |
| "ref_id": "b7", | |
| "title": "2004-c. Extraction of choice strategy of aspects among travel routes from open-ended texts", | |
| "authors": [ | |
| { | |
| "first": "K", | |
| "middle": [], | |
| "last": "Takao", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "Y", | |
| "middle": [], | |
| "last": "Asakura", | |
| "suffix": "" | |
| } | |
| ], | |
| "year": null, | |
| "venue": "", | |
| "volume": "", | |
| "issue": "", | |
| "pages": "", | |
| "other_ids": {}, | |
| "num": null, | |
| "urls": [], | |
| "raw_text": "Takao, K. and Asakura, Y. 2004-c. Extraction of choice strategy of aspects among travel routes from open-ended texts. in , CD-ROM (to appear, Proceedings of the 30th Conference of Infrastructure Planning in Japanese).", | |
| "links": null | |
| }, | |
| "BIBREF8": { | |
| "ref_id": "b8", | |
| "title": "2004-d. Catching and modeling spatial cognition and route choice behaviour Proceedings of the 9th Conference of Hong Kong Society for Transportation Studies", | |
| "authors": [ | |
| { | |
| "first": "K", | |
| "middle": [], | |
| "last": "Takao", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "Y", | |
| "middle": [], | |
| "last": "Asakura", | |
| "suffix": "" | |
| } | |
| ], | |
| "year": null, | |
| "venue": "", | |
| "volume": "", | |
| "issue": "", | |
| "pages": "", | |
| "other_ids": {}, | |
| "num": null, | |
| "urls": [], | |
| "raw_text": "Takao, K. and Asakura, Y. 2004-d. Catching and modeling spatial cognition and route choice behaviour Proceedings of the 9th Conference of Hong Kong Society for Transportation Studies (9th linguistically. in (to appear).", | |
| "links": null | |
| }, | |
| "BIBREF9": { | |
| "ref_id": "b9", | |
| "title": "Comparing and extracting paraphrasing words with 2-way", | |
| "authors": [ | |
| { | |
| "first": "K", | |
| "middle": [], | |
| "last": "Takao", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "K", | |
| "middle": [], | |
| "last": "Imamura", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "H", | |
| "middle": [], | |
| "last": "Kashioka", | |
| "suffix": "" | |
| } | |
| ], | |
| "year": 2002, | |
| "venue": "Proceedings of Third International Conference on Language Resources and bilingual dictionaries. in", | |
| "volume": "", | |
| "issue": "", | |
| "pages": "1016--1022", | |
| "other_ids": {}, | |
| "num": null, | |
| "urls": [], | |
| "raw_text": "Takao, K., Imamura, K., and Kashioka, H. 2002. Comparing and extracting paraphrasing words with 2-way Proceedings of Third International Conference on Language Resources and bilingual dictionaries. in , pp.1016-1022.", | |
| "links": null | |
| }, | |
| "BIBREF11": { | |
| "ref_id": "b11", | |
| "title": "Analyzing opinions on the web --a Proceedings of the 64th Annual framework for combining information extraction with text mining", | |
| "authors": [ | |
| { | |
| "first": "K", | |
| "middle": [], | |
| "last": "Tateishi", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "S", | |
| "middle": [], | |
| "last": "Morinaga", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "K", | |
| "middle": [], | |
| "last": "Yamanishi", | |
| "suffix": "" | |
| }, | |
| { | |
| "first": "S", | |
| "middle": [], | |
| "last": "Fukushima", | |
| "suffix": "" | |
| } | |
| ], | |
| "year": 2002, | |
| "venue": "", | |
| "volume": "3", | |
| "issue": "", | |
| "pages": "19--20", | |
| "other_ids": {}, | |
| "num": null, | |
| "urls": [], | |
| "raw_text": "Tateishi, K., Morinaga, S., Yamanishi, K., and Fukushima, S. 2002. Analyzing opinions on the web --a Proceedings of the 64th Annual framework for combining information extraction with text mining --. in Vol. 3, pp. 19-20 (in Japanese).", | |
| "links": null | |
| }, | |
| "BIBREF12": { | |
| "ref_id": "b12", | |
| "title": "Meeting of Information Processing Society of Japan Tversky, A. 1972. Elimination by aspects: a theory of choice", | |
| "authors": [], | |
| "year": null, | |
| "venue": "", | |
| "volume": "79", | |
| "issue": "", | |
| "pages": "281--299", | |
| "other_ids": {}, | |
| "num": null, | |
| "urls": [], | |
| "raw_text": "Meeting of Information Processing Society of Japan Tversky, A. 1972. Elimination by aspects: a theory of choice. Vol. 79 No. 4, pp. Psychological Review 281-299.", | |
| "links": null | |
| } | |
| }, | |
| "ref_entries": { | |
| "FIGREF0": { | |
| "type_str": "figure", | |
| "num": null, | |
| "text": "Route Choice Process", | |
| "uris": null | |
| }, | |
| "FIGREF1": { | |
| "type_str": "figure", | |
| "num": null, | |
| "text": "Extraction with a Thesaurus", | |
| "uris": null | |
| }, | |
| "TABREF1": { | |
| "type_str": "table", | |
| "num": null, | |
| "html": null, | |
| "content": "<table><tr><td/><td/><td/><td/><td/><td colspan=\"8\">: Collected Words that Mean \"Obviousness of Travel Time\"</td></tr><tr><td>jikan</td><td>(time)</td><td colspan=\"2\">ga</td><td colspan=\"2\">(particle)</td><td colspan=\"7\">hakareru</td><td>(can estimate)</td></tr><tr><td>jikan</td><td>(time)</td><td colspan=\"2\">ga</td><td colspan=\"2\">(particle)</td><td colspan=\"6\">yomeru</td><td>(readable)</td></tr><tr><td>jikan</td><td>(time)</td><td colspan=\"3\">doori</td><td colspan=\"3\">(exactly)</td><td/><td/><td/><td/></tr><tr><td colspan=\"3\">shoyou-jikan</td><td colspan=\"6\">(required time)</td><td colspan=\"2\">ga</td><td colspan=\"2\">(particle)</td><td>yomeru</td><td>(can be read)</td></tr><tr><td colspan=\"3\">ryokou-jikan</td><td colspan=\"4\">(travel time)</td><td colspan=\"2\">no</td><td/><td colspan=\"3\">(particle)</td><td>yosou</td><td>(expectation)</td><td>ga</td><td>(particle)</td><td>tate</td><td>(plan)</td><td>yasui</td><td>(with</td></tr><tr><td>ease)</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"3\">shoyou-jikan</td><td colspan=\"6\">(required time)</td><td colspan=\"2\">no</td><td/><td>(particle)</td><td>mikomi</td><td>(expectation)</td><td>ga</td><td>(particle)</td><td>tate</td><td>(plan)</td><td>yasui</td></tr><tr><td colspan=\"2\">(easy)</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"2\">machi-jikan</td><td colspan=\"6\">(waiting time)</td><td colspan=\"2\">ga</td><td colspan=\"3\">(particle)</td><td>wakaru</td><td>(know)</td></tr><tr><td colspan=\"4\">touchaku-jikan</td><td colspan=\"5\">(arrival time)</td><td colspan=\"2\">ga</td><td colspan=\"2\">(particle)</td><td>keisan</td><td>(calculate)</td><td>dekiru</td><td>(can)</td></tr></table>", | |
| "text": "" | |
| }, | |
| "TABREF2": { | |
| "type_str": "table", | |
| "num": null, | |
| "html": null, | |
| "content": "<table><tr><td colspan=\"3\">Indirect words</td><td/><td/><td/><td/><td/><td/><td>Interpretation</td><td>Semantic Code</td></tr><tr><td colspan=\"2\">nimotsu</td><td colspan=\"2\">(baggage)</td><td>ga</td><td colspan=\"2\">(particle)</td><td colspan=\"2\">heru</td><td>(decrease)</td><td>migaru</td><td>(agile)</td><td>693 <easy></td></tr><tr><td>kyori</td><td colspan=\"2\">(distance)</td><td>ga</td><td colspan=\"2\">(particle)</td><td colspan=\"2\">nagai</td><td colspan=\"2\">(long)</td><td>tooi</td><td>(far)</td><td>108a <far></td></tr></table>", | |
| "text": "Interpretation and Semantic Codes of Indirect Words" | |
| }, | |
| "TABREF3": { | |
| "type_str": "table", | |
| "num": null, | |
| "html": null, | |
| "content": "<table><tr><td/><td/><td colspan=\"7\">: 10 Topmost Categories of Cognition Results</td></tr><tr><td>Category</td><td colspan=\"8\"># of words Example of Words</td></tr><tr><td colspan=\"2\">691a <unpleasant> 59</td><td colspan=\"3\">taihen</td><td/><td colspan=\"3\">(hard),</td><td>mendou</td><td>(troublesome),</td><td>uttoushii</td><td>(annoying)</td></tr><tr><td>171a <price></td><td>58</td><td colspan=\"2\">takai</td><td colspan=\"5\">(expensive),</td><td>hiyou ga kakaru</td><td>(cost),</td><td>yasui</td><td>(cheap)</td></tr><tr><td>155b <slow></td><td>56</td><td>osoi</td><td colspan=\"6\">(slow),</td><td>jikan ga kakaru</td><td>(time-consuming)</td></tr><tr><td>252a <wet></td><td>55</td><td colspan=\"4\">nureru</td><td colspan=\"3\">(get wet)</td></tr><tr><td>693 <easy></td><td>42</td><td>raku</td><td colspan=\"6\">(easy),</td><td>rakuchin</td><td>(easy)</td></tr><tr><td>166b <distinct></td><td>42</td><td colspan=\"5\">kakujitsu</td><td colspan=\"2\">(certain),</td><td>seikaku</td><td>(accurate)</td></tr><tr><td>691 <pleasant></td><td>41</td><td colspan=\"3\">kaiteki</td><td/><td colspan=\"3\">(comfortable),</td><td>kimochi ga yoi</td><td>(comfortable)</td></tr><tr><td>146a <hot></td><td>38</td><td>atsui</td><td/><td colspan=\"4\">(hot),</td><td>mushiatsui</td><td>(muggy),</td><td>atatakai</td><td>(warm)</td></tr><tr><td>146b <cold></td><td>37</td><td colspan=\"3\">samui</td><td colspan=\"4\">(cold),</td><td>tsumetai</td><td>(cold),</td><td>suzushii</td><td>(cool)</td></tr><tr><td colspan=\"2\">156c <early & late> 36</td><td colspan=\"2\">hayai</td><td colspan=\"5\">(early),</td><td>osoi</td><td>(late)</td></tr></table>", | |
| "text": "" | |
| }, | |
| "TABREF4": { | |
| "type_str": "table", | |
| "num": null, | |
| "html": null, | |
| "content": "<table><tr><td/><td/><td>: Experiment Results</td><td/><td/></tr><tr><td/><td>(A)</td><td>(B)</td><td>(C)</td><td>(M)</td></tr><tr><td/><td colspan=\"4\">Extracted correctly Partly extracted Extra noise Missing</td></tr><tr><td>Total</td><td>119</td><td>9</td><td>6</td><td>9</td></tr><tr><td>Insufficient training of...</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">limitation of semantic codes -</td><td>-</td><td>1</td><td>-</td></tr><tr><td>uncovered words</td><td>-</td><td>-</td><td>-</td><td>1</td></tr><tr><td>complex words</td><td>-</td><td>9</td><td>2</td><td>7</td></tr><tr><td>expression patterns</td><td>-</td><td>-</td><td>3</td><td>-</td></tr><tr><td>Morphological analyzer error</td><td>-</td><td>0</td><td>0</td><td>1</td></tr></table>", | |
| "text": "" | |
| } | |
| } | |
| } | |
| } |