{
"paper_id": "N06-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:46:18.778678Z"
},
"title": "A Fully-Lexicalized Probabilistic Model for Japanese Syntactic and Case Structure Analysis",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": "",
"affiliation": {},
"email": "kawahara@kc.t.u-tokyo.ac.jp"
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present an integrated probabilistic model for Japanese syntactic and case structure analysis. Syntactic and case structure are simultaneously analyzed based on wide-coverage case frames that are constructed from a huge raw corpus in an unsupervised manner. This model selects the syntactic and case structure that has the highest generative probability. We evaluate both syntactic structure and case structure. In particular, the experimental results for syntactic analysis on web sentences show that the proposed model significantly outperforms known syntactic analyzers.",
"pdf_parse": {
"paper_id": "N06-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "We present an integrated probabilistic model for Japanese syntactic and case structure analysis. Syntactic and case structure are simultaneously analyzed based on wide-coverage case frames that are constructed from a huge raw corpus in an unsupervised manner. This model selects the syntactic and case structure that has the highest generative probability. We evaluate both syntactic structure and case structure. In particular, the experimental results for syntactic analysis on web sentences show that the proposed model significantly outperforms known syntactic analyzers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Case structure (predicate-argument structure or logical form) represents what arguments are related to a predicate, and forms a basic unit for conveying the meaning of natural language text. Identifying such case structure plays an important role in natural language understanding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In English, syntactic case structure can be mostly derived from word order. For example, the left argument of the predicate is the subject, and the right argument of the predicate is the object in most cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address these problems and realize Japanese case structure analysis, wide-coverage case frames are required. For example, let us describe how to apply case structure analysis to the following sentence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Blaheta and Charniak proposed a statistical method",
"sec_num": null
},
{
"text": "bentou-wa taberu lunchbox-TM eat (eat lunchbox) In this sentence, taberu (eat) is a verb, and bentou-wa (lunchbox-TM) is a case component (i.e., argument) of taberu. The case marker of \"bentou-wa\" is hidden by the topic marker (TM) \"wa\". The analyzer matches \"bentou\" (lunchbox) with the most suitable case slot (CS) in the following case frame of \"taberu\" (eat). Since \"bentou\" (lunchbox) is included in the \"wo\" examples, its case is analyzed as \"wo\". As a result, we obtain the case structure \"\u03c6:ga bentou:wo taberu\", which means that the \"ga\" (nominative) argument is omitted, and the \"wo\" (accusative) argument is \"bentou\" (lunchbox). In this paper, we perform such case structure analysis based on example-based case frames that are constructed from a huge raw corpus in an unsupervised manner. Let us consider syntactic analysis, into which our method of case structure analysis is integrated. Recently, many accurate statistical parsers have been proposed (e.g., (Collins, 1999; Charniak, 2000) for English, (Uchimoto et al., 2000; Kudo and Matsumoto, 2002) for Japanese). Since they somehow use lexical information in the tagged corpus, they are called \"lexicalized parsers\". On the other hand, unlexicalized parsers have achieved accuracy almost equivalent to such lexicalized parsers (Klein and Manning, 2003). Accordingly, we can say that the state-of-the-art lexicalized parsers are mainly based on unlexical (grammatical) information due to the sparse data problem. Bikel also indicated that Collins' parser can use bilexical dependencies only 1.49% of the time; the rest of the time, it backs off to condition one word on just phrasal and part-of-speech categories (Bikel, 2004). This paper aims at exploiting much more lexical information, and proposes a fully-lexicalized probabilistic model for Japanese syntactic and case structure analysis. Lexical information is extracted not from a small tagged corpus, but from a huge raw corpus, in the form of case frames. This model performs case structure analysis with a generative probabilistic model based on the case frames, and selects the syntactic structure that has the highest case structure probability.",
"cite_spans": [
{
"start": 33,
"end": 47,
"text": "(eat lunchbox)",
"ref_id": null
},
{
"start": 953,
"end": 968,
"text": "(Collins, 1999;",
"ref_id": "BIBREF4"
},
{
"start": 969,
"end": 984,
"text": "Charniak, 2000)",
"ref_id": "BIBREF3"
},
{
"start": 998,
"end": 1021,
"text": "(Uchimoto et al., 2000;",
"ref_id": "BIBREF18"
},
{
"start": 1022,
"end": 1047,
"text": "Kudo and Matsumoto, 2002)",
"ref_id": "BIBREF13"
},
{
"start": 1275,
"end": 1300,
"text": "(Klein and Manning, 2003;",
"ref_id": "BIBREF12"
},
{
"start": 1661,
"end": 1674,
"text": "(Bikel, 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Blaheta and Charniak proposed a statistical method",
"sec_num": null
},
{
"text": "We employ automatically constructed case frames for our model of case structure analysis. This section outlines the method for constructing the case frames. A large corpus is automatically parsed, and case frames are constructed from modifier-head examples in the resulting parses. The problems of automatic case frame construction are syntactic and semantic ambiguities. That is to say, the parsing results inevitably contain errors, and verb senses are intrinsically ambiguous. To cope with these problems, case frames are gradually constructed from reliable modifier-head examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatically Constructed Case Frames",
"sec_num": "2"
},
{
"text": "First, modifier-head examples that have no syntactic ambiguity are extracted, and they are disambiguated by a couple of a verb and its closest case component. Such couples are explicitly expressed on the surface of text, and can be considered to play an important role in sentence meanings. For instance, examples are distinguished not by verbs (e.g., \"tsumu\" (load/accumulate)), but by couples (e.g., \"nimotsu-wo tsumu\" (load baggage) and \"keiken-wo tsumu\" (accumulate experience)). Modifier-head examples are aggregated in this way, and yield basic case frames.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatically Constructed Case Frames",
"sec_num": "2"
},
{
"text": "Thereafter, the basic case frames are clustered to merge similar case frames. For example, since \"nimotsu-wo tsumu\" (load baggage) and \"busshi-wo tsumu\" (load supply) are similar, they are clustered. The similarity is measured using a thesaurus (Ikehara et al., 1997) .",
"cite_spans": [
{
"start": 245,
"end": 267,
"text": "(Ikehara et al., 1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatically Constructed Case Frames",
"sec_num": "2"
},
{
"text": "Using this gradual procedure, we constructed case frames from the web corpus (Kawahara and Kurohashi, 2006). The case frames were obtained from approximately 470M sentences extracted from the web. They covered 90,000 verbs, and the average number of case frames for a verb was 34.3.",
"cite_spans": [
{
"start": 77,
"end": 108,
"text": "(Kawahara and Kurohashi, 2006)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatically Constructed Case Frames",
"sec_num": "2"
},
{
"text": "In Figure 1, some examples of the resulting case frames are shown. In the figure, 'CS' means a case slot. <agent> in the figure is a generalized example, which is given to a case slot where half of the examples belong to <agent> in a thesaurus (Ikehara et al., 1997). <agent> is also given to a \"ga\" case slot that has no examples, because \"ga\" case components are usually agentive and often omitted.",
"cite_spans": [
{
"start": 246,
"end": 268,
"text": "(Ikehara et al., 1997)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Automatically Constructed Case Frames",
"sec_num": "2"
},
{
"text": "The proposed method gives a probability to each possible syntactic structure T and case structure L of the input sentence S, and outputs the syntactic and case structure that have the highest probability. That is to say, the system selects the syntactic structure T_best and the case structure L_best that maximize the probability P(T, L|S):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Probabilistic Model for Syntactic and Case Structure Analysis",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(T_best, L_best) = argmax_{(T,L)} P(T, L|S) = argmax_{(T,L)} P(T, L, S)/P(S) = argmax_{(T,L)} P(T, L, S)",
"eq_num": "(1)"
}
],
"section": "Integrated Probabilistic Model for Syntactic and Case Structure Analysis",
"sec_num": "3"
},
{
"text": "The last equation is derived because P (S) is constant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Integrated Probabilistic Model for Syntactic and Case Structure Analysis",
"sec_num": "3"
},
{
"text": "We propose a generative probabilistic model based on the dependency formalism. This model considers a clause as the unit of generation, and generates the input sentence clause by clause, starting from the end of the sentence. P(T, L, S) is defined as the product of the probabilities of generating each clause C_i as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Syntactic and Case Structure Analysis",
"sec_num": "3.1"
},
{
"text": "P(T, L, S) = \u220f_{i=1..n} P(C_i|b_{h_i}) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Syntactic and Case Structure Analysis",
"sec_num": "3.1"
},
{
"text": "where n is the number of clauses in S, and b_{h_i} is C_i's modifying bunsetsu. The main clause C_n at the end of a sentence does not have a modifying head, but we handle it by assuming b_{h_n} = EOS (End Of Sentence). For example, consider the sentence in Figure 1. There are two possible dependency structures, and for each structure the product of the probabilities indicated below the tree is calculated. Finally, the model chooses the structure with the highest probability (in this case, the left one).",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 264,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Generative Model for Syntactic and Case Structure Analysis",
"sec_num": "3.1"
},
{
"text": "C_i is decomposed into its predicate type f_i (including the predicate's inflection) and the remaining case structure CS_i. This means that the predicate included in CS_i is lemmatized. Bunsetsu b_{h_i} is also decomposed into the content part w_{h_i} and the type",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Syntactic and Case Structure Analysis",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f_{h_i}. P(C_i|b_{h_i}) = P(CS_i, f_i|w_{h_i}, f_{h_i}) = P(CS_i|f_i, w_{h_i}, f_{h_i}) P(f_i|w_{h_i}, f_{h_i}) \u2248 P(CS_i|f_i, w_{h_i}) P(f_i|f_{h_i})",
"eq_num": "(3)"
}
],
"section": "Generative Model for Syntactic and Case Structure Analysis",
"sec_num": "3.1"
},
{
"text": "The last equation is derived because the content part in CS_i is independent of the type of its modifying head (f_{h_i}), and in most cases, the type f_i is independent of the content part of its modifying head (w_{h_i}). For example, P(bentou-wa tabete|syuppatsu-shita) is calculated as follows: P(CS(bentou-wa taberu)|te, syuppatsu-suru) P(te|ta.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Syntactic and Case Structure Analysis",
"sec_num": "3.1"
},
{
"text": "We call P(CS_i|f_i, w_{h_i}) the generative model for case structure and P(f_i|f_{h_i}) the generative model for predicate type. The following two sections describe these models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Syntactic and Case Structure Analysis",
"sec_num": "3.1"
},
{
"text": "We propose a generative probabilistic model of case structure. This model selects a case frame that matches the input case components, and makes correspondences between input case components and case slots.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Case Structure",
"sec_num": "3.2"
},
{
"text": "A case structure CS_i consists of a predicate v_i, a case frame CF_l and a case assignment CA_k. Case assignment CA_k represents correspondences between input case components and case slots as shown in Figure 2. Note that there are various possibilities of case assignment in addition to that of Figure 2, such as assigning \"bentou\" (lunchbox) to the \"ga\" case. Accordingly, the index k of CA_k ranges up to the number of possible case assignments. By splitting",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 212,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 299,
"end": 307,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Generative Model for Case Structure",
"sec_num": "3.2"
},
{
"text": "CS_i into v_i, CF_l and CA_k, P(CS_i|f_i, w_{h_i})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Case Structure",
"sec_num": "3.2"
},
{
"text": "is rewritten as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Case Structure",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(CS_i|f_i, w_{h_i}) = P(v_i, CF_l, CA_k|f_i, w_{h_i}) = P(v_i|f_i, w_{h_i}) \u00d7 P(CF_l|f_i, w_{h_i}, v_i) \u00d7 P(CA_k|f_i, w_{h_i}, v_i, CF_l) \u2248 P(v_i|w_{h_i}) \u00d7 P(CF_l|v_i) \u00d7 P(CA_k|CF_l, f_i)",
"eq_num": "(4)"
}
],
"section": "Generative Model for Case Structure",
"sec_num": "3.2"
},
{
"text": "The above approximation is given because it is natural to consider that the predicate v_i depends on its modifying head w_{h_i}, that the case frame CF_l only depends on the predicate v_i, and that the case assignment CA_k depends on the case frame CF_l and the predicate type f_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Case Structure",
"sec_num": "3.2"
},
{
"text": "The probabilities P(v_i|w_{h_i}) and P(CF_l|v_i) are estimated from the case structure analysis results of a large raw corpus. The remainder of this section illustrates P(CA_k|CF_l, f_i) in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Model for Case Structure",
"sec_num": "3.2"
},
{
"text": "Let us consider case assignment CA_k for each case slot s_j in case frame CF_l. P(CA_k|CF_l, f_i) can be decomposed into the following product depending on whether a case slot s_j is filled with an input case component (content part n_j and type f_j) or vacant:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(CA_k|CF_l, f_i) = \u220f_{s_j:A(s_j)=1} P(A(s_j)=1, n_j, f_j|CF_l, f_i, s_j) \u00d7 \u220f_{s_j:A(s_j)=0} P(A(s_j)=0|CF_l, f_i, s_j) = \u220f_{s_j:A(s_j)=1} [P(A(s_j)=1|CF_l, f_i, s_j) \u00d7 P(n_j, f_j|CF_l, f_i, A(s_j)=1, s_j)] \u00d7 \u220f_{s_j:A(s_j)=0} P(A(s_j)=0|CF_l, f_i, s_j)",
"eq_num": "(5)"
}
],
"section": "Generative Probability of Case",
"sec_num": "3.2.1"
},
{
"text": "where the function A(s_j) returns 1 if the case slot s_j is filled with an input case component, and 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case",
"sec_num": "3.2.1"
},
{
"text": "P(A(s_j)=1|CF_l, f_i, s_j) and P(A(s_j)=0|CF_l, f_i, s_j) in equation (5) can be rewritten as P(A(s_j)=1|CF_l, s_j) and P(A(s_j)=0|CF_l, s_j),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case",
"sec_num": "3.2.1"
},
{
"text": "because the evaluation of case slot assignment depends only on the case frame. We call these probabilities the generative probability of a case slot; they are estimated from the case structure analysis results of a large corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case",
"sec_num": "3.2.1"
},
{
"text": "Let us calculate P(CS_i|f_i, w_{h_i}) using the example in Figure 1. In the sentence, \"wa\" is a topic-marking (TM) postposition, which hides the case marker. The generative probability of case structure varies depending on the case slot to which the topic-marked phrase is assigned. For example, when a case frame of \"taberu\" (eat) CF_{taberu1} with \"ga\" and \"wo\" case slots is used, P(CS(bentou-wa taberu)|te, syuppatsu-suru) is calculated as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 69,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Generative Probability of Case",
"sec_num": "3.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_1(CS(bentou-wa taberu)|te, syuppatsu-suru) = P(taberu|syuppatsu-suru) \u00d7 P(CF_{taberu1}|taberu) \u00d7 P(bentou, wa|CF_{taberu1}, te, A(wo)=1, wo) \u00d7 P(A(wo)=1|CF_{taberu1}, wo) \u00d7 P(A(ga)=0|CF_{taberu1}, ga)",
"eq_num": "(6)"
}
],
"section": "Generative Probability of Case",
"sec_num": "3.2.1"
},
{
"text": "P_2(CS(bentou-wa taberu)|te, syuppatsu-suru) = P(taberu|syuppatsu-suru) \u00d7 P(CF_{taberu1}|taberu) \u00d7 P(bentou, wa|CF_{taberu1}, te, A(ga)=1, ga) \u00d7 P(A(ga)=1|CF_{taberu1}, ga) \u00d7 P(A(wo)=0|CF_{taberu1}, wo)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case",
"sec_num": "3.2.1"
},
{
"text": "Such probabilities are computed for each case frame of \"taberu\" (eat), and the case frame and its corresponding case assignment that have the highest probability are selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case",
"sec_num": "3.2.1"
},
{
"text": "We describe the generative probability of a case component",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case",
"sec_num": "3.2.1"
},
{
"text": "P(n_j, f_j|CF_l, f_i, A(s_j)=1, s_j) below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case",
"sec_num": "3.2.1"
},
{
"text": "We approximate the generative probability of a case component, assuming that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "\u2022 the generative probability of the content part n_j is independent of that of the type f_j,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "\u2022 and the interpretation of the surface case included in f_j does not depend on case frames.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "Taking into account these assumptions, the generative probability of a case component is approximated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "P(n_j, f_j|CF_l, f_i, A(s_j)=1, s_j) \u2248 P(n_j|CF_l, A(s_j)=1, s_j) P(f_j|s_j, f_i) (8) P(n_j|CF_l, A(s_j)=1, s_j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "is the probability of generating a content part n_j from a case slot s_j in a case frame CF_l. This probability is estimated from the case frames.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "Let us consider P(f_j|s_j, f_i) in equation (8). This is the probability of generating the type f_j of a case component that has a correspondence with the case slot s_j. Since the type f_j consists of a surface case c_j (a postposition sequence at the end of a bunsetsu, such as \"ga\", \"wo\", \"koso\" and \"demo\"), a punctuation mark (comma) p_j and a topic marker \"wa\" t_j, P(f_j|s_j, f_i) is rewritten as follows",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "(using the chain rule):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(f_j|s_j, f_i) = P(c_j, t_j, p_j|s_j, f_i) = P(c_j|s_j, f_i) \u00d7 P(p_j|s_j, f_i, c_j) \u00d7 P(t_j|s_j, f_i, c_j, p_j) \u2248 P(c_j|s_j) \u00d7 P(p_j|f_i) \u00d7 P(t_j|f_i, p_j)",
"eq_num": "(9)"
}
],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "This approximation is given by assuming that c_j only depends on s_j, p_j only depends on f_i, and t_j depends on f_i and p_j. P(c_j|s_j) is estimated from the Kyoto Text Corpus, in which the relationship between a surface case marker and a case slot is annotated by hand. In Japanese, a punctuation mark and a topic marker are likely to be used when the bunsetsu they belong to has a long-distance dependency. By considering such a tendency, f_i can be regarded as (o_i, u_i), where o_i means whether a dependent bunsetsu gets over another head candidate before its modifying head v_i, and u_i means the clause type of v_i. The value of o_i is binary, and u_i is one of the clause types described in (Kawahara and Kurohashi, 1999).",
"cite_spans": [
{
"start": 705,
"end": 735,
"text": "(Kawahara and Kurohashi, 1999)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(p_j|f_i) = P(p_j|o_i, u_i) (10) P(t_j|f_i, p_j) = P(t_j|o_i, u_i, p_j)",
"eq_num": "(11)"
}
],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "3.3 Generative Model for Predicate Type\nNow, consider P(f_i|f_{h_i}) in equation (3). This is the probability of generating the predicate type of a clause C_i that modifies b_{h_i}. This probability varies depending on the type of b_{h_i}. When b_{h_i} is a predicate bunsetsu, C_i is a subordinate clause embedded in the clause of b_{h_i}. As for the types f_i and f_{h_i}, it is necessary to consider punctuation marks (p_i, p_{h_i}) and clause types (u_i, u_{h_i}). To capture a long-distance dependency indicated by punctuation marks, o_{h_i} (whether C_i has a possible head candidate before b_{h_i}) is also considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_{VBmod}(f_i|f_{h_i}) = P_{VBmod}(p_i, u_i|p_{h_i}, u_{h_i}, o_{h_i})",
"eq_num": "(12)"
}
],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "When b_{h_i} is a noun bunsetsu, C_i is an embedded clause in b_{h_i}. In this case, the clause type and punctuation mark of the modifiee do not affect the probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_{NBmod}(f_i|f_{h_i}) = P_{NBmod}(p_i|o_{h_i})",
"eq_num": "(13)"
}
],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "P(p_i, u_i|p_{h_i}, u_{h_i}, o_{h_i}): predicate type, from the Kyoto Text Corpus; P(c_j|s_j): surface case, from the Kyoto Text Corpus; P(v_i|w_{h_i}): predicate, from parsing results; P(n_j|CF_l, A(s_j)=1, s_j): words, from case frames; P(CF_l|v_i): case frame, from case structure analysis results; P(A(s_j)={0,1}|CF_l, s_j): case slot, from case structure analysis results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Probability of Case Component",
"sec_num": "3.2.2"
},
{
"text": "We evaluated the syntactic structure and case structure output by our model. Each parameter is estimated using maximum likelihood from the data described in Table 2. None of these data are pre-existing or obtainable by a single process; they are acquired by applying syntactic analysis, case frame construction and case structure analysis in turn. The case structure analysis in this table is a similarity-based method. The case frames were automatically constructed from the web corpus comprising 470M sentences, and the case structure analysis results were obtained from 6M sentences in the web corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 160,
"end": 167,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The rest of this section first describes the experiments for syntactic structure, and then reports the experiments for case structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We evaluated syntactic structures analyzed by the proposed model. Our experiments were run on 675 hand-annotated web sentences. The web sentences were manually annotated using the same criteria as the Kyoto Text Corpus. The system input was tagged automatically using the JUMAN morphological analyzer. The syntactic structures obtained were evaluated with regard to dependency accuracy, i.e., the proportion of correct dependencies out of all dependencies except for the last dependency at the end of each sentence. Table 3 shows the dependency accuracy. In the table, \"baseline\" means the rule-based syntactic parser, KNP, and \"proposed\" represents the proposed method. The proposed method significantly outperformed the baseline method (McNemar's test; p < 0.05). The dependency accuracies are classified into four types according to the bunsetsu classes (VB: verb bunsetsu, NB: noun bunsetsu) of a dependent and its head. The \"NB\u2192VB\" type is further divided into two types: \"TM\" and \"others\". The type that is most related to case structure is \"others\" in \"NB\u2192VB\". Its accuracy was improved by 1.6%, and the error rate was reduced by 10.9%. This result indicates that the proposed method is effective in analyzing dependencies related to case structure. Figure 3 shows some analysis results, where the dotted lines represent the analysis by the baseline method, and the solid lines represent the analysis by the proposed method. Sentences (1) and (2) are incorrectly analyzed by the baseline but correctly analyzed by the proposed method.",
"cite_spans": [],
"ref_spans": [
{
"start": 510,
"end": 517,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1252,
"end": 1260,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experiments for Syntactic Structure",
"sec_num": "4.1"
},
{
"text": "There are two major causes that led to analysis errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments for Syntactic Structure",
"sec_num": "4.1"
},
{
"text": "In sentence (3) in Figure 3, the baseline method correctly recognized the head of \"iin-wa\" (commissioner-TM) as \"hirakimasu\" (open). However, the proposed method incorrectly judged it as \"oujite-imasuga\" (offer). Both analysis results can be considered to be correct semantically, but from",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 27,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Mismatch between analysis results and annotation criteria",
"sec_num": null
},
{
"text": "(1) mizu-ga takai tokoro-kara hikui tokoro-he nagareru. water-nom high ground-abl low ground-all flow (Water flows from high ground to low ground.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mismatch between analysis results and annotation criteria",
"sec_num": null
},
{
"text": "(2) ... Kobe shi-ga senmonchishiki-wo motsu volunteer-wo bosyushita ... Kobe city-nom expert knowledge-acc have volunteer-acc recruited (Kobe city recruited a volunteer who has expert knowledge, ...)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mismatch between analysis results and annotation criteria",
"sec_num": null
},
{
"text": "the viewpoint of our annotation criteria, the latter is not a syntactic relation, but an ellipsis relation. To address this problem, it is necessary to simultaneously evaluate not only syntactic relations but also indirect relations, such as ellipses and anaphora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mismatch between analysis results and annotation criteria",
"sec_num": null
},
{
"text": "Since our model is a generative probabilistic model, we cannot optimize the weight of each probability. Such optimization could improve the system performance. In the future, we plan to employ a machine learning technique for this optimization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear weighting on each probability",
"sec_num": null
},
{
"text": "We applied case structure analysis to 215 web sentences which were manually annotated with case structure, and evaluated the case markers of TM phrases and clausal modifiees by comparing them with the gold standard in the corpus. The experimental results are shown in Table 4, in which the baseline refers to a similarity-based method. The results compare favorably with the baseline. It is difficult to compare the results with the previous work described in the next section because of different experimental settings (e.g., our evaluation includes parse errors in incorrect cases).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments for Case Structure",
"sec_num": "4.2"
},
{
"text": "There have been several approaches for syntactic analysis handling lexical preference on a large scale. Shirai et al. proposed a PGLR-based syntactic analysis method using large-scale lexical preference (Shirai et al., 1998) . Their system learned lexical preferences, such as P(pie|wo, taberu), from a large newspaper corpus (five years of articles), but did not deal with verb sense ambiguity. They reported 84.34% accuracy on 500 relatively short sentences from the Kyoto Text Corpus.",
"cite_spans": [
{
"start": 203,
"end": 224,
"text": "(Shirai et al., 1998)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Fujio and Matsumoto presented a syntactic analysis method based on lexical statistics (Fujio and Matsumoto, 1998) . They made use of a probabilistic model defined as the product of the probability of a dependency between two cooccurring words and a distance probability. The model was trained on the EDR corpus, and achieved 86.89% accuracy on 10,000 sentences from the EDR corpus 5 .",
"cite_spans": [
{
"start": 86,
"end": 113,
"text": "(Fujio and Matsumoto, 1998)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "On the other hand, there have been a number of machine learning-based approaches using lexical preference as their features. Among these, Kudo and Matsumoto yielded the best performance (Kudo and Matsumoto, 2002) . They proposed a chunking-based dependency analysis method using Support Vector Machines. They used two-fold cross validation on the Kyoto Text Corpus, and achieved 90.46% accuracy 5 . However, it is very hard to learn sufficient lexical preference from the several tens of thousands of sentences in a hand-tagged corpus.",
"cite_spans": [
{
"start": 186,
"end": 212,
"text": "(Kudo and Matsumoto, 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "There has been some related work analyzing clausal modifiees and TM phrases. For example, Torisawa analyzed TM phrases using predicate-argument cooccurrences and word classifications induced by the EM algorithm (Torisawa, 2001) . Its accuracy was approximately 88% for \"wa\" and 84% for \"mo\". It is difficult to compare the accuracy of their system to ours, because the range of target expressions is different. Unlike this related work, our resultant case frames can be utilized for subsequent analyses such as ellipsis or discourse analysis.",
"cite_spans": [
{
"start": 211,
"end": 227,
"text": "(Torisawa, 2001)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We have described an integrated probabilistic model for syntactic and case structure analysis. This model takes advantage of the lexical selectional preferences of large-scale case frames, and performs syntactic and case analysis simultaneously. The experiments indicated the effectiveness of our model. In the future, by incorporating ellipsis resolution, we will develop an integrated model of syntactic, case, and ellipsis analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In Japanese, a bunsetsu is a basic unit of dependency, consisting of one or more content words and zero or more following function words. It corresponds to a base phrase in English and to an \"eojeol\" in Korean.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The test set is not used for case frame construction and probability estimation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Since Japanese is head-final, the second-to-last bunsetsu unambiguously depends on the last bunsetsu, and the last bunsetsu has no dependency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The evaluation includes the final dependency at the end of each sentence, which is always correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Berkeley FrameNet Project",
"authors": [
{
"first": "Collin",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "Charles",
"middle": [],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 17th International Conference on Computational Linguistics and the 36th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "86--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collin Baker, Charles Fillmore, and John Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of the 17th In- ternational Conference on Computational Linguistics and the 36th Annual Meeting of the Association for Computa- tional Linguistics, pages 86-90.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Intricacies of Collins' parsing model",
"authors": [
{
"first": "M",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bikel",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "4",
"pages": "479--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel M. Bikel. 2004. Intricacies of Collins' parsing model. Computational Linguistics, 30(4):479-511.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Assigning function tags to parsed text",
"authors": [
{
"first": "Don",
"middle": [],
"last": "Blaheta",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "234--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Don Blaheta and Eugene Charniak. 2000. Assigning function tags to parsed text. In Proceedings of the 1st Meeting of the North American Chapter of the Association for Compu- tational Linguistics, pages 234-240.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A maximum-entropy-inspired parser",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "132--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics, pages 132-139.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Japanese dependency structure analysis based on lexicalized statistics",
"authors": [
{
"first": "Masakazu",
"middle": [],
"last": "Fujio",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 3rd Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "88--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masakazu Fujio and Yuji Matsumoto. 1998. Japanese depen- dency structure analysis based on lexicalized statistics. In Proceedings of the 3rd Conference on Empirical Methods in Natural Language Processing, pages 88-96.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Corpus-based dependency analysis of Japanese sentences using verb bunsetsu transitivity",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 5th Natural Language Processing Pacific Rim Symposium",
"volume": "",
"issue": "",
"pages": "387--391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daisuke Kawahara and Sadao Kurohashi. 1999. Corpus-based dependency analysis of Japanese sentences using verb bun- setsu transitivity. In Proceedings of the 5th Natural Lan- guage Processing Pacific Rim Symposium, pages 387-391.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Fertilization of case frame dictionary for robust Japanese case analysis",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "425--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daisuke Kawahara and Sadao Kurohashi. 2002. Fertilization of case frame dictionary for robust Japanese case analysis. In Proceedings of the 19th International Conference on Com- putational Linguistics, pages 425-431.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Case frame compilation from the web using high-performance computing",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daisuke Kawahara and Sadao Kurohashi. 2006. Case frame compilation from the web using high-performance comput- ing. In Proceedings of the 5th International Conference on Language Resources and Evaluation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Construction of a Japanese relevance-tagged corpus",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "K\u00f4iti",
"middle": [],
"last": "Hasida",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 3rd International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "2008--2013",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daisuke Kawahara, Sadao Kurohashi, and K\u00f4iti Hasida. 2002. Construction of a Japanese relevance-tagged corpus. In Pro- ceedings of the 3rd International Conference on Language Resources and Evaluation, pages 2008-2013.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Adding semantic annotation to the Penn TreeBank",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Mitch",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Human Language Technology Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Kingsbury, Martha Palmer, and Mitch Marcus. 2002. Adding semantic annotation to the Penn TreeBank. In Pro- ceedings of the Human Language Technology Conference.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "423--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2003. Accurate un- lexicalized parsing. In Proceedings of the 41st Annual Meet- ing of the Association for Computational Linguistics, pages 423-430.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Japanese dependency analysis using cascaded chunking",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "29--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In Proceedings of the Conference on Natural Language Learning, pages 29-35.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A syntactic analysis method of long Japanese sentences based on the detection of conjunctive structures",
"authors": [
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "4",
"pages": "507--534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadao Kurohashi and Makoto Nagao. 1994. A syntactic anal- ysis method of long Japanese sentences based on the detec- tion of conjunctive structures. Computational Linguistics, 20(4):507-534.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improvements of Japanese morphological analyzer JUMAN",
"authors": [
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Toshihisa",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the International Workshop on Sharable Natural Language",
"volume": "",
"issue": "",
"pages": "22--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadao Kurohashi, Toshihisa Nakamura, Yuji Matsumoto, and Makoto Nagao. 1994. Improvements of Japanese morpho- logical analyzer JUMAN. In Proceedings of the Interna- tional Workshop on Sharable Natural Language, pages 22- 28.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An empirical evaluation on statistical parsing of Japanese sentences using lexical association statistics",
"authors": [
{
"first": "Kiyoaki",
"middle": [],
"last": "Shirai",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Takenobu",
"middle": [],
"last": "Tokunaga",
"suffix": ""
},
{
"first": "Hozumi",
"middle": [],
"last": "Tanaka",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 3rd Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "80--87",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiyoaki Shirai, Kentaro Inui, Takenobu Tokunaga, and Hozumi Tanaka. 1998. An empirical evaluation on statistical parsing of Japanese sentences using lexical association statistics. In Proceedings of the 3rd Conference on Empirical Methods in Natural Language Processing, pages 80-87.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "An unsupervised method for canonicalization of Japanese postpositions",
"authors": [
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 6th Natural Language Processing Pacific Rim Symposium",
"volume": "",
"issue": "",
"pages": "211--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kentaro Torisawa. 2001. An unsupervised method for canon- icalization of Japanese postpositions. In Proceedings of the 6th Natural Language Processing Pacific Rim Symposium, pages 211-218.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Dependency model using posterior context",
"authors": [
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Masaki",
"middle": [],
"last": "Murata",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 6th International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "321--322",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiyotaka Uchimoto, Masaki Murata, Satoshi Sekine, and Hi- toshi Isahara. 2000. Dependency model using posterior context. In Proceedings of the 6th International Workshop on Parsing Technology, pages 321-322.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "An Example of Probability Calculation.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "An example of case assignment CA k .",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "(3) iin-wa, jitaku-de minasan-karano gosoudan-ni oujite-imasuga, ... soudansyo-wo hirakimasu commissioner-TM at home all of you consultation-dat offer window open (the commissioner offers consultation to all of you at home, but opens a window ...) Examples of analysis results.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"type_str": "table",
"text": "examples taberu ga person, child, boy, \u2022 \u2022 \u2022 wo lunch, lunchbox, dinner, \u2022 \u2022 \u2022",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "Case frame examples (examples are expressed only in English for space limitation.).",
"num": null,
"content": "<table><tr><td/><td>CS</td><td>examples</td></tr><tr><td colspan=\"2\">ga youritsu (1) wo (support) ni</td><td>&lt;agent&gt;, group, party, \u2022 \u2022 \u2022 &lt;agent&gt;, candidate, applicant &lt;agent&gt;, district, election, \u2022 \u2022 \u2022</td></tr><tr><td colspan=\"2\">ga youritsu (2) wo (support) ni</td><td>&lt;agent&gt; &lt;agent&gt;, member, minister, \u2022 \u2022 \u2022 &lt;agent&gt;, candidate, successor</td></tr><tr><td>. . .</td><td>. . .</td><td>. . .</td></tr><tr><td colspan=\"2\">itadaku (1) ga</td><td>&lt;agent&gt;</td></tr><tr><td>(have)</td><td>wo</td><td>soup</td></tr><tr><td colspan=\"3\">ga itadaku (2) wo (be given) kara &lt;agent&gt;, president, circle, \u2022 \u2022 \u2022 &lt;agent&gt; advice, instruction, address</td></tr><tr><td>. . .</td><td>. . .</td><td>. . .</td></tr></table>",
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "Data for parameter estimation.",
"num": null,
"content": "<table><tr><td>probability</td><td>what is generated</td><td>data</td></tr><tr><td>P (pj|oi, uj)</td><td>punctuation mark</td><td>Kyoto Text Corpus</td></tr><tr><td>P (tj|oi, ui, pj)</td><td>topic marker</td><td>Kyoto Text Corpus</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "Experimental results for syntactic analysis.",
"num": null,
"content": "<table><tr><td/><td>baseline</td><td>proposed</td></tr><tr><td>all</td><td colspan=\"2\">3,447/3,976 (86.7%) 3,477/3,976 (87.4%)</td></tr><tr><td colspan=\"3\">NB\u2192VB 1,310/1,547 (84.7%) 1,328/1,547 (85.8%)</td></tr><tr><td>TM</td><td colspan=\"2\">244/298 (81.9%) 242/298 (81.2%)</td></tr><tr><td colspan=\"3\">others 1,066/1,249 (85.3%) 1,086/1,249 (86.9%)</td></tr><tr><td colspan=\"3\">NB\u2192NB 525/556 (94.4%) 526/556 (94.6%)</td></tr><tr><td colspan=\"3\">VB\u2192VB 593/760 (78.0%) 601/760 (79.1%)</td></tr><tr><td colspan=\"3\">VB\u2192NB 453/497 (91.1%) 457/497 (92.0%)</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "Experimental results for case structure analysis.",
"num": null,
"content": "<table><tr><td/><td>baseline</td><td>proposed</td></tr><tr><td>TM</td><td>72/105 (68.6%)</td><td>82/105 (78.1%)</td></tr><tr><td colspan=\"3\">clause 107/155 (69.0%) 121/155 (78.1%)</td></tr></table>",
"html": null
}
}
}
}