{
"paper_id": "Y05-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:39:53.184768Z"
},
"title": "Empirical Verification of Meaning-Game-based Generalization of Centering Theory with Large Japanese Corpus",
"authors": [
{
"first": "Shun",
"middle": [],
"last": "Shiramatsu",
"suffix": "",
"affiliation": {},
"email": "siramatu@kuis.kyoto-u.ac.jp"
},
{
"first": "Kazunori",
"middle": [],
"last": "Komatani",
"suffix": "",
"affiliation": {},
"email": "komatani@kuis.kyoto-u.ac.jp"
},
{
"first": "Takashi",
"middle": [],
"last": "Miyata",
"suffix": "",
"affiliation": {},
"email": "miyata.t@aist.go.jp"
},
{
"first": "Koiti",
"middle": [],
"last": "Hasida",
"suffix": "",
"affiliation": {},
"email": "hasida.k@aist.go.jp"
},
{
"first": "Hiroshi",
"middle": [
"G"
],
"last": "Okuno",
"suffix": "",
"affiliation": {},
"email": "okuno@kuis.kyoto-u.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Centering theory (Grosz et al., 1995) tries to explain relations among attention, anaphora, and cohesion. It has two theoretical limitations. The first is the lack of a principle behind these discourse phenomena. The second is that the salience of discourse entities has not been quantitatively defined, although it plays a critical role in this theory. Hasida et al. (1995, 1996) propose the meaning game as a more principled model of intentional communication based on game theory, and claim that it can derive centering theory. This claim, however, has not yet been verified on the basis of substantial linguistic data. In this paper, we formulate salience as a measurable quantity in terms of a reference probability. We also formulate preferences subsuming centering theory under this quantitative formulation of salience. The preferences are derived from the meaning game and entail more general predictions than those of conventional centering theory. These formulations overcome the above limitations of centering theory. By following them, we empirically verify our generalization with a large Japanese corpus. The experimental results show that there is positive correlation between the salience (reference probability) of an entity and the simplicity (utility) of a noun phrase which refers to the entity. They also indicate correspondence between the values of expected utility and the ranking of the transition states. These results indicate that our generalization is appropriate.",
"pdf_parse": {
"paper_id": "Y05-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "Centering theory (Grosz et al., 1995) tries to explain relations among attention, anaphora, and cohesion. It has two theoretical limitations. The first is the lack of a principle behind these discourse phenomena. The second is that the salience of discourse entities has not been quantitatively defined, although it plays a critical role in this theory. Hasida et al. (1995, 1996) propose the meaning game as a more principled model of intentional communication based on game theory, and claim that it can derive centering theory. This claim, however, has not yet been verified on the basis of substantial linguistic data. In this paper, we formulate salience as a measurable quantity in terms of a reference probability. We also formulate preferences subsuming centering theory under this quantitative formulation of salience. The preferences are derived from the meaning game and entail more general predictions than those of conventional centering theory. These formulations overcome the above limitations of centering theory. By following them, we empirically verify our generalization with a large Japanese corpus. The experimental results show that there is positive correlation between the salience (reference probability) of an entity and the simplicity (utility) of a noun phrase which refers to the entity. They also indicate correspondence between the values of expected utility and the ranking of the transition states. These results indicate that our generalization is appropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Principled and quantitative modeling of discourse is important for analyzing and generating discourse. Centering theory (CT) is a model of discourse structures. It explains the relations among attention, anaphora, and cohesion (Iida, 1997) . However, CT has had two theoretical limitations. The first is the lack of a general principle behind the discourse phenomena. Although some studies on CT have focused on analyzing surficial linguistic features without general principles, we consider that the principle of discourse phenomena must be addressed based on measurable quantities. The second is that \"salience\", which plays a critical role in CT, cannot be verified based on large linguistic data because it is not formulated as a measurable quantity, but as heuristic rules. We have investigated the general principle of CT. We adopted the meaning game (MG) (Hasida et al., 1995 (Hasida et al., , 1996 framework because it gives a more principled explanation of the discourse phenomena than CT does. MG is a model of intentional communication (e.g., anaphora) based on game theory. Game players in game theory correspond to interlocutors in MG, and they decide their intentions and interpretations at the Pareto-optimum. Although Hasida et al. (1995) claimed that CT could be derived from the MG by formulating salience in terms of a reference probability, their claim has yet to be verified on the basis of substantial linguistic data. In this paper, we formulate the MG-based generalization of CT and verify it with a large corpus of Japanese newspaper articles. Furthermore, we quantitatively define salience by using multiple regression with a corpus for the MG-based generalization and for its verification.",
"cite_spans": [
{
"start": 227,
"end": 239,
"text": "(Iida, 1997)",
"ref_id": "BIBREF4"
},
{
"start": 862,
"end": 882,
"text": "(Hasida et al., 1995",
"ref_id": "BIBREF1"
},
{
"start": 883,
"end": 905,
"text": "(Hasida et al., , 1996",
"ref_id": "BIBREF2"
},
{
"start": 1234,
"end": 1254,
"text": "Hasida et al. (1995)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In CT, a discourse is represented as a sequence of utterances [U 1 , U 2 , \u2026 ,U n ]. The \"center\" is a discourse entity which draws attention. The center is likely to be pronominalised. The \"salience\" represents the degree of attention to a discourse entity. The salience also represents the likelihood of pronominalization. The salience has been defined as a heuristic ranking in previous studies (see Section 2.2). Centers are categorized as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centering Theory",
"sec_num": "2.1"
},
{
"text": "Cb(U i ): The backward-looking center of the utterance U i , which denotes the most salient discourse entity referenced in both the previous context and the current utterance U i . Cf(U i ): The forward-looking centers of U i , which denote a list of entities sorted by their salience. Cp(U i ): The preferred center of U i , which is the most salient discourse entity in Cf(U i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centering Theory",
"sec_num": "2.1"
},
{
"text": "CT embodies as the following rules (preferences) based on the heuristics definition of salience.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centering Theory",
"sec_num": "2.1"
},
{
"text": "If any element in Cf(U i ) is pronominalized, the Cb(U i ) is also pronominalized. Rule 2 (topic continuity): The transition states of centers between utterances (Table 1) are preferred in the following order: CONTINUE > RETAIN > SMOOTH-SHIFT > ROUGH-SHIFT. ",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 171,
"text": "(Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Rule 1 (pronominalization):",
"sec_num": null
},
{
"text": "= i i U Cb U Cb ) ( ) ( 1 \u2212 \u2260 i i U Cb U Cb ) ( ) ( i i U Cp U Cb = CONTINUE SMOOTH-SHIFT ) ( ) ( i i U Cp U Cb \u2260 RETAIN ROUGH-SHIFT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule 1 (pronominalization):",
"sec_num": null
},
{
"text": "Rule 1 means that pronouns are more likely to refer to Cb than non-pronouns. Rule 2 represents the preference order among transition states according to the strength of topic continuity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule 1 (pronominalization):",
"sec_num": null
},
{
"text": "Conventional CT studies face two limitations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Issues",
"sec_num": "2.2"
},
{
"text": "1. Lack of principles behind the rules. CT does not explain why the two rules occur in discourse phenomena. 2. Salience is formalized neither objectively nor quantitatively, but heuristically (e.g., Cf-ranking).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Issues",
"sec_num": "2.2"
},
{
"text": "Such ranking is non-falsifiable (unscientific) and cannot be verified against real linguistic data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Issues",
"sec_num": "2.2"
},
{
"text": "The first limitation means that CT should have a hypothesis about the mechanisms behind discourse phenomena. The second limitation means that CT should be based on the quantitative definition of salience. Salience in CT is approximated by a heuristic ranking, called \"Cf-ranking\" (Walker et al., 1994) , as follows:",
"cite_spans": [
{
"start": 280,
"end": 301,
"text": "(Walker et al., 1994)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Two Issues",
"sec_num": "2.2"
},
{
"text": "English Cf-ranking: subject > object > indirect object > complement > adjunct Japanese Cf-ranking: topic (zero or grammatical) > subject > indirect object > object > others The above Cf-ranking depends on only grammatical function. While Strube et al. (1999) proposed an extended Cf-ranking integrated with information status and Nariyama (2001) proposed an extended ranking integrated with contextual information, these rankings are based on surficial observations without sufficient theoretical grounds. Although Poesio et al. (2004) discussed the parameters settings in CT, their discussion was also based on heuristic ranking.",
"cite_spans": [
{
"start": 238,
"end": 258,
"text": "Strube et al. (1999)",
"ref_id": "BIBREF10"
},
{
"start": 330,
"end": 345,
"text": "Nariyama (2001)",
"ref_id": "BIBREF7"
},
{
"start": 515,
"end": 535,
"text": "Poesio et al. (2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Two Issues",
"sec_num": "2.2"
},
{
"text": "Besides this second limitation, we also note that heuristic ranking is difficult to integrate with other features that influence salience (e.g., distance between the current utterance and the latest expression referring to the target entity). We address the above two issues in the following sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Issues",
"sec_num": "2.2"
},
{
"text": "The meaning game (MG) is a hypothesis about a model of intentional communication based on game theory (Hasida, 1996) . We adopted MG to give CT a general principle. The MG-based account of anaphora is a more principled hypothesis than that of CT, because MG is based on the general principle of decision-making. In MG, the interlocutors' expected utility is represented as:",
"cite_spans": [
{
"start": 102,
"end": 116,
"text": "(Hasida, 1996)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization of Centering Theory based on the Meaning Game",
"sec_num": "3."
},
{
"text": "\u2211 Pr(e)Ut(w). w refers to e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization of Centering Theory based on the Meaning Game",
"sec_num": "3."
},
{
"text": "Here, the Pr(e) is the reference probability of a discourse entity e, which is the probability that e will be referenced in the next utterance. Ut(w) is the utility of expression w that refers to e. The lower the cost of speaking or hearing w is, the higher Ut(w) becomes. Here, we assume that the value of Pr(e) is shared by interlocutors. Under this assumption, the solution which provides the maximum expected utility is the interlocutor's Pareto-optimum because the expected utility is shared by them. We leave miscommunication out of consideration under that assumption. Hasida et al. (1995) suggested that Rule 1 and 2 of CT can be derived from MG only in a few particular cases.",
"cite_spans": [
{
"start": 576,
"end": 596,
"text": "Hasida et al. (1995)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization of Centering Theory based on the Meaning Game",
"sec_num": "3."
},
{
"text": "Rule 1 of CT is a preference about pronominalization. Hasida et al. derived Rule 1 from MG in the following case which involves little semantic bias, \"he\" tends to refer to \"Fred\", and \"the man\" to \"Max\". U 1 : Fred scolded Max. U 2 : He was angry with the man.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation of Preference 1a and 1b",
"sec_num": "3.1"
},
{
"text": "They assumed the following inequations in this case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation of Preference 1a and 1b",
"sec_num": "3.1"
},
{
"text": "Pr(\"Fred\") > Pr(\"Max\") (Q A subject is more salient than an object ) Ut(\"he\") > Ut(\"the man\") (Q A pronoun costs less than a non-pronoun )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation of Preference 1a and 1b",
"sec_num": "3.1"
},
{
"text": "In this case, the interlocutors have two choices of anaphora. Hasida et al. indicated the above \"he\" \"the man\" \"he\" \"the man\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation of Preference 1a and 1b",
"sec_num": "3.1"
},
{
"text": "Pr(\"Fred\")Ut(\"he\") + Pr(\"Max\")Ut(\"the man\") > Pr(\"Fred\")Ut(\"the man\") + Pr(\"Max\")Ut(\"he\") Q(Pr(\"Fred\") \u2212 Pr(\"Max\")) (Ut(\"he\") \u2212 Ut(\"the man\")) > 0 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation of Preference 1a and 1b",
"sec_num": "3.1"
},
{
"text": "Pr(c 1 )Ut(a 1 )+Pr(c 2 )Ut(a 2 ) > Pr(c 2 )Ut(a 1 )+Pr(c 1 )Ut(a 2 ) Q(Pr(c 1 ) \u2212 Pr(c 2 )) (Ut(a 1 ) \u2212 Ut(a 2 )) > 0 high a 1 a 2 a 1 a 2 c 1 c 2 high c 1 c 2 high low low low Pr Pr low Ut Ut",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation of Preference 1a and 1b",
"sec_num": "3.1"
},
{
"text": "high semantic bias by comparing the expected utilities of the two choices. In other words, a solution of their MG model is that choice (A) is preferred over choice (B) in Figure 1 . This solution conforms to a prediction using Rule 1. Thus, they claimed this thought experiment proves that rule 1 of CT can be derived from MG. Table 2 shows correspondence between MG and CT concepts.",
"cite_spans": [],
"ref_spans": [
{
"start": 171,
"end": 179,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 327,
"end": 334,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Derivation of Preference 1a and 1b",
"sec_num": "3.1"
},
{
"text": "The above derivation, however, has neither been given a general formulation nor been verified on the basis of substantial linguistic data. An utterance in general examples possibly has a few anaphors and a lot of candidates of antecedent, whereas the above example case has only two anaphors and only two candidates of antecedent. Thus, general formulation is required before one can apply this model to real linguistic data. We generalize the above derivation to the following preference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation of Preference 1a and 1b",
"sec_num": "3.1"
},
{
"text": "When an utterance has multiple anaphors, an anaphor with a higher utility among them tends to refer to an entity with a higher reference probability. Figure 2 illustrates Preference 1a we propose. In this example, choice (A) is preferred over choice (B) . This is the preference in the cases of multiple anaphors in an utterance. Below, we generalize it to cases that do not depend on the number of anaphors in an utterance as follows:",
"cite_spans": [
{
"start": 250,
"end": 253,
"text": "(B)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preference 1a:",
"sec_num": null
},
{
"text": "Preference 1b: There is a positive correlation between the utility and the reference probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference 1a:",
"sec_num": null
},
{
"text": "These preferences are based on a general principle. Moreover, the coverage of these preferences is wider and more general than that of Rule 1 of CT. Accordingly, these preferences we propose are generalizations of Rule 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference 1a:",
"sec_num": null
},
{
"text": "Rule 2 of CT is a preference about local cohesion that indicates the level of topic continuity. Transition states are categorized into four types with respect to two conditions (Table 1) . These four types have been heuristically ranked by local cohesion or topic continuity. The first condition, Cb(U i ) =Cb(U i\u22121 ), means that the current utterance U i inherits Cb from the previous utterance U i\u22121 . This condition corresponds to cohesion between U i\u22121 and U i . The second condition, Cb(U i ) =Cp(U i ), means that Cb(U i ) is the most salient entity in U i . This condition corresponds to the prediction of cohesion between U i and U i+1 because Cp(U i ), the most salient entity in U i , is the most likely one to be pronominalized in the following utterance U i+1 . We consider that the preference order of Rule 2 is attributed to expected utility. When the first condition holds, the reference probability of Cb is higher than when it does not hold. In this case, the utility of the anaphor referring to Cb also tends to become high because of Preference 1b, so that the expected utility is high. Similarly, when the second condition holds, the reference probability of Cb and the utility of Cb are high, thus, the expected utility is also high. Furthermore, the first condition has stronger influence than the second, because the first one represents the cohesion between the previous and the current utterances, whereas the second merely predicts the cohesion between the current and the next utterances. Consequently, RETAIN has a larger expected utility than SMOOTH-SHIFT. Rule 2 of CT can thus be derived from the general principle of maximum expected utility, which is stated as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 177,
"end": 186,
"text": "(Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Derivation of Preference 2",
"sec_num": "3.2"
},
{
"text": "Preference 2: The interlocutors prefer an interpretation of anaphora with higher expected utility.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation of Preference 2",
"sec_num": "3.2"
},
{
"text": "This preference is a generalization of Rule 2. We will verify the above preferences and provide evidence of the existence of the MG principle behind Rules 1 and 2 of CT in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Derivation of Preference 2",
"sec_num": "3.2"
},
{
"text": "Salience represents the likelihood of a discourse entity to be pronominalized or its degree of attention. In CT, it is not quantitatively defined despite that it plays critical roles in the theory. Therefore, we formulate salience in terms of a reference probability -a measurable quantity. The salience of an entity can be defined based on how probable the entity will also be referenced in the following utterances. In other words, if an entity seems to be referenced in the following discourse, the entity tends to draw attention and its salience can be considered to be high. This formulation resolves the issues of CT that were discussed in Section 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Measurement of Salience",
"sec_num": "4."
},
{
"text": "The salience of an entity e at the target utterance U i is empirically defined as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition and Measurement of Salience",
"sec_num": "4."
},
{
"text": "The salience of e at U i is defined as the reference probability Pr(e, U i ), which is the probability of e being referenced in the next utterance U i+1 . Given a large amount of linguistic data, Pr(e, U i ) can be calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "1. Find the latest reference to e in the previous discourse [U 1 , \u2026, U i ]. Let it be w e . 2. Compose the feature vector feat(w e ,U i ) from w e and [U 1 , \u2026, U i ] . For example, the features we used in this study are listed in Table 3 . 3. Extract samples (w x ,U j ) whose feature vectors feat(w x ,U j ) equal feat(w e ,U i ) from a large amount of linguistic data. 4. Using the extracted samples, calculate Pr(w x , U j ), the probability that the referent of w x is also referenced in U j+1 (in other words, calculate the relative frequency of samples that the referent of w x is also referenced in U j+1 ). 5. Take Pr(w x , U j ) to be Pr(e, U i ). ",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "Chain log ( (# references to e in the previous context of U i ) + 1) Exp expression type of the latest reference to e (pronoun/non-pronoun) last_topic whether the latest reference to e was the last topic (yes/no) last_sbj whether the latest reference to e was the last subject (yes/no) p1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "whether e was in the first person (yes/no)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "Pos part of speech of the latest reference to e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "We used (1) for MLR, and (1) and (2) for SVR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "This definition is expressed by the following equation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "Salience of e at U i ) } ); , {( # } ; ) , {( # ) , Pr( ) , Pr( : C U w D C U w U w U e j x j x j x i \u2227 = \u2248 = Condition C: feat(w x ,U j ) = feat(w e , U i ) Condition D: The referent of w x is also referenced in U j+1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "Hereafter, we explain the measurement of the salience of \"Tom\" at U i for the following example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "U i-2 : I saw Tom a little while ago. U i-1 : He seems to be sleepy. U i : It was so hot last night, U i+1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "In this example, the anaphor \"he\" refers to \"Tom\" and it appears in the last position among expressions referring to \"Tom\" in the previous discourse. We call it the latest reference to \"Tom\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "To simplify the explanation, if the following three features are used, feat(\"Tom\",U i ) is defined as (dist = 2, gram =subject, chain = 2). dist : Utterances between U i and the latest reference to e. gram : Grammatical function of the latest reference to e. chain : References to e in the previous context of U i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "We extract samples (w x ,U j ) that have the same feature vector as feat(\"Tom\",U i ) from a corpus and calculate the reference probability from these samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "U j-k : . \u2026. U j-1 : w x . U j :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "U j+1 : . Pr(\"Tom\",U i ) } ); , {( # } ; ) , {( # ) , Pr( C U w D C U w U w j x j x j x \u2227 = \u2248",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "Condition C: feat(w x ,U j ) = feat(\"Tom\",U i ) Condition D: The referent of w x is also referenced in U j+1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "Notice that interpolation and extrapolation are necessary because of data sparseness in the corpus. To this end, we used regression analysis for the measurements. We measured the reference probability with two regression algorithms: MLR (multiple logistic regression) and SVR (support vector regression). Table 3 lists the features for regression in this study.",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 312,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Definition of Salience:",
"sec_num": null
},
{
"text": "We statistically verified Preference 1a, 1b, and 2 derived from MG. We used 1,356 articles taken from Japanese newspapers annotated with Global Document Annotation (GDA) (Hasida, 1998) . These articles contained 63,562 utterances (predicate clauses). Table 4 shows the distribution of anaphora types in the corpus. To measure the reference probability, we extracted 1,073,781 samples of previously referenced entities for each utterance. Table 5 shows that there were 16,728 pairs of an utterance U i and a previously referenced entity e that is also referenced in U i (namely, that corpus contains 16,728 anaphors). We assumed that the utility of pronouns is greater than that of non-pronouns; i.e., the utility of pronouns equals 2, and that of non-pronouns equals 1. This assumption is equivalent to distinguishing between pronouns and non-pronouns in CT.",
"cite_spans": [
{
"start": 170,
"end": 184,
"text": "(Hasida, 1998)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 251,
"end": 258,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 438,
"end": 445,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Empirical Verification of Meaning-Game-based Generalization",
"sec_num": "5."
},
{
"text": "We measured reference probability, which is required in the verification of our MG-based generalization in Section 5.2. We used two regression algorithms for the measurement: MLR and SVR, which will be mentioned in Section 5.1.2 and 5.1.3, respectively. These regressions, especially MLR, require that their features must be numeric values. However, a grammatical function is not defined as numeric values. Therefore, we assigned numeric values to the grammatical functions as a preparation for the multiple regressions in Section 5.1.1. After these regressions, we will also reconsider the conventional Japanese Cf-ranking based on the assigned values for grammatical functions in Section 5.1.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measurement of Salience as Reference Probability",
"sec_num": "5.1"
},
{
"text": "We measured the reference probabilities by using only grammatical functions for enabling the regression. This preparative measurement involves not regression but counting samples. Table 6 shows the reference probabilities calculated by from only the grammatical functions existing in the corpus. We used these values as gram in the multiple regression of the reference probability.",
"cite_spans": [],
"ref_spans": [
{
"start": 180,
"end": 187,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Assigning Numeric Values to Grammatical Functions",
"sec_num": "5.1.1"
},
{
"text": "MLR model is based on an assumption that the log odds of probability, ) ) 1 ( log( P P \u2212 , of some kind of event can be expressed as a linear expression of the explanatory variables. The regression function with three features in Table 3 is Pr It takes a huge amount of time to perform MLR on 1,073,781 samples. Thus, we made five regression models using 12,000 subsamples per model. We used statistical software called R (R Development Core Team, 2002) for the MLR analysis. Table 7 shows the parameters of the five regression models. We used average of probabilities predicted by the five models as the reference probability, which is represented as follows: ",
"cite_spans": [
{
"start": 442,
"end": 453,
"text": "Team, 2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 230,
"end": 237,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 476,
"end": 483,
"text": "Table 7",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Measurement with MLR",
"sec_num": "5.1.2"
},
{
"text": "Pr \u2211 = \u2212 + + + \u2212 + = 5 1 1 3 , 2 , 1 , 0 , ))) ( exp( 1 ( 5 1 k k k k k chain b gram b dist b b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Measurement with MLR",
"sec_num": "5.1.2"
},
{
"text": "We also made an SVR model to measure the reference probability with the eight features listed in Table 3 . In MLR, the input values of the target variable are 0 or 1. However, in SVR, the input values must be smoothed as real numbers. We subsampled 60,000 samples, smoothed the input variables by using the k-NN method with k =100, and made an SVR model of a 2nd-degree polynomial kernel by using TinySVM (Kudo, 2002) .",
"cite_spans": [
{
"start": 405,
"end": 417,
"text": "(Kudo, 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Measurement with SVR",
"sec_num": "5.1.3"
},
{
"text": "Notice that the direct object is higher ranked than the indirect object in Table 6 . This order is opposite from the conventional Japanese Cf-ranking order (Kameyama, 1998 , Walker et al., 1994 . Unfortunately, we have no way of telling which order is right from only this result.",
"cite_spans": [
{
"start": 156,
"end": 171,
"text": "(Kameyama, 1998",
"ref_id": null
},
{
"start": 172,
"end": 193,
"text": ", Walker et al., 1994",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 75,
"end": 82,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Reconsideration of Conventional Japanese Cf-ranking",
"sec_num": "5.1.4"
},
{
"text": "To verify the order of the direct and indirect objects in Japanese, we need to consider another view point. To set the other ranking for this purpose, we calculate coefficients of linear regression of an anophor's utility (in the next utterance) by using only grammatical functions (in the current utterance) in the corpus. These coefficients are assigned to each grammatical function for maximizing the correlation between them and utilities of noun phrases. In other words, they can be regarded as rigged values to maximize the predictive ability of Preferences 1a and 1b. Table 8 lists these coefficients and their ranking among the grammatical functions. This result also indicates that the direct object is higher ranked than the indirect object. Consequently, these empirical results disprove the order of direct and indirect objects in the conventional Japanese Cf-ranking. ",
"cite_spans": [],
"ref_spans": [
{
"start": 575,
"end": 582,
"text": "Table 8",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Reconsideration of Conventional Japanese Cf-ranking",
"sec_num": "5.1.4"
},
{
"text": "We statistically verified Preferences 1a, 1b, and 2 using the reference probabilities obtained in Section 5.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Empirical Verification of MG-based Generalization",
"sec_num": "5.2"
},
{
"text": "Firstly, to verify Preference 1a, we made pairs of anaphors occurring in the same utterance. There were 914 pairs in utterances having multiple anaphors (Table 9), of which 360 were pronoun and non-pronoun pairs. Thus, we calculated the percentage of these 360 pairs that agreed with the prediction of Preference 1a. We assumed that the percentage was binomially distributed and calculated its 95% confidence interval. Table 10 shows that at least 70% of the samples agreed with Preference 1a. Secondly, to verify Preference 1b, we measured Pearson's correlation coefficient between reference probability and utility. We assumed that the correlation coefficient was t-distributed and calculated the 95% confidence interval. Table 10 shows that the correlation coefficient for Preference 1b was at least +0.36 in both MLR and SVR. These results show that Preferences 1a and 1b hold with statistical significance. Table 11 shows the distribution of transition states in the corpus (centers were decided on the basis of reference probabilities estimated by MLR as salience values). We see that the frequency of RETAIN is low despite its high preference rank. This paucity of RETAIN has also been observed by Iida (1997) and Yamura-Takei et al. (2000). We cannot assume that the preference order matches the frequency order, because the four transition states cannot always be selected for every utterance. Table 12 shows the averages and variances of the expected utility for each transition state. The order of the data in the table conforms to the order of Rule 2 of CT. We tested this order with multiple comparison tests. The results of the Kruskal-Wallis test were χ² = 1780.7, df = 3, and P",
"cite_spans": [
{
"start": 1264,
"end": 1275,
"text": "Iida (1997)",
"ref_id": "BIBREF4"
},
{
"start": 1280,
"end": 1306,
"text": "Yamura-Takei et al. (2000)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 146,
"end": 154,
"text": "(Table 9",
"ref_id": "TABREF8"
},
{
"start": 462,
"end": 470,
"text": "Table 10",
"ref_id": "TABREF0"
},
{
"start": 767,
"end": 775,
"text": "Table 10",
"ref_id": "TABREF0"
},
{
"start": 963,
"end": 971,
"text": "Table 11",
"ref_id": "TABREF0"
},
{
"start": 1463,
"end": 1471,
"text": "Table 12",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Verification of Preferences 1a and 1b",
"sec_num": "5.2.1"
},
{
"text": ". This means that the averages of the four states differed significantly. Table 13 shows the result of the Wilcoxon rank-sum test with Holm's correction, which demonstrates that the order is statistically significant. Additionally, the correlation coefficient between the transition states and the averages of expected utility in Table 12 was +0.520 when we assigned the following values: CONTINUE=4, RETAIN=3, SMOOTH-SHIFT=2, ROUGH-SHIFT=1. This means that expected utility correlates with topic continuity. These results provide statistical evidence of a principle of the Meaning Game behind Rules 1 and 2 in Centering Theory.",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 82,
"text": "Table 13",
"ref_id": "TABREF0"
},
{
"start": 334,
"end": 342,
"text": "Table 12",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "<",
"sec_num": null
},
{
"text": "We resolved the problems regarding salience raised in Section 2 by formulating salience as a reference probability. That is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Quantitative Definition of Salience as the Reference Probability",
"sec_num": "6.1"
},
{
"text": "In our formulation, salience becomes a measurable quantity and is statistically verifiable against large-scale linguistic data. By adopting regression algorithms that can handle multiple explanatory variables, the model can integrate features that influence salience more easily than the heuristic methods of previous work. By taking into account the distance between the current utterance and the latest reference to each entity, the model can handle not only entities referenced in the previous utterance but all entities in the previous discourse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Quantitative Definition of Salience as the Reference Probability",
"sec_num": "6.1"
},
{
"text": "In Section 5.2.1, we confirmed that 75.3% of the samples favored Preference 1a with MLR and that 74.4% favored it with SVR. This means, however, that about 25% of the samples did not favor Preference 1a. In checking the results, we found that semantic features (e.g., selectional restrictions of predicates, semantic categories of anaphors, etc.) accounted for this difference. We consider that if these features were incorporated into the multiple regression of the reference probability, Preferences 1a and 1b would become even stronger.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Samples Disagreeing with Preference 1a",
"sec_num": "6.2"
},
{
"text": "In Section 5.2.2, we verified the correspondence between the order of the average expected utilities for each transition state and the order of Rule 2 of CT. This means that Rule 2 of CT can be derived from MG. This verification is, however, not fully strict: we should verify the order of the solutions of each case, not the order of the averages. We leave this issue for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Strictness of Verification of Preference 2",
"sec_num": "6.3"
},
{
"text": "CT has two limitations despite being a standard theory of discourse: it lacks a principle behind discourse phenomena and a quantitative definition of salience. We have quantitatively formulated salience as a reference probability from the standpoint that the principle underlying discourse can be attributed to game theory. Furthermore, we formulated two preferences as the MG-based generalization of CT and statistically verified these preferences on a Japanese corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "In our verification of Preferences 1a and 1b, we showed that there is a positive correlation between the utility of an anaphor and the reference probability of its referent. In connection with this, the order of direct and indirect objects in the conventional Japanese Cf-ranking was empirically disproved. Preferences 1a and 1b derived from MG cover more general cases than Rule 1 in CT does. In our verification of Preference 2, we estimated the average expected utility for the four transition states and presented evidence that the order among the averages corresponds to that of Rule 2 in CT. These empirical results indicate that a principle informed by MG lies behind both rules in CT. We hence conclude that we have statistically verified our MG-based generalization, which is a more quantitative and principled model than conventional CT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
}
],
"back_matter": [
{
"text": "The authors are grateful to the members of the now-defunct Cyber Assist Research Center and to the people who annotated the corpus used in our study with GDA tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "8."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Centering: A framework for Modeling the Local Coherence of Discourse",
"authors": [
{
"first": "B",
"middle": [
"J"
],
"last": "Grosz",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Weinstein",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "2",
"pages": "203--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grosz, B.J., Joshi, A.K., and Weinstein, S. 1995. Centering: A framework for Modeling the Local Coherence of Discourse. Computational Linguistics, 21(2), pp. 203-225.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Game-Theoretic Account of Collaboration in Communication",
"authors": [
{
"first": "K",
"middle": [],
"last": "Hasida",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Nagao",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Miyata",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the First International Conference on Multi-Agent Systems",
"volume": "",
"issue": "",
"pages": "140--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hasida, K., Nagao, K., and Miyata, T. 1995. A Game-Theoretic Account of Collaboration in Communication. Proceedings of the First International Conference on Multi-Agent Systems, pp. 140-147.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Issues in Communication Game",
"authors": [
{
"first": "K",
"middle": [],
"last": "Hasida",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "531--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hasida, K. 1996. Issues in Communication Game. Proceedings of the 16th Conference on Computational Linguistics, pp. 531-536.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Global Document Annotation (GDA)",
"authors": [
{
"first": "K",
"middle": [],
"last": "Hasida",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hasida, K. 1998. Global Document Annotation (GDA). http://i-content.org/GDA/.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Discourse Coherence and Shifting Centers in Japanese Texts",
"authors": [
{
"first": "M",
"middle": [],
"last": "Iida",
"suffix": ""
}
],
"year": 1997,
"venue": "Centering Theory in Discourse",
"volume": "",
"issue": "",
"pages": "161--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iida, M. 1997. Discourse Coherence and Shifting Centers in Japanese Texts. In M. Walker, A. Joshi, and E. Prince, eds., Centering Theory in Discourse, pp. 161-180.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Intrasentential Centering: A Case Study",
"authors": [
{
"first": "M",
"middle": [],
"last": "Kameyama",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "89--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kameyama, M. 1997. Intrasentential Centering: A Case Study. In M. Walker, A. Joshi, and E. Prince, eds., Centering in Discourse, pp. 89-112.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "TinySVM: Support Vector Machines",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kudo",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kudo, T. 2002. TinySVM: Support Vector Machines. http://chasen.org/~taku/software/TinySVM/.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multiple Argument Ellipses Resolution in Japanese",
"authors": [
{
"first": "S",
"middle": [],
"last": "Nariyama",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of Machine Translation Summit VIII",
"volume": "",
"issue": "",
"pages": "241--245",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nariyama, S. 2001. Multiple Argument Ellipses Resolution in Japanese. Proceedings of Machine Translation Summit VIII, pp. 241-245.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Centering: A Parametric Theory and Its Instantiations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Stevenson",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Di Eugenio",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hitzeman",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "3",
"pages": "309--363",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Poesio, M., Stevenson, R., Di Eugenio, B., and Hitzeman, J. 2004. Centering: A Parametric Theory and Its Instantiations. Computational Linguistics, 30(3), pp. 309-363.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The R Project for Statistical Computing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Development Core",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Team",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R Development Core Team. 2002. The R Project for Statistical Computing. http://www.r-project.org/.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Functional Centering: Grounding Referential Coherence in Information Structure",
"authors": [
{
"first": "M",
"middle": [],
"last": "Strube",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Hahn",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "3",
"pages": "309--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strube, M. and Hahn, U. 1999. Functional Centering: Grounding Referential Coherence in Information Structure. Computational Linguistics, 25(3), pp. 309-344.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The Role of Global Topic in Japanese Zero Anaphora Resolution",
"authors": [
{
"first": "M",
"middle": [],
"last": "Yamura-Takei",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Takada",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Aizawa",
"suffix": ""
}
],
"year": 2000,
"venue": "IPSJ",
"volume": "135",
"issue": "10",
"pages": "71--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yamura-Takei, M., Takada, M., and Aizawa, T. 2000. The Role of Global Topic in Japanese Zero Anaphora Resolution (in Japanese). Technical Report of IPSJ, 135(10), pp. 71-78.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Japanese Discourse and the Process of Centering",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Walker",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Cote",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "2",
"pages": "193--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Walker, M.A., Iida, M., and Cote, S. 1994. Japanese Discourse and the Process of Centering. Computational Linguistics, 20(2), pp. 193-232.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Comparison of the expected utilities of two choices"
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td colspan=\"5\">states of centers between utterances</td></tr><tr><td>(</td><td>)</td><td>(</td><td>1 \u2212</td><td>)</td></tr></table>",
"num": null,
"html": null,
"text": "Transition"
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td/><td>between MG and CT concepts</td></tr><tr><td>MG</td><td>CT</td></tr><tr><td>Pr: Reference probability</td><td>Salience (Cf-ranking)</td></tr><tr><td>High-Pr discourse entity</td><td>Center</td></tr><tr><td>Ut: Utility of noun phrase</td><td>Simplicity of noun phrase</td></tr><tr><td>High-Ut noun phrase</td><td>Pronoun</td></tr><tr><td>Low-Ut noun phrase</td><td>Non-pronoun</td></tr></table>",
"num": null,
"html": null,
"text": "Correspondence"
},
"TABREF2": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": "Features used in regression analysis of Pr(e,U i ) Dist log ( (# utterances between U i and the latest reference to e ) + 1) Gram grammatical function of the latest reference to e (wa/ga/no/o/ni/mo/de/kara/to)"
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td>Anaphora Types</td><td colspan=\"2\"># Sample Ratio</td></tr><tr><td>Zero Pronoun</td><td>5876</td><td>35.1%</td></tr><tr><td>Pronoun</td><td>843</td><td>5.0%</td></tr><tr><td>Noun Phrase with Demonstrative</td><td>1011</td><td>6.0%</td></tr><tr><td>Other Noun Phrase</td><td>8998</td><td>53.8%</td></tr><tr><td>Total</td><td colspan=\"2\">16728 100.0%</td></tr></table>",
"num": null,
"html": null,
"text": "Distribution of Japanese anaphora types"
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td>in U i</td></tr></table>",
"num": null,
"html": null,
"text": ""
},
"TABREF5": {
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">Japanese grammatical functions (by particles)</td></tr><tr><td>Particle (Grammatical Function)</td><td># Sample</td><td>Referenced in U i+1</td><td>Reference Probability</td></tr><tr><td>wa (topic)</td><td>35,329</td><td colspan=\"2\">1,908 0.0540</td></tr><tr><td>ga (subject)</td><td>38,450</td><td colspan=\"2\">1,107 0.0288</td></tr><tr><td>no (of)</td><td>88,695</td><td colspan=\"2\">1,755 0.0198</td></tr><tr><td>o (direct object)</td><td>50,217</td><td colspan=\"2\">898 0.0179</td></tr><tr><td>ni (indirect object)</td><td>46,058</td><td colspan=\"2\">569 0.0124</td></tr><tr><td>mo</td><td>8,710</td><td colspan=\"2\">105 0.0121</td></tr><tr><td>de</td><td>24,142</td><td colspan=\"2\">267 0.0111</td></tr><tr><td>kara</td><td>7,963</td><td colspan=\"2\">76 0.00954</td></tr><tr><td>to</td><td>19,383</td><td colspan=\"2\">129 0.00666</td></tr><tr><td>Other particles</td><td>512,006</td><td colspan=\"2\">8,027 0.0157</td></tr><tr><td>No particle</td><td>153,197</td><td colspan=\"2\">1,315 0.00858</td></tr></table>",
"num": null,
"html": null,
"text": "Reference probabilities by only"
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td>k: Model No.</td><td>b k,0 (const.)</td><td>b k,1 (coeff. of dist)</td><td>b k,2 (coeff. of gram)</td><td>b k,3 (coeff. of chain)</td></tr><tr><td>1</td><td>-2.825</td><td>-0.7636</td><td>9.036</td><td>2.048</td></tr><tr><td>2</td><td>-3.055</td><td>-0.7067</td><td>10.47</td><td>2.270</td></tr><tr><td>3</td><td>-2.952</td><td>-0.7574</td><td>6.433</td><td>2.399</td></tr><tr><td>4</td><td>-3.288</td><td>-0.5911</td><td>9.170</td><td>2.129</td></tr><tr><td>5</td><td>-3.043</td><td>-0.6578</td><td>4.836</td><td>2.178</td></tr></table>",
"num": null,
"html": null,
"text": "Measured parameters of the five MLR models"
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Particle (Grammatical Function) Coefficient</td></tr><tr><td>wa (topic)</td><td>5.46</td></tr><tr><td>mo</td><td>5.37</td></tr><tr><td>ga (subject)</td><td>5.27</td></tr><tr><td>kara</td><td>5.14</td></tr><tr><td>o (object)</td><td>5.12</td></tr><tr><td>to</td><td>5.05</td></tr><tr><td>ni (indirect object)</td><td>5.05</td></tr><tr><td>no (of)</td><td>5.04</td></tr><tr><td>de</td><td>4.98</td></tr></table>",
"num": null,
"html": null,
"text": "Coefficient for maximizing correlation to Ut"
},
"TABREF8": {
"type_str": "table",
"content": "<table><tr><td>Anaphors in a same utterance</td><td>Utterances</td><td colspan=\"2\">Anaphors Percentage</td><td>Pairs of anaphors in the same utterance</td></tr><tr><td>0</td><td>47,728</td><td>-</td><td>-</td><td>-</td></tr><tr><td>1</td><td>14,960</td><td>14,960</td><td>89.4%</td><td>-</td></tr><tr><td>2</td><td>854</td><td>1,708</td><td>10.2%</td><td>854</td></tr><tr><td>3</td><td>20</td><td>60</td><td>0.4%</td><td>60</td></tr><tr><td>Total</td><td>63,562</td><td>16,728</td><td>100.0%</td><td>914</td></tr></table>",
"num": null,
"html": null,
"text": "Distribution of the number of anaphors in the same utterance"
},
"TABREF9": {
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">of Preference 1a and 1b</td><td/></tr><tr><td/><td/><td colspan=\"2\">Measured 95% Confidence Interval</td></tr><tr><td>MLR</td><td>Preference 1a: Agreement Percentage (in 360 pairs of anaphor) Preference 1b: Correlation Coefficient (in 16,728 samples)</td><td>75.3% (271/360) +0.373</td><td>[70.5, 79.6] [0.360, 0.386]</td></tr><tr><td>SVR</td><td>Preference 1a: Agreement Percentage (in 360 pairs of anaphor) Preference 1b: Correlation Coefficient</td><td>74.4% (268/360)</td><td>[69.6, 78.9]</td></tr><tr><td/><td>(in 16,728 samples)</td><td/><td/></tr></table>",
"num": null,
"html": null,
"text": "Verification"
},
"TABREF10": {
"type_str": "table",
"content": "<table><tr><td>CONTINUE</td><td>RETAIN</td><td>SMOOTH-SHIFT</td><td>ROUGH-SHIFT</td></tr><tr><td>Zero Pronoun</td><td/><td/><td/></tr></table>",
"num": null,
"html": null,
"text": "Distribution of transition states"
},
"TABREF11": {
"type_str": "table",
"content": "<table><tr><td>Transition State</td><td colspan=\"3\">#Sample Ave. of Expected Utility (Their Variance)</td></tr><tr><td>CONTINUE</td><td>1,995</td><td>0.874</td><td>(0.361)</td></tr><tr><td>RETAIN</td><td>102</td><td>0.473</td><td>(0.242)</td></tr><tr><td>SMOOTH-SHIFT</td><td>2,950</td><td>0.287</td><td>(0.175)</td></tr><tr><td>ROUGH-SHIFT</td><td>413</td><td>0.109</td><td>(0.0336)</td></tr></table>",
"num": null,
"html": null,
"text": "Averages of expected utilities for each transition state"
},
"TABREF12": {
"type_str": "table",
"content": "<table><tr><td>'s rank sum test</td></tr></table>",
"num": null,
"html": null,
"text": "Wilcoxon"
}
}
}
}