{
"paper_id": "Y13-1007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:32:08.197785Z"
},
"title": "A Quantitative Comparative Study of Prosodic and Discourse Units, the Case of French and Taiwan Mandarin",
"authors": [
{
"first": "Laurent",
"middle": [],
"last": "Pr\u00e9vot",
"suffix": "",
"affiliation": {},
"email": "laurent.prevot@lpl-aix.fr"
},
{
"first": "Alvin Cheng-Hsien",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Shu-Chuan",
"middle": [],
"last": "Tseng",
"suffix": "",
"affiliation": {},
"email": "tsengsc@gate.sinica.edu.tw"
},
{
"first": "Klim",
"middle": [],
"last": "Peshkov",
"suffix": "",
"affiliation": {},
"email": "klim.peshkov@lpl-aix.fr"
}
],
"year": "2013",
"venue": null,
"identifiers": {},
"abstract": "Studies of spontaneous conversational speech grounded in large and richly annotated corpora are still rare due to the scarcity of such resources. Comparative studies based on such resources are even rarer because of the additional need for comparability in terms of content, genre and speaking style. The present paper presents our efforts to establish such a dataset for two typologically diverse languages: French and Taiwan Mandarin. To the primary data, we added morphosyntactic, chunking, prosodic and discourse annotation in order to carry out quantitative comparative studies of the syntax-discourse-prosody interfaces. We introduce our work on the data creation itself as well as some preliminary results on the boundary alignment between prosodic and discourse units and on how POS tags and chunks are distributed at these boundaries.",
"pdf_parse": {
"paper_id": "Y13-1007",
"_pdf_hash": "",
"abstract": [
{
"text": "Studies of spontaneous conversational speech grounded in large and richly annotated corpora are still rare due to the scarcity of such resources. Comparative studies based on such resources are even rarer because of the additional need for comparability in terms of content, genre and speaking style. The present paper presents our efforts to establish such a dataset for two typologically diverse languages: French and Taiwan Mandarin. To the primary data, we added morphosyntactic, chunking, prosodic and discourse annotation in order to carry out quantitative comparative studies of the syntax-discourse-prosody interfaces. We introduce our work on the data creation itself as well as some preliminary results on the boundary alignment between prosodic and discourse units and on how POS tags and chunks are distributed at these boundaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Interest in the study of the discourse-prosody interface has risen in the last decade, as illustrated by the vitality of the events and projects in this domain. However, while theoretical proposals and descriptive works are numerous, quantitative systematic studies are less widespread due to the cost of creating resources usable for such studies. Indeed, prosodic and discourse analyses are delicate matters requiring lower-level processing such as alignment with the speech signal at the syllable level (for prosody) or at least basic syntactic annotation (for discourse). Moreover, many of these studies deal with read or monologue speech. The extremely spontaneous nature of conversational speech makes the first levels of processing complicated. Previous works (Liu and Tseng, 2009; Chen, 2011; Bertrand et al., 2008; Blache et al., 2009; Afantenos et al., 2012) gave us the opportunity to produce conversational resources of this kind. We then took advantage of a bilateral project to work on conversational speech in a quantitative fashion for two typologically diverse languages: French and Taiwan Mandarin. We believe this combination of linguistic resources and skills for these two languages is a rather unique situation and allows for comparative quantitative experiments on high-level linguistic analysis such as discourse and prosody.",
"cite_spans": [
{
"start": 771,
"end": 792,
"text": "(Liu and Tseng, 2009;",
"ref_id": null
},
{
"start": 793,
"end": 804,
"text": "Chen, 2011;",
"ref_id": null
},
{
"start": 805,
"end": 827,
"text": "Bertrand et al., 2008;",
"ref_id": "BIBREF3"
},
{
"start": 828,
"end": 848,
"text": "Blache et al., 2009;",
"ref_id": "BIBREF6"
},
{
"start": 849,
"end": 872,
"text": "Afantenos et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our objective is to understand the commonalities and differences in the discourse-prosody interface between these two languages. More precisely, we look at how prosodic units and discourse units are distributed with respect to each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In spirit, our work is closely related to that of (Simon and Degand, 2009; Lacheret et al., 2010; Gerdes et al., 2012) ; however, our focus here is on the insights we can get from a comparative study. Moreover, our dataset has a more conversational nature than the datasets studied in their work. Regarding the data, (Gerdes et al., 2012) wanted to cover an interesting spectrum of discourse genres and speaking styles, while we focused on conversations, both to make the comparative studies possible and to ensure enough coherent instances for statistical studies. Also, while (Lacheret et al., 2010) relies on a purely intuitive approach, we used a more balanced approach combining explicit criteria from different language domains. Finally, our annotations are largely produced either by automatic tools (trained on expert data) or by naive coders. This is a major difference from the studies listed above, which are based on expert annotations, and it allows us to scale up in data size more easily.",
"cite_spans": [
{
"start": 64,
"end": 77,
"text": "Degand, 2009;",
"ref_id": "BIBREF21"
},
{
"start": 78,
"end": 100,
"text": "Lacheret et al., 2010;",
"ref_id": null
},
{
"start": 101,
"end": 121,
"text": "Gerdes et al., 2012)",
"ref_id": null
},
{
"start": 312,
"end": 333,
"text": "(Gerdes et al., 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is structured as follows. We will start in section 2 by presenting how we built a comparable dataset from existing corpora. Then we will address in sections 3 and 4 respectively the creation of prosodic and discourse units. Based on these new datasets, we will investigate the discourse-prosody interface in a comparative and quantitative way (Section 5). Finally, in section 6 we will pay some attention to what is happening syntactically at the various types of boundaries defined in the preceding section. First of all, the corpora from both languages were recorded in very similar conditions. Both are face-to-face interactions in an anechoic room, and speech was recorded via headsets on separate channels. The original recordings are also very comparable in size. The raw figures of both datasets are presented in Table 1. 1 We had to decide which linguistic information and which parts of the full corpora to include in our joint dataset. Regarding the latter point, we extracted narrative sequences from the French data, which also included more interactive topic negotiation sequences. Regarding the linguistic levels, our study concerns the prosodic and discourse levels, but we wanted to be able to perform fine-grained studies involving syntactic and phonetic aspects. We therefore agreed to include syllables, tokens and part-of-speech information in our data, as can be seen in Table 2. As the POS tagsets are different in the two languages, we established a matching table to make the POS information mutually understandable (Table 3) .",
"cite_spans": [
{
"start": 838,
"end": 839,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 828,
"end": 835,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1383,
"end": 1385,
"text": "Ta",
"ref_id": null
},
{
"start": 1386,
"end": 1395,
"text": "(Table 3)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "tw / fr / Category: N / N / Nouns (N); Nh / P / Pronouns (Pro); Ne / D / Determiners (Det); V / V / Verbs (V); T, I, FW / I / Particles, DM 2 ... (Part); D / R / Adverbs (Adv); A / A / Adjectives (Adj); P / S / Prepositions (Prep)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The ORCHID.fr Dataset is a subset of the Corpus of Interactional Data (CID) (Bertrand et al., 2008) consisting of 1.5 hours of conversational speech produced by 3 female and 3 male speakers. The CID corpus is a collection of 8 hours of free conversation in French. All the speaker turn boundaries are time-aligned with the speech signal at the phone level using forced alignment techniques (Illina et al., 2004) . Moreover, the corpus had been entirely POS-tagged (see for a presentation of the probabilistic technique used). Finally, in the framework of the OTIM and ORCHID projects, an annotation campaign for annotating prosodic phrasing and segmenting the corpus into discourse units had been run. In the present project, we modified the criteria for labeling discourse units according to the commonly defined operational guidelines for French and Taiwan Mandarin data processing.",
"cite_spans": [
{
"start": 77,
"end": 100,
"text": "(Bertrand et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 389,
"end": 410,
"text": "(Illina et al., 2004)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Creation of the French dataset",
"sec_num": "2.1"
},
{
"text": "The ORCHID.tw Dataset is a subset of the Taiwan Mandarin Conversational Corpus (the TMC Corpus), consisting of 3.5 hours of conversational speech produced by 7 male and 9 female speakers (Tseng, 2013) . The TMC Corpus is a collection of 42 hours of free, task-oriented and topic-specific conversations in Taiwan Mandarin. All the speaker turn boundaries as well as syllable boundaries were human-labeled in the ORCHID.tw Dataset. Word boundaries and POS tags were automatically generated based on the syllable boundary information and the output of the automatic word segmentation and POS tagging system developed by the CKIP at Academia Sinica (Chen et al., 1996) . Previously, the ORCHID.tw Dataset had been annotated with boundaries of prosodic units as defined in (Liu and Tseng, 2009) and with boundaries of discourse units as in (Chen, 2011) . In the present ORCHID project, we modified the criteria for labeling discourse units according to the commonly defined operational guidelines for French and Taiwan Mandarin data processing. The definition of prosodic units remains unchanged.",
"cite_spans": [
{
"start": 187,
"end": 200,
"text": "(Tseng, 2013)",
"ref_id": "BIBREF23"
},
{
"start": 649,
"end": 668,
"text": "(Chen et al., 1996)",
"ref_id": "BIBREF11"
},
{
"start": 781,
"end": 793,
"text": "Tseng, 2009)",
"ref_id": null
},
{
"start": 836,
"end": 848,
"text": "(Chen, 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Creation of the Taiwan Mandarin dataset",
"sec_num": "2.2"
},
{
"text": "3 Producing prosodic units",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Creation of the Taiwan Mandarin dataset",
"sec_num": "2.2"
},
{
"text": "The definition of prosodic units is adopted mainly from prosodic phonology (Selkirk, 1986; Nespor and Vogel, 1986 ), which proposed a universal hierarchy of prosodic constituents. At least two levels of phrasing above the word have been admitted in French: the lower level of phonological phrases (Post, 2000) or accentual phrases (AP) (Jun and Fougeron, 2000) and the higher level of Intonational Phrases (IPs). The accentual phrase is the domain of primary stress. The latter is realized on the final full syllable of a word, with longer duration and higher intensity than non-final syllables, and is associated with a melodic movement. The secondary stress, more variable and optional, is generally realized on the initial stressed syllable of the first lexical word. It is associated with a rising movement.",
"cite_spans": [
{
"start": 75,
"end": 90,
"text": "(Selkirk, 1986;",
"ref_id": "BIBREF20"
},
{
"start": 91,
"end": 113,
"text": "Nespor and Vogel, 1986",
"ref_id": "BIBREF14"
},
{
"start": 296,
"end": 308,
"text": "(Post, 2000)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "French data",
"sec_num": "3.1"
},
{
"text": "The Intonational Phrase contains one or more accentual phrases. It is marked by a major final rise or fall (intonation contour) and a stronger final lengthening, and can be followed by a pause (Hirst and Di Cristo, 1984; Jun and Fougeron, 2000) . More recently, a few studies attempted to show the existence of an intermediate level of phrasing (intermediate phrase, ip) that would be realized with prosodic cues stronger than those associated with the AP and weaker than those associated with the IP (Michelas and D'Imperio, 2010). For the French dataset, both phonetic and phonological criteria have been used to annotate the boundaries of prosodic units. Once primary and secondary stresses are identified, the main acoustic cues are: (1) specific melodic contour, (2) final lengthening, (3) pitch reset. Moreover, disfluencies were annotated separately, and silent pauses have not been systematically associated with a boundary (Portes et al., 2011) . In a previous study involving two experts, we showed the reliability of the annotation criteria for the higher level of constituency (IP) (see (Nesterenko et al., 2010) ). In a second stage, we elaborated guidelines for the transcription of prosodic units in French by naive annotators. They had to annotate 4 levels of prosodic break defined in terms of a ToBI-style annotation (ref) (0 = no break; 1 = AP break; 2 = ip break; 3 = IP break) in Praat (Boersma, 2002) .",
"cite_spans": [
{
"start": 189,
"end": 216,
"text": "(Hirst and Di Cristo, 1984;",
"ref_id": null
},
{
"start": 217,
"end": 240,
"text": "Jun and Fougeron, 2000)",
"ref_id": null
},
{
"start": 922,
"end": 943,
"text": "(Portes et al., 2011)",
"ref_id": "BIBREF18"
},
{
"start": 1089,
"end": 1114,
"text": "(Nesterenko et al., 2010)",
"ref_id": "BIBREF15"
},
{
"start": 1391,
"end": 1406,
"text": "(Boersma, 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "French data",
"sec_num": "3.1"
},
{
"text": "Based on this break annotation, we created Prosodic Units (PUs) by considering any break of level 2 or 3 as a PU boundary. The merging of level 2 and 3 breaks was done to match the annotation style of the Taiwan Mandarin data, but also to improve the reliability of the data produced. Indeed, inter-annotator agreement was overall higher when levels 2 and 3 were collapsed. Finally, we added breaks on pauses over 400 ms. We computed a \u03ba-score for our dataset by taking each token as a decision point and counting the number of matching and non-matching boundaries across annotators. This method of calculation yielded a \u03ba-score of 0.71 for our dataset, which is a good score for naive coders on a prosodic phrasing task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "French data",
"sec_num": "3.1"
},
{
"text": "Cohen's kappa (Cohen and others, 1960) (and see (Carletta, 1996; Artstein and Poesio, 2008) for further discussion) is a measure of inter-coder agreement. It corrects raw agreement by an estimate of chance agreement. The issue here is that ours is a segmentation task; we therefore have to decide what the decision points are. We use tokens as decision points rather than a fixed sample (as is done in some annotation tools) because the French guidelines use words as the base units for instructing where to put boundaries. Agreement on no-boundary (0-0) is therefore an agreement for this decision task, and there is no satisfying way to evaluate a kappa score if these agreements are left out. Other measures need to be introduced (Pevzner and Hearst, 2002; Fournier and Inkpen, 2012) if one wants to measure a different aspect of segmentation agreement. However, to be fully transparent about the annotation results, Figure 1 presents the contingency table for the ORCHID-style prosodic units (see also (Peshkov et al., 2012) for a deeper evaluation of the annotation of the whole CID corpus).",
"cite_spans": [
{
"start": 48,
"end": 64,
"text": "(Carletta, 1996;",
"ref_id": "BIBREF9"
},
{
"start": 65,
"end": 91,
"text": "Artstein and Poesio, 2008)",
"ref_id": "BIBREF2"
},
{
"start": 785,
"end": 811,
"text": "(Pevzner and Hearst, 2002;",
"ref_id": "BIBREF17"
},
{
"start": 812,
"end": 838,
"text": "Fournier and Inkpen, 2012)",
"ref_id": null
},
{
"start": 1066,
"end": 1088,
"text": "(Peshkov et al., 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 978,
"end": 986,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "French data",
"sec_num": "3.1"
},
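{
"text": "A minimal illustrative sketch (not from the original paper) of the token-level kappa computation described above, assuming the two annotators' decisions are given as parallel per-token lists of 0/1 labels (1 = a break of level 2 or 3 after the token); names and toy data are hypothetical.\ndef cohen_kappa(a, b):\n    # a, b: parallel lists of 0/1 boundary decisions, one per token\n    assert len(a) == len(b) and len(a) > 0\n    n = len(a)\n    observed = sum(1 for x, y in zip(a, b) if x == y) / n\n    # chance agreement estimated from each annotator's label proportions\n    pa1, pb1 = sum(a) / n, sum(b) / n\n    chance = pa1 * pb1 + (1 - pa1) * (1 - pb1)\n    return (observed - chance) / (1 - chance)\n\n# toy example: agreements on no-boundary (0-0) count as agreements\nprint(cohen_kappa([0, 0, 1, 0, 1, 0], [0, 0, 1, 0, 0, 0]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": null,
"sec_num": null
},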
{
"text": "Figure 1 (contingency table for the French prosodic units, annotator A vs. annotator B over the collapsed break categories (0-1) and (2-3)): row (0-1): 1987; row (2-3): 581, 5272.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "French data",
"sec_num": "3.1"
},
{
"text": "The definition of prosodic units is adopted mainly from that of the Intonation Unit in the field of discourse analysis (Chafe, 1994; Tao, 1996) , but emphasizes the concept of prosodic phrasing rather than a coherent intonation pattern. We are of the opinion that prosodic phrasing is definitely not purely linear and sequential, as language planning should work with a certain kind of structure and hierarchy, which is expected to result in different types of prosodic phrasing. Nevertheless, the design of a single layer of prosodic phrasing provides segmentation boundaries for further distinguishing the types of prosodic units, and makes it easier to achieve reasonable inter-labeler agreement. Boundaries of prosodic units were annotated based on four main cues perceived by the labelers: (1) pitch reset (a shift upward in overall pitch level), (2) lengthening (changes in duration), (3) alternation of speech rate (changes in rhythm), and (4) occurrences of paralinguistic sounds (disjunction or disruption of utterances such as pauses, inhalation, and laughter). The annotation of prosodic units in the ORCHID.tw Dataset was accomplished in an earlier project (Liu and Tseng, 2009) . Three labelers were trained to annotate prosodic units on a subset of 150 speaker turns until a satisfactory consistency rate was achieved. The rest of the dataset was completed by the three labelers independently.",
"cite_spans": [
{
"start": 115,
"end": 128,
"text": "(Chafe, 1994;",
"ref_id": "BIBREF10"
},
{
"start": 129,
"end": 139,
"text": "Tao, 1996)",
"ref_id": "BIBREF22"
},
{
"start": 1183,
"end": 1195,
"text": "Tseng, 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Taiwan Mandarin data",
"sec_num": "3.2"
},
{
"text": "Although the French and Taiwan Mandarin datasets were annotated based on different theories, the annotation criteria were comparable. To ensure the comparability of the criteria, a cross-language segmentation experiment was conducted on a small subset of our data by the authors of this paper. Each tried to annotate prosodic units in the other language. The annotations produced by the non-native labelers confirmed that the main cues used for segmenting prosodic unit boundaries were in principle uniform, except for those caused by repairs and restarts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Taiwan Mandarin data",
"sec_num": "3.2"
},
{
"text": "Concerning discourse units, the annotation campaign also involved naive annotators, who segmented the whole corpus (half of it being cross-annotated). This annotation was performed without listening to the signal but with timing information. It was performed with Praat (Boersma, 2002) but without including the signal window, only the time-aligned token tiers. The segmentation was performed by adopting a set of discourse segmentation guidelines inspired by (Muller et al., 2012) and (Chen, 2011) . We combined a semantic criterion (Vendler-style (Vendler, 1957) eventuality identification and Xue's proposition identification (Xue, 2008) ), a discourse criterion (presence of discourse markers) and a pragmatic criterion (recognition of specific speech acts) to perform the segmentation.",
"cite_spans": [
{
"start": 275,
"end": 290,
"text": "(Boersma, 2002)",
"ref_id": "BIBREF8"
},
{
"start": 468,
"end": 489,
"text": "(Muller et al., 2012)",
"ref_id": "BIBREF1"
},
{
"start": 494,
"end": 506,
"text": "(Chen, 2011)",
"ref_id": null
},
{
"start": 540,
"end": 566,
"text": "(Vendler's (Vendler, 1957)",
"ref_id": "BIBREF24"
},
{
"start": 639,
"end": 650,
"text": "(Xue, 2008)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Producing discourse units",
"sec_num": "4"
},
{
"text": "More practically, the task consisted of first identifying a main predicate, and then all its complements and adjuncts, as illustrated in (1) and (2). Mandarin spontaneous speech presents an additional challenge for DU annotation because of its lack of a tense-marking verbal system. Our segmentation proceeds on the basis of the semantic bonding between the predicates identified (Giv\u00f3n, 1993) . Additional cues such as discourse connectives articulating discourse units were also used. Finally, mainly because of interactive dialogic phenomena (e.g. question-answer pairs), we added a few pragmatic criteria allowing short utterances (e.g. yeah) or fragments (e.g. where?) (Ginzburg et al., 2007) . Manual discourse segmentation with our guidelines has proven to be reliable, with \u03ba-scores ranging between 0.74 and 0.85 for the French data and reaching 0.86 for the Taiwan Mandarin data. Moreover, we distinguished between several kinds of units in discourse: discourse units and abandoned discourse units. 3 The latter are units that are so incomplete that it is impossible to attribute a discourse contribution to them. They are distinguished from false starts (which are included in the DU to which they contributed) by the fact that the material they introduce cannot be said to be taken up in the following discourse unit.",
"cite_spans": [
{
"start": 376,
"end": 389,
"text": "(Giv\u00f3n, 1993)",
"ref_id": null
},
{
"start": 675,
"end": 698,
"text": "(Ginzburg et al., 2007)",
"ref_id": null
},
{
"start": 996,
"end": 997,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Producing discourse units",
"sec_num": "4"
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Producing discourse units",
"sec_num": "4"
},
{
"text": "The comparative sizes of the units produced are given in Table 4 . The significantly smaller French PUs (up to 40% depending on the units used for comparison) might partially be attributed to the difference in segmentation style and to the extraction of the subsets. The Taiwan Mandarin dataset contains only very long speaker turns, thus reducing the number of shorter prosodic units, which are more often produced in interactive conversational speech. For DUs, for which the guidelines are basically identical, we get very similar DU sizes in terms of duration and number of syllables (roughly 15% difference); French units host more tokens (43%) and therefore include shorter words.",
"cite_spans": [],
"ref_spans": [
{
"start": 7,
"end": 14,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Producing discourse units",
"sec_num": "4"
},
{
"text": "Unit / dur (s) / # syll / # tokens / # PU: PU-fr 0.88, 3.9, 3.3, - ; PU-tw 1.44, 6.4, 4.4, - ; DU-fr 2.51, 11.1, 9.5, 2.8 ; DU-tw 2.17, 9.6, 6.6, 1.5. Table 4 : Comparative size of the units produced",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Producing discourse units",
"sec_num": "4"
},
{
"text": "Moreover, from Table 1 we can see that the French dataset includes a larger proportion of abandoned discourse units (11% vs. 6.5% in the Taiwan Mandarin dataset). This is in line with the more spontaneous conversational style of the French dataset, as already mentioned.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 21,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Producing discourse units",
"sec_num": "4"
},
{
"text": "We examined the different types of association between prosodic and discourse units by means of boundary alignment. We follow the classification of (Chen, 2011) , which starts from discourse units and distinguishes 8 situations resulting from the combination of two parameters: (i) the presence of a prosodic boundary within the discourse unit (inner boundary vs. no-inner-boundary); (ii) the match of the discourse and prosodic unit boundaries at the left, right, both or neither edge. Such a classification resulted in the distribution illustrated in Figure 2. In the French data, perhaps because of the comparatively smaller prosodic units, discourse units much more systematically host several prosodic units. It is striking to see in Figure 3 that, for both languages, discourse units provide the starting and ending boundaries of the prosodic units more than half of the time. Overall, we see in Figure 3 that once atomic and composite (in terms of PUs) DUs are collapsed, their distribution over the alignment types is quite similar.",
"cite_spans": [
{
"start": 126,
"end": 138,
"text": "(Chen, 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Association of prosodic and discourse units",
"sec_num": "5.2"
},
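{
"text": "A minimal illustrative sketch (not from the original paper) of the 8-way DU/PU boundary alignment classification described above, assuming a discourse unit is given as a (start, end) time pair, prosodic boundaries as a list of times, and a small tolerance for time comparison; all names are hypothetical.\ndef du_alignment_type(du, pu_boundaries, tol=0.01):\n    # du: (start, end) in seconds; pu_boundaries: times of prosodic unit boundaries\n    start, end = du\n    left = any(abs(b - start) <= tol for b in pu_boundaries)\n    right = any(abs(b - end) <= tol for b in pu_boundaries)\n    inner = any(start + tol < b < end - tol for b in pu_boundaries)\n    match = {(True, True): 'both', (True, False): 'left',\n             (False, True): 'right', (False, False): 'none'}[(left, right)]\n    return ('inner' if inner else 'no-inner') + '-' + match\n\n# toy example: a DU from 1.0 to 3.0 s with PU boundaries at 1.0, 2.0 and 3.0 s\nprint(du_alignment_type((1.0, 3.0), [1.0, 2.0, 3.0]))  # inner-both",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": null,
"sec_num": null
},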
{
"text": "Making use of the POS mapping table we established (Table 3) , we are able to compare the distribution of POS tags at the boundaries. More precisely, we looked at places where PU and DU initial boundaries match (Fig. 4) and where PU and DU final boundaries match (Fig. 5 ). Interestingly, French units tend to begin more often with connectives and pronouns. In Taiwan Mandarin, the percentage of pronouns is lower and that of adverbs is higher. This may be due to the fact that in conversation, sentences are often zero-subject or have their focus moved to sentence-initial position. For final matching boundaries, Taiwan Mandarin often ends with sentence-final particles, which is expected in conversation. Moreover, French units end more often with nouns than with verbs, while Taiwan Mandarin units end more often with verbs than with nouns. Our preliminary studies on word categories only provide information about the boundaries. More work on sentence structure is required to conduct in-depth studies on language production.",
"cite_spans": [],
"ref_spans": [
{
"start": 51,
"end": 60,
"text": "(Table 3)",
"ref_id": "TABREF4"
},
{
"start": 234,
"end": 242,
"text": "(Fig. 4)",
"ref_id": "FIGREF4"
},
{
"start": 270,
"end": 277,
"text": "(Fig. 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Syntactic categories at boundaries",
"sec_num": "6"
},
{
"text": "Chunks (Abney, 1991) can be seen as an intermediate level of syntactic processing. They are the basic structures built from the tags, but they do not deal with long dependencies or rich constituency. They are basically units centered on a syntactic head, a content word. As Abney reminds us, chunks can be related to \u03c6-sentences (Gee and Grosjean, 1983) , which have a more intonational nature. An idea defended in these early works is that chunks are indeed language processing units from a cognitive viewpoint. The development of experimental linguistics has renewed the interest in this hypothesis; recent work attempts to make it more precise (Blache, 2013) and to relate it to other empirical evidence such as eye-tracking.",
"cite_spans": [
{
"start": 7,
"end": 20,
"text": "(Abney, 1991)",
"ref_id": "BIBREF0"
},
{
"start": 333,
"end": 348,
"text": "Grosjean, 1983)",
"ref_id": null
},
{
"start": 633,
"end": 647,
"text": "(Blache, 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chunks as processing units",
"sec_num": "7"
},
{
"text": "With this idea in mind, we will investigate our prosodic and discourse units in terms of chunk size and constituency. The first basic hypothesis we are testing is that, if tokens are syntactic units and chunks are closer to processing units, the structure of PUs and DUs in terms of tokens does not have to match across languages, while it should in terms of chunks. More precisely, we expect a significant variation of PU/DU size across languages in terms of number of tokens but not in terms of chunk size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chunks as processing units",
"sec_num": "7"
},
{
"text": "VC = Verbal Chunk; NC = Nominal Chunk; AdvC = Adverbial Chunk; PC = Prepositional Chunk; IC = Interactional Chunk; DisfError = Disfluencies or tagging errors; AdjC = Adjectival Chunk. From the chunking definition, we retain the importance of the head. We therefore designed simple rules using POS-tag patterns for creating the chunks listed in Table 5 . This was done by looking at the most frequent patterns first. We proceeded in three different steps involving three different types of rules; the first two types of rules are strongly language-dependent, while the third type is common to both languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 319,
"end": 326,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "VC",
"sec_num": null
},
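{
"text": "A minimal illustrative sketch (not from the original paper) of rule-based chunking with POS-tag patterns in the spirit described above, using the unified tagset of Table 3; the grouping rules below are simplified examples, not the actual rules of the paper.\ndef chunk(tags):\n    # greedy left-to-right grouping of a POS sequence into (label, tags) chunks\n    chunks, i = [], 0\n    while i < len(tags):\n        if tags[i] == 'Prep':  # PC: preposition plus the nominal material it introduces\n            j = i + 1\n            while j < len(tags) and tags[j] in ('Det', 'Adj'):\n                j += 1\n            if j < len(tags) and tags[j] in ('N', 'Pro'):\n                j += 1\n            chunks.append(('PC', tags[i:j])); i = j\n        elif tags[i] in ('Det', 'N', 'Pro'):  # NC: optional Det/Adj material then a nominal head\n            j = i\n            while j < len(tags) and tags[j] in ('Det', 'Adj'):\n                j += 1\n            if j < len(tags) and tags[j] in ('N', 'Pro'):\n                j += 1\n            chunks.append(('NC', tags[i:j])); i = j\n        elif tags[i] == 'V':  # VC: a sequence of verbal tags\n            j = i\n            while j < len(tags) and tags[j] == 'V':\n                j += 1\n            chunks.append(('VC', tags[i:j])); i = j\n        else:  # single-tag chunks: adverbs, lone adjectives, particles/DMs; anything else is flagged\n            label = {'Adv': 'AdvC', 'Adj': 'AdjC', 'Part': 'IC'}.get(tags[i], 'DisfError')\n            chunks.append((label, [tags[i]])); i += 1\n    return chunks\n\nprint(chunk(['Pro', 'V', 'Det', 'N', 'Prep', 'Det', 'N']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": null,
"sec_num": null
},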
{
"text": "Using an existing pre-trained chunker was problematic. The rules used here were defined to handle spontaneous spoken constructions. To our knowledge, existing chunkers are trained on written data, which makes them impractical for our purposes. Moreover, in a rule-based design the rules are accessible to the linguists, and this allows us to compare them directly across languages rather than comparing chunking quality. Indeed, we are not interested in the chunks from an applicative perspective (such as named entity recognition) but as a good approximation of semantic processing units. In the longer term, it could however be interesting to evaluate and improve pre-trained chunking steps, but this would require a large amount of manual work which we cannot afford for the time being.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "VC",
"sec_num": null
},
{
"text": "We then tried to validate our hypothesis based on the chunks created and computed the size of PUs and DUs in terms of chunks (Table 6 ) and, more precisely, their length distribution in chunks (Figures 6 and 7) .",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "(Table 6",
"ref_id": "TABREF9"
},
{
"start": 203,
"end": 220,
"text": "(Figures 6 and 7)",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Size in chunks",
"sec_num": "7.2"
},
{
"text": "However, the Taiwan Mandarin and French unit sizes and size distributions are very different (Table 6: lge / Size-PU / Size-DU; fr 1.48, 3.69; tw 2.05, 2.27). Regarding the French PUs, this could be due to the sampling of the data (shorter PUs compared with the long speaker turns sampled for Taiwan Mandarin) and to the annotation criteria for PUs. Regarding the DUs, the distribution is also different, but for this category we suspect an issue with the tagging and chunking process. While we tried to keep the chunking rules coherent across the two languages, we might need either more careful joint crafting of the rules or perhaps a completely systematic chunking rule system. However, we do not have annotated chunks on this kind of data for training a supervised machine learning approach. Moreover, the dataset is significant but most likely not sufficient for unsupervised methods. In this context, crafting a simple rule-based system was appealing. This work has shown that to create perfectly comparable corpora, one needs to start from a joint design. However, this is a rare scenario, and most comparative datasets of richly annotated corpora will reuse at least part of previous monolingual studies. Here we tried to make use of extremely similar resources for producing comparable corpora. We believe that, although this dataset could still be improved and could have benefited from an even more similar starting point, we have a unique resource for performing quantitative comparative studies of the kind initiated here. Equipped with this dataset, we are in a position to conduct a series of deeper comparative studies. The chunking systems used in this paper are just a first attempt in this direction. Although the results on chunk size are not conclusive for our hypothesis, we gained a better understanding of the structures present in the units we are investigating, and we would like to push our exploration further in this direction. We are currently looking at the distribution of mono-, bi- and tri-chunk PU and DU sequences in order to refine the language comparison without going into a full syntactic analysis, which is out of reach for this kind of data. In parallel, we will also attempt a shallower but more robust approach consisting of simply counting the number of content words in the units. This is even more basic than chunking, but we would like to see whether it could be an interesting shortcut to the basic semantic structure of these units.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Size in chunks",
"sec_num": "7.2"
},
{
"text": "See sections 3 and 4 for the definitions of Prosodic Units, Discourse Units and Abandoned DUs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We actually also had a parenthetical category, but it was not consistently annotated at the current stage, and this distinction was therefore not included in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been realized thanks to the support of the France-Taiwan ORCHID Program, under grant 100-2911-I-001-504 and the NSC project 100-2410-H-001-093 granted to the second author, as well as ANR OTIM BLAN08-2-349062 for initial work on the French data. We would also like to thank our colleagues for their help at various stages of the data preparation, in particular Roxane Bertrand, Yi-Fen Liu, Robert Espesser, St\u00e9phane Rauzy, Brigitte Bigi, and Philippe Blache.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Parsing by chunks",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 1991,
"venue": "Principle-Based Parsing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Abney. 1991. Parsing by chunks. In Robert Berwick, Steven Abney, and Carol Tenny, editors, Principle-Based Parsing. Kluwer Academic Publish- ers, Dordrecht.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An empirical resource for discovering cognitive principles of discourse organisation: the annodis corpus",
"authors": [
{
"first": "Stergos",
"middle": [],
"last": "Afantenos",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Asher",
"suffix": ""
},
{
"first": "Farah",
"middle": [],
"last": "Benamara",
"suffix": ""
},
{
"first": "Myriam",
"middle": [],
"last": "Bras",
"suffix": ""
},
{
"first": "C\u00e9cile",
"middle": [],
"last": "Fabre",
"suffix": ""
},
{
"first": "Mai",
"middle": [],
"last": "Ho-Dac",
"suffix": ""
},
{
"first": "Anne",
"middle": [
"Le"
],
"last": "Draoulec",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Marie-Paule",
"middle": [],
"last": "P\u00e9ry-Woodley",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Pr\u00e9vot",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stergos Afantenos, Nicholas Asher, Farah Benamara, Myriam Bras, C\u00e9cile Fabre, Mai Ho-dac, Anne Le Draoulec, Philippe Muller, Marie-Paule P\u00e9ry- Woodley, Laurent Pr\u00e9vot, et al. 2012. An empirical resource for discovering cognitive principles of dis- course organisation: the annodis corpus. In Proceed- ings of LREC.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Inter-coder agreement for computational linguistics",
"authors": [
{
"first": "Ron",
"middle": [],
"last": "Artstein",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "555--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computa- tional Linguistics, 34(4):555-596.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Le cidcorpus of interactional data-annotation et exploitation multimodale de parole conversationnelle",
"authors": [
{
"first": "R",
"middle": [],
"last": "Bertrand",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Blache",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Espesser",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ferr\u00e9",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Meunier",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Priego-Valverde",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Rauzy",
"suffix": ""
}
],
"year": 2008,
"venue": "Traitement Automatique des Langues",
"volume": "49",
"issue": "3",
"pages": "1--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Bertrand, P. Blache, R. Espesser, G. Ferr\u00e9, C. Meunier, B. Priego-Valverde, S. Rauzy, et al. 2008. Le cid- corpus of interactional data-annotation et exploitation multimodale de parole conversationnelle. Traitement Automatique des Langues, 49(3):1-30.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Robustness and processing difficulty models. a pilot study for eyetracking data on the french treebank",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Blache",
"suffix": ""
},
{
"first": "St\u00e9phane",
"middle": [],
"last": "Rauzy",
"suffix": ""
}
],
"year": 2012,
"venue": "24th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philippe Blache and St\u00e9phane Rauzy. 2012. Robustness and processing difficulty models. a pilot study for eye- tracking data on the french treebank. In 24th Interna- tional Conference on Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Influence de la qualit\u00e9 de l'\u00e9tiquetage sur le chunking: une corr\u00e9lation d\u00e9pendant de la taille des chunks",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Blache",
"suffix": ""
},
{
"first": "St\u00e9phane",
"middle": [],
"last": "Rauzy",
"suffix": ""
}
],
"year": 2008,
"venue": "Traitement Automatique des Langues Naturelles",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philippe Blache, St\u00e9phane Rauzy, et al. 2008. In- fluence de la qualit\u00e9 de l'\u00e9tiquetage sur le chunk- ing: une corr\u00e9lation d\u00e9pendant de la taille des chunks. Actes, Traitement Automatique des Langues Naturelles, pages 1-10.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Creating and exploiting multimodal annotated corpora: the toma project. Multimodal corpora",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Blache",
"suffix": ""
},
{
"first": "Roxane",
"middle": [],
"last": "Bertrand",
"suffix": ""
},
{
"first": "Ga\u00eblle",
"middle": [],
"last": "Ferr\u00e9",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "38--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philippe Blache, Roxane Bertrand, and Ga\u00eblle Ferr\u00e9. 2009. Creating and exploiting multimodal annotated corpora: the toma project. Multimodal corpora, pages 38-53.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Chunks et activation: un mod\u00e8le de facilitation du traitement linguistique",
"authors": [
{
"first": "Philippe",
"middle": [],
"last": "Blache",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of Traitement Automatique des Langues Naturelles",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philippe Blache. 2013. Chunks et activation: un mod- \u00e8le de facilitation du traitement linguistique. In Pro- ceedings of Traitement Automatique des Langues Na- turelles.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Praat, a system for doing phonetics by computer",
"authors": [
{
"first": "P",
"middle": [],
"last": "Boersma",
"suffix": ""
}
],
"year": 2002,
"venue": "Glot international",
"volume": "5",
"issue": "9",
"pages": "341--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Boersma. 2002. Praat, a system for doing phonetics by computer. Glot international, 5(9/10):341-345.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Assessing agreement on classification tasks: The kappa statistic",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "22",
"issue": "",
"pages": "249--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Carletta. 1996. Assessing agreement on classifica- tion tasks: The kappa statistic. Computational linguis- tics, 22(2):249-254.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Discourse, consciousness, and time: The flow and displacement of conscious experience in speaking and writing",
"authors": [
{
"first": "Wallace",
"middle": [],
"last": "Chafe",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wallace Chafe. 1994. Discourse, consciousness, and time: The flow and displacement of conscious expe- rience in speaking and writing. University of Chicago Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Sinica corpus: Design methodology for balanced corpora",
"authors": [
{
"first": "Keh-Jiann",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Li-Ping",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Hui-Li",
"middle": [],
"last": "Hsu",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Eleventh Pacific Asia Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "167--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keh-Jiann Chen, Chu-Ren Huang, Li-Ping Chang, and Hui-Li Hsu. 1996. Sinica corpus: Design method- ology for balanced corpora. In Proceedings of the Eleventh Pacific Asia Conference on Language, Infor- mation and Computation, pages 167-176.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Manuel d'annotation en relations de discours du projet annodis",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuel d'annotation en relations de discours du projet annodis. Technical Report 21, CLLE-ERS, Toulouse University.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Prosodic phonology",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Nespor",
"suffix": ""
},
{
"first": "Irene",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Nespor and Irene Vogel. 1986. Prosodic phonol- ogy. Dordrecht.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Prosody in a corpus of french spontaneous speech: perception, annotation and prosody\u02dcsyntax interaction",
"authors": [
{
"first": "Irina",
"middle": [],
"last": "Nesterenko",
"suffix": ""
},
{
"first": "Stephane",
"middle": [],
"last": "Rauzy",
"suffix": ""
},
{
"first": "Roxane",
"middle": [],
"last": "Bertrand",
"suffix": ""
}
],
"year": 2010,
"venue": "Speech Prosody 2010-Fifth International Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irina Nesterenko, Stephane Rauzy, and Roxane Bertrand. 2010. Prosody in a corpus of french spontaneous speech: perception, annotation and prosody\u02dcsyntax in- teraction. In Speech Prosody 2010-Fifth International Conference.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Quantitative experiments on prosodic and discourse units in the corpus of interactional data",
"authors": [
{
"first": "Klim",
"middle": [],
"last": "Peshkov",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Pr\u00e9vot",
"suffix": ""
},
{
"first": "Roxane",
"middle": [],
"last": "Bertrand",
"suffix": ""
},
{
"first": "St\u00e9phane",
"middle": [],
"last": "Rauzy",
"suffix": ""
},
{
"first": "Philippe",
"middle": [],
"last": "Blache",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of SemDial 2012: The 16th Workshop on the Semantics and Pragmatics of Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klim Peshkov, Laurent Pr\u00e9vot, Roxane Bertrand, St\u00e9phane Rauzy, and Philippe Blache. 2012. Quan- titative experiments on prosodic and discourse units in the corpus of interactional data. In Proceedings of SemDial 2012: The 16th Workshop on the Semantics and Pragmatics of Dialogue.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A critique and improvement of an evaluation metric for text segmentation",
"authors": [
{
"first": "L",
"middle": [],
"last": "Pevzner",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "1",
"pages": "19--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Pevzner and M. A. Hearst. 2002. A critique and im- provement of an evaluation metric for text segmenta- tion. Computational Linguistics, 28(1):19-36.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Permanence et variation des unit\u00e9s prosodiques dans le discours et l'interaction",
"authors": [
{
"first": "Cristel",
"middle": [],
"last": "Portes",
"suffix": ""
},
{
"first": "Roxane",
"middle": [],
"last": "Bertrand",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of French Language Studies",
"volume": "21",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cristel Portes, Roxane Bertrand, et al. 2011. Perma- nence et variation des unit\u00e9s prosodiques dans le dis- cours et l'interaction. Journal of French Language Studies, 21(1).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Tonal and phrasal structures in French intonation",
"authors": [
{
"first": "Brechtje",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2000,
"venue": "Thesus",
"volume": "34",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brechtje Post. 2000. Tonal and phrasal structures in French intonation, volume 34. Thesus.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Phonology and syntax: The relation between sound and structure",
"authors": [
{
"first": "Elisabeth",
"middle": [],
"last": "Selkirk",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabeth Selkirk. 1986. Phonology and syntax: The relation between sound and structure. The MIT Press.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "On identifying basic discourse units in speech: theoretical and empirical issues. Discours. Revue de linguistique",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Simon",
"suffix": ""
},
{
"first": "Liesbeth",
"middle": [],
"last": "Degand",
"suffix": ""
}
],
"year": 2009,
"venue": "psycholinguistique et informatique",
"volume": "",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anne Catherine Simon and Liesbeth Degand. 2009. On identifying basic discourse units in speech: theoretical and empirical issues. Discours. Revue de linguistique, psycholinguistique et informatique, (4).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Units in Mandarin conversation: Prosody, discourse, and grammar",
"authors": [
{
"first": "Hongyin",
"middle": [],
"last": "Tao",
"suffix": ""
}
],
"year": 1996,
"venue": "",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyin Tao. 1996. Units in Mandarin conversation: Prosody, discourse, and grammar, volume 5. John Benjamins Publishing Company.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Lexical coverage in taiwan mandarin conversation",
"authors": [
{
"first": "S.-C",
"middle": [],
"last": "Tseng",
"suffix": ""
}
],
"year": 2013,
"venue": "International Journal of Computational Linguistics and Chinese Language Processing",
"volume": "1",
"issue": "18",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S.-C. Tseng. 2013. Lexical coverage in taiwan man- darin conversation. International Journal of Computa- tional Linguistics and Chinese Language Processing, 1(18):1-18.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Verbs and times. The philosophical review",
"authors": [
{
"first": "Zeno",
"middle": [],
"last": "Vendler",
"suffix": ""
}
],
"year": 1957,
"venue": "",
"volume": "",
"issue": "",
"pages": "143--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zeno Vendler. 1957. Verbs and times. The philosophical review, pages 143-160.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Labeling chinese predicates with semantic roles",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "2",
"pages": "225--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue. 2008. Labeling chinese predicates with semantic roles. Computational Linguistics, 34(2):225-255.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Contingency table for the French prosodic units",
"num": null
},
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "Fig 2.",
"num": null
},
"FIGREF2": {
"uris": null,
"type_str": "figure",
"text": "Distribution of PU/DU association types",
"num": null
},
"FIGREF3": {
"uris": null,
"type_str": "figure",
"text": "Distribution of PU/DU simplified association types",
"num": null
},
"FIGREF4": {
"uris": null,
"type_str": "figure",
"text": "POS distribution at Initial matching boundaries",
"num": null
},
"FIGREF5": {
"uris": null,
"type_str": "figure",
"text": "POS distribution at Final matching boundaries",
"num": null
},
"FIGREF6": {
"uris": null,
"type_str": "figure",
"text": "Figure 6: Comparison of unit sizes for the TW dataset; Figure 7: Comparison of unit sizes for the FR dataset",
"num": null
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"text": "",
"html": null,
"num": null
},
"TABREF2": {
"content": "<table><tr><td>Description</td><td colspan=\"2\">Tier Name Tier Content</td></tr><tr><td>Syllable</td><td>Syllable</td><td>STRING-UTF8</td></tr><tr><td>Token</td><td>Word</td><td>STRING-UTF8</td></tr><tr><td colspan=\"2\">Part-Of-Speech POS</td><td>STRING-UTF8</td></tr><tr><td>Prosodic Units</td><td>PU</td><td>PU</td></tr><tr><td colspan=\"2\">Discourse Units DU</td><td>{ DU, ADU}</td></tr></table>",
"type_str": "table",
"text": "ble 2. As the POS tagsets are different in both lan-",
"html": null,
"num": null
},
"TABREF3": {
"content": "<table><tr><td>: Contents of the joint dataset</td></tr><tr><td>guages, we established a matching table to make the</td></tr><tr><td>POS information mutually understandable</td></tr></table>",
"type_str": "table",
"text": "",
"html": null,
"num": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"text": "Correspondence of the most frequent POS tags",
"html": null,
"num": null
},
"TABREF7": {
"content": "<table/>",
"type_str": "table",
"text": "Chunk categories created",
"html": null,
"num": null
},
"TABREF9": {
"content": "<table/>",
"type_str": "table",
"text": "Average size of units (in chunks)",
"html": null,
"num": null
}
}
}
}