{ "paper_id": "S12-1018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:23:57.659509Z" }, "title": "Annotating Preferences in Negotiation Dialogues", "authors": [ { "first": "Ana\u00efs", "middle": [], "last": "Cadilhac", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Toulouse", "location": { "addrLine": "118, route de Narbonne", "postCode": "31062", "settlement": "Toulouse", "country": "France" } }, "email": "cadilhac@irit.fr" }, { "first": "Nicholas", "middle": [], "last": "Asher", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Toulouse", "location": { "addrLine": "118, route de Narbonne", "postCode": "31062", "settlement": "Toulouse", "country": "France" } }, "email": "asher@irit.fr" }, { "first": "Farah", "middle": [], "last": "Benamara", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Toulouse", "location": { "addrLine": "118, route de Narbonne", "postCode": "31062", "settlement": "Toulouse", "country": "France" } }, "email": "benamara@irit.fr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Modeling user preferences is crucial in many real-life problems, ranging from individual and collective decision-making to strategic interactions between agents and game theory. Since agents do not come with their preferences transparently given in advance, we have only two means to determine what they are if we wish to exploit them in reasoning: we can infer them from what an agent says or from his nonlinguistic actions. In this paper, we analyze how to infer preferences from dialogue moves in actual conversations that involve bargaining or negotiation. To this end, we propose a new annotation scheme to study how preferences are linguistically expressed in two different corpus genres. This paper describes the annotation methodology and details the inter-annotator agreement study on each corpus genre. 
Our results show that preferences can be easily annotated by humans.", "pdf_parse": { "paper_id": "S12-1018", "_pdf_hash": "", "abstract": [ { "text": "Modeling user preferences is crucial in many real-life problems, ranging from individual and collective decision-making to strategic interactions between agents and game theory. Since agents do not come with their preferences transparently given in advance, we have only two means to determine what they are if we wish to exploit them in reasoning: we can infer them from what an agent says or from his nonlinguistic actions. In this paper, we analyze how to infer preferences from dialogue moves in actual conversations that involve bargaining or negotiation. To this end, we propose a new annotation scheme to study how preferences are linguistically expressed in two different corpus genres. This paper describes the annotation methodology and details the inter-annotator agreement study on each corpus genre. Our results show that preferences can be easily annotated by humans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Modeling user preferences is crucial in many real-life problems, ranging from individual and collective decision-making (Arora and Allenby, 1999) to strategic interactions between agents (Brainov, 2000) and game theory (Hausman, 2000) . A web-based recommender system can, for example, help a user to identify (among an optimal ranking) the product item that best fits his preferences (Burke, 2000) . 
Modeling preferences can also help to find some compromise or consensus between two or more agents having different goals during a negotiation (Meyer and Foo, 2004) .", "cite_spans": [ { "start": 120, "end": 145, "text": "(Arora and Allenby, 1999)", "ref_id": "BIBREF0" }, { "start": 187, "end": 202, "text": "(Brainov, 2000)", "ref_id": "BIBREF7" }, { "start": 219, "end": 234, "text": "(Hausman, 2000)", "ref_id": "BIBREF14" }, { "start": 385, "end": 398, "text": "(Burke, 2000)", "ref_id": "BIBREF8" }, { "start": 544, "end": 565, "text": "(Meyer and Foo, 2004)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Working with preferences involves three subtasks (Brafman and Domshlak, 2009) : preference acquisition, which extracts preferences from users; preference modeling, where a model of users' preferences is built using a preference representation language; and preference reasoning, which aims at computing the set of optimal outcomes. We focus in this paper on the first task.", "cite_spans": [ { "start": 49, "end": 77, "text": "(Brafman and Domshlak, 2009)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Handling preferences is not easy. First, specifying an ordering over acceptable outcomes is not trivial, especially when multiple aspects of an outcome matter. For instance, choosing a new camera to buy may depend on several criteria (e.g. battery life, weight, etc.); hence, ordering even two outcomes (cameras) can be cognitively difficult because of the need to consider trade-offs and dependencies between the criteria. Second, users often lack complete information about preferences initially. They build a partial description of agents' preferences that typically changes over time. Indeed, users often learn about the domain, each other's preferences and even their own preferences during a decision-making process. 
Since agents don't come with their preferences transparently given in advance, we have only two means to determine what they are if we wish to exploit them in reasoning: we can infer them from what an agent says or from his nonlinguistic actions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we analyze how to infer preferences from dialogue moves in actual conversations that involve bargaining or negotiation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Within the Artificial Intelligence community, preference acquisition from nonlinguistic actions has been performed using a variety of specific tasks, including preference learning (F\u00fcrnkranz and H\u00fcllermeier, 2011) and preference elicitation methods (Chen and Pu, 2004 ) (such as query learning (Blum et al., 2004) , collaborative filtering (Su and Khoshgoftaar, 2009) and qualitative graphical representation of preferences (Boutilier et al., 1997) ). However, these tasks don't occur in actual conversations about negotiation. 
We are interested in how agents learn about preferences from actual conversational turns in real dialogue (Edwards and Barron, 1994) , using NLP techniques.", "cite_spans": [ { "start": 180, "end": 213, "text": "(F\u00fcrnkranz and H\u00fcllermeier, 2011)", "ref_id": null }, { "start": 249, "end": 267, "text": "(Chen and Pu, 2004", "ref_id": "BIBREF10" }, { "start": 294, "end": 313, "text": "(Blum et al., 2004)", "ref_id": null }, { "start": 340, "end": 367, "text": "(Su and Khoshgoftaar, 2009)", "ref_id": "BIBREF17" }, { "start": 424, "end": 448, "text": "(Boutilier et al., 1997)", "ref_id": "BIBREF4" }, { "start": 634, "end": 660, "text": "(Edwards and Barron, 1994)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To this end, we propose a new annotation scheme to study how preferences are linguistically expressed in dialogues. The annotation study is performed on two different corpus genres: the Verbmobil corpus (Wahlster, 2000) and a booking corpus, built by ourselves. This paper describes the annotation methodology and details the inter-annotator agreement study on each corpus genre. Our results show that preferences can be easily annotated by humans.", "cite_spans": [ { "start": 203, "end": 219, "text": "(Wahlster, 2000)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A preference is commonly understood as an ordering by an agent over outcomes, which are understood as actions that the agent can perform or goal states that are the direct result of an action of the agent. For instance, an agent's preferences may be defined over actions like buy a new car or by its end result like have a new car. 
The outcomes over which a preference is defined will depend on the domain or task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What are preferences?", "sec_num": "2.1" }, { "text": "Among these outcomes, some are acceptable for the agent, i.e. the agent is ready to act in such a way as to realize them, and some outcomes are not. Among the acceptable outcomes, the agent will typically prefer some to others. Our aim is not to determine the most preferred outcome of an agent but rather to follow the evolution of their commitments to certain preferences as the dialogue proceeds. To give an example, if an agent proposes to meet on a certain day X and at a certain time Y, we learn that among the agent's acceptable outcomes is a meeting on X at Y, even if this is not his most preferred outcome. We are interested in an ordinal definition of preferences, which consists in imposing a ranking over all (relevant) possible outcomes, and not a cardinal definition, which is based on numerical values that allow comparisons.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What are preferences?", "sec_num": "2.1" }, { "text": "More formally, let \u2126 be a set of possible outcomes. A preference relation, written \u2ab0, is a reflexive and transitive binary relation over elements of \u2126. The preference orderings are not necessarily complete, since some candidates may not be comparable by a given agent. Given the two outcomes o 1 and o 2 , o 1 \u2ab0 o 2 means that outcome o 1 is equally or more preferred to the decision maker than o 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What are preferences?", "sec_num": "2.1" }, { "text": "Strict preference o 1 \u227b o 2 holds iff o 1 \u2ab0 o 2 and not o 2 \u2ab0 o 1 . 
The associated indifference relation is o 1 \u223c o 2 iff o 1 \u2ab0 o 2 and o 2 \u2ab0 o 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "What are preferences?", "sec_num": "2.1" }, { "text": "It is important to distinguish preferences from opinions. While opinions are defined as a point of view, a belief, a sentiment or a judgment that an agent may have about an object or a person, preferences, as we have defined them, involve an ordering on behalf of an agent and thus are relational and comparative. Hence, opinions concern absolute judgments towards objects or persons (positive, negative or neutral), while preferences concern relative judgments towards actions (preferring them or not over others). The following examples illustrate this: (a) expresses a direct positive opinion towards the movie but we do not know if this movie is the most preferred. (b) expresses a comparative opinion between two movies with respect to their shared features (scenarios) (Ganapathibhotla and Liu, 2008) . If actions involving these movies (e.g. seeing them) are clear in the context, such a comparative opinion will imply a preference, ordering the first season scenario over the second. Finally, (c) expresses two preferences, one depending on the other. The first is that the speaker prefers to go to the cinema over other alternative actions; the second is, given that preference, that he wants to see Madagascar 2 over other possible movies. Reasoning about preferences is also distinct from reasoning about opinions. An agent's preferences determine an order over outcomes that predicts how the agent, if he is rational, will act. This is not true for opinions. Opinions have at best an indirect link to action: I may hate what I'm doing, but do it anyway because I prefer that outcome to any of the alternatives.", "cite_spans": [ { "start": 775, "end": 806, "text": "(Ganapathibhotla and Liu, 2008)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Preferences vs. 
opinions", "sec_num": "2.2" }, { "text": "Our data come from two corpora: one already existing, Verbmobil (C V ), and one that we created, Booking (C B ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "The first corpus is composed of 35 dialogues randomly chosen from the existing corpus Verbmobil (Wahlster, 2000) , where two agents discuss when and where to set up a meeting. Here is a typical fragment: \u03c0 1 A: Shall we meet sometime in the next week? \u03c0 2 A: What days are good for you? \u03c0 3 B: I have some free time on almost every day except Fridays. \u03c0 4 B: Fridays are bad. \u03c0 5 B: In fact, I'm busy on Thursday too. \u03c0 6 A: Next week I am out of town Tuesday, Wednesday and Thursday. \u03c0 7 A: So perhaps Monday?", "cite_spans": [ { "start": 96, "end": 112, "text": "(Wahlster, 2000)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "The second corpus was built from various English language learning resources, available on the Web (e.g., www.bbc.co.uk/worldservice/learningenglish). It contains 21 randomly selected dialogues, in which one agent (the customer) calls a service to book a room, a flight, a taxi, etc. Here is a typical fragment: \u03c0 1 A: Northwind Airways, good morning. May I help you? \u03c0 2 B: Yes, do you have any flights to Sydney next Tuesday? \u03c0 3 A: Yes, there's a flight at 16:45 and one at 18:00. \u03c0 4 A: Economy, business class or first class ticket? \u03c0 5 B: Economy, please.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "Our approach to preference acquisition exploits discourse structure and aims to study the impact of discourse for extracting and reasoning on preferences. Cadilhac et al. 
(2011) show how to compute automatically preference representations for a whole stretch of dialogue from the preference representations for elementary discourse units. Our annotation here concentrates on the commitments to preferences expressed in elementary discourse units or EDUs. We analyze how the outcomes and the dependencies between them are linguistically expressed by performing, on each corpus, a two-level annotation. First, we perform a segmentation of the dialogue into EDUs. Second, we annotate the preferences expressed by the EDUs.", "cite_spans": [ { "start": 155, "end": 177, "text": "Cadilhac et al. (2011)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "The examples above show the effects of segmentation. Each EDU is associated with a label \u03c0 i . For Verbmobil, we rely on the already available discourse annotation of Baldridge and Lascarides (2005) . For Booking, the segmentation was made by consensus.", "cite_spans": [ { "start": 167, "end": 198, "text": "Baldridge and Lascarides (2005)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "We detail, in the next section, our preference annotation scheme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "To analyze how preferences are linguistically expressed in each EDU, we must: (1) identify the set \u2126 of outcomes, on which the agent's preferences are expressed, and (2) identify the dependencies between the elements of \u2126 by using a set of specific operators, i.e. identifying the agent's preferences on the stated outcomes. Consider the segment \"Let's meet Thursday or Friday\". 
We have \u2126 = {meet Thursday, meet Friday}, where the outcomes are linked by a disjunction, which means the agent is ready to act to realize either of these outcomes, preferring them equally.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preference annotation scheme", "sec_num": "4" }, { "text": "Within an EDU, preferences can be expressed in different ways. They can be atomic preference statements or complex preference statements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preference annotation scheme", "sec_num": "4" }, { "text": "Atomic preference statements are of the form \"I prefer X\", \"Let's X\", or \"We need X\", where X describes an outcome. X may be a definite noun phrase (\"Monday\", \"next week\", \"almost every day\"), a prepositional phrase (\"at my office\") or a verb phrase (\"to meet\"). They can be expressed within comparatives and/or superlatives (\"a cheaper room\" or \"the cheapest flight\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Atomic preferences", "sec_num": "4.1" }, { "text": "Preferences can also be expressed in an indirect way using questions. Although not all questions entail that their author commits to a preference, in many cases they do. That is, if A asks \"can we meet next week?\", he implicates a preference for meeting. For negative and wh-interrogatives, the implication is even stronger. Expressions of sentiment or politeness can also be used to indirectly introduce preferences. 
In Booking, the segment \"economy please\" indicates the agent's preference for an economy class ticket.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Atomic preferences", "sec_num": "4.1" }, { "text": "EDUs can also express preferences via free-choice modalities; \"I am free on Thursday\" or \"I can meet on Thursday\" tells us that Thursday is a possible day to meet, i.e. it is an acceptable outcome.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Atomic preferences", "sec_num": "4.1" }, { "text": "A negative preference expresses an unacceptable outcome, i.e. what the agent does not prefer. Negative preference can be expressed explicitly with negation words (\"I don't want to meet on Friday\") or inferred from the context (\"I am busy on Monday\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Atomic preferences", "sec_num": "4.1" }, { "text": "While the logical form of an atomic preference statement is something of the form P ref (X), we abbreviate this in the annotation language, using just the outcome expression X to denote that the agent prefers X to the alternatives, i.e. X \u227b not X. If X is an unacceptable outcome, we use the non-boolean operator not to denote that the agent prefers not X to other alternatives, i.e. not X \u227b X. In our Verbmobil annotation, X is typically an NP denoting a time or place; X as an outcome is thus shorthand for meet on X or meet at X. For Booking, X is short for reserve or book X.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Atomic preferences", "sec_num": "4.1" }, { "text": "Preference statements can also be complex, expressing dependencies between outcomes. Borrowing from the language of conditional preference networks or CP-nets (Boutilier et al., 2004) , we recognize that some preferences may depend on another action. 
For instance, given that I have chosen to eat fish, I will prefer to have white wine over red wine, something which we express as eat fish : drink white wine \u227b drink red wine. Among the possible combinations, we find conjunctions, disjunctions and conditionals. We examine these conjunctive, disjunctive and conditional operations over outcomes and suppose a language with non-boolean operators &, \u2228 and \u2192 taking outcome expressions as arguments.", "cite_spans": [ { "start": 159, "end": 183, "text": "(Boutilier et al., 2004)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Complex preferences", "sec_num": "4.2" }, { "text": "With conjunctions of preferences, as in \"Could I have a breakfast and a vegetarian meal?\" or in \"Mondays and Fridays are not good?\", the agent expresses two preferences (respectively over the acceptable outcomes breakfast and vegetarian meal and the unacceptable outcomes not Mondays and not Fridays) that he wants to satisfy, and he prefers to have one of them if he cannot have both. Hence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complex preferences", "sec_num": "4.2" }, { "text": "o 1 & o 2 means o 1 \u227b not o 1 and o 2 \u227b not o 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complex preferences", "sec_num": "4.2" }, { "text": "The semantics of a disjunctive preference is a free choice one. For example, in \"either Monday or Tuesday is fine for me\" or in \"I am free Monday and Tuesday\", the agent states that either Monday or Tuesday is an acceptable outcome and he is indifferent as to the choice between them. 
Hence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complex preferences", "sec_num": "4.2" }, { "text": "o 1 \u2228 o 2 means o 2 : o 1 \u223c not o 1 , not o 2 : o 1 \u227b not o 1 and o 1 : o 2 \u223c not o 2 , not o 1 : o 2 \u227b not o 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complex preferences", "sec_num": "4.2" }, { "text": "Finally, some EDUs express conditionals among preferences. For example, in the sentence \"What about Monday, in the afternoon?\", there are two preferences: one for the day Monday, and, given the Monday preference, one for the time afternoon (of Monday), at least for one syntactic reading of the utterance. Hence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complex preferences", "sec_num": "4.2" }, { "text": "o 1 \u2192 o 2 means o 1 \u227b not o 1 and o 1 : o 2 \u227b not o 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complex preferences", "sec_num": "4.2" }, { "text": "For each EDU, annotators identify how outcomes are expressed and then indicate whether the outcomes are acceptable or not, using the operator not, and how the preferences on these outcomes are linked, using the operators &, \u2228 and \u2192.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Complex preferences", "sec_num": "4.2" }, { "text": "We give below an example of how some EDUs are annotated. o i indicates that o is outcome number i in the EDU; the symbol // is used to separate the two annotation levels, and brackets indicate how outcomes are attached.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example", "sec_num": "4.3" }, { "text": "\u03c0 1 : 1 I got class 2? 
// 1 \u2192 not 2 \u03c0 2 : What about 1, 2 or 3, // 1 \u2192 (2 \u2228 3) \u03c0 3 : 1 should be equipped 2, 3 4, please.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example", "sec_num": "4.3" }, { "text": "// (1 \u2192 2) & (3 \u2192 4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example", "sec_num": "4.3" }, { "text": "In \u03c0 1 , the annotation tells us that we have two outcomes and that the agent prefers outcome 1 over any other alternatives and, given that, he does not prefer outcome 2. In \u03c0 2 , the annotation tells us that the agent prefers to have one of outcome 2 and outcome 3 satisfied given that he prefers outcome 1. In this example, the free choice between outcome 2 and outcome 3 is lexicalized by the coordinating conjunction \"or\". By contrast, \u03c0 3 is a more complex example where there is no discourse marker indicating that the preference operator between the pairs of outcomes 1 and 2 on the one hand, and 3 and 4 on the other, is the conjunctive operator &.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example", "sec_num": "4.3" }, { "text": "Our two corpora (Verbmobil and Booking) were annotated by two annotators using the previously described annotation scheme. We performed an intermediate analysis of agreement and disagreement between the two annotators on two Verbmobil dialogues. Annotators were thus trained only for Verbmobil. The aim is to study to what extent our annotation scheme is genre-dependent. The training allowed each annotator to understand the reason of some annotation choices. After this step, the dialogues of our corpora have been annotated separately, discarding those two dialogues. Table 1 presents some statistics about the annotated data in the gold standard. We compute four inter-annotator agreements: on outcome identification, on outcome acceptance, on outcome attachment and finally on operator identification. 
Table 2 summarizes our results.", "cite_spans": [], "ref_spans": [ { "start": 571, "end": 578, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Inter-annotator agreements", "sec_num": "5" }, { "text": "Two inter-annotator agreements were computed using Cohen's Kappa: one based on an exact matching between two outcome annotations (i.e. their corresponding text spans), and the other based on a lenient match between annotations (i.e. there is an overlap between their text spans as in \"2p.m\" and \"around 2p.m\"). This approach is similar to the one used by Wiebe, Wilson and Cardie (2005) to compute agreement when annotating opinions in news corpora. We obtained an exact agreement of 0.66 and a lenient agreement of 0.85 for both corpus genres.", "cite_spans": [ { "start": 355, "end": 386, "text": "Wiebe, Wilson and Cardie (2005)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Agreements on outcome identification", "sec_num": "5.1" }, { "text": "Table 2 : Inter-annotator agreements for the two corpora. Outcome identification (Kappa): exact 0.66, lenient 0.85 (both corpora); Outcome acceptance (Kappa): C V 0.90, C B 0.95; Outcome attachment (F-measure): C V 93%, C B 82%; Operator identification (Kappa): C V 0.93, C B 0.75.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Agreements on outcome identification", "sec_num": "5.1" }, { "text": "We made the gold standard after discussing cases of disagreement. We observed four cases. 
The first one concerns redundant preferences, which we decided not to keep in the gold standard. In such cases, the second EDU \u03c0 2 does not introduce a new preference, nor does it correct the preferences stated in \u03c0 1 ; rather, the agent just wants to insist by repeating already stated preferences, as in the following example: \u03c0 1 A: Thursday, Friday, and Saturday I am out. \u03c0 2 A: So those days are all out for me,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agreements on outcome identification", "sec_num": "5.1" }, { "text": "The second case of disagreement comes from anaphora, which are often used to introduce new preferences, to make them more precise, or to accept them. Hence, we decided to annotate them in the gold standard. Here is an example: The third case of disagreement concerns preference explanation. We chose not to annotate these expressions in the gold standard because they are used to explain already stated preferences. In the following example, one judge annotated \"from nine to twelve\" as expressions of preferences, while the other did not: Finally, the last case of disagreement comes from preferences that are not directly related to the action of fixing a date to meet but to other actions, such as having lunch, choosing a place to meet, etc. Even though those preferences were often missed by annotators, we decided to keep them, when relevant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agreements on outcome identification", "sec_num": "5.1" }, { "text": "The aim here is to compute the agreement on the not operator, that is, whether an outcome is acceptable, as in \" 1 are good // 1\", or unacceptable, as in \" 1 are not good // not 1\". We get a Cohen's Kappa of 0.9 for Verbmobil and 0.95 for Booking. 
The main case of disagreement concerns anaphoric negations that are inferred from the context, as in \u03c0 2 below, where annotators sometimes fail to consider \"in the morning\" as an unacceptable outcome: \u03c0 1 A: Tuesday is kind of out, \u03c0 2 A: Same reason in the morning The same case of disagreement arises in this example, where \"Monday\" is an unacceptable outcome: \u03c0 1 well, I am, busy 1, // not 1 \u03c0 2 that is 1 // not 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agreements on outcome acceptance", "sec_num": "5.2" }, { "text": "Since this task involves structure building, we compute the agreement using the F-score measure. The agreement was computed on the previously built gold standard once annotators discussed cases of outcome identification disagreements. We compare how each outcome is attached to the others within the same EDU. This agreement concerns EDUs that contain at least three outcomes, that is, 8% of EDUs from Verbmobil and 11% of EDUs from Booking. When comparing annotations for the example \u03c0 1 below, there are three errors, one for outcome 2, one for 3 and one for 4. We obtain an agreement of 93% for Verbmobil and 82% for Booking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agreements on outcome attachment", "sec_num": "5.3" }, { "text": "Finally, we compute the agreements for each pair of outcomes on which annotators agreed about how they are attached.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Agreements on outcome dependencies", "sec_num": "5.4" }, { "text": "In Verbmobil, the most frequently used binary operator is \u2192. Because the main purpose of the agents in this corpus is to schedule an appointment, the preferences expressed by the agents are mainly focused on concepts of time and there are many conditional preferences since it is common that preferences on specific concepts depend on broader temporal concepts. 
For example, preferences on hours are generally conditional on preferences on days. In Booking, there are almost as many & as \u2192 because independent and dependent preferences are more balanced in this corpus. The agents discuss preferences about various criteria that are independent. For example, to book a hotel, the agent expresses his preferences towards the size of the bed (single or double), the quality of the room (smoker or nonsmoker), the presence of certain conveniences (TV, bathtub), the possibility to have breakfast in his room, etc. Within an EDU, such preferences are often expressed in different sentences (compared to Verbmobil where segments' lengths are smaller) which leads annotators to link those preferences with the operator &. Conditionals between preferences hold when decision criteria are dependent. For example, the preference for having a vegetarian meal is conditional on the preference for having lunch. There are also conditionals between temporal concepts, for example, to choose the time of a flight. Table 3 shows the Kappa for each operator on each corpus genre. The Cohen's Kappa, averaged over all the operators, is 0.93 for Verbmobil and 0.75 for Booking. We observe two main cases of disagreement: between \u2228 and &, and between & and \u2192. These cases are more frequent for Booking, mainly because annotators were not trained on this corpus. This is why the Kappa was lower than for Verbmobil. We discuss below these two cases of disagreement.", "cite_spans": [], "ref_spans": [ { "start": 1400, "end": 1407, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Agreements on outcome dependencies", "sec_num": "5.4" }, { "text": "\u2228 and &. The same linguistic realizations do not always lead to the same operator. 
For instance, in \" 1 and 2 are good\" we have 1 \u2228 2, whereas in \" 1 and 2 are not good\" or in \"I would like a 1 and a 2\" we have respectively not 1 & not 2 and 1 & 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "Table 3 : Kappa for each operator on each corpus genre. & : C V 0.90, C B 0.66; \u2228 : C V 0.97, C B 0.89; \u2192 : C V 0.92, C B 0.71.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "The coordinating conjunction \"or\" is a strong predictor for recognizing a disjunction of preferences, at least when the \"or\" is clearly outside of the scope of a negation 1 , as in the examples below (in \u03c0 1 , the negation is part of the wh-question, and not boolean over the preference):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "\u03c0 1 Why don't we 1, or 2 // 1 \u2228 2 \u03c0 2 Would you like 1 or 2? // 1 \u2228 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "The coordinating conjunction \"and\" is also a strong indication, especially when it is used to link two acceptable outcomes that are both of a single type (e.g., day of the week, time of day, place, type of room, etc.) between which an agent wants to choose a single realization. For example, in Verbmobil, agents want to fix a single appointment, so if there is a conjunction \"and\" between two temporal concepts of the same level, it is a disjunction of preferences (see \u03c0 3 below). It is also the case in Booking when an agent wants to book a single plane flight (see \u03c0 4 ). 
\u03c0 3 1 and 2 are good for me // 1 \u2228 2 \u03c0 4 You could 1, 2 and <2pm> 3 // 1 \u2228 (2 \u2228 3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "The acceptability modality distributes across the conjoined NPs to deliver something like \u25c7(meet Monday) \u2227 \u25c7(meet Tuesday) in modal logic (clearly acceptability is an existential rather than universal modality), and as is known from studies of free choice modality (Schulz, 2007) , such a conjunction translates to \u25c7(meet Monday \u2228 meet Tuesday), which expresses our free choice disjunction of preferences, o 1 \u2228 o 2 . On the other hand, when the conjunction \"and\" links two outcomes referring to a single concept that are not acceptable, it gives a conjunction of preferences, as in \u03c0 5 . Once again, thinking in terms of modality is helpful. The \"not acceptable\" modality distributes across the conjunction, which gives something like \u25a1\u00aco 1 \u2227 \u25a1\u00aco 2 (where \u00ac is truth conditional negation); this is equivalent to \u25a1(\u00aco 1 \u2227 \u00aco 2 ), i.e. not o 1 & not o 2, and not equivalent to \u25a1(\u00aco 1 \u2228 \u00aco 2 ), i.e. not o 1 \u2228 not o 2 . The connector \"and\" also involves a conjunction of preferences when it links two independent outcomes that the agent wants to satisfy simultaneously. For example, in \u03c0 6 , the agent wants to book two hotel rooms, and so the outcomes are independent. In \u03c0 7 , the agent expresses his preferences on two different features he wants for the hotel room he is booking. Confusion between & and \u2192. In this case, disagreements are mainly due to the difficulty for annotators to decide whether preferences are dependent or not. 
For example, in \"I have a meeting 1, but I could meet 2\", one annotator put not 1 \u2192 2 meaning that the agent is ready to meet at one o'clock because he can not meet at three, while the other annotated not 1 & 2 meaning that the agent is ready to meet at one o'clock independently of what it will do at three.", "cite_spans": [ { "start": 265, "end": 279, "text": "(Schulz, 2007)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "Some connectors introduce contrast between the preferences expressed in a segment as \"but\", \"although\" and \"unless\". In the annotation, we can model it thanks to the operator \u2192. When it is used between two conflicting values, it represents a correction. Thus, the annotation o 1 \u2192 not o 1 means we need to replace in our model of preferences o 1 o 1 by o 1 o 1 . And vice versa for not o 1 \u2192 o 1 . \u03c0 9 1 is a little full, although there is some possibility, 2 // not 1 \u2192 (1 \u2192 2) \u03c0 10 we're full 1, unless you want 2 // not 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "\u2192 (1 \u2192 2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "However, it is important to note that the coordinating conjunction \"but\" does not always introduce contrast, as in the example below, where it introduces a conjunction of preferences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "\u03c0 11 I am busy 1, but 2, sounds good // not 1 & 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "The subordinating conjunctions \"if\", \"because\" and \"so\" are indications for detecting conditional preferences. 
The preferences in the main clause depend on the preferences in the subordinate clause (if-clause, because-clause, so-clause), as in the examples below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "\u03c0 12 so if we are going to be able to meet 1, it is going have to be 2 // 1 \u2192 2 \u03c0 13 1 I am free, 2, if you want to go for 3 // 3 \u2192 (2 \u2192 1) \u03c0 14 it is going to have to be 1 because, I am busy 2 // not 2 \u2192 1 \u03c0 15 I have a meeting 1, so we could, meet 2, or, 3 // not 1 \u2192 (2 \u2228 3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "Whether or not there are discourse markers between two outcomes, to find the appropriate operator we need to answer two questions: does the agent want to satisfy the two outcomes at the same time? Are the preferences on the outcomes dependent or independent?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "We have shown in this section that it is difficult to answer the second question, and there is considerable ambiguity between the operators & and \u2192. This ambiguity can be explained by the fact that both operators model the same optimal preference. Indeed, we saw in section 4.2 that for two outcomes o 1 and o 2 linked by a conjunction of preferences (o 1 & o 2 ), we have o 1 \u227b not o 1 and o 2 \u227b not o 2 . For two outcomes o 1 and o 2 where o 2 is linked to o 1 by a conditional preference (o 1 \u2192 o 2 ), we have o 1 \u227b not o 1 and o 1 : o 2 \u227b not o 2 . 
In both cases, the best possible world for the agent is the one where o 1 and o 2 are both satisfied at the same time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Confusion between", "sec_num": null }, { "text": "In this paper, we proposed a linguistic approach to preference acquisition that aims to infer preferences from dialogue moves in actual conversations that involve bargaining or negotiation. We studied how preferences are linguistically expressed in elementary discourse units on two different corpus genres: one already available, the Verbmobil corpus, and the Booking corpus, built specifically for this project. Annotators were trained only for Verbmobil. The aim is to study to what extent our annotation scheme is genre-dependent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Our preference annotation scheme requires two steps: identify the set of acceptable and non-acceptable outcomes on which the agents' preferences are expressed, and then identify the dependencies between these outcomes by using a set of specific non-boolean operators expressing conjunctions, disjunctions and conditionals. The inter-annotator agreement study shows good results on each corpus genre for outcome identification, outcome acceptance and outcome attachment. The results for outcome dependencies are also good, but they are better for Verbmobil. The difficulties concern the confusion between disjunctions and conjunctions, mainly because the same linguistic realizations do not always lead to the same operator. In addition, annotators often find it difficult to decide whether the preferences on the outcomes are dependent or independent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "This work shows that preference acquisition from linguistic actions is feasible for humans. 
The next step is to automate the process of preference extraction using NLP methods. We plan to do this using a hybrid approach combining machine learning techniques (for outcome extraction and outcome acceptance) and rule-based approaches (for outcome attachment and outcome dependencies).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "When there is a propositional negation over the disjunction, as in \"I don't want sheep or wheat\", which occurs frequently in a corpus in preparation, we no longer have a disjunction of preferences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "\u03c0 8 I have class 1, but, 2 I am free. // not 1 \u2192 (1 \u2192 2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Measuring the influence of individual preference structures in group decision making", "authors": [ { "first": "Neeraj", "middle": [], "last": "Arora", "suffix": "" }, { "first": "Greg", "middle": [ "M" ], "last": "Allenby", "suffix": "" } ], "year": 1999, "venue": "Journal of Marketing Research", "volume": "36", "issue": "", "pages": "476--487", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neeraj Arora and Greg M. Allenby. 1999. Measuring the influence of individual preference structures in group decision making. Journal of Marketing Research, 36:476-487.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Annotating discourse structures for robust semantic interpretation", "authors": [ { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 6th IWCS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Baldridge and Alex Lascarides. 
2005. Annotating discourse structures for robust semantic interpretation. In Proceedings of the 6th IWCS.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Preference elicitation and query learning", "authors": [ { "first": "Martin", "middle": [], "last": "Zinkevich", "suffix": "" } ], "year": 2004, "venue": "Journal of Machine Learning Research", "volume": "5", "issue": "", "pages": "649--667", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Zinkevich. 2004. Preference elicitation and query learning. Journal of Machine Learning Research, 5:649-667.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A constraint-based approach to preference elicitation and decision making", "authors": [ { "first": "Craig", "middle": [], "last": "Boutilier", "suffix": "" }, { "first": "Ronen", "middle": [], "last": "Brafman", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Geib", "suffix": "" }, { "first": "David", "middle": [], "last": "Poole", "suffix": "" } ], "year": 1997, "venue": "AAAI Spring Symposium on Qualitative Decision Theory", "volume": "", "issue": "", "pages": "19--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "Craig Boutilier, Ronen Brafman, Chris Geib, and David Poole. 1997. A constraint-based approach to preference elicitation and decision making. 
In AAAI Spring Symposium on Qualitative Decision Theory, pages 19-28.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Cp-nets: A tool for representing and reasoning with conditional ceteris paribus preference statements", "authors": [ { "first": "Craig", "middle": [], "last": "Boutilier", "suffix": "" }, { "first": "Craig", "middle": [], "last": "Brafman", "suffix": "" }, { "first": "Carmel", "middle": [], "last": "Domshlak", "suffix": "" }, { "first": "H", "middle": [], "last": "Holger", "suffix": "" }, { "first": "David", "middle": [], "last": "Hoos", "suffix": "" }, { "first": "", "middle": [], "last": "Poole", "suffix": "" } ], "year": 2004, "venue": "Journal of Artificial Intelligence Research", "volume": "21", "issue": "", "pages": "135--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Craig Boutilier, Ronen I. Brafman, Carmel Domshlak, Holger H. Hoos, and David Poole. 2004. Cp-nets: A tool for representing and reasoning with conditional ceteris paribus preference statements. Journal of Artificial Intelligence Research, 21:135-191.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Preference handling -an introductory tutorial. AI Magazine", "authors": [ { "first": "Ronen", "middle": [ "I" ], "last": "Brafman", "suffix": "" }, { "first": "Carmel", "middle": [], "last": "Domshlak", "suffix": "" } ], "year": 2009, "venue": "", "volume": "30", "issue": "", "pages": "58--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronen I. Brafman and Carmel Domshlak. 2009. Preference handling - an introductory tutorial. AI Magazine, 30(1):58-86.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The role and the impact of preferences on multiagent interaction", "authors": [ { "first": "Sviatoslav", "middle": [], "last": "Brainov", "suffix": "" } ], "year": 2000, "venue": "Proceedings of ATAL", "volume": "", "issue": "", "pages": "349--363", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sviatoslav Brainov. 2000. 
The role and the impact of preferences on multiagent interaction. In Proceedings of ATAL, pages 349-363. Springer-Verlag.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Knowledge-based recommender systems", "authors": [ { "first": "Robin", "middle": [], "last": "Burke", "suffix": "" } ], "year": 2000, "venue": "Encyclopedia of Library and Information Science", "volume": "69", "issue": "", "pages": "180--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robin Burke. 2000. Knowledge-based recommender systems. In Encyclopedia of Library and Information Science, volume 69, pages 180-200. Marcel Dekker.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Commitments to preferences in dialogue", "authors": [ { "first": "Ana\u00efs", "middle": [], "last": "Cadilhac", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Asher", "suffix": "" }, { "first": "Farah", "middle": [], "last": "Benamara", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" } ], "year": 2011, "venue": "Proceedings of SIGDIAL", "volume": "", "issue": "", "pages": "204--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ana\u00efs Cadilhac, Nicholas Asher, Farah Benamara, and Alex Lascarides. 2011. Commitments to preferences in dialogue. In Proceedings of SIGDIAL, pages 204-215. ACL.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Survey of preference elicitation methods", "authors": [ { "first": "Li", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Pearl", "middle": [], "last": "Pu", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li Chen and Pearl Pu. 2004. Survey of preference elicitation methods. 
Technical report.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Smarts and smarter: Improved simple methods for multiattribute utility measurement", "authors": [ { "first": "Ward", "middle": [], "last": "Edwards", "suffix": "" }, { "first": "F. Hutton", "middle": [], "last": "Barron", "suffix": "" } ], "year": 1994, "venue": "Organizational Behavior and Human Decision Processes", "volume": "60", "issue": "3", "pages": "306--325", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ward Edwards and F. Hutton Barron. 1994. Smarts and smarter: Improved simple methods for multiattribute utility measurement. Organizational Behavior and Human Decision Processes, 60(3):306-325.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "2011. Preference Learning", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johannes F\u00fcrnkranz and Eyke H\u00fcllermeier, editors. 2011. Preference Learning. Springer.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Mining opinions in comparative sentences", "authors": [ { "first": "Murthy", "middle": [], "last": "Ganapathibhotla", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "241--248", "other_ids": {}, "num": null, "urls": [], "raw_text": "Murthy Ganapathibhotla and Bing Liu. 2008. Mining opinions in comparative sentences. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING '08, pages 241-248, Stroudsburg, PA, USA. 
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Revealed preference, belief, and game theory", "authors": [ { "first": "M", "middle": [], "last": "Daniel", "suffix": "" }, { "first": "", "middle": [], "last": "Hausman", "suffix": "" } ], "year": 2000, "venue": "Economics and Philosophy", "volume": "16", "issue": "01", "pages": "99--115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel M. Hausman. 2000. Revealed preference, belief, and game theory. Economics and Philosophy, 16(01):99-115.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Logical foundations of negotiation: Strategies and preferences", "authors": [ { "first": "Thomas", "middle": [], "last": "Meyer", "suffix": "" }, { "first": "Norman", "middle": [], "last": "Foo", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Ninth International Conference on Principles of Knowledge Representation and Reasoning (KR04)", "volume": "", "issue": "", "pages": "311--318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Meyer and Norman Foo. 2004. Logical foundations of negotiation: Strategies and preferences. In Proceedings of the Ninth International Conference on Principles of Knowledge Representation and Reasoning (KR04), pages 311-318.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Minimal Models in Semantics and Pragmatics: Free Choice, Exhaustivity, and Conditionals", "authors": [ { "first": "Katrin", "middle": [], "last": "Schulz", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrin Schulz. 2007. Minimal Models in Semantics and Pragmatics: Free Choice, Exhaustivity, and Conditionals. 
PhD thesis, ILLC.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A survey of collaborative filtering techniques", "authors": [ { "first": "Xiaoyuan", "middle": [], "last": "Su", "suffix": "" }, { "first": "Taghi", "middle": [ "M" ], "last": "Khoshgoftaar", "suffix": "" } ], "year": 2009, "venue": "Advances in Artificial Intelligence", "volume": "", "issue": "", "pages": "1--20", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoyuan Su and Taghi M. Khoshgoftaar. 2009. A survey of collaborative filtering techniques. Advances in Artificial Intelligence, 2009:1-20.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Verbmobil: Foundations of Speech-to-Speech Translation", "authors": [], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wolfgang Wahlster, editor. 2000. Verbmobil: Foundations of Speech-to-Speech Translation. Springer.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Annotating expressions of opinions and emotions in language", "authors": [ { "first": "Janyce", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "Theresa", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2005, "venue": "Language Resources and Evaluation", "volume": "39", "issue": "2-3", "pages": "165--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2-3):165-210.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "(a) The movie is not bad. (b) The scenario of the first season is better than the second one. (c) I would like to go to the cinema. Let's go and see Madagascar 2.", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "\u03c0 1 A: One p.m. on the seventeenth? 
\u03c0 2 B: That sounds fantastic.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "\u03c0 1 A: Monday is really not good, \u03c0 2 A: I have got class from nine to twelve.", "uris": null, "num": null }, "FIGREF3": { "type_str": "figure", "text": " 1 the only days I have open are 2 or 3 4. \u2022 Annotation 1 : 1 \u2192 (2 (3 \u2192 4)) \u2022 Annotation 2 : 1 \u2192 ((2 3) \u2192 4)", "uris": null, "num": null }, "FIGREF4": { "type_str": "figure", "text": " 1, and 2 are, booked up // not 1 & not 2 \u03c0 6 Can I have one room< with balcony> 1 and 2? // 1 & 2 \u03c0 7 1 and 2 // 1 & 2", "uris": null, "num": null }, "TABREF1": { "type_str": "table", "html": null, "content": "", "num": null, "text": "Statistics for the two corpora." }, "TABREF2": { "type_str": "table", "html": null, "content": "
", "num": null, "text": "Agreements on binary operators." } } } }