{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:01:57.055180Z" }, "title": "ILP-based Opinion Sentence Extraction from User Reviews for Question DB Construction", "authors": [ { "first": "Masakatsu", "middle": [], "last": "Hamashita", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Takashi", "middle": [], "last": "Inui", "suffix": "", "affiliation": {}, "email": "inui@cs.tsukuba.ac.jp" }, { "first": "Koji", "middle": [], "last": "Murakami", "suffix": "", "affiliation": {}, "email": "koji.murakami@rakuten.com" }, { "first": "Keiji", "middle": [], "last": "Shinzato", "suffix": "", "affiliation": {}, "email": "keiji.shinzato@rakuten.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Typical systems for analyzing users' opinions from online product reviews have been researched and developed successfully. However, it is still hard to obtain sufficient user opinions when many reviews consist of short messages. This problem can be solved with an active opinion acquisition (AOA) framework that has an interactive interface and can elicit additional opinions from users. In this paper, we propose a method for automatically constructing a question database (QDB) essential for an AOA. In particular, to eliminate noisy sentences, we discuss a model for extracting opinion sentences that is formulated as a maximum coverage problem. Our proposed model has two advantages: (1) excluding redundant questions from a QDB while keeping variations of questions and (2) preferring simple sentence structures suitable for the question generation process. Our experimental results show that the proposed method achieved a precision of 0.88. We also give details on the optimal combination of model parameters.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Typical systems for analyzing users' opinions from online product reviews have been researched and developed successfully. 
However, it is still hard to obtain sufficient user opinions when many reviews consist of short messages. This problem can be solved with an active opinion acquisition (AOA) framework that has an interactive interface and can elicit additional opinions from users. In this paper, we propose a method for automatically constructing a question database (QDB) essential for an AOA. In particular, to eliminate noisy sentences, we discuss a model for extracting opinion sentences that is formulated as a maximum coverage problem. Our proposed model has two advantages: (1) excluding redundant questions from a QDB while keeping variations of questions and (2) preferring simple sentence structures suitable for the question generation process. Our experimental results show that the proposed method achieved a precision of 0.88. We also give details on the optimal combination of model parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Typical systems for analyzing users' opinions from online product reviews have been researched and developed successfully (Liu, 2012; Jo and Oh, 2011; Kouloumpis et al., 2011; Pozzi et al., 2016) . However, it is still hard to obtain sufficient user opinions when many reviews consist of short messages. In this situation, it would be practical to elicit additional opinions by actively asking users questions instead of just waiting for user posts. (* Currently, Gunosy Inc.) 
We define this procedure as an active opinion acquisition (AOA).", "cite_spans": [ { "start": 122, "end": 133, "text": "(Liu, 2012;", "ref_id": "BIBREF7" }, { "start": 134, "end": 150, "text": "Jo and Oh, 2011;", "ref_id": "BIBREF5" }, { "start": 151, "end": 175, "text": "Kouloumpis et al., 2011;", "ref_id": "BIBREF6" }, { "start": 176, "end": 195, "text": "Pozzi et al., 2016)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Consider an example review post consisting of just the one sentence below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "u1 This wine has a really refreshing aroma! It is possible to capture the user opinion \"refreshing aroma\" from u1. Here, in the case of an AOA-oriented system (AOAS), the system asks a question like s1 after u1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "u1 This wine had a really refreshing aroma! s1 How was the aftertaste?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "u2 The aftertaste was bitter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Then, it is also possible to obtain the additional opinion \"bitter aftertaste\" from u2. This example shows that an AOAS can efficiently collect user opinions by asking users questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Here, a question database (QDB), that is, a large set of question examples, is an essential resource for realizing dialogues between a user and an AOAS (Murao et al., 2003) because it would enable an AOAS to ask users precise questions in various situations. Nio and Murakami (2018) proposed a question-conversion method for constructing QDBs automatically. 
This method runs through a machine translation-like architecture and then converts an affirmative sentence to an interrogative form such as:", "cite_spans": [ { "start": 161, "end": 186, "text": "AOAS (Murao et al., 2003)", "ref_id": null }, { "start": 273, "end": 296, "text": "Nio and Murakami (2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The aroma was a bouquet. \u2192 How was the aroma? (s2') How was the aroma? (s6') Was the aftertaste long? QDB Figure 1 : Relationship between sentence extraction and question conversion. Given multiple user reviews, the sentence extraction module is applied for eliminating noisy sentences and then extracted sentences are sent to the question conversion.", "cite_spans": [], "ref_spans": [ { "start": 106, "end": 114, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Note that a relationship holds that the input opinion sentence is the answer to the output question. Nio and Murakami (2018) reported a method that achieves state-of-the-art performance by using a user-review data set prepared purely for evaluation. Unfortunately, however, real review data is very noisy, so measures against such noisy data are required.", "cite_spans": [ { "start": 101, "end": 124, "text": "Nio and Murakami (2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a novel sentence extraction model that eliminates noisy sentences and extracts sentences suitable for question conversion. The proposed model works as a preprocessing module for question conversion, as shown in Figure 1 . Here, note that each sentence to be extracted needs to include opinion(s) like (s2) and (s6). 
Therefore, the proposed model is formulated as a maximum coverage problem of opinions, which makes it possible to exclude sentences including no opinions like (s1) and (s4). Naturally, the formulation also makes it possible to exclude sentences that have redundant content like (s5). Moreover, the basic formulation is extended to exclude sentences having sentence structures that are too complex for question conversion like (s3). The extended model enables us to control the number of opinions in each output sentence in order to extract opinion sentences that have simple structures. Details on the proposed model will be given in Section 3.", "cite_spans": [], "ref_spans": [ { "start": 237, "end": 245, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Through experiments done for evaluation, it is found that the proposed method achieved a precision of 0.880. Furthermore, we revealed the characteristics of the extracted opinion sentences in terms of length and the number of types of opinions. We also give details on the optimal combination of model parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The automatic generation of questions is essential to various applications such as dialog systems and quiz generation in educational E-learning systems. The question generation shared task and evaluation challenge (QGSTEC) is a shared task for automatically generating questions for those applications. In QG-STEC, given a text segment, the goal of a system is to generate questions whose answers are included in the input segment. There have been many successful studies based on QGSTEC (Mannem et al., 2010; Ali et al., 2010; Agarwal et al., 2011) . 
Nevertheless, our final goal is to generate questions that enable an AOAS to elicit user opinions, quite different from QGSTEC.", "cite_spans": [ { "start": 488, "end": 509, "text": "(Mannem et al., 2010;", "ref_id": "BIBREF8" }, { "start": 510, "end": 527, "text": "Ali et al., 2010;", "ref_id": "BIBREF1" }, { "start": 528, "end": 549, "text": "Agarwal et al., 2011)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "QGSTEC", "sec_num": "2.1" }, { "text": "Zhang et al. (2018) proposed a question generation model that uses a neural network. On a news web site, if the headline of an article is a question, the click through rate increases; thus, a question headline is generated by using an encoder-decoder model. This model requires correct answer data because it involves supervised learning. Our study differs from this study in that correct answer data is not required because our study involves unsupervised learning with only reviews and question examples are created instead of question headlines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Question Generation", "sec_num": "2.2" }, { "text": "Sentence extraction has been widely studied as a form of document summarization (Kupiec et al., 1995; Hirao et al., 2002) . Among the methods of extraction proposed so far, integer linear programming (ILP) formulation provides better solutions because of its flexibility and extensibility. Given a set of sentences D = {s 1 , . . . , s N } as an input, ILP-based sentence extraction aims at constructing an appropriate subset S \u2286 D. Here, suppose D is represented by an N-dimensional 0/1 vector y = {y 1 , . . . , y N }. 
When a sentence s i in D is s i \u2208 S, y represents the result of sentence extraction as y i = 1; otherwise,", "cite_spans": [ { "start": 80, "end": 101, "text": "(Kupiec et al., 1995;", "ref_id": null }, { "start": 102, "end": 121, "text": "Hirao et al., 2002)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "ILP-based Sentence Extraction", "sec_num": "2.3" }, { "text": "y i = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ILP-based Sentence Extraction", "sec_num": "2.3" }, { "text": "The most fundamental model of ILP-based sentence extraction is formulated as Figure 2 . ", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 85, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "ILP-based Sentence Extraction", "sec_num": "2.3" }, { "text": "y * = arg max y f (y) s.t. N \u2211 i=1 l i y i \u2264 L max \u2200i, y i \u2208 {0, 1}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ILP-based Sentence Extraction", "sec_num": "2.3" }, { "text": "The maximum coverage model (MCM) is an instance of an ILP-based sentence extraction model, that is known to be suitable for multi-document summarization (Yih et al., 2007) . MCM prefers to create a summary output that has as many varieties of concepts, typically words, as possible. As a result, this model is naturally able to exclude redundant concepts from the output.", "cite_spans": [ { "start": 153, "end": 171, "text": "(Yih et al., 2007)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "Multi-document summarization based on the MCM is formulated as Figure 3 . 
Here, the objective function f mcm (y) is defined as follows:", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 71, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "f mcm (y) = \u03bb \u2211 i r i y i + (1 \u2212 \u03bb) \u2211 k w k z k", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "The w k in f mcm (y) represents the weight of the word k. The r i represents the similarity score between a sentence s i and the entire input documents. The z k is a 0/1 variable that is 1 when a word k is included in an output candidate, and 0 otherwise. Also, o ik in Figure 3 is a constant that is 1 when s i contains k, and 0 otherwise. The model guarantees consistency between y i and z k through the constraint \u2211 i o ik y i \u2265 z k . Nishikawa et al. (2010) proposed a variation of the MCM for multi-document opinion summarization. This model adopts an opinion as the concept e k instead of a word to create a summary that has as many varieties of opinions as possible. The objective function f nishikawa (y) is defined as follows:", "cite_spans": [], "ref_spans": [ { "start": 266, "end": 274, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "y * = arg max y f mcm (y) s.t. N \u2211 i=1 l i y i \u2264 L max \u2200k, \u2211 i o ik y i \u2265 z k \u2200i, y i \u2208 {0, 1} \u2200k, z k \u2208 {0, 1}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "f nishikawa (y) = \u03bb \u2211 k w k z k + (1 \u2212 \u03bb) \u2211 i,j c i,j x i,j (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "The first term is the same as the second term of f mcm (y). In the second term of f nishikawa (y), x i,j is a decision variable that indicates the sentence order, and c i,j is a weight related to the naturalness of the sentence order. This makes it possible to select sentences so that important concepts are included in the summary and to arrange those sentences as naturally as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "This is similar to our model proposed in the next section. However, its focal point is different from ours. The model of (Nishikawa et al., 2010) does not consider how many opinions are included in each sentence in the output, while the proposed model controls the number of opinions in each output sentence in order to extract opinion sentences that have simple structures. The details will be given in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "In this section, we describe our novel sentence-extraction model based on the MCM formulation. Given a set of user review sentences, the model is expected to extract sentences suitable for question conversion, as mentioned in Section 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "Suppose again that, given the six sentences shown in Figure 1 as input, only (s2) and (s6) should be extracted and sent to the question conversion process. Sentences (s1) and (s4) should not be extracted because they include no opinions at all. 
(s3) and (s5) are not worth extracting despite both sentences including opinions. (s5) is redundant because it has almost the same meaning as (s2) (on the contrary, (s2) is redundant if the model outputs (s5)). In addition, (s3) has a sentence structure too complex for question conversion.", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 61, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "From these observations, it was found that each sentence output from the proposed model should satisfy the following requirements. Among these three, the first and third requirements can be achieved by applying a MCM framework, as mentioned in the previous section. In this paper, we propose an extension of the basic MCM to satisfy the second requirement. First, we propose additional constraints to control the number of opinions in each output sentence, and we then describe a novel objective function for estimating how standard an opinion expression is. Figure 4 shows the formulation of the proposed model. Note that an opinion \u27e8a j , e k \u27e9 is assigned as the concept in the MCM framework. Here, a j (\u2208 Q a ) is an aspect word such as \"aftertaste,\" e k (\u2208 Q e ) is a sentiment word such as \"bitter,\" and Q a and Q e represent a pre-defined set of aspect words and sentiment words, respectively. Two constraints, Equations (2) and (3) in Figure 4 , are added to control the number of opinions in an output sentence. A max and E max are constants representing the maximum number of aspect and sentiment words included in an output sentence, respectively. 
The function c a (y i , a j ) in Equation (2) indicates the number of sentences that contain a j in y i and is defined as follows.", "cite_spans": [], "ref_spans": [ { "start": 568, "end": 576, "text": "Figure 4", "ref_id": "FIGREF4" }, { "start": 950, "end": 958, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "y * = arg max y f prop (y) s.t. N \u2211 i=1 l i y i \u2264 L max \u2200i, |Qa| \u2211 j=1 c a (y i , a j ) \u2264 A max (2) \u2200i, |Qe| \u2211 k=1 c e (y i , e k ) \u2264 E max (3) \u2200j, k, N \u2211 i=1 o ijk y i \u2265 z jk (4) \u2200i, y i \u2208 {0, 1} \u2200j, k, z jk \u2208 {0, 1}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "c a (y i , a j ) = N \u2211 i=1 h ij y i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "The h ij takes 1 if a sentence s i contains the aspect word a j and 0 otherwise. Here, y i is a vector for which the i-th element is the same value as that of y, and the others are 0. As a result, c a (y i , a j ) takes 1 if s i contains a j and 0 otherwise, and the function c e (y i , e k ) in Equation (3) is defined similarly to c a (y i , a j ) for sentiment words. The constraint of Equation (4) has the same role as in the original MCM in Figure 3 . It is modified slightly from the original model due to the concept (opinion) structure. Here, z jk is a variable that is 1 when an opinion \u27e8a j , e k \u27e9 is included in the output and 0 otherwise. 
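To make the formulation concrete, the length budget, the per-sentence constraints, and the coverage objective above can be checked by brute force on a toy instance. All data, variable names, and parameter values below are invented for illustration; the actual experiments solve the ILP with a solver.

```python
# Toy brute-force check of the proposed model (illustration only: the data
# below is invented; the real experiments use an ILP solver).
from itertools import combinations

# Each sentence: (length l_i, set of opinions <aspect a_j, sentiment e_k>).
sentences = [
    (5, {("aroma", "refreshing")}),                       # simple, one opinion
    (9, {("aroma", "refreshing"), ("taste", "rich"),
         ("foam", "mild")}),                              # complex: 3 aspects
    (6, {("aftertaste", "bitter")}),                      # simple, one opinion
    (5, {("aroma", "refreshing")}),                       # redundant duplicate
]
w = {("aroma", "refreshing"): 1.0, ("taste", "rich"): 0.8,
     ("foam", "mild"): 0.5, ("aftertaste", "bitter"): 0.9}
L_MAX, A_MAX, E_MAX = 12, 1, 1  # hypothetical parameter values

def feasible(idx):
    if sum(sentences[i][0] for i in idx) > L_MAX:         # length budget
        return False
    for i in idx:
        aspects = {a for a, _ in sentences[i][1]}
        sentiments = {e for _, e in sentences[i][1]}
        if len(aspects) > A_MAX or len(sentiments) > E_MAX:  # constraints (2), (3)
            return False
    return True

def f_prop(idx):
    covered = set()
    for i in idx:
        covered |= sentences[i][1]                        # z_jk via set union
    return sum(w[op] for op in covered)                   # each opinion type counted once

candidates = [idx for r in range(len(sentences) + 1)
              for idx in combinations(range(len(sentences)), r) if feasible(idx)]
best = max(candidates, key=f_prop)
print(best)  # selects the two simple, non-redundant opinion sentences
```

Note how the set union in f_prop gives a redundant duplicate no extra credit, and how the per-sentence aspect/sentiment limits reject the complex sentence, mirroring the roles of the objective and of constraints (2) and (3).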
The objective function f prop (y) for the proposed model is defined as follows.", "cite_spans": [], "ref_spans": [ { "start": 440, "end": 448, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f prop (y) = |Qa| \u2211 j=1 |Qe| \u2211 k=1 w jk z jk", "eq_num": "(5)" } ], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "It is a simplified version of f nishikawa (y). The value of f prop (y) becomes larger when the output includes many different types of opinions. We use only the first term of f nishikawa (y) because our model does not need to consider the order of sentences, unlike (Nishikawa et al., 2010) .", "cite_spans": [ { "start": 249, "end": 273, "text": "(Nishikawa et al., 2010)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "When asking a user a question, the model prefers frequently used standard expressions. Accordingly, the weight w jk of the variable z jk is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w jk = w word jk w syn jk", "eq_num": "(6)" } ], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "Here, w word jk represents the co-occurrence probability of an aspect word a j and a sentiment word e k in an input document. w syn jk represents the average syntactic distance between a j and e k , which increases the weight of syntactically concise opinions in which aspect words and sentiment words appear close to each other. These values are calculated separately from a large review data set. 
Now, we explain how to determine which pairs of aspect words and sentiment words are regarded as opinions in a sentence. Given a sentence S, V a represents a subset of Q a whose elements are aspect words in S. Also, V e represents the corresponding subset of Q e . The opinion \u27e8a j , e k \u27e9 in S is determined immediately when (|V a |, |V e |) = (1, 1), a j \u2208 V a , and e k \u2208 V e . However, we need to discover meaningful word pairs when several aspect words and sentiment words are included in S, such as (|V a |, |V e |) = (2, 3). We solved this problem by performing maximum weight matching on a weighted complete bipartite graph (Korte et al., 2012) , where G(V a \u222a V e , E) is a complete bipartite graph; in other words, every combination of a j and e k in S becomes a candidate opinion. Each candidate \u27e8a j , e k \u27e9 is weighted by Equation (6). Table 1 shows examples of opinions with higher weights that were calculated by using the same data used in the experiments in Section 4.1. Similarly, Table 2 shows the case of lower weights. One can see that plausible opinions are included in Table 1 while meaningless aspect/sentiment word pairs are included in Table 2 . 
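The matching step can be sketched as follows. This is a brute-force toy example with invented weights; for the small |V a | and |V e | that occur in a single sentence, enumerating all assignments of aspect words to distinct sentiment words is sufficient.

```python
# Brute-force maximum weight matching on the complete bipartite graph
# G(V_a ∪ V_e, E) for one toy sentence with |V_a| = 2 and |V_e| = 3.
# The words and weights below are invented for illustration.
from itertools import permutations

V_a = ["aroma", "taste"]
V_e = ["refreshing", "rich", "mild"]
w = {("aroma", "refreshing"): 0.9, ("aroma", "rich"): 0.1, ("aroma", "mild"): 0.2,
     ("taste", "refreshing"): 0.3, ("taste", "rich"): 0.8, ("taste", "mild"): 0.4}

best_score, best_pairs = -1.0, None
# Every assignment of aspect words to distinct sentiment words is a matching
# in the complete bipartite graph; keep the one with the largest total weight.
for assignment in permutations(V_e, len(V_a)):
    pairs = list(zip(V_a, assignment))
    score = sum(w[p] for p in pairs)
    if score > best_score:
        best_score, best_pairs = score, pairs

print(best_pairs)  # the plausible pairs win over meaningless combinations
```

With these toy weights the matching keeps "aroma/refreshing" and "taste/rich" and discards the low-weight cross pairs, which is exactly the behavior Tables 1 and 2 illustrate.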
", "cite_spans": [ { "start": 1015, "end": 1035, "text": "(Korte et al., 2012)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 1236, "end": 1243, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1386, "end": 1393, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 1479, "end": 1486, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1549, "end": 1556, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Maximum Coverage Model", "sec_num": "2.4" }, { "text": "The following two experiments were conducted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Settings", "sec_num": "4.1" }, { "text": "Experiment I: We conducted a series of experiments where combinations of model parameters (A max and E max ) were changed to investigate the relationship between the performance and the parameters of the proposed model. Hereafter, we refer to the proposed model as ILP+C (Amax,Emax) when showing the parameters of the model clearly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment I We conducted a series of experi-", "sec_num": null }, { "text": "Experiment II: We compared a simple version of the ILP-based sentence extraction model, namely ILP-only, with a non-ILP-based method to verify the effectiveness of the ILP-based formulation. ILP-only is equivalent to the proposed model without the additional constraints [Equations (2) and (3)]. Additionally, the proposed model is compared with ILP-only to evaluate the effectiveness of the additional constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment I We conducted a series of experi-", "sec_num": null }, { "text": "We used a set of Japanese user review sentences posted on Rakuten Japan 2 , which is one of the major E-commerce web sites in Japan. First, we crawled the sentences in the wine category and randomly selected 1,000 sentences from 19,160 sentences. Then, two annotators independently judged whether sentences satisfied the requirements shown in Section 3. 
Details on the data set are given in Table 3 . Here, the symbol \"Positive\" indicates that a sentence can be converted into relevant questions, that is, it should be extracted, and \"Negative\" indicates the opposite. Aspect and sentiment indicate the average number of aspect lexicons and sentiment lexicons per sentence, respectively, and length indicates the average number of characters per sentence. Cohen's Kappa, which measures the degree of inter-annotator agreement, was 0.765 (Cohen, 1960) .", "cite_spans": [ { "start": 824, "end": 837, "text": "(Cohen, 1960)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 391, "end": 398, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Experiment I We conducted a series of experi-", "sec_num": null }, { "text": "We handcrafted a set of aspect lexicons Q a and a set of sentiment lexicons Q e by collecting opinions that appeared in the data set for evaluation because no Japanese aspect/sentiment lexicons suitable for our data set exist. As a result, we obtained |Q a | = 81 and |Q e | = 835. Here, we collected only sentiment lexicons with a positive polarity according to the findings of (Hamashita et al., 2018) ; questions used in an AOAS should preferably contain contents with positive polarity.", "cite_spans": [ { "start": 386, "end": 410, "text": "(Hamashita et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment I We conducted a series of experi-", "sec_num": null }, { "text": "In Experiment I, A max and E max in the proposed model were each varied from 1 to 5. The non-ILP-based method used in Experiment II is a weight-based method that extracts sentences with higher weights until the total size of the extracted opinion sentences exceeds L max . The weight of sentence s i is calculated by summing up the weights w jk , defined in Equation (6), of the opinions included in s i . We refer to this method as w/oILP hereafter. 
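A minimal sketch of this baseline (function and data names are ours; the numbers are toy values, not from the experiments):

```python
# Sketch of the w/oILP baseline: take sentences in descending order of their
# summed opinion weights until the total length first exceeds L_max.
# (Toy data; names and numbers are ours, not from the paper.)
def extract_without_ilp(sentences, L_max):
    """sentences: list of (length, summed_opinion_weight); returns indices."""
    order = sorted(range(len(sentences)),
                   key=lambda i: sentences[i][1], reverse=True)
    selected, total = [], 0
    for i in order:
        selected.append(i)
        total += sentences[i][0]
        if total > L_max:      # stop once the budget is exceeded
            break
    return selected

toy = [(5, 1.9), (9, 0.4), (6, 1.2), (7, 0.9)]
print(extract_without_ilp(toy, L_max=12))
```

Unlike the ILP models, this greedy ranking considers each sentence in isolation, so it has no way to penalize redundant opinions shared across the selected sentences.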
For each run of all experiments, the ILP solution was obtained by using Python's PuLP library (Mitchell et al., 2011) , and L max was set so that the summarization rate was 5%.", "cite_spans": [ { "start": 550, "end": 573, "text": "(Mitchell et al., 2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiment I We conducted a series of experi-", "sec_num": null }, { "text": "As the evaluation measures, we used the precision of the extractions, the average length of the extracted sentences (|Sentence|), the number of extracted sentences (#Sentences), and the number of types of opinions included in the extracted sentences (#Opinions).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment I We conducted a series of experi-", "sec_num": null }, { "text": "First, Table 4 and Figure 5 show the results of Experiment I. Here, Figure 5 represents heat maps corresponding to the results for each evaluation measure, where the vertical axis indicates A max and the horizontal axis indicates E max . For each map, the larger the metric value is, the darker the color of a cell is. Table 4 shows the values of the precision measure. It turns out that the precision tended to be large when E max = 1. Notably, the best result of 0.880 was achieved for (A max , E max ) = (4, 1). We found that almost all opinion sentences extracted by ILP+C (4,1) kept a simple sentence structure. 
Examples of the extracted sentences are shown in Figure 6(A) .", "cite_spans": [], "ref_spans": [ { "start": 7, "end": 14, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 19, "end": 27, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 68, "end": 76, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 326, "end": 333, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 673, "end": 684, "text": "Figure 6(A)", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Comparing Figure 5(I) and Figure 5(II), precision and #Sentences show similar tendencies. That is to say, both metric values became larger when E max = 1. Figure 5 (III) shows the inverse tendency of Figure 5 (II). The reason could be that the value of #Sentences multiplied by that of |Sentence| tends to remain constant due to the constraint of L max . Next, it was found in Figure 5 (III) that the sentences extracted by ILP+C (1,1) had a large |Sentence| and also found in Figure 5 (II) that the precision rapidly decreased when (A max , E max ) = (1, 1). Now, we discuss why the precision decreased. Some of the correct and wrong examples extracted by ILP+C (1,1) are shown in Figure 6 (B) and Figure 6(C), respectively. From Figure 6 (B), we can see that the correct examples had short lengths and simple structures similar to those of ILP+C (4,1) , while the wrong examples in Figure 6 (C) tended to be long because they contained useless words. We also observed that the sentences shown in Figure 6 (C) were not extracted when A max increased. From the results, it is expected that inappropriate (long) sentences were over-extracted because there were not enough sentences satisfying the constraints when (A max , E max ) = (1, 1).", "cite_spans": [], "ref_spans": [ { "start": 164, "end": 172, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 217, "end": 225, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 394, "end": 402, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 494, "end": 502, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 699, "end": 707, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 748, "end": 756, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 901, "end": 909, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 1016, "end": 1024, "text": "Figure 6", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "As shown in Equation (5), the objective function tended to return a larger value when there were a variety of opinions in the output sentences. This relationship directly leads to the phenomenon that the larger both A max and E max became, the larger #Opinions became. This corresponds to the results shown in Figure 5 (IV). Here, we note the results for (A max , E max ) = (5, 5). In this case, the precision (0.797) was lower than the best of 0.880 from Table 4 . 
The reason could be that ILP+C (5, 5) attempts to extract sentences that include multiple opinions in order to include as many opinions as possible in the output, as shown in Figure 6(D) .", "cite_spans": [ { "start": 497, "end": 500, "text": "(5,", "ref_id": null }, { "start": 501, "end": 503, "text": "5)", "ref_id": null } ], "ref_spans": [ { "start": 310, "end": 318, "text": "Figure 5", "ref_id": "FIGREF5" }, { "start": 456, "end": 463, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 640, "end": 651, "text": "Figure 6(D)", "ref_id": "FIGREF6" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "Next, Table 5 shows the results of Experiment II. From the table, we found that (1) ILP-only achieved better precision than w/oILP and that (2) the output obtained by ILP-only included a lot of short sentences with a variety of opinions. Therefore, the ILP-based model was verified to be appropriate for our purpose. The precision of ILP-only was 0.803, confirming that the proposed method had a better extraction precision. ILP-only is an extreme case of the proposed model and strictly equivalent to ILP+C (\u221e,\u221e) . Therefore, ILP-only is considered to be a model similar to ILP+C (5,5) . Looking at Table 4 and Table 5 , it can be confirmed that the precisions of ILP-only and ILP+C (5,5) were similar. Finally, we discuss how to estimate the (A max , E max ) that maximizes the precision without seeing the precision itself. We mentioned above that the metric #Sentences varies in the same way as the precision. In addition to this finding, we investigated the correlation coefficients between precision and other metrics of each (A max , E max ) to find a suitable metric that estimates (A max , E max ). The results are shown in Table 6 . Since #Sentences and |Sentence| are approximately inversely proportional, the correlation coefficient with |Sentence| is not included in the table. The function f prop (y * ) was added to the target metrics for the investigation. 
As a consequence, the correlation coefficient between #Sentences and precision was the largest, while the other correlation coefficients were low. From these results, we can conclude that one can select the (A max , E max ) with the largest #Sentences. Under our experimental settings, this strategy yields (A max , E max ) = (5, 1). The precision at (5, 1) is not optimal but is the second largest; thus, we consider that (A max , E max ) can be estimated almost exactly by referring to #Sentences.", "cite_spans": [], "ref_spans": [ { "start": 6, "end": 13, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 600, "end": 607, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 612, "end": 619, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 1101, "end": 1108, "text": "Table 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results", "sec_num": "4.2" }, { "text": "We proposed a novel model for extracting opinion sentences for constructing question DBs. The proposed model was formulated as a maximum coverage problem of opinions. 
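The maximum coverage formulation can be sketched in miniature as follows. This is an illustrative brute-force solver, not the paper's ILP (which is solved with PuLP); the dict keys ('length', 'aspects', 'sentiments', 'opinions') and the interpretation of A max / E max as per-sentence caps on aspect and sentiment expressions are assumptions for the sketch, and the objective is simplified to unweighted opinion coverage, omitting the structural-simplicity term:

```python
from itertools import combinations

def extract_sentences(sentences, l_max, a_max, e_max):
    """Brute-force sketch of maximum-coverage opinion sentence extraction.

    A sentence is eligible only if its aspect/sentiment counts respect
    a_max/e_max; among all subsets fitting the length budget l_max, the
    one covering the most distinct opinions wins.  Brute force is only
    viable for toy inputs; an ILP solver handles realistic sizes.
    """
    eligible = [s for s in sentences
                if len(s["aspects"]) <= a_max and len(s["sentiments"]) <= e_max]
    best, best_cov = [], -1
    for r in range(len(eligible) + 1):
        for subset in combinations(eligible, r):
            if sum(s["length"] for s in subset) > l_max:
                continue
            covered = set()
            for s in subset:
                covered |= s["opinions"]
            if len(covered) > best_cov:
                best_cov, best = len(covered), list(subset)
    return best, best_cov

# Toy example: with (a_max, e_max) = (1, 1) the two short single-opinion
# sentences are chosen; the longer two-opinion sentence is ineligible.
toy = [
    {"length": 10, "aspects": ["aroma"], "sentiments": ["rich"],
     "opinions": {("aroma", "rich")}},
    {"length": 10, "aspects": ["taste"], "sentiments": ["long"],
     "opinions": {("taste", "long")}},
    {"length": 25, "aspects": ["aroma", "taste"], "sentiments": ["rich", "long"],
     "opinions": {("aroma", "rich"), ("taste", "long")}},
]
selected, n_covered = extract_sentences(toy, 20, 1, 1)
```

The per-sentence eligibility filter is what distinguishes this model from a plain summarization-style coverage model: it steers the output toward short, single-opinion sentences that are easier to turn into questions.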
Our model has additional constraints to control the number of opinions in each output sentence and an objective function that prefers opinion sentences with simple structures. From the experimental results, we found that ILP+C (4,1) achieved a precision of 0.88. We also found that one can achieve promising results by selecting the (A max , E max ) with the largest #Sentences. For future work, we need an opinion detection method better suited to our Japanese data set. While we applied a simple dictionary-based detection method in this work, more sophisticated methods (Brody and Elhadad, 2010; He et al., 2018) could be combined with our model. We also plan to develop an AOA system with a QDB constructed with the proposed model and to conduct comprehensive evaluations.", "cite_spans": [ { "start": 783, "end": 808, "text": "(Brody and Elhadad, 2010;", "ref_id": null }, { "start": 809, "end": 825, "text": "He et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://www.rakuten.co.jp", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Automatic question generation using discourse cues", "authors": [ { "first": "", "middle": [], "last": "Agarwal", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Agarwal et al.2011] Manish Agarwal, Rakshit Shah, and Prashanth Mannem. 2011. Automatic question generation using discourse cues. 
In Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications, pages 1-9.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "An unsupervised aspect-sentiment model for online reviews", "authors": [ { "first": "Ali", "middle": [], "last": "", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "804--812", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Ali et al.2010] Husam Ali, Yllias Chali, and Sadid A. Hasan. 2010. Automation of question generation from sentences. In Proceedings of QG2010: The Third Workshop on Question Generation, pages 58-67. [Brody and Elhadad2010] Samuel Brody and Noemie Elhadad. 2010. An unsupervised aspect-sentiment model for online reviews. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 804-812.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A coefficient of agreement for nominal scales. Educational and psychological measurement", "authors": [ { "first": "Jacob", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 1960, "venue": "", "volume": "20", "issue": "", "pages": "37--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Cohen. 1960. A coefficient of agreement for nominal scales. 
Educational and psychological measurement, 20(1):37-46.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Insertion effect of negative affix in question generation for interactive information collection system", "authors": [ { "first": "[", "middle": [], "last": "Hamashita", "suffix": "" } ], "year": 2018, "venue": "The Association for Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Hamashita et al.2018] Masakatsu Hamashita, Takashi Inui, Koji Murakami, and Keiji Shinzato. 2018. Insertion effect of negative affix in question generation for interactive information collection system. (in Japanese). In The Association for Natural Language Processing, 25(25).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Effective attention modeling for aspect-level sentiment classification", "authors": [], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1121--1131", "other_ids": {}, "num": null, "urls": [], "raw_text": "[He et al.2018] Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2018. Effective attention modeling for aspect-level sentiment classification. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1121-1131.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Aspect and sentiment unification model for online review analysis", "authors": [ { "first": "[", "middle": [], "last": "Hirao", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the fourth ACM international conference on Web search and data mining", "volume": "1", "issue": "", "pages": "815--824", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Hirao et al.2002] Tsutomu Hirao, Hideki Isozaki, Eisaku Maeda, and Yuji Matsumoto. 2002. Extracting important sentences with support vector machines. 
In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1-7. [Jo and Oh2011] Yohan Jo and Alice H. Oh. 2011. Aspect and sentiment unification model for online review analysis. In Proceedings of the fourth ACM international conference on Web search and data mining, pages 815-824.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Twitter sentiment analysis: The good the bad and the omg!", "authors": [ { "first": "", "middle": [], "last": "Korte", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 18th annual international ACM SIGIR conference on research and development in information retrieval", "volume": "", "issue": "", "pages": "68--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Korte et al.2012] Bernhard Korte, Jens Vygen, B. Korte, and J. Vygen. 2012. Combinatorial optimization, volume 2. Springer. [Kouloumpis et al.2011] Efthymios Kouloumpis, Theresa Wilson, and Johanna Moore. 2011. Twitter sentiment analysis: The good the bad and the omg! In Fifth International AAAI conference on weblogs and social media. [Kupiec et al.1995] Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the 18th annual international ACM SIGIR conference on research and development in information retrieval, pages 68-73.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Sentiment analysis and opinion mining. Synthesis lectures on human language technologies", "authors": [ { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2012, "venue": "", "volume": "5", "issue": "", "pages": "1--167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bing Liu. 2012. Sentiment analysis and opinion mining. 
Synthesis lectures on human language technologies, 5(1):1-167.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Question generation from paragraphs at upenn: QGSTEC system description", "authors": [ { "first": "[", "middle": [], "last": "Mannem", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the Fourth SIGdial Workshop of discourse and dialogue", "volume": "", "issue": "", "pages": "84--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Mannem et al.2010] Prashanth Mannem, Rashmi Prasad, and Aravind Joshi. 2010. Question generation from paragraphs at upenn: QGSTEC system description. In Proceedings of QG2010: The Third Workshop on Question Generation, pages 84-91. [Mitchell et al.2011] Stuart Mitchell, Michael O'Sullivan, and Iain Dunning. 2011. PuLP: a linear programming toolkit for python. The University of Auckland, Auckland, New Zealand, http://www.optimization-online.org/DB_FILE/2011/09/3178.pdf. [Murao et al.2003] Hiroya Murao, Nobuo Kawaguchi, Shigeki Matsubara, Yukiko Yamaguchi, and Yasuyoshi Inagaki. 2003. Example-based spoken dialogue system using woz system log. In Proceedings of the Fourth SIGdial Workshop of discourse and dialogue.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Intelligence is asking the right question: A study on Japanese question generation", "authors": [ { "first": "Murakami2018] Lasguido", "middle": [], "last": "Nio", "suffix": "" }, { "first": "Koji", "middle": [], "last": "Murakami", "suffix": "" } ], "year": 2018, "venue": "IEEE Spoken Language Technology conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Nio and Murakami2018] Lasguido Nio and Koji Murakami. 2018. Intelligence is asking the right question: A study on Japanese question generation. In IEEE Spoken Language Technology conference. 
", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Opinion summarization with integer linear programming formulation for sentence extraction and ordering", "authors": [ { "first": "Yoshihiro", "middle": [], "last": "Hasegawa", "suffix": "" }, { "first": "Genichiro", "middle": [], "last": "Matsuo", "suffix": "" }, { "first": "", "middle": [], "last": "Kikui", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters", "volume": "", "issue": "", "pages": "910--918", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Nishikawa et al.2010] Hitoshi Nishikawa, Takaaki Hasegawa, Yoshihiro Matsuo, and Genichiro Kikui. 2010. Opinion summarization with integer linear programming formulation for sentence extraction and ordering. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 910-918.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Multidocument summarization by maximizing informative content-words", "authors": [ { "first": "[", "middle": [], "last": "Pozzi", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 27th ACM International Conference on Information and Knowledge Management", "volume": "7", "issue": "", "pages": "617--626", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Pozzi et al.2016] F. Alberto Pozzi, Elisabetta Fersini, Enza Messina, and Bing Liu. 2016. Sentiment analysis in social networks. Morgan Kaufmann. [Yih et al.2007] Wen-tau Yih, Joshua Goodman, Lucy Vanderwende, and Hisami Suzuki. 2007. Multi-document summarization by maximizing informative content-words. In IJCAI, volume 7, pages 1776-1782. [Zhang et al.2018] Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, Huanhuan Cao, and Xueqi Cheng. 2018. Question headline generation for news articles. 
In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 617-626.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Thank you so much. (s2) The aroma had a nice bouquet. (s3) Soft and fresh taste just like the harvest season of a lemon grove in southern Sicily. (s4) The bottle was different from last year. The aroma had a rich bouquet! (s6) The aftertaste was long. (s2) The aroma had a bouquet. (s6) The aftertaste was long.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF1": { "text": "Fundamental model for ILP-based sentence extraction. Here, L max represents the maximum output length, and l i represents the length of a sentence s i . The function f (y) is an objective function that measures the quality of an output candidate y. The model outputs the candidate holding a maximum value of f (y) while satisfying all constraints.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF2": { "text": "Maximum coverage model for multi-document summarization", "uris": null, "num": null, "type_str": "figure" }, "FIGREF3": { "text": "Requirement I: include opinion(s), Requirement II: have a simple sentence structure, and Requirement III: exclude redundant content appearing in other output sentences.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF4": { "text": "Proposed model. It enables control of the number of opinions in each output sentence through additional constraints.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF5": { "text": "Heat map representations corresponding to results for each evaluation measure. For each map, as the metric value becomes larger, the cell becomes darker.", "uris": null, "num": null, "type_str": "figure" }, "FIGREF6": { "text": "Examples of original Japanese sentences and their literal translations into English. The symbols [c] and [w] indicate correct and wrong extractions, respectively. 
Underlined parts indicate aspect words, and doubly underlined parts indicate sentiment words. A pair of aspect and sentiment words with the same Arabic number constitutes an opinion.", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "html": null, "text": "Examples of high weight opinions", "num": null, "type_str": "table", "content": "
\u27e8balance, good\u27e9
\u27e8taste, long\u27e9
\u27e8taste, rich\u27e9
\u27e8aroma, spread\u27e9
\u27e8cost-performance, excellent\u27e9
" }, "TABREF1": { "html": null, "text": "Examples of low weight opinions", "num": null, "type_str": "table", "content": "
\u27e8cork, strong\u27e9
\u27e8taste, hero\u27e9
\u27e8label, soft\u27e9
\u27e8bottle, long\u27e9
\u27e8price, beautiful\u27e9
4 Experiments
" }, "TABREF2": { "html": null, "text": "Data set for evaluation", "num": null, "type_str": "table", "content": "
#Sentences (Positive/Negative) 715 (367/348)
avg. #aspect words per sentence 1.97
avg. #sentiment words per sentence 1.61
avg. sentence length 53.6
" }, "TABREF3": { "html": null, "text": "Precision value for each (A max , E max )", "num": null, "type_str": "table", "content": "
E max
12345
1 .666 .701 .735 .735 .735
2 .821 .810 .794 .774 .782
A max 3 .864 .826 .794 .833 .819
4 .880 .810 .782 .794 .791
5 .868 .794 .785 .794 .797
(I) Precision(II) #Sentences
(III) |Sentence|(IV) #Opinions
" }, "TABREF4": { "html": null, "text": "", "num": null, "type_str": "table", "content": "
: Results of Experiments II
w/oILP ILP-only
Precision.621.803
|Sentence|66.229.1
#Sentences2966
#Opinions47102
" }, "TABREF5": { "html": null, "text": "", "num": null, "type_str": "table", "content": "
: Correlation coefficients
correlation coefficient
#Sentences0.85
#Opinions0.33
f prop (y * )0.42
" }, "TABREF6": { "html": null, "text": "The 1 balance of 3 fruity 2 taste is 1 excellent. The 1 impression of the 2 taste was 1 strong. The 2 tannin was a 1 mild 1 taste. The 1 balance of all elements was 1 exquisite. The 2 taste of 1 soft 1 tannin was 2 widespread.", "num": null, "type_str": "table", "content": "
(A): sentences extracted by ILP+C (4,1)
[c] 3 / [c] 1 2 1 1 1 2 / [c] 2 1 1 / (B): sentences extracted by ILP+C (1,1)
[c]11/ It was 1 very 1 fruity.
[c] . / (C): sentences extracted by ILP+C (1,1) 1 1
[w] 11. / The logo of the 1 bottle label was 1 refreshing light blue and so cool.
[w] 11. / You can drink refreshingly with 1 moderate 1 acidity.
(D): sentences extracted by ILP+C (5,5)
[c] 1 . / [c] 1 1 2 2 1 2 2 33
" } } } }