{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:55:50.824823Z" }, "title": "Personalized Extractive Summarization Using an Ising Machine Towards Real-time Generation of Efficient and Coherent Dialogue Scenarios", "authors": [ { "first": "Hiroaki", "middle": [], "last": "Takatsu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Waseda University", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "takatsu@pcl.cs.waseda.ac.jp" }, { "first": "Takahiro", "middle": [], "last": "Kashikawa", "suffix": "", "affiliation": {}, "email": "kashikawa@jp.fujitsu.com" }, { "first": "Koichi", "middle": [], "last": "Kimura", "suffix": "", "affiliation": {}, "email": "k.kimura@jp.fujitsu.com" }, { "first": "Ryota", "middle": [], "last": "Ando", "suffix": "", "affiliation": {}, "email": "ando@naigaipc.co.jp" }, { "first": "Yoichi", "middle": [], "last": "Matsuyama", "suffix": "", "affiliation": { "laboratory": "", "institution": "Waseda University", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "matsuyama@pcl.cs.waseda.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We propose a personalized dialogue scenario generation system which transmits efficient and coherent information with a real-time extractive summarization method optimized by an Ising machine. The summarization problem is formulated as a quadratic unconstrained binary optimization (QUBO) problem, which extracts sentences that maximize the sum of the user's degree of interest in the sentences of documents, with the discourse structure of each document and the total utterance time as constraints. To evaluate the proposed method, we constructed a news article corpus with annotations of the discourse structure, users' profiles, and interests in sentences and topics. 
The experimental results confirmed that a Digital Annealer, which is a simulated annealing-based Ising machine, can solve our QUBO model in practical time without violating the constraints on this dataset.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We propose a personalized dialogue scenario generation system which transmits efficient and coherent information with a real-time extractive summarization method optimized by an Ising machine. The summarization problem is formulated as a quadratic unconstrained binary optimization (QUBO) problem, which extracts sentences that maximize the sum of the user's degree of interest in the sentences of documents, with the discourse structure of each document and the total utterance time as constraints. To evaluate the proposed method, we constructed a news article corpus with annotations of the discourse structure, users' profiles, and interests in sentences and topics. The experimental results confirmed that a Digital Annealer, which is a simulated annealing-based Ising machine, can solve our QUBO model in practical time without violating the constraints on this dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "As mobile personal assistants and smart speakers become ubiquitous, the demand for dialogue-based media technologies has increased since they allow users to consume a fair amount of information via a dialogue form in daily life situations. Dialogue-based media is more restrictive than textual media. For example, when listening to an ordinary smart speaker, users cannot skip unnecessary information or skim only the necessary information. Thus, it is crucial for future dialogue-based media to extract and efficiently transmit information that the users are particularly interested in without excess or deficiencies. 
In addition, the dialogue scenarios generated based on the extracted information should be coherent to aid proper understanding. Generating such efficient and coherent scenarios personalized for each user generally takes more time as the information source size and the number of target users increase. Moreover, the nature of conversational experiences requires personalization in real time. In this paper, we propose a personalized extractive summarization method formulated as a combinatorial optimization problem to generate efficient and coherent dialogue scenarios and demonstrate that an Ising machine can solve the problem at high speed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As a realistic application of the proposed personalized summarization method for a spoken dialogue system, we consider a news delivery task (Takatsu et al., 2018). This news dialogue system advances the dialogue according to a primary plan to explain the summary of the news article and subsidiary plans to transmit supplementary information through question answering. As long as the user is listening passively, the system transmits the content of the primary plan. The personalized primary plan generation problem can be formulated as follows:", "cite_spans": [ { "start": 140, "end": 162, "text": "(Takatsu et al., 2018)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From N documents with different topics, sentences that may be of interest to the user are extracted based on the discourse structure of each document. 
Then the contents are transmitted by voice within T seconds.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Specifically, this problem can be formulated as an integer linear programming (ILP) problem, which extracts sentences that maximize the sum of the user's degree of interest in the sentences of documents, with the discourse structure of each document and the total utterance time T as constraints. Because this ILP problem is NP-hard, it takes an enormous amount of time to find an optimal solution using the branch-and-cut method (Mitchell, 2002; Padberg and Rinaldi, 1991) as the problem scale becomes large.", "cite_spans": [ { "start": 429, "end": 445, "text": "(Mitchell, 2002;", "ref_id": "BIBREF27" }, { "start": 446, "end": 472, "text": "Padberg and Rinaldi, 1991)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In recent years, non-von Neumann computers called Ising machines have been attracting attention as they can solve combinatorial optimization problems and obtain quasi-optimal solutions instantly (Sao et al., 2019). Ising machines can solve combinatorial optimization problems represented by an Ising model or a quadratic unconstrained binary optimization (QUBO) model (Lucas, 2014; Glover et al., 2019). In this paper, we propose a QUBO model that generates an efficient and coherent personalized summary for each user. 
Additionally, we verify that our QUBO model can be solved by a Digital Annealer (Aramon et al., 2019; Matsubara et al., 2020), which is a simulated annealing-based Ising machine, in practical time without violating the constraints on the constructed dataset.", "cite_spans": [ { "start": 195, "end": 213, "text": "(Sao et al., 2019)", "ref_id": "BIBREF32" }, { "start": 370, "end": 383, "text": "(Lucas, 2014;", "ref_id": "BIBREF21" }, { "start": 384, "end": 404, "text": "Glover et al., 2019)", "ref_id": "BIBREF9" }, { "start": 603, "end": 624, "text": "(Aramon et al., 2019;", "ref_id": "BIBREF0" }, { "start": 625, "end": 648, "text": "Matsubara et al., 2020)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The contributions of this paper are three-fold:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The ILP and QUBO models for personalized summary generation are formulated in terms of efficient and coherent information transmission. \u2022 To evaluate the effectiveness of the proposed method, we construct a Japanese news article corpus with annotations of the discourse structure, users' profiles, and interests in sentences and topics. \u2022 Experiments demonstrate that a Digital Annealer, which is a simulated annealing-based Ising machine, can solve our QUBO model in practical time without violating the constraints.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follows. Section 2 discusses related work. Section 3 describes the discourse structure annotations and the interest data collection. Section 4 details the proposed method. Section 5 describes the Digital Annealer. Section 6 evaluates the performance of the proposed method. 
Section 7 presents conclusions and future prospects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Typical datasets for discourse structure analysis are RST Discourse Treebank (Carlson et al., 2001), Discourse Graphbank (Wolf and Gibson, 2005), and Penn Discourse Treebank (Prasad et al., 2008). RST Discourse Treebank is a dataset constructed based on rhetorical structure theory (Mann and Thompson, 1988). Some studies have annotated discourse relations in Japanese documents. Kaneko and Bekki (2014) annotated the temporal and causal relations for segments obtained by decomposing the sentences of the balanced corpus of contemporary written Japanese (Maekawa et al., 2014) based on segmented discourse representation theory (Asher and Lascarides, 2003). Kawahara et al. (2014) proposed a method to annotate discourse relations for the first three sentences of web documents in various domains using crowdsourcing. They showed that discourse relations can be annotated in many documents over a short amount of time. Kishimoto et al. (2018) confirmed that making improvements such as adding language tests to the annotation criteria of Kawahara et al. (2014) can improve the annotation quality.", "cite_spans": [ { "start": 77, "end": 99, "text": "(Carlson et al., 2001)", "ref_id": "BIBREF5" }, { "start": 122, "end": 145, "text": "(Wolf and Gibson, 2005)", "ref_id": "BIBREF36" }, { "start": 176, "end": 197, "text": "(Prasad et al., 2008)", "ref_id": "BIBREF31" }, { "start": 285, "end": 310, "text": "(Mann and Thompson, 1988)", "ref_id": "BIBREF25" }, { "start": 384, "end": 407, "text": "Kaneko and Bekki (2014)", "ref_id": "BIBREF16" }, { "start": 559, "end": 581, "text": "(Maekawa et al., 2014)", "ref_id": "BIBREF22" }, { "start": 633, "end": 661, "text": "(Asher and Lascarides, 2003)", "ref_id": "BIBREF3" }, { "start": 664, "end": 686, "text": "Kawahara et al. (2014)", "ref_id": "BIBREF17" }, { "start": 925, "end": 948, "text": "Kishimoto et al. (2018)", "ref_id": "BIBREF19" }, { "start": 1044, "end": 1066, "text": "Kawahara et al. (2014)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related work 2.1 Discourse structure corpus", "sec_num": "2" }, { "text": "When applying discourse structure analysis results to tasks such as document summarization (Hirao et al., 2013; Yoshida et al., 2014; Kikuchi et al., 2014; Hirao et al., 2015) or dialogue (Feng et al., 2019), a dependency structure, which directly expresses the parent-child relationship between discourse units, is preferable to a phrase structure such as a rhetorical structure tree. Although methods have been proposed to convert a rhetorical structure tree into a discourse dependency tree (Li et al., 2014; Hirao et al., 2013), the generated trees depend on the conversion algorithm. Yang and Li (2018) proposed a method to manually annotate the dependency structure and discourse relations between elementary discourse units for abstracts of scientific papers, and then constructed SciDTB.", "cite_spans": [ { "start": 91, "end": 111, "text": "(Hirao et al., 2013;", "ref_id": "BIBREF13" }, { "start": 112, "end": 133, "text": "Yoshida et al., 2014;", "ref_id": "BIBREF41" }, { "start": 134, "end": 155, "text": "Kikuchi et al., 2014;", "ref_id": "BIBREF18" }, { "start": 156, "end": 175, "text": "Hirao et al., 2015)", "ref_id": "BIBREF12" }, { "start": 188, "end": 207, "text": "(Feng et al., 2019)", "ref_id": "BIBREF8" }, { "start": 495, "end": 512, "text": "(Li et al., 2014;", "ref_id": "BIBREF20" }, { "start": 513, "end": 532, "text": "Hirao et al., 2013)", "ref_id": "BIBREF13" }, { "start": 593, "end": 611, "text": "Yang and Li (2018)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Related work 2.1 Discourse structure corpus", "sec_num": "2" }, { "text": "In this study, we construct a dataset suitable for building 
summarization or dialogue systems that transmit personalized information while maintaining coherence based on the discourse structure. Experts annotated the inter-sentence dependencies, discourse relations, and chunks, which are highly cohesive sets of sentences, for Japanese news articles. Users' profiles and interests in the sentences and topics of news articles were collected via crowdsourcing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work 2.1 Discourse structure corpus", "sec_num": "2" }, { "text": "As people's interests and preferences diversify, the demand for personalized summarization technology has increased (Sappelli et al., 2018). Summaries are classified as generic or user-focused, based on whether they are specific to a particular user (Mani and Bloedorn, 1998). Unlike generic summaries generated by extracting important information from the text, user-focused summaries are generated based not only on important information but also on the user's interests and preferences. Most user-focused summarization methods rank sentences using a score calculated by considering the user's characteristics and subsequently generate a summary by extracting higher-ranked sentences (D\u00edaz and Gerv\u00e1s, 2007; Yan et al., 2011; Hu et al., 2012). However, such conventional user-focused methods tend to generate incoherent summaries. 
Generic summarization methods, which consider the discourse structure of documents, have been proposed to maintain coherence (Kikuchi et al., 2014; Hirao et al., 2015; Xu et al., 2020).", "cite_spans": [ { "start": 116, "end": 139, "text": "(Sappelli et al., 2018)", "ref_id": "BIBREF33" }, { "start": 251, "end": 276, "text": "(Mani and Bloedorn, 1998)", "ref_id": "BIBREF24" }, { "start": 681, "end": 704, "text": "(D\u00edaz and Gerv\u00e1s, 2007;", "ref_id": "BIBREF7" }, { "start": 705, "end": 722, "text": "Yan et al., 2011;", "ref_id": "BIBREF39" }, { "start": 723, "end": 739, "text": "Hu et al., 2012)", "ref_id": "BIBREF14" }, { "start": 954, "end": 976, "text": "(Kikuchi et al., 2014;", "ref_id": "BIBREF18" }, { "start": 977, "end": 996, "text": "Hirao et al., 2015;", "ref_id": "BIBREF12" }, { "start": 997, "end": 1013, "text": "Xu et al., 2020)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Personalized summarization", "sec_num": "2.2" }, { "text": "To achieve both personalization and coherence simultaneously, we propose ILP and QUBO models to extract sentences based on the user's degree of interest and generate a personalized summary for each user while maintaining coherence based on the discourse structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Personalized summarization", "sec_num": "2.2" }, { "text": "We constructed a news article corpus with annotations of the discourse structure, users' profiles, and interests in sentences and topics. Experts annotated the inter-sentence dependencies, discourse relations, and chunks for the Japanese news articles. Users' profiles and interests in the sentences and topics of news articles were collected via crowdsourcing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "3" }, { "text": "Two web news clipping experts annotated the dependencies, discourse relations, and chunks for 1,200 Japanese news articles. 
Each article contained between 15 and 25 sentences. The articles were divided into six genres: sports, technology, economy, international, society, and local. In each genre, we manually selected 200 articles to minimize topic overlap. The annotation work was performed in the order of dependencies, discourse relations, and chunks. The discourse unit was a sentence, defined as a character string terminated by an ideographic full stop.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse structure dataset", "sec_num": "3.1" }, { "text": "The conditions in which sentence j can be specified as the parent of sentence i are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency annotation", "sec_num": "3.1.1" }, { "text": "\u2022 In the original text, sentence j appears before sentence i. \u2022 The flow of the story is natural when reading from the root node in order according to the tree structure and reading sentence i after sentence j. \u2022 The information from the root node to sentence j is the minimum information necessary to understand sentence i. \u2022 If it is possible to start reading from sentence i, the parent of sentence i is the root node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dependency annotation", "sec_num": "3.1.1" }, { "text": "A discourse relation classifies the type of semantic relationship between the child sentence and the parent sentence. We defined the following as discourse relations: Start, Result, Cause, Background, Correspondence, Contrast, Topic Change, Example, Conclusion, and Supplement. Annotation judgments were made by confirming that both the definition of the discourse relation and the dialogue criterion were met. The dialogue criterion is a judgment based on whether the response is natural according to the discourse relation. 
For example, the annotators checked whether it was appropriate to present the child sentence as an answer to a question asking about the cause, such as \"Why?\", posed after the parent sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discourse relation annotation", "sec_num": "3.1.2" }, { "text": "A chunk is a highly cohesive set of sentences. If a parent sentence should be presented together with a child sentence, the pair is regarded as a chunk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunk annotation", "sec_num": "3.1.3" }, { "text": "A hard chunk occurs when the child sentence provides information essential to understanding the content of the parent sentence. Examples include when the parent sentence contains a comment and the child sentence contains the speaker's information or when a procedure is explained over multiple sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunk annotation", "sec_num": "3.1.3" }, { "text": "A soft chunk occurs when the child sentence is useful to prevent a biased understanding of the content of the parent sentence, although it does not necessarily contain essential information to understand the parent sentence itself. An example is explaining the situation in two countries related to a subject, where the parent sentence contains one explanation and the child sentence contains another.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Chunk annotation", "sec_num": "3.1.3" }, { "text": "Participants were recruited via crowdsourcing. They were asked to answer a profile questionnaire and an interest questionnaire. We used the same 1,200 news articles as in the discourse structure dataset. We collected questionnaire results from 2,507 participants. Each participant received six articles, one from each genre. The six articles were distributed so that the total number of sentences was as even as possible across participants. 
Each article was reviewed by at least 11 participants.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interest dataset", "sec_num": "3.2" }, { "text": "The profile questionnaire collected the following information: gender, age, residential prefecture, occupation type, industry type, hobbies, frequency of checking news (daily, 4-6 days a week, 1-3 days a week, or 0 days a week), typical time of day news is checked (morning, afternoon, early evening, or night), methods to access the news (video, audio, or text), tools used to check the news (TV, newspaper, smartphone, etc.), newspapers, websites, and applications used to check the news (Nihon Keizai Shimbun, LINE NEWS, SNS, etc.), whether a fee was paid to check the news, news genre actively checked (economy, sports, etc.), and the degree of interest in each news genre (not interested at all, not interested, not interested if anything, interested if anything, interested, or very interested).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Profile questionnaire", "sec_num": "3.2.1" }, { "text": "Participants read the text of the news article and indicated their degree of interest in the content of each sentence. Finally, they indicated their degree of interest in the topic of the article. 
The degree of interest was indicated on six levels: 1, not interested at all; 2, not interested; 3, not interested if anything; 4, interested if anything; 5, interested; or 6, very interested.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interest questionnaire", "sec_num": "3.2.2" }, { "text": "We propose an integer linear programming (ILP) model and a quadratic unconstrained binary optimization (QUBO) model for personalized summary generation in terms of efficient and coherent information transmission.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "We considered a summarization problem, which extracts sentences that user u may be interested in from the selected N documents and then transmits them by voice within T seconds. The summary must be of interest to the user, coherent, and not redundant. Therefore, we formulated the summarization problem as an integer linear programming problem in which the objective function is defined by the balance between a high degree of interest in the sentences and a low degree of similarity between the sentences, with the discourse structure as constraints. This is expressed as max. ", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k\u2208D u N iTMaximum summary length (seconds)LMaximum bias in the number of extracting sentences between documentsf k", "html": null, "type_str": "table", "text": "Variable definitions in the interesting sentence extraction method: x_ki, whether sentence s_ki is selected; y_kij, whether both s_ki and s_kj are selected; b_ki^u, degree of user u's interest in s_ki; r_kij, similarity between s_ki and s_kj; t_ki, utterance time of s_ki (seconds)." }, "TABREF2": { "num": null, "content": "
(\u03bb1-\u03bb4 and #iteration are parameters of the Digital Annealer; Coverage, Exclusion rate, EoIT1, and EoIT2 measure the efficiency of information transmission.)
Method | \u03bb1 | \u03bb2 | \u03bb3 | \u03bb4 | #iteration | Coverage | Exclusion rate | EoIT1 | EoIT2 | Processing time (sec)
CPU-CBC (30 threads) | - | - | - | - | - | 0.687 | 0.726 | 0.672 | 0.696 | 18.9
DAU-AM | 10^2 | 10^6 | 10^10 | 10 | 10^3 | 0.638 | 0.612 | 0.584 | 0.593 | 0.0570
DAU-PTM | 10^2 | 10^5 | 10^9 | 10 | 10^3 | 0.656 | 0.637 | 0.608 | 0.627 | 0.245
DAU-PTM | 10^2 | 10^5 | 10^9 | 10 | 10^4 | 0.669 | 0.661 | 0.618 | 0.639 | 1.76
", "html": null, "type_str": "table", "text": "Information transmission efficiency of the summaries (N = 3, T = 270)" }, "TABREF3": { "num": null, "content": "
(\u03bb1-\u03bb4 and #iteration are parameters of the Digital Annealer; Coverage, Exclusion rate, EoIT1, and EoIT2 measure the efficiency of information transmission.)
Method | \u03bb1 | \u03bb2 | \u03bb3 | \u03bb4 | #iteration | Coverage | Exclusion rate | EoIT1 | EoIT2 | Processing time (sec)
CPU-CBC (30 threads) | - | - | - | - | - | 0.638 | 0.667 | 0.639 | 0.651 | 102
DAU-AM | 10^2 | 10^6 | 10^10 | 10 | 10^3 | 0.538 | 0.585 | 0.552 | 0.568 | 0.199
DAU-PTM | 10^2 | 10^5 | 10^9 | 10 | 10^3 | 0.553 | 0.577 | 0.556 | 0.572 | 0.749
DAU-PTM | 10^2 | 10^5 | 10^9 | 10 | 10^4 | 0.570 | 0.591 | 0.565 | 0.580 | 6.44
", "html": null, "type_str": "table", "text": "Information transmission efficiency of the summaries (N = 6, T = 450)" } } } }