{
"paper_id": "Y13-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:32:16.528756Z"
},
"title": "An Abstract Generation System for Social Scientific Papers",
"authors": [
{
"first": "Michio",
"middle": [],
"last": "Kaneko",
"suffix": "",
"affiliation": {},
"email": "m-kaneko@chs.nihon-u.ac.jp"
},
{
"first": "Dongli",
"middle": [],
"last": "Han",
"suffix": "",
"affiliation": {},
"email": "han@chs.nihon-u.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "s are quite useful when one is trying to understand the content of a paper, or conducting a survey with a large number of scientific documents. The situation is even clearer for the domain of social science, as most papers are very long and some of them don't even have any abstracts at all. In this work, we narrow our attention down to the social scientific papers and try to generate their abstracts automatically. Specifically, we put weight on three points: important keywords, readability as an abstract, and features of social scientific papers. Experimental results show the effectiveness of our method, whereas some problems remain and will need to be solved in the future. <Adverb Lexicon>: created from (Nitta, 2002) containing adverbs describing degrees (like emphasis). <Sentence-End Expression Lexicon>: extracted from (Morita and Matuki, 1989) containing all expressions functioning similarly to auxiliary verbs in Japanese. <Conjunction Transformation Lexicon>:",
"pdf_parse": {
"paper_id": "Y13-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "s are quite useful when one is trying to understand the content of a paper, or conducting a survey with a large number of scientific documents. The situation is even clearer for the domain of social science, as most papers are very long and some of them don't even have any abstracts at all. In this work, we narrow our attention down to the social scientific papers and try to generate their abstracts automatically. Specifically, we put weight on three points: important keywords, readability as an abstract, and features of social scientific papers. Experimental results show the effectiveness of our method, whereas some problems remain and will need to be solved in the future. <Adverb Lexicon>: created from (Nitta, 2002) containing adverbs describing degrees (like emphasis). <Sentence-End Expression Lexicon>: extracted from (Morita and Matuki, 1989) containing all expressions functioning similarly to auxiliary verbs in Japanese. <Conjunction Transformation Lexicon>:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Abstracts are expected to help readers who are trying to understand the outline of a paper, or conducting a survey with a large number of scientific documents. The situation is even clearer for the domain of social science, as most papers in this area tend to be very long and some of them don't even have any abstracts at all.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There have been many methods proposed for Japanese summarization (e.g., Ochitani et al. 1997; Hatakeyama et al., 2002; Mikami et al., 1999; Ohtake et al., 1999; Hatayama et al., 2002; Tomita et al., 2009; Fukushima et al. 2011) . However, most existing proposals are made towards general text summarization instead of abstract generation for scientific papers. Here, it is important to distinguish between a summary and an abstract. According to a Japanese dictionary, an abstract contains the most important stuffs or the important matter that has been stated in a document, and a summary is a short text transformed from a long text containing all the important points in the original text (Umesao et al., 1995) .",
"cite_spans": [
{
"start": 72,
"end": 93,
"text": "Ochitani et al. 1997;",
"ref_id": "BIBREF8"
},
{
"start": 94,
"end": 118,
"text": "Hatakeyama et al., 2002;",
"ref_id": null
},
{
"start": 119,
"end": 139,
"text": "Mikami et al., 1999;",
"ref_id": "BIBREF4"
},
{
"start": 140,
"end": 160,
"text": "Ohtake et al., 1999;",
"ref_id": "BIBREF9"
},
{
"start": 161,
"end": 183,
"text": "Hatayama et al., 2002;",
"ref_id": "BIBREF2"
},
{
"start": 184,
"end": 204,
"text": "Tomita et al., 2009;",
"ref_id": "BIBREF10"
},
{
"start": 205,
"end": 227,
"text": "Fukushima et al. 2011)",
"ref_id": "BIBREF1"
},
{
"start": 692,
"end": 713,
"text": "(Umesao et al., 1995)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With the difference between summaries and abstracts in mind, we attempt to propose a new method to generate abstracts for social scientific papers in this paper. Specifically, we put weight on three points: important keywords, readability as an abstract, and features of social scientific papers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we first describe our proposal in Section 2, 3, 4 and 5. Specifically, Section 2 gives a brief introduction on the necessary language resources for the development of the subsequent modules. Section 3, 4 and 5 describe the sentence processing, importance degree estimation, and abstract generation respectively. Finally, we discuss some experiments conducted to evaluate the effectiveness of our approach in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, in order to perform textual analysis and importance degree estimation for words or phrases, we create the following five lexicon-files beforehand.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Necessary Language Resources",
"sec_num": "2"
},
{
"text": "containing the corresponding relations between conjunctive particles and conjunctions. <Indispensable-case Lexicon>: generated from EDR 1 containing all the necessary cases of predicates. <Conjunction Lexicon>: containing the conjunctions used to expand one affair to multiple affairs, and the copulative conjunctions used to connect two affairs in Japanese as shown in Figure 1 (Ichikawa, 1978) . Moreover, we have created three lexicons specialized in social science. The first one is a called Keyword Dictionary containing the words extracted from two sociological dictionaries (Uchida et al., 2001; Imamura, 1988) .",
"cite_spans": [
{
"start": 380,
"end": 396,
"text": "(Ichikawa, 1978)",
"ref_id": "BIBREF3"
},
{
"start": 582,
"end": 603,
"text": "(Uchida et al., 2001;",
"ref_id": "BIBREF11"
},
{
"start": 604,
"end": 618,
"text": "Imamura, 1988)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 370,
"end": 379,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Necessary Language Resources",
"sec_num": "2"
},
{
"text": "The second lexicon is the Noun-phrase Dictionary. Based on the idea that noun phrases play important roles in sentences (Minami, 1974) , we extract five kinds of noun phrases from a social scientific literature database according to the following definitions \uf0d8 Expressions ending with the continuous form of a nominalized verb \uf0d8 Nominalized verb + \"\u30ab\u30bf\", \"\u30d6\u30ea\" (\"\u30c3 \u30d7\u30ea\"), \"\u30e8\u30a6\", \"\u30d0\", \"\u30d0\u30b7\u30e7\", \"\u30c8\u30b3\u30ed \" (\"\u30c9\u30b3\u30ed\"), \"\u30c8\u30ad\" (\"\u30c9\u30ad\"), etc. \uf0d8 Adjective + \"\u30b5\" \uf0d8 Noun + Noun. \uf0d8 Adnominal form of an inflectable word + noun",
"cite_spans": [
{
"start": 120,
"end": 134,
"text": "(Minami, 1974)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Necessary Language Resources",
"sec_num": "2"
},
{
"text": "The social scientific literature database we have created in advance is composed of 221 social scientific papers obtained from the Web containing 63,056 sentences. The third lexicon, Mutual-information Table, is also generated from the scientific literature database. It contains mutual information between nouns appearing in each literature. Mutual 1 http://www2.nict.go.jp/out-promotion/ techtransfer/EDR/J_index.html information between nouns is calculated with Formula 1 (Church , 1990) .",
"cite_spans": [
{
"start": 475,
"end": 490,
"text": "(Church , 1990)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 202,
"end": 208,
"text": "Table,",
"ref_id": null
}
],
"eq_spans": [],
"section": "Necessary Language Resources",
"sec_num": "2"
},
{
"text": ") ( ) ( ) , ( log ) , ( B P A P B A P B A X =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Necessary Language Resources",
"sec_num": "2"
},
{
"text": "(1) P(A) and P(B) in Formula 1 indicate the occurrence probability of noun A and noun B respectively, and P(A,B) indicates the cooccurrence probability of noun A and noun B in the same sentence of the database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Necessary Language Resources",
"sec_num": "2"
},
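As a rough illustration of Formula 1, sentence-level pointwise mutual information can be computed as sketched below. This is not the authors' implementation; the function and variable names are our own, and we assume sentences have already been reduced to lists of nouns.

```python
import math
from itertools import combinations

def mutual_information(sentences):
    """Pointwise mutual information between noun pairs (Formula 1).

    `sentences` is a list of noun lists, one list per sentence.
    P(A) is estimated as the fraction of sentences containing A,
    P(A, B) as the fraction of sentences containing both A and B.
    """
    n = len(sentences)
    occ = {}   # noun -> number of sentences containing it
    cooc = {}  # (noun_a, noun_b) -> number of sentences containing both
    for nouns in sentences:
        uniq = set(nouns)
        for a in uniq:
            occ[a] = occ.get(a, 0) + 1
        for a, b in combinations(sorted(uniq), 2):
            cooc[(a, b)] = cooc.get((a, b), 0) + 1
    table = {}
    for (a, b), c in cooc.items():
        p_ab = c / n
        p_a = occ[a] / n
        p_b = occ[b] / n
        table[(a, b)] = math.log(p_ab / (p_a * p_b))
    return table
```

A pair that co-occurs more often than its marginals predict gets a positive score; independence gives a score near zero.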
{
"text": "After conducting a morphological analysis on the input social scientific paper, we execute a series of processing on each sentence of the paper: keyword extraction, parenthesis processing, third-person sentence removing, sentence segmentation, and sentence-information assignment. Here, we describe them in each subsection respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Processing",
"sec_num": "3"
},
{
"text": "Keywords are extracted for subsequent importance degree estimation. Here, words and phrases are extracted from the paper as Keywords if they also appear in the Keyword Dictionary. Similarly, the noun-phrases matching the Nounphrase Dictionary are extracted as Fkeywords. Another sort of keyword is called Nkeywords, which stands for common noun or compound noun, and has been extracted during the morphological analysis using Mecab 2 , a free Japanese morphological analyzer. Meanwhile, the occurrence frequency of each extracted keyword and the place it appears (i.e., the number of paragraph it appears in) are also recorded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction",
"sec_num": "3.1"
},
{
"text": "Generally, texts enclosed in round parentheses tend to act as supplement or modification to the texts prior to it. Therefore, round parentheses could be simply removed without influencing the basic meaning of the original texts in most cases. However, there is one exception. When the texts contained in the round parentheses are less than 15 characters, they will be extracted as another sort of keyword, Tkeywords. Here, the number 15 indicates the maximum keyword-length in the Keyword Dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parenthesis Processing",
"sec_num": "3.2"
},
{
"text": "One of our goals in this work is to extract the text that expresses the author's opinions most directly and correctly. For this reason, we consider that sentences holding third-person subjects are inappropriate to appear in the final abstract. Sentences fulfilling the following conditions are recognized automatically as thirdperson subject sentences, and excluded from final sentence candidates for abstract generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Third-person Sentence Removing",
"sec_num": "3.3"
},
{
"text": "\uf0d8 sentences containing either \"\u306f\" or \"\u304c\", and the previous morpheme being a proper personal name. \uf0d8 sentences containing either \" \u306f \" or \" \u304c \", the previous morpheme being a suffix, and the morpheme prior to the suffix being a personal name. \uf0d8 sentences containing either \"\u306f\" or \"\u304c\", and the previous morpheme being a thirdperson pronoun such as \"\u5f7c\" (he) or \" \u5f7c\u5973\" (her).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Third-person Sentence Removing",
"sec_num": "3.3"
},
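The three removal conditions above can be sketched as a single check over a morpheme sequence. This is a hypothetical helper, not the authors' code: the `(surface, pos)` pair format and the POS labels `"proper-name"`, `"suffix"` are illustrative assumptions about the morphological analyzer's output.

```python
def is_third_person(morphemes):
    """Sketch of Section 3.3: flag a sentence whose topic/subject marker
    ("は" or "が") follows a personal name, a name + suffix, or a
    third-person pronoun. `morphemes` is a list of (surface, pos) pairs;
    the POS labels used here are illustrative, not MeCab's real tag set.
    """
    pronouns = {"彼", "彼女"}  # he, she
    for i, (surface, _pos) in enumerate(morphemes):
        if surface in ("は", "が") and i > 0:
            prev_surface, prev_pos = morphemes[i - 1]
            # Condition 1: particle preceded by a proper personal name
            # Condition 3: particle preceded by a third-person pronoun
            if prev_pos == "proper-name" or prev_surface in pronouns:
                return True
            # Condition 2: particle preceded by suffix, preceded by a name
            if prev_pos == "suffix" and i > 1 and morphemes[i - 2][1] == "proper-name":
                return True
    return False
```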
{
"text": "Social scientific papers in Japanese often contain long sentences. In most cases, only one part of the sentence is important and expected to be included into the final abstract, whereas the rest part might be unnecessary and redundant. Along this idea, we segment long sentences in accordance with the rules in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 311,
"end": 318,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Segmentation",
"sec_num": "3.4"
},
{
"text": "Verb(+Suffix) +\"\u3001\" verbal +\"\u3002\"+ \"\u305d\u3057\u3066\" +\"\u3001\" Conjunctive particle + \"\u3001\" verbal +\"\u3002\"+ Conjunction +\"\u3001\" Table 1 . Rules for sentence segmentation Here in Table 1 , \"\u3001\" and \"\u3002\" indicate comma and period in Japanese, and \" \u305d \u3057 \u3066 \" means \"then\" in English.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 1",
"ref_id": null
},
{
"start": 150,
"end": 157,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Original After Segmentaiton",
"sec_num": null
},
{
"text": "Moreover, in the lower case of Table 1 , i.e., when the original sentence is in a form of \"conjunctive particle + comma\", a transformation will be executed using the Conjunction Transformation Lexicon described in Section 2. Table 2 shows some examples in the Conjunction Transformation Lexicon. Table 2 . Example rules of the conjunction transformation lexicon",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 38,
"text": "Table 1",
"ref_id": null
},
{
"start": 225,
"end": 232,
"text": "Table 2",
"ref_id": null
},
{
"start": 296,
"end": 303,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Original After Segmentaiton",
"sec_num": null
},
{
"text": "\u304c \u3060\u304c \u3066 \u305d\u3057\u3066 \u3067 \u305d\u3057\u3066 \u306e\u3067 \u306a\u306e\u3067 \u3070 \u306a\u3089\u3070 \u3084 \u305d\u308c\u306b",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conjunctive particle Conjunction",
"sec_num": null
},
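The particle-to-conjunction mapping in Table 2 can be expressed as a small lookup used when a long sentence is split at "conjunctive particle + comma". The dictionary entries come from Table 2; the helper function and its name are our own illustrative sketch, not the paper's implementation.

```python
# Conjunctive particle -> sentence-initial conjunction (Table 2).
CONJ_TRANSFORM = {
    "が": "だが",
    "て": "そして",
    "で": "そして",
    "ので": "なので",
    "ば": "ならば",
    "や": "それに",
}

def split_at_particle(first_clause, particle, rest):
    """Hypothetical helper: close the first clause with '。' and open the
    second with the transformed conjunction followed by '、', mirroring
    the segmentation rule 'conjunctive particle + comma' in Table 1.
    """
    conj = CONJ_TRANSFORM[particle]
    return first_clause + "。", conj + "、" + rest
```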
{
"text": "The last process in this module is to assign some required information to sentences: cohesive relation and position information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-information Assignment",
"sec_num": "3.5"
},
{
"text": "A cohesive relation indicates a strong relation lying between two sentences. Specifically, we use the following four patterns to match two sentences where cohesive relations exist in between.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-information Assignment",
"sec_num": "3.5"
},
{
"text": "\uf0d8 The sentence containing an interrogative and the subsequent sentence. \uf0d8 The sentence containing a demonstrative and the preceding sentence. \uf0d8 Two sentences connected by conjunctions that are used for connecting two affairs logically. \uf0d8 Two sentences connected by conjunctions that are used to expand and describe the previous affair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence-information Assignment",
"sec_num": "3.5"
},
{
"text": "In the first pattern, if the sentence containing an interrogative appears at the end of the paper, no cohesive relation will be assigned. Similarly, in the second pattern, if the sentence containing a demonstrative is the first sentence, or the demonstrative is pointing to something within the current sentence, no cohesive relation will be assigned either. The third and the forth pattern are defined based on the conjunction classification tree in Figure 1 . Position information is associated with the position of the sentence. We have carried out an investigation on 40 social scientific papers with regard to the position where important sentences tend to appear. It turns out that the first paragraph and the last paragraph of each chapter, and the whole last chapter have an inclination to contain important sentences. The system records the number of chapter and paragraph as the position information of the current sentence which will be used for importance degree estimation afterward.",
"cite_spans": [],
"ref_spans": [
{
"start": 451,
"end": 459,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence-information Assignment",
"sec_num": "3.5"
},
{
"text": "An abstract is expected to contain the most important part of the original paper. In this section, we describe our proposal to estimate the importance degree of each keyword in the first step and that of each sentence in the second step for a particular social scientific literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance degree Estimation",
"sec_num": "4"
},
{
"text": "Four kinds of keywords (i.e., Keywords, FKeywords, NKeywords, and TKeywords) are considered as the candidates to be included in the final abstracts. We calculate the importance degree of each keyword (denoted as K_score hereafter) using its occurrence frequency and distribution as shown in Formula 2.",
"cite_spans": [
{
"start": 30,
"end": 76,
"text": "Keywords, FKeywords, NKeywords, and TKeywords)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Keywords",
"sec_num": null
},
{
"text": "eInf dp wp wc score K",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keywords",
"sec_num": null
},
{
"text": "+ + \u00d7 = ) 1 ( _ (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keywords",
"sec_num": null
},
{
"text": "Here, wc indicates the occurrence frequency of the keyword under calculation, wp and dp indicate the number of the paragraph the keyword appears in and the total number of paragraphs contained in the whole paper. Meanwhile, eInf, abbreviated from \"extra information\" acts to make difference between each kind of keywords. We have defined two kinds of eInf for different keywords. First, for Keywords, FKeywords, and NKeywords, the eInf amounts to the occurrence frequency of the keyword within important positions, i.e., the first paragraph and the last paragraph of each chapter, and the whole last chapter. Then, for TKeywords, we consider the total number of characters is more informative than the position information, and therefore plug it into eInf.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keywords",
"sec_num": null
},
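Under one plausible reading of Formula 2 (the extracted formula is garbled, so the exact grouping of terms is an assumption on our part: frequency scaled by the paragraph-distribution ratio wp/dp, plus the extra-information term), the keyword score can be sketched as:

```python
def k_score(wc, wp, dp, e_inf):
    """Keyword importance (one reading of Formula 2).

    wc    -- occurrence frequency of the keyword in the paper
    wp    -- number of paragraphs in which the keyword appears
    dp    -- total number of paragraphs in the paper
    e_inf -- extra information: frequency in important positions for
             Keywords/FKeywords/NKeywords, character count for TKeywords
    """
    return wc * (1 + wp / dp) + e_inf
```

A keyword spread across many paragraphs (large wp/dp) thus scores higher than one with the same frequency concentrated in a single paragraph.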
{
"text": "Obtained importance degrees of keywords are recorded and will be used for sentenceimportance estimation in Section 4.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keywords",
"sec_num": null
},
{
"text": "This sub-section describes the method for calculating the importance degree of each sentence in a paper. This information will become the basis of abstract generation in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences",
"sec_num": null
},
{
"text": "The importance degree of a sentence (denoted as S_score hereafter) is computed following Formula 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences",
"sec_num": null
},
{
"text": "\u2211 = \u00d7 = n i k i keyword Score K score S 1 )} ( _ { _ \u03b1 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences",
"sec_num": null
},
{
"text": "Basically, S_score can be acquired as the total value of all K_scores otained in Section 4.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences",
"sec_num": null
},
{
"text": "Here we denote the total number of keywords in the sentence as n. In case shorter keywords are contained in longer keywords, we employ the longest match principle and put a high priority on longer keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences",
"sec_num": null
},
{
"text": "\u03b1 in formula 3 is a weighted value for the following four kinds of special expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences",
"sec_num": null
},
{
"text": "\uf0d8 emphasis expressions existing in the Adverb Lexicon \uf0d8 sentence-end expressions existing in the Sentence-End Expression Lexicon \uf0d8 theme expressions nouns prior to \"\u306f\" \uf0d8 cohesive relations If any of the above expressions is found within the sentence under calculation, the total value of all K_scores will be multiplied by \u03b1 (> 1.0) for k times. k is the total count of the above expressions contained in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentences",
"sec_num": null
},
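Formula 3 as described (sum the K_scores, then multiply by α once per special expression found) can be sketched as follows. The exact extracted formula is garbled, so this grouping, the function names, and the default α value are assumptions; the paper only specifies α > 1.0.

```python
def s_score(keyword_scores, special_count, alpha=1.2):
    """Sentence importance (one reading of Formula 3).

    keyword_scores -- K_scores of the n keywords matched in the sentence
                      (longest-match, so nested keywords count once)
    special_count  -- k: number of special expressions in the sentence
                      (emphasis adverbs, sentence-end expressions,
                      theme expressions, cohesive relations)
    alpha          -- weight > 1.0 applied once per special expression;
                      the default 1.2 is an illustrative assumption
    """
    return (alpha ** special_count) * sum(keyword_scores)
```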
{
"text": "We have obtained the importance degrees for all the sentences in Section 4. However, we still need to cut the unnecessary part in each sentence to keep each sentence in the final abstract appear plain and sophisticated. This function is called sentence simplification in this paper. Then we are going to conduct constituent-sentence acquisition, cohesive sentence insertion, and abstract assembling eventually to generate the final abstract. In this section, we describe each function in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Generation",
"sec_num": "5"
},
{
"text": "We attempt to cut the unnecessary part and simplify a sentence using three kinds of information: indispensable cases, dependency relations between segments, and mutual information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Simplification",
"sec_num": "5.1"
},
{
"text": "An indispensable case is a necessary case of a predicate, such as \"\u30ac\" or \"\u30f2\" expressing agent case and object case respectively. A sentence tends to appear unnatural if its main predicate lacks one or more indispensable cases. We use the Indispensable-case Lexicon described in Section 2 to put a mark on each segment containing an indispensable case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Simplification",
"sec_num": "5.1"
},
{
"text": "Dependency relations are usually obtained with the help of a Japanese dependency analyzer. Here, we use Cabocha 3 to analyze the dependency relations between segments in a Japanese sentence. Figure 2 is the analyzing result of an example sentence, \"\u653f\u6cbb\u968e\u7d1a\u3068\u3044\u3046 \u8a00\u8449\u306f\u968e\u7d1a\u3068\u3044\u3046\u8a00\u8449\u3068\u3068\u3082\u306b\u6b7b\u8a9e\u3068\u5316\u3057\u305f\u306e \u3067\u3042\u308b\" (The word estate government turned into the dead language along with the word estate).",
"cite_spans": [],
"ref_spans": [
{
"start": 191,
"end": 199,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Sentence Simplification",
"sec_num": "5.1"
},
{
"text": "In Figure 2 , there are six segments in the input sentence, and the main segment is \"\u5316\u3057\u305f\u306e\u3067\u3042 \u308b\" (turned). We can also see that three segments are modifying directly, or depending on in other words, the main segment, while the rest two are not.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Sentence Simplification",
"sec_num": "5.1"
},
{
"text": "Our idea is to employ this difference to cut the unnecessary part, i.e., the segments which are not depending on the main segment. However, if an indispensable-case exists in a segment, even the segment is not depending on the main segment directly, it is still left in the sentence otherwise the sentence will appear odd. Meanwhile, if we can find a sufficiently-high mutual information in the Mutual-information Table for a noun (denoted as noun a ) in any of the remaining segments, and another noun (denoted as noun b ) in the deleted segments, the segment containing noun b will be left undeleted in the sentence. Table 3 shows some examples from the Mutual-information Table. All the simplified sentences inherit the importance degrees of the original sentences. ",
"cite_spans": [],
"ref_spans": [
{
"start": 619,
"end": 626,
"text": "Table 3",
"ref_id": null
},
{
"start": 675,
"end": 681,
"text": "Table.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Simplification",
"sec_num": "5.1"
},
{
"text": "Constituent sentences are the sentences extracted from the original paper to compose the final abstract. Basically, the system just picks out the topmost n% simplified sentences based on their importance degrees. Here, n stands for the target compression rate which is set by the user before generating the abstract. Three ways have been proposed to determine the total number of constituent sentences or characters. We denote them as NC 1 , NC 2 , and NC 3 as shown below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constituent-sentence Acquisition",
"sec_num": "5.2"
},
{
"text": "= n%\u00d7total number of sentences in the original paper \uf0d8 NC 2 = n%\u00d7total number of characters in the original paper \uf0d8 NC 3 = NC 2 + cohesive sentences NC 1 is the simplest way for determining necessary number of constituent sentences. Unlike with NC 1, NC 2 uses the number of characters to calculate necessary constituent number. For example, if the original paper contains 1000 characters, and n has been set to 20, the system will extract simplified sentences in order of their importance degrees until the total number of extracted characters is equal to or larger than 200. The difference between NC 2 and NC 3 lies in the consideration of cohesive sentences. At the time the total number of extracted characters becomes larger than the calculated constituent number (200 in the above example), if the last-extracted sentence is the first sentence of a cohesive sentence pair, the system will extract the second sentence of the pair as well. Otherwise, the last-extracted sentence is removed from the constituent-PACLIC-27 sentence set. We attempt to make the final abstract appear as natural as possible in this way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\uf0d8 NC 1",
"sec_num": null
},
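The NC 2 selection loop described above can be sketched as follows. This is our own illustrative sketch, assuming the input is a list of (simplified sentence, importance score) pairs; the function name and representation are not from the paper.

```python
def select_constituents_nc2(sentences, n_percent):
    """NC 2: pick simplified sentences in descending order of importance
    until the extracted character count reaches n% of the paper's total
    character count (Section 5.2).

    sentences -- list of (text, importance_score) pairs
    n_percent -- target compression rate n, as a percentage
    """
    target = n_percent / 100 * sum(len(text) for text, _ in sentences)
    picked, chars = [], 0
    for text, _score in sorted(sentences, key=lambda pair: -pair[1]):
        if chars >= target:
            break
        picked.append(text)
        chars += len(text)
    return picked
```

NC 3 would add one post-processing step on the last-extracted sentence: keep its cohesive partner if it opens a cohesive pair, or drop it otherwise.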
{
"text": "We will give a further discussion on the difference among NC 1 , NC 2 , and NC 3 in Section 6.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\uf0d8 NC 1",
"sec_num": null
},
{
"text": "As stated in Section 5.2, a cohesive sentence pair is composed of two sentences holding strong association in between. If one and only one sentence has been selected as an abstract constituent, another sentence in the pair should also be extracted and attached to the first sentence in order to keep the final abstract coherent and natural. The appending position is determined according to the type of cohesive relation as shown in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 433,
"end": 441,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Cohesive Sentence Insertion",
"sec_num": "5.3"
},
{
"text": "We have described the procedure to extract constituent sentences so far. The next step is to assemble all the constituent sentences in the order they have appeared in the original paper to compose the abstract. Finally, we conduct the following adjustment to format the abstract.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract Assembling",
"sec_num": "5.4"
},
{
"text": "\uf0d8 connect two sentences coming from the same sentence in the original paper using the rules in Table 2 in the opposite direction. \uf0d8 replace the theme in the subsequent sentence with a demonstrative if the preceding sentence has the same theme. \uf0d8 start a new paragraph whenever the chapter changes according to the position information of each sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 102,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Abstract Assembling",
"sec_num": "5.4"
},
{
"text": "We have conducted several experiments to examine the effectiveness of our approach. Here in this section, we first introduce a set of experiments on different manners to determine the number of constituent sentences, then describe a subjective assessment on the systemgenerated abstract in comparison with another two abstracts. Finally, some discussions are made about the problems and their potential solutions. NC 1 , NC 2 , and NC 3 In order to figure out the difference between three constituent-extraction manners, we calculate the standard deviations of the total character-number in the generated abstracts with NC 1 , NC 2 , and NC 3 respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 414,
"end": 436,
"text": "NC 1 , NC 2 , and NC 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Experiments and Evaluations",
"sec_num": "6"
},
{
"text": "We select six social scientific papers as the experimental objects. Each paper has been input into three prototypes following the definitions of NC 1 , NC 2 , and NC 3 respectively. The average value of the ratios of the number of characters contained in each generated abstract divided by that of each original paper has been shown in Figure 4 , 5 and 6.",
"cite_spans": [],
"ref_spans": [
{
"start": 336,
"end": 344,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Experiments on the Difference between",
"sec_num": "6.1"
},
{
"text": "A comparison with the target ratio from 5% through 30% has been made to figure out how close the actual number of characters is to the calculated target number. From the above figures, we can see that the average-value curve for NC 3 is the most accurate one. The standard deviation for each constituentextraction manner has also been calculated. They are 0.92%~4.60% for NC 1 , 0.56%~1.40% for NC 2 , and 0.66%~1.95% for NC 3 . There is little difference between the deviations of NC 2 and NC 3 , both of which use a character-based calculation to extract constituent sentences. On the other hand, NC 1 has exhibited relatively more volatility, which shows the instability nature of sentence-based calculation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on the Difference between",
"sec_num": "6.1"
},
{
"text": "As a result, we decide to use character-based calculation to estimate the necessary number of constituents for abstract generation in subsequent processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on the Difference between",
"sec_num": "6.1"
},
{
"text": "We conduct a subjective assessment using three kinds of abstracts. In this experiment, the papers as specified in Table 4 were used. Four graduate students and fourteen undergraduate students all majoring in natural language processing have supported us with the subjective assessment. They are divided into five groups each with three or four students. All the three kinds of abstracts are provided to each group without explicit information on which is A-, S-or W-abstract. After 30 minutes' personal reading and 20 minutes' group discussion, each group is asked to rank the three abstract on the following four questions \uf0d8 Q. 1:",
"cite_spans": [],
"ref_spans": [
{
"start": 114,
"end": 121,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Subjective Assessment",
"sec_num": "6.2"
},
{
"text": "Is the abstract grammatically natural? \uf0d8 Q. 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of words",
"sec_num": null
},
{
"text": "Is the Japanese easy to understand? \uf0d8 Q. 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of words",
"sec_num": null
},
{
"text": "Are sentences naturally connected with each other? \uf0d8 Q. 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of words",
"sec_num": null
},
{
"text": "Do you think the text is appropriate as an abstract?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Number of words",
"sec_num": null
},
{
"text": "The reason we adopt groups' opinions instead of individuals' ones lies in the awareness that examinees tend to be more responsible for the group they belong to, rather than the case when they behave as individuals. Table 5 shows the results of the assessment. Each figure in Table 5 indicates an average evaluation-value of the five groups for Q.1, Q.2, Q3 or Q4 towards one of the three abstracts. An average evaluation value (aev) is calculated following Formula 4. Here, x, y, z indicates the number of groups that have assessed the abstract as the first place, second place, or third place respectively in regard to the corresponding question. A larger figure implies a better evaluation. As we have expected, the abstract written by the authors is the best for all the evaluation items. Also, our system seems to have shown the same or better performance than the summarization function of Microsoft Word 2003. Especially, our system achieves 2.2 for the question do you think the text is appropriate as an abstract, which is almost the same with that from Aabstract.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 222,
"text": "Table 5",
"ref_id": null
},
{
"start": 275,
"end": 282,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Number of words",
"sec_num": null
},
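Formula 4 itself is not reproduced in this excerpt. As an illustration only, one plausible reading of the description above is a weighted average, assuming first/second/third place score 3/2/1 points over the five groups (the weights and the function name are assumptions, not taken from the paper):

```python
# Illustrative reconstruction of an "average evaluation value" (aev).
# ASSUMPTION: Formula 4 is not shown in this excerpt; here a 1st/2nd/3rd-place
# ranking is assumed to score 3/2/1 points, averaged over the groups.
def average_evaluation_value(x: int, y: int, z: int) -> float:
    """x, y, z: number of groups ranking the abstract 1st, 2nd, 3rd."""
    groups = x + y + z  # five groups in the paper's setup
    return (3 * x + 2 * y + 1 * z) / groups

# Under this assumed weighting, 2 first-, 2 second-, and 1 third-place
# rankings yield (3*2 + 2*2 + 1*1) / 5 = 2.2.
print(average_evaluation_value(2, 2, 1))  # → 2.2
```

This weighting would reproduce the reported score of 2.2 (e.g., two first-place, two second-place, and one third-place ranking), which is consistent with, though not proof of, this reading.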
{
"text": "However, some problems remain. In an interview with the examinees after the assessment, we received valuable comments such as 'Pronouns appear too frequently' and 'There are too many long sentences in the abstract'. In the following subsection, we discuss these problems and conduct a validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A-abstract S-abstract W-abstract",
"sec_num": null
},
{
"text": "In regard to the issues observed by the examinees in the subjective assessment, there are ways to adjust our approach. For example, we can skip the theme-replacement function in abstract assembling described in Section 5.4, so that the total number of pronouns decreases. In addition, to get a clearer look at the adequate length of a sentence in an abstract, we have conducted an investigation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6.3"
},
{
"text": "We randomly selected 20 social scientific papers, each with an abstract written by its original authors. For each paper, the system produced another abstract with the same number of sentences as the original. The investigation was carried out by measuring the length (i.e., the total number of characters) of sentences in the original abstracts and in the abstracts generated by the system. Figure 7 and Figure 8 show their distributions. The average numbers of characters in the original abstracts and the system-generated abstracts are 38.5 and 53.3, respectively. Moreover, the median value for the original abstracts is 64.5, whereas that for the abstracts generated by the system is 79.0. This might explain the unsatisfactory results for Q.1 and Q.2 in Section 6.2. Several strategies could cope with this issue: for example, we could leave the cohesive relation out of consideration when extracting constituent sentences, or impose a restriction on the number of characters or segments when simplifying a sentence for the abstract.",
"cite_spans": [],
"ref_spans": [
{
"start": 412,
"end": 420,
"text": "Figure 7",
"ref_id": "FIGREF7"
},
{
"start": 425,
"end": 433,
"text": "Figure 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussions",
"sec_num": "6.3"
},
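The measurement described above can be sketched as follows; the sentence splitter and the sample text are illustrative assumptions, not the authors' actual preprocessing:

```python
# Sketch of the sentence-length investigation (Section 6.3): count characters
# per sentence and compare the distributions by mean and median.
# ASSUMPTION: splitting on the Japanese full stop is a simplification; the
# paper's actual preprocessing is not shown in this excerpt.
from statistics import mean, median

def sentence_lengths(text: str, delimiter: str = "。") -> list[int]:
    """Return the character length of each sentence in `text`."""
    sentences = [s + delimiter for s in text.split(delimiter) if s]
    return [len(s) for s in sentences]

sample = "本研究の目的を述べる。手法を提案する。"
lengths = sentence_lengths(sample)
print(mean(lengths), median(lengths))  # → 9.5 9.5
```

The same counts, collected over all sentences of the 20 original and 20 generated abstracts, would yield the distributions plotted in Figure 7 and Figure 8.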
{
"text": "In this paper, we propose a method to generate abstracts for social scientific papers. We put weight on three points: important keywords, readability as an abstract, and features of social scientific papers. Three main modules have been developed in our system to generate the abstract: sentence processing, importance degree estimation, and abstract generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Experimental results have shown the effectiveness of our proposal in comparison with another existing summarization tool, especially when we use character-based calculation to estimate the necessary number of constituents for abstract generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "However, there is still room for improvement. The results of the investigation on sentence length point to future possibilities for enhancing our method and improving the quality of the generated abstracts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "http://mecab.sourceforge.net/ PACLIC-27",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word Association Norms, Mutual Information, and Lexicography",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "Hanks",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Patrick",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church Kenneth Ward and Hanks Patrick. 1990. Word Association Norms, Mutual Information, and Lexicography. Computational Linguistics, 16(1):22-29.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Partitioning long sentences for text summarization",
"authors": [
{
"first": "Fukushima",
"middle": [],
"last": "Takahiro",
"suffix": ""
},
{
"first": "Ehara",
"middle": [],
"last": "Terumasa",
"suffix": ""
},
{
"first": "Shirai",
"middle": [],
"last": "Katsuhiko",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Natural Language Processing",
"volume": "6",
"issue": "6",
"pages": "131--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fukushima Takahiro, Ehara Terumasa, and Shirai Katsuhiko. 1999. Partitioning long sentences for text summarization. Journal of Natural Language Processing, 6(6):131-147. (in Japanese).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Summarizing Newspaper Articles Using Extracted Informative and Functional Words",
"authors": [
{
"first": "Hatayama",
"middle": [],
"last": "Mamiko",
"suffix": ""
},
{
"first": "Matsuo",
"middle": [],
"last": "Yoshihiro",
"suffix": ""
},
{
"first": "Shirai",
"middle": [],
"last": "Satoshi",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Natural Language Processing",
"volume": "9",
"issue": "4",
"pages": "55--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hatayama Mamiko, Matsuo Yoshihiro, and Shirai Satoshi. 2002. Summarizing Newspaper Articles Using Extracted Informative and Functional Words. Journal of Natural Language Processing, 9(4):55-73. (in Japanese).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Kokugo Kyouiku No Tame No Bunsyouron Gaisetu",
"authors": [
{
"first": "Ichikawa",
"middle": [],
"last": "Takasi",
"suffix": ""
}
],
"year": 1978,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ichikawa Takasi. 1978. Kokugo Kyouiku No Tame No Bunsyouron Gaisetu. Kyouiku-shuppan. (in Japanese).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Summarization Method by Reducing Redundancy of Each Sentence for Making Captions of Newscasting",
"authors": [
{
"first": "Mikami",
"middle": [],
"last": "Makoto",
"suffix": ""
},
{
"first": "Masuyama",
"middle": [],
"last": "Shigeru",
"suffix": ""
},
{
"first": "Nakagawa",
"middle": [],
"last": "Seiichi",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Natural Language Processing",
"volume": "6",
"issue": "6",
"pages": "65--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikami Makoto, Masuyama Shigeru, and Nakagawa Seiichi. 1999. A Summarization Method by Reducing Redundancy of Each Sentence for Making Captions of Newscasting. Journal of Natural Language Processing, 6(6):65-81. (in Japanese).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Gendai Nihongo No Kouzou",
"authors": [
{
"first": "Minami",
"middle": [],
"last": "Hujio",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minami Hujio. 1974. Gendai Nihongo No Kouzou. Taishukan Publishing Co., Ltd. (in Japanese).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Fukushiteki Hyowugen No Shosou",
"authors": [
{
"first": "Nitta",
"middle": [],
"last": "Yosio",
"suffix": ""
}
],
"year": 2002,
"venue": "Kurosio Syuppan",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitta Yosio. 2002. Fukushiteki Hyowugen No Shosou. Kurosio Syuppan. (in Japanese).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Goal-Directed Approach for Text Summarization",
"authors": [
{
"first": "Ochitani",
"middle": [],
"last": "Ryo",
"suffix": ""
},
{
"first": "Nakao",
"middle": [],
"last": "Yoshio",
"suffix": ""
},
{
"first": "Nishino",
"middle": [],
"last": "Fumihito",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of the ACL Workshop on Intelligent Scalable Text Summarization",
"volume": "",
"issue": "",
"pages": "47--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ochitani Ryo, Nakao Yoshio, and Nishino Fumihito. 1997. Goal-Directed Approach for Text Summarization. In Proc. of the ACL Workshop on Intelligent Scalable Text Summarization, 47-50.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Multiple Articles Summarization by Deleting Overlapped and Verbose Parts",
"authors": [
{
"first": "Ohtake",
"middle": [],
"last": "Kiyonori",
"suffix": ""
},
{
"first": "Funasaka",
"middle": [],
"last": "Takahiro",
"suffix": ""
},
{
"first": "Masuyama",
"middle": [],
"last": "Shigeru",
"suffix": ""
},
{
"first": "Yamamoto",
"middle": [],
"last": "Kazuhide",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of Natural Language Processing",
"volume": "6",
"issue": "6",
"pages": "45--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ohtake Kiyonori, Funasaka Takahiro, Masuyama Shigeru, and Yamamoto Kazuhide. 1999. Multiple Articles Summarization by Deleting Overlapped and Verbose Parts. Journal of Natural Language Processing, 6(6):45-64. (in Japanese).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A New Approach of Extractive Summarization Combining Sentence Selection and Compression",
"authors": [
{
"first": "Tomita",
"middle": [],
"last": "Kohei",
"suffix": ""
},
{
"first": "Takamura",
"middle": [],
"last": "Hiroya",
"suffix": ""
},
{
"first": "Okumura",
"middle": [],
"last": "Manabu",
"suffix": ""
}
],
"year": 2009,
"venue": "IPSJ SIG Notes(NL)",
"volume": "2009",
"issue": "2",
"pages": "13--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomita Kohei, Takamura Hiroya, and Okumura Manabu. 2009. A New Approach of Extractive Summarization Combining Sentence Selection and Compression. IPSJ SIG Notes (NL), 2009(2):13-20. (in Japanese).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dictionary of Contemporary Japanese Government and Politics",
"authors": [
{
"first": "Uchida",
"middle": [],
"last": "Mituru",
"suffix": ""
},
{
"first": "Imamura",
"middle": [],
"last": "Hiroshi",
"suffix": ""
},
{
"first": "Tanaka",
"middle": [],
"last": "Aiji",
"suffix": ""
},
{
"first": "Tanifuji",
"middle": [],
"last": "Etsushi",
"suffix": ""
},
{
"first": "Yoshino",
"middle": [],
"last": "Takashi",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uchida Mituru, Imamura Hiroshi, Tanaka Aiji, Tanifuji Etsushi, and Yoshino Takashi. 2001. Dictionary of Contemporary Japanese Government and Politics. Brensyuppan. (in Japanese).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Nihongo Daiziten Kodansha kara ban dai2han",
"authors": [
{
"first": "Umesao",
"middle": [],
"last": "Tadao",
"suffix": ""
},
{
"first": "Kindaichi",
"middle": [],
"last": "Haruhiko",
"suffix": ""
},
{
"first": "Sakakura",
"middle": [],
"last": "Atuyosi",
"suffix": ""
},
{
"first": "Hinohara",
"middle": [],
"last": "Sigeaki",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Umesao Tadao, Kindaichi Haruhiko, Sakakura Atuyosi, and Hinohara Sigeaki. 1995. Nihongo Daiziten Kodansha kara ban dai2han. Kodansha Ltd. (in Japanese).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figure 1. Conjunction classification"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The analyzing result of an example sentence"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The flow of cohesive sentence insertion"
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Experimental results with NC 1"
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Experimental results with NC 2. Figure 6. Experimental results with NC 3"
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The abstract written by the authors (called A-abstract hereafter); the abstract created by the system (called S-abstract hereafter); the abstract created by Microsoft Word 2003 (called W-abstract hereafter)."
},
"FIGREF7": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Distribution of the number of characters in original abstracts. Figure 8. Distribution of the number of characters in abstracts generated by the system."
}
}
}
}