{
"paper_id": "N13-1047",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:41:09.731565Z"
},
"title": "Better Twitter Summaries?",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Judd",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado Colorado Springs",
"location": {
"region": "Colorado"
}
},
"email": "jjudd2@uccs.edu"
},
{
"first": "Jugal",
"middle": [],
"last": "Kalita",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado Colorado Springs",
"location": {
"region": "Colorado"
}
},
"email": "jkalita@uccs.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes an approach to improve summaries for a collection of Twitter posts created using the Phrase Reinforcement (PR) Algorithm (Sharifi et al., 2010a). The PR algorithm often generates summaries with excess text and noisy speech. We parse these summaries using a dependency parser and use the dependencies to eliminate some of the excess text and build better-formed summaries. We compare the results to those obtained using the PR Algorithm.",
"pdf_parse": {
"paper_id": "N13-1047",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes an approach to improve summaries for a collection of Twitter posts created using the Phrase Reinforcement (PR) Algorithm (Sharifi et al., 2010a). The PR algorithm often generates summaries with excess text and noisy speech. We parse these summaries using a dependency parser and use the dependencies to eliminate some of the excess text and build better-formed summaries. We compare the results to those obtained using the PR Algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Millions of people use the Web to express themselves and share ideas. Twitter is a very popular micro blogging site. According to a recent study approximately 340 million Tweets are sent out every day 1 . People mostly upload daily routines, fun activities and other words of wisdom for readers. There is also plenty of serious information beyond the personal; according to a study approximately 4% of posts on Twitter have relevant news data 2 . Topics that may be covered by reputable new sources like CNN (Cable News Network) were considered relevant. A topic is simply a keyword or key phrase that one may use to search for Twitter posts containing it. It is possible to gather large amounts of posts from Twitter on many different topics in short amounts of time. Obviously, processing all this information by human hands is impossible. One way to extract information from Twitter posts on a certain topic is to automatically summarize them. (Sharifi et al., 2010a; Sharifi et al., 2010b; Sharifi et al., 2010c) present an algorithm called the Phrase Reinforcement Algorithm to produces summaries of a set of Twitter posts on 1 http://blog.twitter.com/2012/03/ twitter-turns-six.htm 2 http://www.pearanalytics.com/blog/wp-content/ uploads/2010/05/Twitter-Study-August-2009.pdf a certain topic. The PR algorithm produces good summaries for many topics, but for sets of posts on certain topics, the summaries become syntactically malformed or too wordy. This is because the PR Algorithm does not pay much attention to syntactic well-formedness as it constructs a summary sentence from phrases that occur frequently in the posts it summarizes. In this paper, we attempt to improve Twitter summaries produced by the PR algorithm.",
"cite_spans": [
{
"start": 947,
"end": 970,
"text": "(Sharifi et al., 2010a;",
"ref_id": null
},
{
"start": 971,
"end": 993,
"text": "Sharifi et al., 2010b;",
"ref_id": null
},
{
"start": 994,
"end": 1016,
"text": "Sharifi et al., 2010c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a number of Twitter posts on a certain topic, the PR algorithm starts construction of what is called a word graph with a root node containing the topic phrase. It builds a graph showing how words occur before and after the phrase in the root node, considering all the posts on the topic. It builds a subgraph to the left of the topic phrase and another subgraph to its right in a similar manner. To construct the left graph, the algorithm starts with the root node and obtains the set of words that occur immediately before the current node's phrase. For each of these unique words, the algorithm adds them to the graph as nodes with their associated counts to the left of the current node. The algorithm continues this process recursively for each node added to the graph until all the potential words have been added to the left-hand side of the graph. The algorithm repeats these steps symmetrically to construct the right subgraph. Once the full graph is there, the algorithm weights individual nodes. The weights are initialized to the same values as their frequency counts. Then, to account for the fact that some phrases are naturally longer than others, they penalize nodes that occur farther from the root node by an amount that is proportional to their distance. To generate a summary, the algorithm looks for the most overlapping phrases within the graph. Since the nodes' weights are proportional to their overlap, the algorithm searches for the path within the graph with the highest cumulative weight. The sequence of words in this path becomes the summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The PR Algorithm Revisited",
"sec_num": "2"
},
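The word-graph construction and path search just described can be made concrete with a short sketch. The following Python fragment is a hypothetical simplification, not the authors' implementation: it counts the words seen at each offset to the left and right of the topic phrase, applies a distance penalty of an assumed linear form, and greedily extends each side one offset at a time, whereas the actual algorithm searches complete paths through the graph. All function names and the penalty constant are illustrative.

```python
# A hypothetical sketch of the PR word-graph idea (not the authors' code).
from collections import Counter

def side_counts(posts, topic, direction):
    """Count words at each offset before (direction=-1) or after (+1) the topic."""
    counts = {}  # offset -> Counter of words seen at that distance
    for post in posts:
        words = post.lower().split()
        for i, w in enumerate(words):
            if w != topic:
                continue
            j, offset = i + direction, 1
            while 0 <= j < len(words):
                counts.setdefault(offset, Counter())[words[j]] += 1
                j += direction
                offset += 1
    return counts

def best_side(counts, penalty=0.1):
    """Greedily take the heaviest word at each offset; stop when the
    distance-penalized weight (an assumed linear form) reaches zero."""
    path = []
    for offset in sorted(counts):
        word, freq = counts[offset].most_common(1)[0]
        if freq * (1 - penalty * offset) <= 0:
            break
        path.append(word)
    return path

def pr_summary(posts, topic):
    left = best_side(side_counts(posts, topic, -1))  # grows outward, so reverse
    right = best_side(side_counts(posts, topic, +1))
    return " ".join(list(reversed(left)) + [topic] + right)

posts = ["phillies defeat dodgers to take the series",
         "the phillies defeat dodgers again"]
print(pr_summary(posts, "defeat"))  # e.g. "the phillies defeat dodgers to take the series"
```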
{
"text": "We start by making some observations on the phrasereinforcement algorithm. Certain topics do not produce well-formed summaries, while others yield very good summaries. For the posts that have a wellcentered topic without a huge amount of variation among the posts, the algorithm works well and creates good summaries. Here is an example summary produced by the PR algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Description",
"sec_num": "3"
},
{
"text": "Phillies defeat Dodgers to take the National League Championship series. (Sharifi et al., 2010a; Sharifi et al., 2010b; Sharifi et al., 2010c) provide additional examples. The PR algorithm limits the length of the summary to approximately 140 characters, the maximum length of a Twitter post. However, often the summary sentence produced has extraneous parts that appear due to the fact that they appear frequently in the posts being summarized, but these parts make the summary malformed or too wordy. An example with some wordiness is given below.",
"cite_spans": [
{
"start": 73,
"end": 96,
"text": "(Sharifi et al., 2010a;",
"ref_id": null
},
{
"start": 97,
"end": 119,
"text": "Sharifi et al., 2010b;",
"ref_id": null
},
{
"start": 120,
"end": 142,
"text": "Sharifi et al., 2010c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Description",
"sec_num": "3"
},
{
"text": "today is day for vote obama this election day Some \"raw\" PR summaries are a lot more wordy than the one above. The goal we address in this paper is to create grammatically better formed summaries by processing the \"raw\" summaries formed by the PR Algorithm. We drop this excess text and the phrases or extract pieces of text which make sense grammatically to form the final summary. This usually produces a summary with more grammatical accuracy and less noise in between the words. This gets the main point of the summary across better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Description",
"sec_num": "3"
},
{
"text": "The idea behind creating the desired summary is to parse the \"raw\" summary and build dependencies between the dependent and governor words in each summary. We perform parts of speech tagging and obtain lists of governing and dependent words. This data forms the basis for creating a valid summary. For example given the Twitter post, today is day for vote obama this election day, a dependency parser produces the governor-dependent relationships as given in Table 1 . Figure 1 also shows the same grammatical dependencies between words in the phrases.",
"cite_spans": [],
"ref_spans": [
{
"start": 459,
"end": 466,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 469,
"end": 477,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
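To make the parsing step concrete, here is a minimal sketch that prints governor-dependent pairs like those in Table 1. It uses the stanza library as a convenient Python stand-in for the Stanford CoreNLP parser used in this work; the exact relations produced will vary with the parser and model, so the output should be read as illustrative.

```python
# Minimal sketch: governor-dependent pairs via stanza (a stand-in for
# the Stanford CoreNLP pipeline actually used in this work).
# Setup (one time): pip install stanza; then stanza.download("en")
import stanza

nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")
doc = nlp("today is day for vote obama this election day")

for sent in doc.sentences:
    for word in sent.words:
        # word.head is 1-indexed; 0 marks the root of the sentence.
        governor = sent.words[word.head - 1].text if word.head > 0 else "ROOT"
        print(f"{governor:8} -> {word.text} ({word.deprel})")
```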
{
"text": "We believe that a word which governs many words is key to the phrase as a whole, and dependent words this election",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "Algorithm 1 Algorithm to Fix \"Raw\" PRA Summaries I. For each word, check grammatical compatibility with words before and after the word being checked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "II. If a word has no dependencies immediately before or after it, drop the word. III. After each word has been checked, check for the words that form a grammatical phrase. IV. Write out the summary without the dropped words and without phrases with only two words. V. If needed, go back to step III, because there shouldn't be any more single words with no dependencies to check, and repeat as many times as necessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "which are closely related, or in other words, lay close to each other in the phrase should be left in the order they appear. Conceptually, our approach works as follows: look at every word and see if it makes sense with the word before and after it. This builds dependencies between the word in question with the words around it. If a word before or after the word being analyzed does not make sense grammatically, it can be removed from that grammatically correct phrase. Dependent words that are not close to each other may not be as important as words that lay close to each other and have more dependencies, and thus may be thrown out of the summaries. Through this process grammatically correct phrases can be formed. The dependencies are built by tagging each word as a part of speech and seeing if it relates to other words. For example, it checks whether or not the conjunction \"and\" is serving its purpose of combining a set of words or ideas, in other words, if those dependencies exist. If dependencies exist with the nearby words, that given collection of words can be set aside as a grammatically correct phrase until it reaches words with no dependencies, and the process can continue. The phrases with few words can be dropped, as well as single words. These new phrases can be checked for grammatical accuracy in the same way as the previous phrases, and if they pass, can remain combined forming a longer summary that should be grammatically correct. The main steps are given in Algorithm 1. Now, take the example summary produced by the PR Algorithm for the election Twitter posts. Looking at this summary, we, as humans, may make changes and make the summary grammatically correct. Two potential ideal summaries would be the following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
{
"text": "today is the day to vote for obama vote for obama this election day",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
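A compact rendering of Algorithm 1 may help make steps I-V concrete (this is the code sketch referenced above). Here `deps` is a set of (governor index, dependent index) pairs over the tokenized raw summary, as a dependency parser would supply; the contiguous-run grouping implements step III, the length threshold step IV, and the outer loop step V. The helper name, the index convention, and the example dependency set are illustrative assumptions, not our exact implementation.

```python
def fix_raw_summary(words, deps):
    """Sketch of Algorithm 1. deps: set of (governor_idx, dependent_idx) pairs."""
    while True:
        # Steps I-II: keep a word only if it shares a dependency with the
        # word immediately before or after it.
        keep = [i for i in range(len(words))
                if any((i, j) in deps or (j, i) in deps for j in (i - 1, i + 1))]
        # Step III: group surviving indices into contiguous runs ("phrases").
        runs, current = [], []
        for i in keep:
            if current and i != current[-1] + 1:
                runs.append(current)
                current = []
            current.append(i)
        if current:
            runs.append(current)
        # Step IV: drop runs of two words or fewer.
        kept = [i for run in runs if len(run) > 2 for i in run]
        if len(kept) == len(words):  # Step V: repeat until nothing changes.
            return " ".join(words)
        # Re-index the dependencies onto the surviving words and go again.
        remap = {old: new for new, old in enumerate(kept)}
        deps = {(remap[g], remap[d]) for g, d in deps if g in remap and d in remap}
        words = [words[i] for i in kept]

words = "today is day for vote obama this election day".split()
# Illustrative pairs loosely following Table 1 (word indices, 0-based).
deps = {(2, 0), (2, 1), (2, 3), (2, 8), (5, 4), (3, 5)}
print(fix_raw_summary(words, deps))
```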
{
"text": "The actual process used in the making of the grammatical summaries is as follows. Two main lists are created from lists of governor and dependent words, one with the governor words and another with the dependent words. The governor words are checked to see how many dependent words are linked to them. The governing words with the highest number of dependent words are kept for later. For example using the above phrase about the elections, the word \"day\" was the governing word with the highest amount of dependent words and was thus kept for the final summary. The superscripts on the word \"day\" differentiate its two occurrences. The dependent words are kept in groups of closely linked dependent words. Using the same example about the election, an intermediate list of closely related dependent words is \"today,\" \"is,\" \"for,\" \"vote,\" \"obama,\" \"this,\" \"election,\" and \"day.\" And the final list of closely related dependent words is \"for,\" \"vote,\" \"obama,\" \"this,\" \"election\" and \"day.\" After these two lists are in the final stages the lists are merged placing the words in proper order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "4"
},
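The list bookkeeping just described can be sketched as follows. The window used to decide that dependent words are "closely linked" (adjacent indices here) and the choice of a single top governor are assumptions made for illustration; the paper describes the procedure only informally.

```python
from collections import defaultdict

def merge_governor_dependent(words, deps):
    """Sketch of the merging step. deps: (governor_idx, dependent_idx) pairs."""
    dependents = defaultdict(set)
    for g, d in deps:
        dependents[g].add(d)
    # Keep the governor with the most dependents ("day" in the election example).
    top = max(dependents, key=lambda g: len(dependents[g]))
    # Keep dependents that sit next to another dependent (assumed window of 1).
    all_deps = {d for ds in dependents.values() for d in ds}
    close = {d for d in all_deps if d - 1 in all_deps or d + 1 in all_deps}
    # Merge the two lists, restoring the original word order.
    return " ".join(words[i] for i in sorted(close | {top}))
```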
{
"text": "To begin, the Twitter posts were collected manually and stored in text files. The topics we chose to focus on important current events and some pop culture. Approximately 100 posts were collected on ten different topics. These topics are \"The Avengers,\" \"Avril Lavigne,\" \"Christmas,\" \"the election,\" \"Election Day,\" \"Iron Man 3,\" \"president 2012,\" \"Hurricane Sandy,\" \"Thanksgiving,\" and \"vote.\" The collections of posts were passed on to three volunteers to produce short accurate summaries that capture the main idea from the posts. The collections of posts were also first run through the PR Algorithm and then through the process described in this paper to try and refine the summaries output by the PR Algorithm. The Stanford CoreNLP parser 3 was used to build the lists of governor and dependent words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "We use ROUGE evaluation metrics (Lin 2004 ) just like (Sharifi et al., 2010a; Sharifi et al., 2010b; Sharifi et al., 2010c) , who evaluated summaries obtained with the PR Algorithm. Specifically, we use ROUGE-L, which uses the longest common subsequence (LCS) to compare summaries. As the LCS of the two summaries in comparison increases in length, so does the similarity of the two summaries.",
"cite_spans": [
{
"start": 32,
"end": 41,
"text": "(Lin 2004",
"ref_id": "BIBREF1"
},
{
"start": 54,
"end": 77,
"text": "(Sharifi et al., 2010a;",
"ref_id": null
},
{
"start": 78,
"end": 100,
"text": "Sharifi et al., 2010b;",
"ref_id": null
},
{
"start": 101,
"end": 123,
"text": "Sharifi et al., 2010c)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
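For reference, ROUGE-L reduces to a longest-common-subsequence computation over word sequences. The sketch below computes LCS-based recall, precision, and an F-score in the spirit of Lin (2004); note that ROUGE's beta-weighted F-measure is simplified to a plain F1 here, and real evaluations would also handle stemming, stopword options, and multiple reference summaries.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two word lists (DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, wa in enumerate(a, 1):
        for j, wb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if wa == wb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(reference, candidate):
    """LCS-based recall, precision, and (simplified) F1."""
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_len(ref, cand)
    recall, precision = lcs / len(ref), lcs / len(cand)
    f = 2 * recall * precision / (recall + precision) if lcs else 0.0
    return recall, precision, f

print(rouge_l("vote for obama this election day",
              "today is day for vote obama this election day"))
```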
{
"text": "We now discuss results using ROUGE-L on the summaries we produce. Tables 2 through 5 show the results of four different ROUGE-L evaluations, comparing them to the results found using the PR Algorithm, and Table 6 shows the comparisons of the averaged scores to the scores (Sharifi et al., 2010a) obtained using the PR Algorithm. Table 2 shows the regular ROUGE-L scores, meaning the recall, precision and F-scores for each task and the average overall scores, for the collection of posts before using the dependency parser to refine the summaries. Table 3 displays the results after using the dependency parser on the summaries formed by the PR Algorithm. One of the options in ROUGE is to show the \"best\" result, for each task. Table 4 has this result for the PR Algorithm results. Table 5 shows the results of the \"best\" scores, after running it through the dependency parser. Table 6 shows the averages from Tables 3 and 5 , using the dependency parser, compared to Sharifi et al.'s results using the PR Algorithm. Stopwords were not removed in our experiments. As one can see, the use of our algorithm on the summaries produced by the PR Algorithm improves the F-score values, at least in the example cases we tried. In almost every case, there is substantial rise in the F-score. As previously mentioned, some col- ",
"cite_spans": [
{
"start": 272,
"end": 295,
"text": "(Sharifi et al., 2010a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 66,
"end": 84,
"text": "Tables 2 through 5",
"ref_id": "TABREF1"
},
{
"start": 205,
"end": 212,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 329,
"end": 336,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 729,
"end": 736,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 783,
"end": 790,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 879,
"end": 926,
"text": "Table 6 shows the averages from Tables 3 and 5",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "5"
},
{
"text": "The PR Algorithm is not a pure extractive algorithm. It creates summaries of Twitter posts by piecing together the most commonly occurring words and phrases in the entire set of tweets, but keeping the order of constituents as close to the order in which they occur in the posts, collectively speaking. As we noted in this paper, the heuristic method using which the PR Algorithm composes a summary sentence out of the phrases sometimes leads to ungrammatical sentences or wordy sentences. This paper shows that the \"raw\" summaries produced by the PR Algorithm can be improved by taking into account governor-dependency relationships among the constituents. There is nothing in this clean-up algorithm that says that it works only with summaries of tweets. The same approach can potentially be used to improve grammaticality of sentences written by humans in a sloppy manner. In addition, given several sentences with overlapping content (from multiple sources), the same process can potentially be used to construct a grammatical sentence out of all the input sentences. This problem often arises in general multi-document summarization. We believe that a corrective approach like ours can be used together with a sentence compression approach, such as (Knight and Marcu 2002) , to produce even better summaries in conjunction with the PR or other summarization algorithms that work with sociallygenerated texts which are often malformed and short. We have shown in this paper that simply focusing on grammatical dependency tends to make the final summaries more grammatical and readable compared to the raw summaries. However, we believe that more complex restructuring of the words and constituents would be necessary to improve the quality of the raw summaries, in general.",
"cite_spans": [
{
"start": 1254,
"end": 1277,
"text": "(Knight and Marcu 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://nlp.stanford.edu/software/corenlp.shtml",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Summarization beyond sentence extraction: A probabilistic approach to sentence compression",
"authors": [
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2004,
"venue": "Artificial Intelligence",
"volume": "139",
"issue": "1",
"pages": "91--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Knight, K. and Marcu, D. 2004. Summarization beyond sentence extraction: A probabilistic approach to sen- tence compression, Artificial Intelligence, Vol. 139, No. 1, pp. 91-107.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Rouge: A package for automatic evaluation of summaries, Text Summarization Branches Out",
"authors": [
{
"first": "C",
"middle": [
"Y"
],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL-04 Workshop",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, C.Y. 2004. Rouge: A package for automatic evalua- tion of summaries, Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pp. 74-81.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Summarizing Microblogs Automatically, Annual Conference of the National Association for Advancement of Computational Linguistics-Human Language Technology (NAACL-HLT)",
"authors": [
{
"first": "",
"middle": [],
"last": "Sharifi",
"suffix": ""
},
{
"first": "Mark-Anthony",
"middle": [],
"last": "Beaux",
"suffix": ""
},
{
"first": "Jugal",
"middle": [],
"last": "Hutton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kalita",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "685--688",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharifi, Beaux, Mark-Anthony Hutton, and Jugal Kalita. 2010. Summarizing Microblogs Automatically, Annual Conference of the National Association for Advance- ment of Computational Linguistics-Human Language Technology (NAACL-HLT), pp. 685-688, Los Angeles.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Experiments in Microblog Summarization",
"authors": [
{
"first": "",
"middle": [],
"last": "Sharifi",
"suffix": ""
},
{
"first": "Mark-Anthony",
"middle": [],
"last": "Beaux",
"suffix": ""
},
{
"first": "Jugal",
"middle": [],
"last": "Hutton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kalita",
"suffix": ""
}
],
"year": 2010,
"venue": "Second IEEE International Conference on Social Computing",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharifi, Beaux, Mark-Anthony Hutton, and Jugal Kalita. 2010. Experiments in Microblog Summarization, Sec- ond IEEE International Conference on Social Comput- ing (SocialCom 2010), pp. 49-56, Minneapolis.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic Summarization of Twitter Topics, National Workshop on Design and Analysis of Algorithms",
"authors": [
{
"first": "",
"middle": [],
"last": "Sharifi",
"suffix": ""
},
{
"first": "Mark-Anthony",
"middle": [],
"last": "Beaux",
"suffix": ""
},
{
"first": "Jugal",
"middle": [],
"last": "Hutton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kalita",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sharifi, Beaux, Mark-Anthony Hutton and Jugal Kalita. 2010. Automatic Summarization of Twitter Topics, National Workshop on Design and Analysis of Algo- rithms, NWDAA 10, Tezpur University, Assam, India.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"num": null,
"text": "Dependency Parse for today is day 1 for vote obama this election day 2"
},
"TABREF0": {
"content": "<table><tr><td colspan=\"2\">Governor Dependent</td></tr><tr><td>day 1</td><td>today</td></tr><tr><td/><td>is</td></tr><tr><td/><td>for</td></tr><tr><td/><td>day 2</td></tr><tr><td>obama</td><td>vote</td></tr><tr><td>for</td><td>obama</td></tr><tr><td>day 2</td><td/></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "Governor and Dependent Words for today is day 1 for vote obama this election day 2"
},
"TABREF1": {
"content": "<table><tr><td>Task</td><td colspan=\"3\">Recall Precision F-score</td></tr><tr><td>Task 1</td><td>0.667</td><td>0.343</td><td>0.453</td></tr><tr><td>Task 2</td><td>1.000</td><td>0.227</td><td>0.370</td></tr><tr><td>Task 3</td><td>0.353</td><td>0.240</td><td>0.286</td></tr><tr><td>Task 4</td><td>0.800</td><td>0.154</td><td>0.258</td></tr><tr><td>Task 5</td><td>1.000</td><td>0.185</td><td>0.313</td></tr><tr><td>Task 6</td><td>0.667</td><td>0.150</td><td>0.245</td></tr><tr><td>Task 7</td><td>0.889</td><td>0.125</td><td>0.219</td></tr><tr><td>Task 8</td><td>0.636</td><td>0.125</td><td>0.209</td></tr><tr><td>Task 9</td><td>0.500</td><td>0.300</td><td>0.375</td></tr><tr><td colspan=\"2\">Task 10 0.455</td><td>0.100</td><td>0.164</td></tr><tr><td colspan=\"2\">Average 0.696</td><td>0.195</td><td>0.289</td></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "ROUGE-L without Stopwords, Before"
},
"TABREF2": {
"content": "<table><tr><td>Task</td><td colspan=\"3\">Recall Precision F-score</td></tr><tr><td>Task 1</td><td>0.667</td><td>0.480</td><td>0.558</td></tr><tr><td>Task 2</td><td>0.400</td><td>0.500</td><td>0.444</td></tr><tr><td>Task 3</td><td>0.000</td><td>0.000</td><td>0.000</td></tr><tr><td>Task 4</td><td>0.400</td><td>0.333</td><td>0.363</td></tr><tr><td>Task 5</td><td>0.900</td><td>0.600</td><td>0.720</td></tr><tr><td>Task 6</td><td>0.389</td><td>0.350</td><td>0.368</td></tr><tr><td>Task 7</td><td>0.556</td><td>0.250</td><td>0.345</td></tr><tr><td>Task 8</td><td>0.545</td><td>0.500</td><td>0.522</td></tr><tr><td>Task 9</td><td>0.417</td><td>0.417</td><td>0.417</td></tr><tr><td colspan=\"2\">Task 10 0.363</td><td>0.200</td><td>0.258</td></tr><tr><td colspan=\"2\">Average 0.464</td><td>0.363</td><td>0.400</td></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "ROUGE-L without Stopwords, After"
},
"TABREF3": {
"content": "<table><tr><td colspan=\"4\">: ROUGE-L Best without Stopwords, Before</td></tr><tr><td/><td colspan=\"3\">Recall Precision F-score</td></tr><tr><td>Task 1</td><td>1.000</td><td>0.429</td><td>0.600</td></tr><tr><td>Task 2</td><td>1.000</td><td>0.227</td><td>0.370</td></tr><tr><td>Task 3</td><td>0.500</td><td>0.200</td><td>0.286</td></tr><tr><td>Task 4</td><td>1.000</td><td>0.154</td><td>0.267</td></tr><tr><td>Task 5</td><td>1.000</td><td>0.167</td><td>0.286</td></tr><tr><td>Task 6</td><td>1.000</td><td>0.200</td><td>0.333</td></tr><tr><td>Task 7</td><td>1.000</td><td>0.125</td><td>0.222</td></tr><tr><td>Task 8</td><td>1.000</td><td>0.071</td><td>0.133</td></tr><tr><td>Task 9</td><td>1.000</td><td>0.400</td><td>0.571</td></tr><tr><td colspan=\"2\">Task 10 1.000</td><td>0.100</td><td>0.182</td></tr><tr><td colspan=\"2\">Average 0.950</td><td>0.207</td><td>0.325</td></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": ""
},
"TABREF4": {
"content": "<table><tr><td/><td colspan=\"3\">Recall Precision F-score</td></tr><tr><td>Task 1</td><td>1.000</td><td>0.600</td><td>0.750</td></tr><tr><td>Task 2</td><td>0.400</td><td>0.500</td><td>0.444</td></tr><tr><td>Task 3</td><td>0.000</td><td>0.000</td><td>0.000</td></tr><tr><td>Task 4</td><td>0.500</td><td>0.333</td><td>0.400</td></tr><tr><td>Task 5</td><td>1.000</td><td>0.600</td><td>0.750</td></tr><tr><td>Task 6</td><td>0.600</td><td>0.600</td><td>0.600</td></tr><tr><td>Task 7</td><td>0.667</td><td>0.400</td><td>0.500</td></tr><tr><td>Task 8</td><td>1.000</td><td>0.333</td><td>0.500</td></tr><tr><td>Task 9</td><td>1.000</td><td>0.667</td><td>0.800</td></tr><tr><td colspan=\"2\">Task 10 1.000</td><td>0.250</td><td>0.400</td></tr><tr><td colspan=\"2\">Average 0.718</td><td>0.428</td><td>0.515</td></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "ROUGE-L Best without Stopwords, After"
},
"TABREF5": {
"content": "<table><tr><td/><td colspan=\"3\">Recall Precision F-score</td></tr><tr><td>Sharifi (PRA)</td><td>0.31</td><td>0.34</td><td>0.33</td></tr><tr><td>Rouge-L after re-</td><td>0.46</td><td>0.36</td><td>0.40</td></tr><tr><td>construction</td><td/><td/><td/></tr><tr><td>Rouge-L best after</td><td>0.72</td><td>0.43</td><td>0.52</td></tr><tr><td>reconstruction</td><td/><td/><td/></tr><tr><td colspan=\"4\">lections of Tweets do not produce good summaries.</td></tr><tr><td colspan=\"4\">Task 3 had some poor scores in all cases, so one can</td></tr><tr><td colspan=\"4\">deduce that the posts on that topic (Christmas) were</td></tr><tr><td colspan=\"4\">widely spread, or they did not have a central theme.</td></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "ROUGE-L Averages after applying our algorithm vs. Sharifi et al."
}
}
}
}