{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:35:51.633351Z"
},
"title": "Implementing Evaluation Metrics Based on Theories of Democracy in News Comment Recommendation (Hackathon Report)",
"authors": [
{
"first": "Myrthe",
"middle": [],
"last": "Reuver",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CLTL Dept. of Language, Literature & Communication Vrije Universiteit Amsterdam",
"location": {}
},
"email": "myrthe.reuver@vu.nl"
},
{
"first": "Nicolas",
"middle": [],
"last": "Mattis",
"suffix": "",
"affiliation": {},
"email": "n.m.mattis@vu.nl"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Diversity in news recommendation is important for democratic debate. Current recommendation strategies, as well as evaluation metrics for recommender systems, do not explicitly focus on this aspect of news recommendation. In the 2021 Embeddia Hackathon, we implemented one novel, normative theory-based evaluation metric, \"activation\", and use it to compare two recommendation strategies of New York Times comments, one based on user likes and another on editor picks. We found that both comment recommendation strategies lead to recommendations consistently less activating than the available comments in the pool of data, but the editor's picks more so. This might indicate that New York Times editors' support a deliberative democratic model, in which less activation is deemed ideal for democratic debate.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Diversity in news recommendation is important for democratic debate. Current recommendation strategies, as well as evaluation metrics for recommender systems, do not explicitly focus on this aspect of news recommendation. In the 2021 Embeddia Hackathon, we implemented one novel, normative theory-based evaluation metric, \"activation\", and use it to compare two recommendation strategies of New York Times comments, one based on user likes and another on editor picks. We found that both comment recommendation strategies lead to recommendations consistently less activating than the available comments in the pool of data, but the editor's picks more so. This might indicate that New York Times editors' support a deliberative democratic model, in which less activation is deemed ideal for democratic debate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recommender systems are a core component of many online environments. Such systems can be used to recommend movies or music to users where there is a large pool of potential recommendations. Their main task, as Karimi et al. (2018) put it, is \"to filter incoming streams of information according to the users' preferences or to point them to additional items of interest in the context of a given object\" (p. 1203). As such, they are usually designed in ways that maximise user satisfaction. Their performance is traditionally evaluated in terms of their \"accuracy\", which is often measured by proxies such as clicks, time spent on a page, or engagement. Simply put: the more attention a user pays to the content, the better the recommender system is deemed to be.",
"cite_spans": [
{
"start": 211,
"end": 231,
"text": "Karimi et al. (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, there is an increasing awareness in the recommender systems domain that \"beyondaccuracy\" metrics such as diversity or novelty are also important aspects of a meaningful recommender system evaluation (Raza and Ding, 2020; Kaminskas and Bridge, 2016) . This is particularly true in contexts where the impact of recommendations extends beyond individual purchasing choices or movie selections, such as news recommendation. Given that exposure to diverse viewpoints is often regarded as beneficial for democratic societies (Helberger and Wojcieszak, 2018) , scholars have recently highlighted the importance of exposure diversity in such systems (Helberger, 2019; . Not recommending diversity in news recommender systems could potentially lead to 'filter bubbles', where users only receive ideas and viewpoints they already know and/or agree with (Pariser, 2011) .",
"cite_spans": [
{
"start": 208,
"end": 229,
"text": "(Raza and Ding, 2020;",
"ref_id": "BIBREF15"
},
{
"start": 230,
"end": 257,
"text": "Kaminskas and Bridge, 2016)",
"ref_id": "BIBREF8"
},
{
"start": 528,
"end": 560,
"text": "(Helberger and Wojcieszak, 2018)",
"ref_id": "BIBREF6"
},
{
"start": 651,
"end": 668,
"text": "(Helberger, 2019;",
"ref_id": "BIBREF4"
},
{
"start": 852,
"end": 867,
"text": "(Pariser, 2011)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Very recently, evaluation and optimization metrics by Vrijenhoek et al. (2021) have been specifically designed to align with potential goals of democratic news recommenders as suggested by Helberger (2019) . As such, they move beyond the existing \"beyond accuracy\" evaluation metrics used in the recommender system field. These existing metrics range from \"diversity\", to \"serendipity\", \"novelty\", and \"coverage\" (Kaminskas and Bridge, 2016) , but all of these implicitly aim at increasing user satisfaction rather than achieving normative goals.",
"cite_spans": [
{
"start": 54,
"end": 78,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF20"
},
{
"start": 189,
"end": 205,
"text": "Helberger (2019)",
"ref_id": "BIBREF4"
},
{
"start": 413,
"end": 441,
"text": "(Kaminskas and Bridge, 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast, the metrics in Vrijenhoek et al. (2021) are explicitly linked to supporting democratic debate rather than user satisfaction. Specifically, these metrics are linked to models of democracy. One of these is the deliberative model of democracy, which states a functioning democracy consists of rational debate of viewpoints and ideas. Another model is the critical model, which contends a successful democracy has clashing and active debates of opposing viewpoints.",
"cite_spans": [
{
"start": 28,
"end": 52,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we specifically focus on one of these metrics, \"activation\", and use it to evaluate two different recommendation strategies for New York Times user comments in response to news articles. In doing so, our goal is to explore the potential of, but also the challenges related to, such normative metrics, especially where it concerns Natural Language Processing (NLP) tools and strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To better understand how different recommendation strategies in the NYT comment section perform in terms of this metric, we ask the following research question: \"How do different manners of recommending user comments on a news article affect the recommendation set's average activation scores?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "By comparing different comment recommendation strategies, we contribute to the ongoing discussion in three ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We are the first, to our knowledge, to implement Vrijenhoek et al. (2021) 's evaluation metrics for democratic news recommenders on a dataset;",
"cite_spans": [
{
"start": 51,
"end": 75,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We explicitly identify possibilities and problems related to NLP in the use of such metrics;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We add to the literature on the deliberative value of user-comments as well as on editorial biases in comment selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal was to \"test-drive\" one or more of the theory-driven evaluation metrics in Vrijenhoek et al. (2021) , and see where we ran into conceptual or practical problems preventing us from answering a research question aimed at comparing different recommendation strategies on the basis of this metric.",
"cite_spans": [
{
"start": 84,
"end": 108,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although not exactly the same as news articles in a news recommender system, user comments are particularly interesting in this context because of their deliberative implications. That is, they provide a public space where users can share, consume and engage with different ideas and viewpoints (Rowe, 2015) . As such, they constitute an excellent context for the test of Vrijenhoek et al. (2021) 's activation metric.",
"cite_spans": [
{
"start": 295,
"end": 307,
"text": "(Rowe, 2015)",
"ref_id": "BIBREF18"
},
{
"start": 372,
"end": 396,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "The dataset (Kesarwani, 2018) , one of the datasets linked to in the hackathon resources (Pollak et al., 2021), contains 9.450 articles with 2.176.364 comments and other related metadata from the New York Times. The articles were published from January 2017 to May 2017 and January 2018 to May 2018. The mean number of comments per article is 230, with an SD of 403.4.",
"cite_spans": [
{
"start": 12,
"end": 29,
"text": "(Kesarwani, 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "The comment data set contains the text and timestamps of the individual comments, as well as unique identifiers for each comment and the article that it belongs to. In addition, for each comment it also contains the number of user likes (called \"recommendations\") as well as information on whether or not the comment was selected by the NYTimes editorial board. According to their website, \"NYT Picks are a selection of comments that represent a range of views and are judged the most interesting or thoughtful. In some cases, NYT Picks may be selected to highlight comments from a particular region, or readers with first-hand knowledge of an issue.\" (Sta) In most cases, the editors select 1 comment per debate, but the spread is large, with the mean being 13 recommended comments per article (SD = 11).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2.1"
},
{
"text": "We recommend the top 3, top 5, and top 10 comments for each news article in two ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two recommendation strategies",
"sec_num": "2.2"
},
{
"text": "\u2022 N most-liked by users",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two recommendation strategies",
"sec_num": "2.2"
},
{
"text": "\u2022 N editorial recommendations (in order of appearance)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two recommendation strategies",
"sec_num": "2.2"
},
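            {
                "text": "The following is a minimal sketch of these two strategies (our own illustration for this report, not the exact hackathon code; the column names 'recommendations' for user likes and 'editorsSelection' for editor picks are assumptions about the comment data):\n\nimport pandas as pd\n\ndef top_n_by_likes(comments: pd.DataFrame, n: int) -> pd.DataFrame:\n    # Comments with the most user likes (\"recommendations\") first.\n    return comments.sort_values(\"recommendations\", ascending=False).head(n)\n\ndef top_n_editor_picks(comments: pd.DataFrame, n: int) -> pd.DataFrame:\n    # Editor-picked comments, kept in their order of appearance.\n    return comments[comments[\"editorsSelection\"] == 1].head(n)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Two recommendation strategies",
                "sec_num": "2.2"
            },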
{
"text": "We also considered comparing these two recommendation strategies to maximizing intra-list diversity based on a representation with Google News word embeddings, but ran out of time to do so. This strategy is based on Lu et al. (2020) , who use this strategy to implement the \"editorial value\" diversity.",
"cite_spans": [
{
"start": 216,
"end": 232,
"text": "Lu et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Two recommendation strategies",
"sec_num": "2.2"
},
{
"text": "We compare these strategies with the evaluation metric \"activation\" from Vrijenhoek et al. (2021) .",
"cite_spans": [
{
"start": 73,
"end": 97,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Two recommendation strategies",
"sec_num": "2.2"
},
{
"text": "We then analyze what the different levels of Activation in different recommendation strategies say about the implicit support for the different democratic models outlined in Helberger (2019). A higher activation might indicate an implicit support of the critical model of democracy, where conflict needs to be emphasized in order to obtain a lively, healthy debate. A lower activation score might indicate an implicit support of the deliberative model of democracy, where rational and calm debate is deemed important for democratic debate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two recommendation strategies",
"sec_num": "2.2"
},
{
"text": "In order to test our approaches, we used two samples of the dataset. Our validation set was February 2018. Our unseen test set was February 2017. We chose the same month so time-sensitive differences in comments or topics were avoided. February 2017 consisted of 1.115 articles, with M = 186 comments (SD = 298) per article. February 2018 had 885 articles, with M = 263 (SD = 466) comments per article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test and validation sets",
"sec_num": "2.3"
},
{
"text": "3 Implementing the Metric 3.1 Exploring which metric to implement Early in the hackathon, we found two of the five metrics in Vrijenhoek et al. (2021) require user data, such as previous watch or read history. The three metrics suitable to our research needs, and our data without such documentation, were \"activation\", \"representation\", and \"alternative voices\". However, the latter two presented too much of a challenge for the short time of a three-week, parttime hackathon.",
"cite_spans": [
{
"start": 126,
"end": 150,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test and validation sets",
"sec_num": "2.3"
},
{
"text": "\"Representation\" requires the identification of different viewpoints and perspectives in text. NLP has several manners of doing so: tasks such as claim detection, argument mining, and stance detection. For an overview of such NLP tasks and approaches useful for viewpoint diversity in news recommendation, see Reuver et al. (2021) . These approaches take time to be done correctly, and we felt the short time available to us in this hackathon did not allow us to properly identify viewpoints in the comments.",
"cite_spans": [
{
"start": 310,
"end": 330,
"text": "Reuver et al. (2021)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test and validation sets",
"sec_num": "2.3"
},
{
"text": "\"Alternative Voices\" requires the identification of whether mentioned people are a member of a minority group. This metric is difficult to implement for several reasons. Conceptually, for comments it may be relevant to know whether the commenter has a marginalized background (rather than any mentioned named entities). However, we did not have such information in our dataset. Additionally, who is marginalized depends likely on context -which makes detection by one model difficult. There are also technical hurdles when considering this metric. It is relatively difficult to identify whether someone mentioned comes from a marginalized background based on only the text. This could possibly be solved with open data such as Wikipedia, but this allows only wellknown named entities to be recognized. Furthermore, there is a bias in Wikipedia itself: especially women are less often mentioned. Another method would for instance utilize techniques such as largescale language models to recognize names or terms related to certain marginalized groups. However, this in itself also has bias, and could lead to racist or otherwise unwelcome associations in the representation, as pointed out in Bender et al. (2021).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test and validation sets",
"sec_num": "2.3"
},
{
"text": "The \"Activation\" metric, in contrast, is related to the polarity in the text. Polarity detection is a common task in NLP, and one with extensive support in terms of tools and methods. For this project, we chose to specifically focus on Vrijenhoek et al. 2021's activation metric. The core idea behind this metric is to gauge to what extent certain content might spark action among the readers, and is related to emotion. Past research shows that both negative and positive emotions can affect the processing and effects of textual content (Brady et al., 2017; Ridout and Searles, 2011; Soroka and McAdams, 2015) . As such, emotional content can produce various effects that may or may not contribute to healthy democracies. Indeed, activation is not universally appreciated in democratic theory. In the models of democracy, activation has different desired values, as outlined in Helberger (2019). For example, from a deliberative democratic perspective, it could be argued that neutral and impartial content facilitates reasoned reflection and deliberation. However, from a more critical democratic perspective one could also argue that emotional content is more valuable as it may generate additional interest and engagement.",
"cite_spans": [
{
"start": 586,
"end": 611,
"text": "Soroka and McAdams, 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test and validation sets",
"sec_num": "2.3"
},
{
"text": "We implemented activation in the following manner, based on (Vrijenhoek et al., 2021) 's description of how it should be used. Each article has a certain set of comment recommendations, and also a set of all potential comments. For each comment, we calculate the \"compound\" polarity value. For both sets we take the mean of the absolute polarity value of each article, which we use as an approximation for Activation. We then remove the mean polarity from all possible articles from the mean of the recommendation set. This results in an output with a range [-1, 1]. According to Vrijenhoek et al. (2021) , a negative value indicates the recommender shows less activating content than available in the pool of data, while a positive value means the recommendation system generally selects more activating content than generally in the data. The use of \"polarity\" is related to that of \"sentiment\". We follow Vrijenhoek et al. (2021) and use the VADER dictionary-based approach (Hutto and Gilbert, 2014) , since the \"compound\" value of polarity used in the operationalization of the activation metric seems to be based on this method. However, we are aware this is not the only approach of polarity analysis of text, and in fact may not have the most concept and empirical validity from the social science perspective (van Atteveldt et al., 2021) , nor is considered the state of the art for sentiment analysis on user generated text in the computer science field (Zimbra et al., 2018) . We discuss this in more detail in the Discussion section. As of now, we use no lemmatization or normalization on the text data. We will also discuss implications of this in the Discussion section. Our code for implementing the metrics, preprocessing the data, and eventually testing the metrics on the data can be viewed here:",
"cite_spans": [
{
"start": 60,
"end": 85,
"text": "(Vrijenhoek et al., 2021)",
"ref_id": "BIBREF20"
},
{
"start": 580,
"end": 604,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF20"
},
{
"start": 908,
"end": 932,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF20"
},
{
"start": 977,
"end": 1002,
"text": "(Hutto and Gilbert, 2014)",
"ref_id": "BIBREF7"
},
{
"start": 1317,
"end": 1345,
"text": "(van Atteveldt et al., 2021)",
"ref_id": "BIBREF1"
},
{
"start": 1463,
"end": 1484,
"text": "(Zimbra et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.2"
},
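            {
                "text": "The following is a minimal sketch of this computation (for illustration only, not necessarily identical to the code in the notebook linked below; it assumes a pandas DataFrame with a 'commentBody' text column and a boolean 'recommended' column marking the recommendation set for one article):\n\nfrom vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer\nimport pandas as pd\n\nanalyzer = SentimentIntensityAnalyzer()\n\ndef activation(comments: pd.DataFrame) -> float:\n    # Absolute VADER \"compound\" polarity per comment, as a proxy for activation.\n    polarity = comments[\"commentBody\"].apply(\n        lambda text: abs(analyzer.polarity_scores(text)[\"compound\"]))\n    # Mean activation of the recommended comments minus that of the full pool;\n    # the result lies in [-1, 1], negative meaning less activating than the pool.\n    return polarity[comments[\"recommended\"]].mean() - polarity.mean()",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Implementation",
                "sec_num": "3.2"
            },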
{
"text": "https://github.com/myrthereuver/ Hackathon_MediaComments/blob/main/ Hackathon_comments_script.ipynb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.2"
},
{
"text": "Our results are visible in Table 1 and Table 2 below. Visible is that the editorial picks are considerably more negative, and thus are less Activated, than the recommendations based on user likes. However, both systems pick comments that are negative, and thus lower in activation than in the general pool of data. 1 -1, 1] , where a negative value denotes the recommender picks items less activating than in the general pool, while a positive value indicates the items are more activating.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 46,
"text": "Table 1 and Table 2",
"ref_id": "TABREF1"
                    }
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "5.1 \"Test-driving\" theory-driven metrics",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We implemented Vrijenhoek et al. (2021) 's activation metric, used to assess the relation of recommendations with democratic theory. We found that even the concrete metric as described in this work requires extensive NLP (pre-)processing choices that could significantly alter the outcome of evaluation. Not only selecting which sentiment tools, but also how to tokenize and lemmatize the texts could alter the polarity scores, as does text normalization for especially spelling mistakes in comments. For instance, whether or not to normalize the word \"happines\" (presumably meaning \"happiness\") could significantly alter the polarity score of texts, especially if spelling errors are frequent -as they could be in user-generated texts such as comments.",
"cite_spans": [
{
"start": 15,
"end": 39,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
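            {
                "text": "As a small, hypothetical illustration of this point (the exact scores depend on the VADER version and lexicon), one could compare the compound polarity of a comment before and after correcting the spelling:\n\nfrom vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer\n\nanalyzer = SentimentIntensityAnalyzer()\n# The misspelled token is unlikely to be in the sentiment lexicon, so the two\n# otherwise identical comments can receive clearly different compound scores.\nprint(analyzer.polarity_scores(\"This article brought me happines\")[\"compound\"])\nprint(analyzer.polarity_scores(\"This article brought me happiness\")[\"compound\"])",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discussion",
                "sec_num": "5"
            },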
{
"text": "Additionally, selecting a sentiment tool for polarity scoring is not an easy task. As noted before, recent work in social science (van Atteveldt et al., 2021) has indicated NLP sentiment tools are not as reliable and valid as one would hope, and especially dictionary-based methods do not compare to human labelling. In the computer science field, such methods are also not considered the state of the art (Zimbra et al., 2018) , performing well below more complex ensemble models of several machine learning methods.",
"cite_spans": [
{
"start": 130,
"end": 158,
"text": "(van Atteveldt et al., 2021)",
"ref_id": "BIBREF1"
},
{
"start": 406,
"end": 427,
"text": "(Zimbra et al., 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Also, we found that some of the theory-based metrics are easier to generally apply to several datasets, contexts, and research questions than others. We already pointed out that some metrics require information on individual users, such as reading history, which is often not easily available as open, shared data. Additionally, we found that implementing \"Activation\" generally makes sense to the comment recommendation context, while \"Protected Voices\" is more difficult to conceptually define, and the \"Representation\" metric requires more complex NLP analysis of viewpoints than available in standard tools or models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Very important to note is that these theory-driven metrics are by no means \"plug and play\". Using these metrics does not translate 1:1 into a score that measures the democratic valu of content. In this context, it gives an indication if and to what extent a recommendation set lives up to democratic ideals set by different models, but drawing a meaningful line on whether content becomes valuable for a given model of democracy is difficult. These metrics also do not capture more complex concepts such as intent when designing recommender systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Moreover, these metrics are based on averages: they do not show possible spread of activation across comments as well as articles. We could assume that some articles, as well as some topics, simply attract more activating comments, while others attract a more nuanced and \"deliberative\" discussion. Future research may, next to implementing the other metrics, also research whether certain topics or categories of news articles and/or comments have significantly more or less activating comments when using these recommendation approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "We researched whether different recommendation strategies in the New York Times comments dataset lead to different Activation values for the recommendations as presented in Vrijenhoek et al. (2021) , and in turn what this means for the democratic models related to these systems. We found editor selections are on average less activating than the most-liked comments. In 2018 this effect is clear, in the 2017 sample less so -even slightly opposite. This could mean several things from a media theory perspective. Perhaps, journalists implicitly select comments in accordance with deliberative ideals. Another explanation of these results is that more activating content is also more likely to be profane, which, as Muddiman and Stroud (2017) showed, makes their selection less likely. The idea behind the activation metric is that activating content in-creases engagement, maybe the fact that liked comments are more activating is due to that. Either way, connecting our results to the idea of democratic recommendation, it appears that user selection favours a more critical notion of democracy whereas editor selection favours a comparably more deliberative notion. At the same time, our results also suggest that on the whole, both recommendation styles result in a selection of comments that is slightly less activating than the overall subset. This suggests that both recommendation strategies favour less activating content, which might indicate implicit support of a deliberative model of democracy, where rational and calm debate is preferred over activating and clashing content.",
"cite_spans": [
{
"start": 173,
"end": 197,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF20"
},
{
"start": 716,
"end": 742,
"text": "Muddiman and Stroud (2017)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results implications for Democratic Debate in NYTimes Comments",
"sec_num": "5.2"
},
{
"text": "Note that for the Picks, we took the most recent Top N editorially picked comments. The results may differ with a random Top of recommended comments, or another manner of selecting the Top editorial picks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is funded through Open Competition Digitalization Humanities and Social Science grant nr 406.D1.19.073 awarded by the Netherlands Organization of Scientific Research (NWO). We would like to thank the hackathon organizers for organizing the event, and for excellently supporting all teams working on challenges. All remaining errors are our own.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "The validity of sentiment analysis: Comparing manual annotation, crowdcoding, dictionary approaches, and machine learning algorithms",
"authors": [
{
"first": "",
"middle": [],
"last": "Wouter Van Atteveldt",
"suffix": ""
},
{
"first": "Acg",
"middle": [],
"last": "Mariken",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Van Der Velden",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Boukes",
"suffix": ""
}
],
"year": 2021,
"venue": "Communication Methods and Measures",
"volume": "",
"issue": "",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wouter van Atteveldt, Mariken ACG van der Velden, and Mark Boukes. 2021. The validity of sentiment analysis: Comparing manual annotation, crowd- coding, dictionary approaches, and machine learn- ing algorithms. Communication Methods and Mea- sures, pages 1-20.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On the dangers of stochastic parrots: Can language models be too big",
"authors": [
{
"first": "M",
"middle": [],
"last": "Emily",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Bender",
"suffix": ""
},
{
"first": "Angelina",
"middle": [],
"last": "Gebru",
"suffix": ""
},
{
"first": "Shmargaret",
"middle": [],
"last": "Mcmillan-Major",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shmitchell",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency; Association for Computing Machinery",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily M Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency; As- sociation for Computing Machinery: New York, NY, USA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Emotion shapes the diffusion of moralized content in social networks",
"authors": [
{
"first": "J",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Julian",
"middle": [
"A"
],
"last": "Brady",
"suffix": ""
},
{
"first": "John",
"middle": [
"T"
],
"last": "Wills",
"suffix": ""
},
{
"first": "Joshua",
"middle": [
"A"
],
"last": "Jost",
"suffix": ""
},
{
"first": "Jay J Van",
"middle": [],
"last": "Tucker",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bavel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "114",
"issue": "28",
"pages": "7313--7318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William J Brady, Julian A Wills, John T Jost, Joshua A Tucker, and Jay J Van Bavel. 2017. Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, 114(28):7313-7318.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "On the democratic role of news recommenders",
"authors": [
{
"first": "Natali",
"middle": [],
"last": "Helberger",
"suffix": ""
}
],
"year": 2019,
"venue": "Digital Journalism",
"volume": "7",
"issue": "8",
"pages": "993--1012",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natali Helberger. 2019. On the democratic role of news recommenders. Digital Journalism, 7(8):993-1012.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Exposure diversity as a design principle for recommender systems",
"authors": [
{
"first": "Natali",
"middle": [],
"last": "Helberger",
"suffix": ""
},
{
"first": "Kari",
"middle": [],
"last": "Karppinen",
"suffix": ""
},
{
"first": "Lucia D'",
"middle": [],
"last": "Acunto",
"suffix": ""
}
],
"year": 2018,
"venue": "Information, Communication & Society",
"volume": "21",
"issue": "2",
"pages": "191--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natali Helberger, Kari Karppinen, and Lucia D'acunto. 2018. Exposure diversity as a design principle for recommender systems. Information, Communica- tion & Society, 21(2):191-207.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Exposure diversity",
"authors": [
{
"first": "Natali",
"middle": [],
"last": "Helberger",
"suffix": ""
},
{
"first": "Magdalena",
"middle": [],
"last": "Wojcieszak",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "7",
"issue": "",
"pages": "535--560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natali Helberger and Magdalena Wojcieszak. 2018. Exposure diversity. In Philip Michael Napoli, edi- tor, Mediated Communication, volume 7, chapter 28, pages 535-560. Walter de Gruyter GmbH & Co KG.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Vader: A parsimonious rule-based model for sentiment analysis of social media text",
"authors": [
{
"first": "Clayton",
"middle": [],
"last": "Hutto",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International AAAI Conference on Web and Social Media",
"volume": "8",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clayton Hutto and Eric Gilbert. 2014. Vader: A par- simonious rule-based model for sentiment analysis of social media text. In Proceedings of the Interna- tional AAAI Conference on Web and Social Media, volume 8.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Diversity, serendipity, novelty, and coverage: a survey and empirical analysis of beyond-accuracy objectives in recommender systems",
"authors": [
{
"first": "Marius",
"middle": [],
"last": "Kaminskas",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Bridge",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM Transactions on Interactive Intelligent Systems (TiiS)",
"volume": "7",
"issue": "1",
"pages": "1--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marius Kaminskas and Derek Bridge. 2016. Diversity, serendipity, novelty, and coverage: a survey and em- pirical analysis of beyond-accuracy objectives in rec- ommender systems. ACM Transactions on Interac- tive Intelligent Systems (TiiS), 7(1):1-42.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "News recommender systems-survey and roads ahead",
"authors": [
{
"first": "Mozhgan",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "Dietmar",
"middle": [],
"last": "Jannach",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jugovac",
"suffix": ""
}
],
"year": 2018,
"venue": "Information Processing & Management",
"volume": "54",
"issue": "6",
"pages": "1203--1227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mozhgan Karimi, Dietmar Jannach, and Michael Ju- govac. 2018. News recommender systems-survey and roads ahead. Information Processing & Man- agement, 54(6):1203-1227.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "New York Times Dataset",
"authors": [
{
"first": "Aashita",
"middle": [],
"last": "Kesarwani",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aashita Kesarwani. 2018. New York Times Dataset. https://www.kaggle.com/aashita/ nyt-comments, last accessed on March 1, 2021.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Beyond optimizing for clicks: Incorporating editorial values in news recommendation",
"authors": [
{
"first": "Feng",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Anca",
"middle": [],
"last": "Dumitrache",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Graus",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng Lu, Anca Dumitrache, and David Graus. 2020. Beyond optimizing for clicks: Incorporating edito- rial values in news recommendation. In Proceed- ings of the 28th ACM Conference on User Modeling, Adaptation and Personalization, pages 145-153.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "News values, cognitive biases, and partisan incivility in comment sections",
"authors": [
{
"first": "Ashley",
"middle": [],
"last": "Muddiman",
"suffix": ""
},
{
"first": "Natalie Jomini",
"middle": [],
"last": "Stroud",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of communication",
"volume": "67",
"issue": "4",
"pages": "586--609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashley Muddiman and Natalie Jomini Stroud. 2017. News values, cognitive biases, and partisan incivil- ity in comment sections. Journal of communication, 67(4):586-609.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The filter bubble: What the Internet is hiding from you",
"authors": [
{
"first": "Eli",
"middle": [],
"last": "Pariser",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eli Pariser. 2011. The filter bubble: What the Internet is hiding from you. Penguin UK.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "EMBEDDIA tools, datasets and challenges: Resources and hackathon contributions",
"authors": [
{
"first": "Senja",
"middle": [],
"last": "Pollak",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Robnik\u0161ikonja",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Purver",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Boggia",
"suffix": ""
},
{
"first": "Ravi",
"middle": [],
"last": "Shekhar",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Pranji\u0107",
"suffix": ""
},
{
"first": "Salla",
"middle": [],
"last": "Salmela",
"suffix": ""
},
{
"first": "Ivar",
"middle": [],
"last": "Krustok",
"suffix": ""
},
{
"first": "Tarmo",
"middle": [],
"last": "Paju",
"suffix": ""
},
{
"first": "Carl-Gustav",
"middle": [],
"last": "Linden",
"suffix": ""
},
{
"first": "Leo",
"middle": [],
"last": "Lepp\u00e4nen",
"suffix": ""
},
{
"first": "Elaine",
"middle": [],
"last": "Zosa",
"suffix": ""
},
{
"first": "Matej",
"middle": [],
"last": "Ul\u010dar",
"suffix": ""
},
{
"first": "Linda",
"middle": [],
"last": "Freienthal",
"suffix": ""
},
{
"first": "Silver",
"middle": [],
"last": "Traat",
"suffix": ""
},
{
"first": "Luis",
"middle": [
"Adri\u00e1n"
],
"last": "Cabrera-Diego",
"suffix": ""
},
{
"first": "Matej",
"middle": [],
"last": "Martinc",
"suffix": ""
},
{
"first": "Nada",
"middle": [],
"last": "Lavra\u010d",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bla\u017e\u0161krlj",
"suffix": ""
},
{
"first": "Andra\u017e",
"middle": [],
"last": "Mar-Tin\u017enidar\u0161i\u010d",
"suffix": ""
},
{
"first": "Boshko",
"middle": [],
"last": "Pelicon",
"suffix": ""
},
{
"first": "Vid",
"middle": [],
"last": "Koloski",
"suffix": ""
},
{
"first": "Janez",
"middle": [],
"last": "Podpe\u010dan",
"suffix": ""
},
{
"first": "Shane",
"middle": [],
"last": "Kranjc",
"suffix": ""
},
{
"first": "Emanuela",
"middle": [],
"last": "Sheehan",
"suffix": ""
},
{
"first": "Jose",
"middle": [],
"last": "Boros",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Moreno",
"suffix": ""
},
{
"first": "Hannu",
"middle": [],
"last": "Doucet",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Toivonen",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Report Generation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Senja Pollak, Marko Robnik\u0160ikonja, Matthew Purver, Michele Boggia, Ravi Shekhar, Marko Pranji\u0107, Salla Salmela, Ivar Krustok, Tarmo Paju, Carl-Gustav Linden, Leo Lepp\u00e4nen, Elaine Zosa, Matej Ul\u010dar, Linda Freienthal, Silver Traat, Luis Adri\u00e1n Cabrera- Diego, Matej Martinc, Nada Lavra\u010d, Bla\u017e\u0160krlj, Mar- tin\u017dnidar\u0161i\u010d, Andra\u017e Pelicon, Boshko Koloski, Vid Podpe\u010dan, Janez Kranjc, Shane Sheehan, Emanuela Boros, Jose Moreno, Antoine Doucet, and Hannu Toivonen. 2021. EMBEDDIA tools, datasets and challenges: Resources and hackathon contributions. In Proceedings of the EACL Hackashop on News Me- dia Content Analysis and Automated Report Gener- ation. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A survey on news recommender system-dealing with timeliness, dynamic user interest and content quality, and effects of recommendation on news readers",
"authors": [
{
"first": "Shaina",
"middle": [],
"last": "Raza",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Ding",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.04964"
]
},
"num": null,
"urls": [],
"raw_text": "Shaina Raza and Chen Ding. 2020. A survey on news recommender system-dealing with timeliness, dy- namic user interest and content quality, and effects of recommendation on news readers. arXiv preprint arXiv:2009.04964.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "No nlp task should be an island: Multidisciplinarity for diversity in news recommender systems",
"authors": [
{
"first": "Myrthe",
"middle": [],
"last": "Reuver",
"suffix": ""
},
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": ""
},
{
"first": "Suzan",
"middle": [],
"last": "Verberne",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Report Generation. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myrthe Reuver, Antske Fokkens, and Suzan Verberne. 2021. No nlp task should be an island: Multi- disciplinarity for diversity in news recommender sys- tems. In Proceedings of the EACL Hackashop on News Media Content Analysis and Automated Re- port Generation. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "It's my campaign i'll cry if i want to: How and when campaigns use emotional appeals",
"authors": [
{
"first": "N",
"middle": [],
"last": "Travis",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Ridout",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Searles",
"suffix": ""
}
],
"year": 2011,
"venue": "Political Psychology",
"volume": "32",
"issue": "3",
"pages": "439--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Travis N Ridout and Kathleen Searles. 2011. It's my campaign i'll cry if i want to: How and when cam- paigns use emotional appeals. Political Psychology, 32(3):439-458.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deliberation 2.0: Comparing the deliberative quality of online news user comments across platforms",
"authors": [
{
"first": "Ian",
"middle": [
"Rowe"
],
"last": "",
"suffix": ""
}
],
"year": 2015,
"venue": "Journal of broadcasting & electronic media",
"volume": "59",
"issue": "4",
"pages": "539--555",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Rowe. 2015. Deliberation 2.0: Comparing the deliberative quality of online news user comments across platforms. Journal of broadcasting & elec- tronic media, 59(4):539-555.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "News, politics, and negativity",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Soroka",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Mcadams",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "32",
"issue": "",
"pages": "1--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Soroka and Stephen McAdams. 2015. News, politics, and negativity. Political Communication, 32(1):1-22.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recommenders with a mission: assessing diversity in news recommendations",
"authors": [
{
"first": "Sanne",
"middle": [],
"last": "Vrijenhoek",
"suffix": ""
},
{
"first": "Mesut",
"middle": [],
"last": "Kaya",
"suffix": ""
},
{
"first": "Nadia",
"middle": [],
"last": "Metoui",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "M\u00f6ller",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Odijk",
"suffix": ""
},
{
"first": "Natali",
"middle": [],
"last": "Helberger",
"suffix": ""
}
],
"year": 2021,
"venue": "SIGIR Conference on Human Information Interaction and Retrieval (CHIIR) Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanne Vrijenhoek, Mesut Kaya, Nadia Metoui, Judith M\u00f6ller, Daan Odijk, and Natali Helberger. 2021. Recommenders with a mission: assessing diversity in news recommendations. In SIGIR Conference on Human Information Interaction and Retrieval (CHIIR) Proceedings.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The state-of-the-art in twitter sentiment analysis: A review and benchmark evaluation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Zimbra",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Abbasi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Hsinchun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM Transactions on Management Information Systems (TMIS)",
"volume": "9",
"issue": "2",
"pages": "1--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Zimbra, Ahmed Abbasi, Daniel Zeng, and Hsinchun Chen. 2018. The state-of-the-art in twitter sentiment analysis: A review and benchmark evalu- ation. ACM Transactions on Management Informa- tion Systems (TMIS), 9(2):1-29.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Results on the feb 2018 set.",
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>The left column</td></tr><tr><td>shows the editorial picks, while the right column shows</td></tr><tr><td>the recommendations based on user likes. Activation</td></tr><tr><td>scores can range from [-1, 1], where a negative value</td></tr><tr><td>denotes the recommender picks items less activating</td></tr><tr><td>than in the general pool, while a positive value indi-</td></tr><tr><td>cates the items are more activating.</td></tr></table>"
},
"TABREF2": {
"text": "Results on the feb 2017 set. The left column shows the editorial picks, while the right column shows the recommendations based on user likes. Activation scores can range from [",
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}