{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:06:03.968486Z"
},
"title": "Aggregation Driven Progression for GWAPs",
"authors": [
{
"first": "Doruk",
"middle": [],
"last": "Kicikoglu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary Univ. Of London",
"location": {
"country": "United Kingdom"
}
},
"email": "o.d.kicikoglu@qmul.ac.uk"
},
{
"first": "Richard",
"middle": [],
"last": "Bartle",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University Of Essex",
"location": {
"country": "United Kingdom"
}
},
"email": "rabartle@essex.ac.uk"
},
{
"first": "Silviu",
"middle": [],
"last": "Paun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary Univ. Of London",
"location": {
"country": "United Kingdom"
}
},
"email": "s.paun@qmul.ac.uk"
},
{
"first": "Jon",
"middle": [],
"last": "Chamberlain",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University Of Essex",
"location": {
"country": "United Kingdom"
}
},
"email": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queen Mary Univ. Of London",
"location": {
"country": "United Kingdom"
}
},
"email": "m.poesio@qmul.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "As the use of Games-With-A-Purpose (GWAPs) broadens, their annotation schemes have increased in complexity. The types of annotations required within NLP are an example of labelling that can involve varying complexity of annotations. Assigning more complex tasks to more skilled players through a progression mechanism can achieve higher accuracy in the collected data while acting as a motivating factor that rewards the more skilled players. In this paper, we present the progression technique implemented in Wormingo , an NLP GWAP that currently includes two layers of task complexity. For the experiment, we have implemented four different progression scenarios on 192 players and compared the accuracy and engagement achieved with each scenario.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "As the use of Games-With-A-Purpose (GWAPs) broadens, their annotation schemes have increased in complexity. The types of annotations required within NLP are an example of labelling that can involve varying complexity of annotations. Assigning more complex tasks to more skilled players through a progression mechanism can achieve higher accuracy in the collected data while acting as a motivating factor that rewards the more skilled players. In this paper, we present the progression technique implemented in Wormingo , an NLP GWAP that currently includes two layers of task complexity. For the experiment, we have implemented four different progression scenarios on 192 players and compared the accuracy and engagement achieved with each scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The first GWAPs focused on simple tasks varying from text deciphering to image or sound labelling (von Ahn and Dabbish, 2004; Lafourcade et al., 2015; Barrington et al., 2009) . Such GWAPs did not require their players to progress to more advanced tasks. However, modern GWAPs collecting more complex judgments, as in NLP, may require players to carry out annotations of varying complexity that may be harder to teach to entry-level players (Poesio et al., 2013) . Such GWAPs may benefit from the practice, widely adopted within the gaming industry (Koster and Wright, 2004) , of introducing a player to simpler tasks and proceeding to the more complicated ones once they have proven successful on the initial tasks. Such skill progression achieves higher motivation and engagement as the players are kept within flow (Csikszentmihalyi, 1991) , meaning they face challenges corresponding to their improving competence. GWAPs can achieve a similar affect with this approach. In addition, this type of progression increases the quality of the data produced as players are assigned with more complicated tasks, only after they have reached a sufficient understanding of the annotation tasks within the system (Madge et al., 2019) . The fact that GWAP players vary in terms of competence makes it mandatory to assess the players by comparing to golden data, and proceed only when they reach a certain level of accuracy (Ipeirotis and Gabrilovich, 2015; Madge et al., 2019; Fort et al., 2014; Chamberlain et al., 2008) . In addition to many GWAPs that utilize this method, Phrase Detectives and Zombilingo also implement progression techniques that assess the player accuracy based on the types of tasks that they are performing. These GWAPs include different types of tasks which vary in complexity. Players begin with simpler tasks, then move on to more complicated annotation tasks once they reached a certain level of success during the assessment period. In addition to aligning the player progression along task complexity, another axis can be the difficulty of the labels; that can be defined as the difficulty of a label compared to the other labels within the same task i.e. some spans might be more ambiguous in Phrase Detectives, hence may be more difficult to resolve; creating more disagreement among the players. In a system where labels are identified and ranked by their difficulty, players can be assigned with more difficult tasks once they prove successful on the easier ones. Tile Attack and Quizz implement this technique, where players are assigned with labels matching their competence level (Ipeirotis and Gabrilovich, 2015; Madge et al., 2019) . Wormingo implements both of these approaches of progression. As players progress, they can advance to both more difficult documents (difficulty progression) and more complicated tasks (task progression). For difficulty progression, the documents in Wormingo are manually labelled into 5 levels of difficulty ranging from letter A to E. The documents in level A are considered as the easiest in terms of comprehension, while those in level E are the most difficult, that may include more sophisticated vocabulary or more complicated sentence structure. Wormingo uses a level-up mechanism which lets players reach higher levels (currently up to level 16) after collecting score points awarded for annotations. Players can play more difficult documents, only after reaching higher player levels ( Figure 1 ). 
Level-up mechanisms are widely used within games (Zichermann and Cunningham, 2011) . Although they are proven effective for rewarding commitment to the game, they do not necessarily indicate that the player is more competent. A player who performs poorly in terms of accuracy can simply hoard points by playing longer and still reach the next player level. Therefore, when assessing the players' competence for more advanced tasks, their annotation accuracy can be a better indicator rather than the points they managed to hoard. Comparing the players' annotations to the gold or aggregated data yields the player accuracy. However, cases in Phrase Detectives show that higher numbers of players can agree on a wrong annotation while fewer number of skilled players might contrarily have given the correct answer for a label (Paun et al., 2018) . Relying solely on the number of annotators can be misleading in such cases. Therefore, Mention Pair Annotations model (MPA) builds a confidence-based model. MPA generates confidence scores for annotations, and players, via Bayesian models with the players' annotation accuracy taken into consideration. Players who have higher accuracy gain a higher confidence score from a range between 0 and 1. During data aggregation, the annotations of players with higher confidence scores are evaluated with higher weight. MPA also generates separate player confidence scores for each task, evaluating players' performance on individual tasks. This model overcomes the aforementioned problem and produces confidence ratings both for the aggregated data and the players. Wormingo uses the player confidence outcome when assessing their competence to progress to more complicated tasks.",
"cite_spans": [
{
"start": 98,
"end": 125,
"text": "(von Ahn and Dabbish, 2004;",
"ref_id": null
},
{
"start": 126,
"end": 150,
"text": "Lafourcade et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 151,
"end": 175,
"text": "Barrington et al., 2009)",
"ref_id": "BIBREF0"
},
{
"start": 441,
"end": 462,
"text": "(Poesio et al., 2013)",
"ref_id": "BIBREF12"
},
{
"start": 549,
"end": 574,
"text": "(Koster and Wright, 2004)",
"ref_id": "BIBREF8"
},
{
"start": 818,
"end": 842,
"text": "(Csikszentmihalyi, 1991)",
"ref_id": "BIBREF4"
},
{
"start": 1206,
"end": 1226,
"text": "(Madge et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 1415,
"end": 1448,
"text": "(Ipeirotis and Gabrilovich, 2015;",
"ref_id": "BIBREF6"
},
{
"start": 1449,
"end": 1468,
"text": "Madge et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 1469,
"end": 1487,
"text": "Fort et al., 2014;",
"ref_id": "BIBREF5"
},
{
"start": 1488,
"end": 1513,
"text": "Chamberlain et al., 2008)",
"ref_id": "BIBREF1"
},
{
"start": 2610,
"end": 2643,
"text": "(Ipeirotis and Gabrilovich, 2015;",
"ref_id": "BIBREF6"
},
{
"start": 2644,
"end": 2663,
"text": "Madge et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 3522,
"end": 3555,
"text": "(Zichermann and Cunningham, 2011)",
"ref_id": "BIBREF14"
},
{
"start": 4298,
"end": 4317,
"text": "(Paun et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 3460,
"end": 3469,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
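The MPA model is described above only informally, so a minimal sketch may help clarify the idea of confidence-weighted aggregation. This is not the Bayesian MPA model of Paun et al. (2018); it only illustrates how per-player confidence scores in [0, 1] can outweigh a raw head count during label aggregation. All names (Vote, aggregate_label) are hypothetical.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Vote:
    player_id: str
    label: str        # e.g. "discourse-new", "non-referring", or an antecedent id
    confidence: float # per-player, per-task confidence in [0, 1]

def aggregate_label(votes: list[Vote]) -> tuple[str, float]:
    """Return the label with the largest total confidence mass.

    Unlike a plain majority vote, a few high-confidence players can
    outweigh many low-confidence ones, which is the failure mode of
    raw vote counting that the paper describes for Phrase Detectives.
    """
    mass: dict[str, float] = defaultdict(float)
    for vote in votes:
        mass[vote.label] += vote.confidence
    total = sum(mass.values()) or 1.0
    winner = max(mass, key=mass.get)   # label with the highest weighted support
    return winner, mass[winner] / total
```

For example, two votes for "non-referring" at confidence 0.9 outweigh three votes for "discourse-new" at confidence 0.5 (1.8 vs. 1.5), even though the latter has more annotators.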
{
"text": "Wormingo currently includes two types of annotation tasks, discourse-new and non-referring. The earlier versions of Wormingo already included the discourse-new task ( Figure 2 ), which asks players if a label in the task has been mentioned before (Kicikoglu et al., 2019 ). In the current version of Wormingo , the non-referring task has been implemented as the second and more advanced task. In the discourse-new task, the players annotate coreference chains. The game asks the players to annotate a label, such as the label \"him\" illustrated with purple colour in Figure 2 . The player clicks \"No\" if this label was not mentioned in the text before, or \"Yes\" if it was mentioned. After clicking \"Yes\", clusters of phrases that we call \"markables\" are highlighted with colour yellow (Figure 3 ). The player chooses which of the markables that the label refers to in this interface.",
"cite_spans": [
{
"start": 248,
"end": 271,
"text": "(Kicikoglu et al., 2019",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 167,
"end": 176,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 567,
"end": 576,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 786,
"end": 795,
"text": "(Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation Tasks",
"sec_num": "2.1."
},
{
"text": "In the non-referring task, labels such as \"it\" in the sentences \"It is raining\", \"It is 3 o'clock\" do not refer to a real object. Such occurrences should be labelled as nonreferring (Chamberlain et al., 2009) . However, this adds an extra layer to the discourse-new task implemented in the earlier versions of Wormingo , because in addition to the possibility of being a non-referring label, an occurrence of the word \"it\" can be a part of a coreference chain as well; such as in \"I had a pizza, it was good!\". Therefore, non-referring is considered as a more complicated task laid on top of the discourse-new task, as it includes the complexity of the discourse-new task with the non-referring option added on top. On the interface, non-referring task uses the same interface layout as the discourse-new task, but an additional \"NR\" button is added. Players who click this button annotate the given label as non-referring ( Figure 4 ). Non-referring cases occur on expletive words \"it\" and \"there\", so only the labels with these string values were asked in the non-referring tasks. ",
"cite_spans": [
{
"start": 182,
"end": 208,
"text": "(Chamberlain et al., 2009)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 925,
"end": 934,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Annotation Tasks",
"sec_num": "2.1."
},
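To make the relationship between the two tasks concrete, here is one possible way to represent a single player judgment. This schema is an assumption for illustration, not the Wormingo data model; the key point it shows is that the NR option is simply a third answer type layered onto the existing discourse-new/coreference choice, and that it is only offered for the expletive forms "it" and "there".

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Answer(Enum):
    DISCOURSE_NEW = "discourse-new"   # "No": not mentioned before
    COREFERENT = "coreferent"         # "Yes": linked to an earlier markable
    NON_REFERRING = "non-referring"   # "NR": expletive, refers to nothing

EXPLETIVE_FORMS = {"it", "there"}     # only these surface forms get the NR button

@dataclass
class Judgment:
    player_id: str
    markable_id: str
    answer: Answer
    antecedent_id: Optional[str] = None  # set only when answer is COREFERENT

    def __post_init__(self) -> None:
        if self.answer is Answer.COREFERENT and self.antecedent_id is None:
            raise ValueError("a coreferent judgment must name its antecedent")
```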
{
"text": "Players are taught about the discourse-new task on their first annotation. This is done through freezing the interface and showing the player a message that explains the discourse new task. First an example whose correct an-swer is discourse-new (has not been mentioned before) is shown and the player can only continue by clicking the \"No\" button, which labels the annotation as discourse-new ( Figure 5 ). On the following annotation, players are similarly shown a label that has been mentioned in the text before. Players can continue only by linking the label to one of its antecedents and clicking the \"Confirm\" button on this interface ( Figure 6 ). The case-selection algorithm of Wormingo chooses the next documents and labels to represent to the players from a selection of available items, where incomplete labels that received at least one annotation are prioritized (Chen et al., 2010) . Labels that have received less than 7 annotations are considered incomplete. Once a player has been assessed to qualify to the nonreferring task, the case-selection algorithm starts including expletive labels as well. Expletive labels gain higher priority scores; however the final case selection happens with a random selection where higher priority items gain higher probability -meaning an item with less probability still has a chance to appear as the next task depending on the generated random value. The player may also qualify to the non-referring task while playing a document that contains no expletive expressions at all. Thus, the player may not immediately encounter a non-referring task after qualifying to the non-referring tasks. Once they do encounter a non-referring task for a first time, the tutorial interface appears ( Figure 7 ) and the players are explained about the non-referring task and introduced with the \"NR\" button that allows the players to annotate labels as non-referring. ",
"cite_spans": [
{
"start": 878,
"end": 897,
"text": "(Chen et al., 2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 396,
"end": 404,
"text": "Figure 5",
"ref_id": "FIGREF3"
},
{
"start": 644,
"end": 652,
"text": "Figure 6",
"ref_id": null
},
{
"start": 1741,
"end": 1749,
"text": "Figure 7",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Tutorials",
"sec_num": "2.2."
},
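A minimal sketch of the case-selection behaviour described above, assuming hypothetical priority weights: the paper states that incomplete labels and, once the player qualifies, expletive labels receive higher priority, but it does not publish the actual values, so the numbers below are placeholders.

```python
import random
from dataclasses import dataclass

ANNOTATIONS_PER_LABEL = 7        # labels with fewer annotations are "incomplete"

@dataclass
class Candidate:
    label_id: str
    annotation_count: int
    is_expletive: bool           # surface form "it" or "there"

def priority(c: Candidate, qualified_for_nr: bool) -> float:
    # Illustrative weights only; the real priority scores are not given in the paper.
    score = 1.0
    if 1 <= c.annotation_count < ANNOTATIONS_PER_LABEL:
        score += 2.0             # already-started, still-incomplete labels come first
    if qualified_for_nr and c.is_expletive:
        score += 1.0             # expletives enter the pool with a boost
    return score

def select_next_label(candidates: list[Candidate], qualified_for_nr: bool) -> Candidate:
    """Weighted random draw: higher priority means higher probability,
    but any candidate can still be selected, as described in the paper."""
    weights = [priority(c, qualified_for_nr) for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```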
{
"text": "In the experiment, we divided the players into 4 groups. Each group needed to accomplish a different scenario to advance to the non-referring tasks. Group A needed to earn 350 score points, which corresponds to reaching level 3 and an average of 16.77 discourse-new annotations (players gain 25 points for each correct discourse-new annotation and 50 points for each correct non-referring annotation). The accuracy of this group was not considered when evaluating; hoarding enough points was sufficient for Group A to qualify to the non-referring task. Groups B, C and D needed to pass the 350 point barrier like Group A. On top of this, they needed to achieve certain MPA confidence scores for their discourse-new annotations. Group B needed to reach 0.8 MPA confidence score in order to progress. Group C needed to reach 0.85 confidence score and Group D needed to reach 0.9. Comparing Group A to the other groups allowed observing the difference between assessing players based solely on their score, versus assessing players based on their accuracy. Comparing Group B, C and D allowed observing how the value of the qualification threshold affects the data produced. Figure 8 displays the average accuracy of players, varying by the number of annotations they have done. The red line is their weighted accuracy; calculated by comparing players' accuracy on each document to the average accuracy of all players on the respective document. The average weighted accuracy can vary on the first few annotations, but after players' 10th annotations, it reaches a plateau around 84% accuracy. Therefore, we took 10 annotations as the threshold -the number of discourse-new annotations a player must complete before being progressing to the non-referring tasks. Players who did annotations fewer than this threshold were not assigned to any of the observation group. The players who reached 350 points and did at least 10 annotations were assigned to an observation group.",
"cite_spans": [],
"ref_spans": [
{
"start": 1171,
"end": 1179,
"text": "Figure 8",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "2.3."
},
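The four qualification scenarios can be restated compactly as a sketch. The point values (25 per correct discourse-new annotation, 50 per correct non-referring annotation), the 350-point barrier, the 10-annotation minimum and the 0.8/0.85/0.9 confidence thresholds are taken from the paper; the function and parameter names are hypothetical.

```python
from typing import Optional

POINT_BARRIER = 350          # reached at roughly level 3
MIN_DN_ANNOTATIONS = 10      # where average weighted accuracy plateaus (~84%)
CONFIDENCE_THRESHOLD: dict[str, Optional[float]] = {
    "A": None,               # score only, accuracy not considered
    "B": 0.80,
    "C": 0.85,
    "D": 0.90,
}

def qualifies_for_nr(group: str, points: int, dn_annotations: int,
                     dn_mpa_confidence: float) -> bool:
    """Return True if the player may progress to the non-referring task."""
    if points < POINT_BARRIER or dn_annotations < MIN_DN_ANNOTATIONS:
        return False
    threshold = CONFIDENCE_THRESHOLD[group]
    return threshold is None or dn_mpa_confidence >= threshold
```

For instance, a Group C player with 400 points, 12 discourse-new annotations and a DN MPA confidence of 0.87 would qualify, while the same player in Group D would not.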
{
"text": "We analyze the data produced between 07 Feb 2020 and 17 Mar 2020. During this period, 192 Wormingo players did at least 1 annotation. The players came from the subreddits that we have posted on reddit.com and university e-mail groups with interest towards Computer Science and games. Out of the 192 players, 98 completed the qualification requirements and were therefore assigned to a observation group. Figure 9 shows the number of players in each group. Groups B Pass, C Pass and D Pass are the groups of players who were originally in groups B, C and D respectively and have accomplished progression to the non-referring tasks. Similarly, groups B Fail, C Fail and D Fail contain the players who were in groups B, C and D respectively but failed to advance to the next task. Figure 10 shows each group's ratio of players who passed or failed progression to the non-referring task. The ratio of players increase as expected from Group B towards D; as the threshold for progression also increases towards this direction. Figures 11 and 12 display annotation counts per group and average annotations done by players within each group. Figure 12 includes players who have qualified to the NR task but have not done any non-referring annotations (since players may not immediately come across NR tasks after they qualify), hence the average annotation counts appear low. Figure 13 provides more meaningful average scores, as it displays values for players who have done at least 2 annotations. Groups A and B Pass contribute significantly higher number of annotations (DN and NR) in both total and average per player. Figure 14 shows the groups' average accuracy and MPA confidence scores, wherein no significant difference in terms of NR accuracy is observed. However a significant difference is observed in D Pass group's NR MPA confi- Figure 14 : Average non-referring accuracy and MPA confidence scores per group dence value (p=0.01). Although it might seem like a good strategy to set the qualification threshold to D Pass group's value, 0.9, this would potentially lead to generation of too small data, as D Pass group has only generated 17 NR annotations. B-Pass group however generated much more data (157 annotation) with an average of 0.60 confidence. Figure 15 , 16 and 17 groups all players by their DN MPA confidence scores, instead of their observation groups. The bands \"conf. \u2265 80\", \"conf. \u2265 85\" and \"conf. \u2265 90\" are players whose DN confidence scores were higher than 0.8, 0.85 and 0.9 respectively and they are not exclusive of each other. We observe that a majority of players score higher than 0.85 DN MPA confidence in Figure 15 . 43% of players score higher than 0.9 while 71% score higher than 0.85. We do not observe significant difference in terms of nonreferring task competence between bands \"conf. \u2265 80\" and \"conf. \u2265 85\" bands ( Figure 16) . A slight increase is observed in the \"conf. \u2265 90\" band, however we do not have yet sufficient evidence to conclude that the threshold should be set to 0.9. Players in \"conf. \u2265 90\" band do produce more NR annotations per player (Figure 16 ), however setting the threshold at this level would rule out 57% of players who perform sufficiently well in terms of accuracy at the lower levels (Paun et al., 2018) . We hope that future studies with more players, more data, and more levels of complexity can could provide more definitive results.",
"cite_spans": [
{
"start": 3254,
"end": 3273,
"text": "(Paun et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 404,
"end": 412,
"text": "Figure 9",
"ref_id": "FIGREF6"
},
{
"start": 778,
"end": 787,
"text": "Figure 10",
"ref_id": "FIGREF0"
},
{
"start": 1022,
"end": 1039,
"text": "Figures 11 and 12",
"ref_id": "FIGREF0"
},
{
"start": 1135,
"end": 1144,
"text": "Figure 12",
"ref_id": "FIGREF0"
},
{
"start": 1369,
"end": 1378,
"text": "Figure 13",
"ref_id": "FIGREF0"
},
{
"start": 1616,
"end": 1625,
"text": "Figure 14",
"ref_id": "FIGREF0"
},
{
"start": 1836,
"end": 1845,
"text": "Figure 14",
"ref_id": "FIGREF0"
},
{
"start": 2260,
"end": 2269,
"text": "Figure 15",
"ref_id": "FIGREF0"
},
{
"start": 2638,
"end": 2647,
"text": "Figure 15",
"ref_id": "FIGREF0"
},
{
"start": 2855,
"end": 2865,
"text": "Figure 16)",
"ref_id": "FIGREF0"
},
{
"start": 3095,
"end": 3105,
"text": "(Figure 16",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3."
},
{
"text": "In this paper, we have tested 4 different scenarios of skills progression in Wormingo. The fact that the players have voluntarily come to the game rather than for a paid reward, assures more relevance of this data to the general GWAP audience. However, the few number of participants that arrived within the limited time hinders the accuracy of our measurements, leaving room for future research on the area, possibly with more advanced tasks added. Players who score high on discourse new tasks also achieve high accuracy on non-referring tasks. This fact is encouraging, as it supports the claim that allowing only competent players to do more complicated tasks produces cleaner data. However, this comes with a cost. Setting a threshold too high will hinder the players who have the potential to score adequately on the more complicated tasks. Setting it too low pollutes the produced data. The results show that players can perform higher accuracy on more advanced tasks, if they have were sufficiently trained on the preceding tasks. An optimal threshold that will neither rule out skilled annotators nor pollute the data can be calculated based upon the players' performance on the initial tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4."
}
],
"back_matter": [
{
"text": "This research was supported in part by the DALI project, ERC Grant 695662, in part by the EPSRC CDT in Intelligent Games and Game Intelligence (IGGI), EP/L015846/1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "5."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "User-centered design of a social game to tag music",
"authors": [
{
"first": "L",
"middle": [],
"last": "Barrington",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "O'malley",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Turnbull",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Lanckriet",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the ACM SIGKDD Workshop on Human Computation, HCOMP '09",
"volume": "",
"issue": "",
"pages": "7--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barrington, L., O'Malley, D., Turnbull, D., and Lanckriet, G. (2009). User-centered design of a social game to tag music. In Proceedings of the ACM SIGKDD Workshop on Human Computation, HCOMP '09, pages 7-10, New York, NY, USA. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Phrase detectives: A web-based collaborative annotation game",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chamberlain",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Kruschwitz",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chamberlain, J., Poesio, M., and Kruschwitz, U. (2008). Phrase detectives: A web-based collaborative annotation game. 01.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Constructing an anaphorically annotated corpus with non-experts: Assessing the quality of collaborative annotations",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chamberlain",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Kruschwitz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources, People's Web '09",
"volume": "",
"issue": "",
"pages": "57--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chamberlain, J., Kruschwitz, U., and Poesio, M. (2009). Constructing an anaphorically annotated corpus with non-experts: Assessing the quality of collaborative an- notations. In Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources, People's Web '09, page 57-62, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The design of puzzle selection strategies for gwap systems",
"authors": [
{
"first": "L.-J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "B.-C",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "K.-T",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2010,
"venue": "Concurrency and Computation: Practice and Experience",
"volume": "22",
"issue": "7",
"pages": "890--908",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, L.-J., Wang, B.-C., and Chen, K.-T. (2010). The design of puzzle selection strategies for gwap systems. Concurrency and Computation: Practice and Experi- ence, 22(7):890-908.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Flow: The Psychology of Optimal Experience",
"authors": [
{
"first": "M",
"middle": [],
"last": "Csikszentmihalyi",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Csikszentmihalyi, M. (1991). Flow: The Psychology of Optimal Experience. Harper Perennial, New York, NY, March.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Creating zombilingo, a game with a purpose for dependency syntax annotation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Fort",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Guillaume",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Chastant",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the First International Workshop on Gamification for Information Retrieval, GamifIR '14",
"volume": "",
"issue": "",
"pages": "2--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fort, K., Guillaume, B., and Chastant, H. (2014). Creat- ing zombilingo, a game with a purpose for dependency syntax annotation. In Proceedings of the First Interna- tional Workshop on Gamification for Information Re- trieval, GamifIR '14, page 2-6, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Quizz: Targeted crowdsourcing with a billion (potential) users. CoRR",
"authors": [
{
"first": "P",
"middle": [
"G"
],
"last": "Ipeirotis",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ipeirotis, P. G. and Gabrilovich, E. (2015). Quizz: Tar- geted crowdsourcing with a billion (potential) users. CoRR, abs/1506.01062.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Wormingo: a 'true gamification' approach to anaphoric annotation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kicikoglu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bartle",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chamberlain",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2019,
"venue": "FDG '19",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kicikoglu, D., Bartle, R., Chamberlain, J., and Poesio, M. (2019). Wormingo: a 'true gamification' approach to anaphoric annotation. In FDG '19.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Theory of Fun for Game Design",
"authors": [
{
"first": "R",
"middle": [],
"last": "Koster",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Wright",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koster, R. and Wright, W. (2004). A Theory of Fun for Game Design. Paraglyph Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Games with a Purpose (GWAPS) (Focus Series in Cognitive Science and Knowledge Management",
"authors": [
{
"first": "M",
"middle": [],
"last": "Lafourcade",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Joubert",
"suffix": ""
},
{
"first": "N",
"middle": [
"L"
],
"last": "Brun",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafourcade, M., Joubert, A., and Brun, N. L. (2015). Games with a Purpose (GWAPS) (Focus Series in Cogni- tive Science and Knowledge Management). Wiley-ISTE.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Progression in a language annotation game with a purpose",
"authors": [
{
"first": "C",
"middle": [],
"last": "Madge",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Kruschwitz",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Paun",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2019,
"venue": "FDG '19",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Madge, C., Yu, J., Kruschwitz, U., Paun, S., and Poesio, M. (2019). Progression in a language annotation game with a purpose. In FDG '19.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A probabilistic annotation model for crowdsourcing coreference",
"authors": [
{
"first": "S",
"middle": [],
"last": "Paun",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chamberlain",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Kruschwitz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1926--1937",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paun, S., Chamberlain, J., Kruschwitz, U., Yu, J., and Poesio, M. (2018). A probabilistic annotation model for crowdsourcing coreference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1926-1937, Brussels, Belgium, October-November. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Phrase detectives: Utilizing collective intelligence for internet-scale language resource creation",
"authors": [
{
"first": "M",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Chamberlain",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Kruschwitz",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ducceschi",
"suffix": ""
}
],
"year": 2013,
"venue": "ACM Trans. Interact. Intell. Syst",
"volume": "3",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Poesio, M., Chamberlain, J., Kruschwitz, U., Robaldo, L., and Ducceschi, L. (2013). Phrase detectives: Utiliz- ing collective intelligence for internet-scale language re- source creation. ACM Trans. Interact. Intell. Syst., 3(1), April.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Labeling images with a computer game",
"authors": [
{
"first": "L",
"middle": [],
"last": "Ahn",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Dabbish",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '04",
"volume": "",
"issue": "",
"pages": "319--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahn, L. and Dabbish, L. (2004). Labeling images with a computer game. In Proceedings of the SIGCHI Con- ference on Human Factors in Computing Systems, CHI '04, page 319-326, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Gamification by Design: Implementing Game Mechanics in Web and Mobile Apps",
"authors": [
{
"first": "G",
"middle": [],
"last": "Zichermann",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cunningham",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichermann, G. and Cunningham, C. (2011). Gamification by Design: Implementing Game Mechanics in Web and Mobile Apps. O'Reilly Media, Inc., 1st edition.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Player attempts to access a document that is too difficult for their level",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Discourse New Annotation Interface Figure 3: Discourse New Interface -Marking coreference",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "Non-referring Annotation Interface",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "Tutorial for a discourse new labelFigure 6: Tutorial for marking coreference",
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"num": null,
"text": "Tutorial for non-referring tasks",
"uris": null,
"type_str": "figure"
},
"FIGREF5": {
"num": null,
"text": "Average discourse new accuracy of players by number of annotations Prior to the experiment, players were evaluated based on their discourse-new annotation accuracy over time. The yellow line in",
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"num": null,
"text": "Number of players per group",
"uris": null,
"type_str": "figure"
},
"FIGREF7": {
"num": null,
"text": "Pass/Fail percentages per group",
"uris": null,
"type_str": "figure"
},
"FIGREF8": {
"num": null,
"text": "Total annotation counts per groupFigure 12: Average annotation counts per player Figure 13: Average number of NR annotations and number of players who have done at least 2 NR annotations",
"uris": null,
"type_str": "figure"
},
"FIGREF9": {
"num": null,
"text": "Number of players within each band of NR MPA confidence scores",
"uris": null,
"type_str": "figure"
},
"FIGREF10": {
"num": null,
"text": "Non-referring accuracy and MPA confidence scores for each band of NR MPA confidence",
"uris": null,
"type_str": "figure"
},
"FIGREF11": {
"num": null,
"text": "Average Non-referring annotation counts for each band of NR MPA confidence",
"uris": null,
"type_str": "figure"
}
}
}
}