{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:12:21.988277Z" }, "title": "Multilingual Event Linking to Wikidata", "authors": [ { "first": "Adithya", "middle": [], "last": "Pratapa", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "vpratapa@andrew.cmu.edu" }, { "first": "Rishubh", "middle": [], "last": "Gupta", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "rishubhg@andrew.cmu.edu" }, { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "", "affiliation": { "laboratory": "", "institution": "Language Technologies Institute Carnegie Mellon University", "location": {} }, "email": "teruko@andrew.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a task of multilingual linking of events to a knowledge base. We automatically compile a large-scale dataset for this task, comprising of 1.8M mentions across 44 languages referring to over 10.9K events from Wikidata. We propose two variants of the event linking task: 1) multilingual, where event descriptions are from the same language as the mention, and 2) crosslingual, where all event descriptions are in English. On the two proposed tasks, we compare multiple event linking systems including BM25+ (Lv and Zhai, 2011a) and multilingual adaptations of the biencoder and crossencoder architectures from BLINK (Wu et al., 2020). In our experiments on the two task variants, we find both biencoder and crossencoder models significantly outperform the BM25+ baseline. Our results also indicate that the crosslingual task is in general more challenging than the multilingual task. To test the out-of-domain generalization of the proposed linking systems, we additionally create a Wikinews-based evaluation set. We present qualitative analysis highlighting various aspects captured by the proposed dataset, including the need for temporal reasoning over context and tackling diverse event descriptions across languages. 1", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "We present a task of multilingual linking of events to a knowledge base. We automatically compile a large-scale dataset for this task, comprising of 1.8M mentions across 44 languages referring to over 10.9K events from Wikidata. We propose two variants of the event linking task: 1) multilingual, where event descriptions are from the same language as the mention, and 2) crosslingual, where all event descriptions are in English. On the two proposed tasks, we compare multiple event linking systems including BM25+ (Lv and Zhai, 2011a) and multilingual adaptations of the biencoder and crossencoder architectures from BLINK (Wu et al., 2020). In our experiments on the two task variants, we find both biencoder and crossencoder models significantly outperform the BM25+ baseline. Our results also indicate that the crosslingual task is in general more challenging than the multilingual task. To test the out-of-domain generalization of the proposed linking systems, we additionally create a Wikinews-based evaluation set. We present qualitative analysis highlighting various aspects captured by the proposed dataset, including the need for temporal reasoning over context and tackling diverse event descriptions across languages. 
1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Language grounding refers to linking concepts (e.g., events/entities) to a context (e.g., a knowledge base) (Chandu et al., 2021) . Knowledge base (KB) grounding is a key component of information extraction stack and is well-studied for linking entity references to KBs like Wikipedia (Ji and Grishman, 2011) . In this work, we present a new multilingual task that involves linking event references to Wikidata KB. 2 Event linking differs from entity's as it involves taking into account the event participants as well as 1 https://github.com/adithya7/xlel-wd 2 www.wikidata.org its temporal and spatial attributes. Nothman et al. (2012) defines event linking as connecting event references from news articles to a news archive consisting of first reports of the events. Similar to entities, event linking is typically restricted to prominent or report-worthy events. In this work, we use a subset of Wikidata as our event KB and link mentions from Wikipedia/Wikinews articles. 3 Figure 1 illustrates our event linking methodology.", "cite_spans": [ { "start": 108, "end": 129, "text": "(Chandu et al., 2021)", "ref_id": null }, { "start": 285, "end": 308, "text": "(Ji and Grishman, 2011)", "ref_id": null }, { "start": 415, "end": 416, "text": "2", "ref_id": null }, { "start": 616, "end": 637, "text": "Nothman et al. (2012)", "ref_id": "BIBREF12" } ], "ref_spans": [ { "start": 980, "end": 988, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Event linking is closely related to the more commonly studied task of cross-document event coreference (CDEC). The goal in CDEC is to understand the identity relationship between event mentions. This identity is often complicated by subevent and membership relations among events (Pratapa et al., 2021) . Nothman et al. (2012) proposed event linking as an alternative to coreference that helps ground report-worthy events to a KB. They showed linking helps avoid the traditional bottlenecks seen with the event coreference task. We postulate linking to be a complementary task to coreference, where the first mention of an event in a document is typically linked or grounded to the KB and its relationship with the rest of the mentions from the document is captured via coreference. Additionally, due to computational constraints, coreference resolution is often restricted to a small batch of documents. Grounding, however, can be performed efficiently using dense retrieval methods (Wu et al., 2020) and is scalable to any large multi-document corpora.", "cite_spans": [ { "start": 280, "end": 302, "text": "(Pratapa et al., 2021)", "ref_id": "BIBREF13" }, { "start": 305, "end": 326, "text": "Nothman et al. (2012)", "ref_id": "BIBREF12" }, { "start": 984, "end": 1001, "text": "(Wu et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Grounding event references to a KB has many downstream applications. First, event identity encompasses multiple aspects such as spatio-temporal context and participants. These aspects typically spread across many documents, and KB grounding helps construct a shared global account for each event. Second, grounding is a complementary task to coreference. In contrast to coreference, Figure 1 : An illustration of multilingual event linking with Wikidata as our interlingua. 
Mentions from French, English and German Wikipedia (column 1) are linked to the same event from Wikidata (column 3). The title and descriptions for the event Q830917 are compiled from the corresponding language Wikipedias (column 2). The solid blue arrows ( ) presents our multilingual task, to link lgwiki mention to event using lgwiki description. The dashed red arrows ( ) showcases the crosslingual task, to link lgwiki mention to event using enwiki description.", "cite_spans": [], "ref_spans": [ { "start": 383, "end": 391, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "event grounding formulated as the nearest neighbor search leads to efficient scaling. For the event linking task, we present a new multilingual dataset that grounds mentions from multilingual Wikipedia/Wikinews articles to the corresponding event in Wikidata. Figure 1 presents an example from our dataset that links mentions from three languages to the same Wikidata item. To construct this dataset, we make use of the hyperlinks in Wikipedia/Wikinews articles. These links connect anchor texts (like '2010 European Championships' or \"Championnats d'Europe\") in context to the corresponding event Wikipedia page ('2010 European Aquatics Championships' or \"Championnats d'Europe de natation 2010\"). We further connect the event Wikipedia page to its Wikidata item ('Q830917'), facilitating multilingual grounding of mentions to KB events. We use the title and first paragraph from the language Wikipedia pages as our event descriptions (column 2 in Figure 1 ). Such hyperlinks have previously been explored for named entity disambiguation (Eshel et al., 2017) , entity linking (Logan et al., 2019) and crossdocument coreference of events (Eirew et al., 2021) and entities (Singh et al., 2012) . Our work is closely related to the English CDEC work of Eirew et al. 2021, but we view the task as linking instead of coreference. This is primarily due to the fact that most hyperlinked event mentions are prominent and typically cover a broad range of subevents, conflicting directly with the notion of coreference. Additionally, our dataset is multilingual, covering 44 languages, with Wikidata serving as our interlingua. Botha et al. (2020) is a related work from entity linking literature that covers entity references from multilingual Wikinews articles to Wikidata.", "cite_spans": [ { "start": 1039, "end": 1059, "text": "(Eshel et al., 2017)", "ref_id": null }, { "start": 1138, "end": 1158, "text": "(Eirew et al., 2021)", "ref_id": null }, { "start": 1172, "end": 1192, "text": "(Singh et al., 2012)", "ref_id": "BIBREF17" }, { "start": 1620, "end": 1639, "text": "Botha et al. (2020)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 260, "end": 268, "text": "Figure 1", "ref_id": null }, { "start": 949, "end": 957, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use the proposed dataset to develop multilingual event linking systems. We present two variants to the linking task, multilingual and crosslingual. In the multilingual task, mentions from individual language Wikipedia are linked to the events from Wikidata with descriptions taken from the same language (see solid blue arrows ( ) in Figure 1 ). The crosslingual task requires systems to use English event description irrespective of the mention language (see dashed red arrows ( ) in Figure 1 ). In both tasks, the end goal is to identify the Wikidata ID (e.g. 
Q830917). Following prior work on entity linking (Logeswaran et al., 2019), we adopt a zero-shot approach in all of our experiments. We present results using a retrieve+rank approach based on Wu et al. (2020) that utilizes BERT-based biencoder and crossencoder models for our multilingual event linking task. We experiment with two multilingual encoders, mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020), and we find that both the biencoder and crossencoder significantly outperform a tf-idf-based baseline, BM25+ (Lv and Zhai, 2011a). Our results indicate the crosslingual task is more challenging than the multilingual task, possibly due to differences in typology between source and target languages. Our key contributions are,", "cite_spans": [ { "start": 756, "end": 772, "text": "Wu et al. (2020)", "ref_id": "BIBREF21" }, { "start": 919, "end": 940, "text": "(Devlin et al., 2019)", "ref_id": null }, { "start": 945, "end": 979, "text": "XLM-RoBERTa (Conneau et al., 2020)", "ref_id": null } ], "ref_spans": [ { "start": 337, "end": 345, "text": "Figure 1", "ref_id": null }, { "start": 488, "end": 496, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a new multilingual NLP task that involves linking multilingual text mentions to a knowledge base of events. \u2022 We release a large-scale dataset for the zero-shot multilingual event linking task by compiling mentions from Wikipedia and their grounding to Wikidata. Our dataset captures 1.8M mentions across 44 languages referring to over 10K events. To test out-of-domain generalization, we additionally create a small Wikinews-based evaluation set. \u2022 We present two evaluation setups, multilingual and crosslingual event linking. We show competitive results across languages using a retrieve-and-rank methodology.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our focus task of multilingual event linking shares similarities with entity/event linking, entity/event coreference and other multilingual NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our work utilizes hyperlinks between Wikipedia pages to identify event references. This idea was previously explored in multiple entity-related works, both for dataset creation (Mihalcea and Csomai, 2007; Botha et al., 2020) and data augmentation during training (Bunescu and Pa\u015fca, 2006; Nothman et al., 2008) . Another related line of work utilized hyperlinks from general web pages to Wikipedia articles for the tasks of cross-document entity coreference (Singh et al., 2012) and named entity disambiguation (Eshel et al., 2017) . Sil et al. (2012); Logeswaran et al. (2019) highlighted the need for zero-shot evaluation. We adopt this standard by using disjoint sets of events for training and evaluation (see subsection 3.2).", "cite_spans": [ { "start": 177, "end": 204, "text": "(Mihalcea and Csomai, 2007;", "ref_id": null }, { "start": 205, "end": 224, "text": "Botha et al., 2020)", "ref_id": "BIBREF8" }, { "start": 263, "end": 288, "text": "(Bunescu and Pa\u015fca, 2006;", "ref_id": "BIBREF10" }, { "start": 289, "end": 310, "text": "Nothman et al., 2008)", "ref_id": "BIBREF11" }, { "start": 458, "end": 478, "text": "(Singh et al., 2012)", "ref_id": "BIBREF17" }, { "start": 511, "end": 531, "text": "(Eshel et al., 2017)", "ref_id": null }, { "start": 534, "end": 552, "text": "Sil et al. 
(2012);", "ref_id": "BIBREF16" }, { "start": 553, "end": 577, "text": "Logeswaran et al. (2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Entity Linking", "sec_num": "2.1" }, { "text": "Event linking is important for downstream tasks like narrative understanding. For instance, consider a prominent event like '2020 Summer Olympics'. This event has had a large influx of articles in multiple languages. It is often useful to ground the references to specific prominent subevents in KB. Some examples of such events from Wikidata are \"Swimming at the 2020 Summer Olympics -Women's 100 metre freestyle\" (Q64513990) and \"Swimming at the 2020 Summer Olympics -Men's 100 metre backstroke\" (Q64514005). Event linking task while important is albeit less explored. Nothman et al. 2012 Figure 1 ). ", "cite_spans": [], "ref_spans": [ { "start": 591, "end": 599, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Event Linking", "sec_num": "2.2" }, { "text": "To compile our dataset, we follow a three-stage pipeline, 1) identify Wikidata items that correspond to events, 2) for each Wikidata event, collect links to language Wikipedia articles and 3) iterate through all the language Wikipedia dumps to collect mention spans that refer to these events.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset Compilation", "sec_num": "3.1" }, { "text": "Wikidata Event Identification: Events are typically associated with time, location and participants, distinguishing them from entities. To identify events from the large pool of Wikidata (WD) items, we make use of the properties listed on WD. 4 Specifically, we consider a WD item to be a candidate event if it contains the following two properties, temporal 5 and spatial 6 . We perform additional postprocessing on this candidate event set to remove non-events like empires (Roman Empire: Q2277), missions (Surveyor 7: Q774594), TV series (Deception: Q30180283) and historic places (French North Africa: Q352061). 7 Each event in our final set has caused a state change and is grounded in a spatio-temporal context. This distinguishes our set of events from the rest of the items from Wikidata. Following the terminology from Weischedel et al. (2013) , these KB events can be characterized as eventive nouns. A Note on WD Hierarchy: WD is a rich structured KB and we observed many instances of hierarchical relationship between our candidate events. See Figure 2 for an example. While this hierarchy adds an interesting challenge to the event grounding task, we observed multiple instances of inconsistency in links. Specifically, we observed references to parent item (Q18193712) even though the child item (Q25397537) was the most appropriate link in context. Therefore, in our dataset, we only include leaf nodes as our candidate event set (e.g. Q25397537). This allows us to focus on most atomic events from Wikidata. Expanding the label set to include the hierarchy is an interesting direction for future work.", "cite_spans": [ { "start": 243, "end": 244, "text": "4", "ref_id": null }, { "start": 616, "end": 617, "text": "7", "ref_id": null }, { "start": 828, "end": 852, "text": "Weischedel et al. (2013)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 1056, "end": 1064, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Dataset Compilation", "sec_num": "3.1" }, { "text": "Wikipedia: WD items have pointers to the corresponding language Wikipedia articles. 
We make use of these pointers to identify Wikipedia articles describing our candidate WD events. Figure 1 illustrates this through the coiled pointers ( ) for the three languages. We make use of the event's Wikipedia article title and its first paragraph as the description for the WD event. Each language version of a Wikipedia article is typically written by independent contributors, so the event descriptions vary across languages.", "cite_spans": [ { "start": 84, "end": 85, "text": "8", "ref_id": null } ], "ref_spans": [ { "start": 183, "end": 191, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Wikidata", "sec_num": null }, { "text": "Mention Identification: Wikipedia articles are often connected through hyperlinks. We iterate through each language Wikipedia and collect anchor texts of hyperlinks to the event Wikipedia pages (column 1 in Figure 1 ). We retain both the anchor text and the surrounding paragraph (context). Notably, the anchor text can occasionally be a temporal expression or location relevant to the event. In the German mention from Figure 1 , the anchor text '2010' links to the event Q830917 (2010 European Aquatics Championships). This event link can be inferred from the context ('Schwimmeuropameisterschaften': European Aquatics Championships). In fact, the neighboring span '2006' refers to a different event from Wikidata (Q612454: 2006 European Aquatics Championships). We use the September 2021 XML dumps of language Wikipedias and the October 2021 dump of Wikidata. We use the Wikiextractor tool (Attardi, 2015) to extract text content from the Wikipedia dumps. We retain the hyperlinks in article texts for use in mention identification. Overall, the mentions in our dataset can be categorized into the following types: 1) eventive noun (like the KB event), 2) verbal, 3) location, and 4) temporal expression. Such diversity in the nature of mentions also differentiates the event linking task from standard named entity linking or disambiguation.", "cite_spans": [ { "start": 293, "end": 308, "text": "(Attardi, 2015)", "ref_id": null } ], "ref_spans": [ { "start": 207, "end": 215, "text": "Figure 1", "ref_id": null }, { "start": 420, "end": 428, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Wikidata", "sec_num": null }, { "text": "(Figure 3 here: per-language counts of events and mentions for all 44 languages, on a log scale.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikidata", "sec_num": null }
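The mention identification step above can be sketched as follows. We assume the WikiExtractor output keeps hyperlinks as `<a href="...">anchor</a>` tags, and that `page_to_qid` is the article-title-to-Wikidata-ID map built in the previous stage; the helper names are ours, not the released code.

```python
# Sketch of stage 3: harvest (anchor text, event ID, context) triples from
# extracted Wikipedia paragraphs. Assumes links were preserved during extraction.
import re
from urllib.parse import unquote

LINK_RE = re.compile(r'<a href="([^"]+)">([^<]+)</a>')

def extract_mentions(paragraph: str, page_to_qid: dict):
    """Yield (anchor_text, qid, context) for each hyperlink to a known event."""
    for match in LINK_RE.finditer(paragraph):
        title = unquote(match.group(1)).replace("_", " ")
        qid = page_to_qid.get(title)
        if qid is not None:
            # drop the remaining markup so the stored context is plain text
            context = LINK_RE.sub(r"\2", paragraph)
            yield match.group(2), qid, context
```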
, { "text": "Postprocessing: To link a mention to its event, the context should contain the necessary temporal information. For instance, it is important to be able to differentiate between links to '2010 European Aquatics Championships' vs. '2012 European Aquatics Championships'. Therefore, we heuristically remove a mention (and its context) if it completely misses the temporal expressions from the corresponding language Wikipedia title and description. Additionally, we remove mentions if their contexts are either too short or too long (<100 or >2000 characters). We also prune WD events under the following conditions: 1) the event only contains mentions from a single language, 2) more than 50% of its mentions match the corresponding language Wikipedia title (i.e., low diversity), or 3) the event has very few mentions (<30). Table 1 presents the overall statistics of our dataset. The full list of languages with their event and mention counts is presented in Figure 3 . Each WD event on average has mention references from 9 languages, indicating the highly multilingual nature of our dataset. See Table 9 in the Appendix for details on the genealogical information for the chosen languages. We chose our final set of languages by maximizing diversity in language typology, language resources (both event-related and general) and the availability of content on Wikipedia. Wikipedia texts and the Wikidata KB are available under the CC BY-SA 3.0 and CC0 1.0 licenses respectively. We will release our dataset under CC BY-SA 3.0.", "cite_spans": [], "ref_spans": [ { "start": 781, "end": 788, "text": "Table 1", "ref_id": "TABREF3" }, { "start": 917, "end": 925, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 1055, "end": 1062, "text": "Table 9", "ref_id": "TABREF16" } ], "eq_spans": [], "section": "Dataset Compilation", "sec_num": "3.1" }, { "text": "To test the out-of-domain generalization, we additionally prepare a small evaluation set based on Wikinews articles. 9 Inspired by prior work on multilingual entity linking (Botha et al., 2020) , we collect hyperlinks from event mentions in multilingual Wikinews articles to Wikidata. We restrict the set of events to the previously identified 10.9k events from Wikidata (Table 1). We again use the Wikiextractor tool to collect raw texts from the March 2022 dumps of all language Wikinews. We identify hyperlinks to Wikipedia pages or Wikinews categories that describe the events from Wikidata. Table 2 presents the overall statistics of our Wikinews-based evaluation set. This set is much smaller than the Wikipedia-based dataset, primarily due to the significantly smaller footprint of Wikinews. 10 Following the taxonomy from Logeswaran et al. (2019), we present two evaluation settings, cross-domain and zero-shot. Cross-domain evaluation gauges model generalization to an unseen domain (newswire). Zero-shot evaluation tests on an unseen domain and unseen events. 11", "cite_spans": [ { "start": 172, "end": 192, "text": "(Botha et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 587, "end": 594, "text": "Table 2", "ref_id": null }, { "start": 1361, "end": 1370, "text": "(Table 1)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Wikinews Wikidata:", "sec_num": null }, { "text": "Table 2 : Summary of the Wikinews-based evaluation set. Cross-domain: 802 events, 2562 mentions, 27 languages. Zero-shot: 149 events, 437 mentions, 21 languages. The zero-shot evaluation set is a subset of the cross-domain set, as it only includes events from the dev and test splits of the Wikipedia-based evaluation set (Table 1) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikinews Wikidata:", "sec_num": null }
, { "text": "Unlike Wikipedia, Wikinews articles contain meta information such as the news article title and publication date, which helps provide broader context for the document. In section 5, we perform ablation studies to see the impact of this meta information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikinews Wikidata:", "sec_num": null }, { "text": "Mention Distribution: Following the categories from Logeswaran et al. 2019, we compute mention distributions in the following four buckets (sketched below): 1) high overlap: the mention span is the same as the event title, 2) multiple categories: the event title includes an additional disambiguation phrase, 3) ambiguous substring: the mention span is a substring of the event title, and 4) low overlap: all other cases. For the Wikipedia-based dataset, the category distribution is 22%, 6%, 14%, and 58%. 12 For the Wikinews-based dataset, the category distribution is 18%, 4%, 6%, and 72%. We also computed the fraction of mentions that are temporal expressions. We used the HeidelTime library (Str\u00f6tgen and Gertz, 2015) for 25 languages and found that 6% of the spans in the dev set are temporal expressions.", "cite_spans": [ { "start": 663, "end": 689, "text": "(Str\u00f6tgen and Gertz, 2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Wikinews Wikidata:", "sec_num": null }
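The four buckets reduce to simple string comparisons between the mention span and the event title. The helper below is one reading of these definitions (our own sketch); per footnote 12, the parenthesized disambiguation phrase may appear anywhere in a non-English title, which the regex accounts for.

```python
import re

def mention_category(mention: str, title: str) -> str:
    """Assign a mention span to one of the four overlap buckets."""
    m, t = mention.lower().strip(), title.lower().strip()
    if m == t:
        return "high overlap"
    # title equals the mention once a parenthesized disambiguation
    # phrase (possibly mid-title) is stripped away
    if re.sub(r"\s*\([^)]*\)", " ", t).split() == m.split():
        return "multiple categories"
    if m in t:
        return "ambiguous substring"
    return "low overlap"
```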
, { "text": "Given a mention and a pool of events from a KB, the task is to identify the mention's reference in the KB. For instance, the three mentions from column 1 in Figure 1 are to be linked to the Wikidata event Q830917. Following Logeswaran et al. 2019, we assume an in-KB evaluation approach; therefore, every mention refers to a valid event from the KB (Wikidata). We collect descriptions for the Wikidata events from all the corresponding language Wikipedias. The article title and the first paragraph constitute the event description. This results in multilingual descriptions for each event (column 2 in Figure 1 ). We propose two variants of the event linking task, multilingual and crosslingual, depending on the source and target languages. We define the input mention and event description as source and target respectively. The event label itself (e.g. Q830917) is language-agnostic.", "cite_spans": [], "ref_spans": [ { "start": 157, "end": 165, "text": "Figure 1", "ref_id": null }, { "start": 604, "end": 612, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Task Definition", "sec_num": "3.2" }, { "text": "Multilingual Event Linking: Given a mention from language L, the linker searches through the event candidates from the same language L to identify the correct link. The source and target language are the same in this task. The size of the event candidate pool varies across languages (Figure 3 ), thereby varying the task difficulty.", "cite_spans": [], "ref_spans": [ { "start": 280, "end": 289, "text": "(Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Task Definition", "sec_num": "3.2" }, { "text": "Crosslingual Event Linking: Given a mention from any language L, the linker searches the entire pool of event candidates to identify the link. Here, we restrict the target language to English, requiring the linker to only make use of the English descriptions for candidate events. Note that all the events in our dataset have English descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Definition", "sec_num": "3.2" }, { "text": "Creating Splits: The train, dev and test distributions are presented in Table 1 . The two tasks, multilingual and crosslingual, share the same splits except for the difference in target language descriptions. Following the standard in the entity linking literature, we focus on zero-shot linking, which requires the evaluation and train events to be completely disjoint. Due to the prevalence of event sequences in Wikidata, a simple random split is not sufficient. 13 We add an additional constraint that event sequences are disjoint between splits. Systems need to perform temporal and spatial reasoning to distinguish between events within a sequence, making the task more challenging.", "cite_spans": [ { "start": 460, "end": 462, "text": "13", "ref_id": null } ], "ref_spans": [ { "start": 72, "end": 79, "text": "Table 1", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Task Definition", "sec_num": "3.2" }, { "text": "In this section, we present our systems for multilingual and crosslingual event linking to Wikidata. We follow the entity linking system BLINK (Wu et al., 2020) and adapt its retrieve-and-rank approach. Given a mention, we first use a BERT-based biencoder to retrieve the top-k events from the candidate pool. Then, we use a crossencoder to rerank these top-k candidates and identify the best event label. Additionally, following the baselines from the entity linking literature, we also experiment with BM25 as a candidate retrieval method. ", "cite_spans": [ { "start": 143, "end": 160, "text": "(Wu et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "4" }, { "text": "BM25 is a commonly used tf-idf based ranking function and a competitive baseline for entity linking. We explore three variants of BM25: BM25Okapi (Robertson et al., 1994) , BM25+ (Lv and Zhai, 2011a) and BM25L (Lv and Zhai, 2011b). We use the implementation of Brown (2020) with the mention as the query and event descriptions as the documents. 14 Since BM25 is a bag-of-words method, we only use it in the multilingual task. To create the documents, we use the concatenation of the title and description of each event. For the query, we experiment with increasing context window sizes of 8, 16, 32, 64 and 128, along with a mention-only baseline.", "cite_spans": [ { "start": 146, "end": 170, "text": "(Robertson et al., 1994)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "BM25", "sec_num": "4.1" }, { "text": "We adapt the standard entity linking architecture (Wu et al., 2020) to the event linking task. This is a two-stage pipeline: a retriever (biencoder) and a ranker (crossencoder).", "cite_spans": [ { "start": 50, "end": 67, "text": "(Wu et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Retrieve+Rank", "sec_num": "4.2" }, { "text": "Biencoder: Using two multilingual transformers, we independently encode the context and the event candidate. 14 To tokenize text across the 44 languages, we used the bert-base-multilingual-uncased tokenizer from Huggingface.", "cite_spans": [ { "start": 88, "end": 90, "text": "14", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Retrieve+Rank", "sec_num": "4.2" }, { "text": "In both cases, we use the final-layer [CLS] token representation as our embedding. For each context, we score the event candidates by taking a dot product between the two embeddings. We follow prior work (Lerer et al., 2019; Wu et al., 2020) to make use of in-batch random negatives during training. 
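The in-batch objective can be written in a few lines. The sketch below is our simplified rendering of this training step (our variable names, not the BLINK code): for a given context, every other candidate in the batch serves as a random negative.

```python
import torch
import torch.nn.functional as F

def in_batch_loss(context_cls: torch.Tensor, candidate_cls: torch.Tensor) -> torch.Tensor:
    """context_cls, candidate_cls: [batch, hidden] final-layer [CLS] embeddings."""
    # scores[i, j] = dot product between context i and candidate j;
    # the gold pairing sits on the diagonal, the rest act as negatives
    scores = context_cls @ candidate_cls.T
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```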
At inference, we run a nearest neighbour search to find the top-k candidates.", "cite_spans": [ { "start": 51, "end": 56, "text": "[CLS]", "ref_id": null }, { "start": 217, "end": 237, "text": "(Lerer et al., 2019;", "ref_id": null }, { "start": 238, "end": 254, "text": "Wu et al., 2020)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "Crossencoder: In our crossencoder, the input constitutes a concatenation of the context and a given event candidate. 15 We take the [CLS] token embedding from the last layer and pass it through a classification layer. We run crossencoder training only on the top-k event candidates retrieved by the biencoder. During training, we optimize a softmax loss to predict the gold event candidate within the retrieved top-k. For inference, we predict the highest scoring context-candidate tuple from the top-k candidates. We experiment with two multilingual encoders, mBERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020); we refer to the bi- and cross-encoder configurations as mBERT-bi, XLM-RoBERTa-bi and mBERT-cross, XLM-RoBERTa-cross. For crossencoder training and inference, we use the retrieval results from the same BERT-based biencoder. 16", "cite_spans": [ { "start": 117, "end": 119, "text": "15", "ref_id": null }, { "start": 132, "end": 137, "text": "[CLS]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "15 [CLS] left context [MENTION_START] mention [MENTION_END] right context [SEP] title [EVT] description [SEP]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "16 See section A.3 in the Appendix for other details.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "We present our results on the development and test splits of the proposed dataset. In our experiments, we use bert-base-multilingual-uncased and xlm-roberta-base from Huggingface transformers (Wolf et al., 2020 ). For the multilingual task, even though the candidate set is partly different between languages, we share the model weights across languages. We believe this weight sharing helps in improving the performance on low-resource languages (Arivazhagan et al., 2019). We follow the standard metrics from prior work on entity linking, both for retrieval and reranking. Recall@k measures the fraction of contexts where the gold event is contained in the top-k retrieved candidates. Accuracy measures the fraction of contexts where the predicted event candidate matches the gold candidate. We use the unnormalized accuracy score from Logeswaran et al. (2019) that evaluates the overall end-to-end performance (retrieve+rank). Figure 4 presents the retrieval results on the dev split for both the multilingual and crosslingual tasks. The biencoder models significantly outperform the best BM25 configuration, BM25+ (with a context window of 16). 17 The performance is mostly similar for k=8 and k=16 for both biencoder models; therefore, we select k=8 for our crossencoder experiments. 18 Table 3 presents the accuracy scores for the crossencoder models and R@1 scores for the retrieval methods. 
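For reference, both metrics reduce to simple counts over the evaluation set; the sketch below uses our own variable names. Note that the unnormalized accuracy divides by all mentions, so a retrieval miss counts as an end-to-end error.

```python
def recall_at_k(gold_ids: list, retrieved_ids: list) -> float:
    """gold_ids[i]: gold event; retrieved_ids[i]: top-k candidates for mention i."""
    hits = sum(g in topk for g, topk in zip(gold_ids, retrieved_ids))
    return hits / len(gold_ids)

def unnormalized_accuracy(gold_ids: list, predicted_ids: list) -> float:
    """End-to-end (retrieve+rank) accuracy over every mention."""
    correct = sum(g == p for g, p in zip(gold_ids, predicted_ids))
    return correct / len(gold_ids)
```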
On the multilingual task, the mBERT crossencoder model performs the best, significantly outperforming the corresponding biencoder model. However, on the crosslingual task, the mBERT biencoder performs the best. As expected, the crosslingual task is more challenging than the multilingual task. Due to the large number of model parameters, all of our reported results are based on a single training run.", "cite_spans": [ { "start": 146, "end": 164, "text": "(Wolf et al., 2020", "ref_id": "BIBREF20" }, { "start": 1227, "end": 1229, "text": "18", "ref_id": null }, { "start": 1414, "end": 1416, "text": "17", "ref_id": null } ], "ref_spans": [ { "start": 876, "end": 884, "text": "Figure 4", "ref_id": "FIGREF2" }, { "start": 1230, "end": 1237, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 1496, "end": 1504, "text": "Figure 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "5" }, { "text": "17 For a detailed comparison of the various configurations of the BM25 baseline, refer to Figure 5 in the Appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5.1" }, { "text": "18 See Table 6 in the Appendix for Recall@8 scores for all the configurations.", "cite_spans": [], "ref_spans": [ { "start": 7, "end": 14, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Results", "sec_num": "5.1" }, { "text": "We also measure the cross-domain and zero-shot performance of these systems on the proposed Wikinews evaluation set (section 3.1). As seen in Table 4 , we notice good cross-domain but moderate zero-shot transfer. This highlights that unseen events from unseen domains present a considerable challenge. We noticed further gains (4-12%) when the meta information (date and title) is included with the context. Our ablation studies showed that this gain is primarily due to the article date. 19", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 148, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Results", "sec_num": "5.1" }, { "text": "Performance by Language: The multilingual and crosslingual tasks have three major differences: 1) the source & target languages, 2) language-specific descriptions can be more informative than English descriptions, and 3) the candidate pool varies by language (see Figure 3) . While the performance is largely the same across languages, we noticed slightly lower crosslingual performance, especially for medium- and low-resource languages. 20 We also performed a qualitative analysis of errors made by our mBERT-based biencoder models on the multilingual and crosslingual tasks. We summarize our observations from this analysis below. Temporal Reasoning: The event linker occasionally performs insufficient temporal reasoning over the context (see example 1 in Table 5 ). Since our dataset contains numerous event sequences, such temporal reasoning is often important.", "cite_spans": [ { "start": 422, "end": 424, "text": "20", "ref_id": null } ], "ref_spans": [ { "start": 248, "end": 257, "text": "Figure 3)", "ref_id": "FIGREF1" }, { "start": 733, "end": 740, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Analysis", "sec_num": "5.2" }, { "text": "Temporal and Spatial expressions: In cases where the anchor text is a temporal or spatial expression, we found the system sometimes struggles to link to the event even when the link can be inferred from the context (see example 2 in Table 5). 
We believe these examples will also serve as an interesting challenge for future work on our dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis", "sec_num": "5.2" }, { "text": "Event Descriptions: The crosslingual system occasionally struggles with the English description. In example 4 from Table 5 , we notice the mention matches the language Wikipedia title exactly, yet the system struggles with the English description. We therefore hypothesize that, depending on the event, language-specific event descriptions can sometimes be more informative than the English description.", "cite_spans": [], "ref_spans": [ { "start": 111, "end": 118, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Analysis", "sec_num": "5.2" }, { "text": "Dataset Errors: We found instances where the context doesn't provide sufficient information needed for grounding (see example 3 in Table 5 ). Albeit uncommon, we also found a few cases where the human-annotated hyperlinks in Wikipedia are incorrect. 21", "cite_spans": [], "ref_spans": [ { "start": 131, "end": 138, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Analysis", "sec_num": "5.2" }, { "text": "Retrieve+rank-based methods have been effective for entity linking tasks (Wu et al., 2020; Botha et al., 2020) . Our results indicate that the same retrieve+rank approach is useful for the task of event linking. However, our zero-shot results on Wikinews hint toward potential challenges in adapting to new domains. Additionally, as described above, event linking presents added challenges in dealing with temporal/spatial expressions and temporal reasoning. For further analysis, it would be interesting to contrast the performance differences between planned (e.g., sports competitions) and unplanned (e.g., wars) events.", "cite_spans": [ { "start": 73, "end": 90, "text": "(Wu et al., 2020;", "ref_id": "BIBREF21" }, { "start": 91, "end": 110, "text": "Botha et al., 2020)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3" }, { "text": "We present the task of multilingual event linking to Wikidata. To support this task, we first compile a dictionary of events from Wikidata using temporal and spatial properties. We prepare descriptions for these events from multilingual Wikipedia pages. We then identify a large collection of inlinks from various language Wikipedias. Depending on the language of the event description, we present two variants of the task, multilingual (lg \u2192 lg) and crosslingual (lg \u2192 en). Furthermore, to test cross-domain generalization, we create a small evaluation set based on Wikinews articles. Our results using a retrieve+rank approach indicate that the crosslingual task is more challenging than the multilingual task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "6" }, { "text": "The event linking task has multiple interesting future directions. First, the Wikidata-based event dictionary can be expanded to include hierarchical event structures (Figure 2 ). Since events are inherently hierarchical, this will present a more realistic challenge for the linking systems. Second, the mention coverage of our dataset can be expanded to include more verbal events. Third, event linking systems can be improved with better temporal reasoning and improved handling of temporal and spatial expressions. 
Fourth, the Wikidata-based event dictionary can be expanded to include events that do not contain any English Wikipedia descriptions. In this work, we presented a new dataset compiled automatically from Wikipedia, Wikinews and Wikidata. After the initial collection process, we performed rigorous post-processing steps to reduce potential errors in our dataset. Our dataset is multilingual, with texts from 44 languages. In our main paper, we list these languages as well as their individual representation in our dataset. As we highlight in the paper, the proposed linking systems only work for a specific class of events (eventive nouns) due to the nature of our dataset.", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 172, "text": "(Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Conclusion & Future Work", "sec_num": "6" }, { "text": "After identifying potential events from Wikidata, we perform additional post-processing to remove any non-event items. Table 8 presents the list of all Wikidata properties used for removing non-event items from our corpus. Table 9 lists all languages from our dataset along with their language genealogy and distribution in the dataset. ", "cite_spans": [], "ref_spans": [ { "start": 119, "end": 126, "text": "Table 8", "ref_id": "TABREF15" }, { "start": 223, "end": 230, "text": "Table 9", "ref_id": "TABREF16" } ], "eq_spans": [], "section": "A.2 Dataset", "sec_num": null }, { "text": "Experiments: We use the base versions of mBERT and XLM-RoBERTa in all of our experiments. In the biencoder model, we use two multilingual encoders, one each for context and candidate encoding. In the crossencoder, we use just one multilingual encoder and a classification layer. In all of our experiments, we optimize all the encoder layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Modeling", "sec_num": null }, { "text": "For biencoder training, we use the AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate of 1e-05 and a linear warmup schedule. We restrict the context and candidate lengths to 128 sub-tokens and select the best epoch (of 5) on the development set. For crossencoder training, we also use the AdamW optimizer, with a learning rate of 2e-05 and a linear warmup schedule. We restrict the overall sequence length to 256 sub-tokens and select the best epoch (of 5) on the development set. We ran our experiments on a mix of GPUs (TITAN X, V100, A6000 and A100); each training and inference run was executed on a single GPU. Both the biencoder and crossencoder were run for 5 epochs, and we selected the best set of hyperparameters based on the dev set performance. On a single A100 GPU, biencoder training takes about 1.5 hrs per epoch and the crossencoder takes \u223c20 hrs per epoch (with k=8).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Modeling", "sec_num": null }
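The optimizer setup described above corresponds to a few lines with Huggingface utilities. This is an illustrative reconstruction, not the released training script; the warmup-step count is our assumption, since the paper only states that a linear warmup schedule was used.

```python
import torch
from transformers import get_linear_schedule_with_warmup

def make_optimizer(model: torch.nn.Module, lr: float, total_steps: int,
                   warmup_frac: float = 0.1):
    # lr = 1e-05 for the biencoder, 2e-05 for the crossencoder
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(warmup_frac * total_steps),  # assumed fraction
        num_training_steps=total_steps,
    )
    return optimizer, scheduler
```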
, { "text": "Results: In Figure 5 , we present results on the development set from all the explored configurations. In Table 6 , we show the Recall@8 scores from all the retrieval models. Based on the performance on the development set, we selected k=8 for our crossencoder training and inference. We also report the test scores for completeness. Figure 6 presents the retrieval recall scores. Figure 7 presents the retrieval recall scores for the BM25+ (context length 16) method. Figure 9 presents a detailed comparison of per-language accuracies between the multilingual and crosslingual tasks for each configuration.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 106, "end": 113, "text": "Table 6", "ref_id": "TABREF11" }, { "start": 330, "end": 338, "text": "Figure 6", "ref_id": "FIGREF6" }, { "start": 377, "end": 385, "text": "Figure 7", "ref_id": "FIGREF7" }, { "start": 461, "end": 469, "text": "Figure 9", "ref_id": "FIGREF11" } ], "eq_spans": [], "section": "A.3 Modeling", "sec_num": null }, { "text": "Wikinews: Each Wikinews article contains meta information such as the article title and publication date. Since this meta information provides additional context to the linker, we experimented with including it along with the mention context. The meta information is encoded with the context as \" [CLS] ... \" (the full template is truncated in our source). Table 7 presents the detailed results on the Wikinews evaluation set.", "cite_spans": [ { "start": 309, "end": 314, "text": "[CLS]", "ref_id": null } ], "ref_spans": [ { "start": 315, "end": 322, "text": "Table 7", "ref_id": "TABREF14" } ], "eq_spans": [], "section": "A.3 Modeling", "sec_num": null }, { "text": "Examples: We also present full examples of system errors we identified through a qualitative analysis. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Modeling", "sec_num": null }, { "text": "(Figures 6-9 here: per-language retrieval recall (R@1, R@4, R@8) and per-language accuracy comparisons between the multilingual and crosslingual configurations, plotted on a 0-100 scale across all 44 languages.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.3 Modeling", "sec_num": null }, { "text": "We define mention as the textual expression that refers to an event from the KB.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "wikidata.org/wiki/Wikidata:List _ of _ properties 5 duration OR point-in-time OR (start-time AND end-time) 6 location OR coordinate-location 7 see Table 8 in subsection A.2 of the Appendix for the full list of exclusion properties.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://meta.wikimedia.org/wiki/List _ of _ Wikipedias", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.wikinews.org 10 For comparison, English Wikinews contains 21K articles while English Wikipedia contains 6.5M pages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "we consider dev and test events from Table 1 as unseen. 12 The disambiguation phrase is typically a suffix in the title for English (Logeswaran et al., 2019), but in our multilingual setting, it can be anywhere in the title.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "2008, 2010, 2012 iterations of Aquatics Championships from Figure 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "see section A.3 in the Appendix for full results. 20 see Figure 8 and Figure 9 in the Appendix", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For more detailed examples, refer to Table 10, Table 12 and Table 13 in the Appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Table 13: Examples of errors by the event linking system. (also errors in the dataset)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Mention Context: At the 2000 Summer Olympics in Sydney, Sitnikov competed only in two swimming events", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mention Context: At the 2000 Summer Olympics in Sydney, Sitnikov competed only in two swimming events. ... Three days later, in the 100 m freestyle, Sitnikov placed fifty-third on the morning prelims. ...", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Summer Olympics -Men's 100 metre freestyle Gold Label: Swimming at the 2000 Summer Olympics -Men's 100 metre freestyle Mention Context: ... war er bei der Oscarverleihung 1935 erstmals f\u00fcr einen Oscar f\u00fcr den besten animierten Kurzfilm nominiert. 
Eine weitere Nominierung in dieser Kategorie erhielt er 1938 f\u00fcr \"The Little Match Girl", "authors": [], "year": 1937, "venue": "Predicted Label: The 9th Academy Awards were held on March 4", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Predicted Label: Swimming at the 2008 Summer Olympics -Men's 100 metre freestyle Gold Label: Swimming at the 2000 Summer Olympics -Men's 100 metre freestyle Mention Context: ... war er bei der Oscarverleihung 1935 erstmals f\u00fcr einen Oscar f\u00fcr den besten animierten Kurzfilm nominiert. Eine weitere Nominierung in dieser Kategorie erhielt er 1938 f\u00fcr \"The Little Match Girl\" (1937). Predicted Label: The 9th Academy Awards were held on March 4, 1937, ...", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The 10th Academy Awards were originally scheduled ... but due to", "authors": [ { "first": "Gold", "middle": [], "last": "Label", "suffix": "" } ], "year": 1938, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gold Label: The 10th Academy Awards were originally scheduled ... but due to ... were held on March 10, 1938, ..", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Mention Context: Ivanova won the silver medal at the 1978 World Junior Championships. She made her senior World debut at the 1979 World Championships, finishing 18th. Ivanova was 16th at the 1980 Winter Olympics. Predicted Label: FIBT World Championships 1979 Gold Label", "authors": [], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mention Context: Ivanova won the silver medal at the 1978 World Junior Championships. She made her senior World debut at the 1979 World Championships, finishing 18th. Ivanova was 16th at the 1980 Winter Olympics. Predicted Label: FIBT World Championships 1979 Gold Label: 1979 World Figure Skating Championships Mention Context: ...\u651d\u6d25\u865f\u8207\u5176\u59d0\u59b9\u8266\u6cb3\u865f\u65bc1914\u5e7410\u6708\u81f311\u6708\u9593\u53c3\u8207\u4e86\u9752\u5cf6\u6230\u5f79\u7684\u6700\u5f8c\u968e\u6bb5...", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Siege of Tsingtao: The siege of Tsingtao", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Predicted Label: Battle of the Yellow Sea Gold Label (English): Siege of Tsingtao: The siege of Tsingtao (or Tsingtau) was the attack on the German port of Tsingtao (now Qingdao) ...", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Massively multilingual neural machine translation in the wild: Findings and challenges. 
arXiv", "authors": [ { "first": "N", "middle": [], "last": "\u662f\u7b2c\u4e00\u6b21\u4e16\u754c\u5927\u6230\u521d\u671f\u65e5\u672c\u9032\u653b\u570b\u81a0\u5dde\u7063\u6b96\u6c11\u5730\u53ca\u5176\u9996\u5e9c\u9752\u5cf6\u7684\u4e00\u5834\u6230\u5f79\uff0c\u4e5f\u662f\u552f\u4e00\u7684 \u4e00\u5834\u6230\u5f79\u3002 References", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Arivazhagan", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Bapna", "suffix": "" }, { "first": "Dmitry", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Lepikhin", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Mia", "middle": [ "Xu" ], "last": "Krikun", "suffix": "" }, { "first": "Yuan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "George", "middle": [ "F" ], "last": "Cao", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Foster", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "Z", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gold Label (Chinese): \u9752\u5cf6\u6230\u5f79(\uff0c)\u662f\u7b2c\u4e00\u6b21\u4e16\u754c\u5927\u6230\u521d\u671f\u65e5\u672c\u9032\u653b\u570b\u81a0\u5dde\u7063\u6b96\u6c11\u5730\u53ca\u5176\u9996\u5e9c\u9752\u5cf6\u7684\u4e00\u5834\u6230\u5f79\uff0c\u4e5f\u662f\u552f\u4e00\u7684 \u4e00\u5834\u6230\u5f79\u3002 References N. Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George F. Foster, Colin Cherry, Wolfgang Macherey, Z. Chen, and Yonghui Wu. 2019. Massively multilingual neural machine trans- lation in the wild: Findings and challenges. arXiv, abs/1907.05019.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "XOR QA: Cross-lingual open-retrieval question answering", "authors": [ { "first": "Akari", "middle": [], "last": "Asai", "suffix": "" }, { "first": "Jungo", "middle": [], "last": "Kasai", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "547--564", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.46" ] }, "num": null, "urls": [], "raw_text": "Akari Asai, Jungo Kasai, Jonathan Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. 2021. XOR QA: Cross-lingual open-retrieval question answering. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 547-564, Online. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Entity Linking in 100 Languages", "authors": [ { "first": "Jan", "middle": [ "A" ], "last": "Botha", "suffix": "" }, { "first": "Zifei", "middle": [], "last": "Shan", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gillick", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "7833--7845", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.630" ] }, "num": null, "urls": [], "raw_text": "Jan A. Botha, Zifei Shan, and Daniel Gillick. 2020. En- tity Linking in 100 Languages. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7833-7845, Online. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Rank-BM25: A Collection of BM25 Algorithms in Python", "authors": [ { "first": "Dorian", "middle": [], "last": "Brown", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.5281/zenodo.4520057" ] }, "num": null, "urls": [], "raw_text": "Dorian Brown. 2020. Rank-BM25: A Collection of BM25 Algorithms in Python.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Using encyclopedic knowledge for named entity disambiguation", "authors": [ { "first": "Razvan", "middle": [], "last": "Bunescu", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Pa\u015fca", "suffix": "" } ], "year": 2006, "venue": "11th Conference of the European Chapter of Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "4417--4422", "other_ids": {}, "num": null, "urls": [], "raw_text": "Razvan Bunescu and Marius Pa\u015fca. 2006. Using en- cyclopedic knowledge for named entity disambigua- tion. In 11th Conference of the European Chapter of Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4417-4422, Portoro\u017e, Slovenia. European Lan- guage Resources Association (ELRA).", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Transforming Wikipedia into named entity training data", "authors": [ { "first": "Joel", "middle": [], "last": "Nothman", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" }, { "first": "Tara", "middle": [], "last": "Murphy", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Australasian Language Technology Association Workshop", "volume": "", "issue": "", "pages": "124--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joel Nothman, James R. Curran, and Tara Murphy. 2008. Transforming Wikipedia into named entity training data. 
In Proceedings of the Australasian Language Technology Association Workshop 2008, pages 124- 132, Hobart, Australia.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Event linking: Grounding event reference in a news archive", "authors": [ { "first": "Joel", "middle": [], "last": "Nothman", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Hachey", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "228--232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joel Nothman, Matthew Honnibal, Ben Hachey, and James R. Curran. 2012. Event linking: Grounding event reference in a news archive. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 228-232, Jeju Island, Korea. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Cross-document event identity via dense annotation", "authors": [ { "first": "Adithya", "middle": [], "last": "Pratapa", "suffix": "" }, { "first": "Zhengzhong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Kimihiro", "middle": [], "last": "Hasegawa", "suffix": "" }, { "first": "Linwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yukari", "middle": [], "last": "Yamakawa", "suffix": "" }, { "first": "Shikun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Teruko", "middle": [], "last": "Mitamura", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 25th Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "496--517", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adithya Pratapa, Zhengzhong Liu, Kimihiro Hasegawa, Linwei Li, Yukari Yamakawa, Shikun Zhang, and Teruko Mitamura. 2021. Cross-document event iden- tity via dense annotation. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 496-517, Online. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Okapi at trec-3", "authors": [ { "first": "Stephen", "middle": [ "E" ], "last": "Robertson", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Walker", "suffix": "" }, { "first": "Susan", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Micheline", "middle": [], "last": "Hancock-Beaulieu", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Gatford", "suffix": "" } ], "year": 1994, "venue": "TREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen E. Robertson, Steve Walker, Susan Jones, Micheline Hancock-Beaulieu, and Mike Gatford. 1994. Okapi at trec-3. 
In TREC.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "XTREME-R: Towards more challenging and nuanced multilingual evaluation", "authors": [ { "first": "Sebastian", "middle": [], "last": "Ruder", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Botha", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Siddhant", "suffix": "" }, { "first": "Orhan", "middle": [], "last": "Firat", "suffix": "" }, { "first": "Jinlan", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Junjie", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "10215--10245", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Ruder, Noah Constant, Jan Botha, Aditya Sid- dhant, Orhan Firat, Jinlan Fu, Pengfei Liu, Junjie Hu, Dan Garrette, Graham Neubig, and Melvin John- son. 2021. XTREME-R: Towards more challenging and nuanced multilingual evaluation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10215-10245, Online and Punta Cana, Dominican Republic. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Linking named entities to any database", "authors": [ { "first": "Avirup", "middle": [], "last": "Sil", "suffix": "" }, { "first": "Ernest", "middle": [], "last": "Cronin", "suffix": "" }, { "first": "Penghai", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ana-Maria", "middle": [], "last": "Popescu", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Yates", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "116--127", "other_ids": {}, "num": null, "urls": [], "raw_text": "Avirup Sil, Ernest Cronin, Penghai Nie, Yinfei Yang, Ana-Maria Popescu, and Alexander Yates. 2012. Linking named entities to any database. In Proceed- ings of the 2012 Joint Conference on Empirical Meth- ods in Natural Language Processing and Computa- tional Natural Language Learning, pages 116-127, Jeju Island, Korea. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Wikilinks: A largescale cross-document coreference corpus labeled via links to wikipedia", "authors": [ { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Amarnag", "middle": [], "last": "Subramanya", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. 2012. Wikilinks: A large- scale cross-document coreference corpus labeled via links to wikipedia. 
Technical Report.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A baseline temporal tagger for all languages", "authors": [ { "first": "Jannik", "middle": [], "last": "Str\u00f6tgen", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Gertz", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "541--547", "other_ids": { "DOI": [ "10.18653/v1/D15-1063" ] }, "num": null, "urls": [], "raw_text": "Jannik Str\u00f6tgen and Michael Gertz. 2015. A baseline temporal tagger for all languages. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 541-547, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "OntoNotes Release 5.0. Linguistic Data Consortium", "authors": [ { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Taylor", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Kaufman", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Ni- anwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, and Ann Houston. 2013. OntoNotes Release 5.0. 
Linguistic Data Consortium, Philadelphia, PA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "", "middle": [], "last": "Drame", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-demos.6" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Scalable zeroshot entity linking with dense entity retrieval", "authors": [ { "first": "Ledell", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Petroni", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Josifoski", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6397--6407", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.519" ] }, "num": null, "urls": [], "raw_text": "Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. 2020. Scalable zero- shot entity linking with dense entity retrieval. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6397-6407, Online. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Event Linking: Grounding Event Mentions to Wikipedia. arXiv", "authors": [ { "first": "Xiaodong", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaodong Yu, Wenpeng Yin, Nitish Gupta, and Dan Roth. 2021. Event Linking: Grounding Event Men- tions to Wikipedia. arXiv.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Teaming with Sergey Borisenko, Pavel Sidorov, and Andrey Kvassov in heat three, Sitnikov swam a lead-off leg and recorded a split of 52.56, but the Kazakhs settled only for last place in a final time of 3:28.90. Three days later, in the 100 m freestyle, Sitnikov placed fifty-third on the morning prelims", "authors": [], "year": null, "venue": "Mention Context: At the 2000 Summer Olympics in Sydney", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mention Context: At the 2000 Summer Olympics in Sydney, Sitnikov competed only in two swimming events. He eclipsed a FINA B-cut of 51.69 (100 m freestyle) from the Kazakhstan Open Championships in Almaty. On the first day of the Games, Sitnikov placed twenty-first for the Kazakhstan team in the 4 \u00d7 100 m freestyle relay. Teaming with Sergey Borisenko, Pavel Sidorov, and Andrey Kvassov in heat three, Sitnikov swam a lead-off leg and recorded a split of 52.56, but the Kazakhs settled only for last place in a final time of 3:28.90. Three days later, in the 100 m freestyle, Sitnikov placed fifty-third on the morning prelims. Swimming in heat five, he raced to a fifth seed by 0.15 seconds ahead of Chinese Taipei's Wu Nien-pin in 52.57.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Summer Olympics -Men's 100 metre freestyle: The men's 100 metre freestyle event at the 2008 Olympic Games took place on 12-14 August at the Beijing National Aquatics Center", "authors": [], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Predicted Label: Swimming at the 2008 Summer Olympics -Men's 100 metre freestyle: The men's 100 metre freestyle event at the 2008 Olympic Games took place on 12-14 August at the Beijing National Aquatics Center in Beijing, China. There were 64 competitors from 55 nations.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Summer Olympics -Men's 100 metre freestyle: The men's 100 metre freestyle event at the 2000 Summer Olympics took place on 19-20 September at the Sydney International Aquatic Centre in Sydney, Australia. There were 73 competitors from 66 nations", "authors": [], "year": 2000, "venue": "Gold Label: Swimming at the", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gold Label: Swimming at the 2000 Summer Olympics -Men's 100 metre freestyle: The men's 100 metre freestyle event at the 2000 Summer Olympics took place on 19-20 September at the Sydney International Aquatic Centre in Sydney, Australia. There were 73 competitors from 66 nations. 
Nations have been limited to two swimmers each since the 1984 Games.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "at the Allstate Arena in Rosemont, Illinois. The 2014 event was also held in June at the same arena and was also the first Payback to air on the WWE Network, which had launched earlier that year", "authors": [], "year": 1999, "venue": "Mention Context: In 2012, WWE reinstated their No Way Out pay-per-view (PPV), which had previously ran annually from", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mention Context: In 2012, WWE reinstated their No Way Out pay-per-view (PPV), which had previously ran annually from 1999 to 2009. The following year, however, No Way Out was canceled and replaced by Payback, which in turn became an annual PPV for the promotion. The first Payback event was held on June 16, 2013 at the Allstate Arena in Rosemont, Illinois. The 2014 event was also held in June at the same arena and was also the first Payback to air on the WWE Network, which had launched earlier that year. In 2015 and 2016, the event was held in May. The 2016 event was also promoted as the first PPV of the New Era for WWE. In July 2016, WWE reintroduced the brand extension, dividing the roster between the Raw and SmackDown brands where wrestlers are exclusively assigned to perform. The 2017 event was in turn held exclusively for wrestlers from the Raw brand, and was also moved up to late-April.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Battleground was a professional wrestling pay-per-view (PPV) event and WWE Network event produced by WWE for their SmackDown brand division. It took place on", "authors": [], "year": 2017, "venue": "", "volume": "34", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Predicted Label: Battleground (2017): Battleground was a professional wrestling pay-per-view (PPV) event and WWE Network event produced by WWE for their SmackDown brand division. It took place on July 23, 2017, at the Wells Fargo Center in Philadelphia, Pennsylvania. It was the fifth and final event under the Battleground chronology, as following WrestleMania 34 in April 2018, brand-exclusive PPVs were discontinued, resulting in WWE reducing the amount of yearly PPVs produced.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "It was the fifth event in the Payback chronology. Due to the Superstar Shake-up, the event included two interbrand matches with SmackDown wrestlers. It was the final Payback event until 2020, as following WrestleMania", "authors": [], "year": 2017, "venue": "WWE discontinued brand-exclusive PPVs", "volume": "34", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gold Label: Payback (2017): Payback was a professional wrestling pay-per-view (PPV) and WWE Network event, produced by WWE for the Raw brand division. It took place on April 30, 2017 at the SAP Center in San Jose, California. It was the fifth event in the Payback chronology. Due to the Superstar Shake-up, the event included two interbrand matches with SmackDown wrestlers. It was the final Payback event until 2020, as following WrestleMania 34 in 2018, WWE discontinued brand-exclusive PPVs, which resulted in the reduction of yearly PPVs produced. Table 10: Examples of errors by the event linking system. (temporal reasoning related)", "links": null }, "BIBREF29": { "ref_id": "b29", "title": ") was an assistant director at Paramount Pictures. 
He won the 1935 Best Assistant Director Academy Award for \"The Lives of a Bengal Lancer\" along with Clem Beauchamp. Wing was the assistant director on only two films owing to his service in the United States Army. During his service, Wing was in a prisoner camp that was portrayed in the film", "authors": [], "year": null, "venue": "Mention Context: Paul Wing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mention Context: Paul Wing (August 14, 1892 -May 29, 1957) was an assistant director at Paramount Pictures. He won the 1935 Best Assistant Director Academy Award for \"The Lives of a Bengal Lancer\" along with Clem Beauchamp. Wing was the assistant director on only two films owing to his service in the United States Army. During his service, Wing was in a prisoner camp that was portrayed in the film \"The Great Raid\" (2005).", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "They were hosted by Frank Capra. This was the first year in which the gold statuettes were called", "authors": [], "year": 1936, "venue": "Predicted Label: 8th Academy Awards: The 8th Academy Awards were held on March 5", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Predicted Label: 8th Academy Awards: The 8th Academy Awards were held on March 5, 1936, at the Biltmore Hotel in Los Angeles, California. They were hosted by Frank Capra. This was the first year in which the gold statuettes were called \"Oscars\".", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Gold Label: 7th Academy Awards: The 7th Academy Awards, honoring the best in film for 1934", "authors": [], "year": 1935, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gold Label: 7th Academy Awards: The 7th Academy Awards, honoring the best in film for 1934, was held on February 27, 1935, at the Biltmore Hotel in Los Angeles, California. They were hosted by Irvin S. Cobb.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Mention Context: F\u00fcr \"Holiday Land\" (1934) war er bei der Oscarverleihung 1935 erstmals f\u00fcr einen Oscar f\u00fcr den besten animierten Kurzfilm nominiert. Eine weitere Nominierung in dieser Kategorie erhielt er 1938 f\u00fcr", "authors": [], "year": 1937, "venue": "The Little Match Girl", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mention Context: F\u00fcr \"Holiday Land\" (1934) war er bei der Oscarverleihung 1935 erstmals f\u00fcr einen Oscar f\u00fcr den besten animierten Kurzfilm nominiert. Eine weitere Nominierung in dieser Kategorie erhielt er 1938 f\u00fcr \"The Little Match Girl\" (1937).", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "They were hosted by George Jessel; music was provided by the Victor Young Orchestra, which at the time featured Spike Jones on drums. This ceremony marked the introduction of the Best Supporting Actor and Best Supporting Actress categories, and was the first year that the awards for directing and acting were fixed at five nominees per category", "authors": [], "year": 1937, "venue": "Predicted Label: 9th Academy Awards: The 9th Academy Awards were held on", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Predicted Label: 9th Academy Awards: The 9th Academy Awards were held on March 4, 1937, at the Biltmore Hotel in Los Angeles, California. 
They were hosted by George Jessel; music was provided by the Victor Young Orchestra, which at the time featured Spike Jones on drums. This ceremony marked the introduction of the Best Supporting Actor and Best Supporting Actress categories, and was the first year that the awards for directing and acting were fixed at five nominees per category.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Gold Label: 10th Academy Awards: The 10th Academy Awards were originally scheduled for March", "authors": [], "year": 1938, "venue": "", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gold Label: 10th Academy Awards: The 10th Academy Awards were originally scheduled for March 3, 1938, but due to the", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "It was hosted by Bob Burns. Table 11: Examples of errors by the event linking system", "authors": [], "year": 1938, "venue": "Los Angeles flood of 1938 were held on March 10", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Los Angeles flood of 1938 were held on March 10, 1938, at the Biltmore Hotel in Los Angeles, California. It was hosted by Bob Burns. Table 11: Examples of errors by the event linking system. (temporal or spatial expression related)", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Mention Context: Nel 2018 ha preso parte alle Olimpiadi di Pyeongchang, venendo eliminata nel primo turno della finale e classificandosi diciannovesima nella gara di gobbe", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mention Context: Nel 2018 ha preso parte alle Olimpiadi di Pyeongchang, venendo eliminata nel primo turno della finale e classificandosi diciannovesima nella gara di gobbe.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Predicted Label: Snowboarding at the 2018 Winter Olympics -Women's parallel giant slalom: The women's parallel giant slalom competition of the 2018 Winter Olympics was held on 24", "authors": [], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Predicted Label: Snowboarding at the 2018 Winter Olympics -Women's parallel giant slalom: The women's parallel giant slalom competition of the 2018 Winter Olympics was held on 24 February 2018 Bogwang Phoenix Park in Pyeongchang, South Korea.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "It was won by Perrine Laffont, with Justine Dufour-Lapointe taking silver and Yuliya Galysheva taking bronze. For Laffont and Galysheva these were first Olympic medals. Galysheva also won the first ever medal in Kazakhstan in freestyle skiing. Mention Context: Predicted Label: Hungarian Revolution of 1956: The Hungarian Revolution of 1956 (), or the Hungarian Uprising, was a nationwide revolution against the Hungarian People's Republic and its Soviet-imposed policies, lasting from", "authors": [], "year": 1956, "venue": "Gold Label: Freestyle skiing at the 2018 Winter Olympics -Women's moguls", "volume": "9", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gold Label: Freestyle skiing at the 2018 Winter Olympics -Women's moguls: The Women's moguls event in freestyle skiing at the 2018 Winter Olympics took place at the Bogwang Phoenix Park, Pyeongchang, South Korea from 9 to 11 February 2018. 
It was won by Perrine Laffont, with Justine Dufour-Lapointe taking silver and Yuliya Galysheva taking bronze. For Laffont and Galysheva these were first Olympic medals. Galysheva also won the first ever medal in Kazakhstan in freestyle skiing. Mention Context: Predicted Label: Hungarian Revolution of 1956: The Hungarian Revolution of 1956 (), or the Hungarian Uprising, was a nationwide revolution against the Hungarian People's Republic and its Soviet-imposed policies, lasting from 23 October until 10 November 1956. Leaderless at the beginning, it was the first major threat to Soviet control since the Red Army drove Nazi Germany from its territory at the end of World War II in Europe.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Suez Crisis: The Suez Crisis, or the Second Arab-Israeli war, also called the Tripartite Aggression () in the Arab world and the Sinai War in Israel", "authors": [ { "first": "Gold", "middle": [], "last": "Label", "suffix": "" } ], "year": null, "venue": "Mention Context: \u651d\u6d25\u865f\u6230\u8266\u65bc1909\u5e744\u67081\u65e5\u5728\u9808\u8cc0\u6d77\u8ecd\u5de5\u5ee0\u92ea\u8a2d\u9f8d\u9aa8\uff0c\u5f8c\u65bc1909\u5e741\u65e518\u65e5\u8209\u884c\u4e0b\u6c34\u5100\u5f0f\uff0c\u4e26 \u65bc1912\u5e747\u67081\u65e5\u7ae3\u5de5\uff0c\u7e3d\u9020\u50f9\u70ba11,010,000\u65e5\u5713\u3002\u6d77\u8ecd\u5927\u4f50\u7530\u4e2d\u76db\u79c0\u65bc1912\u5e7412\u67081\u65e5\u51fa\u4efb\u672c\u8266\u8266\u9577\uff0c\u4e26\u7de8\u5165\u7b2c\u4e00 \u5206\u9063\u8266\u968a\u3002\u7fcc\u5e74\u7684\u591a\u6578\u6642\u5019\uff0c\u651d\u6d25\u865f\u5747\u5de1\u822a\u65bc\u4e2d\u570b\u5916\u6d77\u6216\u662f\u63a5\u53d7\u6230\u5099\u64cd\u6f14\u3002\u7576\u7b2c\u4e00\u6b21\u4e16\u754c\u5927\u6230\u65bc1914\u5e748\u6708\u9593\u7206 \u767c\u6642\uff0c\u672c\u8266\u6b63\u505c\u6cca\u65bc\u5ee3\u5cf6\u7e23\u5e02\u8ecd\u6e2f\u3002\u651d\u6d25\u865f\u8207\u5176\u59d0\u59b9\u8266\u6cb3\u865f\u65bc1914\u5e7410\u6708\u81f311\u6708\u9593\u53c3\u8207\u4e86\u9752\u5cf6\u6230\u5f79\u7684\u6700\u5f8c\u968e\u6bb5\uff0c \u4e26\u65bc\u5916\u6d77\u4ee5\u8266\u7832\u5bc6\u96c6\u8f5f\u70b8\u8ecd\u9663\u5730\u3002\u672c\u8266\u65bc1916\u5e7412\u67081\u65e5\u96e2\u958b\u7b2c\u4e00\u5206\u9063\u8266\u968a\uff0c\u4e26\u9001\u5f80\u5e02\u9032\u884c\u5347\u7d1a\u4f5c\u696d\u3002\u5347\u7d1a\u4f5c\u696d \u65bc1917\u5e7412\u67081\u65e5\u5b8c\u6210\uff0c\u8a72\u8266\u96a8\u5f8c\u7de8\u5165\u7b2c\u4e8c\u5206\u9063\u8266\u968a\uff0c\u76f4\u81f31918\u5e747\u670823\u65e5\u91cd\u65b0\u6b78\u5165\u7b2c\u4e00\u5206\u9063\u8266\u968a\u70ba\u6b62\u3002\u81ea\u6b64\u6642", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gold Label: Suez Crisis: The Suez Crisis, or the Second Arab-Israeli war, also called the Tripartite Aggression () in the Arab world and the Sinai War in Israel, Mention Context: \u651d\u6d25\u865f\u6230\u8266\u65bc1909\u5e744\u67081\u65e5\u5728\u9808\u8cc0\u6d77\u8ecd\u5de5\u5ee0\u92ea\u8a2d\u9f8d\u9aa8\uff0c\u5f8c\u65bc1909\u5e741\u65e518\u65e5\u8209\u884c\u4e0b\u6c34\u5100\u5f0f\uff0c\u4e26 \u65bc1912\u5e747\u67081\u65e5\u7ae3\u5de5\uff0c\u7e3d\u9020\u50f9\u70ba11,010,000\u65e5\u5713\u3002\u6d77\u8ecd\u5927\u4f50\u7530\u4e2d\u76db\u79c0\u65bc1912\u5e7412\u67081\u65e5\u51fa\u4efb\u672c\u8266\u8266\u9577\uff0c\u4e26\u7de8\u5165\u7b2c\u4e00 
\u5206\u9063\u8266\u968a\u3002\u7fcc\u5e74\u7684\u591a\u6578\u6642\u5019\uff0c\u651d\u6d25\u865f\u5747\u5de1\u822a\u65bc\u4e2d\u570b\u5916\u6d77\u6216\u662f\u63a5\u53d7\u6230\u5099\u64cd\u6f14\u3002\u7576\u7b2c\u4e00\u6b21\u4e16\u754c\u5927\u6230\u65bc1914\u5e748\u6708\u9593\u7206 \u767c\u6642\uff0c\u672c\u8266\u6b63\u505c\u6cca\u65bc\u5ee3\u5cf6\u7e23\u5e02\u8ecd\u6e2f\u3002\u651d\u6d25\u865f\u8207\u5176\u59d0\u59b9\u8266\u6cb3\u865f\u65bc1914\u5e7410\u6708\u81f311\u6708\u9593\u53c3\u8207\u4e86\u9752\u5cf6\u6230\u5f79\u7684\u6700\u5f8c\u968e\u6bb5\uff0c \u4e26\u65bc\u5916\u6d77\u4ee5\u8266\u7832\u5bc6\u96c6\u8f5f\u70b8\u8ecd\u9663\u5730\u3002\u672c\u8266\u65bc1916\u5e7412\u67081\u65e5\u96e2\u958b\u7b2c\u4e00\u5206\u9063\u8266\u968a\uff0c\u4e26\u9001\u5f80\u5e02\u9032\u884c\u5347\u7d1a\u4f5c\u696d\u3002\u5347\u7d1a\u4f5c\u696d \u65bc1917\u5e7412\u67081\u65e5\u5b8c\u6210\uff0c\u8a72\u8266\u96a8\u5f8c\u7de8\u5165\u7b2c\u4e8c\u5206\u9063\u8266\u968a\uff0c\u76f4\u81f31918\u5e747\u670823\u65e5\u91cd\u65b0\u6b78\u5165\u7b2c\u4e00\u5206\u9063\u8266\u968a\u70ba\u6b62\u3002\u81ea\u6b64\u6642", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "\u8d77\uff0c\u651d\u6d25\u865f\u6230\u8266\u4e0a\u6240\u6709\u7684QF 12\u78c53\u82f1\u540b40\u500d\u5f91\u8266\u7832\u5747\u79fb\u9664\uff0c\u4e26\u4ee5QF 12\u78c53\u82f1\u540b40\u500d\u5f91\u9632\u7a7a\u7832\u53d6\u4ee3\uff0c\u53e6\u4ea6\u79fb\u9664\u4e86\u5169 \u5177\u9b5a\u96f7\u767c\u5c04\u7ba1\u30021918\u5e7410\u670828\u65e5\uff0c\u651d\u6d25\u865f\u6230\u8266\u6210\u70ba\u5927\u6b63\u5929\u7687\u65bc\u6d77\u4e0a\u6821\u6642\u6240\u642d\u4e58\u7684\u65d7\u8266\u3002", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u8d77\uff0c\u651d\u6d25\u865f\u6230\u8266\u4e0a\u6240\u6709\u7684QF 12\u78c53\u82f1\u540b40\u500d\u5f91\u8266\u7832\u5747\u79fb\u9664\uff0c\u4e26\u4ee5QF 12\u78c53\u82f1\u540b40\u500d\u5f91\u9632\u7a7a\u7832\u53d6\u4ee3\uff0c\u53e6\u4ea6\u79fb\u9664\u4e86\u5169 \u5177\u9b5a\u96f7\u767c\u5c04\u7ba1\u30021918\u5e7410\u670828\u65e5\uff0c\u651d\u6d25\u865f\u6230\u8266\u6210\u70ba\u5927\u6b63\u5929\u7687\u65bc\u6d77\u4e0a\u6821\u6642\u6240\u642d\u4e58\u7684\u65d7\u8266\u3002", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "The battle foiled an attempt by the Russian fleet at Port Arthur to break out and form up with the Vladivostok squadron, forcing them to return to port. Four days later, the Battle off Ulsan similarly ended the Vladivostok group's sortie, forcing both fleets to remain at anchor. Gold Label: Siege of Tsingtao: The siege of Tsingtao (or Tsingtau) was the attack on the German port of Tsingtao (now Qingdao) in China during World War I by Japan and the United Kingdom. The siege was waged against Imperial Germany between 27", "authors": [], "year": 1904, "venue": "the Russian Navy, it was referred to as the Battle of 10 August", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Predicted Label: Battle of the Yellow Sea: The Battle of the Yellow Sea (; ) was a major naval battle of the Russo-Japanese War, fought on 10 August 1904. In the Russian Navy, it was referred to as the Battle of 10 August. The battle foiled an attempt by the Russian fleet at Port Arthur to break out and form up with the Vladivostok squadron, forcing them to return to port. Four days later, the Battle off Ulsan similarly ended the Vladivostok group's sortie, forcing both fleets to remain at anchor. 
Gold Label: Siege of Tsingtao: The siege of Tsingtao (or Tsingtau) was the attack on the German port of Tsingtao (now Qingdao) in China during World War I by Japan and the United Kingdom. The siege was waged against Imperial Germany between 27 August and 7 November 1914. The siege was the first encounter between Japanese and German forces, the first Anglo-Japanese operation of the war, and the only major land battle in the Asian and Pacific theatre during World War I. Table 12: Examples of errors by the event linking system. (language-related)", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Mention Context: He established his own production company", "authors": [], "year": null, "venue": "Emirau Productions, named after the battle in World War II in which Warren was injured", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mention Context: He established his own production company, Emirau Productions, named after the battle in World War II in which Warren was injured.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": ") was a battle of the Western Desert Campaign of the Second World War, fought in Egypt between Axis forces (Germany and Italy) of the Panzer Army Africa () (which included the under Field Marshal (", "authors": [], "year": 1942, "venue": "Erwin Rommel) and Allied (British Imperial and Commonwealth) forces", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Predicted Label: First Battle of El Alamein: The First Battle of El Alamein (1-27 July 1942) was a battle of the Western Desert Campaign of the Second World War, fought in Egypt between Axis forces (Germany and Italy) of the Panzer Army Africa () (which included the under Field Marshal () Erwin Rommel) and Allied (British Imperial and Commonwealth) forces (Britain, British India, Australia, South Africa and New Zealand) of the Eighth Army (General Claude Auchinleck).", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "The island was not occupied by the Japanese and there was no fighting. It was developed into an airbase which formed the final link in the chain of bases surrounding Rabaul. The isolation of Rabaul", "authors": [], "year": 1944, "venue": "Gold Label: Landing on Emirau: The Landing on Emirau was the last of the series of operations that made up Operation Cartwheel", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gold Label: Landing on Emirau: The Landing on Emirau was the last of the series of operations that made up Operation Cartwheel, General Douglas MacArthur's strategy for the encirclement of the major Japanese base at Rabaul. A force of nearly 4,000 United States Marines landed on the island of Emirau on 20 March 1944. The island was not occupied by the Japanese and there was no fighting. It was developed into an airbase which formed the final link in the chain of bases surrounding Rabaul. The isolation of Rabaul permitted MacArthur to turn his attention westward and commence his drive along the north coast of New Guinea toward the Philippines.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Mention Context: Ivanova won the silver medal at the 1978 World Junior Championships. 
She made her senior World debut at the 1979 World Championships, finishing 18th", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mention Context: Ivanova won the silver medal at the 1978 World Junior Championships. She made her senior World debut at the 1979 World Championships, finishing 18th. Ivanova was 16th at the 1980 Winter Olympics.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "It was the first championships that took place on an artificially refrigerated track. The track also hosted the luge world championships that same year, the first time that had ever happened in both bobsleigh and luge in a non-Winter Olympic year", "authors": [], "year": null, "venue": "Predicted Label: FIBT World Championships 1979: The FIBT World Championships 1979 took place in", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Predicted Label: FIBT World Championships 1979: The FIBT World Championships 1979 took place in K\u00f6nigssee, West Germany. It was the first championships that took place on an artificially refrigerated track. The track also hosted the luge world championships that same year, the first time that had ever happened in both bobsleigh and luge in a non-Winter Olympic year (Igls hosted both events for the 1976 games in neighboring Innsbruck.).", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "World Figure Skating Championships: The 1979 World Figure Skating Championships were held in Vienna, Austria from March 13 to 18", "authors": [ { "first": "Gold", "middle": [], "last": "Label", "suffix": "" } ], "year": 1979, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gold Label: 1979 World Figure Skating Championships: The 1979 World Figure Skating Championships were held in Vienna, Austria from March 13 to 18. 
At the event, sanctioned by the International Skating Union, medals were awarded in men's singles, ladies' singles, pair skating, and ice dance.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Mention Context: \u0418\u0437\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u043e \u043e\u0442\u043a\u0440\u044b\u0442\u0438\u0435 \u0431\u0430\u0448\u043d\u0438 \u0434\u043e\u043b\u0436\u043d\u043e \u0431\u044b\u043b\u043e \u0441\u043e\u0441\u0442\u043e\u044f\u0442\u044c\u0441\u044f \u0432 \u0434\u0435\u043a\u0430\u0431\u0440\u0435 2011 \u0433\u043e\u0434\u0430, \u043d\u043e \u043f\u043e\u0441\u043b\u0435 \u0437\u0435\u043c\u043b\u0435\u0442\u0440\u044f\u0441\u0435\u043d\u0438\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u043e \u0437\u0430\u043c\u0435\u0434\u043b\u0438\u043b\u043e\u0441\u044c \u0438\u0437-\u0437\u0430 \u043d\u0435\u0445\u0432\u0430\u0442\u043a\u0438 \u0441\u0440\u0435\u0434\u0441\u0442\u0432", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mention Context: \u0418\u0437\u043d\u0430\u0447\u0430\u043b\u044c\u043d\u043e \u043e\u0442\u043a\u0440\u044b\u0442\u0438\u0435 \u0431\u0430\u0448\u043d\u0438 \u0434\u043e\u043b\u0436\u043d\u043e \u0431\u044b\u043b\u043e \u0441\u043e\u0441\u0442\u043e\u044f\u0442\u044c\u0441\u044f \u0432 \u0434\u0435\u043a\u0430\u0431\u0440\u0435 2011 \u0433\u043e\u0434\u0430, \u043d\u043e \u043f\u043e\u0441\u043b\u0435 \u0437\u0435\u043c\u043b\u0435- \u0442\u0440\u044f\u0441\u0435\u043d\u0438\u044f \u0441\u0442\u0440\u043e\u0438\u0442\u0435\u043b\u044c\u0441\u0442\u0432\u043e \u0437\u0430\u043c\u0435\u0434\u043b\u0438\u043b\u043e\u0441\u044c \u0438\u0437-\u0437\u0430 \u043d\u0435\u0445\u0432\u0430\u0442\u043a\u0438 \u0441\u0440\u0435\u0434\u0441\u0442\u0432.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "The () earthquake struck the Canterbury region in the South Island, centred south-east of the centre of Christchurch, the country's second-most populous city. It caused widespread damage across Christchurch, killing 185 people", "authors": [], "year": 2011, "venue": "at 12:51 p.m. local time", "volume": "23", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Predicted Label: 2011 Christchurch earthquake: A major earthquake occurred in Christchurch, New Zealand, on Tuesday 22 February 2011 at 12:51 p.m. local time (23:51 UTC, 21 February). The () earthquake struck the Canterbury region in the South Island, centred south-east of the centre of Christchurch, the country's second-most populous city. It caused widespread damage across Christchurch, killing 185 people, in the nation's fifth-deadliest disaster.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "1 (Mw) undersea megathrust earthquake had an epicenter in the Pacific Ocean, east of the Oshika Peninsula of the T\u014dhoku region, and lasted approximately six minutes, causing a tsunami", "authors": [], "year": null, "venue": "UTC) on 11 March. The magnitude 9", "volume": "", "issue": "", "pages": "0--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gold Label: 2011 T\u014dhoku earthquake and tsunami: The occurred at 14:46 JST (05:46 UTC) on 11 March. The magnitude 9.0-9.1 (Mw) undersea megathrust earthquake had an epicenter in the Pacific Ocean, east of the Oshika Peninsula of the T\u014dhoku region, and lasted approximately six minutes, causing a tsunami. It is sometimes known in Japan as the , among other names. 
The disaster is often referred to in both Japanese and English as simply 3.11 (readsan ten ichi-ichi\u00efn Japanese).", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Mention Context: \u30dd\u30ef\u30f3\u30c8\u30fb\u30c7\u30e5\u30fb\u30aa\u30c3\u30af (Pointe du Hoc) \u304b\u3089\u5411\u304b\u3063\u305f\u30a2\u30e1\u30ea\u30ab\u8ecd\u306e\u30ec\u30f3\u30b8\u30e3\u30fc\u90e8\u968a\u306e8\u500b\u4e2d", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mention Context: \u30dd\u30ef\u30f3\u30c8\u30fb\u30c7\u30e5\u30fb\u30aa\u30c3\u30af (Pointe du Hoc) \u304b\u3089\u5411\u304b\u3063\u305f\u30a2\u30e1\u30ea\u30ab\u8ecd\u306e\u30ec\u30f3\u30b8\u30e3\u30fc\u90e8\u968a\u306e8\u500b\u4e2d", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "\u306e\u5357\u306b\u9032\u51fa\u3059\u308b\u524d\u306b\u30dd\u30fc\u30eb\uff1d\u30a2\u30f3\uff1d\u30d9\u30c3\u30b5\u30f3 (Port-en-Bessin) \u3068\u30f4\u30a3\u30eb\u5ddd", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "\u30aa\u30de\u30cf\u30d3\u30fc\u30c1\u306e\u4e0a\u9678\u90e8\u968a\u306e\u4e3b\u76ee\u6a19\u306f\u3001\u30b5\u30f3\uff1d\u30ed\u30fc (Saint-L\u00f4) \u306e\u5357\u306b\u9032\u51fa\u3059\u308b\u524d\u306b\u30dd\u30fc\u30eb\uff1d\u30a2\u30f3\uff1d\u30d9\u30c3\u30b5\u30f3 (Port-en-Bessin) \u3068\u30f4\u30a3\u30eb\u5ddd (Vire River) \u9593\u306e\u6a4b\u982d\u5821\u3092\u5b88\u308b\u3053\u3068\u3067\u3042\u3063\u305f\u3002", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "The Allies consisted of British Imperial Forces, including a Greek contingent, with American and French corps. The battle opened with initial success by the German and Italian forces but the massive supply interdiction efforts led to the decisive defeat of the Axis. Over 250,000 German and Italian troops were taken as prisoners of war, including most of the Afrika Korps", "authors": [], "year": 1942, "venue": "Predicted Label: Tunisian campaign: The Tunisian campaign (also known as the Battle of Tunisia) was a series of battles that took place in Tunisia during the North African campaign of the Second World War, between Axis and Allied forces", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Predicted Label: Tunisian campaign: The Tunisian campaign (also known as the Battle of Tunisia) was a series of battles that took place in Tunisia during the North African campaign of the Second World War, between Axis and Allied forces. The Allies consisted of British Imperial Forces, including a Greek contingent, with American and French corps. The battle opened with initial success by the German and Italian forces but the massive supply interdiction efforts led to the decisive defeat of the Axis. Over 250,000 German and Italian troops were taken as prisoners of war, including most of the Afrika Korps. Gold Label: Operation Torch: Operation Torch (8 November 1942 -16 November 1942) was an Allied invasion of French North Africa during the Second World War. While the French colonies formally aligned with Germany via Vichy France, the loyalties of the population were mixed. Reports indicated that they might support the Allies. American General Dwight D. 
Eisenhower, supreme commander of the Allied forces in Mediterranean Theater of Operations, planned a three-pronged attack on Casablanca (Western), Oran (Center) and Algiers (Eastern), then a rapid move on Tunis to catch Axis forces in North Africa from the west in conjunction with Allied advance from east.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "An illustration of event hierarchy in Wikidata." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "Statistics of events and mentions per language in the proposed dataset. The languages are sorted in the decreasing order of # events. The counts on y-axis are presented in log scale." }, "FIGREF2": { "type_str": "figure", "num": null, "uris": null, "text": "Retrieval performance on dev split." }, "FIGREF3": { "type_str": "figure", "num": null, "uris": null, "text": "Effect of context window size on BM25+ retrieval performance." }, "FIGREF4": { "type_str": "figure", "num": null, "uris": null, "text": "H i n d i M a l a y a l a m M a r a t h i T e l u g u S i n h a l a S w a h i l i" }, "FIGREF6": { "type_str": "figure", "num": null, "uris": null, "text": "Retrieval recall scores on development set for mBERT and XLM-R in multilingual and crosslingual settings." }, "FIGREF7": { "type_str": "figure", "num": null, "uris": null, "text": "Retrieval recall scores on development set for BM25+ in multilingual setting." }, "FIGREF8": { "type_str": "figure", "num": null, "uris": null, "text": "Test accuracy of mBERT-bi and mBERT-cross in multilingual and crosslingual tasks. The languages on the x-axis are sorted in the increasing order of mentions." }, "FIGREF9": { "type_str": "figure", "num": null, "uris": null, "text": "i n d i M a l a y a l a m M a r a t h i T e l u g u S w a h i l i S i n h a l a" }, "FIGREF11": { "type_str": "figure", "num": null, "uris": null, "text": "Test accuracy of mBERT-bi, XLM-R-bi, mBERT-cross, XLM-R-cross in multilingual and crosslingual tasks. The languages on the x-axis are sorted in the increasing order of mentions." }, "TABREF0": { "html": null, "text": "Aliaksandra Herasimenia est une nageuse bi\u00e9lorusse en activit\u00e9 sp\u00e9cialiste des \u00e9preuves de sprint en nage libre et en dos. ... Multiple m\u00e9daill\u00e9e au niveau plan\u00e9taire et continental, elle d\u00e9croche en 2010 son premier titre international majeur lors des Championnats d'Europe de Budapest, sur dos. (frwiki) Aliaksandra Herasimenia Minibaev's first major international medal came in the men's synchronized 10 metre platform event at the 2010 European Championships. Mention from language Wikipedia La des Championnats d'Europe de natation se tient du 4 au \u00e0 Budapest en Hongrie. C'est la quatri\u00e8me fois que la capitale hongroise accueille l'\u00e9v\u00e9nement bisannuel organis\u00e9 par la Ligue europ\u00e9enne de natation apr\u00e8s les \u00e9ditions 1926, 1958 et 2006. (frwiki) Championnats d'Europe de natation 2010 The 2010 European Aquatics Championships were held from 4-15 August 2010 in Budapest and Balatonf\u00fcred, Hungary. It was the fourth time that the city of Budapest hosts this event after 1926, 1958 and 2006. Events in swimming, diving, synchronised swimming (synchro) and open water swimming were scheduled. (enwiki) 2010 European Aquatics Championships Die 30. Schwimmeuropameisterschaften fanden vom 4. bis 15. 
August 2010 nach 1926, 1958 und 2006 zum vierten Mal in der ungarischen Hauptstadt Budapest statt.", "type_str": "table", "num": null, "content": "
Event Description from language WikipediaEvent ID from Wikidata
(enwiki) Viktor Minibaev
Q830917
Bei Schwimmeuropameisterschaften gewann sie insgesamt drei Medaillen. 2006 und 2010 gewann sie in ihrer Heimatstadt Budapest jeweils Bronze vom 3 m-Brett, 2008 holte sie in Eindhoven Silber vom 1 m-Brett. (dewiki) N\u00f3ra Barta (dewiki) Schwimmeuropameisterschaften 2010
" }, "TABREF3": { "html": null, "text": "", "type_str": "table", "num": null, "content": "
Table 1: Dataset Summary
3 Multilingual Event Linking Dataset
Our data collection methodology is closely related to the zero-shot entity linking work of Botha et al. (2020), but we take a top-down approach starting from Wikidata. Eirew et al. (2021) identified event pages from English Wikipedia by processing the infobox elements; however, we found relying on Wikidata for event identification to be more robust. Additionally, Wikidata serves as our interlingua, connecting mentions from numerous languages.
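To make the selection criteria concrete (temporal: duration OR point-in-time OR start-time AND end-time; spatial: location OR coordinate-location; see notes 5-6), here is a minimal sketch of an equivalent filter against Wikidata's public SPARQL endpoint. This is our own illustration, not the released pipeline, and it omits the exclusion-property filtering of Table 8.

```python
# Illustrative sketch only (not the authors' released code): approximate the
# event filter via Wikidata's public SPARQL endpoint. Wikidata properties:
# P2047 duration, P585 point in time, P580 start time, P582 end time,
# P276 location, P625 coordinate location.
import requests

QUERY = """
SELECT DISTINCT ?item WHERE {
  { ?item wdt:P2047 [] } UNION { ?item wdt:P585 [] }
  UNION { ?item wdt:P580 [] . ?item wdt:P582 [] }   # temporal criterion
  { ?item wdt:P276 [] } UNION { ?item wdt:P625 [] } # spatial criterion
}
LIMIT 100
"""

def candidate_event_items():
    resp = requests.get(
        "https://query.wikidata.org/sparql",
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "xlel-wd-sketch/0.1 (illustration)"},
    )
    resp.raise_for_status()
    return [row["item"]["value"] for row in resp.json()["results"]["bindings"]]
```

A collection at the scale of this dataset would more plausibly process a full Wikidata dump than live queries; the LIMIT above only keeps the sketch runnable.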
" }, "TABREF5": { "html": null, "text": "", "type_str": "table", "num": null, "content": "" }, "TABREF7": { "html": null, "text": "Event linking accuracy on Wikinews test set. CD and ZS indicate cross-domain and zero-shot.", "type_str": "table", "num": null, "content": "
" }, "TABREF8": { "html": null, "text": "", "type_str": "table", "num": null, "content": "
" }, "TABREF9": { "html": null, "text": "the Association for Computational Linguistics, pages 9-16, Trento, Italy. Association for Computational Linguistics. Khyathi Raghavi Chandu, Yonatan Bisk, and Alan W Black. 2021. Grounding 'grounding' in NLP. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4283-4305, Online. Association for Computational Linguistics.", "type_str": "table", "num": null, "content": "
A Appendix
A.1 Ethical Considerations

Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454-470.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.

Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Alon Eirew, Arie Cattan, and Ido Dagan. 2021. WEC: Deriving a large-scale cross-document event coreference dataset from Wikipedia. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2498-2510, Online. Association for Computational Linguistics.

Yotam Eshel, Noam Cohen, Kira Radinsky, Shaul Markovitch, Ikuya Yamada, and Omer Levy. 2017. Named entity disambiguation for noisy text. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 58-68, Vancouver, Canada. Association for Computational Linguistics.

Heng Ji and Ralph Grishman. 2011. Knowledge base population: Successful approaches and challenges. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1148-1158, Portland, Oregon, USA. Association for Computational Linguistics.

Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.

Adam Lerer, Ledell Wu, Jiajun Shen, Timothee Lacroix, Luca Wehrstedt, Abhijit Bose, and Alex Peysakhovich. 2019. PyTorch-BigGraph: A large scale graph embedding system. In Proceedings of Machine Learning and Systems, volume 1, pages 120-131.

Robert Logan, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. 2019. Barack's wife Hillary: Using knowledge graphs for fact-aware language modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5962-5971, Florence, Italy. Association for Computational Linguistics.

Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading entity descriptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3449-3460, Florence, Italy. Association for Computational Linguistics.

Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.

Yuanhua Lv and ChengXiang Zhai. 2011a. Lower-bounding term frequency normalization. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, CIKM '11, pages 7-16, New York, NY, USA. Association for Computing Machinery.

Yuanhua Lv and ChengXiang Zhai. 2011b. When documents are very long, BM25 fails! In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '11, pages 1103-1104, New York, NY, USA. Association for Computing Machinery.

Rada Mihalcea and Andras Csomai. 2007. Wikify! Linking documents to encyclopedic knowledge. In Proceedings of the Sixteenth ACM Conference on Conference on Information and Knowledge Management, CIKM '07, pages 233-242, New York, NY, USA. Association for Computing Machinery.

Anne-Lyse Minard, Manuela Speranza, Ruben Urizar, Bego\u00f1a Altuna, Marieke van Erp, Anneleen Schoen, and Chantal van Son. 2016. MEANTIME, the NewsReader multilingual event and time corpus. In
" }, "TABREF11": { "html": null, "text": "Event candidate retrieval results, Recall@8.", "type_str": "table", "num": null, "content": "" }, "TABREF12": { "html": null, "text": "title [SEP] date [SEP] left context [MENTION _ START] mention [MENTION _ END] right context [SEP]\".", "type_str": "table", "num": null, "content": "
" }, "TABREF13": { "html": null, "text": "", "type_str": "table", "num": null, "content": "
Table 10 presents examples of system errors due to insufficient temporal reasoning in the context.
" }, "TABREF14": { "html": null, "text": "", "type_str": "table", "num": null, "content": "
Event linking accuracy on Wikinews test set. For each configuration, we report results using just the mention context (Ctxt), mention context + article publication date (Ctxt+date), mention context + article title (Ctxt+title), and mention context + article date & title (Ctxt+date+title). Most of the gain comes from including the date, across all model configurations and tasks.
Table 11 presents examples of system errors on mentions that are temporal or spatial expressions. Table 12 presents examples of system errors on the crosslingual task, caused by difficulties in handling non-English mentions. Table 13 presents examples of system errors that stem from errors in the dataset itself.
" }, "TABREF15": { "html": null, "text": "List of properties used for postprocessing Wikidata events. If a candidate event has the property 'P31', we prune them depending on the corresponding. For example, we only prune items that are instances of empire, historical period etc., For other properties like P527, P36, we prune items if they contain this property.", "type_str": "table", "num": null, "content": "
Language | Code | Events | Mentions | Genus
Afrikaans | af | 316 | 2036 | Germanic
Arabic | ar | 2691 | 28801 | Semitic
Belarusian | be | 737 | 7091 | Slavic
Bulgarian | bg | 1426 | 12570 | Slavic
Bengali | bn | 270 | 3136 | Indic
Catalan | ca | 2631 | 22296 | Romance
Czech | cs | 2839 | 36658 | Slavic
Danish | da | 1189 | 10267 | Germanic
German | de | 7371 | 209469 | Germanic
Greek | el | 997 | 13361 | Greek
English | en | 10747 | 328789 | Germanic
Spanish | es | 5064 | 91896 | Romance
Persian | fa | 1566 | 10449 | Iranian
Finnish | fi | 3253 | 47944 | Finnic
French | fr | 8183 | 136482 | Romance
Hebrew | he | 1871 | 34470 | Semitic
Hindi | hi | 216 | 1219 | Indic
Hungarian | hu | 3067 | 27333 | Ugric
Indonesian | id | 2274 | 14049 | Malayo-Sumbawan
Italian | it | 7116 | 108012 | Romance
Japanese | ja | 3832 | 49198 | Japanese
Korean | ko | 1732 | 13544 | Korean
Malayalam | ml | 136 | 730 | Southern Dravidian
Marathi | mr | 132 | 507 | Indic
Malay | ms | 824 | 4650 | Malayo-Sumbawan
Dutch | nl | 4151 | 41973 | Germanic
Norwegian | no | 2514 | 24092 | Germanic
Polish | pl | 6270 | 110381 | Slavic
Portuguese | pt | 4466 | 45125 | Romance
Romanian | ro | 1224 | 12117 | Romance
Russian | ru | 7929 | 180891 | Slavic
Sinhala | si | 31 | 65 | Indic
Slovak | sk | 726 | 5748 | Slavic
Slovene | sl | 1288 | 8577 | Slavic
Serbian | sr | 1611 | 24093 | Slavic
Swedish | sv | 2865 | 23152 | Germanic
Swahili | sw | 22 | 74 | Bantoid
Tamil | ta | 250 | 1682 | Southern Dravidian
Telugu | te | 39 | 243 | South-Central Dravidian
Thai | th | 800 | 4749 | Kam-Tai
Turkish | tr | 2342 | 19846 | Turkic
Ukrainian | uk | 3428 | 53098 | Slavic
Vietnamese | vi | 1439 | 13744 | Viet-Muong
Chinese | zh | 2759 | 21259 | Chinese
Total | | 10947 | 1805866 |
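Note that the Events column does not sum to the 10947 total: the same Wikidata event is typically mentioned in several languages, so the total counts distinct events, while the Mentions column does sum to 1805866. A toy aggregation, with field names that are our assumption rather than the dataset's actual schema, illustrates the distinction.

```python
from collections import defaultdict

# Toy mention records; "language" and "event_qid" are assumed field names.
mentions = [
    {"language": "en", "event_qid": "Q362"},
    {"language": "de", "event_qid": "Q362"},
    {"language": "de", "event_qid": "Q8740"},
]

per_lang_events = defaultdict(set)
per_lang_mentions = defaultdict(int)
for m in mentions:
    per_lang_events[m["language"]].add(m["event_qid"])
    per_lang_mentions[m["language"]] += 1

# Distinct events overall: a set union, not the sum of per-language
# counts, because events are shared across languages.
total_events = set().union(*per_lang_events.values())
print(len(total_events), sum(per_lang_mentions.values()))  # -> 2 3
```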
" }, "TABREF16": { "html": null, "text": "", "type_str": "table", "num": null, "content": "" } } } }