{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:14:07.920086Z"
},
"title": "SU-NLP at CASE 2021 Task 1: Protest News Detection for English",
"authors": [
{
"first": "Furkan",
"middle": [
"\u00c7"
],
"last": "Elik",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sabanc\u0131 Universit\u1e8f Istanbul",
"location": {
"country": "Turkey"
}
},
"email": "fcelik@sabanciuniv.edu"
},
{
"first": "Tugberk",
"middle": [],
"last": "Dalk\u0131l\u0131\u00e7",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sabanc\u0131 Universit\u1e8f Istanbul",
"location": {
"country": "Turkey"
}
},
"email": ""
},
{
"first": "Fatih",
"middle": [],
"last": "Beyhan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sabanc\u0131 Universit\u1e8f Istanbul",
"location": {
"country": "Turkey"
}
},
"email": "fatihbeyhan@sabanciuniv.edu"
},
{
"first": "Reyyan",
"middle": [],
"last": "Yeniterzi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Sabanc\u0131 Universit\u1e8f Istanbul",
"location": {
"country": "Turkey"
}
},
"email": "reyyan@sabanciuniv.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper summarizes our group's efforts in the multilingual protest news detection shared task, which is organized as a part of the Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE) Workshop. We participated in all four subtasks in English. Especially in the identification of event containing sentences task, our proposed ensemble approach using RoBERTa and multichannel CNN-LexStem model yields higher performance. Similarly in the event extraction task, our transformer-LSTM-CRF architecture outperforms regular transformers significantly.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper summarizes our group's efforts in the multilingual protest news detection shared task, which is organized as a part of the Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE) Workshop. We participated in all four subtasks in English. Especially in the identification of event containing sentences task, our proposed ensemble approach using RoBERTa and multichannel CNN-LexStem model yields higher performance. Similarly in the event extraction task, our transformer-LSTM-CRF architecture outperforms regular transformers significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Identifying events and extracting event related information from text is an important language understanding task which has been studied for quite some time. This challenging task has been studied in several steps or divided into some sub-tasks. The first step is identifying whether a document or a sentence contains an event or not. If it contains then the event co-reference resolution task analyses whether the context around it (such as other sentences) refer to the same event or not. Event related information such as the event trigger and its arguments are also extracted, which can be later on used to create event taxonomies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These steps either alone or together have been studied for English extensively, similar to many other Natural Language Processing tasks. This year as part of the Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE) Workshop, a shared task covering some of these sub-tasks has been organized not only for English but also for Portuguese, Spanish and Hindi (H\u00fcrriyetoglu et al., 2021) . The common theme was the identification of protest events from news articles.",
"cite_spans": [
{
"start": 397,
"end": 424,
"text": "(H\u00fcrriyetoglu et al., 2021)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The organizers specifically focus on the four subtasks. In the first and second sub-task, the aim is to predict whether a given document (subtask 1) or sentence (subtask 2) contains information about an event (either past or ongoing). The third subtask focuses on event sentence coreference and the participants are asked to predict whether the sentences containing an event are referring to the same event or not. In subtask 4, the goal is to identify event triggers and related arguments from sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is hard to choose among these interesting subtasks, therefore we participate in all four of them. Due to time constraints we only work on English and leave the rest of the languages as future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The first and the second subtask focus on predicting whether a content contains an event or not. For these tasks in addition to trying standard transformer based models, we explore ensemble models which combine the strengths of different models. Furthermore, the effect of stemming the context is also explored in these subtasks. The third subtask is related to the event coreference task. For this task, we explore the rescoring and clustering approach proposed by (\u00d6rs et al., 2020) . Finally, the goal of subtask 4 is to extract event information from context. For this task, we exploit the transformer-LSTM-CRF architecture which has shown success in several NER tasks.",
"cite_spans": [
{
"start": 466,
"end": 484,
"text": "(\u00d6rs et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as following: Section 2 describes our proposed approach for identifying whether a content contains an event or not, and details our submissions for subtasks 1 and 2. Section 3 explains our submission to the event coreference resolution subtask. Section 4 presents the experimental results for event extraction subtask and finally Section 5 concludes the paper with future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of the first two subtasks is to predict whether the provided input context contains an event (either past or ongoing) or not. Therefore, the task is a binary classification task. In these two subtasks the only difference is the input context. In subtask 1 the input is the whole news article while in subtask 2, it is only a sentence. The main difference between these two tasks is the length of the input. In subtask 1's dataset, even though most documents contain around 3 sentences, the maximum length in the data is almost 10 times larger than the maximum length in subtask 2 data. This makes subtask 1 slightly more challenging. One expects documents as longer input, to contain more clues about an event if there is; therefore more useful. However, there is also the risk of unrelated content causing mixed signals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 1 & 2: Event or Not",
"sec_num": "2"
},
{
"text": "Even though this difference between the tasks, we mostly apply same approaches to both. For this binary classification problem, we use some simple neural network architectures as baselines and also investigate fine-tuning several pretrained transformer based models. The models applied are listed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 1 & 2: Event or Not",
"sec_num": "2"
},
{
"text": "\u2022 CNN: A single convolutional layer connected to a fully connected dense layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 1 & 2: Event or Not",
"sec_num": "2"
},
{
"text": "\u2022 LSTM: A unidirectional long short term memory model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 1 & 2: Event or Not",
"sec_num": "2"
},
{
"text": "\u2022 GRU: A unidirectional gated recurrent unit model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 1 & 2: Event or Not",
"sec_num": "2"
},
{
"text": "\u2022 BERT (Devlin et al., 2019) : Uses bidirectional transformer architecture for language modeling. We fine-tune the BERT-base-cased 1 model.",
"cite_spans": [
{
"start": 7,
"end": 28,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 1 & 2: Event or Not",
"sec_num": "2"
},
{
"text": "\u2022 Albert (Lan et al., 2019) : An efficient (A Lite BERT) version of BERT which outperformed BERT in several benchmark data sets. We fine-tune the Albert-base-v2 model 2 in this paper.",
"cite_spans": [
{
"start": 9,
"end": 27,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 1 & 2: Event or Not",
"sec_num": "2"
},
{
"text": "\u2022 RoBERTa : A robustly optimized version of BERT which outperformed BERT in GLUE benchmark. We fine-tune the RoBERTa-base model 3 in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 1 & 2: Event or Not",
"sec_num": "2"
},
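{
"text": "As an illustration of this fine-tuning setup, the following is a minimal sketch using the HuggingFace transformers library; the example sentence and the max_length value are purely illustrative and not taken from the shared task configuration:\n\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\n# Load the pretrained checkpoint with a fresh 2-class classification head.\ntokenizer = AutoTokenizer.from_pretrained('roberta-base')\nmodel = AutoModelForSequenceClassification.from_pretrained('roberta-base', num_labels=2)\n\n# A single sentence (subtask 2) or a whole article (subtask 1) is encoded the same way.\nbatch = tokenizer(['Protesters marched in the capital on Friday.'], truncation=True, max_length=256, padding=True, return_tensors='pt')\nwith torch.no_grad():\n    probs = torch.softmax(model(**batch).logits, dim=-1)\n# probs[:, 1] is the predicted probability that the input contains an event.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 1 & 2: Event or Not",
"sec_num": "2"
},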
{
"text": "For neural networks like CNN and RNN, several pretrained word embeddings, like Google News 1 https://huggingface.co/ bert-base-cased 2 https://huggingface.co/albert-base-v2 3 https://huggingface.co/roberta-base Word2Vec 4 (Mikolov et al., 2013) , NNLM (Bengio et al., 2003) model trained on Google News dataset 5 and GloVe (Pennington et al., 2014) 6B Wikipedia embeddings 6 , have been tried. Since the ratio of out-of-vocabulary words were very small, character-based embeddings have not been explored. We have seen that using different embeddings resulted in minor changes, and rather finetuning the embedding layer or not, does not have any significant effect on the performance of models in terms of overfitting resistance or achieved scores. NNLM and GloVe return slightly better performance compared to Word2Vec, when used in standalone CNN or RNN models. However, as we try ensembling approaches (to be described in the upcoming sections), NNLM outperforms GloVe with its high Precision score. Therefore, NNLM embedding is used in all reported experiments in this section.",
"cite_spans": [
{
"start": 222,
"end": 244,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF9"
},
{
"start": 252,
"end": 273,
"text": "(Bengio et al., 2003)",
"ref_id": "BIBREF1"
},
{
"start": 323,
"end": 348,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 1 & 2: Event or Not",
"sec_num": "2"
},
{
"text": "In all these subtasks, the data collections were gathered from news articles about socio-political and crisis conflicts. For the document classification task, we are provided with an imbalance training data of 9324 news articles with 7407 of them without any events and the rest as containing event. Similarly in subtask 2, among the provided 22825 sentences, only 4210 of them contain an event while the rest of them do not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Experiments",
"sec_num": "2.1"
},
{
"text": "For both tasks, 20% of the provided data is used for validation purposes and rest for model training. During the training process, several balancing approaches were applied to decrease any possible negative effects caused by the imbalance data problem. But overall they did not provide any significant improvements in F1 score; therefore data is used in its original ratio without any balancing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Experiments",
"sec_num": "2.1"
},
{
"text": "The experimental results of the baseline approaches are displayed in Tables 1 and 2. In subtask 1, except for RNNs, all methods listed above were tested. RNNs were not tested due to limited time and prioritization of computational resources for other more advance models. Only a single layer CNN is used in the experiments, since adding more layers caused over-fitting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Experiments",
"sec_num": "2.1"
},
{
"text": "Validation Based on the results, transformer based approaches outperform classical neural network based approaches in both tasks. In traditional neural network based models, RNN based ones, both LSTM and GRU, suffer from serious overfitting even though all the efforts of regularization and dropout. Regarding the transformer-based models, in both subtasks, RoBERTa outperforms both BERT and Albert with close margin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "In the task definition, it is mentioned that the labeled events can be either from past or continuous. This suggests various types of tense use in the context. This variety may cause model to miss some events. In order to deal with this variety, in addition to the lexical forms of the words, their stemmed versions are also included to CNN model as additional channel in the network. WordNetLemmatizer 7 is used as the stemmer. In this proposed model, which is named as LexStem model, one channel is used for the original form of the sentence and another channel for the stemmed version.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LexStem Model",
"sec_num": "2.2"
},
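{
"text": "A minimal sketch of this two-channel idea in PyTorch is given below. Implementation details such as filter sizes, whether embeddings or convolutions are shared across channels, the pooling operation, and the pos='v' lemmatization argument are not specified above, so the choices here are illustrative:\n\nimport torch\nimport torch.nn as nn\nfrom nltk.stem import WordNetLemmatizer\n\ndef stem_tokens(tokens):\n    # Stemmed version of the input, fed to the second channel.\n    lemmatizer = WordNetLemmatizer()\n    return [lemmatizer.lemmatize(t, pos='v') for t in tokens]\n\nclass LexStemCNN(nn.Module):\n    def __init__(self, vocab_size, emb_dim=128, n_filters=128, kernel=5):\n        super().__init__()\n        # Separate embedding tables for the lexical and the stemmed channel.\n        self.lex_emb = nn.Embedding(vocab_size, emb_dim)\n        self.stem_emb = nn.Embedding(vocab_size, emb_dim)\n        self.conv = nn.Conv1d(emb_dim, n_filters, kernel)  # single conv layer, as in the baseline CNN\n        self.fc = nn.Linear(2 * n_filters, 1)\n\n    def _channel(self, emb, ids):\n        x = emb(ids).transpose(1, 2)       # (batch, emb_dim, seq_len)\n        x = torch.relu(self.conv(x))\n        return torch.max(x, dim=2).values  # global max pooling\n\n    def forward(self, lex_ids, stem_ids):\n        h = torch.cat([self._channel(self.lex_emb, lex_ids), self._channel(self.stem_emb, stem_ids)], dim=1)\n        return torch.sigmoid(self.fc(h))   # P(input contains an event)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LexStem Model",
"sec_num": "2.2"
},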
{
"text": "In order to make a fair comparison of the LexStem model, additional CNN multi-channel models are trained as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LexStem Model",
"sec_num": "2.2"
},
{
"text": "\u2022 CNN-LexLex: A two channels model with original form of the words are used in both channels. This one is developed to see the effect of two channels compared to one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LexStem Model",
"sec_num": "2.2"
},
{
"text": "\u2022 CNN-StemStem: A two channels model with stemmed version of the words are used in both channels. This one is developed to see the individual effect of stem information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LexStem Model",
"sec_num": "2.2"
},
{
"text": "\u2022 CNN-LexStem: The proposed two channel model with one channel for lexical form of the word and the other for stemmed version.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LexStem Model",
"sec_num": "2.2"
},
{
"text": "The experimental results of these models are displayed in Table 3 . In the table, the first two rows are from subtask 1 and the rest of them are from subtask 2. The proposed LexStem model does not provide any significant improvements in subtask 1, therefore other multi-channel models are not tested with this task. Unlike subtask 1, for subtask 2 the LexStem model provides drastic improvements with validation data, but only slight improvement on test data. A similar improvement on test set is also observed at subtask 1. Using multi-channel architecture and therefore using more parameters probably increases model's likelihood of overfitting. This is more observable with CNN-LexLex and CNN-StemStem models. Even though with this increased overfitting possibility, CNN-LexStem model returns small yet consistent increase on test set. The possible reasons of this improvement will be explored more in the future.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "LexStem Model",
"sec_num": "2.2"
},
{
"text": "RoBERTa model outperforms all other models, therefore we specifically analyze its performance and its confidence of its predictions on the validation set. Figure 1 displays how the average F1 score changes with respect to model's confidence values. In the figure, 0.05-0.95 means RoBERTa's predictions which are lower than 0.05 or higher than 0.95.",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ensemble Models",
"sec_num": "2.3"
},
{
"text": "According to the Figure 1 , confidence scores lower than 10% and higher than 90% achieve the highest Macro F1 score of 94% and after this, as confidence values go below 90% or above 10%, the F1 score consistently decreases. This means that as RoBERTa gets more unsure of its predictions, it is making more mistakes as expected. In order to prevent these errors, ensemble models are explored.",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 25,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Ensemble Models",
"sec_num": "2.3"
},
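{
"text": "The curve in Figure 1 can be reproduced with a few lines of code. The following sketch (function and variable names are ours) computes the Macro F1 over only those validation predictions that fall outside a given confidence band:\n\nfrom sklearn.metrics import f1_score\n\ndef f1_outside_band(probs, labels, low, high):\n    # probs and labels are NumPy arrays of predicted probabilities and gold labels.\n    # Keep only the predictions the model is confident about: p < low or p > high.\n    mask = (probs < low) | (probs > high)\n    preds = (probs[mask] > 0.5).astype(int)\n    return f1_score(labels[mask], preds, average='macro')\n\n# Leftmost point of Figure 1 (the 0.05-0.95 band):\n# f1_outside_band(val_probs, val_labels, 0.05, 0.95)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Models",
"sec_num": "2.3"
},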
{
"text": "A weighted ensemble model is applied for any case in which RoBERTa is not confident. After trying several threshold values, 0.1 and 0.9 is chosen. Cases where RoBERTa's output are higher than 0.9 or lower than 0.1, are accepted as they are. For anything in between, an ensemble model is used. In order to find the right models to ensemble, a grid search is applied. RoBERTa is assumed to be the permanent model in this ensemble. Therefore, the search is performed over other models as either individual or in groups of two. The following models and weights return the highest performance for subtask 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Models",
"sec_num": "2.3"
},
{
"text": "\u2022 RoBERTa-RNN: 0.4 RoBERTa + 0.15 LSTM + 0.45 GRU",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Models",
"sec_num": "2.3"
},
{
"text": "\u2022 RoBERTa-LexStem: 0.45 RoBERTa + 0.55",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Models",
"sec_num": "2.3"
},
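{
"text": "Putting the confidence gate and the weighted vote together, the resulting decision rule can be sketched as follows; the gate thresholds and weights are the ones reported above, while the final 0.5 decision boundary is our assumption:\n\ndef ensemble_predict(p_roberta, p_lstm, p_gru):\n    # Accept RoBERTa's prediction as-is when it is confident.\n    if p_roberta < 0.1 or p_roberta > 0.9:\n        p = p_roberta\n    else:\n        # Weighted vote found by grid search (RoBERTa-RNN ensemble).\n        p = 0.4 * p_roberta + 0.15 * p_lstm + 0.45 * p_gru\n    return int(p > 0.5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Models",
"sec_num": "2.3"
},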
{
"text": "The performance of these ensembles together with individual model performances are presented in Table 4 . The ensemble model is only applied for subtask 2. As for subtask 1, we don't have any RNN model to ensemble or the CNN-LexStem did not provide any improvement on the validation set.",
"cite_spans": [],
"ref_spans": [
{
"start": 96,
"end": 103,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "CNN-LexStem",
"sec_num": null
},
{
"text": "According to model is not confident; using a weighted voting and combining these powers can be useful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-LexStem",
"sec_num": null
},
{
"text": "In conclusion, for subtask 1 RoBERTa is the top performing model based on the validation set and it is ranked the 3rd place in the public leaderboard. For subtask 2, our ensemble models receive the 3rd rank in the leaderboard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CNN-LexStem",
"sec_num": null
},
{
"text": "In event sentence coreference task, event containing sentences in a document are analyzed to see whether they refer to the same event or not. This task is slightly different than other ones as it does not only consist of a classification step, but also requires clustering afterwards. This two step procedure is known as the Mention-Pair model (Ng, 2010) in coreference resolution tasks. The first step includes a binary classification model to classify pairs of mentions and the second step uses these predictions to determine the coreference relations by clustering them (Ng, 2010) . In this paper, we also use the two step approach, and first perform pairwise classification of sentences and then cluster them.",
"cite_spans": [
{
"start": 344,
"end": 354,
"text": "(Ng, 2010)",
"ref_id": "BIBREF10"
},
{
"start": 573,
"end": 583,
"text": "(Ng, 2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 3: Event Sentence Coreference Identification",
"sec_num": "3"
},
{
"text": "For the classification part, similar to previous subtasks, base models of BERT, ALBERT and RoBERTa are fine-tuned. Additionally, an ensemble model which is a probabilistic average of these three models, is developed. In all these four binary classification models, instead of using the regular 0.5 boundary, 0.6 boundary is used to identify the positive labels, since 0.6 threshold returned better performance in our experiments. For the clustering step, (\u00d6rs et al., 2020)'s clustering approach together with their proposed rescoring algorithm is used. Their rescoring algorithm calculates an updated score for a pair of sentences by using how sentences within the pair interact with other sentences in the document. For instance, the following pair of sentences, s 1 and s 2 , has positive label predicted. If the predicted label between s 1 and s 3 is same as the prediction between s 2 and s 3 , then a reward is given to s 1 and s 2 pair. But if the labels are different, then a penalty is applied. After the scores are updated, a greedy agglomerative algorithm is applied to construct the clusters (\u00d6rs et al., 2020) . The same rescoring and clustering approach is used in this paper as well.",
"cite_spans": [
{
"start": 1104,
"end": 1122,
"text": "(\u00d6rs et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Two-Step Approach",
"sec_num": "3.1"
},
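{
"text": "A sketch of the rescoring and clustering steps is given below. The exact reward and penalty magnitudes and the linkage criterion of the greedy agglomerative step are not restated from (\u00d6rs et al., 2020); the unit values and the average-score linkage used here are illustrative:\n\nimport itertools\n\ndef rescore(labels, n, reward=1.0, penalty=1.0):\n    # labels[(i, j)] is the predicted binary label for sentence pair (i, j), i < j.\n    scores = {}\n    for i, j in itertools.combinations(range(n), 2):\n        score = float(labels[(i, j)])\n        for k in range(n):\n            if k in (i, j):\n                continue\n            a = labels[tuple(sorted((i, k)))]\n            b = labels[tuple(sorted((j, k)))]\n            # Agreement with a third sentence rewards the pair; disagreement penalizes it.\n            score += reward if a == b else -penalty\n        scores[(i, j)] = score\n    return scores\n\ndef greedy_cluster(scores, n, threshold=0.0):\n    clusters = [{i} for i in range(n)]\n    while True:\n        best, pair = threshold, None\n        for a, b in itertools.combinations(range(len(clusters)), 2):\n            # Average rescored value between the two candidate clusters.\n            s = sum(scores[tuple(sorted((i, j)))] for i in clusters[a] for j in clusters[b])\n            s /= len(clusters[a]) * len(clusters[b])\n            if s > best:\n                best, pair = s, (a, b)\n        if pair is None:\n            return clusters\n        a, b = pair\n        clusters[a] |= clusters.pop(b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two-Step Approach",
"sec_num": "3.1"
},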
{
"text": "The main evaluation metric for this subtask is different than the other three. CoNLL metric, which is widely used on event/entity coreference tasks, is used in this task for the final system rankings. CoNLL is the average of MUC score (Vilain et al., 1995) , B 3 score (Bagga and Baldwin, 1998) and CEAF e score (Luo, 2005) .",
"cite_spans": [
{
"start": 235,
"end": 256,
"text": "(Vilain et al., 1995)",
"ref_id": null
},
{
"start": 269,
"end": 294,
"text": "(Bagga and Baldwin, 1998)",
"ref_id": "BIBREF0"
},
{
"start": 312,
"end": 323,
"text": "(Luo, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "3.2"
},
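{
"text": "In symbols, the ranking metric is the unweighted mean of the three component F1 scores: $\\mathrm{CoNLL} = \\frac{1}{3}\\left(\\mathrm{MUC} + B^{3} + \\mathrm{CEAF}_{e}\\right)$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "3.2"
},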
{
"text": "The provided English dataset consists of 596 documents with their event containing sentences and gold clusters. This dataset is divided into training (80%) and validation (20%) sets. Unlike other tasks, this data split is performed more carefully to make sure that various types of clusters are observed in both training and validation sets. While creating these splits, two ratios are calculated and observed. The first one is the single cluster ratio which is calculated by dividing the number of documents with only one cluster to the total number of documents. The second one is referred to as positive class ratio which is calculated by dividing the number of sentence pairs with positive labels into total number of sentence pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "3.2"
},
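{
"text": "Concretely, the two ratios can be computed per split as follows; this is a small sketch and the input data structure is our assumption, not the organizers' format:\n\ndef split_ratios(docs):\n    # docs: list of (pair_labels, n_clusters) tuples, one per document, where\n    # pair_labels holds the binary gold label of every sentence pair in the document.\n    single_cluster = sum(1 for _, k in docs if k == 1) / len(docs)\n    pairs = [label for pair_labels, _ in docs for label in pair_labels]\n    positive_class = sum(pairs) / len(pairs)\n    return single_cluster, positive_class",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "3.2"
},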
{
"text": "Having training and validation splits with very different single cluster ratio may affect the performance of clustering step. Similarly having a different positive class ratio may affect the classification performance. Hence, we tried different seeds for random splitting to find the splits which are similar to each other in terms of both of these ratios. The statistics of the constructed splits are presented in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 415,
"end": 422,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "3.2"
},
{
"text": "In addition to the provided training data, we also explore an external dataset from a similar shared task which was organized in 2020. AESPEN'20 8 shared task also focused on event sentence coreference identification and publicly shared a training data of 404 English news articles with their gold-8 https://emw.ku.edu.tr/aespen-2020/ standard labels. We explore the effects of using this dataset as an extension to the existing one. In our experiments this year's provided dataset is referred to as RAW, and the extended version which contains data from both CASE and AESPEN is called EXT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setting",
"sec_num": "3.2"
},
{
"text": "Classification results of our models on validation set can be seen in Table 6 . As expected, all models perform much better with the extended dataset. In general, BERT performs slightly better than the others. The Ensemble model cannot outperform BERT, but it is the second best, therefore we keep using it. Errors of the classification step will unfortunately propagate to the next step, which is clustering. Since some of the pairwise sentences' labels are wrong, the constructed clusters will likely be wrong as well. In order to decrease the effect of this error propagation, we use the best two models from the classification step in this clustering part. The results of the BERT and the Ensemble models are summarized in Table 7 As expected, models trained on the extended (larger) dataset return consistently higher scores. Between the BERT and the Ensemble model, there isn't a clear winner. However, in test set the highest score is retrieved with the Ensemble model which is ranked the 5th in the public leaderboard.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 727,
"end": 734,
"text": "Table 7",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3.3"
},
{
"text": "The goal of the final subtask is to identify the event triggers and its arguments from the sentence. The training dataset consists of 808 sentences which contain IOB type token-based labels of 7 different labels. Similar to previous tasks, 20% of this data is used for validation and the rest for training purposes. In many sequence modeling tasks, the bidirectional transformer models outperform other machine learning architectures; therefore, BERT and RoBERTa are used as strong baselines in this task. As a further development, the transformer model is connected with a BiLSTM and a CRF layer as our second architecture. Connecting BiLSTM and CRF to a transformer has shown success in several Named Entity Recognition tasks (Jiang et al., 2019; Dai et al., 2019) . The performance of these models over both validation and test sets are presented in Table 8 . According to Table 8 , RoBERTa outperforms BERT in both validation and test sets. Combining these with BiLSTM-CRF improves both of them. The performance difference between test and validation sets also decreases with this addition.",
"cite_spans": [
{
"start": 728,
"end": 748,
"text": "(Jiang et al., 2019;",
"ref_id": "BIBREF5"
},
{
"start": 749,
"end": 766,
"text": "Dai et al., 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 853,
"end": 860,
"text": "Table 8",
"ref_id": "TABREF13"
},
{
"start": 876,
"end": 883,
"text": "Table 8",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "4"
},
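{
"text": "A minimal sketch of the second architecture is shown below, assuming the pytorch-crf package for the CRF layer (the paper does not name a specific implementation); the LSTM size is illustrative:\n\nimport torch.nn as nn\nfrom transformers import AutoModel\nfrom torchcrf import CRF  # pytorch-crf package (an assumption, not named in the paper)\n\nclass TransformerBiLSTMCRF(nn.Module):\n    def __init__(self, num_tags, encoder='roberta-base', lstm_dim=256):\n        super().__init__()\n        self.encoder = AutoModel.from_pretrained(encoder)\n        hidden = self.encoder.config.hidden_size\n        self.lstm = nn.LSTM(hidden, lstm_dim, batch_first=True, bidirectional=True)\n        self.proj = nn.Linear(2 * lstm_dim, num_tags)\n        self.crf = CRF(num_tags, batch_first=True)\n\n    def forward(self, input_ids, attention_mask, tags=None):\n        x = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state\n        x, _ = self.lstm(x)\n        emissions = self.proj(x)\n        mask = attention_mask.bool()\n        if tags is not None:\n            return -self.crf(emissions, tags, mask=mask)  # training loss (negative log-likelihood)\n        return self.crf.decode(emissions, mask=mask)      # best IOB tag sequence per sentence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "4"
},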
{
"text": "Even though we achieved good performance, due to a minor format issue at our test submission file, our submissions were not correctly evaluated. Based on our scores at Table 8 , with our best model RoBERTa-BiLSTM-CRF, we would have ranked second in the public leaderboard.",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 175,
"text": "Table 8",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "4"
},
{
"text": "Analyzing the individual tag performances revealed that model is doing a better job at identifying the triggers compared to its arguments. This is expected as trigger tag is the second most popular tag at the data after the O tag. Trigger is closely followed by event time, which is easier to predict due to its smaller vocabulary variance and common language patterns, even though its lower presence in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "4"
},
{
"text": "In order to analyze the weak points of the models, the confusion table of the top performing RoBERTa-BiLSTM-CRF model over the validation data is shown in Figure 2 . The confusion matrix specifically focuses on the event trigger and arguments tags. Figure 2 , the etime (event time) is the tag which has not been mistaken with any other event specific tags. On the other hand, the highest confusion is between the organizer and participant tags. That is followed by place and fname (facility name) which is expected due to use of similar wordings and context around.",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 249,
"end": 257,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Subtask 4: Event Extraction",
"sec_num": "4"
},
{
"text": "In this paper, we mainly focus on English, and try to improve the current state-of-the-art on event specific NLP tasks. Source codes of all of our models are available online 9 . Additional details of our models, like hyper-parameters, are also summarized in the Github. As future work, we will focus on other languages and see whether the trends observed with English, exist in those other languages as well.",
"cite_spans": [
{
"start": 175,
"end": 176,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://radimrehurek.com/gensim/auto_ examples/howtos/run_downloader_api.html 5 https://tfhub.dev/google/ nnlm-en-dim128/2 6 https://nlp.stanford.edu/projects/ glove/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.nltk.org/_modules/nltk/ stem/wordnet.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "563--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. pages 563-566.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A neural probabilistic language model. The journal of machine learning research",
"authors": [
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "R\u00e9jean",
"middle": [],
"last": "Ducharme",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janvin",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "3",
"issue": "",
"pages": "1137--1155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshua Bengio, R\u00e9jean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic lan- guage model. The journal of machine learning re- search, 3:1137-1155.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Named entity recognition using bert bilstm crf for chinese electronic health records",
"authors": [
{
"first": "Zhenjin",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Xutao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Pin",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Yuming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Gangmin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xuming",
"middle": [],
"last": "Bai",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 12th international congress on image and signal processing, biomedical engineering and informatics (cisp-bmei)",
"volume": "",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenjin Dai, Xutao Wang, Pin Ni, Yuming Li, Gang- min Li, and Xuming Bai. 2019. Named entity recognition using bert bilstm crf for chinese elec- tronic health records. In 2019 12th international congress on image and signal processing, biomedi- cal engineering and informatics (cisp-bmei), pages 1-5. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multilingual protest news detectionshared task 1, case 2021",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "H\u00fcrriyetoglu",
"suffix": ""
},
{
"first": "Osman",
"middle": [],
"last": "Mutlu",
"suffix": ""
},
{
"first": "Farhana",
"middle": [
"Ferdousi"
],
"last": "Liza",
"suffix": ""
},
{
"first": "Erdem",
"middle": [],
"last": "Y\u00f6r\u00fck",
"suffix": ""
},
{
"first": "Ritesh",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Ratan",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), online",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali H\u00fcrriyetoglu, Osman Mutlu, Farhana Ferdousi Liza, Erdem Y\u00f6r\u00fck, Ritesh Kumar, and Shyam Ratan. 2021. Multilingual protest news detection - shared task 1, case 2021. In Proceedings of the 4th Workshop on Challenges and Applications of Auto- mated Extraction of Socio-political Events from Text (CASE 2021), online. Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A bert-bilstm-crf model for chinese electronic medical records named entity recognition",
"authors": [
{
"first": "Shaohua",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Shan",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 12th International Conference on Intelligent Computation Technology and Automation (ICICTA)",
"volume": "",
"issue": "",
"pages": "166--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaohua Jiang, Shan Zhao, Kai Hou, Yang Liu, Li Zhang, et al. 2019. A bert-bilstm-crf model for chinese electronic medical records named entity recognition. In 2019 12th International Conference on Intelligent Computation Technology and Automa- tion (ICICTA), pages 166-169. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.11942"
]
},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. arXiv preprint arXiv:1909.11942.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "On coreference resolution performance metrics",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "6--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. pages 6-8.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Supervised noun phrase coreference research: The first fifteen years",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th annual meeting of the association for computational linguistics",
"volume": "",
"issue": "",
"pages": "1396--1411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of the 48th annual meeting of the association for com- putational linguistics, pages 1396-1411.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Event clustering within news articles",
"authors": [
{
"first": "Faik",
"middle": [],
"last": "Kerem\u00f6rs",
"suffix": ""
},
{
"first": "S\u00fcveyda",
"middle": [],
"last": "Yeniterzi",
"suffix": ""
},
{
"first": "Reyyan",
"middle": [],
"last": "Yeniterzi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Workshop on Automated Extraction of Socio-political Events from News 2020",
"volume": "",
"issue": "",
"pages": "63--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Faik Kerem\u00d6rs, S\u00fcveyda Yeniterzi, and Reyyan Yen- iterzi. 2020. Event clustering within news articles. In Proceedings of the Workshop on Automated Ex- traction of Socio-political Events from News 2020, pages 63-68, Marseille, France. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dennis Connolly, and Lynette Hirschman. 1995. A modeltheoretic coreference scoring scheme",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Connolly",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "45--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. pages 45-52.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Subtask 2: Confidence Intervals and Their Respective Macro F1 Scores Calculated over Validation Set",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Confusion Table for Event Trigger and Arguments Tags Based on",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "",
"html": null,
"type_str": "table",
"num": null,
"content": "
"
},
"TABREF3": {
"text": "",
"html": null,
"type_str": "table",
"num": null,
"content": ""
},
"TABREF4": {
"text": "",
"html": null,
"type_str": "table",
"num": null,
"content": ", both ensembles outper- |
form RoBERTa both in the validation and test sets. |
This indicates that different types of neural net- |
works have different powers, and in case when a |
"
},
"TABREF5": {
"text": "",
"html": null,
"type_str": "table",
"num": null,
"content": ": Subtask 2 -Ensemble Models F1 Macro |
Scores |
"
},
"TABREF7": {
"text": "Statistics of the Training and Validation Sets",
"html": null,
"type_str": "table",
"num": null,
"content": ""
},
"TABREF9": {
"text": "",
"html": null,
"type_str": "table",
"num": null,
"content": ": Subtask 3: F1 Macro Scores of Classification |
Step over Validation Set |
"
},
"TABREF11": {
"text": "",
"html": null,
"type_str": "table",
"num": null,
"content": ""
},
"TABREF13": {
"text": "Subtask 4: F1 Macro Scores",
"html": null,
"type_str": "table",
"num": null,
"content": ""
}
}
}
}