{ "paper_id": "N19-1001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:59:12.906880Z" }, "title": "Entity Recognition at First Sight: Improving NER with Eye Movement Information", "authors": [ { "first": "Nora", "middle": [], "last": "Hollenstein", "suffix": "", "affiliation": { "laboratory": "", "institution": "ETH Zurich", "location": {} }, "email": "noraho@ethz.ch" }, { "first": "Ce", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "ETH Zurich", "location": {} }, "email": "ce.zhang@inf.ethz.ch" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Previous research shows that eye-tracking data contains information about the lexical and syntactic properties of text, which can be used to improve natural language processing models. In this work, we leverage eye movement features from three corpora with recorded gaze information to augment a state-of-the-art neural model for named entity recognition (NER) with gaze embeddings. These corpora were manually annotated with named entity labels. Moreover, we show how gaze features, generalized on word type level, eliminate the need for recorded eye-tracking data at test time. The gaze-augmented models for NER using tokenlevel and type-level features outperform the baselines. We present the benefits of eyetracking features by evaluating the NER models on both individual datasets as well as in cross-domain settings.", "pdf_parse": { "paper_id": "N19-1001", "_pdf_hash": "", "abstract": [ { "text": "Previous research shows that eye-tracking data contains information about the lexical and syntactic properties of text, which can be used to improve natural language processing models. In this work, we leverage eye movement features from three corpora with recorded gaze information to augment a state-of-the-art neural model for named entity recognition (NER) with gaze embeddings. These corpora were manually annotated with named entity labels. Moreover, we show how gaze features, generalized on word type level, eliminate the need for recorded eye-tracking data at test time. The gaze-augmented models for NER using tokenlevel and type-level features outperform the baselines. We present the benefits of eyetracking features by evaluating the NER models on both individual datasets as well as in cross-domain settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The field of natural language processing includes studies of tasks of different granularity and depths of semantics: from lower level tasks such as tokenization and part-of-speech tagging up to higher level tasks of information extraction such as named entity recognition, relation extraction, and semantic role labeling (Collobert et al., 2011) . As NLP systems become increasingly prevalent in society, how to take advantage of information passively collected from human readers, e.g. eye movement signals, is becoming more interesting to researchers. Previous research in this area has shown promising results: Eye-tracking data has been used to improve tasks such as part-of-speech tagging (Barrett et al., 2016) , sentiment analysis (Mishra et al., 2017) , prediction of multiword expressions (Rohanian et al., 2017) , and word embedding evaluation (S\u00f8gaard, 2016) .", "cite_spans": [ { "start": 321, "end": 345, "text": "(Collobert et al., 2011)", "ref_id": "BIBREF7" }, { "start": 694, "end": 716, "text": "(Barrett et al., 2016)", "ref_id": "BIBREF1" }, { "start": 738, "end": 759, "text": "(Mishra et al., 2017)", "ref_id": "BIBREF20" }, { "start": 798, "end": 821, "text": "(Rohanian et al., 2017)", "ref_id": "BIBREF25" }, { "start": 854, "end": 869, "text": "(S\u00f8gaard, 2016)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, most of these studies focus on either relatively lower-level tasks (e.g. part-of-speech tagging and multiword expressions) or relatively global properties in the text (e.g. sentiment analysis). In this paper, we test a hypothesis on a different level: Can eye movement signals also help improve higher-level semantic tasks such as extracting information from text?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The answer to this question is not obvious. On one hand, the quality improvement attributed to eye movement signals on lower-level tasks implies that such signals do contain linguistic information. On the other hand, it is not clear whether these signals can also provide significant improvement for tasks dealing with higher-level semantics. Moreover, even if eye movement patterns contain signals related to higher-level tasks, as implied by a recent psycholinguistic study (Tokunaga et al., 2017) , noisy as these signals are, it is not straightforward whether they would help, if not hurt, the quality of the models.", "cite_spans": [ { "start": 476, "end": 499, "text": "(Tokunaga et al., 2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we provide the first study of the impact of gaze features to automatic named entity recognition from text. We test the hypothesis that eye-tracking data is beneficial for entity recognition in a state-of-the-art neural named entity tagger augmented with embedding layers of gaze features. Our contributions in the current work can be summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "1. First, we manually annotate three eyetracking corpora with named entity labels to train a neural NER system with gaze features. This collection of corpora facilitates future research in related topics. The annotations are publicly available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. Beyond that, we present a neural architecture for NER, which in addition to textual information, incorporates embedding layers to encode eye movement information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. Finally, we show how gaze features generalized to word types eliminate the need for recorded eye-tracking data at test time. This makes the use of eye-tracking data in NLP applications more feasible since recorded eye-tracking data for each token in context is not required anymore at prediction time. Moreover, type-aggregated features appear to be particularly useful for cross-domain systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our hypotheses are evaluated not only on the available eye-tracking corpora, but also on an external benchmark dataset, for which gaze information does not exist.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The benefits of eye movement data for machine learning have been assessed in various domains, including NLP and computer vision. Eye-trackers provide millisecond-accurate records on where humans look when they are reading, and they are becoming cheaper and more easily available by the day (San Agustin et al., 2009; Sewell and Komogortsev, 2010) . Although eye-tracking data is still being recorded in controlled experiment environments, this will likely change in the near future. Recent approaches have shown substantial improvements in recording gaze data while reading by using cameras of mobile devices (G\u00f3mez-Poveda and Gaudioso, 2016; Papoutsaki et al., 2016) . Hence, eye-tracking data will probably be more accessible and available in much larger volumes in due time, which will facilitate the creation of sizable datasets enormously. Tokunaga et al. (2017) recently analyzed eyetracking signals during the annotation of named entities to find effective features for NER. Their work proves that humans take into account a broad context to identify named entities, including predicate-argument structure. This further strengthens our intuition to use eye movement information to improve existing NER systems. And going even a step further, it opens the possibility for real-time entity annotation based on the reader's eye movements.", "cite_spans": [ { "start": 290, "end": 316, "text": "(San Agustin et al., 2009;", "ref_id": "BIBREF26" }, { "start": 317, "end": 346, "text": "Sewell and Komogortsev, 2010)", "ref_id": "BIBREF28" }, { "start": 609, "end": 642, "text": "(G\u00f3mez-Poveda and Gaudioso, 2016;", "ref_id": "BIBREF11" }, { "start": 643, "end": 667, "text": "Papoutsaki et al., 2016)", "ref_id": "BIBREF21" }, { "start": 845, "end": 867, "text": "Tokunaga et al. (2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The benefit of eye movement data is backed up by extensive psycholinguistic studies. For example, when humans read a text they do not focus on every single word. The number of fixations and the fixation duration on a word depends on a number of linguistic factors (Clifton et al., 2007; Demberg and Keller, 2008) . First, readers are more likely to fixate on open-class words that are not predictable from context (Rayner, 1998) . Reading patterns are a reliable indicator of syntactical categories (Barrett and S\u00f8gaard, 2015a) . Second, word frequency and word familiarity influence how long readers look at a word. The frequency effect was first noted by Rayner (1977) and has been reported in various studies since, e.g. Just and Carpenter (1980) and Cop et al. (2017) . Moreover, although two words may have the same frequency value, they may differ in familiarity (especially for infrequent words). Effects of word familiarity on fixation time have also been demonstrated in a number of recent studies (Juhasz and Rayner, 2003; Williams and Morris, 2004) . Additionally, the positive effect of fixation information in various NLP tasks has recently been shown by Barrett et al. (2018) , where an attention mechanism is trained on fixation duration.", "cite_spans": [ { "start": 264, "end": 286, "text": "(Clifton et al., 2007;", "ref_id": "BIBREF6" }, { "start": 287, "end": 312, "text": "Demberg and Keller, 2008)", "ref_id": "BIBREF10" }, { "start": 414, "end": 428, "text": "(Rayner, 1998)", "ref_id": "BIBREF24" }, { "start": 499, "end": 527, "text": "(Barrett and S\u00f8gaard, 2015a)", "ref_id": "BIBREF2" }, { "start": 657, "end": 670, "text": "Rayner (1977)", "ref_id": "BIBREF23" }, { "start": 724, "end": 749, "text": "Just and Carpenter (1980)", "ref_id": "BIBREF15" }, { "start": 754, "end": 771, "text": "Cop et al. (2017)", "ref_id": "BIBREF8" }, { "start": 1019, "end": 1032, "text": "Rayner, 2003;", "ref_id": "BIBREF14" }, { "start": 1033, "end": 1059, "text": "Williams and Morris, 2004)", "ref_id": "BIBREF35" }, { "start": 1168, "end": 1189, "text": "Barrett et al. (2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "State-of-the-art NER Non-linear neural networks with distributed word representations as input have become increasingly successful for any sequence labeling task in NLP (Huang et al., 2015; Chiu and Nichols, 2016; Ma and Hovy, 2016) . The same applies to named entity recognition: State-of-the-art systems are combinations of neural networks such as LSTMs or CNNs and conditional random fields (CRFs) (Strauss et al., 2016) . Lample et al. (2016) developed such a neural architecture for NER, which we employ in this work and enhance with eye movement features. Their model successfully combines wordlevel and character-level embeddings, which we augment with embedding layers for eye-tracking features.", "cite_spans": [ { "start": 169, "end": 189, "text": "(Huang et al., 2015;", "ref_id": "BIBREF13" }, { "start": 190, "end": 213, "text": "Chiu and Nichols, 2016;", "ref_id": "BIBREF4" }, { "start": 214, "end": 232, "text": "Ma and Hovy, 2016)", "ref_id": "BIBREF19" }, { "start": 401, "end": 423, "text": "(Strauss et al., 2016)", "ref_id": "BIBREF32" }, { "start": 426, "end": 446, "text": "Lample et al. (2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "For our experiments, we resort to three eyetracking data resources: the Dundee corpus (Kennedy et al., 2003) , the GECO corpus (Cop et al., 2017) and the ZuCo corpus . For the purpose of information extraction, it is important that the readers process longer fragments of text, i.e. complete sentences instead of single words, which is the case in all three datasets. Table 1 shows an overview of the domain and size of these datasets. In total, they comprise 142,441 tokens with gaze information. milliseconds) and gaze duration (the average duration of all fixations on a word)).", "cite_spans": [ { "start": 86, "end": 108, "text": "(Kennedy et al., 2003)", "ref_id": "BIBREF16" }, { "start": 127, "end": 145, "text": "(Cop et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 368, "end": 375, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Eye-tracking corpora", "sec_num": "3" }, { "text": "Dundee Corpus The gaze data of the Dundee corpus (Kennedy et al., 2003) was recorded with a Dr. Bouis Oculometer Eyetracker. The English section of this corpus comprises 58,598 tokens in 2,367 sentences. It contains eye movement information of ten native English speakers as they read the same 20 newspaper articles from The Independent. The text was presented to the readers on a screen five lines at a time. This data has been widely used in psycholinguistic research to analyze the reading behavior of subjects while reading sentences in context under relatively naturalistic conditions.", "cite_spans": [ { "start": 49, "end": 71, "text": "(Kennedy et al., 2003)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Eye-tracking corpora", "sec_num": "3" }, { "text": "The Ghent Eye-Tracking Corpus (Cop et al., 2017 ) is a more recent dataset, which was created for the analysis of eye movements of monolingual and bilingual subjects during reading. The data was recorded with an Eye-Link 1000 system. The text was presented one paragraph at a time. The subjects read the entire novel The Mysterious Affair at Styles by Agatha Christie (1920) containing 68,606 tokens in 5,424 sentences. We use only the monolingual data recorded from the 14 native English speakers for this work to maintain consistency across corpora.", "cite_spans": [ { "start": 30, "end": 47, "text": "(Cop et al., 2017", "ref_id": "BIBREF8" }, { "start": 359, "end": 374, "text": "Christie (1920)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "GECO Corpus", "sec_num": null }, { "text": "ZuCo Corpus The Zurich Cognitive Language Processing Corpus ) is a combined eye-tracking and EEG dataset. The gaze data was also recorded with an EyeLink 1000 system. The full corpus contains 1,100 English sentences read by 12 adult native speakers. The sentences were presented at the same position on the screen one at a time. For the present work, we only use the eye movement data of the first two reading tasks of this corpus (700 sentences, 15,237 tokens), since these tasks encouraged natural reading. The reading material included sentences from movie reviews from the Stanford Sentiment Treebank (Socher et al., 2013) and the Wikipedia dataset by Culotta et al. (2006) .", "cite_spans": [ { "start": 605, "end": 626, "text": "(Socher et al., 2013)", "ref_id": "BIBREF30" }, { "start": 656, "end": 677, "text": "Culotta et al. (2006)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "GECO Corpus", "sec_num": null }, { "text": "For the purposes of this work, all datasets were manually annotated with named entity labels for three categories: PERSON, OR-GANIZATION and LOCATION. The annotations are available at https://github.com/ DS3Lab/ner-at-first-sight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GECO Corpus", "sec_num": null }, { "text": "The datasets were annotated by two NLP experts. The IOB tagging scheme was used for the labeling. We followed the ACE Annotation Guidelines (Linguistic Data Consortium, 2005) . All conflicts in labelling were resolved by adjudication between both annotators. An inter-Basic n fixations total number of fixations on a word w fixation probability the probability that a word w will be fixated mean fixation duration mean of all fixation durations for a word w Early first fixation duration duration of the first fixation on a word w first pass duration sum of all fixation durations during the first pass Late total fixation duration sum of all fixation durations for a word w n re-fixations number of times a word w is fixated (after the first fixation) re-read probability the probability that a word w will be read more than once Context total regression-from duration combined duration of the regressions that began at word w w-2 fixation probability fixation probability of the word before the previous word w-1 fixation probability fixation probability of the previous word w+1 fixation probability fixation probability of the next word w+2 fixation probability fixation probability of the word after the next word w-2 fixation duration fixation duration of the word before the previous word w-1 fixation duration fixation duration of the previous word w+1 fixation duration fixation duration of the next word w+2 fixation duration fixation duration of the word after the next word annotator reliability analysis on 10,000 tokens (511 sentences) sampled from all three datasets yielded an agreement of 83.5% on the entity labels (\u03ba = 0.68). Table 2 shows the number of annotated entities in each dataset. The distribution of entities between the corpora is highly unbalanced: Dundee and ZuCo, the datasets containing more heterogeneous texts and thus, have a higher ratio of unique entity occurrences, versus GECO, a homogeneous corpus consisting of a single novel, where the named entities are very repetitive.", "cite_spans": [ { "start": 140, "end": 174, "text": "(Linguistic Data Consortium, 2005)", "ref_id": null } ], "ref_spans": [ { "start": 1645, "end": 1652, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "GECO Corpus", "sec_num": null }, { "text": "The gaze data of all three corpora was recorded for multiple readers by conducting experiments in a controlled environment using specialized equipment. It is important to consider that, while we extract the same features for all corpora, there are certainly practical aspects that differ across the datasets. The following factors are expected to influence reading: experiment procedures; text presentation; recording hardware, software and quality; sampling rates; initial calibration and filtering, as well as human factors such as head movements and lack of attention. Therefore, separate normalization for each dataset should better preserve the signal within each corpus and for the same reason the type-aggregation was computed on the normalized feature values. This is especially relevant for the type-aggregated features and the cross-corpus experiments described below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Eye-tracking features", "sec_num": "4" }, { "text": "In order to add gaze information to the neural network, we have selected as many features as available from those present in all three corpora. Previous research shows benefits in combining multiple eye-tracking features of different stages of the human reading process (Barrett et al., 2016; Tokunaga et al., 2017) .", "cite_spans": [ { "start": 270, "end": 292, "text": "(Barrett et al., 2016;", "ref_id": "BIBREF1" }, { "start": 293, "end": 315, "text": "Tokunaga et al., 2017)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Eye-tracking features", "sec_num": "4" }, { "text": "The features extracted follow closely on Barrett et al. (2016). As described above, psycholinguistic research has shown how fixation duration and probability differ between word classes and syntactic comprehension processes. Thus, the features focus on representing these nuances as broadly as possible, covering the complete reading time of a word at different stages. Table 3 shows the eye movement features incorporated into the experiments. We split the 17 features into 4 distinct groups (analogous to Barrett et al. (2016) ), which define the different stages of the reading process:", "cite_spans": [ { "start": 507, "end": 528, "text": "Barrett et al. (2016)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 370, "end": 377, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Eye-tracking features", "sec_num": "4" }, { "text": "1. BASIC eye-tracking features capture characteristics on word-level, e.g. the number of all fixations on a word or the probability that a word will be fixated (namely, the number of subjects who fixated the word divided by the total number of subjects).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Eye-tracking features", "sec_num": "4" }, { "text": "2. EARLY gaze measures capture lexical access and early syntactic processing and are based on the first time a word is fixated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Eye-tracking features", "sec_num": "4" }, { "text": "3. LATE measures reflect the late syntactic processing and general disambiguation. These features are significant for words which were fixated more than once.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Eye-tracking features", "sec_num": "4" }, { "text": "4. CONTEXT features capture the gaze measures of the surrounding tokens. These features consider the fixation probability and duration up to two tokens to the left and right of the current token. Additionally, regressions starting at the current word are also considered to be meaningful for the syntactic processing of full sentences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Eye-tracking features", "sec_num": "4" }, { "text": "The eye movement measurements were averaged over all native-speaking readers of each dataset to obtain more robust estimates. The small size of eye-tracking datasets often limits the potential for training data-intensive algorithms and causes overfitting in benchmark evaluation . It also leads to sparse samples of gaze measurements. Hence, given the limited number of observations available, we normalize the data by splitting the feature values into quantiles to avoid sparsity issues. The best results were achieved with 24 bins. This normalization is conducted separately for each corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Eye-tracking features", "sec_num": "4" }, { "text": "Moreover, special care had to be taken regarding tokenization, since the recorded eye-tracking data considers only whitespace separation. For example, the string John's would constitute a single token for eye-tracking feature extraction, but would be split into John and 's for NER, with the former token holding the label PERSON and the latter no label at all. Our strategy to address this issue was to assign the same values of the gaze features of the originating token to split tokens. Barrett and S\u00f8gaard (2015b) showed that typelevel aggregation of gaze features results in larger improvements for part-of-speech tagging. Following their line of work, we also conducted exper-iments with type aggregation for NER. This implies that the eye-tracking feature values were averaged for each word type over all occurrences in the training data. For instance, the sum of the features of all n occurrences of the token \"island\" are averaged over the number of occurrences n. As a result, for each corpus as well as for the aggregated corpora, a lexicon of lower-cased word types with their averaged eye-tracking feature values was compiled. Thus, as input for the network, either the type-level aggregates for each individual corpus can be used or the values from the combined lexicon, which increases the number of word types with known gaze feature values.", "cite_spans": [ { "start": 490, "end": 517, "text": "Barrett and S\u00f8gaard (2015b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Eye-tracking features", "sec_num": "4" }, { "text": "The goal of type aggregation is twofold. First, it eliminates the requirement of eye-tracking features when applying the models at test time, since the larger the lexicon, the more tokens in the unseen data receive type-aggregated eye-tracking feature values. For those tokens not in the lexicon, we assign a placeholder for unknown feature values. Second, type-aggregated features can be used on any dataset and show that improvements can be achieved with aggregated gaze data without requiring large quantities of recorded data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Type aggregation", "sec_num": "4.1" }, { "text": "The experiments in this work were executed using an enhanced version of the system presented by Lample et al. (2016) . This hybrid approach is based on bidirectional LSTMs and conditional random fields and relies mainly on two sources of information: character-level and word-level representations.", "cite_spans": [ { "start": 96, "end": 116, "text": "Lample et al. (2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "5" }, { "text": "For the experiments, the originally proposed values for all parameters were maintained. Specifically, the bidirectional LSTMs for characterbased embeddings are trained on the corpus at hand with dimensions set to 25. The lookup table tor the word embeddings was initialized with the pre-trained GloVe vectors of 100 dimensions (Pennington et al., 2014) . The model uses a single layer for the forward and backward LSTMs. All models were trained with a dropout rate at 0.5. Moreover, all digits were replaced with zeros.", "cite_spans": [ { "start": 327, "end": 352, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "5" }, { "text": "The original model 1 was modified to include the gaze features as additional embedding layers to the network. The character-level representation, i.e. the output of a bidirectional LSTM, is concatenated with the word-level representation from a word lookup table. In the augmented model with eye-tracking information, the embedding for each discrete gaze feature is also concatenated to the input. The dimension of the gaze feature embeddings is equal to the number of quantiles. This architecture is shown in Figure 1 . Word length and word frequency are known to correlate and interact with gaze features (Tomanek et al., 2010) , which is why we selected a base model that allows us to combine the eye-tracking features with word-and character-level information.", "cite_spans": [ { "start": 607, "end": 629, "text": "(Tomanek et al., 2010)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 510, "end": 518, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Model", "sec_num": "5" }, { "text": "Our main finding is that our models enhanced with gaze features consistently outperform the baseline. As our baseline, we trained and evaluated the original models with the neural architecture and parameters proposed by Lample et al. (2016) on the GECO, Dundee, and ZuCo corpora and compared it to the models that were enriched with eyetracking measures. The best improvements on F 1score over the baseline models are significant under one-sided t-tests (p<0.05). All models were trained with 10-fold cross validation (80% training set, 10% development set, 10% test set) and early stopping was performed after 20 epochs of no improvement on the development set to reduce training time.", "cite_spans": [ { "start": 220, "end": 240, "text": "Lample et al. (2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "First, the performance on the individual datasets is tested, together with the performance of one combined dataset consisting of all three corpora (consisting of 142,441 tokens). In addition, we evaluate the effects of the type-aggregated features using individual type lexicons for each datasets, and combining the three type lexicons of each corpus. Finally, we experiment with cross-corpus scenarios to evaluate the potential of eye-tracking features in NER for domain adaptation. Both settings were also tested on an external corpus without eye-tracking features, namely the CoNLL-2003 dataset (Sang and De Meulder, 2003) .", "cite_spans": [ { "start": 579, "end": 607, "text": "CoNLL-2003 dataset (Sang and", "ref_id": null }, { "start": 608, "end": 625, "text": "De Meulder, 2003)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6" }, { "text": "First, we analyzed how augmenting the named entity recognition system with eye-tracking features affects the results on the individual datasets. Table 4 shows the improvements achieved by adding all 17 gaze features to the neural architecture, and training models on all three corpora, and on the combined dataset containing all sentences from the Dundee, GECO and ZuCo corpora. Noticeably, adding token-level gaze features improves the results on all datasets individually and combined, even on the GECO corpus, which yields a high baseline due to the homogeneity of the contained named entities (see Table 2 ).", "cite_spans": [], "ref_spans": [ { "start": 602, "end": 609, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Individual dataset evaluation", "sec_num": "6.1" }, { "text": "Furthermore, Table 4 also presents the results of the NER models making use of the typeaggregated features instead of token-level gaze features. There are two different experiments for these type-level features: Using the features of the word types occurring in the corpus only, or using the aggregated features of all word types in the three corpora (as describe above). As can be seen, the performance of the different gaze fea- Table 4 : Precision (P), recall (R) and F 1 -score (F) for all models trained on individual datasets (best results in bold; * indicates statistically significant improvements on F 1 -score). With gaze are models trained on the original eye-tracking features on token-level, type individual are the models trained on type-aggregated gaze features of this corpus only, while type combined are the models trained with type-aggregated features computed on all datasets. ture levels varies between datasets, but both the original token-level features as well as the individual and combined type-level features achieve improvements over the baselines of all datasets.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 20, "text": "Table 4", "ref_id": null }, { "start": 431, "end": 438, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Individual dataset evaluation", "sec_num": "6.1" }, { "text": "To sum up, the largest improvement with eyetracking features is achieved when combining all corpora into one larger dataset, where an additional 4% is gained in F 1 -score by using typeaggregated features. Evidently, a larger mixeddomain dataset benefits from the type aggregation, while the original token-level gaze features achieve the best results on the individual datasets. Moreover, the additional gain when training on all datasets is due to the higher signal-to-noise ratio of type-aggregated features from multiple datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Individual dataset evaluation", "sec_num": "6.1" }, { "text": "Evaluation on CoNLL-2003 Going on step further, we evaluate the type-aggregated gaze features on an external corpus with no eye movement information available. The CoNLL-2003 corpus (Sang and De Meulder, 2003) Table 5 : Precision (P), recall (R) and F 1 -score (F) for using type-aggregated gaze features on the CoNLL-2003 dataset (* marks statistically significant improvement).", "cite_spans": [ { "start": 164, "end": 191, "text": "CoNLL-2003 corpus (Sang and", "ref_id": null }, { "start": 192, "end": 209, "text": "De Meulder, 2003)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 210, "end": 217, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Individual dataset evaluation", "sec_num": "6.1" }, { "text": "widely used as a benchmark dataset for NER in different shared tasks. The English part of this corpus consists of Reuters news stories and contains 302,811 tokens in 22,137 sentences. We use this dataset as an additional corpus without gaze information. Only the type-aggregated features (based on the combined eye-tracking corpora) are added to each word. Merely 76% of the tokens in the CoNLL-2003 corpus also appear in the eyetracking corpora described above and thus receive type-aggregated feature values. The rest of the tokens without aggregated gaze information available receive a placeholder for the unknown feature values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Individual dataset evaluation", "sec_num": "6.1" }, { "text": "Note that to avoid overfitting we do not train on the official train/test split of the CoNLL-2003 dataset, but perform 10-fold cross validation. Applying the same experiment setting, we train the augmented NER model with gaze features on the CoNLL-2003 data and compare it to a baseline model without any eye-tracking features. We achieve a minor, but nonetheless significant improvement (shown in Table 5 ), which strongly supports the generalizability effect of the typeaggregated features on unseen data.", "cite_spans": [], "ref_spans": [ { "start": 398, "end": 405, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Individual dataset evaluation", "sec_num": "6.1" }, { "text": "In a second evaluation scenario, we test the potential of eye-tracking features for NER across corpora. The goal is to leverage eye-tracking features for domain adaptation. To show the robustness of our approach across domains, we train the models with token-level and type-level features on 100% of corpus A and a development set of 20% of corpus B and test on the remaining 80% of the corpus B, alternating only the development and the test set for each fold. Table 6 shows the results of this cross-corpus evaluation. The impact of the eye-tracking features varies between the different combinations of datasets. However, the inclusion of eye-tracking features improves the results for all combinations, except for the models trained on the ZuCo corpus and tested on the GECO corpus. Presumably, this is due to the combination of the small training data size of the ZuCo corpus and the homogeneity of the named entities in the GECO corpus. Table 7 : Precision (P), recall (R) and F 1 -score (F) for using type-aggregated gaze features trained on all three eye-tracking datasets and tested on the CoNLL-2003 dataset (* marks statistically significant improvement).", "cite_spans": [], "ref_spans": [ { "start": 462, "end": 469, "text": "Table 6", "ref_id": "TABREF6" }, { "start": 943, "end": 950, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Cross-dataset evaluation", "sec_num": "6.2" }, { "text": "Evaluation on CoNLL-2003 Analogous to the individual dataset evaluation, we also test the potential of eye-tracking features in a cross-dataset scenario on an external benchmark dataset. Again, we use the CoNLL-2003 corpus for this purpose. We train a model on the Dundee, GECO and ZuCo corpora using type-aggregated eye-tracking features and test this model on the ConLL-2003 data. Table 7 shows that compared to a baseline without gaze features, the results improve by 3% F 1 -score. These results underpin our hypothesis of the possibility of generalizing eye-tracking features on word type level, such that no recorded gaze data is required at test time.", "cite_spans": [], "ref_spans": [ { "start": 383, "end": 390, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Cross-dataset evaluation", "sec_num": "6.2" }, { "text": "The models evaluated in the previous section show that eye-tracking data contain valuable semantic information that can be leveraged effectively by NER systems. While the individual datasets are still limited in size, the largest improvement is observed in the models making use of all the available data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "At a closer look, the model leveraging gaze data yield a considerably higher increase in recall when comparing to the baselines. In addition, a classwise analysis shows that the entity type benefiting the most from the gaze features over all models is ORGANIZATION, which is the most difficult class to predict. Figure 2 illustrates this with the results per class of the models trained on all three gaze corpora jointly.", "cite_spans": [], "ref_spans": [ { "start": 312, "end": 320, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "In the individual dataset evaluation setting, the combined type-level feature aggregation from all datasets does not yield the best results, since each sentence in these corpora already has accurate eyetracking features on toke-level. Thus, it is under-standable that in this scenario the original gaze features and the gaze features aggregated only on the individual datasets result in better models. However, when evaluating the NER models in a cross-corpus scenario, the type-aggregated features lead to significant improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "Type aggregation evidently reduces the finegrained nuances contained in eye-tracking information and eliminates the possibility of disambiguation between homographic tokens. Nevertheless, this type of disambiguation is not crucial for named entities, which mainly consist of proper nouns and the same entities tend to appear in the same context. Especially noteworthy is the gain in the models tested on the CoNLL-2003 benchmark corpus, which shows that aggregated eyetracking features from other datasets can be applied to any unseen sentence and show improvements, even though more than 20% of the tokens have unknown gaze feature values. While the high number of unknown values is certainly a limitation of our approach, it shows at once the possibility of not requiring original gaze features at prediction time. Thus, the trained NER models can be applied robustly on unseen data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "7" }, { "text": "We presented the first study of augmenting a NER system with eye-tracking information. Our results highlight the benefits of leveraging cognitive cues such as eye movements to improve entity recognition models. The manually annotated named entity labels for the three eye-tracking corpora are freely available. We augmented a neural NER architecture with gaze features. Experiments were performed using a wide range of features relevant to the human reading process and the results show significant improvements over the baseline for all corpora individually.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "In addition, the type-aggregated gaze features are effective in cross-domain settings, even on an external benchmark corpus. The results of these type-aggregated features are a step towards leveraging eye-tracking data for information extraction at training time, without requiring real-time recorded eye-tracking data at prediction time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "https://github.com/glample/tagger", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sequence classification with human attention", "authors": [ { "first": "Maria", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "Joachim", "middle": [], "last": "Bingel", "suffix": "" }, { "first": "Nora", "middle": [], "last": "Hollenstein", "suffix": "" }, { "first": "Marek", "middle": [], "last": "Rei", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "302--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, and Anders S\u00f8gaard. 2018. Sequence classification with human attention. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 302-312.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Weakly supervised part-ofspeech tagging using eye-tracking data", "authors": [ { "first": "Maria", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "Joachim", "middle": [], "last": "Bingel", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Keller", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "579--584", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Barrett, Joachim Bingel, Frank Keller, and An- ders S\u00f8gaard. 2016. Weakly supervised part-of- speech tagging using eye-tracking data. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics, volume 2, pages 579-584.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Reading behavior predicts syntactic categories", "authors": [ { "first": "Maria", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 19th Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "345--349", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Barrett and Anders S\u00f8gaard. 2015a. Reading be- havior predicts syntactic categories. In Proceedings of the 19th Conference on Computational Natural Language Learning, pages 345-349.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Using reading behavior to predict grammatical functions", "authors": [ { "first": "Maria", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Sixth Workshop on Cognitive Aspects of Computational Language Learning", "volume": "", "issue": "", "pages": "1--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maria Barrett and Anders S\u00f8gaard. 2015b. Using read- ing behavior to predict grammatical functions. In Proceedings of the Sixth Workshop on Cognitive As- pects of Computational Language Learning, pages 1-5.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics", "authors": [ { "first": "P", "middle": [ "C" ], "last": "Jason", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "", "middle": [], "last": "Nichols", "suffix": "" } ], "year": 2016, "venue": "", "volume": "4", "issue": "", "pages": "357--370", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Trans- actions of the Association for Computational Lin- guistics, 4:357-370.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The Mysterious Affair at Styles. Retrieved from Project Gutenberg", "authors": [ { "first": "Agatha", "middle": [], "last": "Christie", "suffix": "" } ], "year": 1920, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agatha Christie. 1920. The Mysterious Affair at Styles. Retrieved from Project Gutenberg, www.gutenberg.org.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Eye movements in reading words and sentences", "authors": [ { "first": "Charles", "middle": [], "last": "Clifton", "suffix": "" }, { "first": "Adrian", "middle": [], "last": "Staub", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Rayner", "suffix": "" } ], "year": 2007, "venue": "Eye Movements", "volume": "", "issue": "", "pages": "341--371", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Clifton, Adrian Staub, and Keith Rayner. 2007. Eye movements in reading words and sentences. In Eye Movements, pages 341-371. Elsevier.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural language processing (almost) from scratch", "authors": [ { "first": "Ronan", "middle": [], "last": "Collobert", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "L\u00e9on", "middle": [], "last": "Bottou", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Karlen", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Pavel", "middle": [], "last": "Kuksa", "suffix": "" } ], "year": 2011, "venue": "Journal of Machine Learning Research", "volume": "12", "issue": "", "pages": "2493--2537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Presenting GECO: An eyetracking corpus of monolingual and bilingual sentence reading", "authors": [ { "first": "Uschi", "middle": [], "last": "Cop", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Dirix", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Drieghe", "suffix": "" }, { "first": "Wouter", "middle": [], "last": "Duyck", "suffix": "" } ], "year": 2017, "venue": "Behavior research methods", "volume": "49", "issue": "", "pages": "602--615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Uschi Cop, Nicolas Dirix, Denis Drieghe, and Wouter Duyck. 2017. Presenting GECO: An eyetracking corpus of monolingual and bilingual sentence read- ing. Behavior research methods, 49(2):602-615.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Integrating probabilistic extraction models and data mining to discover relations and patterns in text", "authors": [ { "first": "Aron", "middle": [], "last": "Culotta", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Betz", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "296--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aron Culotta, Andrew McCallum, and Jonathan Betz. 2006. Integrating probabilistic extraction models and data mining to discover relations and patterns in text. In Proceedings of the Human Language Tech- nology Conference of the North American Chap- ter of the Association of Computational Linguistics, pages 296-303.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Data from eyetracking corpora as evidence for theories of syntactic processing complexity", "authors": [ { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Keller", "suffix": "" } ], "year": 2008, "venue": "Cognition", "volume": "109", "issue": "2", "pages": "193--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vera Demberg and Frank Keller. 2008. Data from eye- tracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109(2):193-210.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Evaluation of temporal stability of eye tracking algorithms using webcams", "authors": [ { "first": "Jose", "middle": [], "last": "G\u00f3mez", "suffix": "" }, { "first": "-", "middle": [], "last": "Poveda", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Gaudioso", "suffix": "" } ], "year": 2016, "venue": "Expert Systems with Applications", "volume": "64", "issue": "", "pages": "69--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jose G\u00f3mez-Poveda and Elena Gaudioso. 2016. Evalu- ation of temporal stability of eye tracking algorithms using webcams. Expert Systems with Applications, 64:69-83.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "ZuCo, a simultaneous EEG and eyetracking resource for natural sentence reading", "authors": [ { "first": "Nora", "middle": [], "last": "Hollenstein", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Rotsztejn", "suffix": "" }, { "first": "Marius", "middle": [], "last": "Troendle", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Pedroni", "suffix": "" }, { "first": "Ce", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Langer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nora Hollenstein, Jonathan Rotsztejn, Marius Troen- dle, Andreas Pedroni, Ce Zhang, and Nicolas Langer. 2018. ZuCo, a simultaneous EEG and eye- tracking resource for natural sentence reading. Sci- entific Data.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Bidirectional LSTM-CRF models for sequence tagging", "authors": [ { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.01991" ] }, "num": null, "urls": [], "raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidi- rectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading", "authors": [ { "first": "J", "middle": [], "last": "Barbara", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Juhasz", "suffix": "" }, { "first": "", "middle": [], "last": "Rayner", "suffix": "" } ], "year": 2003, "venue": "Journal of Experimental Psychology: Learning, Memory, and Cognition", "volume": "29", "issue": "6", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Barbara J Juhasz and Keith Rayner. 2003. Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimen- tal Psychology: Learning, Memory, and Cognition, 29(6):1312.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A theory of reading: From eye fixations to comprehension", "authors": [ { "first": "A", "middle": [], "last": "Marcel", "suffix": "" }, { "first": "Patricia", "middle": [ "A" ], "last": "Just", "suffix": "" }, { "first": "", "middle": [], "last": "Carpenter", "suffix": "" } ], "year": 1980, "venue": "Psychological review", "volume": "87", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcel A Just and Patricia A Carpenter. 1980. A theory of reading: From eye fixations to comprehension. Psychological review, 87(4):329.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The Dundee corpus", "authors": [ { "first": "Alan", "middle": [], "last": "Kennedy", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Jo\u00ebl", "middle": [], "last": "Pynte", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 12th European Conference on Eye Movement", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alan Kennedy, Robin Hill, and Jo\u00ebl Pynte. 2003. The Dundee corpus. In Proceedings of the 12th Euro- pean Conference on Eye Movement.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Neural architectures for named entity recognition", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "260--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "ACE (Automatic Content Extraction) English annotation guidelines for entities. Version", "authors": [], "year": 2005, "venue": "", "volume": "5", "issue": "", "pages": "2005--2013", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linguistic Data Consortium. 2005. ACE (Automatic Content Extraction) English annotation guidelines for entities. Version, 5(6):2005-08.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1064--1074", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs- CRF. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 1:1064- 1074.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Leveraging cognitive features for sentiment analysis", "authors": [ { "first": "Abhijit", "middle": [], "last": "Mishra", "suffix": "" }, { "first": "Diptesh", "middle": [], "last": "Kanojia", "suffix": "" }, { "first": "Seema", "middle": [], "last": "Nagar", "suffix": "" }, { "first": "Kuntal", "middle": [], "last": "Dey", "suffix": "" }, { "first": "Pushpak", "middle": [], "last": "Bhattacharyya", "suffix": "" } ], "year": 2017, "venue": "Proceedings of The 20th Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "156--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey, and Pushpak Bhattacharyya. 2017. Leveraging cognitive features for sentiment analysis. Proceed- ings of The 20th Conference on Computational Nat- ural Language Learning, pages 156-166.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "WebGazer: Scalable webcam eye tracking using user interactions", "authors": [ { "first": "Alexandra", "middle": [], "last": "Papoutsaki", "suffix": "" }, { "first": "Patsorn", "middle": [], "last": "Sangkloy", "suffix": "" }, { "first": "James", "middle": [], "last": "Laskey", "suffix": "" }, { "first": "Nediyana", "middle": [], "last": "Daskalova", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Huang", "suffix": "" }, { "first": "James", "middle": [], "last": "Hays", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence-IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexandra Papoutsaki, Patsorn Sangkloy, James Laskey, Nediyana Daskalova, Jeff Huang, and James Hays. 2016. WebGazer: Scalable webcam eye tracking using user interactions. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence-IJCAI 2016.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1532-1543.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Visual attention in reading: Eye movements reflect cognitive processes", "authors": [ { "first": "Keith", "middle": [], "last": "Rayner", "suffix": "" } ], "year": 1977, "venue": "Memory & Cognition", "volume": "5", "issue": "4", "pages": "443--448", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keith Rayner. 1977. Visual attention in reading: Eye movements reflect cognitive processes. Memory & Cognition, 5(4):443-448.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Eye movements in reading and information processing: 20 years of research", "authors": [ { "first": "Keith", "middle": [], "last": "Rayner", "suffix": "" } ], "year": 1998, "venue": "Psychological bulletin", "volume": "124", "issue": "3", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keith Rayner. 1998. Eye movements in reading and information processing: 20 years of research. Psy- chological bulletin, 124(3):372.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Using gaze data to predict multiword expressions", "authors": [ { "first": "Shiva", "middle": [], "last": "Omid Rohanian", "suffix": "" }, { "first": "Victoria", "middle": [], "last": "Taslimipoor", "suffix": "" }, { "first": "Le", "middle": [ "An" ], "last": "Yaneva", "suffix": "" }, { "first": "", "middle": [], "last": "Ha", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "601--609", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omid Rohanian, Shiva Taslimipoor, Victoria Yaneva, and Le An Ha. 2017. Using gaze data to predict multiword expressions. In Proceedings of the In- ternational Conference Recent Advances in Natural Language Processing, pages 601-609.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Low-cost gaze interaction: ready to deliver the promises", "authors": [ { "first": "Javier", "middle": [], "last": "San Agustin", "suffix": "" }, { "first": "Henrik", "middle": [], "last": "Skovsgaard", "suffix": "" }, { "first": "John", "middle": [ "Paulin" ], "last": "Hansen", "suffix": "" }, { "first": "Dan", "middle": [ "Witzner" ], "last": "Hansen", "suffix": "" } ], "year": 2009, "venue": "CHI'09 Extended Abstracts on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "4453--4458", "other_ids": {}, "num": null, "urls": [], "raw_text": "Javier San Agustin, Henrik Skovsgaard, John Paulin Hansen, and Dan Witzner Hansen. 2009. Low-cost gaze interaction: ready to deliver the promises. In CHI'09 Extended Abstracts on Human Factors in Computing Systems, pages 4453-4458.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Introduction to the CoNLL-2003 shared task: Languageindependent named entity recognition", "authors": [ { "first": "F", "middle": [], "last": "Erik", "suffix": "" }, { "first": "Fien", "middle": [], "last": "Sang", "suffix": "" }, { "first": "", "middle": [], "last": "De Meulder", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 7th Conference on Natural Language Learning", "volume": "4", "issue": "", "pages": "142--147", "other_ids": {}, "num": null, "urls": [], "raw_text": "Erik F Sang and Fien De Meulder. 2003. Introduc- tion to the CoNLL-2003 shared task: Language- independent named entity recognition. In Proceed- ings of the 7th Conference on Natural Language Learning, volume 4, pages 142-147.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Realtime eye gaze tracking with an unmodified commodity webcam employing a neural network", "authors": [ { "first": "Weston", "middle": [], "last": "Sewell", "suffix": "" }, { "first": "Oleg", "middle": [], "last": "Komogortsev", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weston Sewell and Oleg Komogortsev. 2010. Real- time eye gaze tracking with an unmodified commod- ity webcam employing a neural network. In CHI'10", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Extended Abstracts on Human Factors in Computing Systems", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "3739--3744", "other_ids": {}, "num": null, "urls": [], "raw_text": "Extended Abstracts on Human Factors in Comput- ing Systems, pages 3739-3744.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Perelygin", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Chuang", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Ng", "suffix": "" }, { "first": "", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1631--1642", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Process- ing, pages 1631-1642.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Evaluating word embeddings with fMRI and eye-tracking", "authors": [ { "first": "Anders", "middle": [], "last": "S\u00f8gaard", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP", "volume": "", "issue": "", "pages": "116--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anders S\u00f8gaard. 2016. Evaluating word embeddings with fMRI and eye-tracking. In Proceedings of the 1st Workshop on Evaluating Vector-Space Represen- tations for NLP, pages 116-121.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Results of the wnut16 named entity recognition shared task", "authors": [ { "first": "Benjamin", "middle": [], "last": "Strauss", "suffix": "" }, { "first": "Bethany", "middle": [], "last": "Toma", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Marie-Catherine", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2nd Workshop on Noisy Usergenerated Text (WNUT)", "volume": "", "issue": "", "pages": "138--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Strauss, Bethany Toma, Alan Ritter, Marie- Catherine de Marneffe, and Wei Xu. 2016. Results of the wnut16 named entity recognition shared task. In Proceedings of the 2nd Workshop on Noisy User- generated Text (WNUT), pages 138-144.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "An eye-tracking study of named entity annotation", "authors": [ { "first": "Takenobu", "middle": [], "last": "Tokunaga", "suffix": "" }, { "first": "Hitoshi", "middle": [], "last": "Nishikawa", "suffix": "" }, { "first": "Tomoya", "middle": [], "last": "Iwakura", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "758--764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Takenobu Tokunaga, Hitoshi Nishikawa, and Tomoya Iwakura. 2017. An eye-tracking study of named entity annotation. Proceedings of the International Conference Recent Advances in Natural Language Processing, pages 758-764.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A cognitive cost model of annotations based on eye-tracking data", "authors": [ { "first": "Katrin", "middle": [], "last": "Tomanek", "suffix": "" }, { "first": "Udo", "middle": [], "last": "Hahn", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Lohmann", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Ziegler", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1158--1167", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katrin Tomanek, Udo Hahn, Steffen Lohmann, and J\u00fcrgen Ziegler. 2010. A cognitive cost model of an- notations based on eye-tracking data. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1158-1167.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Eye movements, word familiarity, and vocabulary acquisition", "authors": [ { "first": "Rihana", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Morris", "suffix": "" } ], "year": 2004, "venue": "European Journal of Cognitive Psychology", "volume": "16", "issue": "1-2", "pages": "312--339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rihana Williams and Robin Morris. 2004. Eye move- ments, word familiarity, and vocabulary acquisition. European Journal of Cognitive Psychology, 16(1- 2):312-339.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Turkergaze: Crowdsourcing saliency with webcam based eye tracking", "authors": [ { "first": "Pingmei", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Krista", "middle": [ "A" ], "last": "Ehinger", "suffix": "" }, { "first": "Yinda", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Finkelstein", "suffix": "" }, { "first": "R", "middle": [], "last": "Sanjeev", "suffix": "" }, { "first": "Jianxiong", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "", "middle": [], "last": "Xiao", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1504.06755" ] }, "num": null, "urls": [], "raw_text": "Pingmei Xu, Krista A Ehinger, Yinda Zhang, Adam Finkelstein, Sanjeev R Kulkarni, and Jianxiong Xiao. 2015. Turkergaze: Crowdsourcing saliency with webcam based eye tracking. arXiv preprint arXiv:1504.06755.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Main architecture of the network. Character and word embeddings concatenated with gaze features are given to a bidirectional LSTM. l i represents the word i and its left context, r i represents the word i and its right context. Concatenating these two vectors yields a representation of the word i in its context, c i .", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Results per class for the models trained on all gaze datasets combined.", "num": null }, "TABREF0": { "type_str": "table", "num": null, "html": null, "text": "", "content": "
also shows the differences in mean fixation times
between the datasets (i.e. fixation duration (the av-
erage duration of a single fixation on a word in
" }, "TABREF1": { "type_str": "table", "num": null, "html": null, "text": "", "content": "" }, "TABREF2": { "type_str": "table", "num": null, "html": null, "text": "Gaze features extracted from the Dundee, GECO and ZuCo corpora.", "content": "
" }, "TABREF5": { "type_str": "table", "num": null, "html": null, "text": "75.36 75.62 75.44 Dundee token 75.68 71.54 73.55* 78.85 74.51 77.02 type 76.44 77.09 76.75* 78.33 76.49 77.35", "content": "
DundeeGECOZuCo
PRFPRFPRF
baseline 72.40 baseline 58.91 74.20 70.71 34.91 43.8068.88 42.49 52.38
GECO token59.6135.6244.5369.18 44.22 53.81
type58.3935.9944.4467.69 42.36 52.01
baseline 65.8554.0159.34 83.00 78.1180.48
ZuCotoken72.6250.7659.70 82.92 75.3578.91
type69.2153.0559.95 83.68 74.5778.85
" }, "TABREF6": { "type_str": "table", "num": null, "html": null, "text": "Cross-corpus results: Precision (P), recall (R) and F 1 -score (F) for all models trained on one dataset and tested on another (rows = training dataset; columns = test dataset; best results in bold; * indicates statistically significant improvements). The baseline models are trained without eye-tracking features, token models on the original eye-tracking features, and type are the models trained with type-aggregated features computed on all datasets.", "content": "" } } } }