|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T03:35:20.649335Z" |
|
}, |
|
"title": "Situation-Specific Multimodal Feature Adaptation", |
|
"authors": [ |
|
{ |
|
"first": "Ozge", |
|
"middle": [], |
|
"last": "Ala\u00e7am", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "Language Technology Group", |
|
"institution": "Universit\u00e4t Hamburg Hamburg", |
|
"location": { |
|
"country": "Germany" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Current technological and scientific developments on assistive technologies result in a considerable need for NLP models to successfully grasp the intention of the user in situated settings. Situated language comprehension, where different multimodal cues are inherently present and essential parts of the situations, can not be handled in isolation. In this research proposal, we aim to quantify the influence of each modality including the eyemovements of the speaker as a deictic cue to gain deeper understanding about multimodal interaction. By doing this, we mainly focus on the role of various referential complexities in this interaction. The proposed model encodes the referential complexity of the situated settings in the embedding space during the pretraining phase. This will, in return, implicitly guide the model to adjust to situation-specific properties of an unseen test case. In this paper, we summarize the challenges of intention extraction and propose a methodological approach to investigate a situation-specific feature adaptation to improve crossmodal mapping and meaning recovery from noisy communication settings.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Current technological and scientific developments on assistive technologies result in a considerable need for NLP models to successfully grasp the intention of the user in situated settings. Situated language comprehension, where different multimodal cues are inherently present and essential parts of the situations, can not be handled in isolation. In this research proposal, we aim to quantify the influence of each modality including the eyemovements of the speaker as a deictic cue to gain deeper understanding about multimodal interaction. By doing this, we mainly focus on the role of various referential complexities in this interaction. The proposed model encodes the referential complexity of the situated settings in the embedding space during the pretraining phase. This will, in return, implicitly guide the model to adjust to situation-specific properties of an unseen test case. In this paper, we summarize the challenges of intention extraction and propose a methodological approach to investigate a situation-specific feature adaptation to improve crossmodal mapping and meaning recovery from noisy communication settings.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In recent years, we have witnessed a considerable increase in the use of assistive technologies that can engage in communication and perform tasks. These can come in different forms like smart speakers and mobile devices that you can command with audio, or more specialized task-oriented robots that can actually realize users' command in 3D environments. The steady increase in the use of collaborative robots (IFR, 2018) in daily life brings along another important Human-Computer Interaction theme: the capability of engaging in a natural and smooth spoken dialog with humans, which is a major scientific and technological challenge. Particularly, being able to follow a communication that conveys thoughts and intentions expressed in a flexible manner without the restrictions of a close-set of commands is a crucial component of assistive robots for the handicapped and elderly people and for the education / entertainment purposes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 411, |
|
"end": 422, |
|
"text": "(IFR, 2018)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Spoken and situated communication is composed of various perceptual (e.g. audio, visual) and representational modalities (e.g. language, deictic eye-movements, gestures). Effectiveness and fluency of human communication capabilities inspire us to develop robust language models that can deal with uncertainties by evaluating all the available information from multiple sources and reach a good-enough decision. In order to reach this performance, we need to model our situated language understanding systems to incorporate those modalities and let them interact in a meaningful way. This brings forth some important questions; how to integrate different modalities and how to utilize adaptive processing for effective situationawareness to be able to deal with cases where some of the modalities are restricted due to noise in the communication channels. This capability for crossmodal integration can be a very important feature in resolving references or executing commands for smart speakers or helper robots that aid people in their daily activities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Motivation", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In a task-oriented setting (e.g. helper robots completing a given task), the goal of natural language communication is to extract the intention of the speaker. Such communication usually happens in structurally rich visual environments like the one in Figure 1 , which contains several glasses of different types (wine and tea glasses), mice (computer mouse and cat toy), windows (open and closed) etc.. Some of them are even (partially) occluded from the viewer's perspective. The environments usually also include people and their interactions with the objects (actions). Visual information plays a crucial role in determining the referential objects related with the action and to accomplish the task. Thus, computational solutions that incorporate those cues are expected to perform better in grasping the meaning compared to their (text-only) unimodal counterparts (see Ala\u00e7am et al. (2020a) for a review).", |
|
"cite_spans": [ |
|
{ |
|
"start": 875, |
|
"end": 896, |
|
"text": "Ala\u00e7am et al. (2020a)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 260, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Situated Language Understanding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Determining the correct intention of a user is not always straightforward due to various reasons. Let us take the following sentences as an example:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Situated Language Understanding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "1. Can you bring me the wine, I want to open it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Situated Language Understanding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "2. Can you bring me the wine, I want to drink it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Situated Language Understanding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In both cases, on the syntactic level, the pronoun it refers to the wine. But in the referential world, the first one clearly refers to a wine bottle, while the second (with a lesser degree of certainty) refers to a glass of wine.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Situated Language Understanding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In many cases, even the object names are verbally omitted in spoken utterances. This kind of implicit commands like \"I prefer red, can you open a bottle and bring it to me?\", require the hearer to reconstruct the underlying intention \"Open a bottle of red wine, and bring the bottle\" (Gundel et al., 2012) . Alternatively, depending on the spatial arrangements of the agents and the objects in the room, the intention of the speaker might be slightly different and more complex. For example, when the empty glasses are closer to the listener than the speaker, the interpretation might be: \"Open a bottle of red wine, pour the wine into one of the empty wine glasses and bring the glass of wine to me\". Expressing this intention explicitly most often results in unwieldy utterances.", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 305, |
|
"text": "(Gundel et al., 2012)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Situated Language Understanding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Furthermore, when the environment is noisy, or the communication partner suffers from a motor or cognitive impairment, multimodal integration plays a more critical role. Noise in communication can originate from various sources. It can be linguistic noise (e.g. spelling mistakes, complex attachments), visual ambiguities (e.g. clutter in the environment, occlusions) or an acoustic noise. Instead of waiting for clarification, combining the uncertain information from the linguistic channel with information from the other ones increases the fluency and the effectiveness of the communication (Garay-Vitoria and Abascal, 2004) . One of the most well-known examples to this phenomenon is the cocktail party effect, that highlights the human ability to focus on one particular source while inhibiting the noisy ones. When the informativeness of one modality is reduced due to environmental conditions, the human language processing system can successfully adjust itself by relying less on the unclear modality and using other cues in the environment. In this specific scenario, other informative cues provide more reliable information compared to the noisy linguistic input. These cues can come from the surrounding environment and from the communicational partners, and include eye-gaze direction or representational gestures combined with their referential link to the entities in the environment. Eye-tracking is attracting considerable interest in many assistive technologies such as educational VR systems that provide embodied learning environments or driver monitoring systems. The use of eye-tracking in daily technological products such as mobile phones, laptops and virtual reality headsets is increasing day by day (Brousseau et al., 2020; Rogers, 2019; Khamis et al., 2018) . Therefore, incorporating eye-movements in our language comprehension models is an inevitable outcome of these latest developments, and this makes the systematic research on the combination of this modality with others very crucial.", |
|
"cite_spans": [ |
|
{ |
|
"start": 594, |
|
"end": 627, |
|
"text": "(Garay-Vitoria and Abascal, 2004)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1725, |
|
"end": 1749, |
|
"text": "(Brousseau et al., 2020;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1750, |
|
"end": 1763, |
|
"text": "Rogers, 2019;", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 1764, |
|
"end": 1784, |
|
"text": "Khamis et al., 2018)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Situated Language Understanding", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The success of a situation-aware language comprehension model is highly dependent on the representativeness of the modalities under various conditions. This kind of coverage requires a richly annotated multimodal corpus that displays the variety of language expressiveness and flexibility. Developing such a corpus is a very costly process. Thus, a dataset that profoundly incorporates a variety of modalities and their various aspects addressing language comprehension tasks is currently not available. Therefore, we plan to train the model on a set of available multimodal datasets (as listed below), by using whichever modality constellations they can offer (including datasets from various domains like psycholinguistics, language technology, computer vision, human-robot interaction etc.) in a stepwise manner; namely starting with simple / few relations, then gradually increasing the complexity of the interactions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Collection", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "There are general-purpose multimodal datasets that can be used for training:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Collection", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 MS COCO (Lin et al., 2014) : an object detection and captioning dataset with >200 K labeled images and 5 captions in a sentence form for each image \u2022 Flicker30k (Plummer et al., 2015): 31 K images collected from Flickr, together with 5 reference sentences \u2022 ImageNET (Deng et al., 2009) : 14 M annotated images, hierarchically organized (w.r.t. WordNet) \u2022 MVSO (Jou et al., 2015) : 15 K visual concepts across 12 languages, 7.36 M images Additionally, there are multimodal datasets that were created for a specific task:", |
|
"cite_spans": [ |
|
{ |
|
"start": 10, |
|
"end": 28, |
|
"text": "(Lin et al., 2014)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 269, |
|
"end": 288, |
|
"text": "(Deng et al., 2009)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 363, |
|
"end": 381, |
|
"text": "(Jou et al., 2015)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Collection", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 HuRIC 2.0 (Bastianelli et al., 2014) : audio files (656 sentences) paired with their transcriptions referring to commands for a robot \u2022 LAVA (Berzak et al., 2016) : 237 sentences, with 2 to 3 interpretations per sentence, and a total of 1679 videos that depict visual variations of each interpretation \u2022 CLEVR-Ref+ (Liu et al., 2019) : 100 K synthetic images with several referring expressions \u2022 Eye4Ref (Ala\u00e7am et al., 2020b) : 86 systematically controlled sentence--image pairs and 2024 eye-movement recordings from various referentially complex situations Multimodal embeddings will be created from this pool of datasets. Creating embeddings from various data sources will allow us to cover concepts from various aspects such as linguistic, auditory and visual representations. The variety on the visual modality will also help us to capture different visual depictions in a range from synthetic images to photographs. This will increase the representativeness of the concepts in the training dataset that will in return improve the prediction when it comes to unseen environments either in virtual reality or in a real-world setting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 38, |
|
"text": "(Bastianelli et al., 2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 164, |
|
"text": "(Berzak et al., 2016)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 317, |
|
"end": 335, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 406, |
|
"end": 428, |
|
"text": "(Ala\u00e7am et al., 2020b)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Collection", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "70 % of this collection will be used to create multimodal concept embeddings. The remaining 30 % of the datasets will be included in the test and development sets after semi-automatic and manual annotation of contextual representations, target words, missing words, etc. However, Eye4REF will be used as main testset since it was systematically created to involve referentially complex situations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset Collection", |
|
"sec_num": "3" |
|
}, |
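{

"text": "As a concrete illustration of the split described above, the following is a minimal Python sketch rather than the project's actual pipeline: the dataset pool, the per-dataset example lists and the decision to hold Eye4Ref out entirely are illustrative assumptions.\n\nimport random\n\n# Hypothetical pool: dataset name -> list of multimodal examples (placeholders).\ndataset_pool = {\n    'MS_COCO': [{'id': i} for i in range(10)],\n    'HuRIC_2.0': [{'id': i} for i in range(10)],\n    'CLEVR-Ref+': [{'id': i} for i in range(10)],\n    'Eye4Ref': [{'id': i} for i in range(10)],\n}\n\nrandom.seed(0)\ntrain, dev_test = [], []\nfor name, examples in dataset_pool.items():\n    if name == 'Eye4Ref':\n        dev_test.extend(examples)        # reserved as the main test set\n        continue\n    shuffled = random.sample(examples, len(examples))\n    cut = int(0.7 * len(shuffled))       # 70 % of each corpus for embedding training\n    train.extend(shuffled[:cut])\n    dev_test.extend(shuffled[cut:])      # remaining 30 % for the dev / test sets\n\nprint(len(train), len(dev_test))         # 21 19",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dataset Collection",

"sec_num": "3"

},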
|
{ |
|
"text": "One of the main objectives of this research proposal is to quantify the effect of each modality and their interactions by conducting systematic empirical research with computational modeling and human subject studies. Another objective lies in creating multimodal and multilayer embedding spaces in which the layers will be sensitive to various situation complexities, an approach that has not been considered yet. Moreover, eye-movements of the speakers, as a substantial but underrepresented component of face-to-face communication, are incorporated to further improve NLP methods on meaning extraction and crossmodal reference resolution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objectives", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Model. The proposed method will be able to process several modalities that play a crucial role in communication; (i) Linguistic Information (at syntactic and semantic level), (ii) Situational Information, (iii) Prototypical Knowledge and Relations, and (iv) Speech-accompanying eye-movements of the speaker. The initial base model will focus on the first three capabilities by utilizing data-driven language models such as fasttext (Bojanowski et al., 2017) and commonsense knowledge-bases like ConceptNet (Speer et al., 2017) . At the same time, two modules that (i) incorporate eye-movements and (ii) perform situation-specific feature adaptation will be developed from scratch. In brief, vocabulary obtained from the pre-trained embeddings is used as a bridge between the modalities. For each vocabulary item, multimodal embeddings will be created by processing every input channel, see Figure 2 . For each modality and their joint training, we will utilize an appropriate encoder, such as Fast-R-CNN (Girshick, 2015) for images and attention-based bi-directional LSTMs (e.g. Song et al. (2019) , for text and eye-movement data. A neural network ensemble model will be trained on the embeddings for the task of intended object or action prediction from situated settings with masked information.", |
|
"cite_spans": [ |
|
{ |
|
"start": 432, |
|
"end": 457, |
|
"text": "(Bojanowski et al., 2017)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 506, |
|
"end": 526, |
|
"text": "(Speer et al., 2017)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 1004, |
|
"end": 1020, |
|
"text": "(Girshick, 2015)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 1079, |
|
"end": 1097, |
|
"text": "Song et al. (2019)", |
|
"ref_id": "BIBREF31" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 890, |
|
"end": 898, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Objectives", |
|
"sec_num": "4" |
|
}, |
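{

"text": "To make the intended embedding pipeline more concrete, the following PyTorch sketch shows one possible way to fuse modality-specific encoders into a joint embedding per vocabulary item. It is a toy stand-in rather than the proposed implementation: the small convolutional and LSTM encoders replace the Fast R-CNN, fastText and attention-based bi-LSTM components named above, and all class names, dimensions and input formats are assumptions.\n\nimport torch\nimport torch.nn as nn\n\nclass MultimodalConceptEncoder(nn.Module):\n    # One encoder per modality, fused into a joint embedding per vocabulary item.\n    def __init__(self, vocab_size, dim=128):\n        super().__init__()\n        self.text_emb = nn.Embedding(vocab_size, dim)      # stand-in for fastText vectors\n        self.text_enc = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)\n        self.img_enc = nn.Sequential(                      # stand-in for a Fast R-CNN backbone\n            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),\n            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim))\n        self.gaze_enc = nn.LSTM(4, dim, batch_first=True)  # (x, y, duration, pupil) per fixation\n        self.fuse = nn.Linear(2 * dim + dim + dim, dim)\n\n    def forward(self, tokens, image, fixations):\n        _, (h_t, _) = self.text_enc(self.text_emb(tokens))\n        text_vec = torch.cat([h_t[0], h_t[1]], dim=-1)     # forward + backward final states\n        img_vec = self.img_enc(image)\n        _, (h_g, _) = self.gaze_enc(fixations)\n        return self.fuse(torch.cat([text_vec, img_vec, h_g[-1]], dim=-1))\n\nenc = MultimodalConceptEncoder(vocab_size=1000)\njoint = enc(torch.randint(0, 1000, (2, 12)),   # token ids\n            torch.rand(2, 3, 64, 64),          # images\n            torch.rand(2, 20, 4))              # fixation sequences\nprint(joint.shape)                             # torch.Size([2, 128])\n\nThe ensemble model for masked object or action prediction would then be trained on top of such joint embeddings.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Objectives",

"sec_num": "4"

},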
|
{ |
|
"text": "Guided Multi-Modal Data Fusion. Based on the vast support provided (Qi et al., 2020; Akbari et al., 2019; Niu et al., 2017; Aytar et al., 2017; Kiros et al., 2014) , a guided multilayer datadriven approach will be utilized instead of fusing all datasets together without any guidance. Despite their impressive success to solve specific tasks so far, deep learning methods are hardly interpretable in understanding which properties of inputs contribute to the final decision to which degree. Besides, the abstraction capabilities, which are crucial for dealing with new cases, are still very limited. However, the more we know about the interactions among the modalities, the more we can extract and focus on relevant features, and the more we can guide those effective deep learning methods to perform better in an explainable way. This will pave the ground to advance current methods for crossmodal interaction in situated language processing with a comprehensive approach to process more modalities, thus to deal with new situations even under uncertainty.", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 84, |
|
"text": "(Qi et al., 2020;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 85, |
|
"end": 105, |
|
"text": "Akbari et al., 2019;", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 106, |
|
"end": 123, |
|
"text": "Niu et al., 2017;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 124, |
|
"end": 143, |
|
"text": "Aytar et al., 2017;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 144, |
|
"end": 163, |
|
"text": "Kiros et al., 2014)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objectives", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We plan to obtain concept representations stepby-step and build the concepts over each other with increasing complexity, similar to the development of the human cognitive system. One of the key elements here is to encode referential complexity of the each situated setting in the training data. The multimodal embedding space for each concept will consist of several embeddings, which are sensitive to various complexities (as illustrated in Figure 2 ) and this structure forms the backbone of the situation-specific adaptation. By automatically classifying the complexity of multimodal input based on the predefined complexity factors, each entry in the datasets will contribute to the respective embeddings in the embedding space. This presents a new approach for creating embeddings, taking input complexity into account. Additionally, this configuration also provides a testbed to investigate another interesting question: does restricting the model to use only a complexity embedding that corresponds with input complexity improve the crossmodal mapping task performance? For example, when the multimodal input refers to a highly featured concept representation (a dinner accompanied by red wine), using a representation that is created from coarse-grain samples (a clip-art of a wine bottle) may yield misclassification and vice versa.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 442, |
|
"end": 450, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Objectives", |
|
"sec_num": "4" |
|
}, |
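{

"text": "A minimal sketch of the complexity-sensitive embedding space described above; the three complexity levels, the random vectors and the rule-based complexity classifier are purely illustrative placeholders for the planned data-driven components.\n\nimport numpy as np\n\nLEVELS = ('simple', 'moderate', 'complex')      # hypothetical complexity levels\n\nrng = np.random.default_rng(0)\n# One embedding per concept *and* per referential-complexity level.\nembedding_space = {\n    'wine': {level: rng.normal(size=64) for level in LEVELS},\n    'glass': {level: rng.normal(size=64) for level in LEVELS},\n}\n\ndef classify_complexity(n_candidate_referents, occluded=False):\n    # Placeholder for the automatic classification based on predefined complexity factors.\n    if occluded or n_candidate_referents > 4:\n        return 'complex'\n    return 'moderate' if n_candidate_referents > 1 else 'simple'\n\ndef lookup(concept, n_candidate_referents, occluded=False):\n    level = classify_complexity(n_candidate_referents, occluded)\n    return embedding_space[concept][level]\n\nprint(lookup('wine', n_candidate_referents=3).shape)   # (64,), taken from the 'moderate' slot",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Objectives",

"sec_num": "4"

},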
|
{ |
|
"text": "Dynamics among different information sources. In the second objective, we quantify the contribution of each modality and their aspects given the situation to mimic human heuristic processing capability. Language comprehension involves complex sequential decision making and is affected by both uncertainty about the current input and lack of knowledge about the upcoming material. Thus, people use -to a large extent -fast and frugal heuristics, i.e. choosing a good-enough representation (Ferreira, 2003) . The heuristic view provides a valid explanation for scenarios with a conversation inside noisy conditions. Instead of waiting / asking for clarification, the model will reach a good-enough decision based on all information gathered through all available input channels. In order to do that, the set of important features given the situated setting should be chosen automatically.", |
|
"cite_spans": [ |
|
{ |
|
"start": 489, |
|
"end": 505, |
|
"text": "(Ferreira, 2003)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objectives", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Structuring the embeddings to have separate slots for each modality and for their combinations will allow us to quantify the contribution of each slot individually given the situation in various complexities. Depending on the communication goal or environmental factors, some modalities would contribute to the solution while others could be simply irrelevant or redundant. Understanding the intention of the user requires understanding of which information provided by the modalities is (more) relevant, complementary, or redundant. The human language processing system does this adjustment quite efficiently. Thus, a model will be designed to pick the most effective perceptual and conceptual cues and to ignore the irrelevant ones depending on the situated context. Then, the attention will be channeled towards the most relevant cue sources.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objectives", |
|
"sec_num": "4" |
|
}, |
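{

"text": "One possible realization of the modality slots and their situation-dependent weighting is a simple learned gate over the slots, sketched below in PyTorch; the softmax gate and the slot layout are assumptions rather than the committed design, but the per-slot weights make each modality's contribution directly inspectable.\n\nimport torch\nimport torch.nn as nn\n\nclass ModalityGate(nn.Module):\n    # Scores each modality slot, normalizes the scores, and fuses the slots,\n    # so irrelevant or noisy slots can be down-weighted per situation.\n    def __init__(self, dim):\n        super().__init__()\n        self.score = nn.Linear(dim, 1)\n\n    def forward(self, slots):                    # slots: (batch, n_slots, dim)\n        weights = torch.softmax(self.score(slots).squeeze(-1), dim=-1)\n        fused = (weights.unsqueeze(-1) * slots).sum(dim=1)\n        return fused, weights                    # weights expose each slot's contribution\n\ngate = ModalityGate(dim=128)\nslots = torch.rand(2, 3, 128)                    # e.g. language, vision and gaze slots\nfused, w = gate(slots)\nprint(fused.shape, w.shape)                      # torch.Size([2, 128]) torch.Size([2, 3])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Objectives",

"sec_num": "4"

},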
|
{ |
|
"text": "Integrating eye-movements of the speaker. Many eye-tracking technologies in the market employ a sufficient sampling frequency to enable gazecontingent applications. With advancements in the eye-tracking technology, incorporating eye movements of a speaker or a listener enables us to predict / resolve which entity is being referred to in a complex visual environment (Klerke and Plank, 2019; Mitev et al., 2018; Mishra et al., 2017; Koleva et al., 2015) . However, these studies are limited to relatively simple scenes. Situated language understanding in a referentially complex environment or under noisy situations imposes a different level of challenge that we aim to address. The number of studies that utilize gaze features (Sood et al., 2020; Park et al., 2019; Karessli et al., 2017) is very limited. In this study, we propose to incorporate the eye-movements of the speaker to improve the crossmodal mapping performance. This additional deictic modality may improve the recovery of the intended meaning especially when the communication is noisy (acoustically or visually). The gaze embeddings will be created by using existing eyemovement datasets. However, there are only few big-size eye-movement datasets available (Ala\u00e7am et al., 2020b; Wilming et al., 2017; Ehinger et al., 2009) . Thus, to enlarge available data, we will conduct a set of experimental studies with increasing referential complexity. There, we will record participants' instructions on a task-oriented scenario and their eye-movements regarding target objects.", |
|
"cite_spans": [ |
|
{ |
|
"start": 368, |
|
"end": 392, |
|
"text": "(Klerke and Plank, 2019;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 393, |
|
"end": 412, |
|
"text": "Mitev et al., 2018;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 413, |
|
"end": 433, |
|
"text": "Mishra et al., 2017;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 454, |
|
"text": "Koleva et al., 2015)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 730, |
|
"end": 749, |
|
"text": "(Sood et al., 2020;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 750, |
|
"end": 768, |
|
"text": "Park et al., 2019;", |
|
"ref_id": "BIBREF26" |
|
}, |
|
{ |
|
"start": 769, |
|
"end": 791, |
|
"text": "Karessli et al., 2017)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 1228, |
|
"end": 1250, |
|
"text": "(Ala\u00e7am et al., 2020b;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1251, |
|
"end": 1272, |
|
"text": "Wilming et al., 2017;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1273, |
|
"end": 1294, |
|
"text": "Ehinger et al., 2009)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objectives", |
|
"sec_num": "4" |
|
}, |
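{

"text": "As a simple illustration of how speaker fixations could be turned into a deictic signal over candidate referents, the sketch below aggregates dwell time per object; the fixation format, the bounding boxes and the aggregation scheme are illustrative assumptions rather than the schema of any of the datasets listed above.\n\nfrom collections import defaultdict\n\nfixations = [                                  # hypothetical normalized fixation record\n    {'x': 0.21, 'y': 0.40, 'duration_ms': 180},\n    {'x': 0.23, 'y': 0.42, 'duration_ms': 240},\n    {'x': 0.70, 'y': 0.55, 'duration_ms': 120},\n]\nobjects = {                                    # candidate referents as (x0, y0, x1, y1) boxes\n    'wine_bottle': (0.15, 0.30, 0.30, 0.50),\n    'wine_glass': (0.60, 0.45, 0.80, 0.65),\n}\n\ndef dwell_times(fixations, objects):\n    totals = defaultdict(float)\n    for f in fixations:\n        for name, (x0, y0, x1, y1) in objects.items():\n            if x0 <= f['x'] <= x1 and y0 <= f['y'] <= y1:\n                totals[name] += f['duration_ms']\n    return dict(totals)\n\nprint(dwell_times(fixations, objects))         # {'wine_bottle': 420.0, 'wine_glass': 120.0}\n\nSuch per-object dwell features could either be used directly as a deictic cue or fed into a sequence encoder to obtain gaze embeddings.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Objectives",

"sec_num": "4"

},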
|
{ |
|
"text": "Evaluation of the assistive model. After all information sources from various modalities are made available and integrated, the contribution of each modality will be investigated by performing systematic manipulations (e.g. removing a modality one-by-one from the input). The standard accuracy and efficiency metrics will be used for evaluating the models' performance, including the overall runtime, modality-specific accuracy parameters (such as PoS-tag or semantic class accuracy), target mapping accuracy, and accuracy in recovering the missing word.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objectives", |
|
"sec_num": "4" |
|
}, |
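{

"text": "The modality-wise manipulation described above can be organized as a simple ablation loop; the toy model, the example format and the single accuracy metric below are placeholders for the trained ensemble and the full metric set.\n\nimport random\n\nMODALITIES = ('language', 'vision', 'gaze')\n\ndef evaluate(model, examples, dropped=None):\n    # Target-mapping accuracy with one modality masked out of the input.\n    correct = 0\n    for ex in examples:\n        inputs = {m: ex[m] for m in MODALITIES if m != dropped}\n        correct += int(model(inputs) == ex['target'])\n    return correct / len(examples)\n\ndef toy_model(inputs):\n    # Stand-in for the ensemble: resolves the referent when gaze is present, guesses otherwise.\n    if 'gaze' in inputs:\n        return 'wine_bottle'\n    return random.choice(['wine_bottle', 'wine_glass'])\n\nrandom.seed(0)\nexamples = [{'language': '...', 'vision': '...', 'gaze': '...', 'target': 'wine_bottle'}] * 20\nresults = {'full': evaluate(toy_model, examples)}\nfor m in MODALITIES:\n    results['minus_' + m] = evaluate(toy_model, examples, dropped=m)\nprint(results)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Objectives",

"sec_num": "4"

},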
|
{ |
|
"text": "In addition to evaluating how this model improves the task of reference resolution for acoustically and / or visually noisy settings, its role as assistive technology will be investigated by conducting a user study. The experimental setup will be very similar to the one in the data collection phase. However, this time the participant will interact with a demo model that displays all the abovementioned capabilities. The model will try to ex-tract user intention by predicting the communicationally relevant objects on the fly. The usability study on the demo model will be evaluated based on the efficiency (how long does it take to reach a decision?), effectiveness (how accurate is the system decision?) and the user satisfaction ratings that will be obtained through the same evaluation metrics and a user survey.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Objectives", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In this research proposal, we focus on three factors that can enhance the communication between humans and assistive technologies. The first one is the encoding of the referential complexity of the situated settings while creating multimodal embeddings. As pointed out in (Singh et al., 2020) , pre-trained models, that were created by fusing the modalities without constraints, are expected to be an out-of-the-box solution and work well for a variety of simpler tasks. In this research, we propose to encode referential complexity during the training phase to see whether the complexity-sensitive embeddings will improve the tasks of crossmodal mapping and meaning recovery. We believe that this will implicitly direct the model to focus on various textual and visual forms of the same concepts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 272, |
|
"end": 292, |
|
"text": "(Singh et al., 2020)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The second factor is the inclusion of eyemovements as an additional modality to enhance meaning recovery from noisy settings where some parts of the sentences or visual labels are masked.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "At last, this research will also contribute to a better understanding of the contributions of each individual modality, of amodal and modality-specific features and their interactions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The proposed method will be beneficial for other task-oriented communication scenarios, where the cognitive systems need to understand the intention and to aid the user in the most efficient and effective way, such as educational video-games, training simulations, and assistive navigation systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Multi-level multimodal common semantic space for image-phrase grounding", |
|
"authors": [ |
|
{ |
|
"first": "Hassan", |
|
"middle": [], |
|
"last": "Akbari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svebor", |
|
"middle": [], |
|
"last": "Karaman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Surabhi", |
|
"middle": [], |
|
"last": "Bhargava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brian", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carl", |
|
"middle": [], |
|
"last": "Vondrick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shih-Fu", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12476--12486", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hassan Akbari, Svebor Karaman, Surabhi Bhargava, Brian Chen, Carl Vondrick, and Shih-Fu Chang. 2019. Multi-level multimodal common semantic space for image-phrase grounding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12476-12486, Long Beach, CA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Crossmodal language comprehension -psycholinguistic insights and computational approaches", |
|
"authors": [ |
|
{ |
|
"first": "Ozge", |
|
"middle": [], |
|
"last": "Ala\u00e7am", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xingshan", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wolfgang", |
|
"middle": [], |
|
"last": "Menzel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tobias", |
|
"middle": [], |
|
"last": "Staron", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Frontiers in Neurorobotics", |
|
"volume": "14", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.3389/fnbot.2020.00002" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ozge Ala\u00e7am, Xingshan Li, Wolfgang Menzel, and Tobias Staron. 2020a. Crossmodal language com- prehension -psycholinguistic insights and computa- tional approaches. Frontiers in Neurorobotics, 14:2.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Eye4Ref: A multimodal eye movement dataset of referentially complex situations", |
|
"authors": [ |
|
{ |
|
"first": "Ozge", |
|
"middle": [], |
|
"last": "Ala\u00e7am", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eugen", |
|
"middle": [], |
|
"last": "Ruppert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amr", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Salama", |
|
"suffix": "" |
|
},

{

"first": "Tobias",

"middle": [],

"last": "Staron",

"suffix": ""

},

{

"first": "Wolfgang",

"middle": [],

"last": "Menzel",

"suffix": ""

}
|
], |
|
"year": null, |
|
"venue": "Proceedings of the12th International Conference on Language Resources and Evaluation (LREC)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2396--2404", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ozge Ala\u00e7am, Eugen Ruppert, Amr R. Salama, To- bias Staron, and Wolfgang Menzel. 2020b. Eye4Ref: A multimodal eye movement dataset of referen- tially complex situations. In Proceedings of the12th International Conference on Language Resources and Evaluation (LREC), page 2396-2404, Marseille, France.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "See, hear, and read: Deep aligned representations", |
|
"authors": [ |
|
{ |
|
"first": "Yusuf", |
|
"middle": [], |
|
"last": "Aytar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carl", |
|
"middle": [], |
|
"last": "Vondrick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Torralba", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1706.00932" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yusuf Aytar, Carl Vondrick, and Antonio Torralba. 2017. See, hear, and read: Deep aligned representations. arXiv preprint arXiv:1706.00932.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Huric: a human robot interaction corpus", |
|
"authors": [ |
|
{ |
|
"first": "Emanuele", |
|
"middle": [], |
|
"last": "Bastianelli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Giuseppe", |
|
"middle": [], |
|
"last": "Castellucci", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danilo", |
|
"middle": [], |
|
"last": "Croce", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luca", |
|
"middle": [], |
|
"last": "Iocchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Basili", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniele", |
|
"middle": [], |
|
"last": "Nardi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4519--4526", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emanuele Bastianelli, Giuseppe Castellucci, Danilo Croce, Luca Iocchi, Roberto Basili, and Daniele Nardi. 2014. Huric: a human robot interaction cor- pus. In LREC, pages 4519-4526.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Do you see what i mean? visual resolution of linguistic ambiguities", |
|
"authors": [ |
|
{ |
|
"first": "Yevgeni", |
|
"middle": [], |
|
"last": "Berzak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrei", |
|
"middle": [], |
|
"last": "Barbu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Harari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Boris", |
|
"middle": [], |
|
"last": "Katz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shimon", |
|
"middle": [], |
|
"last": "Ullman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1603.08079" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yevgeni Berzak, Andrei Barbu, Daniel Harari, Boris Katz, and Shimon Ullman. 2016. Do you see what i mean? visual resolution of linguistic ambiguities. arXiv preprint arXiv:1603.08079.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Enriching word vectors with subword information", |
|
"authors": [ |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Bojanowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edouard", |
|
"middle": [], |
|
"last": "Grave", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Armand", |
|
"middle": [], |
|
"last": "Joulin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Transactions of the Association for Computational Linguistics", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "135--146", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vec- tors with subword information. Transactions of the Association for Computational Linguistics, 5:135- 146.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Hybrid eye-tracking on a smartphone with cnn feature extraction and an infrared 3d model", |
|
"authors": [ |
|
{ |
|
"first": "Braiden", |
|
"middle": [], |
|
"last": "Brousseau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Rose", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Moshe", |
|
"middle": [], |
|
"last": "Eizenman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Sensors", |
|
"volume": "20", |
|
"issue": "2", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Braiden Brousseau, Jonathan Rose, and Moshe Eizen- man. 2020. Hybrid eye-tracking on a smartphone with cnn feature extraction and an infrared 3d model. Sensors, 20(2):543.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Imagenet: A large-scale hierarchical image database", |
|
"authors": [ |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei", |
|
"middle": [], |
|
"last": "Dong", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li-Jia", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "IEEE conference on computer vision and pattern recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "248--255", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In IEEE conference on computer vision and pattern recognition, pages 248- 255, Miami, Florida, USA.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Modelling search for people in 900 scenes: A combined source model of eye guidance", |
|
"authors": [ |
|
{

"first": "Krista",

"middle": [

"A"

],

"last": "Ehinger",

"suffix": ""

},

{

"first": "Barbara",

"middle": [],

"last": "Hidalgo-Sotelo",

"suffix": ""

},

{

"first": "Antonio",

"middle": [],

"last": "Torralba",

"suffix": ""

},

{

"first": "Aude",

"middle": [],

"last": "Oliva",

"suffix": ""

}
|
], |
|
"year": 2009, |
|
"venue": "Visual cognition", |
|
"volume": "17", |
|
"issue": "6-7", |
|
"pages": "945--978", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Krista A Ehinger, Barbara Hidalgo-Sotelo, Antonio Tor- ralba, and Aude Oliva. 2009. Modelling search for people in 900 scenes: A combined source model of eye guidance. Visual cognition, 17(6-7):945-978.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "The misinterpretation of noncanonical sentences", |
|
"authors": [ |
|
{ |
|
"first": "Fernanda", |
|
"middle": [], |
|
"last": "Ferreira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Cognitive Psychology", |
|
"volume": "47", |
|
"issue": "2", |
|
"pages": "164--203", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fernanda Ferreira. 2003. The misinterpretation of noncanonical sentences. Cognitive Psychology, 47(2):164-203.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "A comparison of prediction techniques to enhance the communication rate", |
|
"authors": [ |
|
{

"first": "Nestor",

"middle": [],

"last": "Garay-Vitoria",

"suffix": ""

},
|
{ |
|
"first": "Julio", |
|
"middle": [], |
|
"last": "Abascal", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "ERCIM Workshop on User Interfaces for All", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "400--417", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nestor Garay-Vitoria and Julio Abascal. 2004. A com- parison of prediction techniques to enhance the com- munication rate. In ERCIM Workshop on User Interfaces for All, pages 400-417, Vienna, Austria. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Fast r-cnn", |
|
"authors": [ |
|
{ |
|
"first": "Ross", |
|
"middle": [], |
|
"last": "Girshick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE international conference on computer vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1440--1448", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ross Girshick. 2015. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440-1448, Santiago, Chile.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Underspecification of cognitive status in reference production: Some empirical predictions", |
|
"authors": [ |
|
{ |
|
"first": "Jeanette", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Gundel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nancy", |
|
"middle": [], |
|
"last": "Hedberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Zacharski", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Topics in Cognitive Science", |
|
"volume": "4", |
|
"issue": "2", |
|
"pages": "249--268", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeanette K. Gundel, Nancy Hedberg, and Ron Zacharski. 2012. Underspecification of cognitive status in refer- ence production: Some empirical predictions. Topics in Cognitive Science, 4(2):249-268.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Executive summary world robotics 2019 service robots", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ifr", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "IFR. 2018. Executive summary world robotics 2019 service robots.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Visual affect around the world: A large-scale multilingual visual sentiment ontology", |
|
"authors": [ |
|
{ |
|
"first": "Brendan", |
|
"middle": [], |
|
"last": "Jou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nikolaos", |
|
"middle": [], |
|
"last": "Pappas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Miriam", |
|
"middle": [], |
|
"last": "Redi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mercan", |
|
"middle": [], |
|
"last": "Topkara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shih-Fu", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 23rd ACM international conference on Multimedia", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "159--168", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brendan Jou, Tao Chen, Nikolaos Pappas, Miriam Redi, Mercan Topkara, and Shih-Fu Chang. 2015. Visual affect around the world: A large-scale multilingual visual sentiment ontology. In Proceedings of the 23rd ACM international conference on Multimedia, pages 159-168.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Gaze embeddings for zeroshot image classification", |
|
"authors": [ |
|
{ |
|
"first": "Nour", |
|
"middle": [], |
|
"last": "Karessli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeynep", |
|
"middle": [], |
|
"last": "Akata", |
|
"suffix": "" |
|
},

{

"first": "Bernt",

"middle": [],

"last": "Schiele",

"suffix": ""

},

{

"first": "Andreas",

"middle": [],

"last": "Bulling",

"suffix": ""

}
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4525--4534", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nour Karessli, Zeynep Akata, Bernt Schiele, and An- dreas Bulling. 2017. Gaze embeddings for zero- shot image classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4525-4534, Honolulu, USA.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "The past, present, and future of gaze-enabled handheld mobile devices: survey and lessons learned", |
|
"authors": [ |
|
{ |
|
"first": "Mohamed", |
|
"middle": [], |
|
"last": "Khamis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Alt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andreas", |
|
"middle": [], |
|
"last": "Bulling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--17", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mohamed Khamis, Florian Alt, and Andreas Bulling. 2018. The past, present, and future of gaze-enabled handheld mobile devices: survey and lessons learned. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, pages 1-17, Barcelona, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Unifying visual-semantic embeddings with multimodal neural language models", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Kiros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ruslan", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Zemel", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1411.2539" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. 2014. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "At a glance: The impact of gaze aggregation views on syntactic tagging", |
|
"authors": [ |
|
{ |
|
"first": "Sigrid", |
|
"middle": [], |
|
"last": "Klerke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Barbara", |
|
"middle": [], |
|
"last": "Plank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "51--61", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sigrid Klerke and Barbara Plank. 2019. At a glance: The impact of gaze aggregation views on syntactic tagging. In Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN), pages 51-61, Hong Kong, China.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "The impact of listener gaze on predicting reference resolution", |
|
"authors": [ |
|
{ |
|
"first": "Nikolina", |
|
"middle": [], |
|
"last": "Koleva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mart\u00edn", |
|
"middle": [], |
|
"last": "Villalba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Staudte", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Koller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "812--817", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikolina Koleva, Mart\u00edn Villalba, Maria Staudte, and Alexander Koller. 2015. The impact of listener gaze on predicting reference resolution. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, volume 2, pages 812-817, Beijing, China.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Microsoft coco: Common objects in context", |
|
"authors": [ |
|
{ |
|
"first": "Tsung-Yi", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Maire", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Serge", |
|
"middle": [], |
|
"last": "Belongie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [], |
|
"last": "Hays", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pietro", |
|
"middle": [], |
|
"last": "Perona", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deva", |
|
"middle": [], |
|
"last": "Ramanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Piotr", |
|
"middle": [], |
|
"last": "Doll\u00e1r", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C Lawrence", |
|
"middle": [], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "European conference on computer vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "740--755", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755, Zurich, Switzer- land. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Clevr-ref+: Diagnosing visual reasoning with referring expressions", |
|
"authors": [ |
|
{ |
|
"first": "Runtao", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chenxi", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yutong", |
|
"middle": [], |
|
"last": "Bai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Yuille", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4185--4194", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Runtao Liu, Chenxi Liu, Yutong Bai, and Alan L Yuille. 2019. Clevr-ref+: Diagnosing visual reason- ing with referring expressions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4185-4194, Long Beach, CA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Learning cognitive features from gaze data for sentiment and sarcasm classification using convolutional neural network", |
|
"authors": [ |
|
{ |
|
"first": "Abhijit", |
|
"middle": [], |
|
"last": "Mishra", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kuntal", |
|
"middle": [], |
|
"last": "Dey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pushpak", |
|
"middle": [], |
|
"last": "Bhattacharyya", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "377--387", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Abhijit Mishra, Kuntal Dey, and Pushpak Bhattacharyya. 2017. Learning cognitive features from gaze data for sentiment and sarcasm classification us- ing convolutional neural network. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 377-387, Vancou- ver, Canada.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Using listener gaze to refer in installments benefits understanding", |
|
"authors": [ |
|
{ |
|
"first": "Nikolina", |
|
"middle": [], |
|
"last": "Mitev", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Renner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thies", |
|
"middle": [], |
|
"last": "Pfeiffer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Staudte", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 40th Annual Meeting of the Cognitive Science Society", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2122--2127", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nikolina Mitev, Patrick Renner, Thies Pfeiffer, and Maria Staudte. 2018. Using listener gaze to refer in installments benefits understanding. In Proceedings of the 40th Annual Meeting of the Cognitive Science Society, pages 2122-2127, Madison, Wisconsin, USA.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Hierarchical multimodal lstm for dense visual-semantic embedding", |
|
"authors": [ |
|
{ |
|
"first": "Zhenxing", |
|
"middle": [], |
|
"last": "Niu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mo", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Le", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xinbo", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gang", |
|
"middle": [], |
|
"last": "Hua", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "The IEEE International Conference on Computer Vision (ICCV)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhenxing Niu, Mo Zhou, Le Wang, Xinbo Gao, and Gang Hua. 2017. Hierarchical multimodal lstm for dense visual-semantic embedding. In The IEEE International Conference on Computer Vision (ICCV), Venice, Italy.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Few-shot adaptive gaze estimation", |
|
"authors": [ |
|
{ |
|
"first": "Seonwook", |
|
"middle": [], |
|
"last": "Park", |
|
"suffix": "" |
|
}, |
|
{

"first": "Shalini",

"middle": [],

"last": "De Mello",

"suffix": ""

},

{

"first": "Pavlo",

"middle": [],

"last": "Molchanov",

"suffix": ""

},

{

"first": "Umar",

"middle": [],

"last": "Iqbal",

"suffix": ""

},

{

"first": "Otmar",

"middle": [],

"last": "Hilliges",

"suffix": ""

},

{

"first": "Jan",

"middle": [],

"last": "Kautz",

"suffix": ""

}
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9368--9377", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Seonwook Park, Shalini De Mello, Pavlo Molchanov, Umar Iqbal, Otmar Hilliges, and Jan Kautz. 2019. Few-shot adaptive gaze estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 9368-9377, Seoul, Korea.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Flickr30k entities: Collecting region-to-phrase correspondences for richer imageto-sentence models", |
|
"authors": [ |
|
{ |
|
"first": "Bryan", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Plummer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Liwei", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Cervantes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Juan", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Caicedo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Lazebnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the IEEE international conference on computer vision", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "2641--2649", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image- to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641-2649, Santiago, Chile.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Imagebert: Cross-modal pre-training with large-scale weak-supervised imagetext data", |
|
"authors": [ |
|
{ |
|
"first": "Di", |
|
"middle": [], |
|
"last": "Qi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lin", |
|
"middle": [], |
|
"last": "Su", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jia", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Cui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Taroon", |
|
"middle": [], |
|
"last": "Bharti", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Arun", |
|
"middle": [], |
|
"last": "Sacheti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2001.07966" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Di Qi, Lin Su, Jia Song, Edward Cui, Taroon Bharti, and Arun Sacheti. 2020. Imagebert: Cross-modal pre-training with large-scale weak-supervised image- text data. arXiv preprint arXiv:2001.07966.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Seven reasons why eyetracking will fundamentally change vr", |
|
"authors": [ |
|
{ |
|
"first": "Sol", |
|
"middle": [ |
|
"Rogers" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sol Rogers. 2019. Seven reasons why eye- tracking will fundamentally change vr.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Are we pretraining it right? Digging deeper into visio-linguistic pretraining", |
|
"authors": [ |
|
{ |
|
"first": "Amanpreet", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vedanuj", |
|
"middle": [], |
|
"last": "Goswami", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devi", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.08744" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Amanpreet Singh, Vedanuj Goswami, and Devi Parikh. 2020. Are we pretraining it right? Digging deeper into visio-linguistic pretraining. arXiv preprint arXiv:2004.08744.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Abstractive text summarization using lstm-cnn based deep learning", |
|
"authors": [ |
|
{ |
|
"first": "Shengli", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haitao", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tongxiao", |
|
"middle": [], |
|
"last": "Ruan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Multimedia Tools and Applications", |
|
"volume": "78", |
|
"issue": "1", |
|
"pages": "857--875", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Shengli Song, Haitao Huang, and Tongxiao Ruan. 2019. Abstractive text summarization using lstm-cnn based deep learning. Multimedia Tools and Applications, 78(1):857-875.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Philipp M\u00fcller, and Andreas Bulling. 2020. Improving natural language processing tasks with human gaze-guided neural attention", |
|
"authors": [ |
|
{ |
|
"first": "Ekta", |
|
"middle": [], |
|
"last": "Sood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Simon", |
|
"middle": [], |
|
"last": "Tannert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2010.07891" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ekta Sood, Simon Tannert, Philipp M\u00fcller, and Andreas Bulling. 2020. Improving natural language process- ing tasks with human gaze-guided neural attention. arXiv preprint arXiv:2010.07891.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Conceptnet 5.5: An open multilingual graph of general knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Robyn", |
|
"middle": [], |
|
"last": "Speer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [], |
|
"last": "Chin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Catherine", |
|
"middle": [], |
|
"last": "Havasi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
|
"volume": "31", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31, San Francisco, CA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "An extensive dataset of eye movements during viewing of complex images", |
|
"authors": [ |
|
{ |
|
"first": "Niklas", |
|
"middle": [], |
|
"last": "Wilming", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Selim", |
|
"middle": [], |
|
"last": "Onat", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jos\u00e9", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Ossand\u00f3n", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alper", |
|
"middle": [], |
|
"last": "A\u00e7\u0131k", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Kietzmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Kaspar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricardo", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Gameiro", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandra", |
|
"middle": [], |
|
"last": "Vormberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "K\u00f6nig", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Scientific data", |
|
"volume": "4", |
|
"issue": "1", |
|
"pages": "1--11", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Niklas Wilming, Selim Onat, Jos\u00e9 P Ossand\u00f3n, Alper A\u00e7\u0131k, Tim C Kietzmann, Kai Kaspar, Ricardo R Gameiro, Alexandra Vormberg, and Peter K\u00f6nig. 2017. An extensive dataset of eye movements during viewing of complex images. Scientific data, 4(1):1- 11.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "An example image for a living room scenario.", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Schematic overview of the hierarchical embeddings.", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |