{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:13:36.855908Z" }, "title": "PanGEA: The Panoramic Graph Environment Annotation Toolkit", "authors": [ { "first": "Alexander", "middle": [], "last": "Ku", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Peter", "middle": [], "last": "Anderson", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jordi", "middle": [], "last": "Pont-Tuset", "suffix": "", "affiliation": {}, "email": "jponttuset@google.com" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "PanGEA, the Panoramic Graph Environment Annotation toolkit, is a lightweight toolkit for collecting speech and text annotations in photo-realistic 3D environments. PanGEA immerses annotators in a web-based simulation and allows them to move around easily as they speak and/or listen. It includes database and cloud storage integration, plus utilities for automatically aligning recorded speech with manual transcriptions and the virtual pose of the annotators. Out of the box, PanGEA supports two tasks: collecting navigation instructions and navigation instruction following. It could be easily adapted for annotating walking tours, finding and labeling landmarks or objects, and similar tasks. We share best practices learned from using PanGEA in a 20,000-hour annotation effort to collect the Room-Across-Room dataset. We hope that our open-source annotation toolkit and insights will both expedite future data collection efforts and spur innovation on the kinds of grounded language tasks such environments can support.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "PanGEA, the Panoramic Graph Environment Annotation toolkit, is a lightweight toolkit for collecting speech and text annotations in photo-realistic 3D environments. PanGEA immerses annotators in a web-based simulation and allows them to move around easily as they speak and/or listen. It includes database and cloud storage integration, plus utilities for automatically aligning recorded speech with manual transcriptions and the virtual pose of the annotators. Out of the box, PanGEA supports two tasks: collecting navigation instructions and navigation instruction following. It could be easily adapted for annotating walking tours, finding and labeling landmarks or objects, and similar tasks. We share best practices learned from using PanGEA in a 20,000-hour annotation effort to collect the Room-Across-Room dataset. We hope that our open-source annotation toolkit and insights will both expedite future data collection efforts and spur innovation on the kinds of grounded language tasks such environments can support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The release of high-quality 3D building and street captures (Chang et al., 2017; Mirowski et al., 2019; Mehta et al., 2020; Xia et al., 2018; Straub et al., 2019) has galvanized interest in developing embodied navigation agents that can operate in complex human environments. 
Based on these environments, annotations have been collected for a variety of tasks, including navigating to a particular class of object (ObjectNav) (Batra et al., 2020), navigating from language instructions, also known as vision-and-language navigation (VLN) (Anderson et al., 2018b; Qi et al., 2020; Ku et al., 2020), and vision-and-dialog navigation (Thomason et al., 2020; Hahn et al., 2020). To date, most of these data collection efforts have required the development of custom annotation tools. * First two authors contributed equally.", "cite_spans": [ { "start": 60, "end": 80, "text": "(Chang et al., 2017;", "ref_id": null }, { "start": 81, "end": 103, "text": "Mirowski et al., 2019;", "ref_id": "BIBREF8" }, { "start": 104, "end": 123, "text": "Mehta et al., 2020;", "ref_id": "BIBREF7" }, { "start": 124, "end": 141, "text": "Xia et al., 2018;", "ref_id": "BIBREF14" }, { "start": 142, "end": 162, "text": "Straub et al., 2019)", "ref_id": "BIBREF11" }, { "start": 425, "end": 445, "text": "(Batra et al., 2020)", "ref_id": "BIBREF2" }, { "start": 526, "end": 550, "text": "(Anderson et al., 2018b;", "ref_id": "BIBREF1" }, { "start": 551, "end": 567, "text": "Qi et al., 2020;", "ref_id": "BIBREF10" }, { "start": 568, "end": 584, "text": "Ku et al., 2020)", "ref_id": "BIBREF6" }, { "start": 620, "end": 643, "text": "(Thomason et al., 2020;", "ref_id": "BIBREF13" }, { "start": 644, "end": 662, "text": "Hahn et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To expedite future data collection efforts, in this paper we introduce PanGEA, an open-source annotation toolkit designed for these settings. 1 Specifically, PanGEA assumes an environment represented by discrete navigation graphs connecting high-resolution 360\u00b0 panoramas, where each node represents a unique viewpoint in the environment and actions involve moving between these viewpoints. Examples of suitable environments include the indoor buildings from Matterport3D (Chang et al., 2017) (using the navigation graphs from Anderson et al. (2018b)) and the street-level environments from StreetLearn (Mirowski et al., 2019).", "cite_spans": [ { "start": 472, "end": 491, "text": "(Chang et al., 2017", "ref_id": null }, { "start": 604, "end": 627, "text": "(Mirowski et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Out of the box, PanGEA supports two annotation modes: the Guide task and the Follower task. In the Guide task, Guides look around and move through an environment along a pre-defined path while attempting to create a navigation instruction for others to follow. In the Follower task, annotators listen to a Guide's instructions and attempt to follow the path. These annotation modes are based on the Vision-and-Language Navigation (VLN) setting proposed by Anderson et al. (2018b). However, compared to similar annotation tools, PanGEA includes substantial additional capabilities, notably:", "cite_spans": [ { "start": 454, "end": 477, "text": "Anderson et al. 
(2018b)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 annotation via voice recording (in addition to text entry) \u2022 virtual pose tracking to record what annotators look at \u2022 utilities for aligning a transcript of the words heard or uttered by each annotator with their visual perceptions and actions \u2022 integration with cloud database and storage platforms \u2022 a modular API facilitating easy extension to new tasks and new environments. PanGEA has already been used in two papers. It was used to collect Room-Across-Room (RxR) (Ku et al., 2020), a dataset of human-annotated navigation instructions in English, Hindi and Telugu, which is the largest VLN dataset by an order of magnitude. Figure 1: Screenshots of the PanGEA Guide and Follower interfaces. In the Guide task (left), Guides explore a given path while attempting to create a navigation instruction for others to follow. Guides can pause and restart the audio recording at any time. After recording is completed, Guides transcribe their own audio. In the Follower task (right), annotators listen to a Guide's instructions and attempt to follow the intended path. Followers can skip around the Guide's audio using the audio waveform at bottom right. In both tasks, PanGEA tracks the annotator's virtual camera pose and automatically aligns it with the Guide's audio transcript.", "cite_spans": [ { "start": 471, "end": 488, "text": "(Ku et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 573, "end": 581, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "PanGEA was also used to perform human evaluations of model-generated navigation instructions in Zhao et al. (2021). It could be trivially adapted to other tasks that combine annotation with movement, such as annotating walking tours, or finding and labeling particular landmarks or objects.", "cite_spans": [ { "start": 155, "end": 173, "text": "Zhao et al. (2021)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We next describe PanGEA's capabilities in more detail. In the final section we share some best practices learned from using PanGEA to collect RxR, which required more than 20,000 annotation hours.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Guide Task In the Guide task (Figure 1, left), Guides look around and move to explore an environment while recording an audio narration. For the RxR data collection, the Guide's movement was restricted to a particular path through the environment, and annotators were instructed to record navigation instructions that would be sufficiently descriptive for others to follow the same path. However, this restriction can be relaxed to allow free movement and narration for other purposes. Once the Guide is satisfied with their recording, they are asked to manually transcribe their own voice recording into text. This ensures high-quality transcription results.", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 38, "text": "(Figure 1", "ref_id": null } ], "eq_spans": [], "section": "PanGEA Toolkit", "sec_num": "2" }, { "text": "During the Guide task, in parallel to the annotator's voice recording, PanGEA captures a timestamped record of the annotator's virtual camera movements, which we call a pose trace. 
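To make the pose trace concrete: PanGEA's actual schema is defined in the toolkit itself, but purely as an illustration, a pose-trace record can be thought of as a timestamped camera pose keyed to a panorama. The field names below (time_ms, pano_id, heading_deg, pitch_deg) are assumptions for this sketch, not PanGEA's real format.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class PoseTraceEntry:
    """One timestamped virtual-camera sample (illustrative fields, not PanGEA's real schema)."""
    time_ms: int         # milliseconds since the annotation started
    pano_id: str         # navigation-graph node (panorama) currently occupied
    heading_deg: float   # camera yaw within the panorama
    pitch_deg: float     # camera pitch within the panorama


def path_from_pose_trace(trace: List[PoseTraceEntry]) -> List[str]:
    """Collapse a pose trace into the sequence of panoramas the annotator visited."""
    path: List[str] = []
    for entry in trace:
        if not path or path[-1] != entry.pano_id:
            path.append(entry.pano_id)
    return path
```

Read this way, a pose trace is a dense log of where the annotator was and where they were looking at every moment, which is what makes the speech alignment described next possible.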
By default, PanGEA is configured to use Firebase 2, saving the Guide's audio recording to a cloud storage bucket, and the transcript, pose trace and other metadata to a cloud database for post-processing. Inspired by Localized Narratives (Pont-Tuset et al., 2020), PanGEA includes a utility to automatically align each Guide's pose trace with the manual transcript of their audio recording. This is achieved by using a Speech-to-Text service 3 to first generate a noisy-but-timestamped automatic transcription. PanGEA then uses dynamic time warping to align tokens in the automatic transcript to the manual transcript before propagating timestamps from the automatic to the manual transcription (Figure 2). The result is fine-grained synchronization between the transcribed text, the pixels seen, and the actions taken by the Guide.", "cite_spans": [], "ref_spans": [ { "start": 879, "end": 887, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "PanGEA Toolkit", "sec_num": "2" }, { "text": "Follower Task In the Follower task (Figure 1, right), Followers begin at a specified starting point in an environment and are asked to follow a Guide's instructions. They observe the environment and navigate as the Guide's audio plays. Followers can skip forward or backward in the audio recording by clicking on an audio waveform representation of the Guide's recording. This allows them to skip over periods of silence or to listen to part of the audio again. Once the Follower believes they have reached the end of the path, or they give up, they indicate they are done and the task ends. Note that although the Follower task supports audio instructions, it can be easily adapted to replace the audio instruction with a textual instruction. This was the approach taken by Zhao et al. (2021). Figure 2: PanGEA time-aligns each annotator's manual audio transcription (middle) to a pose trace recording their virtual camera movements (bottom). This is achieved by first generating a noisy-but-timestamped automatic transcription (top), which is aligned with the manual transcription using dynamic time warping in order to propagate timestamps to the manual transcription. Figure adapted from Pont-Tuset et al. (2020).", "cite_spans": [ { "start": 1199, "end": 1217, "text": "Zhao et al. (2021)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 35, "end": 44, "text": "(Figure 1", "ref_id": null }, { "start": 153, "end": 161, "text": "Figure 2", "ref_id": null }, { "start": 530, "end": 544, "text": "Figure adapted", "ref_id": null } ], "eq_spans": [], "section": "PanGEA Toolkit", "sec_num": "2" }, { "text": "As with the Guide task, the Follower's pose trace is recorded and saved to a cloud database, along with the timestamp of the Guide's audio that the Follower listened to at each moment. This allows the Follower's visual percepts and actions to be accurately aligned with text tokens in the Guide's instructions. Similarity between the annotated (Guide) path and the Follower path is also a natural measure of the joint quality of both the Guide and the Follower annotations. 
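To illustrate the timestamp-propagation step described above for the Guide task, the following is a minimal sketch, not PanGEA's actual implementation: it aligns the manual transcript to the noisy-but-timestamped automatic transcript with dynamic time warping and copies the word timestamps across.

```python
from typing import List, Tuple


def dtw_align(manual: List[str], auto: List[str]) -> List[Tuple[int, int]]:
    """Dynamic time warping over token sequences.

    Returns (manual_index, auto_index) pairs on the optimal warping path,
    using a simple 0/1 token-mismatch cost. A minimal sketch, not PanGEA's
    actual implementation.
    """
    n, m = len(manual), len(auto)
    INF = float("inf")
    # dp[i][j] = cost of aligning manual[:i] with auto[:j].
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if manual[i - 1].lower() == auto[j - 1].lower() else 1.0
            dp[i][j] = cost + min(dp[i - 1][j - 1], dp[i - 1][j], dp[i][j - 1])
    # Backtrack from (n, m) to (0, 0) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min(
            [(i - 1, j - 1), (i - 1, j), (i, j - 1)],
            key=lambda ij: dp[ij[0]][ij[1]],
        )
    return list(reversed(path))


def propagate_timestamps(
    manual: List[str], auto: List[str], auto_times: List[float]
) -> List[float]:
    """Give each manual token the timestamp of its aligned automatic token."""
    times = [0.0] * len(manual)
    for mi, ai in dtw_align(manual, auto):
        times[mi] = auto_times[ai]
    return times
```

Here auto_times stands for the per-word start times returned by the speech-to-text service (an assumption of this sketch); combining the propagated timestamps with the pose trace is what ties each token of the manual transcript to what the annotator was looking at when they said it.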
In the experiments for RxR, the path extracted from the Follower's pose trace was also used as additional supervision when training Follower agents, since it represents a step-by-step account of how a human solved the task and the visual inputs they focused on in order to do so (Ku et al., 2020).", "cite_spans": [ { "start": 753, "end": 770, "text": "(Ku et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "PanGEA Toolkit", "sec_num": "2" }, { "text": "Deployment PanGEA comes with several demos that use a minimal example environment. Deploying PanGEA for a new large-scale collection effort requires three main steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PanGEA Toolkit", "sec_num": "2" }, { "text": "\u2022 Creating a new app in Firebase to initialize the cloud storage and cloud database, \u2022 Setting up an appropriate crowdsourcing platform to serve the PanGEA front-end to a pool of annotators, and \u2022 Setting up the environment to be used, e.g., hosting the images and navigation graphs in a storage bucket in an appropriate format. Further details are provided in the PanGEA readme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PanGEA Toolkit", "sec_num": "2" }, { "text": "PanGEA was developed for the collection of the RxR dataset, a 20,000+ hour annotation effort based on Matterport3D indoor scenes. Many of the lessons learned during this collection effort are codified in the PanGEA toolkit. For example, we found that uploading recorded audio at the end of the Guide task was time-consuming, so in the final version of PanGEA the WAV file is uploaded in the background while the annotator is busy transcribing their audio. We also found that audio annotations could include long periods of silence, so we provided Follower annotators with an audio waveform visualization and an interface to skip over silence. Some other observations and best practices for reducing annotation times and improving annotation quality are shared in this section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Observations and Best Practices", "sec_num": "3" }, { "text": "PanGEA is designed to capture the alignment between annotators' visual percepts, actions and utterances to provide fine-grained spatio-temporal grounding. In initial trials with PanGEA, we found that some annotators, with the best of intentions, completely undermined this paradigm. We had envisioned them speaking while moving and looking at the environment; however, in an effort to generate more fluent instructions, some annotators first explored the environment while drafting a navigation instruction separately in a text editor. Then, having finalized the textual instruction, the annotator read it all at the end of the audio recording. While this strategy indeed produced high-quality navigation instructions, the instructions were no longer time-aligned to the pose trace. Interestingly, the language used in the instructions also differed: instructions drafted as text tended to use more connective phrases, for example, \"turn right and then you will see a dining table\" instead of \"turn right... now you see a dining table\". We found it challenging to add guardrails in PanGEA that could prevent this behaviour without unduly restricting the freedom of the annotators and the flexibility of the toolkit. 
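PanGEA does not attempt to detect this behaviour automatically. Purely as an illustration of the kind of lightweight post-hoc check a collection effort could run on its own data, the sketch below flags annotations in which most of the speech falls after the last movement in the pose trace, a symptom of an instruction drafted as text and read aloud at the end. The function name and threshold are hypothetical and not part of the toolkit.

```python
from typing import List


def looks_read_at_end(
    word_times_ms: List[float],
    move_times_ms: List[float],
    overlap_threshold: float = 0.5,
) -> bool:
    """Flag annotations whose speech mostly happens after movement has ended.

    word_times_ms: timestamps of the transcript tokens (e.g., from the
        alignment utility sketched earlier).
    move_times_ms: timestamps of pose-trace entries where the viewpoint changed.
    Illustrative heuristic only; not part of PanGEA.
    """
    if not word_times_ms or not move_times_ms:
        return False
    last_move = max(move_times_ms)
    spoken_while_moving = sum(1 for t in word_times_ms if t <= last_move)
    return spoken_while_moving / len(word_times_ms) < overlap_threshold
```

Annotations flagged this way could be reviewed manually rather than rejected outright.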
Ultimately, we addressed this issue successfully through explicit annotator training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotators Complete Tasks in Creative Ways", "sec_num": null }, { "text": "Annotator Training To overcome the aforementioned issue and to improve annotation quality in general, for RxR we conducted an interactive virtual training session with annotators, providing examples of ideal annotations and various failure modes. Annotators were also able to ask questions about how to best complete the tasks assigned to them. Although interactive training sessions are not always possible, at minimum we recommend providing annotators with a training video that shows a walk-through of the task and notes common pitfalls to avoid. We provide links to the demo videos for the RxR Guide task 4 and Follower task 5 (initially called the Tourist task).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotators Complete Tasks in Creative Ways", "sec_num": null }, { "text": "Pilot Collections and Learning Periods Annotating and following navigation instructions in a virtual world is a complex task. We recommend running several small-scale pilot data collections to identify issues with the collection process, including having the team creating the dataset perform the tasks using the tool. We also recommend allowing for a learning period whenever a new annotator is introduced to the task, i.e., planning to discard the first 5-10 annotations produced by a new annotator. We found that rotating annotators between the Guide and Follower tasks early in their experience improved annotation quality, because doing so gives them much greater awareness of the needs of Followers when completing Guide tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotators Complete Tasks in Creative Ways", "sec_num": null }, { "text": "Data Monitoring Dashboard We recommend using VLN evaluation metrics such as success rate, navigation error and SPL (Anderson et al., 2018a) (or similar metrics for alternative tasks) to continually monitor the quality of the collected Guide navigation instructions and Follower paths. By storing the collected annotations in Firebase, it is relatively easy to construct web-based interfaces to monitor these metrics. In the case of RxR, we created a monitoring dashboard that displayed success rates for each annotator pool and each individual annotator, with the capability to replay the pose traces from individual Guide and Follower annotations. Annotators could see an anonymized view of their progress relative to others, which helped them assess whether they were performing the task correctly or needed to adjust, perhaps with explicit guidance.", "cite_spans": [ { "start": 115, "end": 139, "text": "(Anderson et al., 2018a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Annotators Complete Tasks in Creative Ways", "sec_num": null }, { "text": "In tasks that require a person to perform actions while producing or comprehending language, speech is much easier than writing because it leaves the hands and eyes free for the actions themselves. This has very real consequences for future data collection efforts. Speech interactions will be essential for any task that involves time pressure, such as collaborative games where players use language to coordinate. 
There is also a simple but significant cost advantage: on average, transcribing an RxR Guide annotation took three to four times longer than collecting the spoken instruction itself, so either far more instructions could have been collected or the cost could have been significantly reduced. Speech also encodes intonation and is more likely to elicit interesting dialectal differences. For these and other reasons, we may want to encourage more research on language grounding tasks that work with speech directly, and provide current best automatic speech recognition output for those who insist on working with text only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speech versus Writing", "sec_num": null }, { "text": "There are many potential future applications of PanGEA and of tools built on the design decisions discussed above. We are particularly excited about multi-agent problems that collect pose traces from multiple participants as they coordinate via language, such as hide-and-seek games or tasks where items must be moved from one location to another to satisfy goals or solve puzzles, similar to CerealBar (Suhr et al., 2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Applications", "sec_num": "4" }, { "text": "github.com/google-research/pangea", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://firebase.google.com 3 https://cloud.google.com/speech-to-text", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://youtu.be/aJkJfB8oI2M 5 https://youtu.be/vcP-oX1t0CU", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "On evaluation of embodied navigation agents", "authors": [ { "first": "Peter", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Devendra", "middle": [], "last": "Singh Chaplot", "suffix": "" }, { "first": "Alexey", "middle": [], "last": "Dosovitskiy", "suffix": "" }, { "first": "Saurabh", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Vladlen", "middle": [], "last": "Koltun", "suffix": "" }, { "first": "Jana", "middle": [], "last": "Kosecka", "suffix": "" }, { "first": "Jitendra", "middle": [], "last": "Malik", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1807.06757" ] }, "num": null, "urls": [], "raw_text": "Peter Anderson, Angel Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, and Amir R. Zamir. 2018a. On evaluation of embodied navigation agents. 
arXiv preprint arXiv:1807.06757.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Visionand-language navigation: Interpreting visuallygrounded navigation instructions in real environments", "authors": [ { "first": "Peter", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Damien", "middle": [], "last": "Teney", "suffix": "" }, { "first": "Jake", "middle": [], "last": "Bruce", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Niko", "middle": [], "last": "S\u00fcnderhauf", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Reid", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Gould", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "", "middle": [], "last": "Hengel", "suffix": "" } ], "year": 2018, "venue": "CVPR", "volume": "", "issue": "", "pages": "3674--3683", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko S\u00fcnderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018b. Vision- and-language navigation: Interpreting visually- grounded navigation instructions in real environ- ments. In CVPR, pages 3674-3683.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Objectnav revisited: On evaluation of embodied agents navigating to objects", "authors": [ { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Gokaslan", "suffix": "" }, { "first": "Aniruddha", "middle": [], "last": "Kembhavi", "suffix": "" }, { "first": "Oleksandr", "middle": [], "last": "Maksymets", "suffix": "" }, { "first": "Roozbeh", "middle": [], "last": "Mottaghi", "suffix": "" }, { "first": "Manolis", "middle": [], "last": "Savva", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Toshev", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Wijmans", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2006.13171" ] }, "num": null, "urls": [], "raw_text": "Dhruv Batra, Aaron Gokaslan, Aniruddha Kembhavi, Oleksandr Maksymets, Roozbeh Mottaghi, Mano- lis Savva, Alexander Toshev, and Erik Wijmans. 2020. Objectnav revisited: On evaluation of em- bodied agents navigating to objects. arXiv preprint arXiv:2006.13171.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. 2017. Mat-terport3d: Learning from RGB-D data in indoor environments. International Conference on 3D Vision (3DV)", "authors": [ { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Funkhouser", "suffix": "" }, { "first": "Maciej", "middle": [], "last": "Halber", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Niessner", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angel Chang, Angela Dai, Thomas Funkhouser, Ma- ciej Halber, Matthias Niessner, Manolis Savva, Shu- ran Song, Andy Zeng, and Yinda Zhang. 2017. Mat- terport3d: Learning from RGB-D data in indoor en- vironments. 
International Conference on 3D Vision (3DV).", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Touchdown: Natural language navigation and spatial reasoning in visual street environments", "authors": [ { "first": "Howard", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Alane", "middle": [], "last": "Suhr", "suffix": "" }, { "first": "Dipendra", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Snavely", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2019, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. 2019. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In CVPR.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Room-across-room: Multilingual vision-and-language navigation with dense spatiotemporal grounding", "authors": [ { "first": "Alexander", "middle": [], "last": "Ku", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Roma", "middle": [], "last": "Patel", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Ie", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexander Ku, Peter Anderson, Roma Patel, Eugene Ie, and Jason Baldridge. 2020. Room-across-room: Multilingual vision-and-language navigation with dense spatiotemporal grounding. EMNLP.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Retouchdown: Adding touchdown to streetlearn as a shareable resource for language grounding tasks in street view", "authors": [ { "first": "Harsh", "middle": [], "last": "Mehta", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Ie", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Mirowski", "suffix": "" } ], "year": 2020, "venue": "EMNLP Workshop on Spatial Language Understanding", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harsh Mehta, Yoav Artzi, Jason Baldridge, Eugene Ie, and Piotr Mirowski. 2020. Retouchdown: Adding touchdown to streetlearn as a shareable re- source for language grounding tasks in street view. 
EMNLP Workshop on Spatial Language Under- standing (SpLU).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The streetlearn environment and dataset", "authors": [ { "first": "Piotr", "middle": [], "last": "Mirowski", "suffix": "" }, { "first": "Andras", "middle": [], "last": "Banki-Horvath", "suffix": "" }, { "first": "Keith", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Teplyashin", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Mateusz", "middle": [], "last": "Malinowski", "suffix": "" }, { "first": "Matthew", "middle": [ "Koichi" ], "last": "Grimes", "suffix": "" }, { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Kavukcuoglu", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.01292" ] }, "num": null, "urls": [], "raw_text": "Piotr Mirowski, Andras Banki-Horvath, Keith Ander- son, Denis Teplyashin, Karl Moritz Hermann, Ma- teusz Malinowski, Matthew Koichi Grimes, Karen Simonyan, Koray Kavukcuoglu, Andrew Zisserman, et al. 2019. The streetlearn environment and dataset. arXiv preprint arXiv:1903.01292.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Connecting vision and language with localized narratives", "authors": [ { "first": "Jordi", "middle": [], "last": "Pont-Tuset", "suffix": "" }, { "first": "Jasper", "middle": [], "last": "Uijlings", "suffix": "" }, { "first": "Soravit", "middle": [], "last": "Changpinyo", "suffix": "" }, { "first": "Radu", "middle": [], "last": "Soricut", "suffix": "" }, { "first": "Vittorio", "middle": [], "last": "Ferrari", "suffix": "" } ], "year": 2020, "venue": "ECCV", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jordi Pont-Tuset, Jasper Uijlings, Soravit Changpinyo, Radu Soricut, and Vittorio Ferrari. 2020. Connect- ing vision and language with localized narratives. In ECCV.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Reverie: Remote embodied visual referring expression in real indoor environments", "authors": [ { "first": "Yuankai", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Xin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" }, { "first": "Chunhua", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "", "middle": [], "last": "Hengel", "suffix": "" } ], "year": 2020, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuankai Qi, Qi Wu, Peter Anderson, Xin Wang, William Yang Wang, Chunhua Shen, and Anton van den Hengel. 2020. Reverie: Remote embod- ied visual referring expression in real indoor envi- ronments. 
In CVPR.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The Replica dataset: A digital replica of indoor spaces", "authors": [ { "first": "Julian", "middle": [], "last": "Straub", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Whelan", "suffix": "" }, { "first": "Lingni", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Yufan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Erik", "middle": [], "last": "Wijmans", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Green", "suffix": "" }, { "first": "Jakob", "middle": [ "J" ], "last": "Engel", "suffix": "" }, { "first": "Raul", "middle": [], "last": "Mur-Artal", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Shobhit", "middle": [], "last": "Verma", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Clarkson", "suffix": "" }, { "first": "Mingfei", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Budge", "suffix": "" }, { "first": "Yajie", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Xiaqing", "middle": [], "last": "Pan", "suffix": "" }, { "first": "June", "middle": [], "last": "Yon", "suffix": "" }, { "first": "Yuyang", "middle": [], "last": "Zou", "suffix": "" }, { "first": "Kimberly", "middle": [], "last": "Leon", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Carter", "suffix": "" }, { "first": "Jesus", "middle": [], "last": "Briales", "suffix": "" }, { "first": "Tyler", "middle": [], "last": "Gillingham", "suffix": "" }, { "first": "Elias", "middle": [], "last": "Mueggler", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Pesqueira", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1906.05797" ] }, "num": null, "urls": [], "raw_text": "Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J. Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, Anton Clarkson, Mingfei Yan, Brian Budge, Yajie Yan, Xi- aqing Pan, June Yon, Yuyang Zou, Kimberly Leon, Nigel Carter, Jesus Briales, Tyler Gillingham, Elias Mueggler, Luis Pesqueira, Manolis Savva, Dhruv Batra, Hauke M. Strasdat, Renzo De Nardi, Michael Goesele, Steven Lovegrove, and Richard New- combe. 2019. The Replica dataset: A digital replica of indoor spaces. arXiv preprint arXiv:1906.05797.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Executing instructions in situated collaborative interactions", "authors": [ { "first": "Alane", "middle": [], "last": "Suhr", "suffix": "" }, { "first": "Claudia", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Schluger", "suffix": "" }, { "first": "Stanley", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Hadi", "middle": [], "last": "Khader", "suffix": "" }, { "first": "Marwa", "middle": [], "last": "Mouallem", "suffix": "" }, { "first": "Iris", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2119--2130", "other_ids": { "DOI": [ "10.18653/v1/D19-1218" ] }, "num": null, "urls": [], "raw_text": "Alane Suhr, Claudia Yan, Jack Schluger, Stanley Yu, Hadi Khader, Marwa Mouallem, Iris Zhang, and Yoav Artzi. 2019. 
Executing instructions in situ- ated collaborative interactions. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2119-2130, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Vision-and-dialog navigation", "authors": [ { "first": "Jesse", "middle": [], "last": "Thomason", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Murray", "suffix": "" }, { "first": "Maya", "middle": [], "last": "Cakmak", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Conference on Robot Learning (CoRL)", "volume": "", "issue": "", "pages": "394--406", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. 2020. Vision-and-dialog navi- gation. In Conference on Robot Learning (CoRL), pages 394-406. PMLR.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Gibson env: real-world perception for embodied agents", "authors": [ { "first": "Fei", "middle": [], "last": "Xia", "suffix": "" }, { "first": "R", "middle": [], "last": "Amir", "suffix": "" }, { "first": "Zhi-Yang", "middle": [], "last": "Zamir", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "He", "suffix": "" }, { "first": "Jitendra", "middle": [], "last": "Sax", "suffix": "" }, { "first": "Silvio", "middle": [], "last": "Malik", "suffix": "" }, { "first": "", "middle": [], "last": "Savarese", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fei Xia, Amir R. Zamir, Zhi-Yang He, Alexander Sax, Jitendra Malik, and Silvio Savarese. 2018. Gibson env: real-world perception for embodied agents. In CVPR. IEEE.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "On the evaluation of vision-and-language navigation instructions", "authors": [ { "first": "Ming", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Vihan", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Su", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Ku", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Ie", "suffix": "" } ], "year": 2021, "venue": "Conference of the European Chapter of the Association for Computational Linguistics (EACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ming Zhao, Peter Anderson, Vihan Jain, Su Wang, Alex Ku, Jason Baldridge, and Eugene Ie. 2021. On the evaluation of vision-and-language navigation in- structions. In Conference of the European Chap- ter of the Association for Computational Linguistics (EACL).", "links": null } }, "ref_entries": {} } }