{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:43:53.104750Z" }, "title": "CLEVR Parser: A Graph Parser Library for Geometric Learning on Language Grounded Image Scenes", "authors": [ { "first": "Raeid", "middle": [], "last": "Saqur", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Toronto", "location": {} }, "email": "raeidsaqur@cs.toronto.edu" }, { "first": "Ameet", "middle": [], "last": "Deshpande", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The CLEVR dataset has been used extensively in language grounded visual reasoning in Machine Learning (ML) and Natural Language Processing (NLP) domains. We present a graph parser library for CLEVR, that provides functionalities for object-centric attributes and relationships extraction, and construction of structural graph representations for dual modalities. Structural order-invariant representations enable geometric learning and can aid in downstream tasks like language grounding to vision, robotics, compositionality, interpretability, and computational grammar construction. We provide three extensible main components-parser, embedder, and visualizer that can be tailored to suit specific learning setups. We also provide out-of-thebox functionality for seamless integration with popular deep graph neural network (GNN) libraries. Additionally, we discuss downstream usage and applications of the library, and how it accelerates research for the NLP research community 1 .", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The CLEVR dataset has been used extensively in language grounded visual reasoning in Machine Learning (ML) and Natural Language Processing (NLP) domains. We present a graph parser library for CLEVR, that provides functionalities for object-centric attributes and relationships extraction, and construction of structural graph representations for dual modalities. Structural order-invariant representations enable geometric learning and can aid in downstream tasks like language grounding to vision, robotics, compositionality, interpretability, and computational grammar construction. We provide three extensible main components-parser, embedder, and visualizer that can be tailored to suit specific learning setups. We also provide out-of-thebox functionality for seamless integration with popular deep graph neural network (GNN) libraries. Additionally, we discuss downstream usage and applications of the library, and how it accelerates research for the NLP research community 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The CLEVR dataset (Johnson et al., 2017a ) is a modern 3D incarnation of historically significant shapes-based datasets like SHRDLU (Winograd, 1970) , used for demonstrating AI efficacy on language understanding (Ontanon, 2018; Winograd, 1980; Hudson and Manning, 2018) . Although originally aimed at the visual question answering (VQA) problem (Santoro et al., 2017; Hu et al., 2018) , its versatility has seen its use in diverse ML domains, including extensions to physics simulation engines for language augmented hierarchical reinforcement learning (Jiang et al., 2019) and causal reasoning (Yi et al., 2019) . 
{ "text": "CLEVR Environment The dataset consists of images with rendered 3D objects of various shapes, colors, materials, and sizes, along with corresponding image scene graphs containing visual semantic information. Templated question generation on the images allows the creation of complex questions that test various aspects of scene understanding. The original dataset contains \u22481M questions generated from \u2248100k images using 90 question template families that can be broadly categorized into five question types: count, exist, numerical comparison, attribute comparison, and query. The dataset also comes with a defined domain-specific language (DSL) function library F, containing primitive functions that can be composed together to answer questions on CLEVR images (Johnson et al., 2017b) . We refer the reader to (Johnson et al., 2017a) and Appendix A for further details of this dataset.", "cite_spans": [ { "start": 764, "end": 787, "text": "(Johnson et al., 2017b)", "ref_id": "BIBREF13" }, { "start": 837, "end": 860, "text": "(Johnson et al., 2017a)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" },
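{ "text": "To illustrate how the DSL primitives compose, the following is a toy re-implementation of a few of them (ours, for illustration only, not the official function library F, though the function names mirror those in Johnson et al. (2017b)):\n\n# Toy re-implementation of a few CLEVR DSL primitives (illustrative only).\nscene = [\n    {\"color\": \"red\", \"material\": \"metal\", \"shape\": \"cube\", \"size\": \"large\"},\n    {\"color\": \"red\", \"material\": \"rubber\", \"shape\": \"sphere\", \"size\": \"small\"},\n    {\"color\": \"cyan\", \"material\": \"metal\", \"shape\": \"cylinder\", \"size\": \"large\"},\n]\n\ndef filter_color(objects, color):\n    return [o for o in objects if o[\"color\"] == color]\n\ndef filter_material(objects, material):\n    return [o for o in objects if o[\"material\"] == material]\n\ndef count(objects):\n    return len(objects)\n\n# 'How many red metal things are there?' as a composed program:\nprint(count(filter_material(filter_color(scene, \"red\"), \"metal\")))  # -> 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" },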
{ "text": "Here we describe each of the main library components in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLEVR-PARSER", "sec_num": "3" }, { "text": "2 https://pytorch.org/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CLEVR-PARSER", "sec_num": "3" }, { "text": "Text The parser takes a language utterance that is valid in the CLEVR environment, which can be a question, caption, or command, and outputs a structural graph representation G_s, capturing object attributes, spatial relationships (spatial_re), and attribute-similarity based matching predicates (matching_re) in the textual input. This is implemented by adding a CLEVR object entity recognizer (NER) to the NLP parse pipeline, as depicted in Figure 3 . Note that the NER is permutation-equivariant with respect to the object attributes, i.e., a 'large red rubber ball' will be detected as an object from any of these spans: 'red large rubber ball', 'large ball', 'ball', etc. Images The parser takes image scene graphs as input and outputs a structural graph G_t. The synthesized image scenes accompanying the original dataset can be used as input. Alternatively, parsed image scenes generated using any modern semantic image segmentation method (e.g., Mask R-CNN (He et al., 2017) ) can also be used as input (Yi et al., 2018) . A visualized example of a parsed image is shown in Figure 4a. For ease of reproducibility, we also include a curated dataset '1obj' with parsed image scenes using Mask R-CNN semantic segmentation (Appendix A).", "cite_spans": [ { "start": 959, "end": 976, "text": "(He et al., 2017)", "ref_id": "BIBREF8" }, { "start": 1005, "end": 1022, "text": "(Yi et al., 2018)", "ref_id": "BIBREF35" } ], "ref_spans": [ { "start": 444, "end": 452, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Parser", "sec_num": "3.1" }, { "text": "While we provide a concrete implementation using the spaCy 3 NLP library, any other library, like the Stanford Parser 4 or NLTK 5 , could be used in its place. The output of the parser for a question and image is depicted in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 225, "end": 233, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Parser", "sec_num": "3.1" },
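{ "text": "To illustrate the text-side entity recognition, here is a minimal sketch of our own (using spaCy's rule-based EntityRuler with a hand-picked vocabulary, rather than the library's actual NER component) that detects CLEVR object mentions regardless of attribute ordering:\n\n# Minimal sketch of a CLEVR object recognizer with spaCy's EntityRuler;\n# the library's own NER component may be implemented differently.\nimport spacy\n\nATTRS = [\"large\", \"small\", \"big\", \"tiny\", \"red\", \"blue\", \"green\", \"yellow\",\n         \"gray\", \"cyan\", \"brown\", \"purple\", \"metal\", \"metallic\", \"shiny\",\n         \"rubber\", \"matte\"]\nSHAPES = [\"cube\", \"block\", \"sphere\", \"ball\", \"cylinder\", \"object\", \"thing\"]\n\nnlp = spacy.blank(\"en\")\nruler = nlp.add_pipe(\"entity_ruler\")\n# Any number of attribute tokens, in any order, followed by a shape noun;\n# this makes detection invariant to attribute permutations.\nruler.add_patterns([{\"label\": \"CLEVR_OBJ\",\n                     \"pattern\": [{\"LOWER\": {\"IN\": ATTRS}, \"OP\": \"*\"},\n                                 {\"LOWER\": {\"IN\": SHAPES}}]}])\n\ndoc = nlp(\"Is the rubber large red ball left of the metal cube?\")\nprint([(ent.text, ent.label_) for ent in doc.ents])\n# [('rubber large red ball', 'CLEVR_OBJ'), ('metal cube', 'CLEVR_OBJ')]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Parser", "sec_num": "3.1" },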
{ "text": "The embedder provides a 'word-embedding' (Mikolov et al., 2017) based representation of input text utterances and image scenes using a pre-trained language model (LM). The end-user can instantiate the embedder with a preferred LM, which could be a simple one-hot representation of the CLEVR environment vocabulary, or large transformer-based state-of-the-art LMs like BERT, GPT-2, XLNet (Peters et al., 2018; Devlin et al., 2018; Radford et al., 2019; Yang et al., 2019) . The embedder takes the graphs G_s and G_t generated by the parser (see Section 3.1), where G_s and G_t are each defined as a generic graph G = (V, E, A), with V the set of nodes {1, 2, ...}, E the set of edges, and A the adjacency matrix, and returns X and E, the feature matrices of the nodes and edges respectively:", "cite_spans": [ { "start": 39, "end": 61, "text": "(Mikolov et al., 2017)", "ref_id": "BIBREF22" }, { "start": 362, "end": 396, "text": "GPT-2, XLNet (Peters et al., 2018;", "ref_id": null }, { "start": 397, "end": 417, "text": "Devlin et al., 2018;", "ref_id": "BIBREF1" }, { "start": 418, "end": 439, "text": "Radford et al., 2019;", "ref_id": "BIBREF26" }, { "start": 440, "end": 458, "text": "Yang et al., 2019)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Embedder", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X_s, A_s, E_s \u2190 EMBED(S), \\quad X_t, A_t, E_t \u2190 EMBED(T)", "eq_num": "(1)" } ], "section": "Embedder", "sec_num": "3.2" }, { "text": "The output signature of the embedder is a tuple (X, A, E), which matches the fundamental data structure of popular geometric learning libraries like PyTorch Geometric (Fey and Lenssen, 2019) , thus allowing seamless integration. We show a concrete implementation of this use case using PyTorch Geometric (Fey and Lenssen, 2019) and PyTorch in Section 3.3.2.", "cite_spans": [ { "start": 168, "end": 191, "text": "(Fey and Lenssen, 2019)", "ref_id": "BIBREF2" }, { "start": 306, "end": 329, "text": "(Fey and Lenssen, 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Embedder", "sec_num": "3.2" },
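{ "text": "To make the integration concrete, here is a minimal sketch of our own (with an illustrative one-hot embedder over a toy vocabulary, not the library's actual embedder) that packs an (X, A, E) tuple into a PyTorch Geometric Data object:\n\n# Minimal sketch: packing an (X, A, E) tuple into a PyTorch Geometric\n# Data object; the one-hot 'embedder' over a toy vocabulary is illustrative.\nimport torch\nfrom torch_geometric.data import Data\n\nVOCAB = [\"ball\", \"cube\", \"red\", \"large\", \"rubber\", \"metal\", \"left\", \"right\"]\n\ndef one_hot(token):\n    vec = torch.zeros(len(VOCAB))\n    vec[VOCAB.index(token)] = 1.0\n    return vec\n\n# A two-node graph: a 'ball' object to the left of a 'cube' object.\nX = torch.stack([one_hot(\"ball\"), one_hot(\"cube\")])      # node features\nedge_index = torch.tensor([[0], [1]], dtype=torch.long)  # A as edge list: 0 -> 1\nE = torch.stack([one_hot(\"left\")])                       # edge features\n\ndata = Data(x=X, edge_index=edge_index, edge_attr=E)\nprint(data)  # Data(x=[2, 8], edge_index=[2, 1], edge_attr=[1, 8])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Embedder", "sec_num": "3.2" },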
{ "text": "We provide multiple visualization tools for analyzing images, text, and latent embeddings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visualizer", "sec_num": "3.3" }, { "text": "This visualizer sub-component enables visualization of the parser's multimodal structural graph outputs, G_s and G_t (see Section 3.1), using Graphviz and matplotlib.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visualizing Structural Graphs", "sec_num": "3.3.1" }, { "text": "Visualizing Images Image graphs (G_t) can have a large number of objects and attributes. For ease of viewing, attributes like size, shape (e.g., cylinder), color (e.g., yellow), and material (e.g., metallic) are displayed as nodes of the graph (Figure 4a ). We explain elements of Figure 4a to describe the legend in greater detail. The double circles represent the objects, and the adjacent nodes are their attributes. The shape is depicted using the actual shape (e.g., the cyan cylinder, obj2), and the other attributes are depicted as diamonds. The size of one of the diamonds depicts whether the object is small or large; e.g., the large cyan diamond attached to obj2 means that it is large. The color of all the attribute nodes depicts the color of the object (e.g., the cyan color of obj2). The presence of a gradient in the remaining diamond depicts the material of the object. For example, the gradient in the diamond attached to obj4 means that it is metallic, and the solid fill for obj2 means that it is rubber. While this legend is somewhat lengthy, we found that it makes visualization easier; the user can choose to revert to the simpler setting of using text to depict the attributes.", "cite_spans": [], "ref_spans": [ { "start": 242, "end": 252, "text": "(Figure 4a", "ref_id": "FIGREF4" }, { "start": 279, "end": 288, "text": "Figure 4a", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Visualizing Structural Graphs", "sec_num": "3.3.1" }, { "text": "Visualizing Text Text corresponding to an image describes a partially observable subset of the objects, their relationships, and attributes. The dependency graph of the text is visualized just like the images, with only the observable information being depicted ( Figure 4b ).", "cite_spans": [], "ref_spans": [ { "start": 253, "end": 263, "text": "Figure 4b", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Visualizing Structural Graphs", "sec_num": "3.3.1" }, { "text": "Composing image and text We also provide an option to view an image and the text in the same graph. By connecting corresponding object nodes from the image and text, we create a bipartite graph that allows us to visualize all the information that an image-text pair contains (Figure 4c ). Additional examples from the visualizer are presented in Appendix A.4.", "cite_spans": [], "ref_spans": [ { "start": 275, "end": 285, "text": "(Figure 4c", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Visualizing Structural Graphs", "sec_num": "3.3.1" }, { "text": "We also provide a visualizer to analyze the embeddings produced using the methods in Section 3.2. We use t-SNE (Maaten and Hinton, 2008) , a method for visualizing high-dimensional data in two or three dimensions. We also offer clustering support to allow grouping of similar embeddings. Both image (Frome et al., 2013) and word embeddings from learned models have the useful property of capturing semantic information, and our visualizers capture this semantic similarity in the form of clusters. Figure 5 plots the embeddings of questions drawn from two different distributions, train and test, which represent semantically different sequences; they separate into distinct clusters. Similarly, Figure 6 analyzes embeddings drawn from 7 different templates: questions that correspond to the same template form tight clusters while being far away from other questions.", "cite_spans": [ { "start": 110, "end": 135, "text": "(Maaten and Hinton, 2008)", "ref_id": "BIBREF20" }, { "start": 312, "end": 332, "text": "(Frome et al., 2013)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 521, "end": 529, "text": "Figure 5", "ref_id": "FIGREF3" }, { "start": 696, "end": 704, "text": "Figure 6", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Visualizer -Embeddings", "sec_num": "3.3.2" },
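{ "text": "A minimal sketch of this kind of embedding visualization of our own (using scikit-learn's t-SNE on synthetic stand-in embeddings, not the library's actual output):\n\n# Minimal sketch of embedding visualization with t-SNE; the two synthetic\n# Gaussian blobs stand in for question embeddings from two distributions.\nimport numpy as np\nfrom sklearn.manifold import TSNE\nimport matplotlib.pyplot as plt\n\nrng = np.random.default_rng(0)\ntrain = rng.normal(loc=0.0, scale=1.0, size=(100, 64))  # 'train' questions\ntest = rng.normal(loc=4.0, scale=1.0, size=(100, 64))   # 'test' questions\nembeddings = np.vstack([train, test])\n\ncoords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)\n\nplt.scatter(coords[:100, 0], coords[:100, 1], label=\"train\")\nplt.scatter(coords[100:, 0], coords[100:, 1], label=\"test\")\nplt.legend()\nplt.show()", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visualizer -Embeddings", "sec_num": "3.3.2" },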
{ "text": "Some lines of work attempt to generate scene graphs for images. The Visual Genome library (Krishna et al., 2017) , in a real-world image setting, is a collection of annotated images (from Flickr, COCO) and corresponding knowledge graph associations. The work of (Schuster et al., 2015) and the corresponding library, which is a part of the Stanford NLP library 6 , allows scene graph generation from text (an image caption) as input.", "cite_spans": [ { "start": 806, "end": 828, "text": "(Krishna et al., 2017)", "ref_id": "BIBREF18" }, { "start": 978, "end": 1001, "text": "(Schuster et al., 2015)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Applications", "sec_num": "4" }, { "text": "Our work is orthogonal to these in that our target dataset is synthetic, which allows full control over the generation of images, questions, and ground truth semantic program chains. Thus, combined with our library's functionalities, it allows end-to-end (e2e) control over experimenting on every modular aspect of research hypotheses (see Section 4.1). Further, our work is premised on providing multimodal representations, including a ground-truth paired graph (the joint graph G_u \u2190 (G_s, G_t)), which has interesting downstream research applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work and Applications", "sec_num": "4" }, { "text": "Applications of language grounding in ML/NLP research are quite broad. To avoid sounding overly grandiose, we exemplify possible applications by citing work that pertains to the CLEVR dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Usages and Applications", "sec_num": "4.1" }, { "text": "Recent work (Bahdanau et al., 2019) has shown a lack of distributional robustness and compositional generalization (Fodor et al., 1988) in NLP models. Permutation equivariance within local linguistic component groups has been shown to help with language compositionality (Gordon et al., 2020) . Graph-based representations are intrinsically order-invariant and thus may help with language compositionality research. Language-augmented reward mechanisms are an active topic in concurrent (human-in-the-loop) reinforcement learning (Knox and Stone, 2012; Griffith et al., 2013) , robotics (Knox et al., 2013; Kuhlmann et al., 2004) , and long-horizon, hierarchical POMDP problems in general (Kaplan et al., 2017) , like command completion in physics simulators (Jiang et al., 2019) . Other applications could be in program synthesis and interpretability (Mascharka et al., 2018) , causal reasoning (Yao, 2010) , and general visually grounded language understanding (Yu et al., 2016) .", "cite_spans": [ { "start": 15, "end": 38, "text": "(Bahdanau et al., 2019)", "ref_id": "BIBREF0" }, { "start": 116, "end": 136, "text": "(Fodor et al., 1988)", "ref_id": "BIBREF3" }, { "start": 265, "end": 286, "text": "(Gordon et al., 2020)", "ref_id": "BIBREF5" }, { "start": 521, "end": 543, "text": "(Knox and Stone, 2012;", "ref_id": "BIBREF16" }, { "start": 544, "end": 566, "text": "Griffith et al., 2013)", "ref_id": "BIBREF6" }, { "start": 578, "end": 597, "text": "(Knox et al., 2013;", "ref_id": "BIBREF17" }, { "start": 598, "end": 620, "text": "Kuhlmann et al., 2004)", "ref_id": "BIBREF19" }, { "start": 675, "end": 696, "text": "(Kaplan et al., 2017)", "ref_id": "BIBREF14" }, { "start": 744, "end": 764, "text": "(Jiang et al., 2019)", "ref_id": "BIBREF11" }, { "start": 837, "end": 861, "text": "(Mascharka et al., 2018)", "ref_id": "BIBREF21" }, { "start": 881, "end": 892, "text": "(Yao, 2010)", "ref_id": "BIBREF33" }, { "start": 948, "end": 965, "text": "(Yu et al., 2016)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Usages and Applications", "sec_num": "4.1" }, { "text": "In general, we expect and hope that any existing line or domain of work in NLP using the CLEVR dataset (hundreds, based on citations) will benefit from having graph-based representation learning aided by our proposed library.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Usages and Applications", "sec_num": "4.1" },
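{ "text": "As a concluding end-to-end illustration, here is a minimal sketch of our own (using networkx; the node naming and the cross-modal 'grounding' edge are illustrative, not the library's API) that builds a toy image graph G_t and text graph G_s and composes them into a joint graph G_u:\n\n# Minimal sketch of the joint graph G_u <- (G_s, G_t) using networkx.\nimport networkx as nx\n\n# Toy image-side graph G_t: two objects with a spatial relation.\nG_t = nx.MultiDiGraph()\nG_t.add_node(\"t_obj1\", color=\"red\", shape=\"ball\", material=\"rubber\", size=\"large\")\nG_t.add_node(\"t_obj2\", color=\"gray\", shape=\"cube\", material=\"metal\", size=\"small\")\nG_t.add_edge(\"t_obj1\", \"t_obj2\", relation=\"left\")\n\n# Toy text-side graph G_s: one object mentioned in a question.\nG_s = nx.MultiDiGraph()\nG_s.add_node(\"s_obj1\", color=\"red\", shape=\"ball\")\n\n# Joint graph G_u: disjoint union plus a cross-modal grounding edge\n# linking the text mention to the image object it refers to.\nG_u = nx.union(G_s, G_t)\nG_u.add_edge(\"s_obj1\", \"t_obj1\", relation=\"grounding\")\n\nprint(G_u.number_of_nodes(), G_u.number_of_edges())  # 3 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Usages and Applications", "sec_num": "4.1" },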
"section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Closure: Assessing systematic generalization of clevr models", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "", "middle": [], "last": "Harm De Vries", "suffix": "" }, { "first": "J", "middle": [], "last": "Timothy", "suffix": "" }, { "first": "Shikhar", "middle": [], "last": "O'donnell", "suffix": "" }, { "first": "Philippe", "middle": [], "last": "Murty", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Beaudoin", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "", "middle": [], "last": "Courville", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.05783" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Harm de Vries, Timothy J O'Donnell, Shikhar Murty, Philippe Beaudoin, Yoshua Bengio, and Aaron Courville. 2019. Clo- sure: Assessing systematic generalization of clevr models. arXiv preprint arXiv:1912.05783.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Fast graph representation learning with pytorch geometric", "authors": [ { "first": "Matthias", "middle": [], "last": "Fey", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1903.02428" ] }, "num": null, "urls": [], "raw_text": "Matthias Fey and Jan Eric Lenssen. 2019. Fast graph representation learning with pytorch geomet- ric. arXiv preprint arXiv:1903.02428.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Connectionism and cognitive architecture: A critical analysis", "authors": [ { "first": "", "middle": [], "last": "Jerry A Fodor", "suffix": "" }, { "first": "", "middle": [], "last": "Zenon W Pylyshyn", "suffix": "" } ], "year": 1988, "venue": "Cognition", "volume": "28", "issue": "1-2", "pages": "3--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jerry A Fodor, Zenon W Pylyshyn, et al. 1988. Connec- tionism and cognitive architecture: A critical analy- sis. 
Cognition, 28(1-2):3-71.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Devise: A deep visual-semantic embedding model", "authors": [ { "first": "Andrea", "middle": [], "last": "Frome", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jon", "middle": [], "last": "Shlens", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2121--2129", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrea Frome, Greg S Corrado, Jon Shlens, Samy Ben- gio, Jeff Dean, Marc'Aurelio Ranzato, and Tomas Mikolov. 2013. Devise: A deep visual-semantic em- bedding model. In Advances in neural information processing systems, pages 2121-2129.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Permutation equivariant models for compositional generalization in language", "authors": [ { "first": "Jonathan", "middle": [], "last": "Gordon", "suffix": "" }, { "first": "David", "middle": [], "last": "Lopez-Paz", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Diane", "middle": [], "last": "Bouchacourt", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Gordon, David Lopez-Paz, Marco Baroni, and Diane Bouchacourt. 2020. Permutation equiv- ariant models for compositional generalization in language. In International Conference on Learning Representations.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Policy shaping: Integrating human feedback with reinforcement learning", "authors": [ { "first": "Shane", "middle": [], "last": "Griffith", "suffix": "" }, { "first": "Kaushik", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Scholz", "suffix": "" }, { "first": "L", "middle": [], "last": "Charles", "suffix": "" }, { "first": "Andrea", "middle": [ "L" ], "last": "Isbell", "suffix": "" }, { "first": "", "middle": [], "last": "Thomaz", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "2625--2633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shane Griffith, Kaushik Subramanian, Jonathan Scholz, Charles L Isbell, and Andrea L Thomaz. 2013. Pol- icy shaping: Integrating human feedback with rein- forcement learning. In Advances in neural informa- tion processing systems, pages 2625-2633.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Representation learning on graphs: Methods and applications", "authors": [ { "first": "Rex", "middle": [], "last": "William L Hamilton", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Ying", "suffix": "" }, { "first": "", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1709.05584" ] }, "num": null, "urls": [], "raw_text": "William L Hamilton, Rex Ying, and Jure Leskovec. 2017. Representation learning on graphs: Methods and applications. 
arXiv preprint arXiv:1709.05584.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Mask r-cnn", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Georgia", "middle": [], "last": "Gkioxari", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Doll\u00e1r", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Girshick", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE international conference on computer vision", "volume": "", "issue": "", "pages": "2961--2969", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Georgia Gkioxari, Piotr Doll\u00e1r, and Ross Girshick. 2017. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961-2969.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Relation Networks for Object Detection", "authors": [ { "first": "Han", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Jiayuan", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Zheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jifeng", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yichen", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/CVPR.2018.00378" ] }, "num": null, "urls": [], "raw_text": "Han Hu, Jiayuan Gu, Zheng Zhang, Jifeng Dai, and Yichen Wei. 2018. Relation Networks for Object Detection. Technical report.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Compositional attention networks for machine reasoning", "authors": [ { "first": "A", "middle": [], "last": "Drew", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Hudson", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Drew A. Hudson and Christopher D. Manning. 2018. Compositional attention networks for machine rea- soning. Technical report.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Language as an abstraction for hierarchical deep reinforcement learning", "authors": [ { "first": "Yiding", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Shixiang", "middle": [ "Shane" ], "last": "Gu", "suffix": "" }, { "first": "Kevin", "middle": [ "P" ], "last": "Murphy", "suffix": "" }, { "first": "Chelsea", "middle": [], "last": "Finn", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "9414--9426", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yiding Jiang, Shixiang Shane Gu, Kevin P Murphy, and Chelsea Finn. 2019. Language as an abstrac- tion for hierarchical deep reinforcement learning. 
In Advances in Neural Information Processing Systems, pages 9414-9426.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Clevr: A diagnostic dataset for compositional language and elementary visual reasoning", "authors": [ { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Bharath", "middle": [], "last": "Hariharan", "suffix": "" }, { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Girshick", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "2901--2910", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017a. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2901-2910.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Inferring and executing programs for visual reasoning", "authors": [ { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Bharath", "middle": [], "last": "Hariharan", "suffix": "" }, { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Judy", "middle": [], "last": "Hoffman", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Girshick", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "2989--2998", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Judy Hoffman, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017b. Inferring and executing programs for visual reasoning. In Proceedings of the IEEE International Conference on Computer Vision, pages 2989-2998.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Beating atari with natural language guided reinforcement learning", "authors": [ { "first": "Russell", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Sauer", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Sosa", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1704.05539" ] }, "num": null, "urls": [], "raw_text": "Russell Kaplan, Christopher Sauer, and Alexander Sosa. 2017. Beating atari with natural language guided reinforcement learning. arXiv preprint arXiv:1704.05539.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Semi-supervised classification with graph convolutional networks", "authors": [ { "first": "Thomas", "middle": [ "N" ], "last": "Kipf", "suffix": "" }, { "first": "Max", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1609.02907" ] }, "num": null, "urls": [], "raw_text": "Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. 
arXiv preprint arXiv:1609.02907.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Reinforcement learning from simultaneous human and mdp reward", "authors": [ { "first": "Bradley", "middle": [], "last": "Knox", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Stone", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems", "volume": "1", "issue": "", "pages": "475--482", "other_ids": {}, "num": null, "urls": [], "raw_text": "W Bradley Knox and Peter Stone. 2012. Reinforce- ment learning from simultaneous human and mdp reward. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems-Volume 1, pages 475-482. International Foundation for Autonomous Agents and Multiagent Systems.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Training a robot via human feedback: A case study", "authors": [ { "first": "W Bradley", "middle": [], "last": "Knox", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Stone", "suffix": "" }, { "first": "Cynthia", "middle": [], "last": "Breazeal", "suffix": "" } ], "year": 2013, "venue": "International Conference on Social Robotics", "volume": "", "issue": "", "pages": "460--470", "other_ids": {}, "num": null, "urls": [], "raw_text": "W Bradley Knox, Peter Stone, and Cynthia Breazeal. 2013. Training a robot via human feedback: A case study. In International Conference on Social Robotics, pages 460-470. Springer.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "authors": [ { "first": "Ranjay", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Yuke", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Groth", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Hata", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Kravitz", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yannis", "middle": [], "last": "Kalantidis", "suffix": "" }, { "first": "Li-Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Shamma", "suffix": "" } ], "year": 2017, "venue": "International Journal of Computer Vision", "volume": "123", "issue": "1", "pages": "32--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image anno- tations. 
International Journal of Computer Vision, 123(1):32-73.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Guiding a reinforcement learner with natural language advice: Initial results in robocup soccer", "authors": [ { "first": "Gregory", "middle": [], "last": "Kuhlmann", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Stone", "suffix": "" }, { "first": "Raymond", "middle": [], "last": "Mooney", "suffix": "" }, { "first": "Jude", "middle": [], "last": "Shavlik", "suffix": "" } ], "year": 2004, "venue": "The AAAI-2004 workshop on supervisory control of learning and adaptive systems", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Kuhlmann, Peter Stone, Raymond Mooney, and Jude Shavlik. 2004. Guiding a reinforcement learner with natural language advice: Initial results in robocup soccer. In The AAAI-2004 workshop on supervisory control of learning and adaptive systems. San Jose, CA.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Visualizing data using t-sne", "authors": [ { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" } ], "year": 2008, "venue": "Journal of machine learning research", "volume": "9", "issue": "", "pages": "2579--2605", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579-2605.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Transparency by design: Closing the gap between performance and interpretability in visual reasoning", "authors": [ { "first": "David", "middle": [], "last": "Mascharka", "suffix": "" }, { "first": "Philip", "middle": [], "last": "Tran", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Soklaski", "suffix": "" }, { "first": "Arjun", "middle": [], "last": "Majumdar", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "4942--4950", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Mascharka, Philip Tran, Ryan Soklaski, and Arjun Majumdar. 2018. Transparency by design: Closing the gap between performance and interpretability in visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4942-4950.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Advances in pre-training distributed word representations", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Puhrsch", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1712.09405" ] }, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2017. Advances in pre-training distributed word representations. 
arXiv preprint arXiv:1712.09405.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Shrdlu: A game prototype inspired by winograd's natural language understanding work", "authors": [ { "first": "Santiago", "middle": [], "last": "Ontanon", "suffix": "" } ], "year": 2018, "venue": "Fourteenth Artificial Intelligence and Interactive Digital Entertainment Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Santiago Ontanon. 2018. Shrdlu: A game prototype inspired by winograd's natural language understand- ing work. In Fourteenth Artificial Intelligence and Interactive Digital Entertainment Conference.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Deep contextualized word representations", "authors": [ { "first": "E", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1802.05365" ] }, "num": null, "urls": [], "raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. arXiv preprint arXiv:1802.05365.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "OpenAI Blog", "volume": "1", "issue": "8", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. 
OpenAI Blog, 1(8):9.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A simple neural network module for relational reasoning", "authors": [ { "first": "Adam", "middle": [], "last": "Santoro", "suffix": "" }, { "first": "David", "middle": [], "last": "Raposo", "suffix": "" }, { "first": "G", "middle": [ "T" ], "last": "David", "suffix": "" }, { "first": "Mateusz", "middle": [], "last": "Barrett", "suffix": "" }, { "first": "Razvan", "middle": [], "last": "Malinowski", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Pascanu", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Battaglia", "suffix": "" }, { "first": "", "middle": [], "last": "Lillicrap", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "4968--4977", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Santoro, David Raposo, David G.T. Barrett, Ma- teusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. 2017. A simple neural net- work module for relational reasoning. Advances in Neural Information Processing Systems, 2017- Decem(Nips):4968-4977.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Modeling relational data with graph convolutional networks", "authors": [ { "first": "Michael", "middle": [], "last": "Schlichtkrull", "suffix": "" }, { "first": "N", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Kipf", "suffix": "" }, { "first": "Rianne", "middle": [], "last": "Bloem", "suffix": "" }, { "first": "", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Berg", "suffix": "" }, { "first": "Max", "middle": [], "last": "Titov", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2018, "venue": "European Semantic Web Conference", "volume": "", "issue": "", "pages": "593--607", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolu- tional networks. In European Semantic Web Confer- ence, pages 593-607. Springer.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Generating semantically precise scene graphs from textual descriptions for improved image retrieval", "authors": [ { "first": "Sebastian", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Ranjay", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the fourth workshop on vision and language", "volume": "", "issue": "", "pages": "70--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Schuster, Ranjay Krishna, Angel Chang, Li Fei-Fei, and Christopher D Manning. 2015. Gen- erating semantically precise scene graphs from tex- tual descriptions for improved image retrieval. In Proceedings of the fourth workshop on vision and language, pages 70-80.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "What does it mean to understand language? 
Cognitive science", "authors": [ { "first": "Terry", "middle": [], "last": "Winograd", "suffix": "" } ], "year": 1980, "venue": "", "volume": "4", "issue": "", "pages": "209--241", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terry Winograd. 1980. What does it mean to under- stand language? Cognitive science, 4(3):209-241.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "R", "middle": [], "last": "Russ", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5754--5764", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural in- formation processing systems, pages 5754-5764.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Stage/individual-level predicates, topics and indefinite subjects", "authors": [ { "first": "Shuiying", "middle": [], "last": "Yao", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation", "volume": "", "issue": "", "pages": "573--582", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuiying Yao. 2010. Stage/individual-level predicates, topics and indefinite subjects. In Proceedings of the 24th Pacific Asia Conference on Language, Informa- tion and Computation, pages 573-582, Tohoku Uni- versity, Sendai, Japan. Institute of Digital Enhance- ment of Cognitive Processing, Waseda University.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Clevrer: Collision events for video representation and reasoning", "authors": [ { "first": "Kexin", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Chuang", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Yunzhu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Pushmeet", "middle": [], "last": "Kohli", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "Joshua", "middle": [ "B" ], "last": "Tenenbaum", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1910.01442" ] }, "num": null, "urls": [], "raw_text": "Kexin Yi, Chuang Gan, Yunzhu Li, Pushmeet Kohli, Jiajun Wu, Antonio Torralba, and Joshua B Tenen- baum. 2019. Clevrer: Collision events for video representation and reasoning. 
arXiv preprint arXiv:1910.01442.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Neural-symbolic VQA: Disentangling reasoning from vision and language understanding", "authors": [ { "first": "Kexin", "middle": [], "last": "Yi", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Pushmeet", "middle": [], "last": "Kohli", "suffix": "" }, { "first": "Chuang", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Joshua", "middle": [ "B" ], "last": "Tenenbaum", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1031--1042", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kexin Yi, Antonio Torralba, Jiajun Wu, Pushmeet Kohli, Chuang Gan, and Joshua B. Tenenbaum. 2018. Neural-symbolic VQA: Disentangling reason- ing from vision and language understanding. Ad- vances in Neural Information Processing Systems, 2018-Decem(NeurIPS):1031-1042.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Training an adaptive dialogue policy for interactive learning of visually grounded word meanings", "authors": [ { "first": "Yanchao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Arash", "middle": [], "last": "Eshghi", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Lemon", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue", "volume": "", "issue": "", "pages": "339--349", "other_ids": { "DOI": [ "10.18653/v1/W16-3643" ] }, "num": null, "urls": [], "raw_text": "Yanchao Yu, Arash Eshghi, and Oliver Lemon. 2016. Training an adaptive dialogue policy for interac- tive learning of visually grounded word meanings. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 339-349, Los Angeles. Association for Com- putational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "(a) Question on image (Figure 2): 'Is the color of the metal block that is right of the yellow rubber object the same as the large metal cylinder?' A question about a CLEVR image visualized as multimodal parsed graphs", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Figure 2: A CLEVR image", "type_str": "figure", "uris": null }, "FIGREF2": { "num": null, "text": "Entity visualization", "type_str": "figure", "uris": null }, "FIGREF3": { "num": null, "text": "Questions from two different distributions which form separate clusters Similarly, Figure 6 analyzes embeddings drawn from 7 different templates. Questions that corre-(a) Visualizing image graph -Gt (b) Visualizing text graph -Gs (c) Visualizing joint (image and text) graph -Gu for the above two figures", "type_str": "figure", "uris": null }, "FIGREF4": { "num": null, "text": "Visualizing G s , G t , G uspond to the same templates form tight clusters while being far away from other questions.", "type_str": "figure", "uris": null }, "FIGREF5": { "num": null, "text": "Questions from 7 different templates forming tight clusters 4 Related Work and Applications", "type_str": "figure", "uris": null } } } }