{ "paper_id": "C14-1012", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:21:22.625298Z" }, "title": "Query-by-Example Image Retrieval using Visual Dependency Representations", "authors": [ { "first": "Desmond", "middle": [], "last": "Elliott", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "settlement": "Communication" } }, "email": "d.elliott@ed.ac.uk" }, { "first": "Victor", "middle": [], "last": "Lavrenko", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "settlement": "Communication" } }, "email": "vlavrenk@inf.ed.ac.uk" }, { "first": "Frank", "middle": [], "last": "Keller", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Edinburgh", "location": { "settlement": "Communication" } }, "email": "keller@inf.ed.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Image retrieval models typically represent images as bags-of-terms, a representation that is wellsuited to matching images based on the presence or absence of terms. For some information needs, such as searching for images of people performing actions, it may be useful to retain data about how parts of an image relate to each other. If the underlying representation of an image can distinguish between images where objects only co-occur from images where people are interacting with objects, then it should be possible to improve retrieval performance. In this paper we model the spatial relationships between image regions using Visual Dependency Representations, a structured image representation that makes it possible to distinguish between object co-occurrence and interaction. In a query-by-example image retrieval experiment on data set of people performing actions, we find an 8.8% relative increase in MAP and an 8.6% relative increase in Precision@10 when images are represented using the Visual Dependency Representation compared to a bag-of-terms baseline.", "pdf_parse": { "paper_id": "C14-1012", "_pdf_hash": "", "abstract": [ { "text": "Image retrieval models typically represent images as bags-of-terms, a representation that is wellsuited to matching images based on the presence or absence of terms. For some information needs, such as searching for images of people performing actions, it may be useful to retain data about how parts of an image relate to each other. If the underlying representation of an image can distinguish between images where objects only co-occur from images where people are interacting with objects, then it should be possible to improve retrieval performance. In this paper we model the spatial relationships between image regions using Visual Dependency Representations, a structured image representation that makes it possible to distinguish between object co-occurrence and interaction. In a query-by-example image retrieval experiment on data set of people performing actions, we find an 8.8% relative increase in MAP and an 8.6% relative increase in Precision@10 when images are represented using the Visual Dependency Representation compared to a bag-of-terms baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Every day millions of people search for images on the web, both professionally and for personal amusement. 
The majority of image searches are aimed at finding a particular named entity, such as Justin Bieber or supernova, and a typical image retrieval system is well-suited to this type of information need because it represents an image as a bag-of-terms drawn from data surrounding the image, such as text, manual tags, and anchor text (Datta et al., 2008 ). It is not always possible to find useful terms in the surrounding data; the last decade has seen advances in automatic methods for assigning terms to images that have neither user-assigned tags, nor a textual description (Duygulu et al., 2002; Lavrenko et al., 2003; Guillaumin and Mensink, 2009) . These automatic methods learn to associate the presence and absence of labels with the visual characteristics of an image, such as colour and texture distributions, shape, and points of interest, and can automatically generate a bag of terms for an unlabelled image.", "cite_spans": [ { "start": 438, "end": 457, "text": "(Datta et al., 2008", "ref_id": "BIBREF2" }, { "start": 682, "end": 704, "text": "(Duygulu et al., 2002;", "ref_id": "BIBREF3" }, { "start": 705, "end": 727, "text": "Lavrenko et al., 2003;", "ref_id": "BIBREF10" }, { "start": 728, "end": 757, "text": "Guillaumin and Mensink, 2009)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It is important to remember that not all information needs are entity-based: people also search for images reflecting a mood, such as people having fun at a party, or an action, such as using a computer. The bag-of-terms representation is limited to matching images based on the presence or absence of terms, and not the relation of the terms to each other. Figures 1(a) and (b) highlight the problem with using unstructured representations for image retrieval: there is a person and a computer in both images but only (a) depicts a person actually using the computer. To address this problem with unstructured representations we propose to represent the structure of an image using the Visual Dependency Representation (Elliott and Keller, 2013) . The Visual Dependency Representation is a directed labelled graph over the regions of an image that captures the spatial relationships between regions. The representation is inspired by evidence from the psychology literature that people are better at recognising and searching for objects when the spatial relationships between the objects in the image are consistent with our expectations of the world. (Biederman, 1972 ; Bar and Ullman, 1996) . In an automatic image description task, Elliott Figure 1 : Three examples of images depicting a person and a computer, alongside a respective Visual Dependency Representation for each image. The bag-of-terms representation can be observed in the annotated regions of the Visual Dependency Representations. In (a) and (c) there is a person using a laptop, whereas in (b) the man is actually using the trumpet. 
The gold-standard action annotation is shown in the yellow bounding box.", "cite_spans": [ { "start": 720, "end": 746, "text": "(Elliott and Keller, 2013)", "ref_id": "BIBREF4" }, { "start": 1154, "end": 1170, "text": "(Biederman, 1972", "ref_id": null }, { "start": 1173, "end": 1194, "text": "Bar and Ullman, 1996)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 358, "end": 370, "text": "Figures 1(a)", "ref_id": null }, { "start": 1245, "end": 1253, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "and Keller (2013) showed that encoding the spatial relationships between objects in the Visual Dependency Representation helped to generate significantly better descriptions than approaches based on the spatial proximity of objects (Farhadi et al., 2010) or corpus-based models (Yang et al., 2011) . In this paper we study whether the Visual Dependency Representation of images can improve the performance of query-by-example image retrieval models. The main finding is that encoding images using the Visual Dependency Representation leads to significantly better retrieval accuracy compared to a bag-of-terms baseline, and that the improvements are most pronounced for transitive verbs.", "cite_spans": [ { "start": 232, "end": 254, "text": "(Farhadi et al., 2010)", "ref_id": "BIBREF6" }, { "start": 278, "end": 297, "text": "(Yang et al., 2011)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Related Work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A central problem in image retrieval is how to abstractly represent images (Datta et al., 2008) . A bagof-terms representation of an image is created by grouping visual features, such as color, shape (Shi and Malik, 2000) , texture, and interest points (Lowe, 1999) , in a vector or as a probability distribution over the features. Image retrieval can then be performed by trying to find the best matchings of terms across an image collection. Spatial Pyramid Matching is an approach to constructing low-level image representations that capture the relationships between features at differently sized partitions of the image (Lazebnik et al., 2006) . This approach has proven successful for scene categorisation tasks. An alternative approach to representing images is to learn a mapping (Duygulu et al., 2002; Lavrenko et al., 2003; Guillaumin and Mensink, 2009) between the bags-of-terms and object tags. 
An image can then be represented as a bag-of-terms and image retrieval is similar to text retrieval (Wu et al., 2012) .", "cite_spans": [ { "start": 75, "end": 95, "text": "(Datta et al., 2008)", "ref_id": "BIBREF2" }, { "start": 200, "end": 221, "text": "(Shi and Malik, 2000)", "ref_id": "BIBREF18" }, { "start": 253, "end": 265, "text": "(Lowe, 1999)", "ref_id": "BIBREF14" }, { "start": 625, "end": 648, "text": "(Lazebnik et al., 2006)", "ref_id": "BIBREF11" }, { "start": 788, "end": 810, "text": "(Duygulu et al., 2002;", "ref_id": "BIBREF3" }, { "start": 811, "end": 833, "text": "Lavrenko et al., 2003;", "ref_id": "BIBREF10" }, { "start": 834, "end": 863, "text": "Guillaumin and Mensink, 2009)", "ref_id": "BIBREF8" }, { "start": 1007, "end": 1024, "text": "(Wu et al., 2012)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Representing Images", "sec_num": "2.1" }, { "text": "In this work, we represent an image as a directed acyclic graph over a set of labeled object region annotations. This representation captures the important spatial relationships between the image regions and makes it possible to distinguish between co-occurring regions and interacting regions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Representing Images", "sec_num": "2.1" }, { "text": "One approach to recognizing actions is to learn appearance models for visual phrases and use these models to predict actions (Sadeghi and Farhadi, 2011) . A visual phrase is defined as the people and the objects they interact with in an action. In this approach, a fixed number of visual phrase models are trained using the deformable parts object detector (Felzenszwalb et al., 2010) and used to perform action recognition.", "cite_spans": [ { "start": 125, "end": 152, "text": "(Sadeghi and Farhadi, 2011)", "ref_id": "BIBREF17" }, { "start": 357, "end": 384, "text": "(Felzenszwalb et al., 2010)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Still-Image Action Recognition", "sec_num": "2.2" }, { "text": "An alternative approach is to model the relationships between objects in an image, and hence the visible actions, as a Conditional Random Field (CRF), where each node in the field is an object and the factors between nodes correspond to features that capture the relationships between the objects (Zitnick et al., 2013) . The factors between object nodes in the CRF include object occurrence, absolute position, person attributes, and the relative location of pairs of objects. This model has been used to generate novel images of people performing actions and to retrieve images of people performing actions.", "cite_spans": [ { "start": 297, "end": 319, "text": "(Zitnick et al., 2013)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Still-Image Action Recognition", "sec_num": "2.2" }, { "text": "Most recently, actions have been predicted in images by selecting the most likely verb and object pair given a set of candidate objects detected in an image (Le et al., 2013a). The verb and object is selected amongst those that maximize the distributional similarity of the pair in a large and diverse collection of documents. 
This approach is most similar to ours, but it relies on an external corpus and, depending on the text collections used to train the distributional model, will tend to capture the co-occurrence of objects rather than the actual relationships between the objects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Still-Image Action Recognition", "sec_num": "2.2" }, { "text": "The work presented in this paper uses ground-truth annotation for region labels, an assumption similar to (Zitnick et al., 2013), but requires no external data to make predictions of the relationships between objects, unlike the approach of (Le et al., 2013a). The directed acyclic graph representation we propose for images can be seen as a latent representation of the depicted action in the image, where the spatial relationships between the regions capture the different types of actions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Still-Image Action Recognition", "sec_num": "2.2" }, { "text": "In this paper we study the task of query-by-example image retrieval within the restricted domain of images depicting actions. More specifically, given an image that depicts a given action, such as using a computer, the aim of the retrieval model is to find all other images in the image collection that depict the same action. We define an action as an event involving one or more entities in an image, e.g., a woman running or a boy using a computer, and assume all images have been manually annotated for objects. This assumption means we can explore the utility of the Visual Dependency Representation without the noise introduced by automatic computer vision methods. The data available to the retrieval models can be seen in Figure 1, and Section 5 provides further details about the different sources of data. The action label, which is only used for evaluation, is shown in the labelled bounding box, and the Visual Dependency Representation, which is not used by the baseline model, is shown as a tree at the bottom of the figure.", "cite_spans": [], "ref_spans": [ { "start": 728, "end": 736, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Task and Baseline", "sec_num": "3" }, { "text": "The main hypothesis explored in this paper is that the accuracy of an image retrieval model will increase if the representation encodes information about the relationships between the objects in images. This hypothesis is tested by encoding images as either an unstructured bag-of-terms representation or as the structured Visual Dependency Representation. The Bag-of-Terms baseline represents the query image and the image collection as unstructured bag-of-terms vectors. All of the models used to test the main hypothesis use the cosine similarity function to determine the similarity of the query image to other images in the collection, and thus to generate a ranked list from the similarity values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task and Baseline", "sec_num": "3" }, { "text": "The Visual Dependency Representation (VDR) is a structured representation of an image that captures the spatial relationships between pairs of image regions in a directed labelled graph. The Visual Dependency Grammar defines eight possible spatial relationships between pairs of regions, as shown in Table 1.
The relationships in the grammar were designed to provide sufficient coverage of the types of spatial relationships required to describe the data, and are mathematically defined in terms of pixel overlap, distance between regions, and the angle between regions. The frame of reference for annotating spatial relationships is the image itself and not the object in the image, and angles and distance measurements are taken or estimated from the centroids of the regions. The VDR of an image is created by a trained human annotator in a two-stage process:", "cite_spans": [], "ref_spans": [ { "start": 300, "end": 307, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Visual Dependency Representation", "sec_num": "4" }, { "text": "1. The annotator draws and labels boundaries around the parts of the image they think contribute to defining the action depicted in the image, and the context within which the action occurs;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual Dependency Representation", "sec_num": "4" }, { "text": "2. The annotator draws labelled directed edges between the annotated regions that captures how the relationships between the image convey the action. In Section 4.1, we will explain how to automate the second stage of the process from a collection of labelled region annotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual Dependency Representation", "sec_num": "4" }, { "text": "In addition to the annotated image regions, a VDR also contains a ROOT node, which acts as a placeholder for the image. In the remainder of this section we describe how a gold-standard VDR is created by a human annotator. The starting point for the VDR in Figure 1 (a) is the following set of regions and the ROOT node:", "cite_spans": [], "ref_spans": [ { "start": 256, "end": 264, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Visual Dependency Representation", "sec_num": "4" }, { "text": "ROOT Lamp Picture Girl Laptop Bed", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual Dependency Representation", "sec_num": "4" }, { "text": "First, the regions are attached to each other based on how the relationship between the objects contributes to the depicted action. In Figure 1 (a), the Girl is using the Laptop, therefore a labelled directed edge is created from the Girl region to the Laptop region. The spatial relationship is labelled as BESIDE.", "cite_spans": [], "ref_spans": [ { "start": 135, "end": 143, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Visual Dependency Representation", "sec_num": "4" }, { "text": "The Girl is also attached to the Bed because the bed supports her body. The spatial relation label is ABOVE because it expresses the spatial relationship between the regions, not the semantic relationship ON. ROOT is attached to the Girl without an edge label to symbolize that she is an actor in the image. because it is sitting on the table) , and then to the ROOT node to signify that they do not play a part in the depicted action. In this example, neither the Lamp nor the Picture are related to the action of using the computer, so they are attached to the ROOT node. This now forms a completed VDR for the image in Figure 1 (a). This structured representation of an image captures the prominent relationship between the girl, the laptop, and the bed. 
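To make the representation concrete, the following is a minimal sketch (illustrative Python, not the authors' implementation) of this completed VDR stored as a labelled edge list:

```python
# A minimal sketch (not the authors' code) of the completed VDR for Figure 1(a),
# stored as a list of (head, dependent, spatial relation) edges.
vdr_fig1a = [
    ("ROOT", "Girl", None),        # the girl is an actor, so she attaches to ROOT
    ("ROOT", "Lamp", None),        # background object, unrelated to the action
    ("ROOT", "Picture", None),     # background object, unrelated to the action
    ("Girl", "Laptop", "beside"),  # the interaction that depicts "using a computer"
    ("Girl", "Bed", "above"),      # the bed supports the girl
]

def region_labels(vdr):
    """Bag of region labels covered by a VDR (the ROOT placeholder excluded)."""
    nodes = {node for head, dep, _ in vdr for node in (head, dep)}
    return sorted(nodes - {"ROOT"})

print(region_labels(vdr_fig1a))  # ['Bed', 'Girl', 'Lamp', 'Laptop', 'Picture']
```

The bag-of-terms view of the image is exactly the set of region labels; the edges are the additional structure that distinguishes interaction from co-occurrence.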
There is no prominent relationship defined between the girl and either the lamp or the picture; in effect, these regions have been relegated to background objects. The central hypothesis underpinning the Visual Dependency Representation is that images that contain similar VDR substructures are more likely to depict the same action than images that only contain the same set of objects. For example, the VDR for Figure 1(a) correctly captures the relationship between the people and the laptops, whereas this relationship is not present in Figure 1(b), where the person is playing a trumpet.", "cite_spans": [], "ref_spans": [ { "start": 308, "end": 343, "text": "because it is sitting on the table)", "ref_id": null }, { "start": 622, "end": 630, "text": "Figure 1", "ref_id": null }, { "start": 1298, "end": 1309, "text": "Figure 1(b)", "ref_id": null } ], "eq_spans": [], "section": "ROOT Lamp Picture Girl Laptop Bed beside", "sec_num": null }, { "text": "We follow the approach of Elliott and Keller (2013) and predict the VDR y of an image over a collection of labelled region annotations x. This task is framed as a supervised learning problem, where the aim is to construct a Maximum Spanning Tree from a fully-connected directed weighted graph over the labelled regions (McDonald et al., 2005). Reducing the fully-connected graph to the Maximum Spanning Tree removes the region-region edges that are not important in defining the prominent relationships between the regions in an image. The score of the VDR y over the image regions is calculated as the sum of the scores of the directed labelled edges:", "cite_spans": [ { "start": 26, "end": 51, "text": "Elliott and Keller (2013)", "ref_id": "BIBREF4" }, { "start": 319, "end": 342, "text": "(McDonald et al., 2005)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Predicting Visual Dependency Representations", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "score(x, y) = \\sum_{(a,b) \\in y} w \\cdot f(a, b)", "eq_num": "(1)" } ], "section": "Predicting Visual Dependency Representations", "sec_num": "4.1" }, { "text": "where the score of an edge between image regions a and b is calculated using a vector of weighted feature functions f. The feature functions characterize the image regions and the edge between pairs of regions, and include: the labels of the regions and the spatial relation annotated on the edge; the (normalized) distance between the centroids of the regions; the angle formed between the annotated regions, which is mapped onto the set of spatial relations; the relative size of the region compared to the image; and the distance of the region centroid from the center of the image. The model is trained over instances of region-annotated images x_i associated with human-created VDR structures y_i, I_train = {x_i, y_i}. The score of each edge (a, b) is calculated by applying the feature functions to the data associated with that edge, and this is performed over each edge in a VDR to obtain a score for a complete gold-standard structure.
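As a rough illustration of this edge-factored scoring in Equation (1), the sketch below (illustrative Python; the feature function and its normalisation constants are invented stand-ins, not the feature set described above) sums weighted edge scores over a candidate structure:

```python
import numpy as np

# Illustrative sketch of the edge-factored score in Equation (1):
# score(x, y) = sum over edges (a, b) in y of w . f(a, b).
# The feature function below is a stand-in; the real feature set uses region
# labels, spatial relation labels, centroid distances, angles, and region sizes.

def edge_features(a, b):
    """Hypothetical feature vector for a head region a and dependent region b."""
    return np.array([
        1.0,                              # bias feature
        float(a["label"] == "ROOT"),      # is this an attachment to the ROOT node?
        abs(a["cx"] - b["cx"]) / 500.0,   # normalised horizontal centroid distance
        abs(a["cy"] - b["cy"]) / 500.0,   # normalised vertical centroid distance
    ])

def score_structure(edges, w):
    """Sum of weighted edge scores over a candidate VDR, given weight vector w."""
    return sum(float(w @ edge_features(a, b)) for a, b in edges)
```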
The parameters of the weight vector w are iteratively adjusted to maximise the score of the gold-standard structures in the training data using the Margin Infused Relaxed Algorithm (MIRA) (Crammer and Singer, 2002).", "cite_spans": [ { "start": 1134, "end": 1160, "text": "(Crammer and Singer, 2002)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Predicting Visual Dependency Representations", "sec_num": "4.1" }, { "text": "The test data contains instances of region-annotated images with image regions x_i, I_test = {x_i}. The parsing model computes the highest-scoring structure \u0177_i for each instance in the test data by scoring each possible directed edge between pairs of regions in x_i. This process forms a fully-connected graph over the image regions, from which the Maximum Spanning Tree is taken and returned as the predicted VDR.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting Visual Dependency Representations", "sec_num": "4.1" }, { "text": "We evaluate the performance of this VDR prediction model by comparing how well it can recover the manually created trees in the data set. This evaluation is performed on the development data in a 10-fold cross-validation setting where each fold of the data is split 80%/10%/10%. Unlabelled directed accuracy means the model correctly proposes an edge between a pair of regions in the correct direction; labelled directed accuracy means it additionally proposes the correct edge label. The baseline approach is to assume no latent image structure and attach all image regions to the ROOT node of the VDR; this achieves 51.6% labelled and unlabelled directed attachment accuracy. The accuracy of our automatic approach to VDR prediction is 61.3% labelled and 68.8% unlabelled attachment accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Predicting Visual Dependency Representations", "sec_num": "4.1" }, { "text": "It remains to define how to compare the Visual Dependency Representations of a pair of images. The most obvious approach is to use the labelled directed accuracy measurement used for the VDR prediction evaluation in the previous section, but we did not find significant improvements in retrieval accuracy using this method. We hypothesise that the lack of weight given to the edges between nodes in the Visual Dependency Representation results in this comparison function not distinguishing between object-object relationships that matter, such as PERSON \u2212\u2212beside\u2192 BIKE, and those that do not, such as ROOT \u2212\u2192 TREES. The former is a potential person-object relationship that explains the depicted event, whereas the latter only involves a background object.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing Visual Dependency Representations", "sec_num": "4.2" }, { "text": "The approach we adopted in this paper is to compare the Visual Dependency Representations of images by decomposing the structure into a set of labelled and unlabelled parent-child subtrees in a depth-first traversal of the VDR. The decomposition process allows us to use the same similarity function as the Bag-of-Terms baseline model, removing the confound of choosing different similarity functions. The subtrees can be transformed into tokens and these tokens can be used as weighted terms in a vector representation.
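A minimal sketch of this decomposition and of the cosine comparison it feeds into is given below (illustrative Python with invented token formats and helper names; the experiments use tf-idf weights rather than the raw counts shown here):

```python
import math
from collections import Counter

def decompose(vdr):
    """Decompose a VDR, given as (head, dependent, relation) edges, into
    unlabelled and labelled parent-child subtree tokens."""
    tokens = []
    for head, dep, rel in vdr:
        tokens.append(f"{head}>{dep}")             # unlabelled parent-child subtree
        if rel is not None:
            tokens.append(f"{head}-{rel}->{dep}")  # labelled parent-child subtree
    return tokens

def cosine(a, b):
    """Cosine similarity between two term-weight vectors stored as Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def vdr_vector(region_labels, vdr):
    """Concatenate the bag of region labels with the decomposed subtree tokens."""
    return Counter(region_labels + decompose(vdr))
```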
An example of a labelled transformation is shown below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparing Visual Dependency Representations", "sec_num": "4.2" }, { "text": "We now demonstrate the outcome of comparing images represented using either a vector that concatenates the decomposed transformed VDR and bag-of-terms, or a vector that contains only the bag-ofterms. In this demonstration, each term has a tf-idf weight of 1. The first illustration (Similar) compares images that depict the same underlying action: Figure 1 (a) and (c). The second illustration (Dissimilar) compares images that depict different actions: Figure 1 It can be seen that when the images represent the same action, the decomposed VDR increases the similarity of the pair of images compared to the bag-of-terms representation; and when images do not represent the same action, the decomposed VDR yields a lower similarity than the bag-of-terms representation. These illustrations confirm that Visual Dependency Representations can be used to distinguish the difference between presence or absence of objects, and the prominent relationships between objects.", "cite_spans": [], "ref_spans": [ { "start": 348, "end": 360, "text": "Figure 1 (a)", "ref_id": null }, { "start": 454, "end": 462, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Girl Bed \u2192 Girl above Bed above", "sec_num": null }, { "text": "We use an existing dataset of VDR-annotated images to study whether modelling the structure of an image can improve image retrieval in the domain of action depictions. The data set of Elliott and Keller (2013) contains 341 images annotated with region annotations, three visual dependency representations per image (making a total of 1,023 instances), and a ground-truth action label for each image. An example of the annotations can be seen in Figure 1 . The image collection is drawn from the PASCAL Visual Object Classification Challenge 2011 action recognition taster and covers a set of 10 actions (Everingham et al., 2011) : riding a bike, riding a horse, reading, running, jumping, walking, playing an instrument, using a computer, taking a photo, and talking on the phone.", "cite_spans": [ { "start": 184, "end": 209, "text": "Elliott and Keller (2013)", "ref_id": "BIBREF4" }, { "start": 603, "end": 628, "text": "(Everingham et al., 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 445, "end": 453, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "5" }, { "text": "Each image is associated with three human-written descriptions collected from untrained annotators on Amazon Mechanical Turk. The descriptions do not form any part of the models presented in the current paper; they were used in the automatic image description task of Elliott and Keller (2013) . Each description contains two sentences: the first sentence describes the action depicted in the image, and the second sentence describes other objects not involved in the action. A two sentence description of an image helps distinguish objects that are central to depicting the action from objects that may be distractors.", "cite_spans": [ { "start": 268, "end": 293, "text": "Elliott and Keller (2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Image Descriptions", "sec_num": null }, { "text": "The images contain human-drawn labelled region annotations. 
The annotations were drawn using the LabelMe toolkit, which allows for arbitrary labelled polygons to be created over an image (Russell et al., 2008) . The annotated regions were restricted to those present in at least one of three humanwritten descriptions. To reduce the effects of label sparsity, frequently occurring equivalent labels were conflated, i.e., man, child, and boy \u2192 person; bike, bicycle, motorbike \u2192 bike; this reduced the object label vocabulary from 496 labels to 362 labels. The data set contains a total of 5,034 region annotations, with a mean of 4.19 \u00b1 1.94 annotations per image.", "cite_spans": [ { "start": 187, "end": 209, "text": "(Russell et al., 2008)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Region Annotations", "sec_num": null }, { "text": "Recall that each image is associated with three descriptions, and that people were free to decide how to describe the action and background of the image. The differences between how people describe images leads to the creation of one Visual Dependency Representation per image-description pair in the data set, resulting in a total of 1,023 instances. The process for creating a visual dependency representation of an image is described in Section 4. The annotated dataset comprises a total of 5,748 spatial relations, corresponding to a mean of 4.79 \u00b1 3.51 relations per image. Elliott and Keller (2013) report interannotator agreement on a subset of the data at 84% agreement for labelled directed attachments and 95.1% for unlabelled directed attachments.", "cite_spans": [ { "start": 579, "end": 604, "text": "Elliott and Keller (2013)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Visual Dependency Representations", "sec_num": null }, { "text": "The original PASCAL action recognition dataset contains ground truth action class annotations for each image. These annotations are in the form of labelled bounding boxes around the person performing the action in the image. The action labels are only used as the gold-standard relevance judgements for the query-by-example image retrieval experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action Labels", "sec_num": null }, { "text": "In this section we present the results of a query-by-example image retrieval experiment to determine the utility of the Visual Dependency Representation compared to a bag-of-terms representation. In this experiment, a single image (the query image) is used to rank the images in the test collection, where the goal is to construct a ranking where the top images depict the same action as the query image.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "The image retrieval experiment is performed using 10-fold cross-validation in the following manner. The 341 images in the dataset are randomly partitioned into 80%/10%/10% splits, resulting in 1011 test queries 1 . For each query we compute average precision and Precision@10 of the ranked list, and use the resulting values to test the statistical significance of the results. The training set is used to train the VDR prediction model and to estimate inverse document frequency statistics. During the training phase, the VDR-based models have access to region boundaries, region labels and three manually-created VDRs for each training image. In the test set, all models have access to the region boundaries and labels for each image. 
Each image in the test set forms a query and the models produce a ranked list of the remaining images in the test collection. Images are marked for relevance as follows: a image at rank r is considered relevant if it has the same action label as the query image; otherwise it is non-relevant. The dev set was used to experiment with different matching functions and to optimise the feature functions used in the VDR prediction model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Protocol", "sec_num": "6.1" }, { "text": "We compare the retrieval accuracy of three approaches: Bag-of-Terms uses an unstructured representation for each image. A tf-idf weight is assigned to each region label in an image, and the cosine measure is used to calculate the similarity of images. This model allows us to compare the usefulness of a structured vs. unstructured image representation. Automatic VDR is a model using the VDR prediction method from Section 4.1, and Manual VDR uses the gold-standard data described in Section 5. Both Table 2 : Overall Mean Average Precision and Precision@10 images. The VDR-based models are significantly better than the Bag-of-Terms model, supporting the hypothesis that modelling the structure of an image using the Visual Dependency Representation is useful for image retrieval. : significantly different than Bag-of-Terms at p < 0.01; \u2020: significantly different than Automatic VDR at p < 0.01.", "cite_spans": [], "ref_spans": [ { "start": 501, "end": 508, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": "6.2" }, { "text": "of the VDR-based models have a tf-idf weight assigned to the transformed decomposed terms and the cosine similarity measure is used to calculate the similarity of images.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "6.2" }, { "text": "Figure 2(a) shows the interpolated precision/recall curve and Table 2 shows the Mean Average Precision (MAP) and Precision at 10 retrieved images (P@10). The MAP of the Automatic VDR model increases by 8.8% relative to the Bag-of-Terms model, and a relative improvement up to 10.1% would possible if we had a better structure prediction model, as evidenced by Manual VDR. Furthermore, if we assume a user will only view the top results returned by the retrieval model, then P@10 increases by 8.6% when we model the structure of an image, relative to using an unstructured representation; a relative improvement of up to 9.4% would be possible if we had a better image parser.", "cite_spans": [], "ref_spans": [ { "start": 62, "end": 69, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "6.3" }, { "text": "To determine whether the differences are statistically significant, we perform the Wilcoxon Signed Ranks Test on the average precision and P@10 values over the 1011 queries in our cross-validation data set. The results support the main hypothesis of this paper: structured image representations allow us to find images depicting actions more accurately than the standard bag-of-terms representation. We find significant differences in average precision and P@10 between the Bag-of-Terms baseline and both Automatic VDR (p < 0.01) and Manual VDR (p < 0.01). This suggests that structure is very useful in the query-by-example scenario. 
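As a rough sketch of how the per-query scores feed into this evaluation (illustrative Python; ap_bot and ap_vdr are hypothetical per-query average-precision lists, and scipy's wilcoxon implements the signed-ranks test):

```python
from scipy.stats import wilcoxon

def average_precision(ranked_relevance):
    """Average precision for one query, given ranked 0/1 relevance marks."""
    hits, total, n_relevant = 0, 0.0, sum(ranked_relevance)
    for rank, relevant in enumerate(ranked_relevance, start=1):
        if relevant:
            hits += 1
            total += hits / rank
    return total / n_relevant if n_relevant else 0.0

def precision_at_10(ranked_relevance):
    """Fraction of relevant images among the ten highest-ranked images."""
    return sum(ranked_relevance[:10]) / 10.0

# ap_bot and ap_vdr would hold one average-precision value per query for the
# Bag-of-Terms and Automatic VDR models; wilcoxon performs the paired
# signed-ranks test over those per-query values.
# statistic, p_value = wilcoxon(ap_bot, ap_vdr)
```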
We find a significant difference in average precision between Automatic VDR and Manual VDR (p < 0.01), but no difference in P@10 between Automatic VDR and Manual VDR (p = 0.442).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.3" }, { "text": "We now analyse whether image structure is useful when the action does not require a direct object. The analysis presented here compares the Bag-of-Terms model against the Automatic VDR model because there was no significant difference in P@10 between the Automatic and Manual VDR models. Table 3 shows the MAP and Precision@10 per type of action. Figure 3 shows the precision/recall curves for (a) transitive verbs, (b) intransitive verbs, and (c) light verbs.", "cite_spans": [], "ref_spans": [ { "start": 288, "end": 295, "text": "Table 3", "ref_id": null }, { "start": 347, "end": 355, "text": "Figure 3", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Retrieval Performance by Type of Action and Verb", "sec_num": "6.4" }, { "text": "In Figure 3(a) , it can be seen that the actions that can be classified as transitive verbs benefit from exploiting the structure encoded in the Visual Dependency Representation. The only exception is for the action to read, which frequently behaves as an intransitive verb: the man reads on a train. The consistent improvement in both the entirety of the ranked list and at the top of the ranked list can be seen in the MAP and P@10 results in Table 3 . Figure 3(b) shows that there is a small increase in retrieval performance for intransitive verbs compared to the transitive verbs. We conjecture this is because there are fewer objects to annotate in an image when the verb does not require a direct object. The summary results for the intransitive verbs in Table 3 confirm the small but insignificant increase in MAP and P@10.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 14, "text": "Figure 3(a)", "ref_id": "FIGREF5" }, { "start": 445, "end": 452, "text": "Table 3", "ref_id": null }, { "start": 455, "end": 466, "text": "Figure 3(b)", "ref_id": "FIGREF5" }, { "start": 762, "end": 769, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Retrieval Performance by Type of Action and Verb", "sec_num": "6.4" }, { "text": "Finally, the light verbs, shown in Figure 3 (c), exhibit variable behaviour in retrieval performance. One reason for this could be that if the light verb encodes information about the object, as in using a computer, then the computer can be annotated in the image, and thus it acts as a transitive verb. Conversely, when Table 3 : Mean Average Precision and Precision@10 for each action in the data set, grouped into transitive (top), intransitive (middle), and light (bottom) verbs. VDR is the Automatic VDR model and Bag is the Bag-of-Terms model. It can be seen that the Automatic VDR retrieval model is consistently better than the Bag-of-Terms model on both MAP and Precision@10. : the Automatic VDR model is significantly different than Bag-of-Terms at p < 0.01. 
the light verb conveys information about the outcome of the event, as in the action take a photograph, the outcome is rarely possible to annotate in an image, and so no improvements can be gained from structured image representations.", "cite_spans": [], "ref_spans": [ { "start": 35, "end": 43, "text": "Figure 3", "ref_id": "FIGREF5" }, { "start": 321, "end": 328, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Retrieval Performance by Type of Action and Verb", "sec_num": "6.4" }, { "text": "In our experiments we observed that all models can achieve high precision at very low levels of recall. We found that this happens for testing images that are almost identical to the query image. For such images, objects that are unrelated to the target action form an effective context, which allows this image to be placed at the top of the ranking. However, near-identical images are relatively rare, and performance degrades for higher levels of recall. It is surprising that image retrieval using automatically predicted VDR model is statistically indistinguishable from the manually crafted VDR model, given the relatively low accuracy of our VDR prediction model: 61.3% by the labelled dependency attachment accuracy measure. One possible explanation could be that not all parts of the VDR structure are useful for retrieval purposes, and our VDR prediction model does well on the useful ones. This observation also suggests that we are unlikely to achieve better retrieval performance by continuing to improve the accuracy of VDR prediction. We believe a more promising direction is refining the current formulation of the VDR, and exploring more sophisticated ways to measure the similarity of two structured representations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6.5" }, { "text": "In this paper we argued that a limiting factor of retrieving images depicting actions is the unstructured bag-of-terms representation typically used for images. In a bag-of-terms representation, images that share similar sets of regions are deemed to be related even when the depicted actions are different. We proposed that representing an image using the Visual Dependency Representation (VDR) can prevent this type of misclassification in image retrieval. The VDR of an image captures the region-region relationships that explain what is happening in an image, and it can be automatically predicted from a region-annotated image.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "In a query-by-example image retrieval task, we found that representing images as automatically predicted VDRs resulted in statistically significant 8.8% relative improvement in MAP and 8.6% relative improvement in Precision@10 compared to a Bag-of-Terms model. There was a significant difference in MAP when using manually or automatically predicted image structures, but no difference in the Precision@10, suggesting that the proposed automatic prediction model is accurate enough for retrieval purposes. Future work will focus on using automatically generated visual input, such as the output of the image tagger (Guillaumin and Mensink, 2009) , or an automatic object detector (Felzenszwalb et al., 2010), which will make it possible to tackle image ranking tasks (Hodosh et al., 2013) . 
It would also be interesting to explore alternative structure prediction methods, such as predicting the relationships using a conditional random field (Zitnick et al., 2013), or by leveraging distributional lexical semantics (Le et al., 2013b).", "cite_spans": [ { "start": 615, "end": 645, "text": "(Guillaumin and Mensink, 2009)", "ref_id": "BIBREF8" }, { "start": 767, "end": 788, "text": "(Hodosh et al., 2013)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Recall there are three Visual Dependency Representations for each image. The partitions are the same as those used in the VDR prediction experiment in Section 4.1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The anonymous reviewers provided valuable feedback on this paper. The research is funded by ERC Starting Grant SYNPROC No. 203427.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "I Biederman. 1972. Perceiving real-world scenes", "authors": [ { "first": "Moshe", "middle": [], "last": "Bar", "suffix": "" }, { "first": "Shimon", "middle": [], "last": "Ullman", "suffix": "" } ], "year": 1996, "venue": "Perception", "volume": "25", "issue": "3", "pages": "77--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moshe Bar and Shimon Ullman. 1996. Spatial Context in Recognition. Perception, 25(3):343-52, January. I Biederman. 1972. Perceiving real-world scenes. Science, 177(4043):77-80.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "On the algorithmic implementation of multiclass kernel-based vector machines", "authors": [ { "first": "Koby", "middle": [], "last": "Crammer", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2002, "venue": "Journal of Machine Learning Research", "volume": "2", "issue": "", "pages": "265--292", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koby Crammer and Yoram Singer. 2002. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265-292.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Image retrieval: Ideas, influences, and trends of the new age", "authors": [ { "first": "Ritendra", "middle": [], "last": "Datta", "suffix": "" }, { "first": "Dhiraj", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "James", "middle": [ "Z" ], "last": "Wang", "suffix": "" } ], "year": 2008, "venue": "ACM Computing Surveys", "volume": "40", "issue": "2", "pages": "1--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ritendra Datta, Dhiraj Joshi, Jia Li, and James Z. Wang. 2008. Image retrieval: Ideas, influences, and trends of the new age. 
ACM Computing Surveys, 40(2):1-60.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Object Recognition as Machine Translation: Learning a Lexicon for a Fixed Image Vocabulary", "authors": [ { "first": "P", "middle": [], "last": "Duygulu", "suffix": "" }, { "first": "Kobus", "middle": [], "last": "Barnard", "suffix": "" }, { "first": "J F G De", "middle": [], "last": "Freitas", "suffix": "" }, { "first": "David A", "middle": [], "last": "Forsyth", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 7th European Conference on Computer Vision", "volume": "", "issue": "", "pages": "97--112", "other_ids": {}, "num": null, "urls": [], "raw_text": "P Duygulu, Kobus Barnard, J F G de Freitas, and David A Forsyth. 2002. Object Recognition as Machine Translation: Learning a Lexicon for a Fixed Image Vocabulary. In Proceedings of the 7th European Conference on Computer Vision, pages 97-112, Copenhagen, Denmark.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Image Description using Visual Dependency Representations", "authors": [ { "first": "Desmond", "middle": [], "last": "Elliott", "suffix": "" }, { "first": "Frank", "middle": [], "last": "Keller", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1292--1302", "other_ids": {}, "num": null, "urls": [], "raw_text": "Desmond Elliott and Frank Keller. 2013. Image Description using Visual Dependency Representations. In Pro- ceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1292-1302, Seattle, Washington, U.S.A.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The PASCAL Visual Object Classes Challenge", "authors": [ { "first": "Mark", "middle": [], "last": "Everingham", "suffix": "" }, { "first": "Luc", "middle": [], "last": "Van Gool", "suffix": "" }, { "first": "K", "middle": [ "I" ], "last": "Christopher", "suffix": "" }, { "first": "John", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Winn", "suffix": "" }, { "first": "", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. 2011. The PASCAL Visual Object Classes Challenge 2011.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Every picture tells a story: generating sentences from images", "authors": [ { "first": "Ali", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Mohsen", "middle": [], "last": "Hejrati", "suffix": "" }, { "first": "Mohammad", "middle": [ "Amin" ], "last": "Sadeghi", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Young", "suffix": "" }, { "first": "Cyrus", "middle": [], "last": "Rashtchian", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" }, { "first": "David", "middle": [], "last": "Forsyth", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 15th European Conference on Computer Vision", "volume": "", "issue": "", "pages": "15--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every picture tells a story: generating sentences from images. 
In Proceedings of the 15th European Conference on Computer Vision, pages 15-29, Heraklion, Crete, Greece.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Object Detection with Discriminatively Trained Part-Based Models", "authors": [ { "first": "R B", "middle": [], "last": "P F Felzenszwalb", "suffix": "" }, { "first": "D", "middle": [], "last": "Girshick", "suffix": "" }, { "first": "D", "middle": [], "last": "Mcallester", "suffix": "" }, { "first": "", "middle": [], "last": "Ramanan", "suffix": "" } ], "year": 2010, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "32", "issue": "9", "pages": "1627--1645", "other_ids": {}, "num": null, "urls": [], "raw_text": "P F Felzenszwalb, R B Girshick, D McAllester, and D Ramanan. 2010. Object Detection with Discriminatively Trained Part-Based Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1627- 1645.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Tagprop: Discriminative metric learning in nearest neighbor models for image auto-annotation", "authors": [ { "first": "Matthieu", "middle": [], "last": "Guillaumin", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Mensink", "suffix": "" } ], "year": 2009, "venue": "IEEE 12th International Conference on Computer Vision", "volume": "", "issue": "", "pages": "309--316", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthieu Guillaumin and Thomas Mensink. 2009. Tagprop: Discriminative metric learning in nearest neighbor models for image auto-annotation. In IEEE 12th International Conference on Computer Vision, pages 309-316, Kyoto, Japan.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics", "authors": [ { "first": "Micah", "middle": [], "last": "Hodosh", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Young", "suffix": "" }, { "first": "Julia", "middle": [], "last": "Hockenmaier", "suffix": "" } ], "year": 2013, "venue": "Journal of Artificial Intelligence Research", "volume": "47", "issue": "", "pages": "853--899", "other_ids": {}, "num": null, "urls": [], "raw_text": "Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics. Journal of Artificial Intelligence Research, 47:853-899.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A Model for Learning the Semantics of Pictures", "authors": [ { "first": "Victor", "middle": [], "last": "Lavrenko", "suffix": "" }, { "first": "Jiwoon", "middle": [], "last": "Manmatha", "suffix": "" }, { "first": "", "middle": [], "last": "Jeon", "suffix": "" } ], "year": 2003, "venue": "Advances in Neural Information Processing Systems", "volume": "16", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Lavrenko, R Manmatha, and Jiwoon Jeon. 2003. A Model for Learning the Semantics of Pictures. 
In Advances in Neural Information Processing Systems 16, Vancouver and Whistler, British Columbia, Canada.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories", "authors": [ { "first": "S", "middle": [], "last": "Lazebnik", "suffix": "" }, { "first": "C", "middle": [], "last": "Schmid", "suffix": "" }, { "first": "J", "middle": [], "last": "Ponce", "suffix": "" } ], "year": 2006, "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "2169--2178", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Lazebnik, C. Schmid, and J. Ponce. 2006. Beyond Bags of Features: Spatial Pyramid Matching for Recog- nizing Natural Scene Categories. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 2169-2178, New York, NY, USA.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Exploiting language models to recognize unseen actions", "authors": [ { "first": "", "middle": [], "last": "Dt Le", "suffix": "" }, { "first": "Jasper", "middle": [], "last": "Bernardi", "suffix": "" }, { "first": "", "middle": [], "last": "Uijlings", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the International Conference on Multimedia Retrieval", "volume": "", "issue": "", "pages": "231--238", "other_ids": {}, "num": null, "urls": [], "raw_text": "DT Le, R Bernardi, and Jasper Uijlings. 2013a. Exploiting language models to recognize unseen actions. In Proceedings of the International Conference on Multimedia Retrieval, pages 231-238, Dallas, Texas, U.S.A.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Exploiting language models for visual recognition", "authors": [ { "first": "Jasper", "middle": [], "last": "Dt Le", "suffix": "" }, { "first": "Raffaella", "middle": [], "last": "Uijlings", "suffix": "" }, { "first": "", "middle": [], "last": "Bernardi", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "769--779", "other_ids": {}, "num": null, "urls": [], "raw_text": "DT Le, Jasper Uijlings, and Raffaella Bernardi. 2013b. Exploiting language models for visual recognition. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 769-779, Seattle, Washington, U.S.A.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Object recognition from local scale-invariant features", "authors": [ { "first": "D G", "middle": [], "last": "Lowe", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the International Conference on Computer Vision", "volume": "", "issue": "", "pages": "1150--1157", "other_ids": {}, "num": null, "urls": [], "raw_text": "D G Lowe. 1999. Object recognition from local scale-invariant features. 
In Proceedings of the International Conference on Computer Vision, pages 1150-1157, Washington, D.C., USA.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Non-projective dependency parsing using spanning tree algorithms", "authors": [ { "first": "Ryan", "middle": [], "last": "Mcdonald", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Pereira", "suffix": "" }, { "first": "Kiril", "middle": [], "last": "Ribarov", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "523--530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Haji\u010d. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 523-530, Vancouver, British Columbia, Canada.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "LabelMe: A Database and Web-Based Tool for Image Annotation", "authors": [ { "first": "C", "middle": [], "last": "Bryan", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Russell", "suffix": "" }, { "first": "Kevin", "middle": [ "P" ], "last": "Torralba", "suffix": "" }, { "first": "William", "middle": [ "T" ], "last": "Murphy", "suffix": "" }, { "first": "", "middle": [], "last": "Freeman", "suffix": "" } ], "year": 2008, "venue": "International Journal of Computer Vision", "volume": "77", "issue": "1-3", "pages": "157--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bryan C. Russell, Antonio Torralba, Kevin P. Murphy, and William T. Freeman. 2008. LabelMe: A Database and Web-Based Tool for Image Annotation. International Journal of Computer Vision, 77(1-3):157-173.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Recognition Using Visual Phrases", "authors": [ { "first": "A", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Sadeghi", "suffix": "" }, { "first": "", "middle": [], "last": "Farhadi", "suffix": "" } ], "year": 2011, "venue": "2011 IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "1745--1752", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad A Sadeghi and Ali Farhadi. 2011. Recognition Using Visual Phrases. In 2011 IEEE Conference on Computer Vision and Pattern Recognition, pages 1745-1752, Colorado Springs, Colorado, U.S.A.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Normalized Cuts and Image Segmentation", "authors": [ { "first": "Jianbo", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jitendra", "middle": [], "last": "Malik", "suffix": "" } ], "year": 2000, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "22", "issue": "8", "pages": "888--905", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianbo Shi and Jitendra Malik. 2000. Normalized Cuts and Image Segmentation. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905, August.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Tag Completion for Image Retrieval", "authors": [ { "first": "Lei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rong", "middle": [], "last": "Jin", "suffix": "" }, { "first": "", "middle": [], "last": "Jain", "suffix": "" } ], "year": 2012, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "35", "issue": "3", "pages": "716--727", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lei Wu, Rong Jin, and Anil K Jain. 2012. Tag Completion for Image Retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(3):716-727.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Corpus-Guided Sentence Generation of Natural Images", "authors": [ { "first": "Yezhou", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ching", "middle": [ "Lik" ], "last": "Teo", "suffix": "" }, { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Yiannis", "middle": [], "last": "Aloimonos", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "444--454", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yezhou Yang, Ching Lik Teo, Hal Daum\u00e9 III, and Yiannis Aloimonos. 2011. Corpus-Guided Sentence Generation of Natural Images. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 444-454, Edinburgh, Scotland, UK.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning the Visual Interpretation of Sentences", "authors": [ { "first": "Devi", "middle": [], "last": "Cl Zitnick", "suffix": "" }, { "first": "Lucy", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "", "middle": [], "last": "Vanderwende", "suffix": "" } ], "year": 2013, "venue": "IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "1681--1688", "other_ids": {}, "num": null, "urls": [], "raw_text": "CL Zitnick, Devi Parikh, and Lucy Vanderwende. 2013. Learning the Visual Interpretation of Sentences. In IEEE International Conference on Computer Vision, pages 1681-1688, Sydney, Australia.", "links": null } }, "ref_entries": { "FIGREF1": { "uris": null, "text": "that are not concerned with the depicted action are first attached to each other if there is a clear spatial relationship between them (for an example, seeFigure 1(b), where the laptop is attached to the table", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "(a) and (b). Similar : cos(VDR a , VDR c ) = 0.56 > cos(Bag a , Bag c ) = 0.52 Dissimilar : cos(VDR b , VDR a ) = 0.201 cos(Bag b , Bag a ) = 0.4", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": "Average 11-point precision/recall curves show that the VDR-based retrieval models are consistently better than the Bag-of-Terms model.", "num": null, "type_str": "figure" }, "FIGREF5": { "uris": null, "text": "Precision/recall curves grouped by the type of verb. The solid lines represent the Automatic VDR model; the dashed lines represent the Bag-of-Terms model; y-axis is Precision, and the x-axis is Recall. (a) Images depicting transitive verbs benefit the most from the Visual Dependency Representation and are easiest to retrieve. 
(b) Intransitive verbs are difficult to retrieve and there is a negligible improvement in performance when using the Visual Dependency Representation. (c) Light verbs benefit from the Visual Dependency Representation depending on the type of the object involved in the action.", "num": null, "type_str": "figure" }, "TABREF0": { "content": "
Figure 1 VDR annotations (recovered from table residue): (b) regions Man, Trumpet, Boy with relation labels on, beside, beside; (c) action "using computer", regions ROOT, Sofa, Man, Laptop, Chair with relation labels beside, on.
", "num": null, "html": null, "type_str": "table", "text": "" }, "TABREF1": { "content": "
X on Y: More than 50% of the pixels of region X overlap with region Y.
X surrounds Y: The entirety of region X overlaps with region Y.
X beside Y: The angle between the centroid of X and the centroid of Y lies between 315 and 45 degrees, or 135 and 225 degrees.
X opposite Y: Similar to beside, but used when X and Y are at opposite sides of the image.
X above Y: The angle between X and Y lies between 225 and 315 degrees.
X below Y: The angle between X and Y lies between 45 and 135 degrees.
X infront Y: The Z-plane relationship between the regions is dominant.
X behind Y: Identical to infront, except X is behind Y in the Z-plane.
", "num": null, "html": null, "type_str": "table", "text": "The angle between the centroid of X and the centroid of Y lies between 315 \u2022 and 45 \u2022 or 135 \u2022 and 225 \u2022 ." }, "TABREF2": { "content": "
Regions: ROOT, Lamp, Picture, Girl, Laptop, Bed. Relations: Girl beside Laptop; Girl above Bed.
", "num": null, "html": null, "type_str": "table", "text": "Visual Dependency Grammar defines eight relations between pairs of annotated regions. To simplify explanation, all regions are circles, where X is the grey region and Y is the white region. All relations are considered with respect to the centroid of a region and the angle between those centroids." } } } }