{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:27:57.652839Z" }, "title": "Diverse and Relevant Visual Storytelling with Scene Graph Embeddings", "authors": [ { "first": "Xudong", "middle": [], "last": "Hong", "suffix": "", "affiliation": {}, "email": "xhong@coli.uni-saarland.de" }, { "first": "Rakshith", "middle": [], "last": "Shetty", "suffix": "", "affiliation": { "laboratory": "", "institution": "MPI Informatics", "location": {} }, "email": "rshetty@mpg.mpi-inf.de" }, { "first": "Asad", "middle": [], "last": "Sayeed", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Gothenburg", "location": {} }, "email": "asad.sayeed@gu.se" }, { "first": "Khushboo", "middle": [], "last": "Mehra", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": {} }, "email": "kmehra@coli.uni-saarland.de" }, { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "", "affiliation": { "laboratory": "", "institution": "Saarland University", "location": {} }, "email": "" }, { "first": "Bernt", "middle": [], "last": "Schiele", "suffix": "", "affiliation": { "laboratory": "", "institution": "MPI Informatics", "location": {} }, "email": "schiele@mpg.mpi-inf.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A problem in automatically generated stories for image sequences is that they use overly generic vocabulary and phrase structure and fail to match the distributional characteristics of human-generated text. We address this problem by introducing explicit representations for objects and their relations by extracting scene graphs from the images. Utilizing an embedding of this scene graph enables our model to more explicitly reason over objects and their relations during story generation, compared to the global features from an object classifier used in previous work. We apply metrics that account for the diversity of words and phrases of generated stories as well as for reference to narratively-salient image features and show that our approach outperforms previous systems. Our experiments also indicate that our models obtain competitive results on reference-based metrics.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "A problem in automatically generated stories for image sequences is that they use overly generic vocabulary and phrase structure and fail to match the distributional characteristics of human-generated text. We address this problem by introducing explicit representations for objects and their relations by extracting scene graphs from the images. Utilizing an embedding of this scene graph enables our model to more explicitly reason over objects and their relations during story generation, compared to the global features from an object classifier used in previous work. We apply metrics that account for the diversity of words and phrases of generated stories as well as for reference to narratively-salient image features and show that our approach outperforms previous systems. Our experiments also indicate that our models obtain competitive results on reference-based metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Visual storytelling is the generation of a coherent narrative from a series of images (Huang et al., 2016) . 
In this paper, we address a particular challenge in visual storytelling: reflecting human preferences in narrative structure, especially the choice of content words and phrases that comprise a readable story. Humans prefer to use diverse words and phrases to construct the storyline to avoid repetitions within or across sentences. For example, in the human-written story in Fig. 1 , very few content words are repeated. However, Modi and Parde (2019) have found that recent work often generate repetitive words and phrases which leads to repetitions across sentences and makes stories less diverse. For example, in the first story of Fig. 1 , the model generates a verb phrase had a great time and then repeats it in the fifth sentence. These words 1 Typo generated by human: \"have\" instead of \"gave\".", "cite_spans": [ { "start": 86, "end": 106, "text": "(Huang et al., 2016)", "ref_id": "BIBREF16" }, { "start": 539, "end": 560, "text": "Modi and Parde (2019)", "ref_id": "BIBREF27" } ], "ref_spans": [ { "start": 484, "end": 490, "text": "Fig. 1", "ref_id": null }, { "start": 744, "end": 750, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Human: Governor Brandon had an event scheduled. When he arrived at his event he was escorted in by military men. He went to the serving line and ordered some food. He even gave the server a tip. The Governor have a speech about healthcare. After speech he stayed and talked to local people about their concerns. As few of the residents presented him with a jacket to show their appreciation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scene graph Story", "sec_num": null }, { "text": "Baseline: everyone was excited for the ceremony to begin . everyone was excited to be there . he was very happy to be there . we had a great time . they all had a great time .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scene graph Story", "sec_num": null }, { "text": "Ours: i went to the party last week . the chef was preparing the food . he was very happy to see him . they had a great time . after the ceremony was over , everyone gathered together to talk about their plans . Figure 1 : Example of extracting scene graphs from images and their relationship to content words and phrases in the stories. The first story (Baseline) is generated by AREL (Wang et al., 2018b) . The second story (Ours) is generated by our proposed model. The Human story comes from the VIST dataset (Huang et al., 2016) 1 . and phrases are usually overly generic. We argue that this is because relations between objects in the last image are not well-represented in the image embedding, forcing the model to produce generic alternatives.", "cite_spans": [ { "start": 386, "end": 406, "text": "(Wang et al., 2018b)", "ref_id": "BIBREF39" }, { "start": 513, "end": 533, "text": "(Huang et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 212, "end": 220, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Scene graph Story", "sec_num": null }, { "text": "We address this problem by employing a more explicit and structured representation of objects and their relations in form of scene graphs (Johnson et al., 2015) . Scene graphs encode both spatial and predicate relations between objects in the images as well as semantic event relations (actions and their participants). Relations like (man, near, food) in the scene graph in Fig. 
1 are essential to generate more specific noun phrases (e.g., the chef ) instead of generic ones (e.g., everyone).", "cite_spans": [ { "start": 138, "end": 160, "text": "(Johnson et al., 2015)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 375, "end": 381, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Scene graph Story", "sec_num": null }, { "text": "In our approach, we extract scene graphs from the images and then learn scene graph embeddings using graph neural networks (Marcheggiani and Perez-Beltrachini, 2018) for each image, which combine the visual features and the discrete semantic information from the scene graphs. A combination of story-wide and individual-image scene graph features is then decoded in the form of a story; parameter-sharing in the decoder encourages narrative coherence.", "cite_spans": [ { "start": 123, "end": 165, "text": "(Marcheggiani and Perez-Beltrachini, 2018)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Scene graph Story", "sec_num": null }, { "text": "One difficulty in learning scene graph embeddings together with an end-to-end visual storytelling model is that they introduce a large number of parameters, increasing both computational and learning complexity and making them more difficult to integrate into larger, computationallyexpensive learning approaches. We therefore break down the problem into a pipeline with three steps designed to be parameter-efficient and trained independently ( Fig. 2): (1) the extraction and augmentation of scene graphs with an existing automatic tool;", "cite_spans": [], "ref_spans": [ { "start": 446, "end": 454, "text": "Fig. 2):", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Scene graph Story", "sec_num": null }, { "text": "(2) the training of a graph encoder to obtain scene graph embeddings; and (3) the application of an attention-based visual storytelling model to these embeddings to generate stories. The first two steps establish that we can achieve competitive results without an end-to-end model that requires both story and image to be paired at all steps of training. The third step uses an attention mechanism to supplant a complex graph encoder in the second step, reducing the number of parameters in the story generation model. Our results show that not only is this approach competitive with other recent work in terms of standard reference-based measures (e.g., BLEU), it has an addtional advantage: the distributional properties of the generated text are closer to humangenerated stories than the output of competing systems. 
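To make the modular design just described concrete, the sketch below shows how the three independently trained stages could be composed at inference time. It is a minimal illustration under assumed stage interfaces (the names `extract`, `encode`, and `generate` are placeholders), not the authors' released implementation.

```python
from typing import Any, Callable, List

SceneGraph = Any  # augmented scene graph for one image (illustrative placeholder type)
Embedding = Any   # scene graph embedding for one image (illustrative placeholder type)

def build_storyteller(
    extract: Callable[[List[Any]], List[SceneGraph]],      # step 1: scene graph extraction + augmentation
    encode: Callable[[List[SceneGraph]], List[Embedding]],  # step 2: pre-trained graph encoder
    generate: Callable[[List[Embedding]], str],             # step 3: attention-based story generator
) -> Callable[[List[Any]], str]:
    """Compose the three independently trained stages into a single storyteller."""
    def tell_story(image_sequence: List[Any]) -> str:
        graphs = extract(image_sequence)   # only this stage touches raw pixels
        embeddings = encode(graphs)        # trained without any story supervision
        return generate(embeddings)        # only this stage needs paired image-story data
    return tell_story
```

The point of the composition is that only the last stage requires paired image-story data; the first two can be trained or swapped independently.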
The improved quality of the stories and the finer control over the bias of the captioning model afforded by our approach are thus reflected in the outcome of our implementation and experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scene graph Story", "sec_num": null }, { "text": "The main contributions of this paper are: (a) we introduce a pipeline method for visual storytelling that uses a graph-to-sequence model to learn embeddings for augmented scene graphs and an attention mechanism to combine the scene graph embeddings; (b) we perform the first fine-grained analysis of the diversity of visual stories by inspecting word and phrase distributions and show that machine-generated stories from previous models are far less diverse than human-written stories; and (c) we show that the generated stories from our pipeline are not only more diverse than previous work but also more relevant to the images.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scene graph Story", "sec_num": null }, { "text": "Visual storytelling. Extracting a good representation of the information in the visual input is a key part of the visual storytelling task. Prior work in visual storytelling has typically opted for global features extracted from a pre-trained convolutional neural network (Liu et al., 2017; Yu et al., 2017; Wang et al., 2018a,b; Huang et al., 2019) and has focused on improving the language generation model. Wang et al. (2017) show that introducing regional features and implicit coreference relations of entities leads to more human-realistic word usage in generated stories. Only a few prior works employ an intermediate structured representation for the storytelling task. Yang et al. (2019a) use an external database of knowledge graphs to enhance the visual representation and improve storytelling performance. We use scene graphs extracted from images, which does not require an external knowledge base. SGVST (Wang et al., 2020) extracts scene graphs from images and trains an end-to-end model with a graph convolutional encoder directly on visual stories. We propose a pipeline method which first obtains scene graph embeddings from images and then applies them to visual storytelling in order to reduce the difficulty of learning both the scene graph embeddings and the story generation model together. Our attention-based story generation model has fewer parameters while obtaining competitive results. Scene graph representation. A scene graph is a symbolic representation of structural information where entities are nodes and their relations are edges (Johnson et al., 2015) . The large scene-graph-annotated Visual Genome (Krishna et al., 2017) dataset has enabled the development of models to extract scene graph representations from images (Zellers et al., 2018) . These scene graph representations have proven effective on various tasks like image retrieval (Johnson et al., 2015) and image generation (Johnson et al., 2018) . Scene graph based image captioning. A sequential scene graph representation is used to encode images in Gao et al. (2018) to improve image captioning. Yang et al. (2019b) propose auto-encoding text-based scene graphs to learn a shared dictionary between visual and text-based graphs, achieving state-of-the-art image captioning performance. Wang et al. (2019b) show that image scene graphs extracted using a trained model can match the captioning performance of an oracle with access to ground-truth graphs. 
Aligning text-and imagebased scene graphs has also been used to generate image captions without paired data (Gu et al., 2019) .", "cite_spans": [ { "start": 272, "end": 290, "text": "(Liu et al., 2017;", "ref_id": "BIBREF24" }, { "start": 291, "end": 307, "text": "Yu et al., 2017;", "ref_id": "BIBREF24" }, { "start": 308, "end": 329, "text": "Wang et al., 2018a,b;", "ref_id": null }, { "start": 330, "end": 349, "text": "Huang et al., 2019)", "ref_id": "BIBREF15" }, { "start": 410, "end": 428, "text": "Wang et al. (2017)", "ref_id": "BIBREF36" }, { "start": 672, "end": 691, "text": "Yang et al. (2019a)", "ref_id": "BIBREF40" }, { "start": 1529, "end": 1551, "text": "(Johnson et al., 2015)", "ref_id": "BIBREF18" }, { "start": 1599, "end": 1621, "text": "(Krishna et al., 2017)", "ref_id": "BIBREF21" }, { "start": 1719, "end": 1741, "text": "(Zellers et al., 2018;", "ref_id": "BIBREF43" }, { "start": 1837, "end": 1859, "text": "(Johnson et al., 2015)", "ref_id": "BIBREF18" }, { "start": 1881, "end": 1903, "text": "(Johnson et al., 2018)", "ref_id": "BIBREF17" }, { "start": 2010, "end": 2027, "text": "Gao et al. (2018)", "ref_id": "BIBREF6" }, { "start": 2057, "end": 2076, "text": "Yang et al. (2019b)", "ref_id": "BIBREF41" }, { "start": 2247, "end": 2266, "text": "Wang et al. (2019b)", "ref_id": "BIBREF35" }, { "start": 2522, "end": 2539, "text": "(Gu et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The task of visual storytelling can be decomposed into two distinct parts: (1) extracting relevant information from input images I into compact features and (2) generating stories using these visual features. We improve the visual feature representation by switching from commonly-used global feature vectors to a scene graph-based representation which explicitly encodes objects and their relations. We also reduce the number of parameters by taking a modular approach that separates learning scene graph embeddings from images and generating visual stories. This allows us to independently train the scene graph embedding model and to design a visual storytelling model with fewer parameters yet competitive performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Design", "sec_num": "3" }, { "text": "Our full pipeline is shown in Fig. 2 . We first apply a scene graph generator to extract scene graphs containing vertices for objects and edges for relations between two objects. We then augment the scene graph for each image by adding regional features (see section 3.1). A graph neural network embeds each graph node by aggregating information from across the graph. We propose a pre-training step to independently learn this graph embedding. To do this, we obtain the confidence of the object detector for each object in each image, termed as visual saliency, and construct a sequence of object labels ordered by their visual saliency for each image. Then we train a graph-to-sequence model to predict this object sequence given the scene graph embedding of the corresponding image (see section 3.2). To generate stories, we extract both global and regional features from the scene graph embedding for each image and feed them to an attention-based story generation model (see section 3.3).", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 36, "text": "Fig. 
2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model Design", "sec_num": "3" }, { "text": "Scene graphs can be extracted with the Knowledgeembedded Routing Network (KERN), a state-ofthe-art scene graph generator built on top of a Faster R-CNN object detector (Ren et al., 2015) . KERN generates scene graphs", "cite_spans": [ { "start": 168, "end": 186, "text": "(Ren et al., 2015)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Scene Graph Augmentation", "sec_num": "3.1" }, { "text": "G = (G 1 , G 2 , ..., G N ) for all images, where each scene graph G j = {V j , E j } contains a set of nodes V j representing recognised entities with node la- bels v 1 , v 2 , ..., v", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scene Graph Augmentation", "sec_num": "3.1" }, { "text": "M and a set of edges E j with edge labels representing relations between entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scene Graph Augmentation", "sec_num": "3.1" }, { "text": "An issue here is that scene graphs are not always connected, but graph neural encoders require connected graphs as input (see the first scene graph in Fig. 2 ). To obtain a single connected graph for each image, we augment the scene graphs by introducing a global node in each graph G j , and connect it to all other nodes in the graph.", "cite_spans": [], "ref_spans": [ { "start": 151, "end": 157, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Scene Graph Augmentation", "sec_num": "3.1" }, { "text": "At this stage, the augmented scene graph contains discrete categorical triplets like (man, near, table) (see Fig. 2 for examples). It does not contain detailed visual appearance or shape information: e.g., the color of the man's suit. We address this by augmenting each node in the graph with a corresponding visual feature vector. This is done by extracting Regions of Interest (RoIs) of each object from the backbone Faster R-CNN model of KERN. Then we apply the RoI align algorithm (He et al., 2017) to extract visual features corresponding to each node. The global node is assigned the mean features of all the nodes in the graph.", "cite_spans": [ { "start": 485, "end": 502, "text": "(He et al., 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 109, "end": 115, "text": "Fig. 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Scene Graph Augmentation", "sec_num": "3.1" }, { "text": "We employ graph convolution networks (GCN; Kipf and Welling, 2017) to encode our augmented scene graph, since they have been effective in learning representations with graph-like structures like parse trees (Du and Black, 2019) and knowledge graphs (Song et al., 2020) .", "cite_spans": [ { "start": 207, "end": 227, "text": "(Du and Black, 2019)", "ref_id": "BIBREF5" }, { "start": 249, "end": 268, "text": "(Song et al., 2020)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Scene Graph Embedding", "sec_num": "3.2" }, { "text": "When it comes to the learning of the scene graph encoders, we are inspired by human behaviours in image description task. Objects that appear earlier in image captions usually attract more human attention and are more visually salient to humans (Griffin and Bock, 2000; He et al., 2019). There is a large agreement between human attended regions and activation maps of the last convolutional layer of a VGG-16 network, even though the VGG-16 network is not fine-tuned for captioning . 
If a region of the feature maps is highly activated, it is very likely to be classified as an object with higher confidence. Therefore, we conclude that objects that appear earlier in captions should have a higher detection confidence when they are passed through a VGG-16 network. We assume that the same holds in visual storytelling and leave its verification to future work due to space limitations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scene Graph Embedding", "sec_num": "3.2" }, { "text": "To simulate this phenomenon, we order the object labels in the scene graph of each image by their confidence and design a graph-to-sequence model to predict this sequence. The model contains two major components: a GCN which encodes the augmented scene graph, and a recurrent neural network decoder which generates the sequence of object labels $(v_1, v_2, ..., v_M)$ ordered by their visual saliency, i.e., confidence from the object detector. This allows us to train the GCN in a self-supervised manner without needing additional labels and keeps objects that tend to be more salient in similar sequence positions across images, giving them an advantage in training. Graph encoder. We use a multiplicative Relational Graph Convolutional Network (mRGCN; Hong et al., 2019), a GCN variant that assigns parameters not only to nodes but also to edges in a graph, as the graph encoder to introduce explicit representations for edge labels. Given the augmented scene graph G, each node is represented with a regional visual feature vector $x_v \in \mathbb{R}^d$ extracted from the object detector. For the first layer of the encoder, the hidden representation of node $v$ is $h^1_v = x_v$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention-Based Story Generation", "sec_num": null }, { "text": "Then the l-th mRGCN layer computes the hidden representation of node $v$ in the (l + 1)-th layer as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GRU-RNNs", "sec_num": null }, { "text": "$h^{l+1}_v = f\big(W h^l_v + \sum_{u \in N(v)} W_{dir(e)} h^l_u \odot r_e\big)$ (1) where $W \in \mathbb{R}^{d \times h}$ is a trainable parameter and $N(v)$", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GRU-RNNs", "sec_num": null }, { "text": "is the set of all neighbours of node $v$. $f$ is the ReLU non-linearity and $\odot$ is the Hadamard product. $W_{dir(e)} \in \mathbb{R}^{d \times h}$, where $dir(e) \in \{in, out\}$ is the direction of the edge $e_{u,v}$ connecting $u$ and $v$, and $r_e \in \mathbb{R}^h$ is an embedding of the label of the edge $e_{u,v}$. Each layer aggregates the direct neighbours of each node. We stack L GCN layers to encode the full graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GRU-RNNs", "sec_num": null }, 
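As a concrete reference for Eq. (1), the following is a minimal sketch of a single mRGCN layer, assuming a PyTorch implementation and a simple edge-list graph representation; it reflects one reasonable reading of the edge-direction convention and is not the authors' released code.

```python
import torch
import torch.nn as nn

class MRGCNLayer(nn.Module):
    """One mRGCN layer: h_v^{l+1} = ReLU(W h_v^l + sum_{u in N(v)} W_dir(e) h_u^l (Hadamard) r_e)."""

    def __init__(self, in_dim: int, out_dim: int, num_edge_labels: int):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim, bias=False)   # W, the self-connection weight
        self.w_dir = nn.ModuleDict({                            # W_in / W_out, one weight per edge direction
            "in": nn.Linear(in_dim, out_dim, bias=False),
            "out": nn.Linear(in_dim, out_dim, bias=False),
        })
        self.edge_emb = nn.Embedding(num_edge_labels, out_dim)  # r_e, embedding of the edge label

    def forward(self, h: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        """h: (num_nodes, in_dim) node features; edges: (num_edges, 3) long tensor of (u, v, label)."""
        u, v, label = edges[:, 0], edges[:, 1], edges[:, 2]
        r_e = self.edge_emb(label)                   # (num_edges, out_dim)
        msg_in = self.w_dir["in"](h[u]) * r_e        # messages along u -> v, incoming at v
        msg_out = self.w_dir["out"](h[v]) * r_e      # messages along v -> u, outgoing at u
        agg = self.w_self(h)                         # self term W h_v^l
        agg = agg.index_add(0, v, msg_in)            # sum messages from in-neighbours of each node
        agg = agg.index_add(0, u, msg_out)           # sum messages from out-neighbours of each node
        return torch.relu(agg)                       # f = ReLU
```

Stacking L such layers lets each node aggregate information from neighbours up to distance L in the augmented graph.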
{ "text": "Object label generator. We use a two-layer LSTM (Hochreiter and Schmidhuber, 1997) to merge the node representations and generate the sequence of object labels. We apply global attention (Luong et al., 2015) to re-weight the hidden representations from the first layer and merge them into a global hidden vector $h_G$. Then we feed the global hidden vector into a two-layer feed-forward network to get the global encoder output $h_G$. The probability of node label $y_t$ conditioned on the input G and the previous node labels $y_{1:t-1}$ is obtained by applying a softmax layer to the decoder output as $P(y_t|y_{1:t-1}, G) = \mathrm{softmax}(g(h_G, h_C))$, where $g$ is a perceptron. Pre-training. The graph-to-sequence model is trained to maximize the likelihood $\ell = \prod_{t=1}^{|Y|} P(y_t|y_{1:t-1}, G)$. We use extracted visual features as node embeddings and randomly initialise edge embeddings in the encoder. We tune three hyper-parameters on a validation set to minimise the loss, namely the number of hidden units in the mRGCN encoder, the number of hidden units in the LSTM, and the number of GCN layers. Then we extract augmented scene graph embeddings for the target dataset. After the pre-training of the graph embeddings, each node representation should contain not only node-specific information but also the information from neighbours up to a distance of L.", "cite_spans": [ { "start": 48, "end": 82, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF12" }, { "start": 187, "end": 207, "text": "(Luong et al., 2015)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "GRU-RNNs", "sec_num": null }, { "text": "The pre-trained graph embeddings serve as input to the story generation model. Instead of using a full graph encoder as in Wang et al. (2020), we use the global representations of each image and the local representations of each entity extracted from the pre-trained mRGCN scene graph encoder. This allows us to encode both object-specific information and the relations between each object and the whole image.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention-Based Story Generation", "sec_num": "3.3" }, { "text": "We use a dot product attention mechanism to merge all the entities into one hidden vector for each image as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Attention-Based Story Generation", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "a = \frac{\exp(Kq)}{\sum_{j=1}^{M} \exp(K_j q)} \quad (2) \qquad h = V^{T} a", "eq_num": "(3)" } ], "section": "Attention-Based Story Generation", "sec_num": "3.3" }, { "text": "where we use the global image representation as the query $q \in \mathbb{R}^d$ and the local object representations as the keys $K \in \mathbb{R}^{M \times d}$ and values $V \in \mathbb{R}^{M \times d}$. We follow Wang et al. (2018b) in using a GRU to encode the hidden vectors of all images in a sequential manner and to generate the story. The model is optimised using maximum likelihood estimation with backpropagation.", "cite_spans": [ { "start": 157, "end": 176, "text": "Wang et al. (2018b)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Attention-Based Story Generation", "sec_num": "3.3" }, 
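For reference, Eqs. (2)-(3) amount to standard dot-product attention in which the global scene graph embedding of an image queries its object embeddings. The sketch below is a minimal PyTorch illustration under assumed shapes (M objects, d-dimensional embeddings); it is not the authors' code, and the toy sizes at the end are arbitrary.

```python
import torch

def merge_objects(q: torch.Tensor, K: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
    """Eqs. (2)-(3): a = softmax(Kq), h = V^T a.

    q: (d,)   global scene graph embedding of the image (query)
    K: (M, d) local object embeddings (keys)
    V: (M, d) local object embeddings (values)
    returns h: (d,) attention-weighted summary of the image's objects
    """
    scores = K @ q                    # (M,) one score per object, Kq in Eq. (2)
    a = torch.softmax(scores, dim=0)  # attention weights over the M objects
    return V.T @ a                    # h = V^T a, Eq. (3)

# Illustrative shapes only: 12 objects with 512-dimensional embeddings.
q = torch.randn(512)
K = torch.randn(12, 512)
V = torch.randn(12, 512)
h = merge_objects(q, K, V)            # h.shape == (512,)
```

The resulting per-image vectors h are then fed to the GRU story decoder described above.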
{ "text": "Now we show that using pre-trained scene graph embeddings yields competitive results compared to state-of-the-art approaches on reference-based metrics while using fewer parameters in the image encoder. We also perform an ablation study to show that all proposed components contribute to the performance of the full model and that scene graph embeddings are effective across different attention mechanisms. While the reference-based metrics are useful, they do not always correlate with better story quality as perceived by humans (Wang et al., 2018b) . Hence, we also evaluate our model in terms of the diversity of word and phrase structure and propose metrics to explicitly measure the correctness of object references in section 5. Results show that our scene graph-based model uses more diverse and relevant words and phrases compared to prior work.", "cite_spans": [ { "start": 534, "end": 554, "text": "(Wang et al., 2018b)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment and Evaluation", "sec_num": "4" }, { "text": "We train and evaluate our storytelling model on the VIST dataset (Huang et al., 2016) , containing 50K visual stories of 10K Flickr albums with 210K images. Each story is based on a 5-image sequence. We follow Wang et al. (2018b) and split the data into a 40K training, 10K validation, and 10K test set. We extract scene graphs (including node and edge labels) with the state-of-the-art scene graph generator, KERN, mentioned above.", "cite_spans": [ { "start": 65, "end": 85, "text": "(Huang et al., 2016)", "ref_id": "BIBREF16" }, { "start": 210, "end": 229, "text": "Wang et al. (2018b)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Design", "sec_num": "4.1" }, { "text": "For a neural architecture like the GCN in our scene graph embedding, we need to select one important hyper-parameter, the number of layers in the GCN encoder. We therefore perform a grid search from 1 to the maximal diameter over all augmented scene graphs. The number of GCN layers is also bounded by the memory size of our GPU cards, so we choose a maximum of 6. We train the scene graph embedding on the VIST dataset and select the optimal setting by validation loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Design", "sec_num": "4.1" }, { "text": "We compare our models with previous baselines: Contextual Attention (CA; Wang et al., 2017) uses local features from an object detector and a contextual attention layer to integrate features from different images. Hierarchically Structured Reinforcement Learning (HSRL; Huang et al., 2019) proposes a hierarchical RNN trained to generate stories by reinforcement learning, with two critics including a multi-modal and a language-style discriminator. Adversarial Reward Learning (AREL; Wang et al., 2018b) learns an implicit reward function from human demonstrations and then optimizes policy search with the learned reward. Hierarchical Photo-Scene Encoder (HPSR; Wang et al., 2019a) applies hierarchically structured reinforcement learning to generate topically coherent multi-sentence stories.", "cite_spans": [ { "start": 73, "end": 91, "text": "Wang et al., 2017)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Design", "sec_num": "4.1" }, { "text": "Table 1 (excerpt): comparison on reference-based metrics with columns # para, B-1, B-2, B-3, B-4, M, R-L, C; CA (Wang et al., 2017) : 3.36 M parameters, METEOR 31.73; HSRL (Huang et al., 2019) . Table 2 : Ablation study of our full model versus different variants using reference-based metrics including BLEU-4 (B-4), METEOR (M), and ROUGE-L (R-L).", "cite_spans": [ { "start": 34, "end": 53, "text": "(Wang et al., 2017)", "ref_id": "BIBREF36" }, { "start": 78, "end": 98, "text": "(Huang et al., 2019)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 99, "end": 106, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "
Knowledgeable Storyteller (KS; Yang et al., 2019a) extract objects with an object detector, infer relations between objects with an external knowledge base, and train a knowledge-augmented story generation model. SGVST extract scene graphs from the image sequence and use GCN with temporal convolutionals to merge features across images. The ablation study we performed over our full model is intended to demonstrate whether the scene graph embedding and the attention mechanism contribute to the final results. We compare the full model with the following simplified models: VGG global is an seq2seq model using VGG16 (Simonyan and Zisserman, 2015) global features. ResNet global is a seq2seq model using ResNet-152 global features. SGEmb global is a seq2seq model which uses only global features from the scene graph embedding. VGG, attn is an attention-based model which uses regional features directly from the object detector instead of the scene graph embedding. SGEmb, attn is our full model with scene graph embedding and attention mechanism.", "cite_spans": [ { "start": 703, "end": 733, "text": "(Simonyan and Zisserman, 2015)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": null }, { "text": "We first evaluate our model and ablations using automatic reference-based metrics on the test set to quantify the similarity between the generated stories w.r.t. human-written ones. We use metrics including unigram (B-1), bigram (B-2), trigram (B-3), and 4-gram (B-4) BLEU scores (Papineni et al., 2002) , METEOR (M; Banerjee and Lavie, 2005) , ROUGE-L (R; Lin, 2004) , and CIDEr (C; Vedantam et al., 2015), based on Wang et al. (2018b)'s evaluation code. Comparison with baselines. We compare our model with baselines on reference-based metrics in table 1. Our model outperforms all previous methods which do not utilize scene graphs (except SGVST) on BLEU-4 and METEOR. Compared to the recent work using scene graphs, SGVST, we obtain a better score on BLEU-4 and competitive results on BLEU-3, METEOR, and ROUGE-L, although we perform with lower scores on BLEU-1, BLEU-2, and CIDEr. This indicates that the relations between objects in scene graph embeddings empower our model to generate long phrases that are more similar to human text. However, the similarity of shorter grammatical units is sacrificed. Ablation study. We also report the results of our ablated models to show the importance of the scene graph embedding and the attention mechanism in table 1. Removing the scene graph embedding from our final model and using VGG features in-Human: It's great being the bookstore cat, I feel so literate! Oh yeah? It's way better being the liquor store cat! I feel sooo meow lol fft Who do you think you are? I'm Catman, who the hell are you? Would you two shut up and help me knit a sweater? AREL: i went to the store last week . the cat was so excited to see him . i bought a lot of books . i bought a lot of stuff . the cat was very happy to see the cat .", "cite_spans": [ { "start": 280, "end": 303, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF28" }, { "start": 317, "end": 342, "text": "Banerjee and Lavie, 2005)", "ref_id": "BIBREF1" }, { "start": 357, "end": 367, "text": "Lin, 2004)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Reference-Based Evaluation", "sec_num": "4.2" }, { "text": "Ours: i went to the store to buy some books . the cat was very excited to see the dog . i had a lot of food . i had a great time at the restaurant . 
the cat was on the bed .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference-Based Evaluation", "sec_num": "4.2" }, { "text": "Human: There was a good variety of costumes at this Halloween party. There was a pirate. There was a balloon man. There was even a Shrinner. The diversity of the outfits added color to this Halloween party. It also reminded us how different each of us are as individuals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference-Based Evaluation", "sec_num": "4.2" }, { "text": "AREL: the party was a lot of fun . we had a lot of fun at the party . we all had a lot of fun . we had a lot of fun playing games . at the end of the day , they all had a great time .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference-Based Evaluation", "sec_num": "4.2" }, { "text": "Ours: it was time for the halloween party . there were a lot of people there . the man was dressed up as a man . the men are having a great time at the party . the whole family was there to celebrate the occasion .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference-Based Evaluation", "sec_num": "4.2" }, { "text": "Human: My family was taking a trip in another country. We decided to try the local river cruise to see the sites. It dropped us off deep in the jungle Me and dad were able to get alot of good photos. We ended up back in town at the end of the day.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference-Based Evaluation", "sec_num": "4.2" }, { "text": "AREL: a group of friends went on a trip to the lake . we had a lot of fun . they were able to take a swim in the water . we had a great time . the river was very beautiful and beautiful .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference-Based Evaluation", "sec_num": "4.2" }, { "text": "Ours: the family went on a trip to the lake . we went to the park to take a boat ride . the kids enjoyed the water and the water . we had a great time at the beach . it was a beautiful day .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference-Based Evaluation", "sec_num": "4.2" }, { "text": "Human: James would be retiring this year and everyone decided to get him a cake at work. The cake was the most delicious part of the lunch! Everyone was really excited to wish him luck and get in line for some cake. It was great because everyone got to talk and remember the great times with their co-worker. James decides to wave goodbye for the final picture. Everyone will miss him.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference-Based Evaluation", "sec_num": "4.2" }, { "text": "AREL: today , we had a cake . he was very excited . everyone was having a great time . all of my friends were there to celebrate . he was very happy to be there .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference-Based Evaluation", "sec_num": "4.2" }, { "text": "Ours: it was a birthday cake for the party . i had a great time at the party . my friends and family were there to celebrate . i had a great time at the party .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference-Based Evaluation", "sec_num": "4.2" }, { "text": "[male] was very happy to be there . stead (VGG, attn) decreases BLEU-4 by 1.3 (-8.8%). 
Using global features from the object detector (VGG global, ResNet global) or global scene graph embeddings (SGEmb global) without the attention mechanism harms performance across all metrics significantly. We further compare models using regional features from scene graph embeddings and from the VGG object detector across different attention mechanisms, like additive attention (add attn; Bahdanau et al., 2014) , locationbased attention (location attn; Luong et al., 2015) , simple attention (simple attn, computing coefficient with keys only) and dot product attention (attn, i.e., the one we use in the full model). Results show that scene graph embeddings boost performance of models across all types of attention mechanisms on all three metrics.", "cite_spans": [ { "start": 479, "end": 501, "text": "Bahdanau et al., 2014)", "ref_id": "BIBREF0" }, { "start": 544, "end": 563, "text": "Luong et al., 2015)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Reference-Based Evaluation", "sec_num": "4.2" }, { "text": "We perform a qualitative comparison to identify what is different in generated stories when we introduce scene graph embeddings and the attention mechanism, as in Figure 1 . AREL generates everyone, a very generic expression referring to all man objects in the image. After introducing scene graph embeddings, our model generates a more specific term chef which can be inferred from the sub-graph (man, near, food) of the second image. More examples can be found in Figure 3 .", "cite_spans": [], "ref_spans": [ { "start": 163, "end": 171, "text": "Figure 1", "ref_id": null }, { "start": 466, "end": 474, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Qualitative Results", "sec_num": "4.3" }, { "text": "To get an in-depth understanding of the diversity of different types of words or phrases in generated stories, we perform the first fine-grained analysis of the distributions of words by different Part-of-Speech (POS) tags and phrases by different constituent tags. We first process the generated stories with a state-of-the-art POS tagger and constituency parser (Joshi et al., 2018) . Then we plot the frequency vs. rank distributions following Zipf's Law for each POS tag and each constituent tag. We follow Holtzman et al. (2019) to compute the Zipf's coefficient to check how similar the distributions of generated stories are to human-written stories. Using this metric, we compare the diversity of output stories from our model to the baselines and to the best-available prior work, AREL 2 . Table 4 : Zipf's coefficient of the phrase distribution on test set compared to baselines. The score of generated stories should be as close to the human scores as possible, so the smaller numbers are better.", "cite_spans": [ { "start": 364, "end": 384, "text": "(Joshi et al., 2018)", "ref_id": "BIBREF19" }, { "start": 511, "end": 533, "text": "Holtzman et al. (2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 799, "end": 806, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Evaluating Diversity and Relevance", "sec_num": "5" }, { "text": "In table 3, our model obtains the lowest Zipf's coefficient, closest to the human score, which shows that our model generates more diverse words than the baselines. By POS tag, our model generates the most diverse nouns. The ResNet global baseline generate more diverse verbs, adverbs and pronouns by using a stronger image feature extraction backbone. 
Generating diverse adjectives requires accurate visual features, and the performance of our model is bounded by the VGG object detector. Producing pronouns requires cross-image coreference resolution for objects; handling this implicitly leads to sub-optimal pronoun diversity for our model. However, our proposed architecture is independent of the backbone network and can be upgraded to the stronger ResNet backbone in future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Diversity", "sec_num": "5.1" }, { "text": "From Table 4 , we see that the phrase diversity scores are similar to word diversity, with our model achieving the lowest Zipf's coefficient overall and across all tags except on adjective phrases. This indicates that our stories are also more diverse on the phrase level than the baselines. Surprisingly, the VGG global model obtains the lowest score on adjective phrases. We thus counted the unique adjective phrases generated by VGG global (31) and by our model (65). We can conclude that the VGG global model generates fewer unique adjective phrases but with a distribution closer to that of humans. We show in previous sections that our model generates more diverse nouns and noun phrases. However, do these diverse nouns actually appear in the corresponding images? To explicitly measure this, we utilize the ground truth image captions also available in VIST. Since human-written captions refer to salient objects appearing in the image, we posit that a relevant story should also refer to these objects as much as possible. Based on this, we can quantify the relevance of the generated stories. First, we automatically match the noun phrases in the generated stories with the noun phrases in the corresponding human image captions. The matching is based on the head noun in the noun phrase. We experimented with Lin's similarity on WordNet synsets (Lin, 1998) and cosine similarity using GloVe and BERT embeddings (Pennington et al., 2014; Devlin et al., 2019) . The threshold value for counting a match was optimised to minimise false positives on a set of human-annotated matches (number=194) from 10 stories in the validation set. We obtained the highest precision using GloVe embeddings, with a threshold of 0.85 (precision=0.82, recall=0.11). This metric is then computed for our model as well as the baselines. The results in table 5 show that the stories generated by our model have more matches with entities in human-generated captions. Our scene graph embedding model also outperforms the model using the stronger ResNet features, showing that explicitly representing objects and relations in the form of scene graphs helps the model correctly refer to salient objects.", "cite_spans": [ { "start": 1341, "end": 1352, "text": "(Lin, 1998)", "ref_id": "BIBREF23" }, { "start": 1407, "end": 1432, "text": "(Pennington et al., 2014;", "ref_id": "BIBREF29" }, { "start": 1433, "end": 1453, "text": "Devlin et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Phrase Diversity", "sec_num": "5.2" }, { "text": "We show that introducing scene graph embeddings into visual storytelling with a pipeline method can obtain competitive results while reducing the number of parameters in the storytelling model. 
We also perform the first fine-grained analysis on the distributions of words and phrases in generated stories which shows that scene graph embeddings increase word and phrase diversities and bring the distributions closer to that of humans. We finally show that the diverse noun phrases we generate are more relevant to the objects in the images.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Future work One benefit of this work is that it provides a baseline for the pre-training of images in visual storytelling, allowing for any images to be used to augment the model without requiring story text; in future work, we will show that this mitigates the limitation of data size. We are currently working on how to merge regional representations for each graph effectively in pre-training and storytelling. GCN is a powerful method for pre-training, but the number of layers is strongly related to the diameter of the graph which is highly variable. A solution is to use Graph Transformer (Cai and Lam, 2020) which learns global attentions across the whole graph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Moreover, we would like to explore how to extract features from images more accurately for storytelling. The edges of scene graphs in the Visual Genome dataset only contain spatio-temporal relations and limited numbers of general actions like 'holding' as in Fig. 1 . We need to extract more common-sense directed events like 'giving' from a sub-graph of the scene graph. This requires implicit graph induction in the current model; we will test an explicit component.", "cite_spans": [], "ref_spans": [ { "start": 259, "end": 265, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "Despite our best efforts, we could not get access to the code or stories generated by the SGVST model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgement This research was funded in part by the German Research Foundation (DFG) as part of SFB 1102 \"Information Density and Linguistic Encoding\" and a Swedish Research Council (VR) grant (2014-39) for the Centre for Linguistic Theory and Studies in Probability (CLASP). Xudong Hong is supported by International Max Planck Research School for Computer Science (IMPRS-CS) of Max-Planck Institute for Informatics (MPI-INF). We sincerely thank the anonymous reviewers for their insightful comments that helped us to improve this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "Dzmitry", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.0473" ] }, "num": null, "urls": [], "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. 
arXiv preprint arXiv:1409.0473.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "authors": [ { "first": "Satanjeev", "middle": [], "last": "Banerjee", "suffix": "" }, { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization", "volume": "", "issue": "", "pages": "65--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with im- proved correlation with human judgments. In Pro- ceedings of the ACL Workshop on Intrinsic and Ex- trinsic Evaluation Measures for Machine Transla- tion and/or Summarization, pages 65-72, Ann Ar- bor, Michigan. Association for Computational Lin- guistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Thirty-Second Innovative Applications of Artificial Intelligence Conference", "authors": [], "year": 2020, "venue": "The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence", "volume": "2020", "issue": "", "pages": "7464--7471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Deng Cai and Wai Lam. 2020. Graph transformer for graph-to-sequence learning. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7464-7471. AAAI Press.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Knowledge-embedded routing network for scene graph generation", "authors": [ { "first": "Tianshui", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Weihao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Riquan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Liang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "6163--6171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianshui Chen, Weihao Yu, Riquan Chen, and Liang Lin. 2019. Knowledge-embedded routing network for scene graph generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6163-6171.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Learning to order graph elements with application to multilingual surface realization", "authors": [ { "first": "Wenchao", "middle": [], "last": "Du", "suffix": "" }, { "first": "Alan", "middle": [ "W" ], "last": "Black", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019)", "volume": "", "issue": "", "pages": "18--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenchao Du and Alan W Black. 2019. Learning to order graph elements with application to multilin- gual surface realization. In Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019), pages 18-24.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Image captioning with scene-graph based semantic concepts", "authors": [ { "first": "Lizhao", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenmin", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 10th International Conference on Machine Learning and Computing", "volume": "", "issue": "", "pages": "225--229", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lizhao Gao, Bo Wang, and Wenmin Wang. 2018. Im- age captioning with scene-graph based semantic con- cepts. In Proceedings of the 2018 10th International Conference on Machine Learning and Computing, pages 225-229.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "What the eyes say about speaking", "authors": [ { "first": "M", "middle": [], "last": "Zenzi", "suffix": "" }, { "first": "Kathryn", "middle": [], "last": "Griffin", "suffix": "" }, { "first": "", "middle": [], "last": "Bock", "suffix": "" } ], "year": 2000, "venue": "Psychological science", "volume": "11", "issue": "4", "pages": "274--279", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zenzi M Griffin and Kathryn Bock. 2000. What the eyes say about speaking. Psychological science, 11(4):274-279.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unpaired image captioning via scene graph alignments", "authors": [ { "first": "Jiuxiang", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Shafiq", "middle": [], "last": "Joty", "suffix": "" }, { "first": "Jianfei", "middle": [], "last": "Cai", "suffix": "" }, { "first": "Handong", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Gang", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "10323--10332", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiuxiang Gu, Shafiq Joty, Jianfei Cai, Handong Zhao, Xu Yang, and Gang Wang. 2019. Unpaired image captioning via scene graph alignments. 
In Proceed- ings of the IEEE International Conference on Com- puter Vision, pages 10323-10332.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Mask r-cnn", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Georgia", "middle": [], "last": "Gkioxari", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Doll\u00e1r", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Girshick", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE international conference on computer vision", "volume": "", "issue": "", "pages": "2961--2969", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Georgia Gkioxari, Piotr Doll\u00e1r, and Ross Girshick. 2017. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961-2969.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Deep residual learning for image recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "770--778", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Human attention in image captioning: Dataset and analysis", "authors": [ { "first": "Sen", "middle": [], "last": "He", "suffix": "" }, { "first": "R", "middle": [], "last": "Hamed", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Tavakoli", "suffix": "" }, { "first": "Nicolas", "middle": [], "last": "Borji", "suffix": "" }, { "first": "", "middle": [], "last": "Pugeault", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "8529--8538", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sen He, Hamed R Tavakoli, Ali Borji, and Nicolas Pugeault. 2019. Human attention in image caption- ing: Dataset and analysis. In Proceedings of the IEEE International Conference on Computer Vision, pages 8529-8538.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. 
Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "The curious case of neural text degeneration", "authors": [ { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Buys", "suffix": "" }, { "first": "Li", "middle": [], "last": "Du", "suffix": "" }, { "first": "Maxwell", "middle": [], "last": "Forbes", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text de- generation. In International Conference on Learn- ing Representations.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improving language generation from feature-rich tree-structured data with relational graph convolutional encoders", "authors": [ { "first": "Xudong", "middle": [], "last": "Hong", "suffix": "" }, { "first": "Ernie", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Multilingual Surface Realisation (MSR 2019)", "volume": "", "issue": "", "pages": "75--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xudong Hong, Ernie Chang, and Vera Demberg. 2019. Improving language generation from feature-rich tree-structured data with relational graph convolu- tional encoders. In Proceedings of the 2nd Work- shop on Multilingual Surface Realisation (MSR 2019), pages 75-80.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Hierarchically structured reinforcement learning for topically coherent visual story generation", "authors": [ { "first": "Qiuyuan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Asli", "middle": [], "last": "Celikyilmaz", "suffix": "" }, { "first": "Dapeng", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "8465--8472", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qiuyuan Huang, Zhe Gan, Asli Celikyilmaz, Dapeng Wu, Jianfeng Wang, and Xiaodong He. 2019. Hier- archically structured reinforcement learning for top- ically coherent visual story generation. 
In Proceed- ings of the AAAI Conference on Artificial Intelli- gence, volume 33, pages 8465-8472.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Visual storytelling", "authors": [ { "first": "Ting-Hao", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Ferraro", "suffix": "" }, { "first": "Nasrin", "middle": [], "last": "Mostafazadeh", "suffix": "" }, { "first": "Ishan", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Aishwarya", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Girshick", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Pushmeet", "middle": [], "last": "Kohli", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1233--1239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ting-Hao Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Push- meet Kohli, Dhruv Batra, et al. 2016. Visual storytelling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1233-1239.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Image generation from scene graphs", "authors": [ { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Agrim", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "1219--1228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Johnson, Agrim Gupta, and Li Fei-Fei. 2018. Im- age generation from scene graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1219-1228.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Image retrieval using scene graphs", "authors": [ { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Ranjay", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Stark", "suffix": "" }, { "first": "Li-Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "David", "middle": [], "last": "Shamma", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bernstein", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "3668--3678", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Johnson, Ranjay Krishna, Michael Stark, Li-Jia Li, David Shamma, Michael Bernstein, and Li Fei- Fei. 2015. Image retrieval using scene graphs. 
In Proceedings of the IEEE conference on computer vi- sion and pattern recognition, pages 3668-3678.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Extending a parser to distant domains using a few dozen partially annotated examples", "authors": [ { "first": "Vidur", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hopkins", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1190--1199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vidur Joshi, Matthew Peters, and Mark Hopkins. 2018. Extending a parser to distant domains using a few dozen partially annotated examples. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1190-1199.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Semisupervised classification with graph convolutional networks", "authors": [ { "first": "N", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kipf", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2017, "venue": "5th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas N. Kipf and Max Welling. 2017. Semi- supervised classification with graph convolutional networks. In 5th International Conference on Learn- ing Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations", "authors": [ { "first": "Ranjay", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Yuke", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Groth", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Hata", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Kravitz", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yannis", "middle": [], "last": "Kalantidis", "suffix": "" }, { "first": "Li", "middle": [ "Jia" ], "last": "Li", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Shamma", "suffix": "" }, { "first": "Michael", "middle": [ "S" ], "last": "Bernstein", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" } ], "year": 2017, "venue": "International Journal of Computer Vision", "volume": "123", "issue": "1", "pages": "32--73", "other_ids": { "DOI": [ "10.1007/s11263-016-0981-7" ] }, "num": null, "urls": [], "raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. 2017. Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations. 
Interna- tional Journal of Computer Vision, 123(1):32-73.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "An information-theoretic definition of similarity", "authors": [ { "first": "Dekang", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the Fifteenth International Conference on Machine Learning", "volume": "", "issue": "", "pages": "296--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dekang Lin. 1998. An information-theoretic definition of similarity. In Proceedings of the Fifteenth Inter- national Conference on Machine Learning, pages 296-304.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Let your photos talk: Generating narrative paragraph for photo stream via bidirectional attention recurrent neural networks", "authors": [ { "first": "Yu", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianlong", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Mei", "suffix": "" }, { "first": "Chang", "middle": [ "Wen" ], "last": "Chen", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "1445--1452", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Liu, Jianlong Fu, Tao Mei, and Chang Wen Chen. 2017. Let your photos talk: Generating narrative paragraph for photo stream via bidirectional atten- tion recurrent neural networks. In Proceedings of the Thirty-First AAAI Conference on Artificial Intel- ligence, pages 1445-1452.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Effective approaches to attention-based neural machine translation", "authors": [ { "first": "Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": { "DOI": [ "10.18653/v1/D15-1166" ] }, "num": null, "urls": [], "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421, Lis- bon, Portugal. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Deep graph convolutional encoders for structured data to text generation", "authors": [ { "first": "Diego", "middle": [], "last": "Marcheggiani", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th International Conference on Natural Language Generation", "volume": "", "issue": "", "pages": "1--9", "other_ids": { "DOI": [ "10.18653/v1/W18-6501" ] }, "num": null, "urls": [], "raw_text": "Diego Marcheggiani and Laura Perez-Beltrachini. 2018. Deep graph convolutional encoders for struc- tured data to text generation. In Proceedings of the 11th International Conference on Natural Lan- guage Generation, pages 1-9, Tilburg University, The Netherlands. Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The steep road to happily ever after: an analysis of current visual storytelling models", "authors": [ { "first": "Yatri", "middle": [], "last": "Modi", "suffix": "" }, { "first": "Natalie", "middle": [], "last": "Parde", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Second Workshop on Shortcomings in Vision and Language", "volume": "", "issue": "", "pages": "47--57", "other_ids": { "DOI": [ "10.18653/v1/W19-1805" ] }, "num": null, "urls": [], "raw_text": "Yatri Modi and Natalie Parde. 2019. The steep road to happily ever after: an analysis of current visual storytelling models. In Proceedings of the Second Workshop on Shortcomings in Vision and Language, pages 47-57, Minneapolis, Minnesota. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. 
In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "authors": [ { "first": "Kaiming", "middle": [], "last": "Shaoqing Ren", "suffix": "" }, { "first": "Ross", "middle": [], "last": "He", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Girshick", "suffix": "" }, { "first": "", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "91--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time ob- ject detection with region proposal networks. In Advances in neural information processing systems, pages 91-99.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Very deep convolutional networks for large-scale image recognition", "authors": [ { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karen Simonyan and Andrew Zisserman. 2015. Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Structural information preserving for graph-to-text generation", "authors": [ { "first": "Linfeng", "middle": [], "last": "Song", "suffix": "" }, { "first": "Ante", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jinsong", "middle": [], "last": "Su", "suffix": "" }, { "first": "Yue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yubin", "middle": [], "last": "Ge", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7987--7998", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linfeng Song, Ante Wang, Jinsong Su, Yue Zhang, Kun Xu, Yubin Ge, and Dong Yu. 2020. Struc- tural information preserving for graph-to-text gen- eration. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7987-7998.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Cider: Consensus-based image description evaluation", "authors": [ { "first": "Ramakrishna", "middle": [], "last": "Vedantam", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "4566--4575", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image de- scription evaluation. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 4566-4575.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Hierarchical Photo-Scene Encoder for Album Storytelling", "authors": [ { "first": "Bairui", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lin", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Wenhao", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Feng", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", "volume": "33", "issue": "", "pages": "8909--8916", "other_ids": { "DOI": [ "10.1609/aaai.v33i01.33018909" ] }, "num": null, "urls": [], "raw_text": "Bairui Wang, Lin Ma, Wei Zhang, Wenhao Jiang, and Feng Zhang. 2019a. Hierarchical Photo-Scene En- coder for Album Storytelling. Proceedings of the AAAI Conference on Artificial Intelligence, 33:8909- 8916.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "On the role of scene graphs in image captioning", "authors": [ { "first": "Dalin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Beck", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN)", "volume": "", "issue": "", "pages": "29--34", "other_ids": { "DOI": [ "10.18653/v1/D19-6405" ] }, "num": null, "urls": [], "raw_text": "Dalin Wang, Daniel Beck, and Trevor Cohn. 2019b. On the role of scene graphs in image captioning. In Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN), pages 29-34, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Learning deep contextual attention network for narrative photo stream captioning", "authors": [ { "first": "Hanqi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Siliang", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Yin", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Mei", "suffix": "" }, { "first": "Yueting", "middle": [], "last": "Zhuang", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the on Thematic Workshops of ACM Multimedia", "volume": "", "issue": "", "pages": "271--279", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hanqi Wang, Siliang Tang, Yin Zhang, Tao Mei, Yuet- ing Zhuang, and Fei Wu. 2017. Learning deep con- textual attention network for narrative photo stream captioning. 
In Proceedings of the on Thematic Work- shops of ACM Multimedia 2017, pages 271-279.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Show, Reward and Tell: Automatic Generation of Narrative Paragraph from Photo Stream by Adversarial Training", "authors": [ { "first": "Jing", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jianlong", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Jinhui", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Zechao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Mei", "suffix": "" } ], "year": 2018, "venue": "The AAAI Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "7396--7403", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Wang, Jianlong Fu, Jinhui Tang, Zechao Li, and Tao Mei. 2018a. Show, Reward and Tell: Auto- matic Generation of Narrative Paragraph from Photo Stream by Adversarial Training. The AAAI Confer- ence on Artificial Intelligence (AAAI), 2018., pages 7396-7403.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Storytelling from an image stream using scene graphs", "authors": [ { "first": "Ruize", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhongyu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Piji", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2020, "venue": "The Thirty-Fourth AAAI Conference on Artificial Intelligence", "volume": "2020", "issue": "", "pages": "9185--9192", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruize Wang, Zhongyu Wei, Piji Li, Qi Zhang, and Xu- anjing Huang. 2020. Storytelling from an image stream using scene graphs. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9185-9192. AAAI Press.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "No metrics are perfect: Adversarial reward learning for visual storytelling", "authors": [ { "first": "Xin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yuan-Fang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "899--909", "other_ids": { "DOI": [ "10.18653/v1/P18-1083" ] }, "num": null, "urls": [], "raw_text": "Xin Wang, Wenhu Chen, Yuan-Fang Wang, and William Yang Wang. 2018b. No metrics are perfect: Adversarial reward learning for visual storytelling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 899-909, Melbourne, Aus- tralia. 
Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Knowledgeable storyteller: A commonsense-driven generative model for visual storytelling", "authors": [ { "first": "Pengcheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Fuli", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhiyi", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Xu", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2019, "venue": "IJCAI International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "5356--5362", "other_ids": { "DOI": [ "10.24963/ijcai.2019/744" ] }, "num": null, "urls": [], "raw_text": "Pengcheng Yang, Fuli Luo, Peng Chen, Lei Li, Zhiyi Yin, Xiaodong He, and Xu Sun. 2019a. Knowl- edgeable storyteller: A commonsense-driven gen- erative model for visual storytelling. IJCAI Inter- national Joint Conference on Artificial Intelligence, 2019-Augus:5356-5362.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Auto-encoding scene graphs for image captioning", "authors": [ { "first": "Xu", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Kaihua", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Hanwang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianfei", "middle": [], "last": "Cai", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "10685--10694", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xu Yang, Kaihua Tang, Hanwang Zhang, and Jianfei Cai. 2019b. Auto-encoding scene graphs for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10685-10694.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Hierarchically-attentive RNN for album summarization and storytelling", "authors": [ { "first": "Licheng", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Tamara", "middle": [], "last": "Berg", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "966--971", "other_ids": { "DOI": [ "10.18653/v1/D17-1101" ] }, "num": null, "urls": [], "raw_text": "Licheng Yu, Mohit Bansal, and Tamara Berg. 2017. Hierarchically-attentive RNN for album summariza- tion and storytelling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 966-971, Copenhagen, Denmark. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Neural Motifs: Scene Graph Parsing with Global Context", "authors": [ { "first": "Rowan", "middle": [], "last": "Zellers", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Yatskar", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Thomson", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "5831--5840", "other_ids": { "DOI": [ "10.1109/CVPR.2018.00611" ] }, "num": null, "urls": [], "raw_text": "Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi. 2018. Neural Motifs: Scene Graph Pars- ing with Global Context. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 5831-5840.", "links": null } }, "ref_entries": { "FIGREF1": { "text": "Our pipeline for visual storytelling.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Qualitative results of our model versus AREL and human-written stories.", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "num": null, "html": null, "content": "
1.05 M - - - 12.3 35.2 30.8 10.7
AREL (Wang et al., 2018b) 1.05 M 63.7 39 23.1 14 35 29.6 9.5
HPSR (Wang et al., 2019a) 1.05 M 61.9 37.8 21.5 12.2 34.4 31.2 8
KS (Yang et al., 2019a) 1.05 M 66.4 39.2 23.1 12.8 35.2 29.9 12.1
SGVST (Wang et al., 2020) 3.41 M 65.1 40.1 23.8 14.7 35.8 29.9 9.8
Ours: SGEmb, attn 2.10 M 62.2 38.7 23.5 14.8 35.6 30.2 8.6
Table 1: Results of proposed model on test set compared to previous work using reference-based metrics including BLEU (B), METEOR (M), ROUGE-L (R-L), and CIDEr-D (C).
Model variations B-4 M R-L
Visual features
VGG global 13 34.4 29.7
ResNet global 13.6 34.9 29.5
SGEmb global 12 33.8 28.8
VGG, attn 13.5 35.5 30.1
Attention types
VGG, add attn 12.6 34.2 29.5
VGG, location attn 13.8 35.1 29.8
VGG, simple attn 13.9 35.1 29.7
SGEmb, add attn 13.6 35.5 30.1
SGEmb, location attn 14.1 35.5 30.1
SGEmb, simple attn 14 35.5 30.2
Our full model
SGEmb, attn 14.8 35.6 30.2
", "type_str": "table", "text": "# para is the number of parameters in the image encoder to obtain one vector representation for each image. Parameters in pre-trained components are not counted." }, "TABREF3": { "num": null, "html": null, "content": "
Baselines NP VP PP Adj. P Adv. P all
VGG global 1.191 1.208 1.148 1.023 3.043 1.067
ResNet global 1.128 1.054 1.087 1.215 2.424 1.013
AREL** 1.117 1.043 1.035 1.11 2.953 1
SGEmb global 1.164 1.137 1.119 1.309 3.456 1.046
VGG, attn 1.23 1.245 1.183 1.227 3.273 1.093
Ours: SGEmb, attn 1.101 1.037 1.007 1.057 1.959 0.987
Human 0.794 0.563 0.583 0.703 0.983 0.723
", "type_str": "table", "text": "Zipf's coefficient of the word distribution on test set compared to baselines. The score of generated stories should be as close to the human scores as possible, so the smaller numbers are better." }, "TABREF5": { "num": null, "html": null, "content": "", "type_str": "table", "text": "Relevance metric evaluation on the test set." } } } }