{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T12:13:52.202386Z" }, "title": "Leveraging Visual Question Answering to Improve Text-to-Image Synthesis", "authors": [ { "first": "Stanislav", "middle": [], "last": "Frolov", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technical University of Kaiserslautern", "location": { "country": "Germany" } }, "email": "" }, { "first": "Shailza", "middle": [], "last": "Jolly", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technical University of Kaiserslautern", "location": { "country": "Germany" } }, "email": "" }, { "first": "J\u00f6rn", "middle": [], "last": "Hees", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Andreas", "middle": [], "last": "Dengel", "suffix": "", "affiliation": { "laboratory": "", "institution": "Technical University of Kaiserslautern", "location": { "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Generating images from textual descriptions has recently attracted a lot of interest. While current models can generate photo-realistic images of individual objects such as birds and human faces, synthesising images with multiple objects is still very difficult. In this paper, we propose an effective way to combine Text-to-Image (T2I) synthesis with Visual Question Answering (VQA) to improve the image quality and image-text alignment of generated images by leveraging the VQA 2.0 dataset. We create additional training samples by concatenating question and answer (QA) pairs and employ a standard VQA model to provide the T2I model with an auxiliary learning signal. We encourage images generated from QA pairs to look realistic and additionally minimize an external VQA loss. Our method lowers the FID from 27.84 to 25.38 and increases the R-prec. from 83.82% to 84.79% when compared to the baseline, which indicates that T2I synthesis can successfully be improved using a standard VQA model.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Generating images from textual descriptions has recently attracted a lot of interest. While current models can generate photo-realistic images of individual objects such as birds and human faces, synthesising images with multiple objects is still very difficult. In this paper, we propose an effective way to combine Text-to-Image (T2I) synthesis with Visual Question Answering (VQA) to improve the image quality and image-text alignment of generated images by leveraging the VQA 2.0 dataset. We create additional training samples by concatenating question and answer (QA) pairs and employ a standard VQA model to provide the T2I model with an auxiliary learning signal. We encourage images generated from QA pairs to look realistic and additionally minimize an external VQA loss. Our method lowers the FID from 27.84 to 25.38 and increases the R-prec. from 83.82% to 84.79% when compared to the baseline, which indicates that T2I synthesis can successfully be improved using a standard VQA model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Text-to-image synthesis (T2I), the task to generate realistic images given textual descriptions, received a lot of attention in recent years (Reed et al., 2016; Zhu et al., 2019; Li et al., 2019) . T2I synthesis can be seen as the inverse of image captioning. 
Given a caption, the model is trained to produce realistic images that correctly reflect the meaning of the input caption. Many existing T2I methods use Generative Adversarial Networks (GANs) (Goodfellow et al., 2014). GANs consist of two artificial neural networks that play a game in which a discriminator is trained to distinguish between real and generated images, while a generator is trained to produce images that fool the discriminator. They have successfully been applied to many image synthesis applications such as image-to-image translation (Isola et al., 2016; Zhu et al., 2017), image super-resolution (Ledig et al., 2016), and image in-painting (Yeh et al., 2016).", "cite_spans": [ { "start": 141, "end": 160, "text": "(Reed et al., 2016;", "ref_id": "BIBREF18" }, { "start": 161, "end": 178, "text": "Zhu et al., 2019;", "ref_id": "BIBREF28" }, { "start": 179, "end": 195, "text": "Li et al., 2019)", "ref_id": "BIBREF12" }, { "start": 453, "end": 478, "text": "(Goodfellow et al., 2014)", "ref_id": "BIBREF2" }, { "start": 813, "end": 833, "text": "(Isola et al., 2016;", "ref_id": "BIBREF8" }, { "start": 834, "end": 851, "text": "Zhu et al., 2017)", "ref_id": "BIBREF27" }, { "start": 877, "end": 897, "text": "(Ledig et al., 2016)", "ref_id": "BIBREF11" }, { "start": 922, "end": 940, "text": "(Yeh et al., 2016)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Similarly, Visual Question Answering (VQA) (Antol et al., 2015) emerged as an important task to build systems that better understand the relationship between vision and language by learning to answer", "cite_spans": [ { "start": 43, "end": 62, "text": "(Antol et al., 2015", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "[Figure 2 diagram: the T2I generator produces images from captions t and from concatenated QA pairs t_QA = concat(q, a); the discriminator, the DAMSM model, and the VQA model provide the losses L_ADV, L_DAMSM, and L_VQA.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 2: An overview of our architecture with our VQA extension highlighted in red. Given text inputs t (captions) and t_QA (concatenated QA pairs) we generate images x̂_g and x̃_g. As in AttnGAN, we use the discriminator and the DAMSM model to differentiate between real (omitted in the figure for brevity) and generated images as well as image-text pairs, leading to the adversarial loss L_ADV and the DAMSM loss L_DAMSM for improved image-text alignment. Images generated from QA pairs are passed through a VQA model, resulting in the additional loss L_VQA.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "questions about an image. It can be seen as image-conditioned question answering, which encourages the model to both look at the image and understand the question to predict the correct answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A good T2I model should produce images that look realistic and correctly reflect the semantic meaning of the input description. Considering the complexity of natural language descriptions and the difficulty of producing photo-realistic images, current models struggle to achieve these goals. 
In this paper we propose a simple, yet effective way to combine T2I with VQA to improve both the image quality and the image-text alignment of a T2I model's generated images by leveraging the questions and answers (QA) provided in the VQA 2.0 dataset (Goyal et al., 2017). Both captions and QA pairs can describe the overall image or very specific details. In fact, many QA pairs can be rephrased as captions and vice versa, which further motivates leveraging VQA for T2I. Additionally, the VQA 2.0 dataset contains complementary images with different answers to the same question, which requires the T2I model to pay close attention to the input in order to generate an image that correctly reflects the meaning of the input text. To leverage the VQA 2.0 dataset for T2I, we concatenate QA pairs and use them as additional training samples in our T2I pipeline. Images generated from QA pairs can subsequently be used as inputs to a VQA model, which can provide an additional learning signal to the T2I generator. See Figure 1 for an overview of our approach.", "cite_spans": [ { "start": 531, "end": 551, "text": "(Goyal et al., 2017)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1307, "end": 1315, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Initial T2I approaches (Reed et al., 2016; Dash et al., 2017) adopted the conditional GAN (cGAN) (Mirza and Osindero, 2014) and AC-GAN (Odena et al., 2016) ideas and replaced the conditioning variable with a text embedding, which allows conditioning the generator on a textual description. Analogous to many current approaches, ours is also based on AttnGAN (Xu et al., 2017), which incorporates an attention mechanism over word features that allows the network to synthesize fine-grained details.", "cite_spans": [ { "start": 23, "end": 42, "text": "(Reed et al., 2016;", "ref_id": "BIBREF18" }, { "start": 43, "end": 61, "text": "Dash et al., 2017)", "ref_id": "BIBREF1" }, { "start": 97, "end": 123, "text": "(Mirza and Osindero, 2014)", "ref_id": "BIBREF15" }, { "start": 135, "end": 155, "text": "(Odena et al., 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In terms of using VQA for T2I, to our knowledge the only other approach is VQA-GAN (Niu et al., 2020). However, in contrast to their approach, our architecture is simpler: we do not use layout information, we work on individual QA pairs, and we produce higher-resolution images (256x256 vs. 128x128).", "cite_spans": [ { "start": 83, "end": 101, "text": "(Niu et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We extend the AttnGAN architecture to leverage question and answer (QA) pairs from the VQA 2.0 (Goyal et al., 2017) dataset by appending a VQA model (Kazemi and Elqursh, 2017). In addition to generating images from image descriptions, our model is also trained to produce images given a QA pair and to minimize an external VQA loss. See Figure 2 for an overview. 
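To make the additional training samples concrete, a minimal Python sketch of how a question and its annotated answer are concatenated into a pseudo-caption t_QA and paired with the corresponding image could look as follows (field and function names are hypothetical and do not reflect our exact data-loading code):

def make_qa_caption(question, answer):
    # t_QA = concat(q, a), e.g. 'what color is the bus?' + 'blue' -> 'what color is the bus? blue'
    return question.strip() + ' ' + answer.strip()

def build_training_samples(caption_annotations, vqa_annotations):
    # Regular T2I samples: (image_id, caption t)
    samples = [(ann['image_id'], ann['caption']) for ann in caption_annotations]
    # Additional samples from VQA 2.0: (image_id, concatenated QA pair t_QA)
    for ann in vqa_annotations:
        samples.append((ann['image_id'], make_qa_caption(ann['question'], ann['answer'])))
    return samples

The concatenated QA pairs are then fed to the text encoder and generator in exactly the same way as regular captions.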
After revisiting the individual components, we explain our extension in more detail.", "cite_spans": [ { "start": 95, "end": 115, "text": "(Goyal et al., 2017)", "ref_id": "BIBREF3" }, { "start": 149, "end": 175, "text": "(Kazemi and Elqursh, 2017)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 335, "end": 343, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "AttnGAN consists of a multi-stage refinement pipeline that employs attention-driven generators for fine-grained T2I synthesis. A pre-trained bidirectional LSTM (BiLSTM) (Schuster and Paliwal, 1997) is used to extract global sentence features as well as individual word features. The discriminators approximate the conditional and unconditional distributions simultaneously. Additionally, an image-text matching loss at the word level, the Deep Attentional Multimodal Similarity Model (DAMSM) loss, is employed to guide the image generation process. The attention-driven generators together with the DAMSM loss help the model focus on individual words to synthesize fine-grained details and improve the semantic alignment between the input description and the final image.", "cite_spans": [ { "start": 169, "end": 197, "text": "(Schuster and Paliwal, 1997)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "T2I: AttnGAN", "sec_num": "3.1" }, { "text": "We use a very basic and widely used VQA model, proposed in (Kazemi and Elqursh, 2017) 1 . Given an image I and question q, the model uses image features extracted from a pre-trained ResNet (He et al., 2016), an LSTM-based (Hochreiter and Schmidhuber, 1997) question embedding, and stacked attention (Yang et al., 2016) to produce probabilities over a fixed set of answers. The VQA loss, given in Equation 1, is simply the average of the negative log-likelihoods over all correct answers a_1, a_2, ..., a_K.", "cite_spans": [ { "start": 182, "end": 206, "text": "ResNet (He et al., 2016)", "ref_id": null }, { "start": 217, "end": 251, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF7" }, { "start": 308, "end": 327, "text": "(Yang et al., 2016)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "VQA Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_VQA = (1/K) sum_{k=1}^{K} -log P(a_k | I, q)", "eq_num": "(1)" } ], "section": "VQA Model", "sec_num": "3.2" }, { "text": "3.3 T2I + VQA", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VQA Model", "sec_num": "3.2" }, { "text": "We extend AttnGAN by appending the VQA model (Kazemi and Elqursh, 2017) and create additional training samples by concatenating question and answer (QA) pairs. Given captions t and QA pairs t_QA, we generate fake images x̂_g and x̃_g, respectively. Real images are denoted as x_r. Next, the VQA model takes the image generated from the QA pair and the corresponding question to produce an answer. Similar to (Niu et al., 2020), we use a VQA loss that encourages predicting the correct answer to guide the image generator. At the same time, the discriminators encourage the generated images to be realistic and to match their corresponding text input. The DAMSM loss further helps the generator focus on individual words to generate fine-grained details. 
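To illustrate how this auxiliary signal is computed, the following PyTorch-style sketch evaluates the VQA loss of Equation 1 for a single image x̃_g generated from a QA pair; it assumes a hypothetical vqa_model callable that maps an image and a tokenized question to scores over the fixed answer set, as in the Kazemi and Elqursh baseline:

import torch
import torch.nn.functional as F

def vqa_loss(vqa_model, x_tilde_g, question_tokens, answer_indices):
    # x_tilde_g: image generated from the concatenated QA pair t_QA (stays in the computation graph)
    # answer_indices: LongTensor with the indices of the K annotated correct answers a_1, ..., a_K
    logits = vqa_model(x_tilde_g, question_tokens)        # shape (1, num_answers)
    log_probs = F.log_softmax(logits, dim=-1).squeeze(0)  # log P(a | I, q)
    # Equation 1: average negative log-likelihood over the correct answers
    return -log_probs[answer_indices].mean()

Because x̃_g remains part of the computation graph, minimizing this loss back-propagates through the VQA model into the T2I generator; the loss is only added for images generated from QA pairs.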
The final objective of our generator is defined in Equation 2, where L_ADV is the adversarial (conditional and unconditional) loss and L_DAMSM is the DAMSM loss, both as described in (Xu et al., 2017). The adversarial and DAMSM losses are applied to both captions and QA pairs and their correspondingly generated images. The VQA loss L_VQA is applied only to images generated from QA pairs.", "cite_spans": [ { "start": 45, "end": 71, "text": "(Kazemi and Elqursh, 2017)", "ref_id": "BIBREF9" }, { "start": 393, "end": 411, "text": "(Niu et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "VQA Model", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_G = L_ADV + L_DAMSM(x̂_g, t) + L_DAMSM(x̃_g, t_QA) + L_VQA(x̃_g, q), with L_ADV = -E[log D(x̂_g)] - E[log D(x̃_g)] (unconditional loss) - E[log D(x̂_g, t)] - E[log D(x̃_g, t_QA)] (conditional loss)", "eq_num": "(2)" } ], "section": "VQA Model", "sec_num": "3.2" }, { "text": "The discriminators are trained to classify between real and fake images, as well as between image-caption and image-QA pairs, to simultaneously approximate the conditional and unconditional distributions by minimizing the modified loss defined in Equation 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VQA Model", "sec_num": "3.2" }, { "text": "L_D = -E[log D(x_r)] - E[log(1 - D(x̂_g))] - E[log(1 - D(x̃_g))] (unconditional loss) - E[log D(x_r, t)] - E[log(1 - D(x̂_g, t))] - E[log(1 - D(x̃_g, t_QA))] (conditional loss) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VQA Model", "sec_num": "3.2" }, { "text": "We train three different variants of our extension. The first two are naive extensions in which we do not change the discriminator loss of AttnGAN. Instead, we simply append the VQA model, sample a QA pair during training, and add the external VQA loss to the overall generator loss function. We experiment both with an end-to-end training approach, in which we start from a randomly initialized VQA model, and with a pre-trained VQA model. Next, we train a model in which we change the discriminator and generator loss functions. In other words, given a QA pair, the model is not only trained to minimize the VQA loss for that particular QA pair, but also to produce realistic and matched images as judged by the discriminator and the DAMSM loss. For a fair comparison we re-train and re-evaluate AttnGAN using the codebase provided by the authors 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We use the commonly used COCO (Lin et al., 2014) and VQA 2.0 (Goyal et al., 2017) datasets to train our model. COCO depicts complex scenes with multiple interacting objects and contains around 80k images for training and 40k images for testing. Each image has five captions. VQA 2.0 is a large dataset of question-answer pairs based on the COCO images, with roughly 400k QA pairs for training and 200k for testing, and is hence extensively used by VQA researchers (Tan and Bansal, 2019; Kim et al., 2018; Lu et al., 2019). 
It contains complementary images for the same question, such that the model learns to look closely at the image before answering instead of relying on language biases (Antol et al., 2015).", "cite_spans": [ { "start": 30, "end": 48, "text": "(Lin et al., 2014)", "ref_id": "BIBREF13" }, { "start": 61, "end": 81, "text": "(Goyal et al., 2017)", "ref_id": "BIBREF3" }, { "start": 450, "end": 472, "text": "(Tan and Bansal, 2019;", "ref_id": "BIBREF21" }, { "start": 473, "end": 490, "text": "Kim et al., 2018;", "ref_id": "BIBREF10" }, { "start": 491, "end": 507, "text": "Lu et al., 2019)", "ref_id": "BIBREF14" }, { "start": 676, "end": 696, "text": "(Antol et al., 2015)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Evaluation", "sec_num": "4.1" }, { "text": "We evaluate the quality and diversity of generated images using the Inception Score (IS) (Salimans et al., 2016) and Fr\u00e9chet Inception Distance (FID) (Heusel et al., 2017). To compute the IS 3 and FID 4 we generate 30k images from 30k randomly sampled test captions. R-prec. is used to evaluate the semantic alignment between generated images and input captions. Although R-prec. might be unreliable, as current models seem to achieve higher scores than real images (Hinz et al., 2019), we include it for reference since it is still commonly used. Similar to (Niu et al., 2020), we evaluate the VQA accuracy of our models by generating images from test QA pairs and passing them to the pre-trained VQA model with the corresponding question and answer.", "cite_spans": [ { "start": 89, "end": 112, "text": "(Salimans et al., 2016)", "ref_id": "BIBREF19" }, { "start": 150, "end": 171, "text": "(Heusel et al., 2017)", "ref_id": "BIBREF5" }, { "start": 467, "end": 486, "text": "(Hinz et al., 2019)", "ref_id": "BIBREF6" }, { "start": 561, "end": 579, "text": "(Niu et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets and Evaluation", "sec_num": "4.1" }, { "text": "Table 1. Columns: IS \u2191, FID \u2193, R-prec. \u2191, VQA Acc. \u2191. Real Images: 34.88, 6.09, 68.58, 60.00. AttnGAN: 26 [remaining table entries not recoverable] (Hinz et al., 2019). We re-train and re-evaluate the baseline AttnGAN (second row). The third and fourth rows are our naive extensions, in which we simply append a VQA model for an external VQA loss on images generated from QA pairs. In the last row, we change the discriminator and generator losses to also encourage images generated from QA pairs to look realistic and match the input (similar to the standard AttnGAN losses for images generated from captions). We train each model for 120 epochs, select the checkpoint with the best IS, and report the corresponding FID, R-prec., and VQA accuracy.", "cite_spans": [ { "start": 78, "end": 97, "text": "(Hinz et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "As can be seen in Table 1, merely adding the external VQA loss to the generator impairs the performance, regardless of whether we use a pre-trained VQA model or train it as part of the pipeline in an end-to-end way. We hypothesize that this is because the images produced from QA pairs are not encouraged to look realistic during training, so the generator struggles to minimize the external VQA loss for images generated from QA pairs on the one hand and the standard AttnGAN losses for images generated from captions on the other. 
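The contrast between the naive variants and the full objective of Equation 2 (the last-row variant, discussed next) can be summarized in the following PyTorch-style sketch, where the loss callables stand for the terms defined in Section 3 and all names are hypothetical:

def generator_loss_naive(attngan_generator_loss, vqa_loss, x_hat_g, x_tilde_g, t, q, answers):
    # Naive variants (third and fourth rows): standard AttnGAN generator loss on the
    # caption-generated image only, plus the external VQA loss on the QA-generated image;
    # the discriminator loss is left unchanged.
    return attngan_generator_loss(x_hat_g, t) + vqa_loss(x_tilde_g, q, answers)

def generator_loss_full(l_adv, l_damsm, vqa_loss, x_hat_g, x_tilde_g, t, t_qa, q, answers):
    # Full objective (last row, Equation 2): adversarial and DAMSM terms for both the
    # caption-generated and the QA-generated image, plus the external VQA loss.
    return (l_adv(x_hat_g, t) + l_adv(x_tilde_g, t_qa)
            + l_damsm(x_hat_g, t) + l_damsm(x_tilde_g, t_qa)
            + vqa_loss(x_tilde_g, q, answers))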
Therefore, in our third experiment (last row) we change the overall objective and encourage all generated images to be realistic and matched to the input description, using the conditional and unconditional losses from the discriminator and the DAMSM loss (as in standard AttnGAN). While achieving an almost identical IS, our model greatly improves the FID from 27.84 to 25.38, which indicates better image quality and diversity. Additionally, the images produced by our extension are better aligned to the input descriptions, as indicated by the improvement in R-prec. from 83.82% to 84.79% and in VQA accuracy from 43.00 to 43.75. Since the VQA 2.0 dataset contains complementary image and QA pairs, the slight variation in linguistic inputs might help the model to generate better images. Our results show that it is possible to improve T2I synthesis by simply appending a standard pre-trained VQA model, leveraging the VQA 2.0 dataset as additional supervision, and encouraging the model to also produce realistic images from QA pairs. Although we show that even a simple VQA model can help to improve the final T2I performance, we hypothesize that using a state-of-the-art VQA model might further improve the results.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 25, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Method", "sec_num": null }, { "text": "In this paper we proposed a simple method to leverage VQA data for T2I via a combination of AttnGAN, a well-known T2I model, and a standard VQA model. By concatenating question and answer (QA) pairs from the VQA 2.0 dataset we created additional training samples. Images generated from the QA pairs are passed to the VQA model, which provides an additional learning signal to the generator. Our results show a substantial improvement over the baseline in terms of FID, although our method requires additional supervision. 
Possible future research directions could be to investigate whether our extension also boosts other T2I models, and the influence of more training data and the external VQA loss separately.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "https://github.com/Cyanogenoid/pytorch-vqa", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/taoxugit/AttnGAN 3 https://github.com/sbarratt/inception-score-pytorch 4 https://github.com/mseitzer/pytorch-fid", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the BMBF project DeFuseNN (Grant 01IW17002) and the TU Kaiserslautern PhD program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Vqa: Visual question answering", "authors": [ { "first": "Stanislaw", "middle": [], "last": "Antol", "suffix": "" }, { "first": "Aishwarya", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2015, "venue": "ICCV", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In ICCV.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Tac-gan -text conditioned auxiliary classifier generative adversarial network", "authors": [ { "first": "Ayushman", "middle": [], "last": "Dash", "suffix": "" }, { "first": "John Cristian Borges", "middle": [], "last": "Gamboa", "suffix": "" }, { "first": "Sheraz", "middle": [], "last": "Ahmed", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Liwicki", "suffix": "" }, { "first": "Muhammad Zeshan", "middle": [], "last": "Afzal", "suffix": "" } ], "year": 2017, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ayushman Dash, John Cristian Borges Gamboa, Sheraz Ahmed, Marcus Liwicki, and Muhammad Zeshan Afzal. 2017. Tac-gan -text conditioned auxiliary classifier generative adversarial network. ArXiv, abs/1703.06412.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Generative adversarial nets", "authors": [ { "first": "Ian", "middle": [ "J" ], "last": "Goodfellow", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Pouget-Abadie", "suffix": "" }, { "first": "M", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "David", "middle": [], "last": "Warde-Farley", "suffix": "" }, { "first": "Sherjil", "middle": [], "last": "Ozair", "suffix": "" }, { "first": "Aaron", "middle": [ "C" ], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ian J. Goodfellow, Jean Pouget-Abadie, M. Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. 
Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In NIPS.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering", "authors": [ { "first": "Yash", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Tejas", "middle": [], "last": "Khot", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Summers-Stay", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2017, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering. In CVPR.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Deep residual learning for image recognition", "authors": [ { "first": "X", "middle": [], "last": "Kaiming He", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Ren", "suffix": "" }, { "first": "", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, X. Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "authors": [ { "first": "Martin", "middle": [], "last": "Heusel", "suffix": "" }, { "first": "Hubert", "middle": [], "last": "Ramsauer", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Unterthiner", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Nessler", "suffix": "" }, { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" } ], "year": 2017, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NIPS.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Semantic object accuracy for generative text-to-image synthesis", "authors": [ { "first": "Tobias", "middle": [], "last": "Hinz", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Heinrich", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Wermter", "suffix": "" } ], "year": 2019, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tobias Hinz, Stefan Heinrich, and Stefan Wermter. 2019. Semantic object accuracy for generative text-to-image synthesis. ArXiv, abs/1910.13321.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Long short-term memory", "authors": [ { "first": "S", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural Computation", "volume": "9", "issue": "", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Hochreiter and J. Schmidhuber. 1997. Long short-term memory. 
Neural Computation, 9:1735-1780.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Image-to-image translation with conditional adversarial networks", "authors": [ { "first": "Phillip", "middle": [], "last": "Isola", "suffix": "" }, { "first": "Jun-Yan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Tinghui", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Alexei", "middle": [ "A" ], "last": "Efros", "suffix": "" } ], "year": 2016, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2016. Image-to-image translation with conditional adversarial networks. In CVPR.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Show, ask, attend, and answer: A strong baseline for visual question answering", "authors": [ { "first": "Vahid", "middle": [], "last": "Kazemi", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Elqursh", "suffix": "" } ], "year": 2017, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vahid Kazemi and Ali Elqursh. 2017. Show, ask, attend, and answer: A strong baseline for visual question answering. ArXiv, abs/1704.03162.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Bilinear attention networks", "authors": [ { "first": "Jin-Hwa", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jaehyun", "middle": [], "last": "Jun", "suffix": "" }, { "first": "Byoung-Tak", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018. Bilinear attention networks. In NeurIPS.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Photo-realistic single image super-resolution using a generative adversarial network", "authors": [ { "first": "Christian", "middle": [], "last": "Ledig", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Theis", "suffix": "" }, { "first": "Ferenc", "middle": [], "last": "Husz\u00e1r", "suffix": "" }, { "first": "Jos\u00e9", "middle": [ "Antonio" ], "last": "Caballero", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Aitken", "suffix": "" }, { "first": "Alykhan", "middle": [], "last": "Tejani", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Totz", "suffix": "" }, { "first": "Zehan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenzhe", "middle": [], "last": "Shi", "suffix": "" } ], "year": 2016, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christian Ledig, Lucas Theis, Ferenc Husz\u00e1r, Jos\u00e9 Antonio Caballero, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. 2016. Photo-realistic single image super-resolution using a generative adversarial network. 
In CVPR.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Object-driven text-to-image synthesis via adversarial training", "authors": [ { "first": "Wenbo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Pengchuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qiuyuan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Siwei", "middle": [], "last": "Lyu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wenbo Li, Pengchuan Zhang, Lei Zhang, Qiuyuan Huang, Xiaodong He, Siwei Lyu, and Jianfeng Gao. 2019. Object-driven text-to-image synthesis via adversarial training. In CVPR.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Microsoft coco: Common objects in context", "authors": [ { "first": "Tsung-Yi", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Maire", "suffix": "" }, { "first": "Serge", "middle": [], "last": "Belongie", "suffix": "" }, { "first": "James", "middle": [], "last": "Hays", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Perona", "suffix": "" }, { "first": "Deva", "middle": [], "last": "Ramanan", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Doll\u00e1r", "suffix": "" }, { "first": "C", "middle": [ "Lawrence" ], "last": "Zitnick", "suffix": "" } ], "year": 2014, "venue": "ECCV", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "authors": [ { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "NeurIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Conditional generative adversarial nets", "authors": [ { "first": "Mehdi", "middle": [], "last": "Mirza", "suffix": "" }, { "first": "Simon", "middle": [], "last": "Osindero", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mehdi Mirza and Simon Osindero. 2014. 
Conditional generative adversarial nets.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Image synthesis from locally related texts", "authors": [ { "first": "Tianrui", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Fangxiang", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Lingxuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xiaojie", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "ICMR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianrui Niu, Fangxiang Feng, Lingxuan Li, and Xiaojie Wang. 2020. Image synthesis from locally related texts. In ICMR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Conditional image synthesis with auxiliary classifier gans", "authors": [ { "first": "Augustus", "middle": [], "last": "Odena", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Olah", "suffix": "" }, { "first": "Jonathon", "middle": [], "last": "Shlens", "suffix": "" } ], "year": 2016, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Augustus Odena, Christopher Olah, and Jonathon Shlens. 2016. Conditional image synthesis with auxiliary classifier gans. In ICML.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Generative adversarial text to image synthesis", "authors": [ { "first": "Scott", "middle": [ "E" ], "last": "Reed", "suffix": "" }, { "first": "Zeynep", "middle": [], "last": "Akata", "suffix": "" }, { "first": "Xinchen", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Lajanugen", "middle": [], "last": "Logeswaran", "suffix": "" }, { "first": "Bernt", "middle": [], "last": "Schiele", "suffix": "" }, { "first": "Honglak", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2016, "venue": "ICML", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott E. Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative adversarial text to image synthesis. In ICML.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Improved techniques for training gans", "authors": [ { "first": "Tim", "middle": [], "last": "Salimans", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Goodfellow", "suffix": "" }, { "first": "Wojciech", "middle": [], "last": "Zaremba", "suffix": "" }, { "first": "Vicki", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Xi", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2016, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. 2016. Improved techniques for training gans. In NIPS.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Bidirectional recurrent neural networks", "authors": [ { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "K", "middle": [], "last": "Paliwal", "suffix": "" } ], "year": 1997, "venue": "IEEE Trans. Signal Process", "volume": "45", "issue": "", "pages": "2673--2681", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Schuster and K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Trans. 
Signal Process., 45:2673-2681.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Lxmert: Learning cross-modality encoder representations from transformers", "authors": [ { "first": "Hao", "middle": [], "last": "Tan", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.07490" ] }, "num": null, "urls": [], "raw_text": "Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Attngan: Fine-grained text to image generation with attentional generative adversarial networks", "authors": [ { "first": "Tao", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Pengchuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qiuyuan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Han", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Xiaolei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" } ], "year": 2017, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. 2017. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In CVPR.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Stacked attention networks for image question answering", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Smola", "suffix": "" } ], "year": 2016, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In CVPR.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Semantic image inpainting with deep generative models", "authors": [ { "first": "Raymond", "middle": [ "A" ], "last": "Yeh", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Teck-Yian", "middle": [], "last": "Lim", "suffix": "" }, { "first": "Alexander", "middle": [ "G" ], "last": "Schwing", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Hasegawa-Johnson", "suffix": "" }, { "first": "Minh", "middle": [ "N" ], "last": "Do", "suffix": "" } ], "year": 2016, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raymond A. Yeh, Chen Chen, Teck-Yian Lim, Alexander G. Schwing, Mark Hasegawa-Johnson, and Minh N. Do. 2016. Semantic image inpainting with deep generative models. 
In CVPR.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks", "authors": [ { "first": "Han", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hongsheng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2016, "venue": "ICCV", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han Zhang, Tao Xu, and Hongsheng Li. 2016. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Stackgan++: Realistic image synthesis with stacked generative adversarial networks", "authors": [ { "first": "Han", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hongsheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shaoting", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiaogang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaolei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Dimitris", "middle": [ "N" ], "last": "Metaxas", "suffix": "" } ], "year": 2017, "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "volume": "41", "issue": "", "pages": "1947--1962", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N. Metaxas. 2017. Stackgan++: Realistic image synthesis with stacked generative adversarial networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41:1947-1962.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "authors": [ { "first": "Jun-Yan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Taesung", "middle": [], "last": "Park", "suffix": "" }, { "first": "Phillip", "middle": [], "last": "Isola", "suffix": "" }, { "first": "Alexei", "middle": [ "A" ], "last": "Efros", "suffix": "" } ], "year": 2017, "venue": "ICCV", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Dm-gan: Dynamic memory generative adversarial networks for text-to-image synthesis", "authors": [ { "first": "Minfeng", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Pingbo", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Yandg", "suffix": "" } ], "year": 2019, "venue": "CVPR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yandg. 2019. Dm-gan: Dynamic memory generative adversarial networks for text-to-image synthesis. In CVPR.", "links": null } }, "ref_entries": {} } }