{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:12:15.700089Z" }, "title": "Language-agnostic Semantic Consistent Text-to-Image Generation", "authors": [ { "first": "Seongjun", "middle": [], "last": "Jung", "suffix": "", "affiliation": { "laboratory": "", "institution": "Seoul National University", "location": {} }, "email": "seongjunjung@bi.snu.ac.kr" }, { "first": "Woo", "middle": [ "Suk" ], "last": "Choi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Seoul National University", "location": {} }, "email": "wschoi@bi.snu.ac.kr" }, { "first": "Seongho", "middle": [], "last": "Choi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Seoul National University", "location": {} }, "email": "shchoi@bi.snu.ac.kr" }, { "first": "Byoung-Tak", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Seoul National University", "location": {} }, "email": "btzhang@bi.snu.ac.kr" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent GAN-based text-to-image generation models have advanced that they can generate photo-realistic images matching semantically with descriptions. However, research on multilingual text-to-image generation has not been carried out yet much. There are two problems when constructing a multilingual text-to-image generation model: 1) language imbalance issue in text-to-image paired datasets and 2) generating images that have the same meaning but are semantically inconsistent with each other in texts expressed in different languages. To this end, we propose a Language-agnostic Semantic Consistent Generative Adversarial Network (LaSC-GAN) for text-to-image generation, which can generate semantically consistent images via language-agnostic text encoder and Siamese mechanism. Experiments on relatively low-resource language text-image datasets show that the model has comparable generation quality as images generated by highresource language text, and generates semantically consistent images for texts with the same meaning even in different languages.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Recent GAN-based text-to-image generation models have advanced that they can generate photo-realistic images matching semantically with descriptions. However, research on multilingual text-to-image generation has not been carried out yet much. There are two problems when constructing a multilingual text-to-image generation model: 1) language imbalance issue in text-to-image paired datasets and 2) generating images that have the same meaning but are semantically inconsistent with each other in texts expressed in different languages. To this end, we propose a Language-agnostic Semantic Consistent Generative Adversarial Network (LaSC-GAN) for text-to-image generation, which can generate semantically consistent images via language-agnostic text encoder and Siamese mechanism. Experiments on relatively low-resource language text-image datasets show that the model has comparable generation quality as images generated by highresource language text, and generates semantically consistent images for texts with the same meaning even in different languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper, we consider multilingual text-toimage generation. There are two problems with multilingual text-to-image generation. 
The first problem is the language imbalance issue in textto-image datasets. Most text-to-image generation datasets are in English, so it is difficult to construct text-to-image generation models for other languages. Furthermore, since existing multilingual datasets have a small amount of data, a discriminator overfitting may cause problems such as instability of learning in GAN. The second is that generative models have difficulty extracting semantic commonality between languages.This can produce different images for captions with the same semantics but different languages. In Yin et al. (2019) , they treat the problem that captions with same meanings in English create semantically different images. We extend this awareness between languages.", "cite_spans": [ { "start": 716, "end": 733, "text": "Yin et al. (2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To solve those problems, we propose LaSC-GAN for text-to-image generation. LaSC-GAN consists of a language-agnostic text encoder and a hierarchical generator. Language-agnostic text encoder generates text embeddings to be used in the hierarchical generator for the first problem mentioned above. And we exploit the Siamese structure training to capture the semantic consistency between images generated in various languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main contributions are as follows: 1) By using a language-agnostic text encoder, images for low-resource language text can be generated only by learning the high-resource language. 2) Texts with the same semantics in different languages can generate semantically consistent images using the Siamese mechanism in hierarchical generator to extract semantic consistency between languages. We show the effect of each contribution in experiments using English MS-COCO (COCO-EN), Chinese MS-COCO (COCO-CN) and Korean MS-COCO (COCO-KO) datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "for Text-to-Image", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Adversarial Network (GAN)", "sec_num": "2.1" }, { "text": "Text-to-image generation using GAN has advanced a lot since GAN-INT-CLS (Reed et al., 2016) . The discovery of hierarchical model architectures (Zhang et al., 2017 has produced realistic images that semantically match with texts. However, these models only considered image generation for a single language, and to the best of our knowledge the first paper dealing with multilingual text to image generation is Zhang et al. (2022) . The model proposed in Zhang et al. (2022) requires learning for each language. However, our method can generate images from multi- In stage1, the model is trained followed by with COCO-EN. In stage 2, the text-to-image generation is trained with contrastive loss based on a Siamese structure with COCO-EN, CN, and KO.", "cite_spans": [ { "start": 72, "end": 91, "text": "(Reed et al., 2016)", "ref_id": "BIBREF3" }, { "start": 144, "end": 163, "text": "(Zhang et al., 2017", "ref_id": "BIBREF7" }, { "start": 411, "end": 430, "text": "Zhang et al. (2022)", "ref_id": "BIBREF9" }, { "start": 455, "end": 474, "text": "Zhang et al. 
(2022)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Generative Adversarial Network (GAN)", "sec_num": "2.1" }, { "text": "lingual texts only by learning about high-resource language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generative Adversarial Network (GAN)", "sec_num": "2.1" }, { "text": "Multilingual text embedding models usually use the translation pairs datasets, and sometimes the translation pairs datasets and monolingual datasets are used together. Among these, language-agnostic BERT sentence embedding (LaBSE) (Feng et al., 2020) using MLM(Masked Language Model) pretraining was proposed.", "cite_spans": [ { "start": 231, "end": 250, "text": "(Feng et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual Text Encoders", "sec_num": "2.2" }, { "text": "We propose a LaSC-GAN for text-to-image generation. Our goal is to obtain as good visual quality of images created with low-resource language text as images generated with high-resource language text and to enable the model to reflect semantic consistency between languages in image generation. The LaSC-GAN consists of a languageagnostic text encoder and a hierarchical generator. The language-agnostic text encoder is used to obtain a text representation that will be fed as a condition to the generator. The hierarchical generator generates images for text conditions. Training strategy of the model consists of two stages as shown in Figure1. In stage 1, the model is trained followed by using only a high-resource language dataset. In stage 2, the model is trained with a Siamese structure with two model branches using data from different language pairs (EN-CN, EN-KO).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "3" }, { "text": "Language-agnostic Text Encoder consists of a Language Adaptation module and a Visual Adaptation module. We use pre-trained LaBSE (Feng et al., 2020) for the Language Adaptation module and bi-directional LSTM for the Visual Adaptation module. We get language-agnostic token embeddings from each token embedding passed through the Language Adaptation module. Then, the obtained embedding is transferred to a visual representation space through the Visual Adaptation module and used as the text condition of the generator. Hidden states of each token in the bi-directional LSTM of the Visual Adaptation module are used as word embeddings, and the last hidden state is used as sentence embeddings. Our model can use 109 languages used in LaBSE training as inputs. Hierarchical Generator uses the hierarchical generative adversarial network structure used in , which consists of 3 sub-generators (G 0 , G 1 , G 2 ). Each generator has an independent discriminator N (0, 1) ) from normal distribution. The following sub-generators generate a higher resolution image by using the previous generation result.", "cite_spans": [ { "start": 129, "end": 148, "text": "(Feng et al., 2020)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 959, "end": 967, "text": "N (0, 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.1" }, { "text": "(D 0 , D 1 , D 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Architecture", "sec_num": "3.1" }, { "text": "In the first training stage, the model is trained followed by using only a highresource language dataset with DAMSM loss, and the parameters learned in the first stage are used in the second learning stage. 
Then, in the second learning stage, we use the Siamese mechanism such as SD-GAN (Yin et al., 2019) to learn semantic commons between texts in different languages. In addition to the DAMSM loss, we compute contrastive loss as follows by using the visual features of the discriminator for the inputs to the two branches of the Siamese structure.", "cite_spans": [ { "start": 287, "end": 305, "text": "(Yin et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Training Strategy", "sec_num": "3.2" }, { "text": "L = 1 2N N n=1 y \u2022 d 2 + (1 \u2212 y) max(\u03f5 \u2212 d, 0) 2 (1) where d = \u2225v 1 \u2212 v 2 \u2225 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Strategy", "sec_num": "3.2" }, { "text": "is the distance between the visual feature vectors v 1 and v 2 from the two Siamese branches respectively, and y is a flag to mark whether the input descriptions are from the same image or not (i.e., 1 for the same and 0 for different). The hyper-parameter N is the length of the feature vector. The hyper-parameter \u03f5 is used to balance the distance value when y = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Strategy", "sec_num": "3.2" }, { "text": "We used MS-COCO (COCO-EN) (Lin et al., 2014) for stage 1. COCO-EN has 80K image train set and 40K image validation set. Each image has 5 English descriptions. We also used the multilingual versions of COCO-EN: COCO-CN and COCO-KO for stage 2. COCO-CN (Li et al., 2019) has 1 manually translated Chinese description for the 18K image train set and 1K image validation set.", "cite_spans": [ { "start": 26, "end": 44, "text": "(Lin et al., 2014)", "ref_id": "BIBREF2" }, { "start": 251, "end": 268, "text": "(Li et al., 2019)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "We used the validation set index of COCO-CN for other languages as well. COCO-KO has Korean machine translation results for all descriptions of in COCO-EN. In stage 2, we use a subset of data from COCO-EN and COCO-KO that overlap with COCO-CN. In stage 2, EN-CN and EN-KO language pair datasets are used for training respectively. The models trained with EN-CN, EN-KO pair datasets are evaluated on the COCO-CN, COCO-KO validation set respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "4.1" }, { "text": "The hierarchical generator and discriminator followed , and the language-agnostic text encoder is comprised of LaBSE (Feng et al., 2020) and bi-directional LSTM. The Siamese mechanism learning method follows (Yin et al., 2019) . We freeze the pre-trained parameters of LaBSE when learning the language-agnostic text encoder for stability of learning.", "cite_spans": [ { "start": 117, "end": 136, "text": "(Feng et al., 2020)", "ref_id": "BIBREF0" }, { "start": 208, "end": 226, "text": "(Yin et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Implementation Details", "sec_num": "4.2" }, { "text": "We evaluated the visual quality of generated images using Inception Score (IS) and Fr\u00e9chet Inception Distance (FID) used by . In addition, we evaluated how much generated images are semantically similar to the conditioned texts through CLIP score used by Wu et al. (2021) .", "cite_spans": [ { "start": 255, "end": 271, "text": "Wu et al. 
(2021)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Metrics", "sec_num": "4.3" }, { "text": "In this section, we shows the benefits of the language-agnostic text encoder. We trained only on the high-resource language dataset(COCO-EN) in stage 1. Thanks to the language-agnostic text encoder, our model can generate images from zeroshot languages. In Table 1 , CN and KO are not used for learning in stage 1 but show metric scores that are not significantly different from EN used for learning. Figure 2 shows images generated in various languages using the stage1 model. The gener-ated images from zero-shot language show similar visual quality to images generated with languages used for learning in Figure 2 . In particular, our model can generate images from low-resource languages such as Thai(TH) and Nepali(NE). ", "cite_spans": [], "ref_spans": [ { "start": 257, "end": 264, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 401, "end": 409, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 608, "end": 616, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Zero-shot Language Text-to-image Generation", "sec_num": "4.4" }, { "text": "We conducted an experiment to show the effect of the Siamese mechanism training. In The images were generated with sentences in which the nouns in the English description were replaced with Chinese and Korean nouns, respectively. semantically consistent images if the semantics are the same in different languages. As shown in Table 2 , it can be confirmed that the distance has gotten closer after stage 2. In addition, images generated from texts in different languages with the same meaning have similar images as shown in Figure 3 . And Figure 4 shows the model can extract semantic commons between languages.", "cite_spans": [], "ref_spans": [ { "start": 327, "end": 335, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 527, "end": 535, "text": "Figure 3", "ref_id": "FIGREF2" }, { "start": 542, "end": 550, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Multilingual Semantic Consistent Text-to-image Generation", "sec_num": "4.5" }, { "text": "In this paper, we propose a LaSC-GAN for text-toimage generation. Through language-agnostic text encoder, the model can generate images with lowresource language texts in zero-shot setting. Furthermore, by Siamese mechanism, the model can extract high-level consistent semantics between languages when generating images. 
The experiments on COCO-EN, KO, and CN show that our proposed method can generate photo-realistic images from the relatively low-resource language text and extract semantic commons between languages for image generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "This work was partly supported by the IITP (2015-0-00310-SW.Star-Lab/20%, 2018-0-00622-RMI/15%, 2019-0-01371-BabyMind/20%, 2021-0-02068-AIHub/15%, 2021-0-01343-GSAI (SNU)/15%) grants, and the CARAI 4 (UD190031RD/15%) grant funded by the DAPA and ADD.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Languageagnostic bert sentence embedding", "authors": [ { "first": "Fangxiaoyu", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Naveen", "middle": [], "last": "Arivazhagan", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2007.01852" ] }, "num": null, "urls": [], "raw_text": "Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Language- agnostic bert sentence embedding. arXiv preprint arXiv:2007.01852.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Coco-cn for cross-lingual image tagging, captioning, and retrieval", "authors": [ { "first": "Xirong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Chaoxi", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xiaoxu", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Weiyu", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Zhengxiong", "middle": [], "last": "Jia", "suffix": "" }, { "first": "Gang", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jieping", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2019, "venue": "IEEE Transactions on Multimedia", "volume": "21", "issue": "9", "pages": "2347--2360", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xirong Li, Chaoxi Xu, Xiaoxu Wang, Weiyu Lan, Zhengxiong Jia, Gang Yang, and Jieping Xu. 2019. Coco-cn for cross-lingual image tagging, caption- ing, and retrieval. IEEE Transactions on Multimedia, 21(9):2347-2360.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Microsoft coco: Common objects in context", "authors": [ { "first": "Tsung-Yi", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Maire", "suffix": "" }, { "first": "Serge", "middle": [], "last": "Belongie", "suffix": "" }, { "first": "James", "middle": [], "last": "Hays", "suffix": "" }, { "first": "Pietro", "middle": [], "last": "Perona", "suffix": "" }, { "first": "Deva", "middle": [], "last": "Ramanan", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Doll\u00e1r", "suffix": "" }, { "first": "C Lawrence", "middle": [], "last": "Zitnick", "suffix": "" } ], "year": 2014, "venue": "European conference on computer vision", "volume": "", "issue": "", "pages": "740--755", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2014. Microsoft coco: Com- mon objects in context. In European conference on computer vision, pages 740-755. 
Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Generative adversarial text to image synthesis", "authors": [ { "first": "Scott", "middle": [], "last": "Reed", "suffix": "" }, { "first": "Zeynep", "middle": [], "last": "Akata", "suffix": "" }, { "first": "Xinchen", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Lajanugen", "middle": [], "last": "Logeswaran", "suffix": "" }, { "first": "Bernt", "middle": [], "last": "Schiele", "suffix": "" }, { "first": "Honglak", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2016, "venue": "International conference on machine learning", "volume": "", "issue": "", "pages": "1060--1069", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative adversarial text to image synthesis. In International conference on machine learning, pages 1060-1069. PMLR.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Godiva: Generating open-domain videos from natural descriptions", "authors": [ { "first": "Chenfei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Lun", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Qianxi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Binyang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Fan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Guillermo", "middle": [], "last": "Sapiro", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Duan", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.14806" ] }, "num": null, "urls": [], "raw_text": "Chenfei Wu, Lun Huang, Qianxi Zhang, Binyang Li, Lei Ji, Fan Yang, Guillermo Sapiro, and Nan Duan. 2021. Godiva: Generating open-domain videos from natural descriptions. arXiv preprint arXiv:2104.14806.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Attngan: Fine-grained text to image generation with attentional generative adversarial networks", "authors": [ { "first": "Tao", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Pengchuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qiuyuan", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Han", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Xiaolei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "1316--1324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. 2018. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. 
In Pro- ceedings of the IEEE conference on computer vision and pattern recognition, pages 1316-1324.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Semantics disentangling for text-to-image generation", "authors": [ { "first": "Guojun", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Bin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Sheng", "suffix": "" }, { "first": "Nenghai", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Xiaogang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Shao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE/CVF conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "2327--2336", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guojun Yin, Bin Liu, Lu Sheng, Nenghai Yu, Xiaogang Wang, and Jing Shao. 2019. Semantics disentangling for text-to-image generation. In Proceedings of the IEEE/CVF conference on computer vision and pat- tern recognition, pages 2327-2336.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks", "authors": [ { "first": "Han", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hongsheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shaoting", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiaogang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaolei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Dimitris", "middle": [ "N" ], "last": "Metaxas", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE international conference on computer vision", "volume": "", "issue": "", "pages": "5907--5915", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. 2017. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 5907-5915.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Stackgan++: Realistic image synthesis with stacked generative adversarial networks", "authors": [ { "first": "Han", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hongsheng", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shaoting", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiaogang", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaolei", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Dimitris", "middle": [ "N" ], "last": "Metaxas", "suffix": "" } ], "year": 2018, "venue": "IEEE transactions on pattern analysis and machine intelligence", "volume": "41", "issue": "", "pages": "1947--1962", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. 2018. Stackgan++: Realistic image syn- thesis with stacked generative adversarial networks. IEEE transactions on pattern analysis and machine intelligence, 41(8):1947-1962.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Cjetig: Zero-shot cross-lingual text-to-image generation by corpora-based joint encoding. 
Knowledge-Based Systems", "authors": [ { "first": "Han", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Suyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Hongqing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2022, "venue": "", "volume": "239", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Han Zhang, Suyi Yang, and Hongqing Zhu. 2022. Cje- tig: Zero-shot cross-lingual text-to-image generation by corpora-based joint encoding. Knowledge-Based Systems, 239:108006.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The architecture of LaSC-GAN.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Qualitative results of zero-shot language text to image generation of stage 1. In stage 1, the model was trained using only English texts. GT, EN, KO, CN, FR, TH, and NE denote grond-truth, English, Korean, Chinese, French, Thai, and Nepali respectively. The English description was translated into each language and used for generation.", "num": null, "uris": null, "type_str": "figure" }, "FIGREF2": { "text": "Qualitative examples of the LaSC-GAN. The results of each stage with given a pair of language descriptions", "num": null, "uris": null, "type_str": "figure" }, "FIGREF3": { "text": "Image generation results of the LaSC-GAN.", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "text": "ST.1 ST.2 ST.1 \u2227 ST.2 ST.1 ST.2 ST.1 \u2227 ST.2 ST.1 ST.2 ST.1\u2227 ST.2 Quantitative results for each stage of LaSC-GAN. ST, EN, KO, and CN denote stage, English, Korean, and Chinese. ST.1 and ST.2 refer to models that have undergone only Stage 1 and 2 learning processes, respectively. And ST.1 \u2227 ST.2 refer to a model using both learning processes together.", "num": null, "html": null, "content": "
Language | IS \u2191 (ST.1 / ST.2 / ST.1\u2227ST.2) | FID \u2193 (ST.1 / ST.2 / ST.1\u2227ST.2) | CLIP \u2191 (ST.1 / ST.2 / ST.1\u2227ST.2)
EN | 14.89 / -- / -- | 97.41 / -- / -- | 0.227 / -- / --
KO | 12.24 / 14.76 / 15.58 | 103.26 / 102.04 / 93.16 | 0.196 / 0.198 / 0.195
CN | 14.98 / 16.14 / 16.55 | 97.26 / 93.64 / 93.40 | 0.213 / 0.214 / 0.212
", "type_str": "table" }, "TABREF1": { "text": "", "num": null, "html": null, "content": "
", "type_str": "table" }, "TABREF2": { "text": "FID between languages in each stage.", "num": null, "html": null, "content": "", "type_str": "table" } } } }