SPatchGAN: A Statistical Feature Based Discriminator for Unsupervised Image-to-Image Translation
Abstract
For unsupervised image-to-image translation, we propose a discriminator architecture that focuses on statistical features instead of individual patches. The network is stabilized by distribution matching of key statistical features at multiple scales. Unlike existing methods, which impose ever more constraints on the generator, our method facilitates shape deformation and enhances fine details with a greatly simplified framework. We show that the proposed method outperforms existing state-of-the-art models in various challenging applications, including selfie-to-anime, male-to-female and glasses removal.
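The core idea is that the discriminator scores a small set of spatial statistics of its feature maps, collected at multiple scales, rather than scoring every patch individually. A minimal sketch of this reduction is given below; the particular statistics (mean, max, standard deviation) and the 2x average-pooling between scales are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def spatial_stats(feat):
    """Reduce a feature map of shape (C, H, W) to per-channel statistics.

    Instead of a per-patch score map, the discriminator only sees
    global statistics over spatial positions (assumed here: mean,
    max, std per channel).
    """
    flat = feat.reshape(feat.shape[0], -1)              # (C, H*W)
    return np.concatenate([flat.mean(axis=1),
                           flat.max(axis=1),
                           flat.std(axis=1)])           # (3*C,)

def downsample(feat):
    """2x average pooling to produce the next coarser scale."""
    c, h, w = feat.shape
    cropped = feat[:, :h - h % 2, :w - w % 2]
    return cropped.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def multiscale_stats(feat, num_scales=3):
    """Collect statistics at several spatial scales and concatenate."""
    stats = []
    for _ in range(num_scales):
        stats.append(spatial_stats(feat))
        feat = downsample(feat)
    return np.concatenate(stats)                        # (3*C*num_scales,)

# Example: a 16-channel 32x32 feature map collapses to one vector of
# 3 stats * 16 channels * 3 scales = 144 values, which a small head
# could then score.
rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 32, 32))
vec = multiscale_stats(feat)
print(vec.shape)  # (144,)
```

Because the loss is applied to these low-dimensional statistics rather than to dense patch scores, the generator receives a weaker, distribution-level constraint, which is what allows larger shape deformations than a patch-based discriminator.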