Published on Sat Jun 19 2021

Unbalanced Feature Transport for Exemplar-based Image Translation

Fangneng Zhan, Yingchen Yu, Kaiwen Cui, Gongjie Zhang, Shijian Lu, Jianxiong Pan, Changgong Zhang, Feiying Ma, Xuansong Xie, Chunyan Miao

This paper presents a general image translation framework that incorporates optimal transport to align features between conditional inputs and style exemplars. The introduction of optimal transport significantly mitigates the constraint of many-to-one feature matching.

Abstract

Despite the great success of GANs in image translation with different conditional inputs such as semantic segmentation and edge maps, generating high-fidelity realistic images with reference styles remains a grand challenge in conditional image-to-image translation. This paper presents a general image translation framework that incorporates optimal transport for feature alignment between conditional inputs and style exemplars in image translation. The introduction of optimal transport significantly mitigates the constraint of many-to-one feature matching while building up accurate semantic correspondences between conditional inputs and exemplars. We design a novel unbalanced optimal transport to address the transport between features with deviating distributions, a mismatch that exists widely between conditional inputs and exemplars. In addition, we design a semantic-activation normalization scheme that successfully injects the style features of exemplars into the image translation process. Extensive experiments over multiple image translation tasks show that our method achieves superior image translation qualitatively and quantitatively as compared with the state-of-the-art.
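As a concrete illustration of the core idea, the sketch below shows entropy-regularized unbalanced optimal transport between two feature sets, solved with Sinkhorn-style scaling, where the relaxation of the marginal constraints is what loosens strict many-to-one matching. This is a minimal, hypothetical sketch rather than the authors' implementation; the cosine cost, the parameters eps and rho, and the final feature-warping step are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of unbalanced entropic optimal
# transport between conditional-input features and exemplar features.
import numpy as np

def unbalanced_ot_plan(F_cond, F_exem, eps=0.05, rho=1.0, n_iters=100):
    """Return a soft transport plan aligning F_cond (n x d) with F_exem (m x d).

    eps : entropic regularization strength.
    rho : marginal relaxation weight; smaller rho tolerates larger deviations
          between the two feature distributions (the 'unbalanced' part).
    """
    # Cosine-distance cost between L2-normalized features.
    F_cond = F_cond / (np.linalg.norm(F_cond, axis=1, keepdims=True) + 1e-8)
    F_exem = F_exem / (np.linalg.norm(F_exem, axis=1, keepdims=True) + 1e-8)
    C = 1.0 - F_cond @ F_exem.T                      # (n, m) cost matrix

    n, m = C.shape
    a = np.full(n, 1.0 / n)                          # source marginal
    b = np.full(m, 1.0 / m)                          # target marginal
    K = np.exp(-C / eps)                             # Gibbs kernel
    u, v = np.ones(n), np.ones(m)
    fi = rho / (rho + eps)                           # relaxation exponent

    for _ in range(n_iters):                         # Sinkhorn-style scaling
        u = (a / (K @ v + 1e-16)) ** fi
        v = (b / (K.T @ u + 1e-16)) ** fi

    return u[:, None] * K * v[None, :]               # transport plan (n, m)

# Usage: warp exemplar features toward the conditional input with the
# row-normalized plan, a common way to build semantic correspondences.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    F_cond, F_exem = rng.normal(size=(64, 256)), rng.normal(size=(80, 256))
    P = unbalanced_ot_plan(F_cond, F_exem)
    warped = (P / (P.sum(axis=1, keepdims=True) + 1e-16)) @ F_exem
    print(P.shape, warped.shape)                     # (64, 80) (64, 256)
```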

Sun Jun 09 2019
Computer Vision
What and Where to Translate: Local Mask-based Image-to-Image Translation
Image-to-image translation has attracted significant attention, yet two intrinsic problems remain in existing methods. We propose a novel approach that extracts a local mask from the exemplar to determine what style to transfer. We present quantitative and qualitative evaluation results that demonstrate the advantages of our proposed approach.
Mon Dec 24 2018
Computer Vision
Image-to-Image Translation via Group-wise Deep Whitening-and-Coloring Transformation
The method performs unsupervised exemplar-based image-to-image translation via a group-wise deep whitening-and-coloring transformation. The code is available at https://github.com/WonwoongCho/GDWCT.
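For reference, here is a minimal sketch of the classical whitening-and-coloring transform that the group-wise method builds on: content features are whitened to strip their own second-order statistics, then "colored" with the exemplar's covariance. The numpy formulation and the eps regularizer are illustrative assumptions, not the GDWCT code.

```python
# Minimal whitening-and-coloring transform (WCT) sketch, an assumption
# for illustration rather than the paper's group-wise implementation.
import numpy as np

def whitening_coloring(content, style, eps=1e-5):
    """content, style: (C, N) feature maps flattened over spatial positions."""
    def stats(feat):
        mu = feat.mean(axis=1, keepdims=True)
        centered = feat - mu
        cov = centered @ centered.T / (feat.shape[1] - 1) + eps * np.eye(feat.shape[0])
        w, V = np.linalg.eigh(cov)                    # symmetric eigendecomposition
        w = np.clip(w, eps, None)
        cov_inv_sqrt = (V * (w ** -0.5)) @ V.T        # cov^{-1/2}
        cov_sqrt = (V * (w ** 0.5)) @ V.T             # cov^{1/2}
        return cov_inv_sqrt, cov_sqrt, mu, centered

    W_c, _, _, content_centered = stats(content)
    _, C_s, mu_s, _ = stats(style)

    whitened = W_c @ content_centered                 # decorrelated content
    return C_s @ whitened + mu_s                      # re-colored with style stats
```

A group-wise variant, as the title suggests, splits the channels into groups and applies such a transform per group, which keeps the covariance matrices small.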
Thu Jul 23 2020
Machine Learning
TSIT: A Simple and Versatile Framework for Image-to-Image Translation
We introduce a simple and versatile framework for image-to-image translation. We unearth the importance of normalization layers, and provide a carefully designed two-stream generative model with newly proposed feature transformations.
Wed Jul 07 2021
Computer Vision
Bi-level Feature Alignment for Versatile Image Translation and Manipulation
High-fidelity image generation with faithful style control remains a grand challenge in computer vision. This paper presents a versatile image translation and manipulation framework that achieves accurate semantic and style guidance in image generation by explicitly building a correspondence.
Mon May 28 2018
Computer Vision
Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency
The Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network conditions the translation process on an exemplar image in the target domain. We assume that an image comprises a content component and a style component specific to each domain.
Sun Apr 12 2020
Computer Vision
Cross-domain Correspondence Learning for Exemplar-based Image Translation
We present a framework for exemplar-based image translation. The network synthesizes images based on the appearance of semantically corresponding patches in the exemplar. Our method is superior to state-of-the-art methods in terms of image quality.
Fri Oct 27 2017
Neural Networks
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively. We add new layers that model increasingly fine details as training progresses.
Mon Nov 21 2016
Computer Vision
Image-to-Image Translation with Conditional Adversarial Networks
Conditional adversarial networks are a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping.
Thu Sep 04 2014
Computer Vision
Very Deep Convolutional Networks for Large-Scale Image Recognition
Convolutional networks of increasing depth can achieve state-of-the-art results. This research was the basis of the team's ImageNet Challenge 2014 submission.
Sun Mar 28 2021
Computer Vision
Defect-GAN: High-Fidelity Defect Synthesis for Automated Defect Inspection
Automated defect inspection is critical for effective and efficient maintenance, repair, and operations in advanced manufacturing. Defect-GAN is an automated defect synthesis network that generates realistic and diverse defect samples.
Mon May 21 2018
Machine Learning
Self-Attention Generative Adversarial Networks
SAGAN allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points.
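For reference, a minimal sketch of such a self-attention block is shown below: every spatial position attends to every other one, so a generated detail can depend on distant regions rather than only a local convolutional neighborhood. The numpy formulation and parameter shapes are illustrative assumptions, not the official SAGAN implementation.

```python
# Minimal self-attention-over-positions sketch (illustrative assumption).
import numpy as np

def self_attention(x, W_q, W_k, W_v, gamma=0.0):
    """x: (C, N) features flattened over N spatial positions.
    W_q, W_k: (C', C) projections; W_v: (C, C); gamma: learned residual scale."""
    q, k, v = W_q @ x, W_k @ x, W_v @ x               # query / key / value projections
    logits = q.T @ k                                  # (N, N) pairwise affinities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)           # softmax over positions
    out = v @ attn.T                                  # aggregate values, (C, N)
    return gamma * out + x                            # residual connection
```

Starting gamma at zero lets training begin with purely local behavior and gradually learn how much non-local evidence to use.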
Mon Dec 21 2020
Computer Vision
EMLight: Lighting Estimation via Spherical Distribution Approximation
Earth Mover Light (EMLight) leverages a regression network and a neural projector for accurate illumination estimation. Extensive experiments show that EMLight achieves superior illumination estimation and the generated relighting in 3D object embedding exhibits superior plausibility and fidelity.
Thu Jul 01 2021
Computer Vision
Blind Image Super-Resolution via Contrastive Representation Learning
Most existing SR methods are non-blind and assume that the degradation has a single, fixed, and known distribution (e.g., bicubic). CRL-SR focuses on images with multi-modal and spatially variant distributions.
Fri Jun 25 2021
Computer Vision
Image-to-image Transformation with Auxiliary Condition
The performance of image recognition tasks such as human pose detection, when trained with simulated images, usually degrades due to the divergence between real and simulated data. To overcome this problem, we propose introducing the label information of subjects, e.g., the pose and type of objects, into the training of CycleGAN.
Thu Jun 24 2021
Computer Vision
Sparse Needlets for Lighting Estimation with Spherical Transport Loss
Accurate lighting estimation is critical to many computer vision and graphics tasks such as high-dynamic-range (HDR) relighting. Existing approaches model lighting in either frequency domain or spatial domain. NeedleLight is a new lighting estimation model that represents illumination with needlets.
Thu Apr 08 2021
Computer Vision
Deep Monocular 3D Human Pose Estimation via Cascaded Dimension-Lifting
Wed Jun 30 2021
Computer Vision
A Survey on Adversarial Image Synthesis
Adversarial image synthesis has drawn increasing attention and made tremendous progress in recent years. This paper provides a taxonomy of methods used in image synthesis and reviews different models for text-to-image synthesis and image-to-image translation.