Published on Wed Dec 20 2017

DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs

K. Ram Prabhakar, V. Sai Srikar, R. Venkatesh Babu

Abstract

We present a novel deep learning architecture for fusing static multi-exposure images. Current multi-exposure fusion (MEF) approaches use hand-crafted features to fuse the input sequence. However, these weak hand-crafted representations are not robust to varying input conditions, and they perform poorly for extreme exposure image pairs. It is therefore highly desirable to have a method that is robust to varying input conditions and capable of handling extreme exposures without artifacts. Deep representations are known to be robust to input conditions and have shown phenomenal performance in supervised settings. However, the stumbling block in using deep learning for MEF has been the lack of sufficient training data and of an oracle to provide ground truth for supervision. To address these issues, we have gathered a large dataset of multi-exposure image stacks for training, and, to circumvent the need for ground-truth images, we propose an unsupervised deep learning framework for MEF that uses a no-reference quality metric as its loss function. The proposed approach uses a novel CNN architecture trained to learn the fusion operation without a reference ground-truth image. The model fuses a set of common low-level features extracted from each image to generate artifact-free, perceptually pleasing results. We perform extensive quantitative and qualitative evaluation and show that the proposed technique outperforms existing state-of-the-art approaches on a variety of natural images.
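The core architectural idea in the abstract (a tied-weight encoder extracting common low-level features from each exposure, an addition-based merge, and a decoder producing the fused image) can be illustrated with a toy single-filter numpy sketch. This is a minimal illustration of the feature-fuse-decode structure, not the paper's actual network; `conv2d` and `deepfuse_toy` are hypothetical names, and the real model uses multiple convolutional layers and an MEF-based no-reference loss for training.

```python
import numpy as np

def conv2d(img, kernel):
    # 'valid' 2-D correlation with a single kernel (stand-in for one conv layer)
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def deepfuse_toy(y_under, y_over, enc_kernel, dec_kernel):
    # Shared (tied-weight) encoder applied to each exposure's luminance channel
    f1 = conv2d(y_under, enc_kernel)
    f2 = conv2d(y_over, enc_kernel)
    # Merge layer: element-wise addition of the two feature maps
    fused = f1 + f2
    # Decoder maps the fused feature map back to a luminance image
    return conv2d(fused, dec_kernel)
```

Because the merge is a commutative addition of features from a shared encoder, the sketch is symmetric in its two inputs, which mirrors why an addition-based merge needs no ordering of the exposures.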

Wed Apr 04 2018
Computer Vision
Learnable Exposure Fusion for Dynamic Scenes
In this paper, we focus on Exposure Fusion (EF) [ExposFusi2] for dynamic scenes. The task is to fuse multiple images obtained by exposure bracketing to create an image that contains a high level of detail. A major problem in such tasks is that the images may not be spatially aligned due to scene motion.
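The classical Exposure Fusion idea referenced here can be sketched as a per-pixel weighted average of the bracketed images, with weights favouring well-exposed pixels. This is a hedged single-scale illustration only: the original method combines several quality measures (contrast, saturation, well-exposedness) and blends with Laplacian pyramids, and `well_exposedness` and `exposure_fuse` are illustrative names.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Gaussian weight favouring pixel intensities near mid-gray (0.5)
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def exposure_fuse(images):
    # images: list of grayscale arrays in [0, 1] from an exposure bracket
    weights = np.stack([well_exposedness(im) for im in images])
    weights /= weights.sum(axis=0) + 1e-12  # normalise weights per pixel
    return np.sum(weights * np.stack(images), axis=0)
```

In a dynamic scene, these per-pixel weights are computed on images that may be misaligned, which is exactly the failure mode the paper addresses.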
Thu Jul 30 2020
Computer Vision
Benchmarking and Comparing Multi-exposure Image Fusion Algorithms
Multi-exposure image fusion (MEF) is an important area in computer vision. The lack of a benchmark makes it difficult to perform fair and comprehensive performance comparison among MEF algorithms. This paper proposes a benchmark for MEF.
Tue May 26 2020
Computer Vision
Learning a Reinforced Agent for Flexible Exposure Bracketing Selection
EBSNet is formulated as a reinforced agent trained by maximizing rewards provided by a multi-exposure fusion network. EBSNet can select an optimal exposure bracketing for multi-exposure fusion, and the two networks can be jointly trained to produce favorable results.
Tue Nov 12 2019
Computer Vision
Merging-ISP: Multi-Exposure High Dynamic Range Image Signal Processing
The image signal processing pipeline (ISP) is a core element of digital cameras to capture high-quality images from raw data. In high dynamic range (HDR) imaging, ISPs include steps like demosaicing of raw color filter array (CFA) data at different exposure times.
Mon Jun 29 2020
Computer Vision
End-to-End Differentiable Learning to HDR Image Synthesis for Multi-exposure Images
Conventional networks focus on the exposure-transfer task to reconstruct the multi-exposure stack. We propose a novel framework with a fully differentiable high dynamic range imaging process. Experimental results show that the proposed network outperforms the state of the art both quantitatively and qualitatively.
Thu May 09 2019
Computer Vision
Fast and Efficient Zero-Learning Image Fusion
We propose a real-time image fusion method using pre-trained neural networks. Our method generates a single image containing features from multiple sources. The experimental results demonstrate that our technique achieves state-of-the-art performance.