Published on Mon Aug 23 2021

Burst Imaging for Light-Constrained Structure-From-Motion

Ahalya Ravendran, Mitch Bryson, Donald G. Dansereau

Images captured under extremely low light conditions are noise-limited, which can cause existing robotic vision algorithms to fail. Our technique, based on burst photography, uses direct methods for image registration within bursts of short exposure time images.

Abstract

Images captured under extremely low light conditions are noise-limited, which can cause existing robotic vision algorithms to fail. In this paper we develop an image processing technique for aiding 3D reconstruction from images acquired in low light conditions. Our technique, based on burst photography, uses direct methods for image registration within bursts of short exposure time images to improve the robustness and accuracy of feature-based structure-from-motion (SfM). We demonstrate improved SfM performance in challenging light-constrained scenes, including quantitative evaluations that show improved feature performance and camera pose estimates. Additionally, we show that our method converges more frequently to correct reconstructions than the state-of-the-art. Our method is a significant step towards allowing robots to operate in low light conditions, with potential applications for robots operating in environments such as underground mines and during night-time operation.
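As a concrete illustration of the pipeline described above, the sketch below aligns the frames of a short-exposure burst with a direct (intensity-based) registration method and averages them into a single lower-noise image before running an ordinary feature-based SfM front end. This is not the authors' implementation: ECC alignment, plain averaging, and SIFT keypoints stand in for the paper's registration, merging, and feature stages, and the OpenCV calls assume OpenCV >= 4.4 with grayscale uint8 input frames.

```python
# Sketch: direct (ECC) intra-burst registration followed by merging, as a
# stand-in for the paper's align-and-merge step before feature-based SfM.
# Assumes OpenCV >= 4.4 and a list of grayscale uint8 frames.
import cv2
import numpy as np

def merge_burst(frames):
    """Align all frames to the first one with ECC and average them."""
    ref = frames[0].astype(np.float32)
    acc = ref.copy()
    h, w = ref.shape
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    for img in frames[1:]:
        img_f = img.astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)  # affine warp, identity init
        # ECC maximizes a correlation measure directly on pixel intensities
        # (a "direct" registration method); gaussFiltSize=5 smooths the noise.
        _, warp = cv2.findTransformECC(ref, img_f, warp, cv2.MOTION_AFFINE,
                                       criteria, None, 5)
        aligned = cv2.warpAffine(img_f, warp, (w, h),
                                 flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        acc += aligned
    return (acc / len(frames)).astype(np.uint8)

# The merged, lower-noise image can then be fed to a standard feature-based
# SfM front end, e.g. SIFT keypoints:
# merged = merge_burst(burst)                  # burst: list of uint8 images
# kp, desc = cv2.SIFT_create().detectAndCompute(merged, None)
```

Averaging N aligned frames reduces the standard deviation of independent pixel noise by roughly sqrt(N), which is what lets the feature detector recover keypoints that are lost in any single short exposure.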

Thu May 31 2018
Computer Vision
Distinguishing Refracted Features using Light Field Cameras with Application to Structure from Motion
Robots must reliably interact with refractive objects in many applications. Refractive objects can cause many robotic vision algorithms to become unreliable or even fail. We propose a method to distinguish between refracted and Lambertian image features using a light field camera.
Mon Mar 16 2015
Computer Vision
PiMPeR: Piecewise Dense 3D Reconstruction from Multi-View and Multi-Illumination Images
A new piecewise framework is proposed to explicitly account for the change in illumination across several wide-baseline images. Unlike multi-view stereo methods, this pipeline handles uncalibrated images that are subject to strong lighting variations.
Thu Jul 18 2019
Computer Vision
Temporally Coherent General Dynamic Scene Reconstruction
Existing techniques for dynamic scene reconstruction from multiple wide-baseline cameras primarily focus on reconstruction in controlled environments. This paper introduces a general approach to obtain a 4D representation of complex dynamic scenes without prior knowledge of the scene structure, appearance, or illumination.
Tue Apr 13 2021
Computer Vision
Lucas-Kanade Reloaded: End-to-End Super-Resolution from Raw Image Bursts
Tue Jun 14 2016
Computer Vision
Richardson-Lucy Deblurring for Moving Light Field Cameras
We generalize Richardson-Lucy (RL) deblurring to 4-D light fields by replacing the convolution steps with light field rendering of motion blur. The method deals correctly with blur caused by 6-degree-of-freedom camera motion in complex 3-D scenes.
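For reference, here is a minimal sketch of the classic 2-D Richardson-Lucy iteration that the paper generalizes: the fixed-PSF convolutions below are exactly the steps the authors replace with light field rendering of motion blur. Images are assumed to be non-negative floats in [0, 1]; the PSF and iteration count are illustrative.

```python
# Sketch: classic 2-D Richardson-Lucy deconvolution with a known, fixed PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
    """Deblur `blurred` given a known point spread function `psf`."""
    estimate = np.full_like(blurred, 0.5)      # flat initial estimate
    psf_flipped = psf[::-1, ::-1]              # adjoint of the blur operator
    for _ in range(iterations):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)    # multiplicative correction
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```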
Sun Jan 13 2019
Computer Vision
LiFF: Light Field Features in Scale and Depth
Feature detectors and descriptors are key low-level vision tools. Unfortunately, these fail in the presence of light transport effects including partial occlusion, low contrast, and reflective or refractive surfaces. We introduce a new and computationally efficient 4D light field feature detector and descriptor: LiFF.
Sat Jun 15 2019
Computer Vision
Image-based 3D Object Reconstruction: State-of-the-Art and Trends in the Deep Learning Era
3D reconstruction is a longstanding ill-posed problem, which has been explored for decades. Since 2015, image-based 3D reconstruction using convolutional neural networks (CNN) has attracted increasing interest. This article provides a comprehensive survey of the recent developments in this field.
Wed May 08 2019
Computer Vision
Handheld Multi-Frame Super-Resolution
The technique uses hand tremor to acquire a burst of raw frames with small offsets. These frames are aligned and merged to form a single image with red, green, and blue values at every pixel site. The algorithm is the basis of the Super-Res Zoom feature on Google's flagship phone.
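A naive shift-and-add merge gives a flavor of the align-and-merge idea, though it is far simpler than the paper's robust kernel-regression merge on raw Bayer data. The sketch below assumes scikit-image, grayscale float frames, and purely translational hand tremor; `shift_and_add` and its parameters are illustrative names.

```python
# Sketch: naive shift-and-add multi-frame super-resolution onto a 2x grid.
import numpy as np
from skimage.registration import phase_cross_correlation

def shift_and_add(frames, scale=2):
    ref = frames[0]
    h, w = ref.shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for img in frames:
        # sub-pixel translation of this frame relative to the reference
        (dy, dx), _, _ = phase_cross_correlation(ref, img, upsample_factor=20)
        # splat every pixel into the nearest cell of the fine grid
        fy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        fx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        np.add.at(acc, (fy, fx), img)
        np.add.at(weight, (fy, fx), 1.0)
    return acc / np.maximum(weight, 1e-6)   # empty cells stay near zero
```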
Tue Feb 03 2015
Computer Vision
ORB-SLAM: a Versatile and Accurate Monocular SLAM System
ORB-SLAM is a feature-based monocular SLAM system that operates in real time. It is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization.
Tue Mar 31 2020
Machine Learning
Supervised Raw Video Denoising with a Benchmark Dataset on Dynamic Scenes
Clean video frames for dynamic scenes cannot be captured with a long-exposure shutter or by averaging multiple shots, as is done for static images. In this paper, we solve this problem by creating motions for controllable objects, such as toys, and capturing each static moment multiple times to generate clean video frames.
Tue Jan 26 2021
Computer Vision
Deep Burst Super-Resolution
Multi-frame super-resolution (MFSR) offers the possibility of reconstructing rich details by combining signal information from multiple shifted images. The increasing popularity of burst photography has made MFSR an important problem for real-world applications.
Sat Mar 28 2020
Computer Vision
A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising
The noise introduced by digital camera electronics is largely overlooked, despite its significant effect on raw measurements. We present a highly accurate noise formation model based on the characteristics of CMOS photosensors. We additionally propose a method to calibrate the noise parameters for available modern digital cameras.
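As a point of comparison, the sketch below simulates only the textbook Poisson-Gaussian component of sensor noise (photon shot noise plus Gaussian read noise); the paper's model is considerably richer and also accounts for effects such as row noise and quantization. Function and parameter names are illustrative.

```python
# Sketch: simplified Poisson-Gaussian sensor noise model (not the paper's
# full noise formation model).
import numpy as np

def simulate_raw_noise(clean_electrons, gain=2.0, read_noise_std=1.5, rng=None):
    """clean_electrons: noise-free image in electrons; returns noisy raw in DN."""
    rng = np.random.default_rng() if rng is None else rng
    shot = rng.poisson(np.clip(clean_electrons, 0, None))          # photon shot noise
    read = rng.normal(0.0, read_noise_std, clean_electrons.shape)  # read noise (e-)
    return gain * (shot + read)                                    # electrons -> DN
```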