Physics to the Rescue: Deep Non-line-of-sight Reconstruction for High-speed Imaging (ICCP 2022)
- Fangzhou Mu1
- Sicheng Mo1
- Jiayong Peng2
- Xiaochun Liu1
- Ji Hyun Nam1
- Siddeshwar Raghavan1
- Andreas Velten1
- Yin Li1
- 1University of Wisconsin-Madison
- 2University of Science and Technology of China
A computational approach to imaging around corners, known as non-line-of-sight (NLOS) imaging, is becoming a reality thanks to major advances in imaging hardware and reconstruction algorithms. In a recent step towards practical NLOS imaging, Nam et al. demonstrated a high-speed non-confocal imaging system that operates at 5 Hz, 100x faster than the prior art. This enormous gain in acquisition rate, however, necessitates numerous approximations in light transport, breaking many existing NLOS reconstruction methods that assume an idealized image formation model. To bridge this gap, we present a novel deep model that incorporates the complementary physics priors of wave propagation and volume rendering into a neural network for high-quality and robust NLOS reconstruction. This orchestrated design regularizes the solution space by relaxing the image formation model, resulting in a deep model that generalizes well to real captures despite being trained exclusively on synthetic data. Further, we devise a unified learning framework that enables our model to be flexibly trained with diverse supervision signals, including target intensity images or even raw NLOS transient measurements. Once trained, our model renders both intensity and depth images at inference time in a single forward pass, processing more than 5 captures per second on a high-end GPU. Through extensive qualitative and quantitative experiments, we show that our method outperforms prior physics- and learning-based approaches on both synthetic and real measurements. We anticipate that our method, together with the fast capture system, will accelerate the development of NLOS imaging for real-world applications that require high-speed imaging.
We consider non-confocal non-line-of-sight (NLOS) imaging based on the time-of-flight principle (left). A pulsed laser located at l0 illuminates a spot on a relay wall. Light bounces off the wall, interacts with the occluded object, scatters back to the wall, and is eventually captured by a time-resolved sensor at position s0 (different from l0).
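The time-of-flight principle above can be sketched in a few lines: a photon's arrival time is set by the total length of its three-bounce path (laser to wall, wall to hidden point, hidden point back to the wall, wall to sensor), and the sensor bins these arrivals into a transient histogram. The sketch below is a deliberately simplified toy model, not the paper's image formation model: the bin width `BIN_RES`, histogram length `N_BINS`, and the omission of radiometric falloff and cosine terms are all illustrative assumptions.

```python
import numpy as np

C = 3e8           # speed of light in air (m/s)
BIN_RES = 32e-12  # temporal bin width (s); hypothetical sensor resolution
N_BINS = 512      # hypothetical histogram length

def transient_histogram(l0, wall_l, x, wall_s, s0, albedo=1.0):
    """Bin the arrival time of a single three-bounce light path.

    Path: laser l0 -> wall point wall_l -> hidden point x
          -> wall point wall_s -> sensor s0.
    Radiometric (1/r^2) falloff and cosine foreshortening are
    omitted for clarity -- this is a toy model.
    """
    pts = [np.asarray(p, dtype=float) for p in (l0, wall_l, x, wall_s, s0)]
    path_len = sum(np.linalg.norm(b - a) for a, b in zip(pts[:-1], pts[1:]))
    hist = np.zeros(N_BINS)
    k = int(path_len / C / BIN_RES)   # arrival-time bin index
    if k < N_BINS:
        hist[k] += albedo
    return hist

# A hidden point 1 m behind the relay wall; laser and sensor 1 m in front.
h = transient_histogram(l0=[0, 0, -1], wall_l=[0, 0, 0],
                        x=[0, 0, 1], wall_s=[0.5, 0, 0], s0=[0.5, 0, -1])
```

A full simulator would sum such contributions over all hidden surface points and all sampled wall positions; the reconstruction problem inverts this mapping.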
Our deep model (right) takes a transient measurement and reconstructs the hidden scene in the form of intensity and/or depth images. One key challenge in high-speed NLOS imaging is the approximations in light transport introduced by the hardware in exchange for acquisition speed. This breaks existing learning-based methods that (1) learn from synthetic data generated under an idealized image formation model, and (2) do not account for the domain gap between real and synthetic data in their model design. To address this, we embed two physics priors into our model to regularize its solution space. This ensures that our model, despite being trained exclusively on synthetic data, generalizes well to real captures.
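To give a flavor of the volume-rendering prior, the sketch below renders intensity and depth images from a reconstructed albedo volume by softmax-compositing along the depth axis. This is a simplified stand-in, not the paper's renderer: the temperature `tau` and the expected-depth formulation are illustrative assumptions. Because the compositing is differentiable, a depth map can emerge from intensity supervision alone, without ground-truth depth labels.

```python
import numpy as np

def render_views(volume, z_vals, tau=0.1):
    """Render intensity and depth from an albedo volume of shape (D, H, W).

    A softmax over the depth axis gives differentiable per-voxel weights;
    intensity is the weighted albedo, and depth is the expected z under
    those weights. This is a simplified stand-in for a learned volume
    renderer, not the paper's exact formulation.
    """
    w = np.exp(volume / tau)
    w /= w.sum(axis=0, keepdims=True)           # weights sum to 1 per pixel
    intensity = (w * volume).sum(axis=0)        # (H, W) intensity image
    depth = (w * z_vals[:, None, None]).sum(axis=0)  # (H, W) expected depth
    return intensity, depth
```

In a training pipeline, gradients flow through both outputs back into the volume (or the network that predicts it), which is why no depth labels are required.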
Here we demonstrate our reconstruction results on several challenging real captures. Compared to the state-of-the-art physics-based method (RSD) and learning-based methods (LFE and NeTF), our method produces sharp and artifact-free reconstructions and infers plausible depth of the objects without being trained on ground-truth depth.
Comparison to NeTF
Our transient volume renderer can be readily adapted for per-scene iterative optimization (NeTF++) and compares favorably to Neural Transient Fields (NeTF). Note that we keep the image contrast unaltered for a fair comparison.
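The per-scene optimization idea can be sketched as an inverse problem: repeatedly render a transient from the current scene estimate, compare it to the measurement, and update the estimate by gradient descent. The toy below replaces both NeTF's neural field and our transient volume renderer with a linear voxel model `A @ x` (the light-transport matrix `A`, step size `lr`, and step count `steps` are all illustrative assumptions), keeping only the optimization structure.

```python
import numpy as np

def fit_volume(A, y, steps=200, lr=0.1):
    """Per-scene fit: recover nonnegative voxel albedos x such that the
    linear transient model A @ x matches the measured histogram y.

    A is a hypothetical (n_bins, n_voxels) light-transport matrix; a
    real method would use a differentiable renderer (and, in NeTF, an
    MLP scene representation) in place of this linear model.
    """
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - y)            # gradient of 0.5 * ||A x - y||^2
        x = np.maximum(x - lr * grad, 0.0)  # projected step: keep x >= 0
    return x
```

The per-scene approach needs no training data, but it pays for that with many render-and-backpropagate iterations per capture, whereas the feed-forward model reconstructs in a single pass.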