Real Results: Text Scene

In this sequence, the two cameras used in the driving scene are stationary. A sheet of paper with text and line art was moved in front of the cameras.


Frame 2

[Comparison images for Frame 2: noisy input (adaptive exposure), CBM3D, Liu and Freeman, our algorithm]

The CBM3D algorithm significantly over-smooths the text and the flower filament. The readability of the text in our result is comparable to that of Liu and Freeman's.

Frame 35

[Comparison images for Frame 35: noisy input (adaptive exposure), CBM3D, Liu and Freeman, our algorithm]

CBM3D over-smooths the text, and Liu and Freeman's method fades the Gaussian curves; our method preserves both better.

Like CBM3D, our method produces slight artifacts in the blank regions of the paper due to structured noise in the input. Liu and Freeman's method smooths the blank regions more consistently.

Video


Left: A video captured with a constant 1/30 second exposure time on a Canon EOS 7D.
Middle: A video captured using motion-based exposure control which we implemented on a Point Grey Grasshopper.
Right: Our denoising result for the motion-based exposure control video.

In frames with large motion, the text is more readable in our denoised images than in the motion-blurred frames of the constant-exposure sequence.
