Outdoor images often suffer from low contrast and limited visibility due to haze: small particles such as dust, mist, and fumes that deflect light from its original course of propagation. Haze has two effects on an image: it weakens the image contrast, and it adds an additive component to the image, the so-called airlight. Recovering a haze-free image restores the visibility of the scene and corrects the color shift caused by the airlight. Furthermore, dehazing benefits many computer vision algorithms, which usually assume that the input image after radiometric calibration is the scene radiance and therefore suffer from a biased, low-contrast input. Last but not least, since haze depends on the unknown scene depth, depth estimation is usually a by-product of dehazing, and the recovered depth can be used for other applications.
Approach and Implementation:
In this project, we have performed single image dehazing and also video dehazing using the algorithm proposed by He et al. Given the hazy image, we compute its dark channel. Based on the assumption that the dark channel of a haze-free image is close to zero, we obtain a raw transmission map. We then refine the transmission by solving a sparse linear system or by using a guided filter. Finally, we use the refined transmission and the estimated atmospheric light to calculate the scene radiance, which is the resulting haze-free image. Furthermore, we can also recover scene depth from the refined transmission map.
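The single-image pipeline above can be sketched as follows. This is a minimal numpy illustration, not our full implementation: the patch size, the weight ω = 0.95, the lower bound t0 = 0.1, and the "brightest 0.1% of dark-channel pixels" rule for the atmospheric light follow He et al.'s paper, and all images are assumed to be floats in [0, 1].

```python
import numpy as np

def dark_channel(img, patch=15):
    """Minimum over the RGB channels followed by a min filter over a
    patch x patch window (edge-padded)."""
    h, w, _ = img.shape
    mins = img.min(axis=2)
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')
    out = np.full((h, w), np.inf)
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    """Sketch of dark-channel-prior dehazing for a float image in [0, 1]."""
    dark = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% of pixels
    # in the dark channel.
    n = max(1, dark.size // 1000)
    idx = np.argsort(dark.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Raw transmission from the dark channel of the normalized image.
    t = 1.0 - omega * dark_channel(img / A, patch)
    # Recover scene radiance: J = (I - A) / max(t, t0) + A.
    J = (img - A) / np.maximum(t, t0)[..., None] + A
    return np.clip(J, 0.0, 1.0), t
```

In practice the raw `t` returned here is what we pass to the refinement step (sparse linear system or guided filter) before the final recovery.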
Besides single image dehazing, we have also conducted video dehazing. Instead of dehazing each frame of the video independently with the method described above, we perform video dehazing more efficiently. We first calculate the atmospheric light and the refined transmission map from the first frame of the input video. By assuming that the atmospheric light is constant across all frames, we reuse the atmospheric light estimated from the first frame to recover every frame, which saves the cost of estimating it for each frame. To compute the transmission map of each frame, we first compute the intensity difference between the current frame and the previous frame, and then, based on this difference and the previous frame's transmission map, we calculate the transmission map of the current frame. More specifically, we compute the transmission map of the first frame from scratch using the single image dehazing method, then calculate the transmission map of the second frame from the first frame's transmission map and the intensity difference between the two frames, and repeat this procedure recursively for the following frames. Video dehazing with this method is much faster than the naive solution, which performs dehazing on each frame independently.
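The per-frame update can be illustrated with the following sketch. The exact update rule is the part specific to our project, so everything here is a hypothetical simplification: the `thresh` value, the use of a mean-intensity difference, and the `recompute` callback (standing in for the single-image transmission estimate restricted to the changed pixels) are illustrative choices, not the definitive implementation.

```python
import numpy as np

def update_transmission(t_prev, frame_prev, frame_cur, recompute, thresh=0.05):
    """Propagate the previous frame's transmission map: pixels whose mean
    intensity changed by less than `thresh` keep their previous
    transmission; only the changed pixels are re-estimated via
    `recompute(frame, mask)` (hypothetical helper)."""
    diff = np.abs(frame_cur.mean(axis=2) - frame_prev.mean(axis=2))
    changed = diff > thresh
    t = t_prev.copy()
    if changed.any():
        t[changed] = recompute(frame_cur, changed)
    return t
```

Since consecutive frames typically differ in only a small region, most of the transmission map is carried over for free, which is where the speedup over per-frame dehazing comes from.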
In the experiments, we compute the raw transmission map using the dark channel prior of He et al., and then filter the raw transmission map under the guidance of the hazy input image. The results below show the recovered images, raw depth maps, and refined depth maps. As can be seen, the refined depth maps are sharp near depth edges and consistent with the input images. The atmospheric lights in these images are estimated automatically and are indicated by the red pixels in the first column of images. The proposed approach recovers fine details and vivid colors even in heavily hazy regions.
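A minimal guided filter, used here as the refinement step, can be sketched as below. This assumes a grayscale guidance image and input, both floats in [0, 1]; the radius `r` and regularization `eps` are typical illustrative values, and the box filter is implemented with a summed-area table.

```python
import numpy as np

def box(a, r):
    """Mean over a (2r+1) x (2r+1) window, edge-padded, via a
    summed-area table."""
    k = 2 * r + 1
    p = np.pad(a, r, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(I, p, r=30, eps=1e-3):
    """Filter input p under the guidance of image I: the output is a
    locally linear function of I, so it follows I's edges while
    smoothing p elsewhere."""
    mean_I, mean_p = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mean_I * mean_p
    var_I = box(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a, r) * I + box(b, r)
```

Filtering the raw transmission with the hazy image as guide transfers the image's edges into the transmission map, which is why the refined depth maps are sharp near depth edges.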
A key parameter in the algorithm is the patch size. The results show haze removal with different patch sizes. With a smaller patch size, the recovered scene radiance is oversaturated. The larger the patch size, the darker the dark channel, and consequently the dark channel of the scene radiance after haze removal is closer to zero. On the other hand, the assumption that the transmission is constant within a patch becomes less appropriate, so if the patch size is too large, halos near depth edges in the recovered image may grow larger before refinement. When the patch size is 3 × 3, the colors look oversaturated; the results appear more natural with larger patch sizes. This shows that the method works well for sufficiently large patch sizes. Although large patches produce halos near depth edges, the subsequent guided filtering reduces these artifacts. We also notice that results obtained with larger patch sizes look slightly hazier, but the differences are small. Typically, for an image of size 600 × 400, a 15 × 15 patch is large enough to produce satisfactory results. The haze removal results in this report are produced using a 15 × 15 patch unless explicitly stated.
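The patch-size effect has a simple explanation that can be checked directly: a larger patch takes the minimum over a superset of pixels, so the dark channel can only get darker, which in turn raises the estimated transmission and leaves the result slightly hazier. A small self-contained check (random image; `dark_channel` as defined for this sketch):

```python
import numpy as np

def dark_channel(img, patch):
    """Per-pixel RGB minimum followed by a min filter over a
    patch x patch window (edge-padded)."""
    h, w, _ = img.shape
    mins = img.min(axis=2)
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')
    out = np.full((h, w), np.inf)
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out

img = np.random.rand(64, 64, 3)
d3 = dark_channel(img, 3)
d15 = dark_channel(img, 15)
# Every 3 x 3 window is contained in the 15 x 15 window centered on the
# same pixel, so the 15 x 15 dark channel is pointwise darker (<=).
assert (d15 <= d3 + 1e-12).all()
```

Conversely, a tiny patch keeps the dark channel bright, underestimates the haze thickness, and drives the recovery toward oversaturation.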
Another important input parameter for the algorithm is the scattering coefficient of the atmosphere, β. When the atmosphere is homogeneous, the scene radiance is attenuated exponentially with depth, so if we know the transmission, we can recover the depth up to an unknown scale. The results below show recovery with different β values. To get the best haze-free results, we experiment with different β values by trial and error. As β increases, the recovered images become darker and less hazy, and the colors appear oversaturated.
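Concretely, for a homogeneous atmosphere the transmission is t(x) = exp(-β · d(x)), so the depth follows by inverting this relation. A minimal sketch (the clamp `t0` is an assumed safeguard against taking the log of near-zero transmission):

```python
import numpy as np

def depth_from_transmission(t, beta, t0=0.1):
    """Invert t = exp(-beta * d) to get depth d = -ln(t) / beta.
    If beta is unknown, the depth is only recovered up to scale."""
    return -np.log(np.maximum(t, t0)) / beta
```

With an assumed rather than measured β, the output is a relative depth map; changing β rescales all depths uniformly.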
The dark channel prior method is a simple but quite efficient way to dehaze images. By performing haze removal on images or videos, we recover the scene radiance and also obtain a depth map as a by-product. However, the method has some limitations. One is that it cannot correctly recover the scene radiance of inherently white or grayish objects. Another is that when the atmospheric light is not constant over the scene, the dark channel prior method does not produce satisfactory results.
Since the current algorithm requires tuning parameters such as the dark channel patch size, one possible direction for future work is to design a more robust algorithm. Another aspect that needs further work is suppressing the visual artifacts in the result images.