Saliency based framework for thermal and visual image fusion
Date Issued
2015-01-01
Author(s)
Saha, Ashirbani
Bhatnagar, Gaurav
Wu, Q. M. Jonathan
Abstract
In this work, we propose an efficient framework for integrating or fusing thermal and visual images. This category of fusion finds application in navigation and surveillance systems. The main idea in this category of fusion is to extract significant information from both the thermal and visual images and combine it to form the fused image. The fused image represents the entire scene better than either source image alone. Since human beings are the principal judges of navigation and surveillance systems, we explore the ability of the visual attention property of the human visual system (HVS) to generate the fused image. Generally, techniques for modeling visual attention generate saliency maps in order to highlight the relative importance of pixels (or regions) in an image according to the HVS. In the proposed approach, we use saliency maps computed from the source images to combine the significant thermal information with the significant visual information. The saliency maps serve two purposes. Firstly, they highlight the salient areas in the thermal and visual images. Secondly, the saliency values are used to compute weights for generating the final fused image. Hence, the salient parts of the visual and thermal images are retained in the fused image. Different types of visual attention modeling techniques are used in the framework to demonstrate their relative performance in thermal and visual image fusion. Although the chosen modeling technique plays a key role in the performance of the fusion framework, our experiments on various image sets demonstrate the promise of the proposed approach in terms of visual inspection and different objective evaluation criteria. The novelty of the approach lies in developing a framework for saliency-based fusion of thermal and visual information.
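As a rough illustration of the idea summarized above, the sketch below fuses a thermal and a visual image by turning their saliency maps into pixel-wise weights. This is a minimal sketch under stated assumptions: the saliency model used here (a crude spectral-residual-style map) and the per-pixel weight normalization stand in for whichever visual attention models and weighting rule the paper actually evaluates, and the helper names are hypothetical.

```python
# Illustrative saliency-weighted fusion of a thermal and a visual image.
# The saliency model and the weighting rule are placeholders, not the
# exact formulation used in the paper.

import numpy as np


def spectral_residual_saliency(img):
    """Crude spectral-residual-style saliency map (placeholder attention model)."""
    f = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Subtract a 3x3 local average of the log amplitude (the "residual").
    padded = np.pad(log_amp, 1, mode="edge")
    h, w = img.shape
    avg = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    # Normalize to [0, 1] so the maps of both source images are comparable.
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)


def fuse(thermal, visual):
    """Fuse two grayscale images using their saliency maps as pixel-wise weights."""
    s_t = spectral_residual_saliency(thermal)
    s_v = spectral_residual_saliency(visual)
    w_t = s_t / (s_t + s_v + 1e-12)  # weights sum to 1 at every pixel
    w_v = 1.0 - w_t
    return w_t * thermal + w_v * visual


if __name__ == "__main__":
    # Synthetic inputs; in practice these would be co-registered grayscale images.
    rng = np.random.default_rng(0)
    thermal = rng.random((128, 128))
    visual = rng.random((128, 128))
    fused = fuse(thermal, visual)
    print(fused.shape, float(fused.min()), float(fused.max()))
```

The key design point mirrored from the abstract is that the saliency maps do double duty: they identify salient regions in each source image and, once normalized against each other, provide the weights that decide how much each source contributes to every pixel of the fused result.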