New method for the fusion of complementary information from infrared and visual images for object detection

Visual and infrared cameras have complementary properties, and using them together may improve the performance of object detection applications. Although fusing visual and infrared information yields a better recall rate than using either domain alone, it also lowers the precision rate, since the infrared domain on its own consistently achieves higher precision. Fusion of the two domains is therefore worthwhile mainly for improving recall, that is, for detecting more foreground pixels correctly. This study presents a simpler and computationally more efficient method for extracting the complementary information from both domains and fusing it to obtain better recall rates than previously achieved. The method has been tested on a well-known database and on a database created for this study, and compared with earlier fusion methods.
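The precision/recall trade-off summarised above can be illustrated with a simple mask-level fusion. The sketch below is an illustration only, not the method proposed in the paper: it OR-fuses binary foreground masks assumed to come from visual and infrared background subtraction, and computes pixel-level precision and recall against a ground-truth mask. The mask names and the OR rule are assumptions made for this sketch.

```python
# Illustrative sketch (not the paper's method): OR-fusion of binary foreground
# masks from the visual and infrared domains, with pixel-level precision and
# recall measured against a ground-truth mask.
import numpy as np


def fuse_masks_or(mask_visual: np.ndarray, mask_infrared: np.ndarray) -> np.ndarray:
    """Union of the two foreground masks: a pixel is foreground if either
    domain marks it as foreground (favours recall over precision)."""
    return np.logical_or(mask_visual, mask_infrared)


def precision_recall(detected: np.ndarray, ground_truth: np.ndarray) -> tuple:
    """Pixel-level precision and recall of a binary detection mask."""
    tp = np.logical_and(detected, ground_truth).sum()    # true positives
    fp = np.logical_and(detected, ~ground_truth).sum()   # false positives
    fn = np.logical_and(~detected, ground_truth).sum()   # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall


if __name__ == "__main__":
    # Toy 4x4 example: each domain detects only part of the true object.
    gt = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0]], dtype=bool)
    vis = np.array([[1, 1, 0, 0],
                    [0, 1, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 0]], dtype=bool)   # misses right half, one false pixel
    ir = np.array([[0, 0, 1, 0],
                   [0, 0, 1, 0],
                   [0, 0, 1, 0],
                   [0, 0, 0, 0]], dtype=bool)    # misses left half, no false pixels
    fused = fuse_masks_or(vis, ir)
    for name, mask in [("visual", vis), ("infrared", ir), ("fused", fused)]:
        p, r = precision_recall(mask, gt)
        print(f"{name:8s} precision={p:.2f} recall={r:.2f}")
```

On these toy masks the infrared mask alone has the highest precision, while the fused mask reaches full recall at a lower precision, mirroring the behaviour described in the abstract.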
