Development of saliency-based seamless image compositing using hybrid blending (SSICHB)

IET Image Processing

With the advancement of computer vision and graphics tools, digital compositing has become an integral part of present-day computer-generated visual effects. However, factors such as inaccurate mask generation, intensive user interaction, and boundary seams created by colour or texture differences make it hard to achieve quality composites. Poisson editing efficiently generates seamless composites but produces undesirable colour bleeding. Multi-resolution blending, in contrast, produces colour-consistent composites; however, it often introduces blurry boundaries around the inserted source object. Motivated by these observations, the authors propose a colour-consistent seamless compositing pipeline that integrates two new approaches. First, the authors use a visual attention algorithm based on colour difference with increased edge weight, computed using a Gaussian filter bank (GFB), to minimise user interaction during mask generation. Second, the authors propose a hybrid framework that combines the strengths of two blending methods, namely modified Poisson editing (MPE) and GFB-based multi-resolution blending, to create colour-consistent seamless composites. Extensive experiments have been carried out on challenging datasets to validate the proposed technique. Comparison with state-of-the-art techniques shows the efficacy of the authors' algorithm in generating colour-consistent, seamless, and natural-looking composites for present image editing applications.
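The multi-resolution side of such a pipeline can be illustrated with a minimal band-by-band blend: each image is decomposed into Gaussian bands, fine bands are combined with a sharp mask and coarse bands with a progressively softened mask, which hides seams without the hard edge a direct cut-and-paste would leave. This is only an illustrative NumPy sketch of generic Gaussian-filter-bank blending, not the authors' MPE/GFB implementation; the function names and the choice of scales are hypothetical.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    # Separable Gaussian filter with reflect padding (one filter-bank stage).
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode="reflect")
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 0, padded)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, out)
    return out

def multiscale_blend(src, dst, mask, sigmas=(1.0, 2.0, 4.0)):
    # Blend band by band: fine detail follows a sharp mask, coarse
    # structure follows a blurred mask, so no visible seam remains.
    g_src, g_dst = src.astype(float), dst.astype(float)
    g_mask = mask.astype(float)
    out = np.zeros_like(g_src)
    for sigma in sigmas:
        b_src = gaussian_blur(g_src, sigma)
        b_dst = gaussian_blur(g_dst, sigma)
        # Band-pass components (current level minus next coarser level).
        out += g_mask * (g_src - b_src) + (1.0 - g_mask) * (g_dst - b_dst)
        g_src, g_dst = b_src, b_dst
        g_mask = gaussian_blur(g_mask, sigma)  # soften mask for coarser bands
    # Coarsest residual blended with the softest mask.
    out += g_mask * g_src + (1.0 - g_mask) * g_dst
    return out
```

Because the bands telescope back to the original image, a mask of all ones reproduces the source exactly and all zeros reproduces the destination; intermediate masks transition smoothly between the two at each scale.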

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2016.0754