Image colourisation using deep feature-guided image retrieval

IET Image Processing

In this study, the authors colourise a greyscale image using a fully automated framework that retrieves similar images from a reference database and then transfers colour from the most similar retrieved images to the target. Inspired by the recent success of deep learning techniques in extracting semantic information from images, they first use fc7 features from AlexNet to retrieve similar images from the reference database. The top-k retrieved images are then considered for colour transfer to the target greyscale image, using various pixel-level features. The images resulting from this step are colour-enhanced with Reinhard stain normalisation, and a pixel-wise, colour-saturation-based averaging technique imparts colour at the pixel level. The final image is rectified using joint bilateral filtering. The resulting coloured images have a realistic appearance, similar in quality to the original colour images. The proposed method outperforms several previous colourisation techniques, both quantitatively and qualitatively, and also enhances low-contrast images.
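
As a rough illustration of the retrieval stage, the sketch below (assuming PyTorch/torchvision) extracts fc7 descriptors from a pretrained AlexNet and ranks reference images by cosine similarity. The helper names (fc7_descriptor, top_k_similar), the choice of cosine similarity and the ImageNet preprocessing values are illustrative assumptions, not details taken from the paper.

```python
# Sketch of deep feature-guided retrieval: fc7 activations from a
# pretrained AlexNet serve as a global descriptor, and the top-k most
# similar reference images are returned by cosine similarity.
# Names and parameter choices are illustrative, not the authors' code.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# AlexNet's classifier is (Dropout, Linear, ReLU, Dropout, Linear, ReLU, Linear);
# truncating after index 5 keeps the 4096-d fc7 activations.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.classifier = torch.nn.Sequential(*list(alexnet.classifier.children())[:6])
alexnet.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fc7_descriptor(path):
    """Return an L2-normalised 4096-d fc7 descriptor for one image."""
    img = Image.open(path).convert("RGB")  # greyscale inputs are replicated to 3 channels
    with torch.no_grad():
        feat = alexnet(preprocess(img).unsqueeze(0)).squeeze(0)
    return feat / feat.norm()

def top_k_similar(target_path, reference_paths, k=5):
    """Rank reference images by cosine similarity of fc7 descriptors."""
    query = fc7_descriptor(target_path)
    scored = [(float(query @ fc7_descriptor(p)), p) for p in reference_paths]
    scored.sort(reverse=True)
    return scored[:k]
```

The top-k list returned here would correspond to the candidate reference images from which colour is subsequently transferred.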

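The colour-enhancement step relies on Reinhard-style statistics matching, i.e. shifting and scaling each colour channel of an image so that its mean and standard deviation match those of a reference. A minimal sketch is shown below; it works in CIELAB via scikit-image rather than the lαβ space of Reinhard et al., and the function name and the epsilon guard are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of Reinhard-style colour normalisation: per-channel
# mean/std of the source are matched to those of a reference image.
# CIELAB is used here for simplicity; Reinhard et al. originally used
# the lab (log-opponent) space.
import numpy as np
from skimage import color

def reinhard_transfer(source_rgb, reference_rgb):
    """Match per-channel mean and std of source_rgb to reference_rgb."""
    src = color.rgb2lab(source_rgb)
    ref = color.rgb2lab(reference_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        # Centre, rescale to the reference spread, then recentre.
        out[..., c] = (src[..., c] - s_mu) * (r_sd / (s_sd + 1e-8)) + r_mu
    return np.clip(color.lab2rgb(out), 0.0, 1.0)
```

In the pipeline above, such a transfer would be applied to the intermediate colourised images before the saturation-based averaging and the final joint bilateral filtering.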

DOI: 10.1049/iet-ipr.2018.6169