
Flower classification using deep convolutional neural networks

Flower classification is a challenging task due to the wide range of flower species, many of which share a similar shape, appearance, or surrounding objects such as leaves and grass. In this study, the authors propose a novel two-step deep learning classifier to distinguish flowers across a wide range of species. First, the flower region is automatically segmented to allow localisation of the minimum bounding box around it. The proposed flower segmentation approach is modelled as a binary classifier in a fully convolutional network framework. Second, they build a robust convolutional neural network classifier to distinguish the different flower types. They propose novel steps during the training stage to ensure robust, accurate, and real-time classification. They evaluate their method on three well-known flower datasets. Their classification results exceed 97% on all datasets, outperforming the state-of-the-art in this domain.
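The two-step pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `segment_fn` stands in for the fully convolutional segmentation network and `classifier_fn` for the CNN species classifier, both of which are hypothetical placeholders here; only the bounding-box localisation step is implemented concretely.

```python
import numpy as np

def min_bounding_box(mask):
    """Return (top, left, bottom, right) of the minimal box enclosing
    the foreground (non-zero) pixels of a binary flower mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no flower region detected
    return (int(ys.min()), int(xs.min()),
            int(ys.max()) + 1, int(xs.max()) + 1)

def classify_flower(image, segment_fn, classifier_fn):
    """Two-step pipeline: (1) binary segmentation of the flower region,
    (2) crop to the minimum bounding box, (3) classify the crop."""
    mask = segment_fn(image)                # step 1: FCN binary segmentation
    box = min_bounding_box(mask)
    if box is None:
        crop = image                        # fall back to the full image
    else:
        top, left, bottom, right = box
        crop = image[top:bottom, left:right]
    return classifier_fn(crop)              # step 2: CNN species classifier
```

Cropping to the bounding box before classification removes background clutter such as leaves and grass, which is the motivation the abstract gives for the segmentation step.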

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2017.0155