Flower classification using deep convolutional neural networks

Flower classification is a challenging task due to the wide range of flower species, many of which share a similar shape and appearance or are surrounded by similar objects such as leaves and grass. In this study, the authors propose a novel two-step deep-learning classifier to distinguish flowers of a wide range of species. First, the flower region is automatically segmented to allow localisation of the minimum bounding box around it; the proposed flower segmentation approach is modelled as a binary classifier in a fully convolutional network (FCN) framework. Second, they build a robust convolutional neural network (CNN) classifier to distinguish the different flower types, proposing novel steps during the training stage to ensure robust, accurate and real-time classification. They evaluate their method on three well-known flower datasets. Their classification accuracy exceeds 97% on all datasets, which is better than the state of the art in this domain.
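The abstract describes a two-step pipeline: an FCN segments the flower from the background, the minimum bounding box around the segmented region is cropped out, and a CNN classifies the crop. The sketch below illustrates that flow in PyTorch under stated assumptions: the torchvision FCN-ResNet50 and ResNet-50 backbones, the 102-class output (matching the widely used Oxford-102 flower dataset), and the 224×224 crop size are all illustrative stand-ins, not the authors' actual architectures or training procedure.

```python
import torch
import torchvision.transforms.functional as TF
from torchvision.models import resnet50
from torchvision.models.segmentation import fcn_resnet50

# Build the two networks once. Both are randomly initialised placeholders;
# the paper trains its own segmentation and classification models.
segmenter = fcn_resnet50(num_classes=2).eval()   # flower vs. background
classifier = resnet50(num_classes=102).eval()    # 102 species is illustrative


def min_bounding_box(mask: torch.Tensor):
    """Tightest (top, left, height, width) box around a binary mask."""
    ys, xs = torch.nonzero(mask, as_tuple=True)
    if ys.numel() == 0:                      # nothing segmented: use full image
        return 0, 0, mask.shape[0], mask.shape[1]
    top, left = int(ys.min()), int(xs.min())
    return top, left, int(ys.max()) - top + 1, int(xs.max()) - left + 1


@torch.no_grad()
def classify_flower(image: torch.Tensor) -> int:
    """image: float tensor of shape (3, H, W), values in [0, 1]."""
    # Step 1: binary segmentation of the flower region in an FCN framework.
    logits = segmenter(image.unsqueeze(0))["out"][0]   # (2, H, W)
    mask = logits.argmax(dim=0) == 1                   # flower = class 1

    # Localise the minimum bounding box around the flower and crop to it.
    top, left, h, w = min_bounding_box(mask)
    crop = TF.resized_crop(image, top, left, h, w, [224, 224])

    # Step 2: CNN classification of the cropped flower region.
    return int(classifier(crop.unsqueeze(0)).argmax(dim=1))
```

Note that this only sketches inference: segmentation-driven cropping removes background clutter before the species classifier sees the image. The paper's contribution also includes novel training-stage steps for both networks, which are not reproduced here.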
