Varied channels region proposal and classification network for wildlife image classification under complex environment

A varied channels region proposal and classification network (VCRPCN) is developed for automatic wildlife classification in camera-trap images, based on a deep convolutional neural network (DCNN) and the characteristic appearance changes that animals introduce into a scene. The architecture feeds different channels into different components of the network to serve different aims: the animal images together with their background images are fed into the region proposal component to extract candidate regions for the animal's location, while the animal images combined with those region candidates are fed into the classification component to identify the animal's category. This novel architecture exploits the changes an animal's appearance causes in the image, identifying potential animal regions and extracting their local features for description and classification. Five hundred low-contrast animal images, all acquired at night, were collected, and cross-validation is employed to statistically measure the performance of the proposed algorithm. The experimental results demonstrate that, compared with the well-known object detection network Faster R-CNN under the same dataset and training configuration, the proposed VCRPCN achieves higher accuracy, with an average improvement of 21%.
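
To make the varied-channels design concrete, below is a minimal PyTorch-style sketch, not the authors' implementation: the layer sizes, the single-anchor proposal head, and the use of torchvision's roi_align to pool proposal regions are all illustrative assumptions. What it shows is the channel split the abstract describes: the proposal branch consumes the animal image stacked with its background image (six channels), while the classification branch sees only the three-channel animal image plus the proposed regions.

```python
# Minimal sketch of the varied-channels idea (assumptions: layer sizes,
# single-anchor proposal head, roi_align pooling; not the authors' code).
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class VariedChannelsNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Region-proposal branch: 6 input channels = animal image (RGB)
        # stacked with its background image (RGB), so appearance changes
        # caused by the animal are visible to the proposal stage.
        self.proposal_backbone = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # One objectness score and 4 box offsets per spatial location
        # (a single-anchor stand-in for a full proposal head).
        self.objectness = nn.Conv2d(64, 1, 1)
        self.box_deltas = nn.Conv2d(64, 4, 1)
        # Classification branch: sees only the 3-channel animal image;
        # proposals from the other branch select the regions to classify.
        self.cls_backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, animal, background, rois):
        # Proposal branch consumes the stacked animal/background channels.
        p = self.proposal_backbone(torch.cat([animal, background], dim=1))
        scores, deltas = self.objectness(p), self.box_deltas(p)
        # Classification branch pools animal-image features inside each
        # proposed region; rois is [K, 5] = (batch index, x1, y1, x2, y2).
        f = self.cls_backbone(animal)
        crops = roi_align(f, rois, output_size=(7, 7), spatial_scale=0.25)
        logits = self.classifier(crops.flatten(1))
        return scores, deltas, logits

# Smoke test with random data: one image, one proposal box.
net = VariedChannelsNet(num_classes=5)
animal = torch.randn(1, 3, 224, 224)
background = torch.randn(1, 3, 224, 224)
rois = torch.tensor([[0.0, 10.0, 10.0, 120.0, 120.0]])
scores, deltas, logits = net(animal, background, rois)
print(scores.shape, deltas.shape, logits.shape)
```

In this sketch the two branches never share weights, so each can specialise: the six-channel branch learns where the scene differs from its background, and the three-channel branch learns what the proposed region contains.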

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2019.1042