Bionic RSTN invariant feature extraction method for image recognition and its application

Extracting rotation, scaling, translation, and noise (RSTN) invariant features inspired by biological vision is of great significance for image recognition. A bionic RSTN-invariant feature extraction method is proposed. The extraction process comprises two stages. In the first stage, a novel orientation edge detection is designed based on a filter-to-filter scheme: Gabor filters, as the bottom filter, smooth the image by simulating biological vision, while bipolar filters, as the top filter, detect horizontal and vertical orientation edges by simulating the response of the visual cortex. In the second stage, after the orientation edges of the image have been obtained, an interval detector measures the spatial frequency along different directions and at different distances, and the interval detection results are transformed into the pixels of an orientation-interval feature map. RSTN-invariant features are generated by repeating the orientation edge detection and interval detection. Experimental results demonstrate that the RSTN-invariant features are strikingly robust and capable of classifying RSTN images. Finally, the bionic invariant features are applied to traffic sign recognition.
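The abstract does not specify the filter parameters, the exact form of the bipolar filter, or how the interval detector is computed, so the Python sketch below only illustrates the shape of the two-stage pipeline. Everything in it is an assumption: the function names (gabor_kernel, bipolar_kernel, orientation_edges, interval_feature), the parameter values, the two-lobed difference kernel standing in for the bipolar filter, and the mean-based threshold and roll-and-AND interval test are all placeholders, not the authors' method.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lambd=6.0, gamma=0.5, psi=0.0):
    """Bottom filter: a standard 2-D Gabor kernel that smooths the image
    while keeping structure along the orientation theta (assumed values)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xt = x * np.cos(theta) + y * np.sin(theta)
    yt = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xt**2 + (gamma * yt)**2) / (2.0 * sigma**2)) \
        * np.cos(2.0 * np.pi * xt / lambd + psi)
    return g / np.abs(g).sum()

def bipolar_kernel(theta):
    """Top filter (stand-in): a two-lobed excitatory/inhibitory pair,
    loosely mimicking a bipolar-cell response; theta == 0 differences
    along x, any other value differences along y."""
    k = np.array([[-1.0, 1.0]])
    return k if theta == 0.0 else k.T

def orientation_edges(image, theta):
    """Stage 1, filter-to-filter scheme: Gabor smoothing followed by
    bipolar differencing, thresholded into a binary orientation-edge map."""
    smoothed = convolve(image.astype(float), gabor_kernel(theta=theta))
    response = convolve(smoothed, bipolar_kernel(theta))
    return np.abs(response) > np.abs(response).mean()  # placeholder threshold

def interval_feature(edge_map, distance, axis):
    """Stage 2, interval detection (sketched): mark pixels whose edge
    recurs `distance` pixels away along `axis`, a crude spatial-frequency
    test; the marks become pixels of an orientation-interval feature map."""
    return edge_map & np.roll(edge_map, distance, axis=axis)

# Usage: horizontal-orientation edges of a stand-in image, then a
# 4-pixel interval test along the x axis.
img = np.random.rand(64, 64)
edges = orientation_edges(img, theta=0.0)
feat = interval_feature(edges, distance=4, axis=1)
```

In this reading, repeating orientation_edges and interval_feature over a set of orientations and interval distances, then pooling the resulting maps, would yield the RSTN-invariant feature description the abstract refers to.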

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-ipr.2016.0326