Driving posture recognition by convolutional neural networks

IET Computer Vision, 2016, 10, (2), doi: 10.1049/iet-cvi.2015.0175 (open access)

References

1. WHO: ‘World report on road traffic injury prevention’, 2004. Available at: http://www.who.int/violence_injury_prevention/publications/road_traffic/world_report/en/.
10. Teyeb, I., Jemai, O., Zaied, M., et al.: ‘A drowsy driver detection system based on a new method of head posture estimation’, in Corchado, E., Lozano, J., Quintián, H., Yin, H. (Eds.): ‘Intelligent data engineering and automated learning – IDEAL 2014’ (LNCS, 8669) (Springer International Publishing, 2014), pp. 362–369.
11. Teyeb, I., Jemai, O., Zaied, M., et al.: ‘A novel approach for drowsy driver detection using head posture estimation and eyes recognition system based on wavelet network’. Fifth Int. Conf. on Information, Intelligence, Systems and Applications (IISA 2014), 2014, pp. 379–384.
19. Le, Q., Zou, W., Yeung, S., et al.: ‘Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis’. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2011, pp. 3361–3368, doi: 10.1109/CVPR.2011.5995496.
23. Hu, B., Lu, Z., Li, H., et al.: ‘Convolutional neural network architectures for matching natural language sentences’, in Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K. (Eds.): ‘Advances in neural information processing systems 27’ (Curran Associates Inc., 2014), pp. 2042–2050.
24. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ‘ImageNet classification with deep convolutional neural networks’. Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.
25. Krause, J., Gebru, T., Deng, J., et al.: ‘Learning features and parts for fine-grained recognition’. Twenty-Second Int. Conf. on Pattern Recognition (ICPR), 2014, pp. 26–33, doi: 10.1109/ICPR.2014.15.
26. Simonyan, K., Zisserman, A.: ‘Two-stream convolutional networks for action recognition in videos’, in Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K. (Eds.): ‘Advances in neural information processing systems 27’ (Curran Associates Inc., 2014), pp. 568–576.
27. Zhang, N., Paluri, M., Ranzato, M., et al.: ‘PANDA: pose aligned networks for deep attribute modeling’. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1637–1644.
28. Girshick, R., Donahue, J., Darrell, T., et al.: ‘Rich feature hierarchies for accurate object detection and semantic segmentation’. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 580–587.
30. Weinzaepfel, P., Revaud, J., Harchaoui, Z., et al.: ‘DeepFlow: large displacement optical flow with deep matching’. IEEE Int. Conf. on Computer Vision (ICCV), 2013, pp. 1385–1392, doi: 10.1109/ICCV.2013.175.
31. Yi, D., Lei, Z., Liao, S., et al.: ‘Deep metric learning for person re-identification’. Twenty-Second Int. Conf. on Pattern Recognition (ICPR), 2014, pp. 34–39.
32. Taigman, Y., Yang, M., Ranzato, M., et al.: ‘DeepFace: closing the gap to human-level performance in face verification’. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1701–1708.
33. Sun, Y., Wang, X., Tang, X.: ‘Deep learning face representation from predicting 10,000 classes’. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1891–1898, doi: 10.1109/CVPR.2014.244.
34. Ngiam, J., Chen, Z., Bhaskar, S.A., et al.: ‘Sparse filtering’, in Shawe-Taylor, J., Zemel, R., Bartlett, P., Pereira, F., Weinberger, K. (Eds.): ‘Advances in neural information processing systems 24’ (Curran Associates Inc., 2011), pp. 1125–1133.
35. Glorot, X., Bordes, A., Bengio, Y.: ‘Deep sparse rectifier neural networks’. Proc. Fourteenth Int. Conf. on Artificial Intelligence and Statistics (AISTATS 2011), 2011, pp. 315–323.
39. Dugas, C., Bengio, Y., Bélisle, F., et al.: ‘Incorporating second-order functional knowledge for better option pricing’, in Leen, T., Dietterich, T., Tresp, V. (Eds.): ‘Advances in neural information processing systems 13’ (MIT Press, 2001), pp. 472–478.
40. Boureau, Y.-L., Ponce, J., LeCun, Y.: ‘A theoretical analysis of feature pooling in visual recognition’. Twenty-Seventh Int. Conf. on Machine Learning (ICML), Haifa, Israel, 2010.
41. Zeiler, M.D., Fergus, R.: ‘Stochastic pooling for regularization of deep convolutional neural networks’. Available at: http://arxiv.org/abs/1301.3557.
42. Jarrett, K., Kavukcuoglu, K., Ranzato, M., et al.: ‘What is the best multi-stage architecture for object recognition?’. IEEE Twelfth Int. Conf. on Computer Vision (ICCV), 2009, pp. 2146–2153, doi: 10.1109/ICCV.2009.5459469.
43. Dong, Z., Pei, M., He, Y., et al.: ‘Vehicle type classification using unsupervised convolutional neural network’. Twenty-Second Int. Conf. on Pattern Recognition (ICPR), 2014, pp. 172–177, doi: 10.1109/ICPR.2014.39.
44. Simoncelli, E.: ‘Statistical models for images: compression, restoration and synthesis’. Conf. Record of the Thirty-First Asilomar Conf. on Signals, Systems and Computers, 1997, vol. 1, pp. 673–678.
47. Lyu, S., Simoncelli, E.: ‘Nonlinear image representation using divisive normalization’. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR 2008), 2008, pp. 1–8, doi: 10.1109/CVPR.2008.4587821.
49. Murphy, K.P.: ‘Machine learning: a probabilistic perspective’ (Adaptive Computation and Machine Learning) (MIT Press, Cambridge, MA, 2012).
50. UFLDL Tutorial: ‘Convolutional neural network’. Available at: http://ufldl.stanford.edu/tutorial/supervised/ConvolutionalNeuralNetwork/.
51. Erhan, D., Bengio, Y., Courville, A., et al.: ‘Why does unsupervised pre-training help deep learning?’, J. Mach. Learn. Res., 2010, 11, pp. 625–660.
52. Yosinski, J., Clune, J., Bengio, Y., et al.: ‘How transferable are features in deep neural networks?’, in Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., Weinberger, K. (Eds.): ‘Advances in neural information processing systems 27’ (Curran Associates Inc., 2014), pp. 3320–3328.
53. Vincent, P., Larochelle, H., Bengio, Y., et al.: ‘Extracting and composing robust features with denoising autoencoders’. Proc. Twenty-Fifth Int. Conf. on Machine Learning (ICML 2008), Helsinki, Finland, 5–9 June 2008, pp. 1096–1103.
54. Olshausen, B.A., Field, D.J.: ‘Sparse coding with an overcomplete basis set: a strategy employed by V1?’, Vision Research, 1997, 37, (23), pp. 3311–3325.
55. Lee, H., Ekanadham, C., Ng, A.Y.: ‘Sparse deep belief net model for visual area V2’, in Platt, J., Koller, D., Singer, Y., Roweis, S. (Eds.): ‘Advances in neural information processing systems 20’ (Curran Associates Inc., 2008), pp. 873–880.
59. Bosch, A., Zisserman, A., Munoz, X.: ‘Representing shape with a spatial pyramid kernel’. Proc. Sixth ACM Int. Conf. on Image and Video Retrieval (CIVR ’07), ACM, New York, NY, USA, 2007, pp. 401–408.