Support vector machine approach to fall recognition based on simplified expression of human skeleton action and fast detection of start key frame using torso angle

IET Computer Vision

Falls can have severe consequences, especially for elderly people living alone. A fall detection method for indoor environments is proposed, based on the Kinect sensor and analysis of three-dimensional skeleton joint information. Compared with state-of-the-art methods, the authors' method provides two major improvements. First, a possible fall activity is quantified and represented by a one-dimensional float array with only 32 items, followed by fall recognition using a support vector machine (SVM). Unlike typical deep learning methods, the input parameters of this method are dramatically reduced, so videos are trained and recognised by the SVM at low time cost. Second, the torso angle is used to detect the start key frame of a possible fall, which is much more efficient than using a sliding window. The approach is evaluated on the Telecommunication Systems Team (TST) Fall Detection Dataset v2, where it achieves an accuracy of 92.05%, better than other typical methods. Owing to the characteristics of machine learning, the method is expected to achieve higher accuracy and stronger discrimination of fall-like activities as more samples are imported. Its time efficiency and robustness make it suitable for real-time video surveillance.
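The torso-angle test for locating the start key frame can be sketched as follows. This is a minimal illustration, not the paper's implementation: the choice of joints (spine base and neck), the 30° threshold, and the y-up vertical-axis convention are all assumptions made for the example.

```python
import numpy as np

VERTICAL = np.array([0.0, 1.0, 0.0])  # assume the sensor's y-axis points up


def torso_angle(spine_base, neck):
    """Angle in degrees between the torso vector and the vertical axis."""
    torso = np.asarray(neck, dtype=float) - np.asarray(spine_base, dtype=float)
    cos = np.dot(torso, VERTICAL) / np.linalg.norm(torso)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))


def start_key_frame(frames, threshold_deg=30.0):
    """Index of the first frame whose torso angle exceeds the threshold,
    marking the start of a possible fall; None if no frame qualifies."""
    for i, (spine_base, neck) in enumerate(frames):
        if torso_angle(spine_base, neck) > threshold_deg:
            return i
    return None
```

Because this is a single per-frame comparison, every incoming skeleton frame is either discarded or flagged immediately, which is the efficiency advantage over re-scoring a sliding window of frames.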
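Once a 32-item descriptor has been extracted from a candidate clip, recognition reduces to a standard SVM classification. The sketch below uses scikit-learn's SVC on synthetic descriptors (Gaussian clusters standing in for fall and non-fall clips), since the paper's actual feature extraction is not reproduced here; the cluster means and the RBF kernel choice are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-ins for the 32-item action descriptors: "fall" clips
# clustered around 1.0 and "non-fall" clips around 0.0 (illustrative only).
X_fall = rng.normal(1.0, 0.2, size=(50, 32))
X_nonfall = rng.normal(0.0, 0.2, size=(50, 32))
X_train = np.vstack([X_fall, X_nonfall])
y_train = np.array([1] * 50 + [0] * 50)

# RBF-kernel SVM; with only 32 inputs per sample, training is fast,
# in contrast to deep models that consume whole video frames.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

# Classify a new fall-like descriptor.
query = rng.normal(1.0, 0.2, size=(1, 32))
label = clf.predict(query)[0]
```

The small, fixed-length input is what keeps both training and per-clip inference cheap enough for real-time use.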

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2018.5324