3D motion capture system for assessing patient motion during Fugl-Meyer stroke rehabilitation testing

The authors introduce a novel marker-less multi-camera setup that allows easy synchronisation between 3D cameras, together with a novel pose estimation method that is computed on the fly from the human body being tracked and thus requires no calibration session or special calibration equipment. They show that both the calibration and the data merging achieve high accuracy, on par with equipment-based calibration, and they derive several insights and practical guidelines for the camera setup and for the preferred data merging methods. Finally, they present a test case that computerises the Fugl-Meyer stroke rehabilitation protocol using the authors' multi-sensor capture system. In a Helsinki-approved hospital study, they collected data on stroke patients and healthy subjects using the multi-camera system. Spatio-temporal features were extracted from the acquired data and evaluated with machine learning methods. Results showed that patients and healthy subjects can be correctly classified at a rate above 90%. Furthermore, the most significant features in the classification are strongly correlated with the Fugl-Meyer guidelines. This demonstrates the feasibility of a low-cost, flexible and non-invasive motion capture system that could potentially be operated in a home setting.
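The abstract does not detail how the on-the-fly calibration works, but aligning the same tracked skeleton as seen by two 3D cameras is a classic rigid-registration problem, commonly solved with the Kabsch algorithm. The sketch below illustrates that idea only; the function name kabsch_rigid_transform, the 25-joint skeleton, and the synthetic noisy data are assumptions for illustration, not the authors' implementation.

import numpy as np

def kabsch_rigid_transform(src, dst):
    """Estimate the rotation R and translation t mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding 3D joint positions of the
    same tracked skeleton as seen by two cameras.
    Returns (R, t) such that dst ~= src @ R.T + t.
    """
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the least-squares solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Toy usage: camera B sees a rotated/translated copy of the joints seen
# by camera A, plus a little sensor noise (all values are synthetic).
rng = np.random.default_rng(0)
joints_a = rng.normal(size=(25, 3))          # e.g. 25 skeleton joints
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
joints_b = joints_a @ R_true.T + np.array([0.5, -0.2, 1.0])
joints_b += rng.normal(scale=1e-3, size=joints_b.shape)

R, t = kabsch_rigid_transform(joints_a, joints_b)
aligned = joints_a @ R.T + t
print("RMS alignment error:", np.sqrt(((aligned - joints_b) ** 2).mean()))

In practice one would accumulate joint correspondences over many frames and reject outliers (e.g. with RANSAC) before solving, since single-frame joint estimates from consumer depth sensors are noisy.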

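The classification step is likewise only summarised above. The following sketch shows one plausible pipeline for separating patients from healthy subjects with subject-wise cross-validation, so that no subject appears in both training and test folds. The feature matrix, labels, and group ids are synthetic placeholders, and the choice of an SVM is an assumption rather than the authors' reported model.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: one row of spatio-temporal features per recorded
# movement (e.g. range of motion, trajectory smoothness, timing), a
# binary label (0 = healthy, 1 = patient), and a subject id per row.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))        # 120 recordings x 10 features
y = rng.integers(0, 2, size=120)      # placeholder labels
groups = np.repeat(np.arange(30), 4)  # 30 subjects, 4 recordings each

# Standardise features, then classify with an RBF-kernel SVM, scoring
# by leave-one-subject-out cross-validation to avoid subject leakage.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Leave-one-subject-out accuracy: {scores.mean():.2f}")

Grouping the folds by subject matters here: recordings from the same person are correlated, so per-recording random splits would overstate accuracy.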