Energy expenditure estimation using visual and inertial sensors

References

1. Samitz, G., Egger, M., Zwahlen, M.: ‘Domains of physical activity and all-cause mortality: systematic review and dose–response meta-analysis of cohort studies’, Int. J. Epidemiol., 2011, 40, (5), pp. 1382–1400.
2. Ainsworth, B.E., Haskell, W.L., Whitt, M.C., et al: ‘Compendium of physical activities: an update of activity codes and MET intensities’, Med. Sci. Sports Exercise, 2000, 32, (9), pp. 498–504.
3. Ravussin, E., Lillioja, S., Anderson, T., et al: ‘Determinants of 24-hour energy expenditure in man. Methods and results using a respiratory chamber’, J. Clin. Invest., 1986, 78, (6), p. 1568.
4. Cosmed K4b2. Available at http://www.cosmed.com/.
5. Chen, C., Jafari, R., Kehtarnavaz, N.: ‘Improving human action recognition using fusion of depth camera and inertial sensors’, IEEE Trans. Hum.-Mach. Syst., 2015, 45, (1), pp. 51–61.
6. Gjoreski, H., Kaluža, B., Gams, M., et al: ‘Context-based ensemble method for human energy expenditure estimation’, Appl. Soft Comput., 2015, 37, pp. 960–970.
7. Aggarwal, J., Xia, L.: ‘Human activity recognition from 3D data: a review’, Pattern Recognit. Lett., 2014, 48, pp. 70–80.
8. Zhu, N., Diethe, T., Camplani, M., et al: ‘Bridging e-health and the internet of things: the SPHERE project’, IEEE Intell. Syst., 2015, 30, (4), pp. 39–46.
9. Woznowski, P., Fafoutis, X., Song, T., et al: ‘A multi-modal sensor infrastructure for healthcare in a residential environment’. IEEE Int. Conf. on Communication Workshop (ICCW), 2015, pp. 271–277.
10. Planinc, R., Chaaraoui, A.A., Kampel, M., et al: ‘Computer vision for active and assisted living’, Act. Assist. Living, Technol. Appl., 2016, 57, pp. 1–23.
11. Pitsikalis, V., Katsamanis, A., Theodorakis, S., et al: ‘Multimodal gesture recognition via multiple hypotheses rescoring’, J. Mach. Learn. Res., 2015, 16, (1), pp. 255–284.
12. Leutenegger, S., Lynen, S., Bosse, M., et al: ‘Keyframe-based visual–inertial odometry using nonlinear optimization’, Int. J. Robot. Res., 2015, 34, (3), pp. 314–334.
13. Fang, W., Zheng, L., Deng, H., et al: ‘Real-time motion tracking for mobile augmented/virtual reality using adaptive visual-inertial fusion’, Sensors, 2017, 17, (5), p. 1037.
14. Gasparrini, S., Cippitelli, E., Gambi, E., et al: ‘Proposal and experimental evaluation of fall detection solution based on wearable and depth data fusion’, ICT Innov., 2015, 1 (2016), pp. 99–108.
15. Stein, S., McKenna, S.J.: ‘Combining embedded accelerometers with computer vision for recognizing food preparation activities’. Proc. of the 2013 ACM Int. Joint Conf. on Pervasive and Ubiquitous Computing, 2013, pp. 729–738.
16. Diethe, T., Twomey, N., Kull, M., et al: ‘Probabilistic sensor fusion for ambient assisted living’, arXiv preprint arXiv:1702.01209.
17. Tao, L., Burghardt, T., Hannuna, S., et al: ‘A comparative home activity monitoring study using visual and inertial sensors’. IEEE Int. Conf. on E-Health Networking, Application and Services, 2015, pp. 644–647.
18. Tao, L., Burghardt, T., Mirmehdi, M., et al: ‘Real-time estimation of physical activity intensity for daily living’. 2nd IET Int. Conf. on Technologies for Active and Assisted Living, 2016, pp. 11–16.
19. Tao, L., Burghardt, T., Mirmehdi, M., et al: ‘Calorie counter: RGB-depth visual estimation of energy expenditure at home’. Asian Conf. on Computer Vision, Workshop on Assistive Vision, 2016.
20. Edgcomb, A., Vahid, F.: ‘Estimating daily energy expenditure from video for assistive monitoring’. Int. Conf. on Healthcare Informatics, 2013, pp. 184–191.
21. Tsou, P.-F., Wu, C.-C.: ‘Estimation of calories consumption for aerobics using Kinect based skeleton tracking’. Int. Conf. on Systems, Man, and Cybernetics, 2015, pp. 1221–1226.
22. Lara, O.D., Labrador, M.A.: ‘A survey on human activity recognition using wearable sensors’, IEEE Commun. Surv. Tutor., 2013, 15, (3), pp. 1192–1209.
23. Igual, R., Medrano, C., Plaza, I.: ‘Challenges, issues and trends in fall detection systems’, Biomed. Eng. Online, 2013, 12, (1), p. 1.
24. Qudah, I., Leijdekkers, P., Gay, V.: ‘Using mobile phones to improve medication compliance and awareness for cardiac patients’. Proc. of the 3rd Int. Conf. on PErvasive Technologies Related to Assistive Environments, 2010, vol. 36.
25. Bennett, T.R., Wu, J., Kehtarnavaz, N., et al: ‘Inertial measurement unit-based wearable computers for assisted living applications: a signal processing perspective’, IEEE Signal Process. Mag., 2016, 33, (2), pp. 28–35.
26. Ravi, N., Dandekar, N., Mysore, P., et al: ‘Activity recognition from accelerometer data’. AAAI, 2005, vol. 5, pp. 1541–1546.
27. Ofli, F., Chaudhry, R., Kurillo, G., et al: ‘Berkeley MHAD: a comprehensive multimodal human action database’. IEEE Workshop on Applications of Computer Vision (WACV), 2013, pp. 53–60.
28. Ravi, D., Wong, C., Lo, B., et al: ‘Deep learning for human activity recognition: a resource efficient implementation on low-power devices’. IEEE 13th Int. Conf. on Wearable and Implantable Body Sensor Networks (BSN), 2016, pp. 71–76.
29. Hendelman, D., Miller, K., Baggett, C., et al: ‘Validity of accelerometry for the assessment of moderate intensity physical activity in the field’, Med. Sci. Sports Exercise, 2000, 32, (9 Suppl), pp. S442–S449.
30. Crouter, S.E., Churilla, J.R., Bassett, D.R. Jr.: ‘Estimating energy expenditure using accelerometers’, Eur. J. Appl. Physiol., 2006, 98, (6), pp. 601–612.
31. Hees, V.T., Lummel, R.C., Westerterp, K.R.: ‘Estimating activity-related energy expenditure under sedentary conditions using a tri-axial seismic accelerometer’, Obesity, 2009, 17, (6), pp. 1287–1292.
32. Altini, M., Penders, J., Amft, O.: ‘Estimating oxygen uptake during nonsteady-state activities and transitions using wearable sensors’, IEEE J. Biomed. Health Inf., 2016, 20, (2), pp. 469–475.
33. Altini, M., Penders, J., Vullers, R., et al: ‘Estimating energy expenditure using body-worn accelerometers: a comparison of methods, sensors number and positioning’, IEEE J. Biomed. Health Inf., 2015, 19, (1), pp. 219–226.
34. Aggarwal, J., Ryoo, M.: ‘Human activity analysis: a review’, ACM Comput. Surv., 2011, 43, (3), p. 16.
35. Leo, M., Medioni, G., Trivedi, M., et al: ‘Computer vision for assistive technologies’, Comput. Vis. Image Underst., 2017, 154, pp. 1–15.
36. Guo, G., Lai, A.: ‘A survey on still image based human action recognition’, Pattern Recognit., 2014, 47, (10), pp. 3343–3361.
37. Laptev, I.: ‘On space-time interest points’, Int. J. Comput. Vis., 2005, 64, (2–3), pp. 107–123.
38. Jia, Y., Shelhamer, E., Donahue, J., et al: ‘Caffe: convolutional architecture for fast feature embedding’. Proc. of the ACM Int. Conf. on Multimedia, 2014, pp. 675–678.
39. Oreifej, O., Liu, Z.: ‘HON4D: histogram of oriented 4D normals for activity recognition from depth sequences’. Proc. Comput. Vis. Pattern Recognit., 2013, pp. 716–723.
40. Tao, L., Paiement, A., Damen, D., et al: ‘A comparative study of pose representation and dynamics modelling for online motion quality assessment’, Comput. Vis. Image Underst., 2016, 148, pp. 136–152.
41. Laptev, I., Marszałek, M., Schmid, C., et al: ‘Learning realistic human actions from movies’. Proc. Comput. Vis. Pattern Recognit., 2008, pp. 1–8.
42. Perronnin, F., Sánchez, J., Mensink, T.: ‘Improving the Fisher kernel for large-scale image classification’. Eur. Conf. Comput. Vis., 2010, pp. 143–156.
43. Ryoo, M., Rothrock, B., Matthies, L.: ‘Pooled motion features for first-person videos’. Proc. Comput. Vis. Pattern Recognit., 2015, pp. 896–904.
44. Dobhal, T., Shitole, V., Thomas, G., et al: ‘Human activity recognition using binary motion image and deep learning’, Procedia Comput. Sci., 2015, 58, pp. 178–185.
45. Presti, L.L., La Cascia, M.: ‘3D skeleton-based human action classification: a survey’, Pattern Recognit., 2016, 53, pp. 130–147.
46. Snoek, C.G., Worring, M., Smeulders, A.W.: ‘Early versus late fusion in semantic video analysis’. Proc. of the 13th Annual ACM Int. Conf. on Multimedia, 2005, pp. 399–402.
47. Liu, K., Chen, C., Jafari, R., et al: ‘Fusion of inertial and depth sensor data for robust hand gesture recognition’, IEEE Sens. J., 2014, 14, (6), pp. 1898–1903.
48. Wu, J., Cheng, J.: ‘Bayesian co-boosting for multi-modal gesture recognition’, J. Mach. Learn. Res., 2014, 15, (1), pp. 3013–3036.
49. Chen, C., Jafari, R., Kehtarnavaz, N.: ‘A real-time human action recognition system using depth and inertial sensor fusion’, IEEE Sens. J., 2016, 16, (3), pp. 773–781.
50. Breiman, L.: ‘Stacked regressions’, Mach. Learn., 1996, 24, (1), pp. 49–64.
51. OpenNI organization: OpenNI User Guide (November 2010). Available at http://www.openni.org/documentation.
52. Tran, D., Sorokin, A.: ‘Human activity recognition with metric learning’. Eur. Conf. on Computer Vision, 2008, pp. 548–561.
53. Dalal, N., Triggs, B.: ‘Histograms of oriented gradients for human detection’. Proc. Comput. Vis. Pattern Recognit., 2005, vol. 1, pp. 886–893.
54. Dietterich, T.: ‘Machine learning for sequential data: a review’. Proc. Struct. Syntactic Statist. Pattern Recognit., 2002, pp. 15–30.
55. Bouten, C.V., Koekkoek, K., Verduin, M., et al: ‘A triaxial accelerometer and portable data processing unit for the assessment of daily physical activity’, IEEE Trans. Biomed. Eng., 1997, 44, (3), pp. 136–147.
56. Chang, C., Lin, C.: ‘LIBSVM: a library for support vector machines’, ACM Trans. Intell. Syst. Technol., 2011, 2, (3), p. 27.