Using context from inside-out vision for improved activity recognition

The authors propose a method to improve activity recognition by including contextual information from first-person vision (FPV). Adding context, i.e. the objects seen while performing an activity, increases activity recognition precision, because in goal-oriented tasks human gaze precedes the action and tends to focus on relevant objects. They extract object information from FPV images and combine it with activity information from external or FPV videos to train an artificial neural network (ANN). Four configurations, combining gaze/eye-tracker, head-mounted, and externally mounted cameras, were evaluated on three standard cooking datasets: Georgia Tech Egocentric Activities Gaze, the Technische Universität München kitchen dataset, and the CMU Multi-Modal Activity Database. Adding object information when training the ANN increased the average precision of activity recognition from 58.02% to 74.03% and the average accuracy from 89.78% to 93.42%. Experiments also showed that when objects are not considered, an external camera is necessary; when objects are considered, the combination of internal and external cameras is optimal because of their complementary advantages in observing hands and objects. Adding object information also reduced the average number of ANN training cycles from 513.25 to 139, indicating that the context provides critical information that speeds up training.
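The fusion step described above can be illustrated with a minimal sketch, under the following assumptions: the object context is encoded as a fixed-length bag-of-objects vector derived from FPV frames, the activity stream is a fixed-length descriptor from the external or FPV video, and the ANN is a small multilayer perceptron. The feature names, dimensions, and the scikit-learn classifier used here are illustrative choices, not the authors' implementation.

    # Sketch: concatenating FPV object-context features with an activity
    # descriptor before training an ANN classifier (illustrative only).
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, precision_score

    rng = np.random.default_rng(0)

    n_samples = 600          # hypothetical number of labelled video segments
    n_activity_feats = 32    # hypothetical activity descriptor length
    n_object_feats = 12      # hypothetical number of known kitchen objects
    n_classes = 6            # hypothetical number of cooking activities

    # Synthetic stand-ins for the two feature streams. In the paper's setting
    # X_objects would come from an object recognizer run on FPV frames near
    # the gaze point, and X_activity from the activity-recognition features.
    X_activity = rng.normal(size=(n_samples, n_activity_feats))
    X_objects = rng.integers(0, 2, size=(n_samples, n_object_feats)).astype(float)
    y = rng.integers(0, n_classes, size=n_samples)

    # "Adding context" here amounts to concatenating the object vector onto
    # the activity descriptor before training the ANN.
    X_fused = np.hstack([X_activity, X_objects])

    X_tr, X_te, y_tr, y_te = train_test_split(X_fused, y, random_state=0)

    ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    ann.fit(X_tr, y_tr)

    y_pred = ann.predict(X_te)
    print('accuracy :', accuracy_score(y_te, y_pred))
    print('precision:', precision_score(y_te, y_pred, average='macro', zero_division=0))

With real features, comparing this fused model against one trained on X_activity alone would reproduce the kind of precision/accuracy comparison reported in the abstract; the concatenate-then-train pattern is the same, only the feature extraction differs.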
