Video analytics revisited

Video is rich in real-time visual content, yet it is difficult to interpret and analyse, and video collections are inherently large in data volume. Video analytics strives to automatically discover the patterns and correlations present in this large volume of video data, helping the end-user to make informed and intelligent decisions and to predict future behaviour from the patterns discovered across space and time. In this study, the authors discuss various issues and problems in video analytics, describe proposed solutions, and present some of the important current applications of video analytics.

Inspec keywords: video signal processing; correlation theory

Other keywords: data volume; pattern discovery; correlation discovery; video data; video analytics; video collection

Subjects: Optical, image and video signal processing; Video signal processing
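
A purely illustrative sketch of one common first step in such pattern discovery, foreground (motion) extraction with an adaptive background-mixture model, is given below in Python with OpenCV. This is not the authors' method; the file name input.mp4 and the parameter values are hypothetical placeholders.

import cv2

# Open a hypothetical video file and create an adaptive background-mixture
# subtractor (MOG2), a standard tool for separating moving foreground from
# the static background of each frame.
cap = cv2.VideoCapture("input.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of stream
    mask = subtractor.apply(frame)     # per-pixel foreground mask
    activity = cv2.countNonZero(mask)  # crude per-frame activity measure
    print(f"frame {frame_idx}: {activity} foreground pixels")
    frame_idx += 1

cap.release()

Per-frame activity measures of this kind can then be thresholded or clustered over time to flag segments worth deeper analysis, which is one simple instance of the space-time pattern discovery described in the abstract.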
