
Depth-based end-to-end deep network for human action recognition

IET Computer Vision

Recognition of human actions from videos can be improved if depth information is available, since depth helps in segregating foreground motion from the background. Single-image depth estimation (SIDE) is a method commonly used for the analysis of weather-degraded images. In this study, the idea of SIDE is extended to human action recognition (HAR) on datasets where depth information is not available. Several depth-based HAR algorithms exist, but all of them rely on depth information supplied with the dataset; others use depth motion maps, which capture the depth of motion in the temporal direction. Here, a new depth-based end-to-end deep network is proposed for HAR, in which depth is estimated frame-wise and the estimated depth, rather than the RGB frame, is used for further processing. As colour information is not required for estimating motion, a single-channel depth map is used to estimate motion in the video, which makes the system computationally efficient. The proposed method is tested and verified on three benchmark datasets, namely JHMDB, HMDB51 and UCF101, and it outperforms existing state-of-the-art HAR methods on all three.
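The abstract does not give the exact architecture, but the pipeline it describes can be sketched: a SIDE network converts each RGB frame to a single-channel depth map, and the action classifier then operates on the depth sequence instead of the three-channel frames. The following is a minimal, hypothetical PyTorch sketch under those assumptions; the DepthEstimator and DepthActionNet modules below are illustrative stand-ins, not the authors' networks.

import torch
import torch.nn as nn

class DepthEstimator(nn.Module):
    """Stand-in single-image depth estimation (SIDE) network:
    maps a 3-channel RGB frame to a 1-channel depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )
    def forward(self, rgb):          # rgb: (B, 3, H, W)
        return self.net(rgb)         # depth: (B, 1, H, W)

class DepthActionNet(nn.Module):
    """Classifies an action from the estimated depth sequence with a
    small 3D CNN; one input channel instead of three is what makes
    the depth-based pipeline cheaper than an RGB one."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(16, num_classes)
    def forward(self, depth_clip):   # depth_clip: (B, 1, T, H, W)
        x = self.features(depth_clip).flatten(1)
        return self.fc(x)

def classify_clip(frames, side, action_net):
    """frames: (B, T, 3, H, W) RGB clip -> action logits.
    Depth is estimated frame-wise, then stacked along time."""
    b, t, c, h, w = frames.shape
    depth = side(frames.reshape(b * t, c, h, w))
    depth = depth.reshape(b, t, 1, h, w).permute(0, 2, 1, 3, 4)
    return action_net(depth)

if __name__ == "__main__":
    side, net = DepthEstimator(), DepthActionNet(num_classes=51)  # e.g. HMDB51
    clip = torch.randn(2, 8, 3, 112, 112)
    print(classify_clip(clip, side, net).shape)  # torch.Size([2, 51])

In an end-to-end setting both modules would be trained jointly on the action labels; in practice the SIDE backbone would be a pretrained monocular depth network rather than the toy two-layer estimator used here.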

http://iet.metastore.ingenta.com/content/journals/10.1049/iet-cvi.2018.5020