CAD: concatenated action descriptor for one and two person(s), using silhouette and silhouette's skeleton

This study introduces an action descriptor capable of performing human action recognition efficiently for one or two persons. The proposed descriptor captures information such as motion, spatial–temporal features, deviation with respect to the centroid, and critical-point and keypoint detection, which existing approaches fail to address efficiently. Action descriptors are developed from signature-based optical flow, signature-based corner points, and binary robust invariant scalable keypoints (BRISK), and are applied to silhouette and silhouette-skeleton frames. These individual action descriptors are then combined to form the concatenated action descriptor (CAD). In constructing the action descriptors, the reference video frame plays an important role. The Weizmann dataset (one person) and both the clean and noisy versions of the SBU Kinect Interaction dataset (two persons) are used to evaluate the proposed descriptors, and classification is performed using a support vector machine. Experimental results demonstrate that CAD not only outperforms all the other proposed descriptors, but also performs better than state-of-the-art approaches.
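The pipeline described in the abstract — extract several per-video action descriptors, concatenate them into a single CAD vector, and classify with an SVM — can be sketched as follows. This is a minimal illustration only: the three extractor functions are placeholders standing in for the authors' signature-based optical flow, corner-point, and BRISK descriptors, and all names and statistics used here are hypothetical, not the paper's actual implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-video feature extractors (placeholders, NOT the
# authors' actual descriptors). Each maps a stack of silhouette (or
# skeleton) frames to a fixed-length feature vector.
def optical_flow_signature(frames):
    return frames.mean(axis=0).ravel()   # placeholder statistic

def corner_point_signature(frames):
    return frames.std(axis=0).ravel()    # placeholder statistic

def brisk_signature(frames):
    return frames.max(axis=0).ravel()    # placeholder statistic

def concatenated_action_descriptor(frames):
    # CAD: concatenate the individual action descriptors into one vector.
    return np.concatenate([
        optical_flow_signature(frames),
        corner_point_signature(frames),
        brisk_signature(frames),
    ])

# Toy data: 20 "videos" of 8 binary silhouette frames (16x16 pixels),
# split across 2 action classes.
rng = np.random.default_rng(0)
X = np.stack([
    concatenated_action_descriptor(
        rng.integers(0, 2, (8, 16, 16)).astype(float))
    for _ in range(20)
])
y = np.repeat([0, 1], 10)

# Classification is performed with a support vector machine.
clf = SVC(kernel="linear").fit(X, y)
```

With 16x16 frames, each of the three placeholder descriptors is 256-dimensional, so the CAD vector has 768 entries per video; the actual dimensionality in the paper depends on the real descriptors.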
