3D Convolutional Neural network for Home Monitoring using Low Resolution Thermal-sensor Array
- Author(s): Lili Tao ; T. Volonakis ; Bo Tan ; Ziqi Zhang ; Yanguo Jing
- DOI: 10.1049/cp.2019.0100
- Source: 3rd IET International Conference on Technologies for Active and Assisted Living (TechAAL 2019), 2019, 6 pp.
- Conference: 3rd IET International Conference on Technologies for Active and Assisted Living (TechAAL 2019)
- ISBN: 978-1-83953-088-3
- Location: London, UK
- Conference date: 25 March 2019
- Format: PDF
The recognition of daily actions, such as walking, sitting or standing, in the home is informative for assisted living, smart homes and general health care. A variety of actions in complex scenes can be recognised using visual information; however, cameras raise privacy concerns. In this paper, we present a home action recognition system using an 8×8 infrared sensor array. This low spatial resolution preserves the user's visual privacy, yet still provides a powerful representation of actions in a scene. Actions are recognised using a 3D convolutional neural network, which extracts both spatial and temporal information from video sequences. Experimental results on the publicly available Infra-ADL2018 dataset demonstrate that the proposed approach outperforms the state of the art. We show that the sensor is well suited to detecting falls and activities of daily living. Our method achieves an overall accuracy of 97.22% across 7 actions, with a fall detection sensitivity of 100% and specificity of 99.31%.
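To make the pipeline concrete, the sketch below shows one plausible way to apply a 3D convolutional network to short clips of 8×8 thermal frames, assuming PyTorch. The layer sizes, clip length, and the 7-class output are illustrative assumptions for a minimal sketch, not the architecture reported in the paper.

```python
# Minimal illustrative sketch (not the authors' architecture): a small 3D CNN
# that classifies short sequences of 8x8 thermal frames into 7 action classes.
# Layer sizes, clip length, and hyperparameters below are assumptions.
import torch
import torch.nn as nn


class ThermalAction3DCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        # Input shape: (batch, 1 channel, frames, 8, 8)
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),   # spatio-temporal filters
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(2, 2, 2)),           # downsample time and space
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),                       # global pooling over time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)      # (batch, 32, 1, 1, 1)
        x = x.flatten(1)          # (batch, 32)
        return self.classifier(x)


if __name__ == "__main__":
    model = ThermalAction3DCNN()
    clip = torch.randn(2, 1, 16, 8, 8)   # two dummy clips of 16 low-resolution frames
    logits = model(clip)
    print(logits.shape)                   # torch.Size([2, 7])
```

In practice, the per-frame 8×8 sensor readings would be stacked along the temporal axis to form each clip, and fall detection sensitivity/specificity would be computed from the confusion matrix of the predicted classes.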
Inspec keywords: feature extraction; image representation; image sequences; video signal processing; computerised monitoring; health care; infrared imaging; object recognition; cameras; convolutional neural nets; assisted living; image resolution
Subjects: Computer assistance for persons with handicaps; Neural computing techniques; Computer vision and image processing techniques; Video signal processing; Image recognition