Human Activity Recognition Process Using 3-D Posture Data

Research output: Contribution to journal › Article › peer-review

180 Citations (Scopus)

Abstract

In this paper, we present a method for recognizing human activities using information sensed by an RGB-D camera, namely the Microsoft Kinect. Our approach is based on the estimation of some relevant joints of the human body by means of the Kinect; three different machine learning techniques, i.e., K-means clustering, support vector machines, and hidden Markov models, are combined to detect the postures involved while performing an activity, to classify them, and to model each activity as a spatiotemporal evolution of known postures. Experiments were performed on the Kinect Activity Recognition Dataset, a new dataset, and on CAD-60, a public dataset. Experimental results show that our solution outperforms four relevant works based on RGB-D image fusion, a hierarchical Maximum Entropy Markov Model, Markov Random Fields, and Eigenjoints, respectively. The performance we achieved, i.e., a precision/recall of 77.3% and 76.7%, and the ability to recognize the activities in real time show promise for applied use.
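To make the described pipeline concrete, the sketch below shows one way the three stages could fit together: K-means discovers a small vocabulary of postures from skeleton frames, an SVM assigns unseen frames to those posture classes, and each activity is scored from the resulting posture sequence. It is an illustrative assumption, not the authors' implementation: the temporal model here is a plain Markov chain over posture labels (a degenerate HMM) rather than the paper's full hidden Markov models, and the feature extraction, posture-vocabulary size, and all function names are hypothetical.

```python
# Hypothetical sketch of a posture-based activity recognizer in the spirit of
# the abstract. All parameters and helpers are assumptions for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

N_POSTURES = 8  # assumed size of the posture vocabulary


def posture_features(frames):
    """frames: (n_frames, n_joints, 3) joint positions; center on joint 0 (assumed torso)."""
    centered = frames - frames[:, :1, :]
    return centered.reshape(len(frames), -1)


def fit_markov_chain(symbol_seqs, n_symbols, eps=1e-3):
    """Estimate start/transition probabilities (log-space) from posture-label sequences."""
    start = np.full(n_symbols, eps)
    trans = np.full((n_symbols, n_symbols), eps)
    for seq in symbol_seqs:
        start[seq[0]] += 1
        for a, b in zip(seq[:-1], seq[1:]):
            trans[a, b] += 1
    start /= start.sum()
    trans /= trans.sum(axis=1, keepdims=True)
    return np.log(start), np.log(trans)


def sequence_loglik(seq, model):
    log_start, log_trans = model
    return log_start[seq[0]] + sum(log_trans[a, b] for a, b in zip(seq[:-1], seq[1:]))


def train(sequences, labels):
    """sequences: list of (n_frames, n_joints, 3) arrays; labels: activity name per sequence."""
    feats = np.vstack([posture_features(s) for s in sequences])
    kmeans = KMeans(n_clusters=N_POSTURES, n_init=10).fit(feats)   # posture discovery
    svm = SVC(kernel="rbf").fit(feats, kmeans.labels_)             # posture classification
    models = {}
    for activity in set(labels):
        seqs = [svm.predict(posture_features(s))
                for s, l in zip(sequences, labels) if l == activity]
        models[activity] = fit_markov_chain(seqs, N_POSTURES)      # temporal model per activity
    return svm, models


def recognize(sequence, svm, models):
    """Label an unseen skeleton sequence with the activity whose model fits it best."""
    postures = svm.predict(posture_features(sequence))
    return max(models, key=lambda a: sequence_loglik(postures, models[a]))
```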
Original language: English
Pages (from-to): 586-597
Number of pages: 12
Journal: IEEE Transactions on Human-Machine Systems
Volume: 45
Publication status: Published - 2015

All Science Journal Classification (ASJC) codes

  • Human Factors and Ergonomics
  • Control and Systems Engineering
  • Signal Processing
  • Human-Computer Interaction
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence
