Fully Automatic, Real-Time Detection of Facial Gestures from Generic Video

Marco La Cascia, Stan Sclaroff

Research output: Paper

1 Citation (Scopus)

Abstract

A technique for detection of facial gestures from low-resolution video sequences is presented. The technique builds upon the automatic 3D head tracker formulation of [11]. The tracker is based on registration of a texture-mapped cylindrical model. Facial gesture analysis is performed in the texture map by assuming that the residual registration error can be modeled as a linear combination of facial motion templates. Two formulations are proposed and tested. In one formulation, head and facial motion are estimated in a single, combined linear system. In the other formulation, head motion and then facial motion are estimated in a two-step process. The two-step approach yields significantly better accuracy in facial gesture analysis. The system is demonstrated in detecting two types of facial gestures: “mouth opening” and “eyebrows raising.” On a dataset with significant head motion, the two-step algorithm achieved a recognition accuracy of 70% for “mouth opening” and 66% for “eyebrows raising” gestures. The algorithm can reliably track and classify facial gestures without any user intervention and runs in real time.
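The core idea of the abstract can be illustrated with a small numerical sketch: once head motion has been compensated by the cylindrical tracker, the residual texture-map error is fit, in a least-squares sense, to a set of facial motion templates, and gestures are read off the resulting coefficients. The sketch below is an illustration of that two-step idea, not the authors' implementation; the template names, array shapes, and detection threshold are hypothetical assumptions.

# Minimal sketch (assumptions only, not the paper's code): after head pose has
# been registered, approximate the residual texture-map error as a linear
# combination of facial motion templates and threshold the coefficients.
import numpy as np

def estimate_gesture_coefficients(residual, templates):
    """Least-squares fit of the registration residual to motion templates.

    residual  : (H*W,) flattened texture-map residual after head registration
    templates : (H*W, K) matrix whose columns are facial motion templates
    returns   : (K,) one coefficient per template
    """
    coeffs, *_ = np.linalg.lstsq(templates, residual, rcond=None)
    return coeffs

# Illustrative usage with random data standing in for real texture maps.
H, W, K = 64, 64, 2                      # texture-map size, number of templates (assumed)
templates = np.random.randn(H * W, K)    # e.g. "mouth opening", "eyebrows raising"
residual = 0.8 * templates[:, 0] + 0.05 * np.random.randn(H * W)
coeffs = estimate_gesture_coefficients(residual, templates)
detected = coeffs > 0.5                  # hypothetical per-gesture threshold
print(coeffs, detected)

In the two-step formulation described above, head motion would be estimated and removed first (step one), and only the remaining residual would be explained by the gesture templates (step two); the combined formulation would instead stack head-motion and gesture templates into a single design matrix.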
Original language: English
Publication status: Published - 2004

Fingerprint

  • Textures
  • Linear systems

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Engineering (all)

Cite this

@conference{3ac8d06b19464656afed71a5ba92e509,
title = "Fully Automatic, Real-Time Detection of Facial Gestures from Generic Video",
abstract = "A technique for detection of facial gestures from low-resolution video sequences is presented. The technique builds upon the automatic 3D head tracker formulation of [11]. The tracker is based on registration of a texture-mapped cylindrical model. Facial gesture analysis is performed in the texture map by assuming that the residual registration error can be modeled as a linear combination of facial motion templates. Two formulations are proposed and tested. In one formulation, head and facial motion are estimated in a single, combined linear system. In the other formulation, head motion and then facial motion are estimated in a two-step process. The two-step approach yields significantly better accuracy in facial gesture analysis. The system is demonstrated in detecting two types of facial gestures: “mouth opening” and “eyebrows raising.” On a dataset with significant head motion, the two-step algorithm achieved a recognition accuracy of 70{\%} for “mouth opening” and 66{\%} for “eyebrows raising” gestures. The algorithm can reliably track and classify facial gestures without any user intervention and runs in real time.",
author = "{La Cascia}, Marco and Stan Sclaroff",
year = "2004",
language = "English",

}

TY - CONF

T1 - Fully Automatic, Real-Time Detection of Facial Gestures from Generic Video

AU - La Cascia, Marco

AU - Sclaroff, Stan

PY - 2004

Y1 - 2004

AB - A technique for detection of facial gestures from low-resolution video sequences is presented. The technique builds upon the automatic 3D head tracker formulation of [11]. The tracker is based on registration of a texture-mapped cylindrical model. Facial gesture analysis is performed in the texture map by assuming that the residual registration error can be modeled as a linear combination of facial motion templates. Two formulations are proposed and tested. In one formulation, head and facial motion are estimated in a single, combined linear system. In the other formulation, head motion and then facial motion are estimated in a two-step process. The two-step approach yields significantly better accuracy in facial gesture analysis. The system is demonstrated in detecting two types of facial gestures: “mouth opening” and “eyebrows raising.” On a dataset with significant head motion, the two-step algorithm achieved a recognition accuracy of 70% for “mouth opening” and 66% for “eyebrows raising” gestures. The algorithm can reliably track and classify facial gestures without any user intervention and runs in real time.

UR - http://hdl.handle.net/10447/4784

M3 - Paper

ER -