Conferences in Research and Practice in Information Technology
  

Online Version - Last Updated - 20 Jan 2012

 

 

A New Lip Feature Representation Method for Video-based Bimodal Authentication

Ouyang, H. and Lee, T.

    As low-cost video transmission becomes popular, video-based bimodal (audio and visual) authentication has great potential in applications that require access control. It is especially useful for handheld terminals, which are often used in adverse environments where signal quality is poor. When the human voice is used for authentication, one of the most relevant visual features is the dynamic movement of the lips. In this research, we investigate the use of static and dynamic features of speaking lips in the context of voice-based authentication. A new feature representation that preserves both the appearance and the motion pattern of speaking lips is proposed. The dimensionality of the extracted features is reduced by multiple discriminant analysis (MDA), and a nearest-neighbour classifier is used for identification. Our method achieves an identification rate of 98% using lip features alone for 200 clients of the XM2VTS database. Experiments on speaker verification using fused audio and visual features are ongoing.
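The classification pipeline described above (feature extraction, MDA dimensionality reduction, nearest-neighbour identification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the lip-feature vectors are synthetic random data, the client count and feature dimension are illustrative, and scikit-learn's LDA is used as a stand-in for MDA (the multi-class generalisation of linear discriminant analysis).

```python
# Hypothetical sketch of the pipeline in the abstract:
# lip features -> MDA (here: multi-class LDA) -> 1-nearest-neighbour identification.
# All data below is synthetic; dimensions are illustrative, not from the paper.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_clients, samples_per_client, feat_dim = 10, 20, 50

# Synthetic "lip feature" vectors: each client has a distinct mean vector.
means = rng.normal(scale=5.0, size=(n_clients, feat_dim))
X = np.vstack([means[c] + rng.normal(size=(samples_per_client, feat_dim))
               for c in range(n_clients)])
y = np.repeat(np.arange(n_clients), samples_per_client)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# MDA/LDA projects onto at most (n_classes - 1) discriminant directions;
# a 1-NN classifier in that reduced space then identifies the client.
pipe = make_pipeline(
    LinearDiscriminantAnalysis(n_components=n_clients - 1),
    KNeighborsClassifier(n_neighbors=1),
)
pipe.fit(X_train, y_train)
accuracy = pipe.score(X_test, y_test)
```

With well-separated synthetic clients the held-out identification accuracy is close to 1.0; real lip features would of course be noisier, which is where the proposed representation matters.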
Cite as: Ouyang, H. and Lee, T. (2005). A New Lip Feature Representation Method for Video-based Bimodal Authentication. In Proc. NICTA-HCSNet Multimodal User Interaction Workshop, MMUI 2005, Sydney, Australia. CRPIT, 57. Chen, F. and Epps, J., Eds. ACS. 33-37.
 

 

ACS Logo© Copyright Australian Computer Society Inc. 2001-2014.