A New Lip Feature Representation Method for Video-based Bimodal Authentication
Ouyang, H. and Lee, T.
As low-cost video transmission becomes popular, video-based bimodal (audio and visual) authentication has great potential in applications that require access control. It is especially useful for hand-held terminals, which are often used in adverse environments where signal quality is poor. When the human voice is used for authentication, one of the most relevant visual features is the dynamic movement of the lips. In this research, we investigate the use of static and dynamic features of speaking lips in the context of voice-based authentication. A new feature representation that preserves both the appearance and the motion pattern of speaking lips is proposed. The dimension of the extracted features is reduced by multiple discriminant analysis (MDA), and the nearest-neighbour method is used for classification. Our method achieves an identification rate of 98% using lip features alone for 200 clients of the XM2VTS database. Experiments on speaker verification using fused audio and visual features are on-going.
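The classification pipeline described in the abstract (MDA for dimensionality reduction, then nearest-neighbour matching) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature vectors are synthetic stand-ins for the extracted lip features, and multiclass LDA is used as the MDA step.

```python
# Sketch of the abstract's pipeline: MDA (multiclass LDA) dimensionality
# reduction followed by nearest-neighbour classification.
# The "lip features" here are synthetic stand-ins, not real XM2VTS data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_clients, n_per_client, dim = 5, 20, 50  # toy stand-in sizes

# Class-clustered feature vectors, one cluster per client.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_client, dim))
               for c in range(n_clients)])
y = np.repeat(np.arange(n_clients), n_per_client)

# MDA projects to at most (n_clients - 1) discriminant dimensions.
mda = LinearDiscriminantAnalysis(n_components=n_clients - 1)
X_low = mda.fit_transform(X, y)

# 1-nearest-neighbour identification in the reduced space.
clf = KNeighborsClassifier(n_neighbors=1).fit(X_low, y)
acc = clf.score(X_low, y)
print(f"identification rate on training set: {acc:.2f}")
```

In practice the gallery (enrolled clients) and probe sets would be disjoint recordings; here both roles are played by the same synthetic set for brevity.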
Cite as: Ouyang, H. and Lee, T. (2005). A New Lip Feature Representation Method for Video-based Bimodal Authentication. In Proc. NICTA-HCSNet Multimodal User Interaction Workshop, MMUI 2005, Sydney, Australia. CRPIT, 57. Chen, F. and Epps, J., Eds. ACS. 33-37.