Combining Classifiers in Multimodal Affect Detection
Hussain, M.S., Monkaresi, H., Calvo, R.A.
Affect detection, where users' mental states are automatically recognized from facial expressions, speech, physiology and other modalities, requires accurate machine learning and classification techniques. This paper investigates how combined classifiers, and their base classifiers, can be used in affect detection using features from facial video and multichannel physiology. The base classifiers evaluated include function, lazy, and decision-tree learners; the combined classifiers were implemented as vote classifiers. Results indicate that the accuracy of affect detection can be improved using the combined classifiers, especially by fusing the multimodal features.
The base classifiers most useful for particular modalities have been identified. For most individuals, the vote classifiers also outperformed the base classifiers.
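The core idea of a vote classifier, combining the label predictions of several base classifiers by majority vote, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the affect labels and per-classifier predictions are hypothetical.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by unweighted majority vote.

    predictions: a list of prediction lists, one per base classifier,
    all of equal length. Returns one combined label per instance.
    """
    combined = []
    for labels in zip(*predictions):  # one column per instance
        # pick the label predicted by the largest number of classifiers
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Hypothetical predictions from three base classifiers
# (e.g., a decision-tree, a lazy, and a function-based learner)
tree_preds = ["bored", "engaged", "engaged", "neutral"]
lazy_preds = ["engaged", "engaged", "bored", "neutral"]
func_preds = ["engaged", "bored", "engaged", "bored"]

print(majority_vote([tree_preds, lazy_preds, func_preds]))
# → ['engaged', 'engaged', 'engaged', 'neutral']
```

Each instance receives the label agreed on by at least two of the three base classifiers, which is why such ensembles can exceed the accuracy of any single base learner when their errors are not strongly correlated.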
Cite as: Hussain, M.S., Monkaresi, H., Calvo, R.A. (2012). Combining Classifiers in Multimodal Affect Detection. In Proc. Data Mining and Analytics 2012 (AusDM 2012), Sydney, Australia. CRPIT, 134. Zhao, Y., Li, J., Kennedy, P.J. and Christen, P., Eds. ACS. 103-108.