2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII)

Abstract

Facial action units provide an objective characterization of facial muscle movements. Automatic estimation of facial action unit intensities is a challenging problem, given individual differences in neutral face appearance and the need to generalize across different poses, illumination conditions, and datasets. In this paper, we introduce the Local-Global Ranking method as a novel alternative to direct prediction of facial action unit intensities. Our method takes advantage of the additional information present in videos and image collections of the same person (e.g. a photo album). Instead of estimating facial expression intensities independently for each image, our proposed method performs a two-stage ranking: a local pair-wise ranking followed by a global ranking. The local ranking is designed to be accurate and robust by making a simple 3-class comparison (higher, equal, or lower) between randomly sampled pairs of images. We then use a Bayesian model to integrate all these pair-wise rankings into a global ranking. Our Local-Global Ranking method shows state-of-the-art performance on two publicly available datasets, and our cross-dataset experiments show improved generalizability.
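To make the two-stage idea concrete, the following is a minimal sketch of the pipeline the abstract describes. It is not the paper's implementation: the learned 3-class pairwise classifier is stood in for by a thresholded comparison of ground-truth intensities (with an assumed equality tolerance `tol`), and the paper's Bayesian integration step is replaced by a simple win-minus-loss score; all names and parameters here are illustrative.

```python
import random

def local_rank(intensity_a, intensity_b, tol=0.5):
    """3-class local comparison: 1 if a is higher, -1 if lower, 0 if roughly equal.
    Stands in for the paper's learned pairwise classifier; `tol` is an
    assumed equality threshold, not a value from the paper."""
    if intensity_a - intensity_b > tol:
        return 1
    if intensity_b - intensity_a > tol:
        return -1
    return 0

def global_rank(intensities, n_pairs=200, seed=0):
    """Aggregate random pairwise comparisons into a global ordering.
    Uses a simple win-minus-loss score per image as a stand-in for the
    paper's Bayesian model."""
    rng = random.Random(seed)
    n = len(intensities)
    score = [0] * n
    for _ in range(n_pairs):
        # Randomly sample a pair of images from the same person's collection.
        i, j = rng.sample(range(n), 2)
        c = local_rank(intensities[i], intensities[j])
        score[i] += c
        score[j] -= c
    # Image indices ordered from lowest to highest estimated intensity.
    return sorted(range(n), key=lambda k: score[k])

# Example: five frames of the same face with hypothetical AU intensities.
frames = [0.0, 3.0, 1.0, 4.0, 2.0]
print(global_rank(frames))
```

The appeal of this decomposition is that each local comparison is an easy, well-posed 3-class decision, while errors in individual comparisons are averaged out by the global aggregation step.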
