Abstract
In this paper, we address the problem of matching faces across changes in pose in unconstrained videos. We propose two methods, based on 3D rotation and sparse representation, that compensate for changes in pose. The first is Sparse Representation-based Alignment (SRA), which generates pose-aligned features under a sparsity constraint. The mappings for the pose-aligned features are learned from a reference set of face images that is independent of the videos used in the experiments; thus, they generalize across datasets. The second is a Dictionary Rotation (DR) method that directly rotates video dictionary atoms in both their harmonic basis and 3D geometry to match the poses of the probe videos. We demonstrate the effectiveness of our approach over several state-of-the-art algorithms through extensive experiments on three challenging unconstrained video datasets: the video challenge of the Face and Ocular Challenge Series (FOCS), the Multiple Biometrics Grand Challenge (MBGC), and the Human ID datasets.
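To make the sparsity constraint underlying SRA concrete, the following is a minimal, generic sketch of sparse coding over a dictionary via iterative soft-thresholding (ISTA): given a dictionary D and a signal y, it solves min_x 0.5||Dx - y||^2 + lam*||x||_1. This is a standard sparse-representation routine, not the paper's SRA algorithm; the toy dictionary, the `ista` helper, and all parameter values are illustrative assumptions.

```python
import numpy as np

def ista(D, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5||Dx - y||^2 + lam*||x||_1.

    Generic sparse-coding sketch (not the SRA method from the paper).
    """
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)       # gradient of the quadratic data term
        z = x - grad / L               # gradient step
        # soft-threshold: promotes sparsity in the coefficient vector
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Toy example: columns of D are dictionary "atoms"; y is built from two atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)         # unit-norm atoms
x_true = np.zeros(50)
x_true[3], x_true[17] = 1.0, -0.5
y = D @ x_true
x_hat = ista(D, y)                      # recovered coefficients are sparse
```

With a small penalty `lam` and enough iterations, the recovered coefficient vector `x_hat` concentrates its energy on the two atoms that generated `y`, which is the behavior a sparsity constraint is meant to enforce.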