Abstract
This paper presents a gender classification method that integrates facial, hairstyle, and clothing information. An input image is first segmented into face, hairstyle, and clothing regions, and a PCA and a GMM, trained independently on thousands of sample images, are applied to each region. The per-region classification results are then combined into a single score with known priors via Bayes' rule. Experimental results showed that our integration strategy significantly reduced the error rate in gender classification compared with the conventional face-only approach.
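As a rough illustration of the fusion described in the abstract, the sketch below combines per-region GMM log-likelihoods (computed on PCA-projected features) with a gender prior via Bayes' rule, under a conditional-independence assumption across regions. All function names, parameter values, and the 0/1 label encoding are placeholders for illustration, not the paper's implementation.

```python
# Illustrative sketch (not the authors' code): per-region PCA + GMM models
# whose likelihoods are fused into a single gender posterior via Bayes' rule.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

REGIONS = ["face", "hair", "clothing"]

def train_region_models(features, labels, n_components=8, n_pca=32):
    """Fit one PCA and one GMM per gender class for a single region.
    `features`: (n_samples, dim) array; `labels`: 0 = female, 1 = male (assumed)."""
    pca = PCA(n_components=n_pca).fit(features)
    proj = pca.transform(features)
    gmms = {g: GaussianMixture(n_components=n_components).fit(proj[labels == g])
            for g in (0, 1)}
    return pca, gmms

def posterior_male(region_feats, models, prior_male=0.5):
    """Combine per-region log-likelihoods with a prior (Bayes' rule),
    assuming the regions are conditionally independent given the gender."""
    log_post = {0: np.log(1.0 - prior_male), 1: np.log(prior_male)}
    for r in REGIONS:
        pca, gmms = models[r]
        x = pca.transform(region_feats[r].reshape(1, -1))
        for g in (0, 1):
            log_post[g] += gmms[g].score_samples(x)[0]  # log p(x_r | gender)
    # Normalize in a numerically stable way to get P(male | all regions)
    m = max(log_post.values())
    unnorm = {g: np.exp(v - m) for g, v in log_post.items()}
    return unnorm[1] / (unnorm[0] + unnorm[1])
```

A decision threshold of 0.5 on the returned posterior would correspond to a maximum a posteriori gender decision under these assumptions.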