Abstract
In model-based virtual conferencing systems, changes in facial expression are the main focus of users' attention. To represent these changes in detail, two facial feature point tracking algorithms are developed to track the motion of facial features. The first algorithm achieves medium to high accuracy with low computational complexity, while the second adopts hierarchical mesh models. A further algorithm is utilized to translate the tracking results into facial animation parameters, which can be used to drive MPEG-4 compliant talking heads.