Abstract
The emerging use of self-avatars in physical and motor rehabilitation imposes specific requirements on their real-time animation, combining properties from the fields of computer graphics and biomechanics. We present a method for animating a self-avatar in real time that provides a high-fidelity representation of whole-body kinematics based on anatomical, reproducible bone-segment definitions. The method requires little setup time and has low motion-to-photon latency.