Abstract
We present a framework for a virtual try-on system. Starting from a template human body mesh, we reshape it to obtain meshes of various body dimensions according to user-specified parameters. Each reshaped body is then processed in two directions: a garment model is refitted to the outside of the body, and a skeleton is embedded inside it. The refitted garment mesh is bound to the body mesh via our coat-to-mesh algorithm, which is the main contribution of this paper. The body mesh is skinned to the embedded skeleton through an implicit rigging process. In this way, any deformation of the body mesh leads to a corresponding deformation of the garment mesh; during the deformation, both the mesh topology and the spatial relationship between the body and the garment are maintained. Finally, we apply third-party motion data to drive the skeleton, the body mesh, and the garment mesh, creating real-time animations of a dressed human character.