Abstract: The subtle details of an expressive face, such as creases and furrows, are important visual cues, but they are difficult to model and synthesize because they vary dynamically from frame to frame as people speak and make expressions. A novel strategy, different from traditional texture-mapping methods, is proposed for generating such dynamic facial textures according to the motion of facial feature points. Based on the observation that shape and appearance in face images are highly correlated, a mapping, called shape-appearance dependence mapping (SADM), is designed to transfer one to the other. Experimental results show that faces synthesized with SADM are very close to the real ones. The proposed SADM strategy can be integrated into a wire-frame-based head model to generate realistic animation effects, or applied to model-based video coding to achieve lower bit-rates.