This paper presents an approach for creating lifelike, controllable motion in interactive virtual environments, achieved by learning a statistical model from a set of motion-capture sequences. The method clusters the motion data into motion primitives (dynamic textures) that capture local dynamical characteristics, models the dynamics within each cluster using a linear dynamic system (LDS), annotates those LDSs that have a clear semantic meaning, and computes the cross-entropy between frames of the LDSs to construct a directed graph, called an annotated dynamic texture graph (ADTG), with a two-level structure. The lower level retains the detail and nuance of the captured motion, while the higher level generalizes the motion and encapsulates the connections among the LDSs. The results show that this framework can generate smooth, natural-looking motion in interactive environments.
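To make the LDS building block concrete, the following is a minimal illustrative sketch (not the paper's implementation): each motion primitive is modeled as a linear dynamic system x_{t+1} = A x_t + w_t, y_t = C x_t + v_t, which can be rolled out to synthesize frames. The rotation matrix A, the identity observation matrix C, and the noise scale are hypothetical choices for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lds(A, C, x0, steps, noise=0.01):
    """Roll out an LDS for `steps` frames; returns observations y_1..y_steps."""
    x = x0
    ys = []
    for _ in range(steps):
        x = A @ x + noise * rng.standard_normal(x.shape)        # latent dynamics
        ys.append(C @ x + noise * rng.standard_normal(C.shape[0]))  # observation
    return np.array(ys)

# Hypothetical 2-D latent rotation standing in for one cluster's "dynamic texture".
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
C = np.eye(2)  # observe the latent state directly
y = simulate_lds(A, C, np.array([1.0, 0.0]), steps=50)
print(y.shape)  # (50, 2)
```

In the full method, one such system would be fitted per motion cluster, and transitions between systems would be scored by cross-entropy to form the edges of the ADTG.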