Abstract: This paper proposes a method for constructing a virtual-real fusion system that integrates multiple videos to create an augmented virtual environment, in which images and videos captured from the real world are fused into a virtual scene. With textures from the images and motion from the videos, the virtual environment becomes more realistic. Unmanned aerial vehicles are used to capture photographs and reconstruct the 3D virtual scene. By matching features, video frames are registered to the virtual environment, and the images are then projected onto the virtual scene using projective texture mapping. Because the virtual environment lacks 3D models corresponding to moving objects, distortions occur when the images are projected directly and the viewpoint changes. This paper first detects and tracks the moving objects and then presents multiple ways of displaying them to solve the distortion problem. The fusion of multiple videos with overlapping areas in the virtual environment is also considered in this system. Experimental results show that the virtual-real fusion environment built in this paper offers clear advantages.