Cooperatively Resolving Occlusion Between Real and Virtual in Multiple Video Sequences
Xin Jin, Xiaowu Chen*, Bin Zhou, Hongchang Lin
The State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing, China
Occlusion between real and virtual objects affects not only the seamless merging of virtual and real environments but also users' visual perception of orientation and location and their spatial interactions in augmented reality. When a large number of video sequences represent the real environment and every sequence runs computer vision algorithms to resolve all real-virtual occlusions, the real-time performance of the augmented reality system often degrades. This article proposes an approach for cooperatively resolving real-virtual occlusion across multiple video sequences in an augmented reality scene. First, it analyzes the occlusion relations between virtual and real objects in the initial video sequences using their intrinsic parameters and poses, and derives the spatial relations among the video sequences from 3D registration information. Second, for each video sequence, it divides and codes the perception regions of the corresponding augmented reality scene. Finally, according to the spatial relations among the video sequences, the known occlusion relations in the initial sequences, and the codes of the perception regions, three types of occlusion relations, namely real occluding virtual, virtual occluding real, and non-occlusion, are detected and represented in the augmented reality scene. Experimental results show that this approach reduces redundant computation in resolving real-virtual occlusion and improves the performance of generating the augmented reality scene, especially when the scene contains many video sequences and many occlusion relations of virtual occluding real or non-occlusion.
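The cooperative idea described above can be sketched in a few lines: once the initial sequences have computed occlusion relations for their coded perception regions, the remaining sequences look up those shared codes instead of re-running vision algorithms. This is only an illustrative sketch, not the authors' implementation; all names (`Occlusion`, `resolve_cooperatively`, the dictionary layouts) are hypothetical.

```python
from enum import Enum

class Occlusion(Enum):
    """The three occlusion relations distinguished in the abstract."""
    REAL_OCCLUDES_VIRTUAL = "real_occludes_virtual"
    VIRTUAL_OCCLUDES_REAL = "virtual_occludes_real"
    NON_OCCLUSION = "non_occlusion"

def resolve_cooperatively(initial_relations, sequence_regions):
    """Reuse occlusion relations computed in the initial sequences.

    initial_relations: {region_code: Occlusion}, produced by vision
        algorithms in the initial video sequences.
    sequence_regions: {sequence_id: [region_code, ...]}, the divided and
        coded perception regions of each remaining sequence.
    Returns {sequence_id: {region_code: Occlusion}}. Region codes absent
    from initial_relations are left out; only those would still need a
    fresh per-sequence vision computation.
    """
    resolved = {}
    for seq_id, regions in sequence_regions.items():
        resolved[seq_id] = {
            code: initial_relations[code]
            for code in regions
            if code in initial_relations  # reuse the known relation
        }
    return resolved
```

In this sketch the redundant work saved is exactly the per-sequence vision processing for regions whose occlusion relation is already known, which matches the abstract's claim that scenes dominated by virtual-occluding-real or non-occlusion regions benefit most.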