Cooperatively Resolving Occlusion Between Real and Virtual in Multiple Video Sequences


Xin Jin, Xiaowu Chen*, Bin Zhou, Hongchang Lin

The State Key Laboratory of Virtual Reality Technology and Systems School of Computer Science and Engineering, Beihang University Beijing, China

*Corresponding author




   The occlusion between real and virtual objects affects not only the seamless merging of virtual and real environments, but also users' perception of orientation and location and their spatial interactions in augmented reality. When a large number of video sequences represent the real environment and each sequence independently runs computer vision algorithms to resolve every occlusion between real and virtual, the real-time performance of the augmented reality system often degrades. This article proposes an approach for cooperatively resolving occlusion between real and virtual across multiple video sequences in an augmented reality scene. First, it analyzes the occlusion relations between virtual and real objects in the initial video sequences using their intrinsic parameters and poses, and obtains the spatial relations among the video sequences from 3D registration information. Second, for each video sequence, it divides and codes the perception regions of the corresponding augmented reality scene. Finally, according to the spatial relations among the video sequences, the occlusion relations already known in the initial sequences, and the coded perception regions, three types of occlusion relations (real occluding virtual, virtual occluding real, and non-occlusion) are detected and represented in the augmented reality scene. Experimental results show that this approach reduces redundant computation when resolving occlusion between real and virtual objects, and improves the performance of generating the augmented reality scene, especially for scenes containing many video sequences and many relations of virtual occluding real or non-occlusion.
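As an illustration of the three occlusion relations the paper distinguishes, the sketch below classifies the relation seen by each camera from the registered 3D positions of the two objects and a per-view overlap test. This is a simplified toy model, not the paper's method: the function name `classify_view`, the camera layout, and the reduction of the perception-region coding to a simple depth comparison in a shared world frame are all illustrative assumptions.

```python
from enum import Enum

class Occlusion(Enum):
    REAL_OCCLUDES_VIRTUAL = "real occluding virtual"
    VIRTUAL_OCCLUDES_REAL = "virtual occluding real"
    NONE = "non-occlusion"

def classify_view(cam_pos, real_pos, virtual_pos, overlaps):
    """Classify the occlusion relation in one video sequence.

    cam_pos / real_pos / virtual_pos are 3D points in the shared world
    frame obtained from registration; `overlaps` says whether the two
    objects' image projections intersect in this view.
    """
    if not overlaps:
        return Occlusion.NONE

    def dist2(a, b):  # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # When the projections overlap, the object nearer to the camera
    # occludes the farther one.
    if dist2(cam_pos, real_pos) < dist2(cam_pos, virtual_pos):
        return Occlusion.REAL_OCCLUDES_VIRTUAL
    return Occlusion.VIRTUAL_OCCLUDES_REAL

# Four hypothetical cameras around a scene like the one in the figure:
# the real human at the origin, the virtual motorcycle slightly behind it.
human, moto = (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)
views = {
    "C1": ((-3.0, 0.5, 0.0), False),  # side view, projections disjoint
    "C2": ((0.0, 3.0, 0.0), True),    # motorcycle between camera and human
    "C3": ((3.0, 0.5, 0.0), False),   # opposite side view
    "C4": ((0.0, -3.0, 0.0), True),   # human between camera and motorcycle
}
for name, (cam, overlaps) in views.items():
    print(name, classify_view(cam, human, moto, overlaps).value)
```

Once the 3D positions are registered in the shared frame, each additional view needs only its own cheap 2D overlap test rather than a full vision pipeline, which is the intuition behind the cooperative reuse described in the abstract.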



Example of a shared AR scene. The human is a real object while the motorcycle is a virtual object. In C1 and C3, there is no occlusion between the real and virtual objects. In C2, the human is occluded by the motorcycle, while in C4 the human occludes the motorcycle.

Paper and Slides


Xin Jin, Xiaowu Chen, Bin Zhou, Hongchang Lin. Cooperatively Resolving Occlusion Between Real and Virtual in Multiple Video Sequences. Sixth ChinaGrid Annual Conference, Dalian, China, August 22-23, pp. 234-240, 2011.


@inproceedings{DBLP:conf/chinagrid/JinCZL11,
  author    = {Xin Jin and
               Xiaowu Chen and
               Bin Zhou and
               Hongchang Lin},
  title     = {Cooperatively Resolving Occlusion between Real and Virtual in Multiple
               Video Sequences},
  booktitle = {Sixth ChinaGrid Annual Conference, ChinaGrid 2011, Dalian, Liaoning,
               China, Aug. 22-23, 2011},
  pages     = {234--240},
  year      = {2011},
  crossref  = {DBLP:conf/chinagrid/2011},
  doi       = {10.1109/ChinaGrid.2011.49},
  bibsource = {dblp computer science bibliography}
}
