Placing a virtual object behind a real world object

In ARKit for iOS, a virtual object is always rendered in front of any real-world object. This means that if I stand between the camera and the virtual object, the virtual object is still drawn on top of me. How can I fix this?


The bottle should be visible in front, but the virtual object is drawn over it.


3 answers


You cannot achieve this with ARKit alone. It does not offer a solution to the occlusion problem, which is a hard problem.

Ideally, you would know the depth of each pixel projected onto the camera and use that to determine which pixels are in front of the virtual object and which are behind it. I would not try anything with the feature points that ARKit exposes, because 1) their positions are unreliable, and 2) there is no way to know which feature point in frame B corresponds to a given feature point in frame A. The data is too noisy to be useful.
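The per-pixel idea can be sketched as follows. Note that the dense real-world depth map is a hypothetical input here, since ARKit on its own does not provide one:

```swift
// Conceptual sketch, not a real ARKit API. Given a hypothetical dense
// depth map of the real scene and the depth at which each pixel of the
// virtual object was rendered, a virtual pixel should be hidden
// whenever the real surface is closer to the camera.
func occlusionMask(realDepth: [Float], virtualDepth: [Float]) -> [Bool] {
    return zip(realDepth, virtualDepth).map { real, virtual in
        real < virtual   // true: real object is in front, hide the virtual pixel
    }
}
```

This is exactly the depth test a renderer performs internally; the hard part is obtaining `realDepth` for the camera image, which is what ARKit does not give you.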



You may be able to achieve something with third-party options that process the captured image and estimate the depth, or distinct depth levels, in the scene, but I don't know of any good solution. There are SLAM methods that produce a dense depth map, such as DTAM ( https://www.kudan.eu/kudan-news/different-types-visual-slam-systems/ ), but that would redo a large part of what ARKit already does. There may be other approaches I am not aware of. Apps like Snapchat do it their own way, so it is possible!


So basically your question is about mapping the virtual object's coordinates into the real-world coordinate system; in short, you want the virtual object to be occluded by the real object, so that you only see the virtual object once you move past the real object.

If so, you need to know the physical position of each object in the environment, and then you need to know exactly where the camera is in order to decide whether the virtual object is occluded.



This is not an intuitive way to fix it, but it is the only way I can think of.

Regards.



What you are trying to achieve is not easy.

You need to detect the parts of the real world that should remain visible, using some sort of image processing, or perhaps the ARKit feature points, which contain depth information. Based on that, you add an "invisible virtual object" that blocks the drawing of anything behind it. This object represents the real object inside the virtual world, so that the background (camera feed) remains visible wherever the invisible virtual object is present.
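In SceneKit this invisible occluder can be sketched with a material that writes to the depth buffer but not the color buffer. The box dimensions below are placeholders; in practice you would size and position the geometry to match the detected real object:

```swift
import SceneKit

// A node whose geometry writes only to the depth buffer, not the color
// buffer: it hides virtual content behind it while the camera feed
// stays visible in its place.
func makeOccluderNode() -> SCNNode {
    let geometry = SCNBox(width: 0.3, height: 0.5, length: 0.3, chamferRadius: 0)
    let material = SCNMaterial()
    material.colorBufferWriteMask = []   // draw nothing visible
    material.writesToDepthBuffer = true  // but still occlude
    geometry.materials = [material]

    let node = SCNNode(geometry: geometry)
    node.renderingOrder = -1             // render before the regular virtual content
    return node
}
```

You would then place this node at the real object's estimated position, for example from a hit test or by positioning it manually.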







