# Depth estimation for uncalibrated 3D reconstruction

I am trying to do a 3D reconstruction from a set of uncalibrated photographs in MATLAB. I am using SIFT to detect and match feature points between images. I want to do a projective reconstruction first and then upgrade it to a metric reconstruction using auto-calibration.

I know how to compute 3D points from two images by estimating the fundamental matrix, extracting the camera matrices, and triangulating. Now say that I have three images, a, b and c. I compute the camera matrices and 3D points for images a and b. To add image c, I estimate its camera matrix from the known 3D points (computed from a and b) that correspond to 2D points in image c, using the projection equation:

    s x = P X

where x is a homogeneous image point, X the corresponding homogeneous 3D point, P the 3x4 camera matrix, and s the projective depth.
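Estimating a camera matrix from known 3D points and their 2D observations (resectioning) can be done linearly with the DLT. A minimal sketch in Python/NumPy rather than MATLAB, assuming noise-free correspondences and no coordinate normalization (in practice you would normalize and refine):

```python
import numpy as np

def resect_dlt(X, x):
    """Estimate a 3x4 camera matrix P from n >= 6 3D-2D correspondences.

    X: (n, 3) array of 3D points, x: (n, 2) array of image points.
    Each correspondence gives two linear equations in the entries of P,
    derived from s*x = P*X by eliminating the unknown depth s.
    """
    rows = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(Xw, 1.0)  # homogeneous 3D point
        rows.append(np.hstack([Xh, np.zeros(4), -u * Xh]))
        rows.append(np.hstack([np.zeros(4), Xh, -v * Xh]))
    A = np.asarray(rows)
    # The null vector of A (last right singular vector) holds P up to scale.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

Because P is recovered only up to scale, the result lives in the same projective frame as the 3D points you fed in, which is exactly what is needed to keep image c consistent with the a-b reconstruction.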

However, when I reconstruct 3D points between b and c, they are not consistent with the existing 3D points from a and b. I assume this is because I do not know the correct projective depths (the s in the formula above).
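One way this mismatch arises is triangulating b-c with a fresh two-view reconstruction, which lives in a different projective frame than the a-b one. If instead both camera matrices come from the same frame (e.g. c's camera obtained by resectioning against the a-b points), linear triangulation stays consistent. A minimal DLT triangulation sketch in Python/NumPy, illustrative only:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera matrices expressed in the SAME projective frame.
    x1, x2: (u, v) observations of the point in each image.
    Returns the inhomogeneous 3D point.
    """
    # Each view contributes two equations: u*(p3.X) - (p1.X) = 0, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]                 # null vector = homogeneous 3D point
    return Xh[:3] / Xh[3]
```

Triangulating every new point this way keeps the whole structure in one projective frame, so no per-point depth bookkeeping is needed.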

With the Sturm and Triggs factorization method, I can estimate the projective depths and recover structure and motion. However, for this to work, every point must be visible in every view, which is not the case for my images. How can I estimate depths when some points are not visible in every view?


This is not a MATLAB question; it is about the algorithm.

It is mathematically impossible to estimate the depth of a 3D point in an image in which that point is not observed.

There are extensions of the factorization method that handle missing data. However, the field has largely converged on bundle adjustment as the gold standard.
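Bundle adjustment minimizes the total reprojection error over all cameras and points simultaneously, and it naturally handles missing observations: only the (camera, point) pairs that were actually observed contribute residuals. A minimal sketch of such a residual function in Python/NumPy (the packing convention and names are my own, not from any library); it can be handed to a least-squares solver such as `scipy.optimize.least_squares`:

```python
import numpy as np

def reproj_residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs):
    """Reprojection residuals for projective bundle adjustment.

    params: flat vector packing n_cams 3x4 camera matrices followed by
            n_pts homogeneous 4-vectors (the 3D points).
    cam_idx, pt_idx: for each observation, which camera saw which point.
            Missing data is handled by simply omitting those pairs.
    obs:    (n_obs, 2) observed image coordinates.
    """
    Ps = params[:n_cams * 12].reshape(n_cams, 3, 4)
    Xs = params[n_cams * 12:].reshape(n_pts, 4)
    # Project each observed point with its observing camera.
    proj = np.einsum('nij,nj->ni', Ps[cam_idx], Xs[pt_idx])
    return (proj[:, :2] / proj[:, 2:] - obs).ravel()
```

Note that a purely projective parameterization like this has gauge freedoms (overall projective transform and per-point scale), so in practice one fixes a gauge or relies on the solver's damping; the sketch only shows how missing observations drop out of the cost.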

An excellent tutorial on how to achieve what you want can be found here; it is the culmination of several years of research in a working application, covering everything from projective reconstruction to the metric upgrade.
