I came across this 3D linear triangulation method in TheiaSFM:

```cpp
bool TriangulateNView(const std::vector<Matrix3x4d>& poses,
                      const std::vector<Vector2d>& points,
                      Vector4d* triangulated_point) {
  CHECK_EQ(poses.size(), points.size());

  Matrix4d design_matrix = Matrix4d::Zero();
  for (int i = 0; i < points.size(); i++) {
    const Vector3d norm_point = points[i].homogeneous().normalized();
    const Eigen::Matrix<double, 3, 4> cost_term =
        poses[i].matrix() -
        norm_point * norm_point.transpose() * poses[i].matrix();
    design_matrix = design_matrix + cost_term.transpose() * cost_term;
  }
  Eigen::SelfAdjointEigenSolver<Matrix4d> eigen_solver(design_matrix);
  *triangulated_point = eigen_solver.eigenvectors().col(0);
  return eigen_solver.info() == Eigen::Success;
}
```

I was aware of the DLT (direct linear transform), but it didn't look like any formulation I've seen before. It's actually pretty neat. Let's say you're trying to find an unknown homogeneous point in 3D, $\mathbf{x} \in \mathbb{P}^3$. What we have is $n$ poses, $P_1, \ldots, P_n$, represented as $3 \times 4$ matrices, and the corresponding 2D coordinates represented as homogeneous points in $\mathbb{P}^2$. The 2D points are written as $\mathbf{u}_1, \ldots, \mathbf{u}_n$.

Since we're triangulating the 3D point, and we're working in homogeneous coordinates (i.e. equality only up to scale), then for all $i$ we should have:

$$P_i \mathbf{x} = \alpha_i \mathbf{u}_i$$

given some scale factor $\alpha_i$.
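As a quick sanity check of that relation (a toy setup with made-up names, assuming numpy is available — none of this is from TheiaSFM), projecting a 3D point and re-homogenizing the resulting 2D measurement should give two parallel vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 3x4 pose matrix P and homogeneous 3D point x.
P = rng.standard_normal((3, 4))
x = np.array([1.0, 2.0, 3.0, 1.0])

# Projecting gives alpha_i * u_i for some unknown scale alpha_i;
# dividing by the last coordinate recovers the 2D measurement.
u_h = P @ x
u = u_h[:2] / u_h[2]

# Re-homogenize the measurement: it must be parallel to P @ x,
# so their cross product vanishes.
u_rehom = np.append(u, 1.0)
assert np.allclose(np.cross(u_h, u_rehom), 0.0)
```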

Now let's pick apart the code above. Let's call `design_matrix` $D$ and `cost_term` $A_i$. On line 12, we have:

$$D = \sum_{i=1}^{n} A_i^T A_i.$$

And on line 15 we're finding the eigenvector corresponding to the smallest eigenvalue of $D$ (`SelfAdjointEigenSolver` produces them in sorted, increasing order), i.e.

$$\mathbf{x}^* = \arg\min_{\|\mathbf{x}\| = 1} \mathbf{x}^T D \mathbf{x}.$$

We can rewrite $D = A^T A$ where:

$$A = \begin{bmatrix} A_1 \\ \vdots \\ A_n \end{bmatrix},$$

which, substituting in above, gives:

$$\mathbf{x}^* = \arg\min_{\|\mathbf{x}\| = 1} \|A \mathbf{x}\|^2,$$

which is of course the right singular vector corresponding to the smallest singular value of $A$. Using the eigendecomposition is much more efficient since the size of $D$ is $4 \times 4$, not $3n \times 4$, but probably at the penalty of some numerical precision.
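The equivalence between the two decompositions is easy to check numerically (a sketch on random data, assuming numpy — the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stack n = 5 random 3x4 blocks A_i into A, so A is 15x4.
A = rng.standard_normal((15, 4))
D = A.T @ A  # the 4x4 "design matrix"

# Smallest eigenvector of D; eigh returns eigenvalues in ascending
# order, matching Eigen's SelfAdjointEigenSolver convention.
eigvals, eigvecs = np.linalg.eigh(D)
v_eig = eigvecs[:, 0]

# Right singular vector of A for the smallest singular value.
_, _, Vt = np.linalg.svd(A)
v_svd = Vt[-1]

# The two agree up to an arbitrary sign flip.
assert np.allclose(np.abs(v_eig), np.abs(v_svd))
```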

Either way we’re trying to find the approximate nullspace of , which means finding something that’s roughly in the null space of all the s. But why?

On lines 8–11, we have:

$$A_i = \left(I - \hat{\mathbf{u}}_i \hat{\mathbf{u}}_i^T\right) P_i,$$

where $\hat{\mathbf{u}}_i = \mathbf{u}_i / \|\mathbf{u}_i\|$ is the normalized homogeneous 2D point (`norm_point` in the code), and we're claiming $\mathbf{x}$ is approximately in its null space. Let's see what happens when we multiply by it:

$$A_i \mathbf{x} = \left(I - \hat{\mathbf{u}}_i \hat{\mathbf{u}}_i^T\right) P_i \mathbf{x}.$$

Now, substituting in the first equation we have all the way back at the top gives:

$$A_i \mathbf{x} = \left(I - \hat{\mathbf{u}}_i \hat{\mathbf{u}}_i^T\right) \alpha_i \mathbf{u}_i = \alpha_i \left(\mathbf{u}_i - \hat{\mathbf{u}}_i \hat{\mathbf{u}}_i^T \mathbf{u}_i\right) = \alpha_i \left(\mathbf{u}_i - \hat{\mathbf{u}}_i \|\mathbf{u}_i\|\right) = \alpha_i \left(\mathbf{u}_i - \mathbf{u}_i\right) = 0,
$$

taa daa!
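That cancellation is easy to verify numerically (another toy example, assuming numpy; the setup is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

P = rng.standard_normal((3, 4))      # a hypothetical pose
x = np.array([1.0, -2.0, 5.0, 1.0])  # homogeneous 3D point

u_h = P @ x                           # alpha * u, the homogeneous projection
u_hat = u_h / np.linalg.norm(u_h)     # normalized measurement direction

# A_i = (I - u_hat u_hat^T) P annihilates x exactly when there is no noise.
A = (np.eye(3) - np.outer(u_hat, u_hat)) @ P
assert np.allclose(A @ x, 0.0)
```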

So there you go! If there is no noise, $\mathbf{x}$ is in the right null space of each $A_i$, and so in the null space of $A$, and of course of $D$. If there is noise, then it's the closest thing to being in the null space of all the $A_i$s, measured in a least-squares sense.

Note though that it's just an algebraic error, not a reprojection error, so it will give slightly oddly scaled results, giving more weight to some points than others. It is, however, a rather elegant solution.
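To see the whole thing work end to end, here is a small numpy port of the routine (a sketch under my own naming, not the TheiaSFM code itself), fed with noise-free synthetic projections so the recovered point should match exactly:

```python
import numpy as np

def triangulate_nview(poses, points):
    """Triangulate one 3D point from n poses and 2D measurements.

    poses:  list of 3x4 projection matrices P_i
    points: list of inhomogeneous 2D measurements u_i
    Returns a unit-norm homogeneous 4-vector.
    """
    D = np.zeros((4, 4))
    for P, u in zip(poses, points):
        u_hat = np.append(u, 1.0)
        u_hat /= np.linalg.norm(u_hat)       # normalized homogeneous point
        A = P - np.outer(u_hat, u_hat) @ P   # (I - u_hat u_hat^T) P
        D += A.T @ A
    _, eigvecs = np.linalg.eigh(D)           # ascending eigenvalues
    return eigvecs[:, 0]                     # smallest-eigenvalue eigenvector

# Noise-free synthetic data: random poses observing one known point.
rng = np.random.default_rng(2)
x_true = np.array([0.5, -1.0, 4.0, 1.0])
poses = [rng.standard_normal((3, 4)) for _ in range(4)]
points = [(P @ x_true)[:2] / (P @ x_true)[2] for P in poses]

x_est = triangulate_nview(poses, points)
x_est /= x_est[3]  # dehomogenize for comparison
assert np.allclose(x_est, x_true)
```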