Linear 3D triangulation

I came across this linear 3D triangulation method in TheiaSFM:

bool TriangulateNView(const std::vector<Matrix3x4d>& poses,
                      const std::vector<Vector2d>& points,
                      Vector4d* triangulated_point) {
  CHECK_EQ(poses.size(), points.size());

  Matrix4d design_matrix = Matrix4d::Zero();
  for (int i = 0; i < points.size(); i++) {
    const Vector3d norm_point = points[i].homogeneous().normalized();
    const Matrix3x4d cost_term =
        poses[i].matrix() -
        norm_point * norm_point.transpose() * poses[i].matrix();
    design_matrix = design_matrix + cost_term.transpose() * cost_term;
  }
  Eigen::SelfAdjointEigenSolver<Matrix4d> eigen_solver(design_matrix);
  *triangulated_point = eigen_solver.eigenvectors().col(0);
  return eigen_solver.info() == Eigen::Success;
}
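
For context, here's a minimal sketch of how one might call it (my own example, not from TheiaSFM; it assumes the snippet above compiles with Eigen, glog's CHECK_EQ, the usual <vector>/<iostream> includes, and the Theia-style typedefs Matrix3x4d, Matrix4d, Vector2d, Vector3d and Vector4d in scope):

int main() {
  // Ground-truth homogeneous 3D point we'll try to recover.
  const Vector4d X(1.0, 2.0, 5.0, 1.0);

  // Two synthetic views: a camera at the origin and one shifted along x.
  std::vector<Matrix3x4d> poses(2);
  poses[0] << 1, 0, 0,  0.0,
              0, 1, 0,  0.0,
              0, 0, 1,  0.0;
  poses[1] << 1, 0, 0, -0.5,
              0, 1, 0,  0.0,
              0, 0, 1,  0.0;

  // Project X into each view; perspective division gives the 2D observations.
  std::vector<Vector2d> points;
  for (const auto& P : poses) {
    const Vector3d projection = P * X;
    points.push_back(projection.hnormalized());
  }

  Vector4d triangulated;
  TriangulateNView(poses, points, &triangulated);

  // Divide out the homogeneous scale before comparing against X.
  std::cout << (triangulated / triangulated(3)).transpose() << "\n";
  return 0;
}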

I was aware of the DLT (direct linear transform), but this didn't look like any formulation I'd seen before. It's actually pretty neat. Let's say you're trying to find an unknown homogeneous point in 3D, \mathbf{X} = [X, Y, Z, 1]. What we have are N poses, P_i, represented as 3\times 4 matrices, and the corresponding 2D coordinates, represented as homogeneous points in \mathbb R^3. The 2D points are written as \mathbf{x}_i = [x, y, 1].

Since we're triangulating the 3D point, and we're working in homogeneous coordinates (i.e. \alpha \mathbf{x} \equiv \mathbf{x}), then for every view i we should have:
\alpha_i \mathbf{x}_i \approx P_i \mathbf X
for some scale factor \alpha_i.
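To make the scale factor concrete, take the identity pose P = [\,I \mid \mathbf{0}\,]. Then
P \mathbf{X} = [X, Y, Z]^\top = Z\,[X/Z,\ Y/Z,\ 1]^\top,
so \alpha = Z (the depth) and \mathbf{x} = [X/Z, Y/Z, 1].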

Now let's pick apart the code above. Let's call design_matrix D and cost_term C_i. On line 12, accumulated over the loop, we have:
\displaystyle D = \sum_{i=1}^{N} C_i^\top C_i
And on line 15 we're picking out the eigenvector corresponding to the smallest eigenvalue of D (SelfAdjointEigenSolver returns eigenvalues in increasing order), i.e.
\mathbf{X} \approx \displaystyle \underset{\mathbf{v}, |\mathbf{v}|=1}{\text{argmin}}\ \mathbf{v}^\top D \mathbf{v}

We can rewrite D = \mathbf{C}^\top\mathbf{C} where:
\mathbf{C} = \left[ \begin{matrix} C_1\\ C_2\\ \vdots\\ C_N\\ \end{matrix}\right], which, substituting into the expression above, gives:
\mathbf{X} \approx \displaystyle \underset{\mathbf{v}, |\mathbf{v}|=1}{\text{argmin}}\ \|\mathbf{C v}\|_2^2,
which is of course the right singular vector corresponding to the smallest singular value of \mathbf{C}. Using the eigendecomposition of D is much more efficient, since D is always 4\times 4 (O(1) in size, rather than the O(N) rows of \mathbf{C}), but probably at the cost of some numerical precision.
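
If you wanted the SVD version instead, a sketch might look like this (my own illustration, not Theia code; it assumes the same Eigen types as above): stack the C_i into a 3N \times 4 matrix and take the right singular vector with the smallest singular value.

Vector4d TriangulateNViewSVD(const std::vector<Matrix3x4d>& poses,
                             const std::vector<Vector2d>& points) {
  // Stack all the C_i = P_i - x_hat * x_hat^T * P_i terms vertically.
  Eigen::MatrixXd C(3 * poses.size(), 4);
  for (size_t i = 0; i < poses.size(); i++) {
    const Vector3d x_hat = points[i].homogeneous().normalized();
    C.block<3, 4>(3 * i, 0) = poses[i] - x_hat * x_hat.transpose() * poses[i];
  }
  // The columns of V are the right singular vectors; Eigen sorts singular
  // values in decreasing order, so the last column is the one we want.
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(C, Eigen::ComputeFullV);
  return svd.matrixV().col(3);
}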

Either way, we're trying to find the approximate null space of \mathbf{C}, which means finding something that's roughly in the null space of all of the C_i. But why?

On lines 8–11, we have:
C_i = P_i - \hat{\mathbf{x}}_i\hat{\mathbf{x}}_i^\top P_i,
where \hat{\mathbf{x}}_i = \mathbf{x}_i / \|\mathbf{x}_i\| is the normalized homogeneous 2D point (the homogeneous().normalized() on line 8), and we're claiming \mathbf{X} is approximately in its null space. Let's see what happens when we multiply by it:
(P_i - \hat{\mathbf{x}}_i\hat{\mathbf{x}}_i^\top P_i) \mathbf{X} = P_i \mathbf{X} - \hat{\mathbf{x}}_i\hat{\mathbf{x}}_i^\top P_i \mathbf{X}
Now, substituting in the first equation from all the way back at the top gives:
\approx \alpha_i \mathbf{x}_i - \alpha_i\hat{\mathbf{x}}_i\hat{\mathbf{x}}_i^\top \mathbf{x}_i = \alpha_i \mathbf{x}_i - \alpha_i\hat{\mathbf{x}}_i \|\mathbf{x}_i\| = \alpha_i \mathbf{x}_i - \alpha_i\mathbf{x}_i = 0
taa daa!

So there you go! If there is no noise, \mathbf{X} is in the right null space of every C_i, and hence in the null space of \mathbf C and, of course, of D. If there is noise, then it's the vector that comes closest to being in the null space of all of the C_i in a least-squares sense.
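
You can check the noise-free case numerically with a single view (an illustrative snippet of my own, not from Theia):

// With a perfect observation, C * X should be (numerically) zero.
Matrix3x4d P;
P << 1, 0, 0, -0.5,
     0, 1, 0,  0.2,
     0, 0, 1,  0.0;
const Vector4d X(1.0, 2.0, 5.0, 1.0);          // homogeneous 3D point
const Vector3d x_hat = (P * X).normalized();   // normalized observation, x-hat
const Matrix3x4d C = P - x_hat * x_hat.transpose() * P;
std::cout << (C * X).norm() << "\n";           // ~0, up to floating-point error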

Note though that this is an algebraic error, not a reprojection error, so it will give slightly oddly scaled results that effectively weight some points more than others. It is, however, a rather elegant solution.
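
To spell out that distinction: the linear method minimizes the algebraic cost
\displaystyle \sum_{i=1}^{N} \left\| \left(P_i - \hat{\mathbf{x}}_i\hat{\mathbf{x}}_i^\top P_i\right) \mathbf{X} \right\|_2^2 \quad \text{subject to } \|\mathbf{X}\| = 1,
whereas the reprojection error you'd typically care about is
\displaystyle \sum_{i=1}^{N} \left\| \mathbf{x}_i - \pi\!\left(P_i \mathbf{X}\right) \right\|_2^2, \qquad \pi([u, v, w]^\top) = [u/w,\ v/w]^\top.
The two weight each observation differently (by factors that depend on the depth and image location), which is where the odd scaling mentioned above comes from.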