Building an automatic plant waterer (4/?): Calibrating the sensor

A short day in the attic today.

  • Part 1: resistive sensing
  • Part 2: finding resistive sensing is bad and capacitive sensing is hard
  • Part 3: another crack at a capacitive sensor
  • Part 4: calibrating the sensor

Day VII (weekend 6)

First, to check everything’s OK, I’m going to calibrate the sensor. I have a box of cheap ceramic capacitors in the E3 series, spanning 10pF to 2200pF, and I’ll measure them with my old Academy PG015 capacitance meter, since it’s likely to be more accurate than the capacitor rating.

Here are the measurements:

Rating (pF)   Measured capacitance (pF)   Count
0             0                           12.99
10            10.5                        18.84
22            22.6                        25.80
47            48.3                        40.48
100           101.7                       70.90
220           221                         134.03
470           453                         259.21
1000          965                         539.16
2200          2240                        1227.2

I’m not 100% sure how to fit this. The obvious choice is a least squares straight line fit to find the slope and offset. However, the variance increases with the measurement and I didn’t record that. Also, I don’t know what the error on the capacitance meter is like.

So, I think the best choice is a fit in log space. A line of fixed unit slope in log-log space copes with errors on both measurements and, to some extent, with the variance growing with the measurement. The equation mapping measurements (M) to capacitances (C) is:
C = p_1 ( M + p_2)

So we just take the log of that and do least squares on the result. The code is really simple in Octave:

% Data
d = [
0 0 12.99
10 10.5 18.84
22 22.6 25.80
47 48.3 40.48
100 101.7 70.90
220 221 134.03
470 453 259.21
1000 965 539.16
2200 2240 1227.2
];

% Initial parameters: scale and offset
p=[1 1];

% Least squares in log space (the first row is skipped, since log(0) is undefined)
err = @(p) sum((log(d(2:end,2)) - (log(p(1)) + log(d(2:end,3) + p(2)))).^2);

% Find the parameters
p = fminunc(err, p);

% An example raw count from the sensor
count=115;

% Compute the capacitance for a new measurement
p(1) * (count + p(2))
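If Octave isn’t to hand, the same fit is easy to reproduce elsewhere. Here’s a rough Python/numpy sketch of the identical log-space fit (not from the original post): for a fixed offset p2 the optimal log(p1) has a closed form, so the two-parameter fit reduces to a 1-D search:

```python
import numpy as np

# (rating, measured pF, raw count) -- the table above
data = np.array([
    [0, 0, 12.99], [10, 10.5, 18.84], [22, 22.6, 25.80],
    [47, 48.3, 40.48], [100, 101.7, 70.90], [220, 221, 134.03],
    [470, 453, 259.21], [1000, 965, 539.16], [2200, 2240, 1227.2],
])
C = data[1:, 1]  # skip the 0 pF row: log(0) is undefined
M = data[1:, 2]

# Model: C = p1 * (M + p2), fitted by least squares on the logs.
# For fixed p2 the best log(p1) is the mean log-ratio, so search over p2 only.
def sse(p2):
    logp1 = np.mean(np.log(C) - np.log(M + p2))
    return np.sum((np.log(C) - logp1 - np.log(M + p2)) ** 2)

p2_grid = np.linspace(-18.0, 50.0, 20001)  # keeps M + p2 > 0 (smallest count is 18.84)
p2 = min(p2_grid, key=sse)
p1 = np.exp(np.mean(np.log(C) - np.log(M + p2)))

count = 115  # a new raw count from the sensor
print(p1 * (count + p2))  # estimated capacitance in pF
```

Reassuringly, the fitted offset comes out close to the negative of the zero-capacitance count in the table.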

Nice and easy. Now, does it work? Well, it seems to work with a variety of capacitors I tried it with. And to get intermediate values, I tried it with this rather delightful device from a long dead radio (range 16pF to 493pF):

[Photo: variable capacitor from a long dead radio]

and it works beautifully!

So, then I tried it on the wire-wound capacitive sensor. Can you guess if it worked?

Well, it did! Funny thing though is that my capacitance meter didn’t work on that. Naturally I assumed my home built device was wrong. But it seems life wanted to troll me. Here’s what my capacitance meter does when all is good:

[Scope trace SCR25]

Nice and easy. Changing the range switch alters the speed of the downwards decay curve. So far so good. But when I attached my sensor, this happened:

[Scope trace SCR26]


Absolutely no idea why. It is a big coil, so it might have something to do with the inductance, or maybe pickup. I expect it has a higher input impedance than my device.

TL;DR a short one today, but the sensor works well and is in excellent agreement with my dedicated capacitance meter.

Linear 3D triangulation

I came across this linear 3D triangulation method in TheiaSFM:

bool TriangulateNView(const std::vector<Matrix3x4d>& poses,
                      const std::vector<Vector2d>& points,
                      Vector4d* triangulated_point) {
  CHECK_EQ(poses.size(), points.size());

  Matrix4d design_matrix = Matrix4d::Zero();
  for (int i = 0; i < points.size(); i++) {
    const Vector3d norm_point = points[i].homogeneous().normalized();
    const Eigen::Matrix<double, 3, 4> cost_term =
        poses[i].matrix() -
        norm_point * norm_point.transpose() * poses[i].matrix();
    design_matrix = design_matrix + cost_term.transpose() * cost_term;
  }
  Eigen::SelfAdjointEigenSolver<Matrix4d> eigen_solver(design_matrix);
  *triangulated_point = eigen_solver.eigenvectors().col(0);
  return eigen_solver.info() == Eigen::Success;
}

I was aware of the DLT (direct linear transform), but it didn't look like any formulation I've seen before. It's actually pretty neat. Let's say you're trying to find an unknown homogeneous point in 3D, \mathbf{X} = [X, Y, Z, 1]. What we have is N poses, P, represented as 3\times 4 matrices and the corresponding 2D coordinates represented as homogeneous points in \mathbb R^3. The 2D points are written as \mathbf{x} = [ x, y, 1].

Since we're triangulating the 3D point, and we're working in homogeneous coordinates (i.e. \alpha \mathbf{x} \equiv \mathbf{x} for any \alpha \neq 0), then for all i we should have:
\alpha_i \mathbf{x}_i \approx P_i \mathbf X
for some scale factor \alpha_i.

Now let's pick apart the code above. Let's call design_matrix D and cost_term C. The loop accumulates:
\displaystyle D = \sum_{i=1}^{N} C_i^\top C_i
and the last two lines find the eigenvector corresponding to the smallest eigenvalue of D (SelfAdjointEigenSolver produces them in increasing order), i.e.
\mathbf{X} \approx \displaystyle \underset{\mathbf{v}, |\mathbf{v}|=1}{\text{argmin}}\ \mathbf{v}^\top D \mathbf{v}

We can rewrite D = \mathbf{C}^\top\mathbf{C} where:
\mathbf{C} = \left[ \begin{matrix} C_1\\ C_2\\ \vdots\\ C_N\\ \end{matrix}\right],
which, substituting in above, gives:
\mathbf{X} \approx \displaystyle \underset{\mathbf{v}, |\mathbf{v}|=1}{\text{argmin}}\ \|\mathbf{C v}\|_2^2,
which is of course the right singular vector corresponding to the smallest singular value of \mathbf{C}. Using the eigendecomposition is much more efficient, since D stays a fixed 4\times 4 matrix (O(1) in size) while \mathbf{C} is 3N\times 4 (O(N)), but probably at the cost of some numerical precision.
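That equivalence is easy to sanity-check numerically. A numpy sketch, with made-up random 3×4 blocks standing in for the C_i (nothing here comes from TheiaSFM):

```python
import numpy as np

rng = np.random.default_rng(0)
blocks = [rng.standard_normal((3, 4)) for _ in range(5)]  # stand-ins for the C_i

# Eigendecomposition route: accumulate the fixed-size 4x4 matrix D
D = sum(c.T @ c for c in blocks)
v_eig = np.linalg.eigh(D)[1][:, 0]  # eigenvector of the smallest eigenvalue

# SVD route: stack into the 3N x 4 matrix C, take the last right singular vector
v_svd = np.linalg.svd(np.vstack(blocks))[2][-1]

# Same direction, up to sign
print(abs(v_eig @ v_svd))  # ~1.0
```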

Either way we’re trying to find the approximate nullspace of \mathbf{C}, which means finding something that’s roughly in the null space of all the C_is. But why?

Inside the loop, we have:
C_i = P_i - \mathbf{\hat{x}}_i\mathbf{\hat{x}}_i^\top P_i,
where \mathbf{\hat{x}}_i = \mathbf{x}_i/|\mathbf{x}_i| is the normalized homogeneous image point, and we’re claiming \mathbf{X} is approximately in the null space. Let’s see what happens when we multiply by it:
(P_i - \mathbf{\hat{x}}_i\mathbf{\hat{x}}_i^\top P_i) \mathbf{X} = P_i \mathbf{X} - \mathbf{\hat{x}}_i\mathbf{\hat{x}}_i^\top P_i \mathbf{X}\\
Now, substituting in the first equation we have all the way back at the top gives:
\approx \alpha_i \mathbf{x}_i - \alpha_i\mathbf{\hat{x}}_i\mathbf{\hat{x}}_i^\top \mathbf{x}_i = \alpha_i \mathbf{x}_i - \alpha_i\mathbf{\hat{x}}_i |\mathbf{x}_i| = \alpha_i \mathbf{x}_i - \alpha_i\mathbf{x}_i = 0,
since \mathbf{\hat{x}}_i^\top\mathbf{x}_i = |\mathbf{x}_i| and \mathbf{\hat{x}}_i|\mathbf{x}_i| = \mathbf{x}_i.
taa daa!

So there you go! If there is no noise, \mathbf{X} is in the right null space of C_i and so the null space of \mathbf C and of course D. If there is noise, then it’s closest to being in the null space of all of C_i measured in a least squares sense.
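The noiseless claim is easy to verify numerically. Here’s a self-contained numpy sketch of the whole method on synthetic data (the point and poses are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([0.5, -0.2, 4.0, 1.0])  # ground-truth homogeneous 3D point

D = np.zeros((4, 4))
for _ in range(4):  # four synthetic views
    P = np.hstack([np.eye(3), rng.standard_normal((3, 1))])  # toy 3x4 pose
    x = P @ X                      # noiseless homogeneous projection
    x_hat = x / np.linalg.norm(x)  # normalized image point
    C_i = P - np.outer(x_hat, x_hat) @ P
    D += C_i.T @ C_i

# The eigenvector of the smallest eigenvalue recovers X up to scale
v = np.linalg.eigh(D)[1][:, 0]
X_rec = v / v[3]  # dehomogenize
print(X_rec)  # ≈ [0.5, -0.2, 4.0, 1.0]
```

With no noise the smallest eigenvalue comes out at essentially zero, and the recovered point matches to machine precision.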

Note though that this minimizes an algebraic error, not a reprojection error, so it will give slightly oddly scaled results, weighting some points more than others. It is, however, a rather elegant solution.