BMVC 2019

I went to BMVC this year and had a great time: I saw lots of interesting papers and talked to a lot of interesting people. BMVC was my first conference in 2003, and it has changed a fair bit since then. I remember the hushed, awed tones about how it was getting really international because there were two speakers from America and one all the way from China. Now it really is a big international conference that just happens to be located in the UK each year. I think the best bits of its fundamental character haven’t changed.

On the minus side, I made a bunch of notes and then lost them so I’m having to go on memory and have almost certainly forgotten some that stood out. So here’s a somewhat random selection of papers that caught my eye as interesting for various reasons.

But first, here’s a video of Cardiff Science Library vomiting rainbows:


A random selection of interesting papers


Dissecting Neural Nets
Prof. Antonio Torralba (MIT)

The keynote was very interesting: Prof. Torralba is a fantastic presenter and the results were fascinating. Unfortunately I can’t find the video to link to.

Geometric vision

Whenever there’s a paper not about deep learning, there’s always a cluster of people whose student days are long past hovering around commenting about how it’s nice to see something that isn’t deep learning. I also like to refer to this type of vision as “geometric”, since it involves geometry, rather than “traditional” or (even worse) “old-fashioned”.

26. A Simple Direct Solution to the Perspective-Three-Point Problem
Gaku Nakano (NEC Corporation)

The paper is a new solution to the P3P problem. Given the age of the field and the number of existing solutions, it’s surprising that there are actually new ones. It’s a surprisingly tricky problem, as anyone who’s tried to derive a solution will know, and it’s interesting to see there are still new insights to be had.

Adversarial attacks

If you hand a vision system to a computer vision researcher, the first thing they will do is try and break it. These days that’s even publishable!

Non-deep image features are still widely used for solving geometric problems, especially if efficiency is key. While it’s not surprising in hindsight, it had never occurred to me that they could be attacked just like neural nets can be attacked.

27. Adversarial Examples for Handcrafted Features
Muhammad Latif Anjum (NUST); Zohaib Ali (NUST); Wajahat Hussain (NUST – SEECS)

Much like the attacks on DNNs, the differences aren’t visually apparent. Speaking of adversarial attacks, I found this paper and poster enjoyable and easy to follow, with good results:

210. Robust Synthesis of Adversarial Visual Examples Using a Deep Image Prior
Thomas Gittings (University of Surrey); Steve Schneider (University of Surrey); John Collomosse (University of Surrey)

I didn’t know that was a thing

I like papers that have “towards” in the title. It’s an admission in the title that the results aren’t spectacular and they aren’t acing the current benchmarks, but they’re tackling a hard problem in a new way. That’s a good goal for research: not engineering polished solutions, but tackling new problems or bringing new insight to bear.

In this case, they are dealing with point clouds of the sort that might be produced by structure from motion, but where the original images aren’t available. It turns out it’s possible to do semantic segmentation of those clouds.

252. Towards Weakly Supervised Semantic Segmentation in 3D Graph-Structured Point Clouds of Wild Scenes
Haiyan Wang (City University of New York); Xuejian Rong (City University of New York); Liang Yang (City University of New York); YingLi Tian (City University of New York)

Realtime semantic segmentation

There’s a lot of interest in realtime techniques, which I like. A lot of it comes from the self driving car industry, and all of these are tested on Cityscapes. I’m more interested in it from the perspective of running on a phone, but there’s a lot of common ground, so these are well worth a closer look.

253. Fast-SCNN: Fast Semantic Segmentation Network
Rudra Poudel (Toshiba Research Europe, Ltd.); Stephan Liwicki (Toshiba Research Europe, Ltd.); Roberto Cipolla (University of Cambridge)

259. DABNet: Depth-wise Asymmetric Bottleneck for Real-time Semantic Segmentation
Gen Li (Sungkyunkwan University); Joongkyu Kim (Sungkyunkwan University)

260. Feature Pyramid Encoding Network for Real-time Semantic Segmentation
Mengyu Liu (University of Manchester); Hujun Yin (University of Manchester)

Benchmarks are useful, but I feel that over-reliance on them can essentially lead to reverse engineering the datasets. I’ve certainly noticed in my own work that networks that give stellar results on ImageNet don’t do nearly so well on images that aren’t of the sort one posts to the internet (i.e. worse, less well composed, more cluttered, with worse lighting and focus, etc.).

I think all good benchmarks are doomed to eventually become more of a hindrance than a help because of all the focus that they draw. This isn’t to disparage the benchmarks, at all, I think it’s simply part of the cycle of research. I wonder when we’ll reach that point with Cityscapes.

Domain transformation

The key idea here is that (for object detection from a car) a data volume aligned with the ground plane and the front of the car is more semantically meaningful than a 2D image. So they transform ResNet features into that volume using a simple technique and do the deep learning there. Sound idea with good results.

285. Orthographic Feature Transform for Monocular 3D Object Detection
Thomas Roddick (University of Cambridge); Alex Kendall (University of Cambridge); Roberto Cipolla (University of Cambridge)
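The core resampling step is easy to sketch. This is only a toy, single-channel version of the general idea and not the authors’ method; the Camera struct, grid layout and nearest-neighbour sampling below are all my own simplifications for illustration.

```cpp
#include <array>
#include <cmath>
#include <vector>

// Hypothetical pinhole intrinsics; the values used are purely illustrative.
struct Camera { float fx, fy, cx, cy; };

// Project a camera-space point (x right, y down, z forward) to pixel coordinates.
std::array<float, 2> project(const Camera& c, float x, float y, float z) {
    return {c.fx * x / z + c.cx, c.fy * y / z + c.cy};
}

// Resample a single-channel H x W feature map into an nx x nz grid aligned
// with the ground plane, by projecting each cell centre into the image.
// Nearest-neighbour sampling; cells projecting outside the image stay zero.
std::vector<float> to_ground_grid(const std::vector<float>& feat, int H, int W,
                                  const Camera& cam, float ground_y,
                                  float x0, float z0, float cell, int nx, int nz) {
    std::vector<float> grid(nx * nz, 0.0f);
    for (int ix = 0; ix < nx; ++ix)
        for (int iz = 0; iz < nz; ++iz) {
            float x = x0 + (ix + 0.5f) * cell;  // lateral offset from the camera
            float z = z0 + (iz + 0.5f) * cell;  // distance ahead of the camera
            auto uv = project(cam, x, ground_y, z);
            int u = (int)std::lround(uv[0]);
            int v = (int)std::lround(uv[1]);
            if (u >= 0 && u < W && v >= 0 && v < H)
                grid[ix * nz + iz] = feat[v * W + u];
        }
    return grid;
}
```

Once the features live in the ground-aligned grid, ordinary convolutions over it operate in metric space rather than image space, which is the point of the transform.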

Binary networks

Binarised networks have an appealing minimalism, especially from a hardware and wire-format compression point of view. Unfortunately they’re not differentiable. This paper makes judicious use of carefully inserted weighting factors and derivatives of effectively a blurred binary activation function to introduce differentiability.

19. Accurate and Compact Convolutional Neural Networks with Trained Binarization
Zhe Xu (City University of Hong Kong); Ray Cheung (City University of Hong Kong)
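The trick is in the same spirit as the straight-through estimator. Here’s a minimal sketch of that general idea, not the paper’s exact scheme: the choice of tanh as the “blurred” binarisation and the single scale factor alpha are my own assumptions for illustration.

```cpp
#include <cmath>

// Forward pass: hard binarisation with a trained scale factor alpha.
float binarise(float x, float alpha) { return x >= 0.0f ? alpha : -alpha; }

// Backward pass: the true derivative is zero almost everywhere, so use the
// derivative of a "blurred" binarisation instead -- here alpha * tanh(x),
// whose derivative is alpha * (1 - tanh(x)^2).
float binarise_grad(float x, float alpha) {
    float t = std::tanh(x);
    return alpha * (1.0f - t * t);
}
```

The mismatch between the hard forward function and the smooth backward one is exactly what makes training possible: gradients flow as if the activation were soft, while inference only ever sees ±alpha.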

A different approach to deep features

I couldn’t decide how much I like this paper because I kept vacillating about the core idea. Then I realised that in itself makes it a good paper because it’s made me think a lot about the problem. It was very well presented and the core idea is simple and intriguing.

32. Matching Features without Descriptors: Implicitly Matched Interest Points
Titus Cieslewski (University of Zurich & ETH Zurich); Michael Bloesch (Deepmind); Davide Scaramuzza (University of Zurich & ETH Zurich)

I like that the features are defined purely by matchability and localisation. I also like that they don’t have to do things like have precisely one (or at most one) feature per 8×8 (etc.) window of the image; they have a simple structure without auxiliary losses, and an overall simple training procedure.

This is also one of the things I like about BMVC: the paper doesn’t present the results as the new leading feature detector; in fact it’s not even near the top of the pack of the ones they compare to. However, they’re tackling the problem in a new and interesting way, and I think there is a great deal of value in such ideas being shared and discussed even if they’re not (yet?) as good as the competitors.


P0533 will ride again

Unfortunately, P0533 (see here for previous posts) didn’t make it into C++20 either (it was originally targeted at C++17). It seems that there were just too many good papers and they couldn’t work through them all in the available time.

There’s lots of good stuff and clearly a strong and growing interest in constexpr’ing everything that can be constexpr’d, so I hold out hope for both it and P1383 in C++2.. uh… 2b? Or not 2b?

Follow their progress in the trackers here:

Light chasing robot part 2 (of 2)

The first version worked, but oscillated a lot in its motion. If you haven’t read that post yet, I recommend reading it first, otherwise this one won’t make as much sense. And if you have, it might be worth a re-read, since it took me nearly two years to post the followup.

The reason for the oscillation is that the feedback gain is essentially very high. If it’s very slightly off to one side, then the opposite motor comes on at full power, because the direction sensor divider goes into a simple comparator. Also, it turns out (I found this out about a year later; yes, I am a bit lazy about writing blog posts) that the response of the LDRs is really slow, measurable over the timescale of a second, so the robot will swing round a significant amount before the resistive divider starts to respond. Either way, making the response have a much lower gain will help.

I can reduce the gain by making the motor come on at a reduced speed in proportion to the ratio between the two LDRs.
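For illustration, here’s roughly what that proportional behaviour would look like in software if the robot used a microcontroller instead of analogue parts. This is entirely hypothetical (the real circuit does it with a 555 and an opamp); the names and the 0–255 duty range are just assumptions.

```cpp
#include <algorithm>

// Motor duty cycles (0-255) for a differential-drive robot steering towards
// the brighter LDR: instead of bang-bang full power, the correction is
// proportional to the imbalance between the two sensors.
struct Duty { int left, right; };

Duty steer(int ldr_left, int ldr_right) {
    int total = ldr_left + ldr_right;
    if (total == 0) return {0, 0};
    // error in [-255, 255]; positive means the light is off to the left
    int error = 255 * (ldr_left - ldr_right) / total;
    // slow the near-side motor, speed up the far-side one
    return {std::clamp(255 - error, 0, 255),
            std::clamp(255 + error, 0, 255)};
}
```

With equal readings both motors run flat out; as the light drifts to one side, the correction ramps up smoothly rather than slamming the opposite motor on at full power.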

The circuit is a little more complex than the previous one. It also falls into the category of “should have used a microcontroller”, since then the upgrade would just have been software, and a lot more flexible. Essentially I have used a CMOS 555 in equal duty cycle mode and I’m using the capacitor voltage to get a sawtooth wave. That’s thresholded by the comparator (opamp) to make a PWM signal. I could also have used the other amplifier in the dual opamp chip to do the same job; that would have been neater in hindsight.


Simple PWM circuit


The result is really pretty good! See:


Er… take 2!

That works well, and is a good validation of the directional light sensors (the original point of this project).

Self feeding flat bits

In knocking together a case for something holding a Raspberry Pi, I needed to cut a 24mm hole for some of these:


USB bulkhead

My usual go-to stockists didn’t have a 24mm flat bit (or Forstner), so I went to ebay and had a quick dig around. I found a Bosch “Self Cut” spade bit for cheap (old stock rather than used, I think). It looks like this:


Bosch self-cut bit

Bosch is one of those respectable brands and you won’t go wrong with Bosch tools if you pick the right one for the job.

Speaking of that…

Turns out self feeding bits are wildly unsuitable for the kind of things I do most of the time. They are flat bits, but the tip is a screw, so it feeds itself into the wood. This gives a very consistent depth of cut and chip load. It also means you don’t need to apply any pressure with the drill: it applies an immense amount of drilling pressure using the drill’s torque instead.

They are amazingly, astonishingly aggressive and will happily plough through thick birch ply in seconds (if your drill is up to the task; the level of torque required is vast), and completely split a piece of pine. Note the rather rough cuts with the large amount of tear out:

Holes drilled with self feeding bit

It took much less time to go through the thick birch ply than it took me to drill the holes with a normal bit in a thinner piece of pine.

Great tool, utterly the wrong one for the task.

14 Years

I’ve been working on model based 3D tracking on and off for quite a while now.

Year 1 (2005)

This was my main contribution to the field of 3D tracking. To my knowledge, it was the joint first (there was another paper from my lab mate using a different technique) real time tracking system that processed the entire image frame. Both techniques were much more robust than the ones that went before. Mine also debuted an early version of the FAST corner detector (I didn’t put that page there).

You can see the tracking works because the model (rendered as purple lines) stays stuck to the image. The tracker operated in real time (well, field rate), which was 50Hz fields of 756×288 pixels of analogue video from some sort of Pulnix camera, captured on a BT878 card of some sort on a dual PIII at 850MHz (running Redhat of some description). It wasn’t mobile (I had two 21″ CRT monitors), so I wasn’t watching the screen as I was capturing video; I found a long spool of thin 75 ohm co-ax, which is why it had any kind of mobility. It, somewhat unexpectedly, tracked almost until I put the camera down on the table at the end. It was a bit of an anticlimactic finish, but I didn’t expect it to work quite so well.

Year 14 (2019)

This is the project I’ve been working on recently (landmarkers). It’s nice to see technology move from a proof-of-concept academic curiosity to a robust production system usable in the wild by people who aren’t computer vision researchers. Also, I didn’t do the graphics in this one, which is why it looks rather cooler than a bunch of purple lines.




Building an automatic plant waterer (4/?): Calibrating the sensor

A short day in the attic today.

  • Part 1: resistive sensing
  • Part 2: finding resistive sensing is bad and capacitive sensing is hard
  • Part 3: another crack at a capacitive sensor
  • Part 4: calibrating the sensor

Day VII (weekend 6)

First, to check everything’s OK, I’m going to calibrate the sensor. I have a box of cheap ceramic capacitors in the E3 series, going from 10pF to 2200pF, and I’m going to measure them with my old Academy PG015 capacitance meter, since it’s likely to be more accurate than the capacitor rating.

Here are the measurements:

Rating (pF)  Measured (pF)  Count
0            0              12.99
10           10.5           18.84
22           22.6           25.80
47           48.3           40.48
100          101.7          70.90
220          221            134.03
470          453            259.21
1000         965            539.16
2200         2240           1227.2

I’m not 100% sure how to fit this. The obvious choice is a least squares straight line fit to find the slope and offset. However, the variance increases with the measurement and I didn’t record that. Also, I don’t know what the error on the capacitance meter is like.

So, I think the best choice is a fit in log space. This form of fit works well with errors on both measurements, and it deals, to some extent, with higher measurements having higher variance. The equation to map measurements (M) to capacitances (C) is:
C = p_1 ( M + p_2)

So we just take the log of that and do least squares on the result. The code is really simple in Octave:

% Data: rating, measured capacitance (pF), count
d = [
0    0      12.99
10   10.5   18.84
22   22.6   25.80
47   48.3   40.48
100  101.7  70.90
220  221    134.03
470  453    259.21
1000 965    539.16
2200 2240   1227.2
];

% Initial parameters: scale and shift
p = [1 1];

% Least squares in log space (skipping the zero-capacitance row)
err = @(p) sum((log(d(2:end,2)) - (log(p(1)) + log(d(2:end,3) + p(2)))).^2);

% Find the parameters
p = fminunc(err, p);

% Compute the capacitance for a new measurement
count = 100; % example measurement
p(1) * (count + p(2))

Nice and easy. Now, does it work? Well, it seems to work with a variety of capacitors I tried it with. And to get intermediate values, I tried it with this rather delightful device from a long dead radio (range 16pF to 493pF):

and it works beautifully!

So, then I tried it on the wire wound capacitive sensor. Can you guess if it worked?

Well, it did! The funny thing, though, is that my capacitance meter didn’t work on it. Naturally I assumed my home built device was wrong, but it seems life wanted to troll me. Here’s what my capacitance meter does when all is good:


Nice and easy. Changing the range switch alters the speed of the downwards decay curve. So far so good. But when I attached my sensor, this happened:



Absolutely no idea why. It is a big coil, so it might have something to do with the inductance, or maybe pickup. I expect the meter has a higher input impedance than my device.

TL;DR a short one today, but the sensor works well and is in excellent agreement with my dedicated capacitance meter.

Building an automatic plant waterer (3/?): capacitive sensor try 2

Finally some progress!

  • Part 1: resistive sensing
  • Part 2: finding resistive sensing is bad and capacitive sensing is hard
  • Part 3: another crack at a capacitive sensor
  • Part 4: calibrating the sensor

Day V (weekend 5, has it really been going that long?)

OK, so I’m not really happy about the enameled wire design. It feels like the insulation is a bit fragile and I don’t really feel I know what’s going on well enough to rely on it. So, I’ll do something much better: smear some 5 minute epoxy over some stripboard and hit it with a heat gun…





Yes, this is not at all dubious. Well, turns out it is. Who knew? Nonetheless it works decently well, though the insulation only goes a few cm up and I think it’s hovering at around 1GΩ. The capacitances are:

  • Out: 13pF
  • Dryish soil: 20pF
  • Quite wet soil: 30pF
  • Very wet soil: 44pF

Substantially less sensitive than the previous one, but it proves the principle. If I can actually sense that level of capacitance using the Arduino, then I can get a nice double sided one made with the good quality thin and robust coating you get on PCBs.

On to the Arduino!

So the Arduino environment doesn’t natively support the comparator. Fortunately it’s not hard, just fiddly. As in spending 2 hours on a really silly mistake…

It’s fairly straightforward given the datasheet, specifically section 27 AC (analog comparator). The easiest way to get started is:

  • Not using the multiplexer, which means the negative input is AIN1
  • Using the bandgap (1.1V reference) for the positive input
  • Polling by reading ACSR.ACO (analog comparator status register.analog comparator output)
  • No low power stuff (it’s a high power application anyway)

It’s surprisingly easy. The ATmega328P has nice defaults for prototyping (everything starts on), there are not too many registers to swizzle up the pins, and it’s happy to have two functions (GPIO and comparator) on one pin at the same time. The bit that I got hung up on for hours is that AIN1 is the ATmega328P’s AIN1 pin, not the Arduino’s AIN1 pin. AIN1/PD7 (in Atmel-speak) is actually digital pin 7 in Arduino-speak. N00bish mistake but really easy to make.

The basic code to control an LED looks like this:

const int led = 13;

void setup() {
  pinMode(led, OUTPUT);

  // Set ACME bit to 0 to disable the multiplexer
  // This also sets some ADC related flags
  ADCSRB = 0;

  // Set the positive input to the bandgap reference.
  // This also sets disable to off, interrupts to off
  // and a bunch of other stuff to off.
  ACSR = bit(ACBG);
}

void loop() {
  // ACO is high while the bandgap (1.1V) is above the voltage on AIN1
  bool result = ACSR & bit(ACO);
  digitalWrite(led, result);
}

It works. Yay. Only slightly dampened by the wild goose chase over pin numbers.


A wild goose, for illustrative purposes. (CC BY-SA)

Day VI (weekend 6)

Well, this is interesting. The epoxy based probe (above) is now reading a steady 10MΩ even in dry soil. Looks like that isn’t a long term solution. The best wire based one is now faring a lot better. I’m resigned to either having a custom board made with proper soldermask or using conformal coating.

So, back to the circuit. Now, because for some reason I’m intent on absolutely minimizing cost, an important part is minimizing the pin usage. So the circuit is simply this:


All the clever bits of the circuit are provided by the microcontroller.

The capacitor charges from the positive rail through a 1M resistor.  Internally, I’ve got the comparator connected to 1.1V. The equation for the voltage is:
V = V_0(1 - e^{-\frac{t}{RC}}).
Rearranging gives:
t = -\ln (1-\frac{V}{V_0})RC
Substituting in the test capacitor (47pF), the 1M resistor, the 5V supply and the 1.1V reference gives a rather marginal 12μs. For now, to make life easier, I’m going to use this circuit:


Making a 1.65V rail with resistors means that the 1.1V threshold is on a much flatter part of the charging curve, increasing the time a lot.
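As a sanity check on the ~12μs figure, and to see how dividing down the supply stretches it, the rearranged equation can be evaluated directly. A quick sketch, using the component values from the text:

```cpp
#include <cmath>

// Time for an RC charge from 0V towards V0 to reach the threshold Vth:
// t = -ln(1 - Vth/V0) * R * C
double charge_time(double V0, double Vth, double R, double C) {
    return -std::log(1.0 - Vth / V0) * R * C;
}
```

With the 5V supply, `charge_time(5.0, 1.1, 1e6, 47e-12)` comes out at about 11.7μs; dropping the supply to 1.65V stretches it to about 51.6μs, because the 1.1V threshold then sits much further up the charging curve.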

Making a potential divider off the 3.3V rail gives 1.65V for the supply, and a much more generous 50μs. Passives are cheap, and I can greatly extend the charging time pretty easily by dividing down the supply. But we need some code to drive it. The code implements a basic relaxation oscillator: let the capacitor charge; when the voltage exceeds the threshold, short out the capacitor to restart the cycle; then let the capacitor charge again…

void setup() {
  // Set ACME bit to 0 to disable the multiplexer
  // This also sets some ADC related flags
  ADCSRB = 0;

  // Set the positive input to the bandgap reference.
  // This also sets disable to off, interrupts to off
  // and a bunch of other stuff to off.
  ACSR = bit(ACBG);

  // Set PD7 to either hi-Z or low (depending on DDR)
  PORTD &= ~bit(7);
}

void loop() {
    DDRD &= ~bit(7); // Set pin 7 to hi-Z, letting the capacitor charge

    // Loop until the AC outputs 0 (i.e. when the capacitor
    // exceeds 1.1V)
    while(ACSR & bit(ACO)){}

    DDRD |= bit(7); // Pin 7 to low, emptying the cap
}


And here’s what the voltage looks like in operation:



In order to get nice graphs I either had to touch a grounded thing or switch off my fluorescent desk lamps since they seem to spew noise all over the place.

It’s not really very useful like this since all it’s doing is displaying a nice graph. The scope also disturbs the signal since it’s got a non-trivial capacitance and resistance. To do some further analysis, I wrote the following code:

void setup() {
  Serial.begin(9600); // baud rate not given in the original post
  ADCSRB = 0;
  ACSR = bit(ACBG);
  PORTD &= ~bit(7);
}

float o=0, o2=0; // IIR filter state for count and count squared
int i=0;

void loop() {
    // Pulse PD7 low to empty the capacitor, then set it back
    // to hi-Z so it can charge again
    PORTD &= ~bit(7);
    DDRD |= bit(7);
    DDRD &= ~bit(7);

    // Count during the charging part of the cycle
    uint16_t count=0;
    while(ACSR & bit(ACO))
        count++;

    // IIR filters on the count and the count squared
    o  = 0.999*o  + 0.001 * count;
    o2 = 0.999*o2 + 0.001 * (float)count * count;

    // Occasionally print the running mean and standard deviation
    if(i++%256 == 0){
      Serial.print(o);
      Serial.print(" ");
      Serial.println(sqrt(o2 - o*o));
    }
}
This code counts during the charging part of the cycle, then does a moving average on both the count and the count squared using a simple IIR filter, and prints the running mean and standard deviation of the counts to the serial port. I happen to be a big fan of IIR filters: I think they’re fun, interesting and efficient. Even the simplest one is much better than a naïve moving average. The trick of filtering both the value and the value squared is one I’ve used many times for getting a running standard deviation in addition to the mean.
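For reference, the same trick in a self-contained form (plain C++ rather than Arduino code, with the smoothing factor as a parameter):

```cpp
#include <algorithm>
#include <cmath>

// Running mean and standard deviation from two first-order IIR filters:
// filter both x and x^2, then sd = sqrt(E[x^2] - E[x]^2).
struct RunningStats {
    double a;               // smoothing factor, e.g. 0.001
    double m = 0, m2 = 0;   // filtered value and filtered square
    void add(double x) {
        m  = (1 - a) * m  + a * x;
        m2 = (1 - a) * m2 + a * x * x;
    }
    double mean() const { return m; }
    double sd() const { return std::sqrt(std::max(0.0, m2 - m * m)); }
};
```

The `max` with zero guards against tiny negative values of E[x²] − E[x]² caused by rounding, which would otherwise make `sqrt` return NaN.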

By default I get counts of about:

  • No touch: 88.2 (σ=4.1)
  • Light touch: 110 (σ=5.5)
  • No touch with scope: 139

I can even spot proximity, so it’s really pretty sensitive, and you can see how much the scope loads it down by. It also turns out I was really pessimistic earlier. Reverting back to the simpler circuit, the numbers I get are:

  • No touch: 19.9 (σ=1.3)
  • Light touch: 26 (σ=1.7)

I can still spot proximity, but only with the aid of the filter: it’s about a count of 0.2 or less. So for fun, I modified the code to turn on the LED when the count exceeds 20.2, and you can see just how sensitive it is:


I think now that having faster charging helps: while there’s more quantization noise in the signal, the comparator changes state on a steeper part of the curve which means that electrical noise has less effect on the time.
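To put a rough number on that: the timing jitter from a given amount of voltage noise is the noise divided by the slope of the charging curve at the threshold, and for an RC charge that slope is (V0 - Vth)/(R*C). A quick sketch using the values from earlier (the 10mV noise figure is just a guess for illustration):

```cpp
#include <cmath>

// Timing jitter caused by voltage noise dV at the comparator threshold:
// the RC charging slope at the threshold is (V0 - Vth)/(R*C), so
// dt = dV * R * C / (V0 - Vth).
double jitter(double V0, double Vth, double R, double C, double dV) {
    return dV * R * C / (V0 - Vth);
}
```

With 10mV of noise, the 5V supply gives about 0.12μs of jitter, while the 1.65V supply gives about 0.85μs: roughly 7× worse, which is consistent with the steeper-curve argument above.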

Today’s conclusions

  1. The capacitive sensor is really good, cheap and easy to make. I’m going to have to use that for other things too
  2. It’s definitely the way forward for this project
  3. Something worked easily!