How bad is switch bouncing really?

TL;DR: it’s worse.

ETA: I think every electronic engineer goes through this at one time or another. You have a switch that bounces, so you debounce it. I mean everyone knows that switches bounce. The code doesn’t work 100%, so you fix all the bugs and it still doesn’t work. So then you break out the scope and hells bells! How can it be that bad?! I mean everyone says it’s bad and you knew it was bad but it’s BAAAAAAD.

So, I recently had a switch which needed debouncing. It happens to be a Marquardt (seems to be a good brand) microswitch. In terms of current it’s massively overspecced (16A, 250V), but it turned out to be ideal for the combination of size, button stroke, force and of course price. I guess that sort of switch is used for mains interlocks or something, but I’m using it to switch a GPIO input on a Raspberry Pi.

Here’s how it’s connected up:


I’d like to say I connected it to both power rails to show how the different contacts bounce. Actually my brain disconnected and that’s how I originally connected up the switch in my project. It’s a terrible idea because you get bouncing from both sets of contacts for twice the fun. The potential divider pulls the middle up to half the voltage so you can see which contacts are bouncing and when.

On the plus side, it’s great for illustrating bouncing…

Here’s what the setup looks like:


I use the mini vice (which I’m quite unreasonably proud of) to close the switch really, really slowly. And this is what pushing the switch looks like:


(Note: I can’t figure out how to take a screenie without the annoying “take a screenshot” dialog showing…)

It’s quite amazing. Note the “pre-bounce” where a disconnect happens 225ms before the switching starts. In fairness, I couldn’t reliably reproduce that, but it does happen occasionally. That main transition looks suspicious though:


That’s semi-horrid. The 0V contact opening is pretty clean. It then takes about 9ms for the + contact to start closing. It bounces all over the place, and even after the main bounce has stopped, it still takes 4ms for the voltage to stabilise. I guess connecting only 0V and a pullup would work much better. And here’s the bounces in detail:


Horrible, but par for the course. You can see it’s not settling cleanly on the contact in between the bounces. And here’s a different one for comparison (RTFM n00b, the fine manual says how to take screenies properly):


This is bad in a rich and interesting variety of ways. Like the vast majority of traces, there’s no awful pre-bounce. However, the disconnect is unclean and takes 10ms before it starts to rise properly. After the bouncing nominally finishes, it still takes a whopping 100ms to really settle.


Here are some samples to fill you with horror, as well as a couple of fairly standard ones. Some of them have a longer timebase to show a 200ms settling time.

Some of them have some really weird behaviour as they open. I haven’t figured out why, and the speed at which I actuate the switch doesn’t seem to affect things much. The same can’t be said for releasing the switch: the slower you release it, the gungier it is. See the enormous 100ms timebase:

yuck yuck yuck 😦 Faster actuation (and the actual bouncing) looks much as you’d expect:


though the time between the break and the make is rather shorter.

Either way, the +V contact (the one connected by closing the switch) seems surprisingly much worse than the 0V one. We can of course avoid that contact by connecting the switch as a single-throw one with a pullup:


And if I’m quick, I can manage to get an open and close on the screen with a 10ms timebase:


The push still has some pre-bounce, but it’s nothing like as bad as from the other contact. For the switch being pushed, there is no bounce whatsoever; it’s astonishingly fast:


For the release, there’s a bit of bouncing, as expected.

Conclusion: don’t connect up switches in a silly way like I did!

Automatic dependencies for C++ in 3 very simple lines of make

I like Make. Make is the best build tool. If you don’t know it already, it expresses dependencies as a DAG (directed acyclic graph), so a node lists its dependencies and has a rule about how to generate the file(s) from the dependencies. If a file is older than its dependencies, then it’s stale and needs to be rebuilt.

If the DAG is specified correctly, then if you change one source file, you’ll rebuild the minimum necessary with the maximum possible parallelism…

…I meant GNU Make by the way, of course.

In Make, dependencies are generally specified statically, like this:

target: dependency1 dependency2

The problem with this is that in many cases dependencies are not static. That is, the dependencies are a function of the contents of the files themselves. What? OK, let me illustrate. Suppose you have a dependency:

foo.o: foo.cc

OK, so the object file foo.o depends on foo.cc. So far, so good. But it’s almost certainly incomplete. If foo.cc #includes anything, then foo.o really depends on that header too. In other words, with the incomplete dependencies, if you modify that header and type “make”, foo.o won’t rebuild because make doesn’t know it ought to. This has the annoying consequence that when you’re modifying headers, you keep having to “make clean” and rebuild everything.

Fortunately, make provides a solution for you in typical make fashion: it provides the mechanism to deal with dynamic dependencies and you provide the language-specific tools. You can just do:

include morestuff

Naturally, morestuff is going to be dynamically generated. GCC is nice here: since it knows about the dependencies (it has to in order to do the compile), it will emit them in make format while it does the build, ready to be included next time. So if a source file changes, the .o is rebuilt and new dependencies are generated. Next time we come to build, make checks those fresh new dependencies.

.PHONY: all clean

all: prog

clean:
	rm -f *.o *.d prog .deps

prog: a.o b.o
	$(CXX) -o prog $^

#We can't use the built in rule for compiling c++.
#Let's say a header changed to include a new header.
#That change would cause a rebuild of the .cc file
#but not the .d file. 
# Arguments mean:
# -MMD output #include dependencies, but ignore system headers, while
#      the build continues
# -MF output dependencies to a file, so other crud goes to the screen
# -MP make a blank target for dependencies. That way if a dependency is
#     deleted, make won't fail to build stuff 
%.o %.d: %.cc
	$(CXX) -c $< $(CXXFLAGS) -MMD -MP -MF $*.d

#Search for all .d files and add them
#Note that .d files are built in with .o files. If there's no .o yet,
#then there's no .d. But that's OK; there are no precise dependencies,
#but we know we have to rebuild it anyway because the .o is not there.
#So, it's OK to only pick up .d files that have already been generated.
include $(wildcard *.d)

And the full set of files.

Now try creating an empty “c.h” and make b.h include it. Type make (you’ll see a.o rebuild). Now touch “c.h”. Make will rebuild a.o again as it should. Revert b.h and remove c.h. Make again builds it correctly. Touch c.h again and type make and nothing (correctly) gets rebuilt.

Actually, the mechanism is subtle and interesting. It looks like a simple include directive, but the key is that if morestuff doesn’t exist, make will build it first, include it and then process the rest of the build. You can do cool stuff there, but fortunately it’s not needed here.
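Here’s a tiny, self-contained illustration of that behaviour (gen.mk and VAR are made up, nothing to do with the build above): the first run finds gen.mk missing, builds it from its rule, re-reads the makefile and then builds the default target using the included variable.

```shell
cd "$(mktemp -d)"
# A makefile that includes a file which doesn't exist yet, plus a rule
# telling make how to create it.
printf 'include gen.mk\n\nall:\n\t@echo VAR is $(VAR)\n\ngen.mk:\n\techo VAR=42 > gen.mk\n' > Makefile
make
```

On the first run make builds gen.mk, restarts, and prints “VAR is 42”; on the second run gen.mk already exists, so it just includes it.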

A simple custom iterator example

Someone recently asked me about custom iterators in C++. It turns out most of the examples and tutorials are not ideal, especially if you get the principle but not the mechanics.

Without getting bogged down in boilerplate, here’s a complete standalone example which compiles and runs. It provides an iterator that iterates over a string one chunk at a time, returning those chunks (words and gaps):

#include <string>
#include <cctype>  //for isgraph
#include <iterator>
#include <iostream>
#include <algorithm>
#include <vector>

using namespace std;

//Iterate over a string one chunk at a time.
//Deriving from std::iterator is necessary to make the code work with 
//the standard algorithms (for now). Either that or specialising the
//std::iterator_traits type. We're telling it that we have a forward 
//iterator and that the value returned from dereferencing is a string.
//Note that this derivation is not necessary for simple for-loops or
//using the iterator in any custom context.
class ChunkStringIterator: public iterator<forward_iterator_tag, string>
{
	//Represent the string remaining to be iterated over as a
	//pair of string iterators.
	string::const_iterator b, e;

	public:
		ChunkStringIterator(string::const_iterator b_, string::const_iterator e_)
		:b(b_), e(e_)
		{}

		//Return the chunk starting at the current position
		string operator*()
		{
			auto n = find_next();
			return string(b, n);
		}

		//Advance to the next chunk
		ChunkStringIterator& operator++()
		{
			b = find_next();
			return *this;
		}

		//We need this operator for the classic iteration loop
		bool operator!=(const ChunkStringIterator& i) const
		{
			return b != i.b;
		}

	private:
		//This is the meat: it returns an iterator to one-past-the-end of
		//the current chunk. A chunk is defined as a contiguous region of
		//chars of the same category.
		string::const_iterator find_next()
		{
			auto current = b;
			bool printable = isgraph(*current);
			while(current != e && (bool)isgraph(*current) == printable)
				current++;
			return current;
		}
};


//This class is a trivial wrapper for string that dishes out
//the custom iterators.
class ChunkString
{
	const string& data;

	public:
		ChunkString(const string& s)
		:data(s)
		{}

		ChunkStringIterator begin()
		{
			return ChunkStringIterator(data.begin(), data.end());
		}

		ChunkStringIterator end()
		{
			return ChunkStringIterator(data.end(), data.end());
		}
};

int main()
{
	string foo = "Hello, world. This is a bunch of chunks.";
	//A for-each loop to prove it works.
	for(auto w: ChunkString(foo))
		cout << "Chunk: -->" << w << "<--\n";

	//Use a standard algorithm to prove it works
	vector<string> chunks;
	copy(ChunkString(foo).begin(), ChunkString(foo).end(), back_inserter(chunks));
	cout << "There are " << chunks.size() << " chunks.\n";
}


Note: this is not complete. I haven’t implemented anything that’s not necessary to get the for-each example working. It’s basically missing the other comparison operator (==) and the other dereference operator (->). Probably some other things too.

Enhanced constexpr

In an earlier post I wrote about using constexpr to generate compile time lookup tables. Unfortunately it turns out that that was dependent on a GCC extension. In actual C++14, pretty much nothing in <cmath> is constexpr.

My brother and I have written a proposal, currently P0533R0, to introduce a number of constexpr functions into <cmath>. Making functions constexpr is slightly tricky because some of them may set the global variable errno and/or the floating point exception flags (effectively global variables), and so it needs to be a language solution, not a pure library one. However, the basic arithmetic operators also fall into similar traps, so it’s not that bad.

The initial paper is to constexpr-ify all “simple” functions which don’t have pointer arguments. In this case, simple is defined as “closed over the extended rationals”, which includes the standard arithmetic operators, comparison operators and things like floor() and ceil(). It turns out there are actually rather a lot of these including many I’d never heard of before ploughing through <cmath>.

I think it should be straightforward to implement since GCC and LLVM both already do compile time evaluation of even non-simple functions (e.g. sqrt), the former actually making them constexpr as an extension of C++14, the latter only using them in the optimizer.




Overwrite a file but only if it exists (in BASH)

Imagine you have a line in a script:

cat /some/image.img > /dev/mmcblk0

which dumps a disk image on to an SD card. I have such a line as part of the setup. Actually, a more realistic one is:

pv /some/image.img | sudo bash -c 'cat >/dev/mmcblk0'

which displays a progress bar and doesn’t require having the whole script being run as root. Either way, there’s a problem: if the SD card doesn’t happen to be in when that line runs, it will create /dev/mmcblk0. Then all subsequent writes will go really fast (at the speed of the main disk), and you will get confused and sad when none of the changes are reflected on the SD card. You might even reboot which will magically fix the problem (/dev gets nuked). That happened to me 😦

The weird, out of place dd tool offers a nice fix:

pv /some/image.img | sudo dd conv=nocreat of=/dev/mmcblk0

You can specify a pseudo-conversion, conv=nocreat, which tells it not to create the file if it doesn’t already exist. It also serves the same purpose as the “sudo tee” idiom, but without dumping everything to stdout.

A simple hack but a useful one. You can do similar things like append, but only if it exists, too. The incantation is:

dd conv=nocreat,notrunc oflag=append of=/file/to/append/to

That is, don’t create the file, don’t truncate it on opening and append. If you allow it to truncate, then it will truncate then append, which is entirely equivalent to overwriting but nonetheless you can still specify append without notrunc.
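If you want to check the behaviour without risking a device, here’s a quick sandbox demonstration (file names made up):

```shell
cd "$(mktemp -d)"
echo hello > exists.txt

# Existing file: nocreat is happy, the file is truncated and overwritten.
echo new-content | dd conv=nocreat of=exists.txt 2>/dev/null
cat exists.txt

# Missing file: dd fails instead of silently creating it.
echo oops | dd conv=nocreat of=missing.txt 2>/dev/null \
    || echo "dd refused: missing.txt does not exist"
```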

Warn or exit on failure in a shell script

Make has a handy feature where when a rule fails, it will stop whatever it’s doing. Often though you simply want a linear list of commands to be run in sequence (i.e. a shell script) with the same feature.

You can more or less hack that feature into BASH using the DEBUG trap. The trap executes a hook before every command is run, so you can use it to test the result of the previous command. That of course leaves the last command dangling, so you can put the same hook on the EXIT trap, which runs after the last command finishes.

Here’s the snippet and example which warns (rather than exits) on failure:

function test_failure(){
  #Save the exit code, since anything else will trash it.
  v=$?
  if [ $v != 0 ]
  then
    echo -e Line $LINE command "\e[31m$COM\e[0m" failed with code $v
  fi
  #This trap runs before a command, so we use it to
  #test the previous command run. So, save the details for
  #next time.
  COM="$BASH_COMMAND"
  LINE="$BASH_LINENO"
}
#Set up the traps
trap test_failure EXIT
trap test_failure DEBUG

#Some misc stuff to test.
echo hello
sleep 2 ; bash -c 'exit 3'

echo world

false
echo what > /this/is/not/writable

echo the end

Running it produces:

$ bash errors.bash 
Line 21 command bash -c 'exit 3' failed with code 3
Line 25 command false failed with code 1
errors.bash: line 27: /this/is/not/writable: No such file or directory
Line 27 command echo what > /this/is/not/writable failed with code 1
the end

RPi chroot and cross compiling (Part 1)

RPis are fantastic machines, but compared to using my PC, they’re a fraction of the performance, obviously. So, it’s convenient to be able to develop locally with a much faster compiler with much more RAM, then deploy on the Pi.

The Pi-3 is remarkably fast with its 4 cores, but with only 1G of RAM, I’ve had some not-huge C++ programs with very extensive debugging enabled require more RAM than the machine has to compile. Trying to compile 4 things at once is a recipe for disaster.

So without further ado…

Actually there’s quite a lot of ado. This isn’t so much a guide as a brain dump of stuff I learned as I went along while skipping out the really tedious dead ends. You might want to skim read and skip around a bit.

Running Raspbian in a QEMU emulator.

Couldn’t get it to work, sorry! It’s also the least interesting option.

Running an RPi-like chroot

This uses Linux’s binfmt facility to dispatch non-“native” binaries to various interpreters. Cargo-cult instructions on Ubuntu are:

sudo apt-get install qemu binfmt-support qemu-user-static

Now mount the disk image. The shell transcript should look like:

$ sudo kpartx -av 2016-05-27-raspbian-jessie-lite.img
add map loop0p1 (252:0): 0 129024 linear /dev/loop0 8192
add map loop0p2 (252:1): 0 2572288 linear /dev/loop0 137216
$ sudo mount /dev/mapper/loop0p2 /mnt/

Note that kpartx reads the partition table and sets up a loopback device for each partition. Then you mount the right partition (the second one, here) on /mnt to get at it.

Now, you need one more bit of setup to be able to chroot: you have to put the ARM emulator in the chroot so it can be accessed. But first, run:

update-binfmts --display

and examine the output. In there should be:

qemu-arm (enabled):
     package = qemu-user-static
        type = magic
      offset = 0
       magic = \x7fELF\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00
        mask = \xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff
 interpreter = /usr/bin/qemu-arm-static
    detector =

That pretty much says that when the kernel sees the right header, it should execute the program using the “/usr/bin/qemu-arm-static” binary. So, set that up:

sudo cp /usr/bin/qemu-arm-static /mnt/usr/bin

If you don’t set it up, you’ll get a “no such file or directory” error when you try to chroot, which is caused by the kernel failing to find qemu-arm-static. Note that they have rather sensibly compiled it as fully static, so you don’t need to bring the entire dynamic runtime into the chroot.

So, now execute the chroot (my machine is called sheepfrog, by the way) and see the following transcript:

$ sudo chroot /mnt
root@sheepfrog:/# g++ --version
g++ (Raspbian 4.9.2-10) 4.9.2
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO


like… woah. Now, a benchmark

From bash:

time for ((i=0; i < 10000; i++)); do : ; done

My RPi3 gives 0.4 seconds, my i7 Q820 @ 1.73GHz (yes, it is 2017) running the emulator gives 0.942 seconds, which is a fair bit slower. If you have a half way modern machine then you’ll be neck and neck, probably faster than native. Of course, even my old laptop has much more RAM and vastly faster storage, which makes a huge difference.

Natively my machine takes, er, 2.37 seconds! That’s because I’ve set up bash to change the xterm title, so it’s flogging the x server in that loop. Removing that gives 0.065 seconds. That’s a bit more sensible.

Running the chroot and installing things

Anyway, this isn’t the real reason for doing this. The main point of setting up the chroot is to be able to chroot into it and run apt-get to install RPi packages. Then eventually one can install packages and use them with a native x86 cross compiler. So, let’s get going!

~ #chroot /mnt
root@sheepfrog:/# apt-get update
qemu: uncaught target signal 4 (Illegal instruction) - core dumped
Illegal instruction (core dumped)


Turns out the solution isn’t so bad. The cargo-cult hacking is to comment out the only line in /mnt/etc/ so the file looks like this:


Now, apt-get update works fine and you can install all the -dev packages IN THE WORLD!

The eagle-eyed might wonder HOW apt-get works, since I never copied over the resolv.conf file, so how on earth can it fetch anything? The answer is that Raspbian provides a default resolv.conf that points at Google’s handy nameservers!

Wait, that all sounds unnecessary. It’s unnecessary, right?

Yes, that is all indeed unnecessary. apt and dpkg are flexible enough that they can run as if they were in a chroot without actually being in one. That means you can use the native apt-get to manipulate the contents of /mnt. If you’re not running a Debian-derived distro, then you’ll have to somehow install apt-get. Probably the easiest way is to debootstrap a native installation and chroot into it.

Anyway apparently the magic option is -o Dir=something, so:

~ #sudo apt-get -o Dir=/mnt/   update
Hit jessie InRelease
Hit jessie InRelease
W: Failed to fetch  Unable to find expected entry 'main/binary-amd64/Packages' in Release file (Wrong sources.list entry or malformed file)

W: Failed to fetch  Unable to find expected entry 'main/binary-amd64/Packages' in Release file (Wrong sources.list entry or malformed file)

E: Some index files failed to download. They have been ignored, or old ones used instead.

Can you hear the “bwaaa bwaaaaa” trombone noise? I can hear the trombone noise.

Apparently, despite being in a subtree set up as ARM, it’s looking for packages based on the host architecture, not some config file somewhere. The key is obviously going to be telling it that we really do want ARM.

I’ve not managed to figure it out yet. Adding:

 -o apt::architecture=armhf -o DPkg::Options::='--admindir=/mnt/var/lib/dpkg'

kind of helps. This is supposed to tell dpkg to look in the right place too; apt is a manager which wraps dpkg, so you have to configure both of them. But then I get huge numbers of dependency errors when I try to install anything. I suspect it’s still looking in the wrong places. And sure enough, strace verifies that it’s still reading files from /etc, but nothing terribly important from /var. Strange.

Cross compiling.

That comes in part 2!