Adventures in Docker: dockerizing an old build process

So I don’t really hold with this new-fangled “container” stuff.

Well, OK, that’s not really true. I don’t really hold with its massive overuse. A lot of the time it seems to be used because people don’t know how to distribute executables on Linux. Actually, the way you make distributable executables on Linux is the same as on other systems.

However, Linux also has an extra-easy way of compiling locally which isn’t great for distribution, leading to “well, it works on my machine” and to people claiming that you can’t distribute binaries at all. So people tend to use a container to ship their entire machine. At least that’s better than shipping an entire VM, which was briefly in vogue.

A frequently better way is to statically link what you can, dynamically link what you can’t, and ship the executable along with any dynamic dependencies. Not only is this lightweight, it also works for people who don’t have a large container system installed (most people), and in places where a container simply won’t do (e.g. for a plugin).
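For the record, the recipe is roughly this (a sketch with made-up file and library names, not the exact commands I use for 3B):

# Statically link libgcc and the C++ runtime; leave glibc and friends dynamic
g++ -o myprog main.o -static-libgcc -static-libstdc++ -ljpeg -lpng

# List what's still dynamically linked, and ship those .so files alongside
ldd myprog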

And that’s precisely what I do for the 3B system.

THE END

OK, that was a short post. What about building, though? Building is hard, and a different matter entirely. To build reliably, and to build a version you know works, you need your source code at a specific version, your dependencies at a specific version, your compiler at a specific version, and ideally a “clean” system with nothing that might disturb the build in some unexpected way.

And you want this on several platforms (e.g. 32 and 64 bit Linux and Windows).

In an ideal world you wouldn’t need all that, but libraries have compatibility breaks, and even compilers do. Even if you try hard to write compliant code, bear in mind I started writing this code in 2009, and there have been some formal breaks since then (auto_ptr leaving us), as well as bits I didn’t get 100% right which came back to bite me many years later.

Now, I could in principle keep the code up to date with new versions of everything and so on. Maybe I should, but the project is essentially finished and in maintenance mode, and that takes time and testing.

Oh, and one of the main use cases is on clusters, which often run very stable versions of Red Hat or some equivalent and so tend to be years behind, so my maintained version would need to be buildable on ancient Red Hat, which I don’t have to hand. Either way, Linux tends to be backwards compatible, not forwards, so your best bet is to build on an old system.

So I solved this problem years ago with this hideous script. Since you haven’t read it, what it does is create Ubuntu images (in various configurations) using debootstrap, set them up with all the packages, compile the code in all the different ways I want, and assemble a release.
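The core of it is nothing more exotic than debootstrap plus chroot, along these lines (a stripped-down sketch with invented paths, not the actual script):

# Make a minimal Ubuntu 10.04 (lucid) root filesystem in a directory
debootstrap --arch=amd64 lucid ./lucid-64 http://old-releases.ubuntu.com/ubuntu/

# Then work inside it with chroot, e.g. to install the build tools
chroot ./lucid-64 apt-get update
chroot ./lucid-64 apt-get install -y build-essential g++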

The script took ages to write, but it really took the pain out of making releases, since a release needs lots of configurations: 32 and 64 bit Windows and Linux static executables, plus JNI plugins for all of those (compiled with MinGW). It even has a caching mechanism: it builds a base system, then constructs a system with all the dependencies from that, then constructs a clean system for building the code from those.
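The caching is nothing clever, just a copy of the chroot at each stage, so only the later stages ever need redoing. Schematically (again a sketch, with invented directory names):

# Stage 1: bare bootstrapped system (slow, almost never redone)
debootstrap lucid ./base http://old-releases.ubuntu.com/ubuntu/

# Stage 2: stage 1 plus compilers and dependencies (redone when the deps change)
cp -a ./base ./with-deps
chroot ./with-deps apt-get install -y build-essential libjpeg-dev libpng-dev libtiff-dev

# Stage 3: a clean, throwaway copy of stage 2 for each actual build
cp -a ./with-deps ./build-64-static
chroot ./build-64-static sh -c 'cd /tmp/code && ./configure && make'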

It’s quite neat: if all you need to do is rebuild, it’s pretty quick, because it only needs to copy the image and build the code. And it still (mostly) works. I ran it for the first time in ages, and apart from some of the URLs going stale (the Ubuntu packages have moved for the now historic 10.04, as have libpng and libtiff), it worked as well today as it did 10 years ago.

The downside is that it has to run as root, because it needs to run debootstrap and chroot, which at the time required root. That makes it hard to run on restricted systems (clusters), and it builds the entire thing every time, which makes it hard for people to modify the code. I could update it to use things like fakechroot, but this sort of thing is precisely what Docker does well.
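(For what it’s worth, the rootless route would look something like this, if I’ve remembered the fakechroot incantation correctly:

fakechroot fakeroot debootstrap --variant=fakechroot lucid ./lucid-64 http://old-releases.ubuntu.com/ubuntu/
fakechroot fakeroot chroot ./lucid-64 apt-get install -y build-essential

but I haven’t tested it, and Docker makes the whole question moot.)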

Docker basically makes shipping and managing whole OS images easier, has a built-in and easy-to-use caching mechanism, and so on. The Dockerfile, if you care, looks like this:

FROM ubuntu@sha256:51523b5adbc67853e73d7e5faff234614942f9ff3872f329d2bb59478baf53db
LABEL description="Builder for 3B on an ancient system"

# Since 10.04 (lucid) is long out of support, the packages have moved
RUN echo 'deb http://old-releases.ubuntu.com/ubuntu/ lucid main restricted universe' > /etc/apt/sources.list

#Install all the packages needed
RUN apt-get update
RUN apt-get install -y --force-yes openjdk-6-jre-headless && \
	apt-get install -y --force-yes openjdk-6-jdk wget zip vim && \
	apt-get install -y --force-yes libjpeg-dev libpng-dev libtiff-dev && \
	apt-get install -y --force-yes build-essential g++ 

RUN mkdir -p /tmp/deps /usr/local/lib /usr/local/include

WORKDIR /tmp/deps

#Build lapack
#Note Docker automatically untars with ADD
ADD clapack.tgz   /tmp/deps
ADD clapack-make-linux.patch /tmp/deps
ADD clapack_mingw.patch /tmp/deps
WORKDIR /tmp/deps/CLAPACK-3.2.1
RUN cp make.inc.example make.inc && patch make.inc < ../clapack-make-linux.patch 
RUN patch -p1 < ../clapack_mingw.patch
RUN make -j8 blaslib && make -j8 f2clib && cd INSTALL && make ilaver.o slamch.o dlamch.o lsame.o && echo > second.c && cd .. && make -j8 lapacklib
RUN cp blas_LINUX.a /usr/local/lib/libblas.a && cp lapack_LINUX.a /usr/local/lib/liblapack.a && cp F2CLIBS/libf2c.a /usr/local/lib/libf2c.a

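#Build and install TooN (linear algebra library)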
ADD TooN-2.0.tar.gz   /tmp/deps
WORKDIR /tmp/deps/TooN-2.0
RUN ./configure && make install

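#Build and install GVars3 (configuration library)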
ADD gvars-3.0.tar.gz   /tmp/deps
WORKDIR /tmp/deps/gvars-3.0
RUN ./configure --without-head --without-lang && make -j8 && make install 

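#Build and install libcvd (image handling library)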
ADD libcvd-20121025.tar.gz   /tmp/deps
WORKDIR /tmp/deps/libcvd-20121025
RUN ./configure --disable-fast7 --disable-fast8 --disable-fast9 --disable-fast10 --disable-fast11 --disable-fast12 && make -j8 && make install

RUN mkdir -p /home/build
WORKDIR /home/build

It’s not very interesting. It takes an Ubuntu 10.04 base image, updates it to point at the historic archive, and then patches, builds and installs the dependencies. It’s more or less what the shell script did before, minus downloading the dependencies: it’s nearly 2021, not 2011, and I no longer care about 16M of binary blobs checked into an essentially unchanging git repository.

The result is a docker environment that’s all set up with everything needed to build the project. Building the docker environment is easy:

docker build -t edrosten/threeb-static-build-env:latest .

But the really neat bit is executing it. With all that guff out of the way, from a user’s point of view building is a slight modification of the usual ./configure && make. Basically, you share the current directory with the container as a mount (that’s what -v does), and run the build inside it:

docker run -v $PWD:/home/build edrosten/threeb-static-build-env ./configure
docker run -v $PWD:/home/build edrosten/threeb-static-build-env make -j 8
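
One wrinkle, not shown above: the build runs as root inside the container, so the artifacts in the mounted directory come out owned by root. Passing your own UID and GID through fixes that, something along these lines:

docker run --user $(id -u):$(id -g) -v $PWD:/home/build edrosten/threeb-static-build-env make -j 8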

The funny thing about this is that it gives a very modern way of partying like it’s 1999, or 2009 at any rate. The environment is stock Ubuntu 10.04, with a C++98 compiler. Obsolete? Sure, but it works and will likely keep working for a long time yet. And static binaries last forever: I last touched the FAST binaries in 2006, probably on some Red Hat machine, and they still work fine today.

I’m not likely to update the builder for the ImageJ plugin any time soon. No one except me builds that, since anyone who is in deep enough to modify the code probably wants to analyse large datasets, for which ImageJ isn’t the right tool.
