
In a recent blog post by Andre Stolpp, we explained the hardware and software architecture of our itemis Robocar project. When setting up Robocar, development was done directly on the Raspberry Pi. Although it is quite comfortable to work on that device, it also has a few limitations:

  1. Not all development tools are available. This is a real drawback for us, since we are using Eclipse CDT and only old versions of it are available for Raspbian Stretch.
  2. Performance is considerably lower than on a notebook.
  3. You need a physical connection to the Raspberry Pi. Coding while traveling, for example, is a bit difficult.

There are a few approaches available to mitigate these issues:

  1. Write platform-independent code and work, e.g., on an Ubuntu machine. However, you still need a recompilation step to produce binaries for the Raspberry Pi target.
  2. Use a cross-compilation toolchain to work on the host and then deploy the software to the target.

Setting up a cross-compilation toolchain can be a cumbersome task. We chose an approach based on Docker instead, covering two use cases:

  1. Building the Raspberry binaries in a local or CI environment.
  2. Using a local IDE for comfortable editing.

Using Docker for cross compilation

We are using Docker to set up a native ARM32v7 toolchain and to avoid messing with cross-compilation environments. But how can this work, given that notebooks and PCs are not based on the ARM architecture and thus cannot execute ARM code natively?

There is a very nice feature in Docker for Windows (it is not in the Unix version of Docker) that allows you to execute native ARM code: Docker for Windows comes bundled with QEMU. If Docker detects that a container is based on ARM code, it is executed within QEMU. This brings a lot of advantages, but also has some limitations, as discussed below.
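You can verify this quickly: asking an ARM container for its machine architecture on an x86 host should print an ARM identifier such as armv7l, because the container is transparently executed in QEMU:

docker run --rm arm32v7/debian:stretch uname -m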

To define the contents of the container, we need a Dockerfile. Ours looks like this:

# Base image: native ARM32v7 Debian Stretch, executed via QEMU on Docker for Windows
FROM arm32v7/debian:stretch

# Basic build tools
RUN apt-get update
RUN apt-get install -y build-essential cmake unzip pkg-config

# Image and video I/O libraries required by OpenCV
RUN apt-get install -y libjpeg-dev libpng-dev libtiff-dev
RUN apt-get install -y libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
RUN apt-get install -y libxvidcore-dev libx264-dev

# GUI, linear algebra and Python dependencies for OpenCV
RUN apt-get install -y libgtk-3-dev
RUN apt-get install -y libcanberra-gtk*
RUN apt-get install -y libatlas-base-dev gfortran
RUN apt-get install -y python3-dev
RUN apt-get install -y wget

# Download and unpack the OpenCV 4.0.0 sources, including the contrib modules
RUN wget -O opencv.zip https://github.com/opencv/opencv/archive/4.0.0.zip
RUN wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.0.0.zip

RUN unzip opencv.zip
RUN unzip opencv_contrib.zip

WORKDIR opencv-4.0.0

RUN mkdir build
WORKDIR build

# NumPy is required for the OpenCV Python bindings
RUN apt-get install -y python3-pip
RUN pip3 install numpy

# Configure and build OpenCV with NEON/VFPv3 optimizations for the ARM target
RUN cmake -D CMAKE_BUILD_TYPE=RELEASE \
	-D CMAKE_INSTALL_PREFIX=/usr/local \
	-D OPENCV_EXTRA_MODULES_PATH=/opencv_contrib-4.0.0/modules \
	-D ENABLE_NEON=ON \
	-D ENABLE_VFPV3=ON \
	-D BUILD_TESTS=OFF \
	-D OPENCV_ENABLE_NONFREE=ON \
	-D INSTALL_PYTHON_EXAMPLES=OFF \
	-D BUILD_EXAMPLES=OFF ..

RUN make -j4
RUN make install
RUN ldconfig

# wiringPi for GPIO access
WORKDIR /
RUN apt-get install -y git
RUN git clone git://git.drogon.net/wiringPi
RUN apt-get install -y sudo
WORKDIR wiringPi
RUN ./build

# PCA9685 PWM driver library
WORKDIR /
RUN git clone https://github.com/Reinbert/pca9685
WORKDIR pca9685/src
RUN make install

# nlohmann/json (header-only, so no build step is required)
RUN mkdir /lohmann
WORKDIR /lohmann
RUN git clone https://github.com/nlohmann/json
WORKDIR json
#RUN cmake .
#RUN make install

# Alas, there is no use in installing gdb / gdbserver. They don't work in QEMU,
# since a system call they require (ptrace) is not supported.
#
# RUN apt-get install -y gdb gdbserver

# We need support for the joystick device - even if we cannot access it in
# Docker under Windows
RUN apt-get install -y joystick

# Useful for debugging
RUN apt-get install -y net-tools

# Working directory for cross builds; the host source tree is mounted here
RUN mkdir /xbuild
WORKDIR /xbuild

This Dockerfile sets up a Debian Stretch system and installs a number of required packages. Among other things, it downloads, compiles, and installs OpenCV. It also creates a directory called /xbuild, which we will use to compile our code.
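
From this Dockerfile we build the image once; the image name robocar-build is a placeholder for whatever tag you choose:

docker build -t robocar-build .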

Our source code is on the host machine, i.e., the Windows machine, so we need to start the container with /xbuild mounted as a volume pointing to the location of the project.

Robocar with Docker: start container

This gives us an interactive shell in the container. If we have mounted the correct source directory, we can use cmake and make to build an executable that can be run on the Raspberry Pi.
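
The start command itself looks roughly like the following sketch; the image name robocar-build, the host path, and the port number are placeholders for our actual values:

docker run -it --rm -v C:/projects/robocar:/xbuild -p 5000:5000 robocar-build /bin/bash

Inside that shell, the build is then a standard out-of-source CMake build, assuming the project's CMakeLists.txt sits at the root of the mounted tree:

cd /xbuild
mkdir -p build && cd build
cmake ..
make -j4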

Running within Docker

But of course we would also like to use the Docker-built binaries to check and test our code. Since we are within the ARM32v7 container, we can simply run our binaries. There is one caveat, however: the container obviously does not provide the same hardware as the car. So we have at least three issues:

  1. There is no camera attached.
  2. There is no controller attached to steer the car.
  3. There are no actuators (GPIO, PWM), so library calls will fail.

The solution to this problem is, of course, to abstract the real hardware. Our code works against an AbstractJoystick interface, and we instantiate one of two implementations based on a command-line switch (see the sketch after the list below).

Robocar with Docker: abstraction

  • LinuxJoystick is the implementation running on the car. It actually reads the joystick interface.
  • RemoteJoystick reads commands from a Unix socket. It listens on a port (see the Docker start command above) and emulates a joystick. We can then telnet to the socket and send commands from the host.
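
To illustrate the idea, here is a minimal sketch of such an abstraction in C++. Everything except the class names AbstractJoystick, LinuxJoystick, and RemoteJoystick is an assumption for illustration, not the actual Robocar code:

#include <iostream>
#include <memory>
#include <string>

// The interface the rest of the code programs against.
// (Member names are illustrative, not the actual Robocar API.)
class AbstractJoystick {
public:
    virtual ~AbstractJoystick() = default;
    virtual double steering() = 0;   // e.g. in [-1.0, 1.0]
    virtual double throttle() = 0;
};

// On the car: reads the real joystick device (e.g. /dev/input/js0).
class LinuxJoystick : public AbstractJoystick {
public:
    double steering() override { return 0.0; /* read from device */ }
    double throttle() override { return 0.0; /* read from device */ }
};

// In Docker: emulates a joystick from commands received on a socket,
// so we can telnet in from the host and steer remotely.
class RemoteJoystick : public AbstractJoystick {
public:
    double steering() override { return 0.0; /* read from socket */ }
    double throttle() override { return 0.0; /* read from socket */ }
};

// The command-line switch decides which implementation is instantiated.
std::unique_ptr<AbstractJoystick> createJoystick(bool remote) {
    if (remote)
        return std::make_unique<RemoteJoystick>();
    return std::make_unique<LinuxJoystick>();
}

int main(int argc, char** argv) {
    bool remote = (argc > 1) && std::string(argv[1]) == "--remote";
    auto joystick = createJoystick(remote);
    std::cout << joystick->steering() << "\n";
    return 0;
}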

In addition, the image acquisition can be configured to read either from a camera or, in the Docker version, from a video file. The actuators are either actually controlled or replaced by a dummy class.

Setting up Eclipse

We can now use any modern version of Eclipse to work on the source code, and we do indeed need a modern version: we are going to use a full development approach, potentially including AUTOSAR system descriptions, requirements traceability, etc.

Robocar with Docker: setting up eclipse

However, an out-of-the-box CDT will not be able to find the standard include files, since the host cannot access the file system within the Docker container. There are a few options. The simplest one: copy the include files to the local disk and tell CDT where to find them. So, in the Docker container, we created a cdt_include subdirectory and did the following:

cp -R /usr/local/include .
cp -R /usr/include .

This makes the files visible in CDT in the project explorer. Using the project properties, we can tell the CDT indexer where to find them. Sometimes it does not find includes in subdirectories, so we may need to add those manually. Of course, this needs to be repeated after installing additional headers in the container.

Robocar with Docker: add include path to project

We can then start the build inside Docker from the standard CDT build system by adding a new build target and using a trick in the dialog:

Robocar with Docker: add new build target

Robocar with Docker: modify build target

Note that in the text field labelled “Build target”, we set the name of the command that we want to execute in Docker. The “Build target” is always appended to the “Build command”. We can now start the build of the target from CDT, and Eclipse will invoke Docker to compile the code.
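
As an example (the container name robocar is an assumption): if “Build command” is set to docker exec -w /xbuild robocar and “Build target” is set to make -j4, CDT effectively runs

docker exec -w /xbuild robocar make -j4

in the running container.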

Drawback: debugging

It would of course also have been very convenient to be able to debug the code running in the Docker container. Gdb supports remote debugging of code on other machines. However, the QEMU bundled with Docker and debugging do not go well together: it is not possible to start gdb in the container, because QEMU does not support the ptrace system call. If you are using QEMU from the command line, there are workarounds; however, we could not get this running from Docker.
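
For reference, the command-line workaround relies on QEMU's built-in GDB stub; a sketch with placeholder names (the robocar binary and port 1234 are assumptions):

qemu-arm -g 1234 ./robocar
# in a second shell, attach an ARM-aware gdb:
gdb-multiarch ./robocar -ex "target remote localhost:1234"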
