itemis Blog

Configuring Robocar Software with JSON for Modern C++

Written by Andreas Graf | Oct 29, 2019

In the previous article “Developing Robocar software with Docker” in our series on robocar software development, we explained how to develop and run your embedded software in a Docker container. One of the key concepts was the introduction of different software variants: those that run in a Docker container and those that run on the real vehicle. In this way, we abstract away any differences in the hardware.

In “real” automotive development, variants and parameters are managed with dedicated tools and are, for example, flashed to the ECU (Electronic Control Unit) at the end of the production line. For our robocars, we will show how to manage this variability and these parameters with JSON for Modern C++.

Software variants and software parameterization are important and often complex topics in software engineering. As soon as our robocar software gets a little bit more elaborate, they are both required. In our case, two examples are:

  • Getting input from different sources, e.g., the real camera or a pre-recording.
  • Parameterizing the trajectory planning with actual car geometry.

Choosing the image input source

When our car runs in training or in autonomous mode, the images that are being processed by the neural networks and by the trajectory planning obviously will be taken from the mounted camera. However, for testing and development purposes, we would like to have other input sources, such as pre-recorded data. Our software should be able to deal with the following variants:

  1. getting images from the camera (through OpenCV API),
  2. getting images from the camera (through Raspicam API),
  3. getting images from a pre-recorded MP4 movie,
  4. getting images from a directory of images, e.g., from a previous run.

To make our code clean and maintainable, the actual code that is processing the images should not need to distinguish between sources. So we introduce an abstract class that defines the interface for image acquisition:

#ifndef CV_ABSTRACTCAMERA_H_
#define CV_ABSTRACTCAMERA_H_

#include <opencv2/core.hpp>

class AbstractCamera {
public:
    AbstractCamera();
    virtual ~AbstractCamera();

    // Initializes the video source (e.g. opens the camera module via the V4L2 driver)
    virtual int initVideo() = 0;
    // Releases the video source
    virtual int closeVideo() = 0;
    // Captures the next frame into an OpenCV image
    virtual int captureVideo(cv::Mat &dest) = 0;
};

#endif /* CV_ABSTRACTCAMERA_H_ */

The most important method in the class is captureVideo(), which will retrieve and store the next image frame into an OpenCV image data structure.

Our (simplified) processing loop could then be agnostic about the actual type of camera used:

while(true) {
	// Capture the next video frame ...
	acamera->captureVideo(image);
	// ... and hand it to the processing pipeline
	process(image);
}

We can then implement a class for getting images from the camera via OpenCV:

int ImageAcquisitionCV::initVideo() {
	if(!cap.isOpened()) {
		cout << "Could not open the camera module" << endl;
		return 0;
	}
	// Set the frame rate of the camera module
	cap.set(CAP_PROP_FPS, 60);
	return 1;
}

int ImageAcquisitionCV::captureVideo(cv::Mat& dest) {
	// Read the next frame into the destination image
	cap >> dest;
	return 0;
}
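The constructors of ImageAcquisitionCV are not shown in the article. A minimal sketch, assuming the class simply wraps a cv::VideoCapture member named cap (as the snippets above suggest), could look like this:

#include <opencv2/opencv.hpp>

// Sketch only (assumption): ImageAcquisitionCV wraps a cv::VideoCapture member
// named "cap", matching the initVideo()/captureVideo() snippets above.
class ImageAcquisitionCV : public AbstractCamera {
public:
	// Default constructor: open the first attached camera device
	ImageAcquisitionCV() : cap(0) {}
	// File-based constructor: open a pre-recorded video, e.g. an MP4 file
	explicit ImageAcquisitionCV(const char * filename) : cap(filename) {}

	int initVideo() override;
	int closeVideo() override;
	int captureVideo(cv::Mat &dest) override;

private:
	cv::VideoCapture cap;
};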

Or one that reads images from a directory:

ImageAcquisitionDir::ImageAcquisitionDir(const char * dirname)
: iterator(fs::directory_iterator(dirname)) {
	std::cout << "directory image reader" << std::endl;
}

int ImageAcquisitionDir::captureVideo(cv::Mat &dest) {

	try {
		auto path = (*iterator).path();
		std::cout << "Reading file " << path << std::endl;
		dest = cv::imread(path.string());
		iterator++;

		return 0;

	} catch (const fs::filesystem_error &e) {
		return -1;
	}
}

Choosing variants for runtime

Since we now have two different classes, we need a mechanism to tell the program which one to use. There are different options:

  1. Changing or recompiling the code before starting: In C++, variants of the code are often specified with preprocessor directives (#ifdef, see the sketch after this list). This requires recompiling the code, which is time-consuming. We could compile different versions of the executable, but the number of executables would explode even for just a few variants.
  2. Command-line arguments: a frequently used mechanism to pass parameters to a program and change its behavior. However, in our case, we will have more complex configurations. So our choice is:
  3. Configuration files: configuration files are (often) text-based files that contain parameters.
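
For illustration, a preprocessor-based variant selection (option 1) might look like the following sketch; the macro name USE_RASPICAM is hypothetical:

// Hypothetical preprocessor-based variant selection (option 1):
// changing the image source requires a recompile of the program.
#ifdef USE_RASPICAM
	AbstractCamera * acamera = new ImageAcquisitionRCCV();
#else
	AbstractCamera * acamera = new ImageAcquisitionCV();
#endif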

In our case, we chose JSON-based configuration files. This allows us to easily manage more complex parameter types (see below), put our configuration into source code management tools (Git), and combine configuration files in more complex scenarios.

Our section for the camera configuration looks like this:

"camera" : "video",
"camera-video" : {
	"file" : "Driving_at_Sunset.mp4"
},
"camera-directory" : {
	"path" : "tests/images"
},

In this configuration, the “camera” value tells the code which camera to use, and “camera-video” and “camera-directory” contain camera-specific parameters that are interpreted by the different implementations of the abstract camera. To instantiate the camera, we use a factory class. It reads the JSON file and decides which camera to create depending on the “camera” value:

extern JConfig jconfig;

AbstractCamera * createCamera() {

	auto cam = (jconfig.j)["camera"].get<std::string>();
	std::cout << "Configured camera: " << cam << std::endl;
	if(cam.compare("video")==0) {
		auto filename = jconfig.j["/camera-video/file"_json_pointer].get<std::string>();
		std::cout << "File based " << filename.c_str() << std::endl;
		return new ImageAcquisitionCV(filename.c_str());
	}
	if(cam.compare("raspi")==0) {
		std::cout << "Raspi based camera" << std::endl;
		return new ImageAcquisitionRCCV();
	}
	if(cam.compare("directory")==0) {
		auto filename = jconfig.j["/camera-directory/path"_json_pointer].get<std::string>();
		return new ImageAcquisitionDir(filename.c_str());
	}
	std::cout << "OpenCV camera" << std::endl;
	return new ImageAcquisitionCV();
}

A camera is then created with

AbstractCamera * acamera = createCamera();
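
The JConfig type referenced by the factory is not shown in this article. A minimal sketch, assuming it simply holds the parsed configuration document loaded once at program start, could look like this:

#include <fstream>
#include <string>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

// Sketch only (assumption): JConfig wraps the parsed configuration that the
// factory accesses via jconfig.j.
struct JConfig {
	json j;

	void load(const std::string &filename) {
		std::ifstream in(filename);
		j = json::parse(in);
	}
};

JConfig jconfig; // matches the "extern JConfig jconfig;" declaration above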

Combining JSON files

For reading and processing JSON files, we use the library https://github.com/nlohmann/json. This gives us an additional benefit: the library supports JSON Merge Patch (RFC 7386), making it possible to combine several JSON files and even override some of the values. So we can pass in the names of the JSON files we want to combine and get one final configuration:

// Merge each additional configuration file into j; later files override matching values
for(size_t i = 1; i < filenames.size(); i++) {
	logger->debug("Reading " + filenames.at(i));
	std::ifstream in(filenames.at(i));
	auto j2 = json::parse(in);
	j.merge_patch(j2);
}
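
The semantics of merge_patch() follow JSON Merge Patch (RFC 7386): values present in the patch document override the corresponding values in the base document, while everything else is kept. A small, self-contained example using the configuration values from above:

#include <iostream>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

int main() {
	// Base configuration, e.g. the defaults checked into Git
	json base = json::parse(R"({
		"camera": "video",
		"camera-video": { "file": "Driving_at_Sunset.mp4" }
	})");

	// Override passed in as a second file, e.g. for a test run
	json patch = json::parse(R"({
		"camera": "directory",
		"camera-directory": { "path": "tests/images" }
	})");

	base.merge_patch(patch);
	std::cout << base.dump(2) << std::endl;
	// "camera" is now "directory"; the "camera-video" section is kept
}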

Configuration based on detected hardware

On our robocar platform, we can actually build several configurations of robocars. The basic frame with actuators and sensors is the same, but the cameras differ:

  1. Raspberry with Raspi V1 cam
  2. Raspberry with USB cam
  3. Jetson Nano with Raspi V2 cam
  4. Jetson Nano with USB cam
  5. Nvidia Xavier with Stereo Cam
  6. Nvidia Xavier with USB Cam

The cameras have slightly different characteristics: they distort the images in different ways, and they might need different color processing or special processing based on image size. Since we are humans, we would typically forget to adjust the config.json when switching from one machine to another, and then wonder why our captured images look wrong. So we decided to automatically detect the right configuration.

First of all, we have a little helper that retrieves the names of all video devices through the V4L2 API:

std::vector<std::string> allCameraNames() {

	std::vector<std::string> res;
	const fs::path dev("/dev");
	for( const auto& entry : fs::directory_iterator(dev)) {
		// Every device whose name starts with "video" is a camera candidate
		if(entry.path().filename().string().substr(0,5).compare("video")==0) {
			res.push_back(cameraName(entry.path().string()));
		}
	}
	return res;
}
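
The cameraName() helper used above is not shown in the article. A minimal sketch, assuming it reads the human-readable device name via the V4L2 VIDIOC_QUERYCAP ioctl, might look like this:

#include <string>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

// Sketch only (assumption): query the card name of a /dev/video* device
// via the V4L2 VIDIOC_QUERYCAP ioctl.
std::string cameraName(const std::string &device) {
	int fd = open(device.c_str(), O_RDONLY);
	if (fd < 0) {
		return "";
	}
	v4l2_capability cap{};
	std::string name;
	if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0) {
		name = reinterpret_cast<const char *>(cap.card);
	}
	close(fd);
	return name;
}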

We list all possible camera names, in order of their priority, in the JSON file. Each key is a camera name, and its value is a JSON pointer referring to the actual camera configuration:

"cameras-auto-detect" : {
	"vi-output, imx219 6-0010" : "/cameras/nano-rpi",
	"USB 2.0 Camera": "/cameras/camera-gstreamer"
},
"cameras" : {
	"camera-video" : {
		"type" : "video",
		"file" : "Driving_at_Sunset.mp4"
	},
	"camera-directory" : {
		"type" : "directory",
		"path" : "tests/images"
	},
	"nano-rpi" : {
		"type" : "gstreamer",
		"config" : "tests/images",
		"distort" : {
			"CameraMatrix":[6.1042868202109855e+02, 0.0, 6.5214392342611825e+02, 0.0, 4.0400171141068842e+02 ,3.1415789291312791e+02, 0.0, 0.0, 1.0],
			"distortionCoeffs":[-1.8738949642767188e-02, 3.0868180169294455e-02, -2.5805891763426526e-03, 0.0]
		}
	}
},
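
How the “distort” block is consumed is not shown in this article. A hedged sketch, assuming the nine CameraMatrix values describe a row-major 3x3 intrinsic matrix and that OpenCV's standard (non-fisheye) distortion model is used, could look like this:

#include <vector>
#include <opencv2/opencv.hpp>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

// Sketch only (assumption): build OpenCV matrices from the "distort" block
// of a camera configuration and undistort a captured frame.
cv::Mat undistortFrame(const json &camConfig, const cv::Mat &frame) {
	auto m = camConfig["distort"]["CameraMatrix"].get<std::vector<double>>();
	auto d = camConfig["distort"]["distortionCoeffs"].get<std::vector<double>>();

	cv::Mat cameraMatrix(3, 3, CV_64F, m.data());           // 9 values -> 3x3 matrix
	cv::Mat distCoeffs(1, (int)d.size(), CV_64F, d.data()); // assumed k1, k2, p1, p2

	cv::Mat undistorted;
	cv::undistort(frame, undistorted, cameraMatrix, distCoeffs);
	return undistorted;
}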

In the factory, we iterate over the camera definitions in the “cameras-auto-detect” section of the JSON file; if one of its keys matches a camera name found in allCams, we select that entry and create an instance of the corresponding camera:

for(auto &element : (jconfig.j)["cameras-auto-detect"].items() ) {
	if (std::find(allCams.begin(), allCams.end(), element.key()) != allCams.end()) {
		logger->info( "Auto config camera: {}",element.key() );
		cam = element.value();
	}
}
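
The matched value cam is itself a JSON pointer string such as “/cameras/nano-rpi”. How the factory resolves it is not shown above; a minimal sketch using nlohmann's json_pointer could look like this:

// Sketch only (assumption): resolve the matched pointer string, e.g.
// "/cameras/nano-rpi", to the concrete camera section and dispatch on
// its "type" value.
json::json_pointer ptr(cam);
json camConfig = jconfig.j[ptr];

auto type = camConfig["type"].get<std::string>();
if (type == "gstreamer") {
	// ... construct the GStreamer-based camera from camConfig
}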

Summary

Creating configurations and easily modifiable parameter sets reduces development times. Even for seemingly small and simple projects, such as our Robocar, we see that the number of possible configurations grows rapidly. In interpreted languages, such as Python, parameters are often adapted directly in the source code, but in compiled languages we do not want to recompile each time a parameter changes.

Using libraries such as nlohmann's “JSON for Modern C++”, we can easily create a system that can be configured comfortably and in a very flexible way. This reduces turn-around times in development, allows quicker adjustment of parameters, and makes it easy to support different hardware configurations of the vehicle. The library's support for JSON Merge Patch adds modularity to the configurations.