
4 Microservice Frameworks in Comparison, Streaming Example Included

You want to use a microservice framework for your next project, but you are not sure which one to use? Then read this article and make your own decision based on the test results. Four microservice frameworks are explained and tested with regard to start time, response time and response time under load using a streaming example. Have fun!

In my last post I wrote about RxJava and why you should develop microservices reactively. Microservices can be initialized or shut down depending on the load.

If a service is under heavy load, an autoscaler can simply start additional instances. This distributes the load and brings response times back down. If the services are under little load, some instances are stopped and the resources are freed up for other services.

This approach reduces your costs when using a cloud service.

Services are rarely free of load peaks. It is therefore advantageous if additional instances can be started and stopped quickly, and if newly started instances respond to requests promptly. After all, you want to keep response times low for all requests.

In this article I will therefore take a look at four microservice frameworks and put them to the test. Afterwards, I will briefly explain how easy it is (or not) to get started with the respective microservice framework.

In the first step I will deal with the following microservice frameworks:

  • Spring Boot
  • Vert.x
  • Helidon
  • Quarkus

The frameworks Micronaut, Wildfly, Dropwizard and Spark will follow in due course.

I will start with Spring Boot, because I already tackled the framework in my first article and use it in production myself.

In the following I build on the article about Reactive Streams with Spring Boot. I have already created a streaming REST endpoint there and developed a small frontend for illustration. The code for the following backends can be found in the same GitHub repo.

Interestingly, Spring Boot supports application/stream+json out of the box, even though this is not an official MIME type. That is quite surprising, because the format makes a lot of sense.
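To see why the format makes sense, compare the two representations. A minimal sketch (plain Java here for a self-contained illustration; the Car objects are replaced by literal JSON strings):

```java
import java.util.List;

public class StreamFormats {

    // application/json: a single array, parseable only once the
    // closing bracket has arrived
    static String asJsonArray(List<String> objects) {
        return "[" + String.join(",", objects) + "]";
    }

    // application/stream+json: newline-delimited objects; every line is a
    // complete JSON document the client can process as soon as it is flushed
    static String asJsonStream(List<String> objects) {
        return String.join("\n", objects) + "\n";
    }

    public static void main(String[] args) {
        List<String> cars = List.of("{\"brand\":\"Audi\"}", "{\"brand\":\"BMW\"}");
        System.out.println(asJsonArray(cars));   // [{"brand":"Audi"},{"brand":"BMW"}]
        System.out.print(asJsonStream(cars));
    }
}
```

With the streaming variant, a slow producer can flush each line individually, which is exactly what the backends below do.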

Since Quarkus and Helidon use JAX-RS, which sticks to the official MIME types, you either have to define the type yourself or fall back to application/octet-stream.

So that I don't have to change the frontend, I support both application/octet-stream and application/stream+json. The respective endpoint is selected via the Accept header; without a header, application/json is returned.

Getting started with the microservice frameworks

So far I have only used Spring Boot in production, so the other microservice frameworks are new territory for me, too. I will briefly explain what difficulties I had creating the corresponding endpoint in each framework.

Spring Boot

Spring Boot is probably the best known of the microservice frameworks. 39,000+ stars on GitHub speak for themselves.

Spring.io makes it very easy to get started with Spring Boot: a pom.xml with all necessary dependencies is generated for you, so you can start immediately. In addition, the community is very large, and you will find instructions for dealing with most problems.

Spring Boot is the only framework in this comparison that supports application/stream+json by default. It also offers many other features and a lot of help for clear and easy development.

As you can see below, it doesn't take much to create a response stream:

@Controller
class RestEndpoint {

    @Autowired
    lateinit var dataProvider: DataProvider

    @Autowired
    lateinit var streamResponse: CarStreamResponseOutput

    @GetMapping(path = ["cars"], produces = [MediaType.APPLICATION_STREAM_JSON_VALUE])
    fun getCarsAsStream(): StreamingResponseBody {
        return streamResponse
    }

    @GetMapping(path = ["cars"], produces = [MediaType.APPLICATION_JSON_VALUE])
    fun getCarsAsJson(): List<Car> {
        return dataProvider.getDataStream().toList().blockingGet()
    }
}

In addition, a StreamingResponseBody component is required to write the stream response:

@Component
class CarStreamResponseOutput : StreamingResponseBody {
    @Autowired
    lateinit var dataProvider: DataProvider

    override fun writeTo(os: OutputStream) {
        val writer = BufferedWriter(OutputStreamWriter(os))
        val countDownLatch = CountDownLatch(1)
        dataProvider.getDataStream().subscribe({
            writer.write(Klaxon().toJsonString(it))
            writer.write("\n")
            writer.flush()
        }, ::println, {
            os.close()
            countDownLatch.countDown()
        })
        countDownLatch.await()
        writer.flush()
    }
}

That's basically it. So it's on to the next framework.

Vert.x

Vert.x has quite good documentation. It is developed under the umbrella of the Eclipse Foundation and is designed directly for reactive applications on the JVM.

However, I couldn't just pass my observable (or flowable) to the response handler. You can return a flowable directly, as described in the documentation, but the flowable does not write directly to the stream at every new event.

Vert.x seems to buffer the elements of the flowable and only writes to the stream once the flowable completes.

Therefore, you have to write your own handler to get a continuous stream. However, this task is not very complex if you are familiar with the very similar Spring Boot handler.

class AsyncCarResponse : Handler<RoutingContext> {
  override fun handle(rtx: RoutingContext) {
    val response = rtx.response()
    response.setChunked(true)
    val flow: Flowable<String> = DataService.getDataStream(TIMEOUT)
        .map { Klaxon().toJsonString(it) }
        .toFlowable(BackpressureStrategy.BUFFER)
    flow.subscribe({
      response.write(it)
      response.write("\n")
      response.writeContinue()
    }, ::println, {response.end()})
  }
}

The documentation of Vert.x is also good in other respects, and the community, with more than 9,700 GitHub stars, is growing steadily.

The application is compiled with ./mvnw clean compile and started with ./mvnw exec:java. The two commands can also be combined: ./mvnw clean compile exec:java.

All in all, you can find your way into the microservice framework Vert.x well and can quickly start developing. However, you have to get used to developing on an event loop ("main thread"), because you are not allowed to block it.

In the beginning, I made the mistake of using Thread.sleep, which reduced performance a lot. However, this is clearly described as a don't on the website.

After I had fixed this, Vert.x could score with performance again. The other microservice frameworks coped with Thread.sleep, but since the call was in the domain part of the application that all frameworks share, I removed it globally.

Helidon

Like Quarkus, Helidon uses the JAX-RS standard. So I could use the same code as Quarkus. You only have to register a JerseySupport and off you go.

The positive thing about Helidon is that it doesn't need its own commands in the terminal to be started; the IDE support is very simple and pleasant. All necessary dependencies are in the pom.xml.

So you can build with mvn clean install and run the built jar archive with java -jar. It has to be said that mvn clean install is also sufficient for all the other frameworks to build an executable jar.

Vert.x and Quarkus bring more scripts and need an additional class to run from the IDE. This class is not included by default in either microservice framework.

fun main(args: Array<String>) {
    val serverConfig = ServerConfiguration.builder()
        .port(8080)
        .build()
    val webServer = WebServer
        .create(serverConfig, Routing.builder()
            .register("/cars", JerseySupport.builder()
                .register(CarService::class.java)
                .build())
            .build())
        .start()
        .toCompletableFuture()
        .get(10, TimeUnit.SECONDS)
}

Quarkus

Quarkus is still quite a young framework, but it has already received a lot of attention in the community. It currently stands at around 2,000 stars on GitHub.

Quarkus was and is developed by Red Hat. It can be compiled to a native binary with GraalVM, but can also run on the classic JVM. On the JVM it cannot fully play out its strengths, namely the slim RAM consumption and the extremely fast startup, even though, as you'll see, it still starts in less than a second there.

Quarkus builds on several standards and libraries, including JAX-RS, Netty and Eclipse MicroProfile.

Quarkus is committed to a very fast start, and therefore fast scaling, as well as low memory consumption. In addition, the developers rely on the reactive approach to enable highly concurrent and responsive applications.

There is a more detailed article in the JavaSpektrum (7/2019), in which Quarkus is examined in more detail. Among other things, it shows that the application on the JVM consumes 100 MB RAM, whereas on the GraalVM it needs only 8 MB.

The Quarkus documentation is detailed and easy to read. Unfortunately, there are only a few tutorials and explanations so far. I guess this is due to the fact that the community isn't that big yet, and the Quarkus microservice framework hasn't been in use long enough.

If problems occur, you have to search for a long time or ask your own questions to the community. But due to the fact that Quarkus comes from Red Hat, the community won't be long in coming.

The advantage is that you can develop a fast and lean application with existing Java or Kotlin knowledge.

@Path("/cars")
class CarResource {
    @Inject
    lateinit var responseStream: CarStreamResponseOutput

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    fun getCarsAsList() = DataService.getDataStream(0).toList().blockingGet()

    @GET
    @Produces("application/stream+json")
    fun loadCarsAsJsonStream(): Response {
        return Response.ok().entity(responseStream).build()
    }
}
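The injected CarStreamResponseOutput is not shown here. A plausible sketch, assuming it mirrors the Spring Boot writer but implements JAX-RS's StreamingOutput (the names and the DataService call are taken from the snippets above; the actual class in the repo may differ):

```kotlin
import java.io.BufferedWriter
import java.io.OutputStream
import java.io.OutputStreamWriter
import java.util.concurrent.CountDownLatch
import javax.enterprise.context.ApplicationScoped
import javax.ws.rs.core.StreamingOutput
import com.beust.klaxon.Klaxon

@ApplicationScoped
class CarStreamResponseOutput : StreamingOutput {

    // One JSON object per line, flushed immediately so the client
    // receives a continuous stream instead of one big buffer.
    override fun write(os: OutputStream) {
        val writer = BufferedWriter(OutputStreamWriter(os))
        val countDownLatch = CountDownLatch(1)
        DataService.getDataStream(0).subscribe({
            writer.write(Klaxon().toJsonString(it))
            writer.write("\n")
            writer.flush()
        }, ::println, { countDownLatch.countDown() })
        countDownLatch.await()
    }
}
```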

Comparison of microservice frameworks

Now that you know how to get started with the individual microservice frameworks, I'll start with the actual comparison and evaluation of my benchmarks.

Development

Spring Boot, Quarkus and Helidon use almost the same ResponseWriter. Vert.x uses a handler.

In Helidon and Quarkus you can rely on the classic JAX-RS approach, for which there is a lot of documentation thanks to years of Java EE development. Vert.x, on the other hand, has its own good documentation.

All in all, developing in Spring Boot was easiest for me. This is partly due to my experience, partly due to the currently largest community. However, the advantages of the other frameworks cannot be denied, as you will see from the numbers.

Tests of the different backends

The first step is to start all backends on the JVM (Java version 11.0.2). Then the corresponding endpoints are queried with curl. The first-response times are measured with a format file (located in my GitHub repo).

curl -w "@curl-format.txt" -o /dev/null -s "http://localhost:8080/cars" -H "Accept: application/stream+json"
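The format file itself is not reproduced in this article; a typical curl -w format file for measuring time to first byte looks roughly like this (the actual file in the repo may differ):

```
     time_namelookup:  %{time_namelookup}s\n
        time_connect:  %{time_connect}s\n
  time_starttransfer:  %{time_starttransfer}s\n
                       ----------\n
          time_total:  %{time_total}s\n
```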


The mean and median of the response times are determined with k6: ten simulated users send requests to the endpoint for 30 seconds. The results are documented in the following table.

Criterion                          Spring Boot   Vert.x    Helidon   Quarkus
Start time                         2.226 s       0.200 s   0.619 s   0.562 s
First response                     0.190 s       0.350 s   0.540 s   0.523 s
Requests/s (small data)            8712          5372      7082      9269
Requests/s (large data)            79            99        98        98
Avg. response time (small data)    1.120 ms      1.320 ms  1.390 ms  1.050 ms
Median response time (small data)  1.030 ms      1.700 ms  1.130 ms  0.914 ms
Avg. response time (large data)    126 ms        101 ms    102 ms    101 ms
Median response time (large data)  118 ms        101 ms    101 ms    101 ms
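The k6 setup described above, ten virtual users for 30 seconds, can be expressed as a short script. This is a sketch mirroring the text; the endpoint URL is assumed and the script in the repo may differ. It runs only under the k6 runtime, not under Node.js:

```javascript
// k6 load test: 10 virtual users hammer the endpoint for 30 seconds.
// Run with: k6 run loadtest.js
import http from 'k6/http';

export const options = {
  vus: 10,          // simulated users
  duration: '30s',  // test length
};

export default function () {
  http.get('http://localhost:8080/cars');
}
```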

The start times of the four microservice frameworks are shown in the following picture, where Spring Boot is clearly beaten by the other frameworks:

[Figure: start times of the microservice frameworks in comparison]

At least the slow start has the advantage that more dependencies are already loaded at startup, so the first response to a request comes faster. This can be seen below:

[Figure: time to first response]

The following two graphs show the responses per second. Small means data without delay, large means data with a delay of 100 ms.

As you can see, all frameworks are relatively close for responses with delay, with Spring Boot being about 20 percent slower. For small, immediate responses, however, the differences are greater: here Vert.x is almost 50 percent slower than Quarkus.

[Figure: requests per second, small response]

[Figure: requests per second, large response]

Results

Under load, all microservice frameworks respond similarly fast if the response is small. Spring Boot was well ahead on first response, but loses a lot of ground on start time. Vert.x has a clear lead on start time, although a single instance cannot handle as many simultaneous requests as Quarkus or Spring Boot under a large number of queries. However, multiple instances of Vert.x can be run on a single machine without any problems because it is single-threaded.

All in all, Helidon and Quarkus are the fastest in the overall picture, although Quarkus is still a tick faster.

When it comes to the amount of documentation, help on the web and the number of developers, you should probably rely on Spring Boot. However, if a service is to be started and stopped quickly, it is worth investing time and using one of the newer microservice frameworks such as Helidon, Quarkus, or Vert.x. This is especially exciting when using a microservice architecture.

The complete code can be found in my GitHub repo.

Did the article help you? Do you have any questions?

Leave me your comment below!

About Auryn Engel

Auryn Engel joined itemis in Leipzig early in 2019, after completing his Master's degree. As a full-stack developer, he works with Java EE, Spring Boot, React and Vue.js.