Wanna build a fast, fraud-safe game? Dive into Kafka!

Wojciech Marusarz
August 5, 2020

Kafka is a data bus with a persistence layer. It allows multiple producers to write data, and multiple consumers to read it – efficiently and reliably. As described in the previous article, Kafka allowed us to meet the challenges of the one-click block game we created, and to prove its usability. Let's dive into the code, learn how to use Kafka in a Spring Boot application, and compare different approaches to designing a responsive API.

Kafka setup

Kafka is a separate process running on your server, so it needs to be installed independently of the application. It comes with a whole ecosystem: ZooKeeper, brokers, a schema registry, a CLI, Control Center, etc. For the basic functionality of Kafka, which is well suited to our requirements, you can install the community version running on Docker. You need just two services: ZooKeeper and a broker.

For development purposes you can use the docker-compose version with Control Center, which can be found in Confluent's cp-all-in-one repository on GitHub. The Control Center service provides a readable web UI, which can help you monitor Kafka.

To run Kafka in Docker, go to the cp-all-in-one/cp-all-in-one (or cp-all-in-one/cp-all-in-one-community) directory and run docker-compose:
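Assuming the repository is Confluent's cp-all-in-one on GitHub (the URL is an assumption inferred from the directory names above), the sequence might look like this:

```shell
# clone the repository with the docker-compose definitions
git clone https://github.com/confluentinc/cp-all-in-one.git

# community edition: ZooKeeper, broker and friends, no Control Center
cd cp-all-in-one/cp-all-in-one-community

# start all services in the background
docker-compose up -d
```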

Once all images are pulled and the services are started, the console reports that all containers are up.

To ensure that everything is up and running, you can also run the command:
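A simple way to check is to list the compose services and their state (run from the same directory as the docker-compose file):

```shell
# shows each service (zookeeper, broker, ...) with its current state
docker-compose ps
```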

That’s it – Kafka is ready to handle our incoming and outgoing messages.

Spring Boot setup

To use Kafka in a Spring Boot application we just need to add dependencies to build.gradle.

  • spring-kafka – add this dependency directly in Spring Initializr
  • kafka-streams – add this dependency directly in Spring Initializr
  • kafka-json-serializer – lets us send messages as JSON, so we can avoid writing our own serializers and deserializers
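A minimal dependencies block in build.gradle might look like this – the exact artifact coordinates and the Confluent version are assumptions, and the Confluent artifacts require the Confluent Maven repository:

```groovy
repositories {
    mavenCentral()
    maven { url 'https://packages.confluent.io/maven/' }
}

dependencies {
    implementation 'org.springframework.kafka:spring-kafka'
    implementation 'org.apache.kafka:kafka-streams'
    // version is an assumption – pick one matching your broker
    implementation 'io.confluent:kafka-json-serializer:5.5.1'
}
```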

To start working with Kafka, we just need to configure the connection parameters for the Kafka brokers, and that’s all.

If the application and Kafka are running on the same server, or we are developing the application on a local machine, we can provide the Kafka broker address as localhost:9092. Otherwise, it has to be the address of the server where Kafka is installed. With Kafka installed and running, and this basic configuration in place, the application is able to connect to Kafka.
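In application.yml the configuration can be as small as this (the application-id value is an assumption; it names the Kafka Streams consumer group):

```yaml
spring:
  kafka:
    bootstrap-servers: localhost:9092   # broker address
    streams:
      application-id: block-game        # required by Kafka Streams
```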

System architecture

As mentioned in the previous article, the whole application is divided into three separate components:

  • A data bus that handles each incoming message (block) and saves it to the Kafka blocks topic.
  • Monitoring of current game progress, which uses a Kafka Streams topology – a background thread reads each block from the blocks topic, groups blocks by gameGuid and builds the current game progress. If a game is completed, it is saved to the Kafka games topic.
  • Game post-processing, which uses a Kafka Streams topology to validate the game and persist the best results. A background thread reads finished games from the games topic and validates the game results, including block coordinates, timestamps and the achieved result. If a game is valid, it is persisted in MongoDB.

Let’s see how each component works.

Data bus

Incoming messages (blocks) are consumed by a simple REST controller. Three types of events are handled: GameStart, NextLevel and GameEnd.

All that needs to be done here is to persist the message in Kafka – which is super fast – and return an immediate response to the UI application. To persist the data, we need to create a Kafka producer and send the data with it:
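A minimal sketch of this controller could look as follows – the event type, endpoint path and accessor names are assumptions; spring-kafka auto-configures the KafkaTemplate from the properties shown earlier:

```java
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Sketch only: BlockEvent is an assumed DTO covering GameStart,
// NextLevel and GameEnd messages.
@RestController
public class BlockController {

    private final KafkaTemplate<String, BlockEvent> kafkaTemplate;

    public BlockController(KafkaTemplate<String, BlockEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    @PostMapping("/blocks")
    public ResponseEntity<Void> handleBlock(@RequestBody BlockEvent block) {
        // key by gameGuid so all blocks of one game land in the same partition
        kafkaTemplate.send("blocks", block.getGameGuid(), block);
        // return immediately – processing happens in the background topologies
        return ResponseEntity.accepted().build();
    }
}
```

Keying messages by gameGuid keeps all blocks of a game in one partition, which preserves their ordering for the aggregation step.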

Messages are waiting in Kafka, ready to be handled by the next component.

Monitoring current game progress

To build the current game progress, we use a separate Kafka Streams topology, which reads all messages from the blocks topic, groups them by gameGuid, aggregates the blocks into a GameAggregate and detects whether the game isCompleted. If so, it persists the game to the games topic.

To create the topology, we use StreamsBuilder. Completed games are persisted in a separate Kafka topic, ready to be handled by the next component.
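The topology can be sketched like this – Block, GameAggregate, its addBlock/isCompleted methods and the JsonSerdes helper are assumptions based on the description above:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Produced;

// Sketch of the aggregating topology; serdes are assumed JSON serdes.
public class GameProgressTopology {

    public static void build(StreamsBuilder builder) {
        builder.stream("blocks", Consumed.with(Serdes.String(), JsonSerdes.block()))
               // blocks are keyed by gameGuid, so grouping by key groups by game
               .groupByKey()
               .aggregate(GameAggregate::new,
                          (gameGuid, block, game) -> game.addBlock(block),
                          Materialized.with(Serdes.String(), JsonSerdes.gameAggregate()))
               .toStream()
               // only completed games are forwarded to the next component
               .filter((gameGuid, game) -> game.isCompleted())
               .to("games", Produced.with(Serdes.String(), JsonSerdes.gameAggregate()));
    }
}
```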

Game post-processing

All that needs to be done here is to read the completed games from the games topic, validate them and persist them in MongoDB. All of this is done by another Kafka topology running in the background. Game validation is done by gameAggregateHandler.
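A sketch of this topology might look like the following – the GameAggregateHandler interface with its isValid/save methods is an assumption standing in for the article's gameAggregateHandler and the MongoDB repository:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;

// Sketch of the post-processing topology.
public class GamePostProcessingTopology {

    public static void build(StreamsBuilder builder, GameAggregateHandler handler) {
        builder.stream("games", Consumed.with(Serdes.String(), JsonSerdes.gameAggregate()))
               // validate block coordinates, timestamps and the achieved result
               .filter((gameGuid, game) -> handler.isValid(game))
               // persist valid games in MongoDB
               .foreach((gameGuid, game) -> handler.save(game));
    }
}
```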

We wanted to create a reactive application that is resistant to a high traffic load. Thanks to Kafka Streams, which guarantees that a single topology handles only one event at a time, the application was able to handle all incoming messages quickly and reliably.


To validate whether Kafka was the right choice, we verified three cases:

  • Blocking API
  • Non-blocking API and Thread Pool
  • Kafka Streams

Tests were performed using the Gatling tool. During each test, about 15k requests were sent. This was not a real-life scenario, but it highlights the differences between the approaches. Let’s see the results.

Blocking API

The blocking API is the simplest to implement and works in a straightforward manner:

  • When a GameStart message is received, a new Game model is created and persisted in MongoDB
  • When a new NextLevel block with a game GUID is received, the game model is read from MongoDB and the block is appended to the existing block list
  • When a GameEnd message is received, the game is marked as completed and is displayed on the dashboard

All of the above operations are executed in the request thread, so the request is blocked until the operation completes. Looking at the response-time diagrams, about 60% of responses arrived after more than 1200 ms. That is an unacceptable delay, which makes fluid gameplay impossible.
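The blocking flow can be reduced to this in-memory sketch – the class and its maps are assumptions standing in for the real MongoDB repository:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// In-memory sketch of the blocking variant; in the real application each
// method performs a (slow) MongoDB round trip on the request thread.
public class BlockingGameHandler {
    private final Map<String, List<String>> games = new HashMap<>();
    private final Set<String> completed = new HashSet<>();

    public void onGameStart(String gameGuid) {
        games.put(gameGuid, new ArrayList<>());          // create and persist Game
    }

    public void onNextLevel(String gameGuid, String block) {
        games.get(gameGuid).add(block);                  // read-modify-write per request
    }

    public void onGameEnd(String gameGuid) {
        completed.add(gameGuid);                         // game appears on the dashboard
    }

    public boolean isCompleted(String gameGuid) {
        return completed.contains(gameGuid);
    }
}
```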

Non-blocking API and Thread Pool

A non-blocking API is similar to a blocking one, but instead of executing all the operations in the request thread, a thread pool with 10 threads is used to handle the GameStart, NextLevel and GameEnd blocks. This improved the response time significantly: more than 75% of responses were received in less than 800 ms.
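The core of this variant can be sketched as follows – the class name is an assumption; the point is that the request thread only submits the work and returns, while the slow persistence call runs on one of the 10 pool threads:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the thread-pool variant.
public class ThreadPoolGameHandler {
    // 10 worker threads take over the MongoDB operations
    private final ExecutorService pool = Executors.newFixedThreadPool(10);

    public Future<?> handleAsync(Runnable persistOperation) {
        // submit returns immediately; the request thread is not blocked
        return pool.submit(persistOperation);
    }
}
```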

Kafka Streams

The implementation with Kafka uses the request thread only to send data to Kafka (which is super fast); all data processing is executed in the background, using Kafka topologies. This lets us do as little as possible in the request thread, and it turns out to be the fastest solution: almost 100% of the requests completed in less than 800 ms.

The Response Time Distribution diagram shows that 50% of responses completed almost immediately.

A blocking API is clearly not the best choice, as the application needs to respond as quickly as possible. It did work, but the non-blocking API proved much more efficient. Most importantly, the Kafka-based application handles almost half of the requests immediately, with almost 100% of requests completing within 800 ms.
