Wanna build a fast, fraud-safe game? Dive into Kafka!

Wojciech Marusarz - August 5, 2020

Kafka is a data bus with a persistence layer. It allows multiple producers to write data and multiple consumers to read it – efficiently and reliably. As described in the previous article, Kafka helped us meet the challenges of the one-click block game we created and prove its usability. Let’s dive into the code, learn how to use Kafka in a Spring Boot application, and compare different approaches to designing a responsive API.

Kafka setup

Kafka is a separate process running on your server, so it needs to be installed independently of the application. It comes with a whole ecosystem: ZooKeeper, brokers, a schema registry, a CLI, Control Center, etc. If the basic functionality of Kafka covers your requirements, you can install the community version running on Docker. You need just two services: ZooKeeper and a broker.
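If you prefer not to clone a whole quickstart repository, a minimal docker-compose file with just those two services could look roughly like this (a sketch only – image tags, ports, and environment values are assumptions, not taken from the article):

```yaml
# Minimal sketch: single ZooKeeper + single broker for local development
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.1
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  broker:
    image: confluentinc/cp-kafka:5.5.1
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      # single-broker setup, so internal topics cannot be replicated
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

With this file in place, `docker-compose up -d` starts only the two services the application actually needs.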

For development purposes you can use the docker-compose version with Control Center, which can be found in Confluent’s cp-all-in-one repository. The Control Center service provides a readable UI that helps you monitor Kafka.

To run Kafka in Docker, just execute:

git clone https://github.com/confluentinc/cp-all-in-one.git

then go to the cp-all-in-one/cp-all-in-one (or cp-all-in-one/cp-all-in-one-community) directory and run docker-compose:

docker-compose up -d --build

Once all images are pulled and services are started, the console should return the following:

Creating zookeeper ... done
Creating broker    ... done
Creating schema-registry ... done
Creating rest-proxy      ... done
Creating connect         ... done
Creating ksqldb-server   ... done
Creating ksqldb-cli      ... done
Creating ksql-datagen    ... done
Creating control-center  ... done

To ensure that everything is up and running, you can also run the command:

docker ps

That’s it – Kafka is ready to handle our incoming and outgoing messages.

Spring Boot setup

To use Kafka in a Spring Boot application we just need to add dependencies to build.gradle.

  • spring-kafka – add this dependency directly in Spring Initializr
  • kafka-streams – add this dependency directly in Spring Initializr
  • kafka-json-serializer – allows sending messages as JSON, so we can avoid writing our own serializers and deserializers

dependencies {
   // … [Spring dependencies](https://nexocode.com/blog/posts/spring-dependencies-in-gradle/)
   implementation 'org.springframework.kafka:spring-kafka'
   implementation 'org.apache.kafka:kafka-streams'
   implementation 'io.confluent:kafka-json-serializer:5.5.1'
}

To start working with Kafka, we just need to configure the connection parameters to the Kafka brokers – and that’s all.

public Properties kafkaProperties() {
   Properties props = new Properties();
   props.setProperty(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
   props.setProperty(ACKS_CONFIG, "all");
   props.setProperty(VALUE_SERIALIZER_CLASS_CONFIG, KafkaJsonSerializer.class.getName());
   return props;
}

If the application and Kafka are running on the same server, or we are developing on a local machine, we can use localhost:9092 as the broker address. Otherwise, it has to be the address of the server where the Kafka broker is installed. With Kafka installed and running, and this basic configuration in place, the application is able to connect to Kafka.
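The idea of not hard-coding the broker address can be sketched in plain Java. This is an illustrative variant, not the article’s actual code: the helper’s parameter and the environment-variable name are assumptions, and the property keys are written out as the literal strings behind Kafka’s `ProducerConfig` constants.

```java
import java.util.Properties;

public class KafkaConfig {
    // String values behind Kafka's ProducerConfig constants
    static final String BOOTSTRAP_SERVERS_CONFIG = "bootstrap.servers";
    static final String ACKS_CONFIG = "acks";

    // Hypothetical variant of kafkaProperties() that takes the broker
    // address as a parameter instead of hard-coding localhost:9092
    public static Properties kafkaProperties(String bootstrapServers) {
        Properties props = new Properties();
        props.setProperty(BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.setProperty(ACKS_CONFIG, "all");
        return props;
    }

    public static void main(String[] args) {
        // Resolve the address from the environment, falling back to localhost
        String servers = System.getenv().getOrDefault("KAFKA_BOOTSTRAP", "localhost:9092");
        System.out.println(kafkaProperties(servers).getProperty(BOOTSTRAP_SERVERS_CONFIG));
    }
}
```

This way the same build can point at localhost during development and at a remote broker in production.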

System architecture

As mentioned in the previous article, the whole application is divided into three separate components:

  • Data bus – handles each incoming message (block) and saves it to the Kafka blocks topic.
  • Monitoring current game progress – uses a Kafka Streams topology: a background thread reads each block from the blocks topic, groups the blocks by gameGuid, and builds the current game progress. When a game is completed, it is saved to the Kafka games topic.
  • Game post-processing – uses another Kafka Streams topology to validate each game and persist the best results. A background thread reads finished games from the games topic and validates the game results, including block coordinates, timestamps, and the achieved score. If the game is valid, it is persisted in MongoDB.

Let’s see how each component works.

Data bus

Incoming messages (blocks) are consumed by a simple REST controller. Three types of events are handled: GameStart, NextLevel, and GameEnd.

public Mono<NextLevelInfo> startGame(@RequestBody @Valid GameStart level) {
   return gameService.startGame(level);
}

public NextLevelInfo nextLevel(@RequestBody @Valid NextLevel level) {
   return gameService.nextLevel(level);
}

public NextLevelInfo completeGame(@RequestBody @Valid GameEnd level) {
   return gameService.completeGame(level);
}

All that needs to be done here is to persist the message in Kafka – which is super fast – and return an immediate response to the UI application. To persist the data, we create a Kafka producer and send the data through it:

KafkaProducer<String, NextLevel> levelsProducer = new KafkaProducer<>(kafkaProperties());

public NextLevelInfo nextLevel(NextLevel level) {
   levelsProducer.send(new ProducerRecord<>("blocks", level.getGameGuid(), level));
   return convertToNextLevel(level);
}

Messages are waiting in Kafka, ready to be handled by the next component.

Monitoring current game progress

To build the current game progress, we use a separate Kafka Streams topology, which reads all messages from the blocks topic, groups them by gameGuid, aggregates the blocks into a GameAggregate, and detects whether the game isCompleted. If so, it persists the game to the games topic.

StreamsBuilder streamsBuilder = new StreamsBuilder();
streamsBuilder.stream("blocks", Consumed.with(Serdes.String(), CustomSerdes.BlockModel()))
   .groupByKey()
   .aggregate(GameAggregate::new, (gameGuid, newBlock, gameAggregate) -> {
      // aggregation logic (appending the new block) omitted here
      return gameAggregate;
   }, Materialized.with(Serdes.String(), CustomSerdes.GameAggregate()))
   .toStream()
   .filter((gameGuid, gameAggregate) -> gameAggregate.isComplete())
   .to("games", Produced.with(Serdes.String(), CustomSerdes.GameAggregate()));

Topology topology = streamsBuilder.build();
KafkaStreams kafkaStreams = new KafkaStreams(topology, props);

To create the topology, we use StreamsBuilder. Completed games are persisted in a separate Kafka topic, ready to be handled by the next component.
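The article does not show the GameAggregate model itself. A minimal sketch of what such an aggregate could look like follows – the field and method names (other than isComplete, which the topology calls) are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the aggregate built by the topology; the real
// model is not shown in the article, so fields and names are assumptions.
public class GameAggregate {
    private final List<String> blockTypes = new ArrayList<>();
    private boolean completed;

    // Called from the aggregator for every block read from the blocks topic
    public GameAggregate addBlock(String blockType) {
        blockTypes.add(blockType);
        if ("GameEnd".equals(blockType)) {
            completed = true; // a GameEnd block marks the game as finished
        }
        return this;
    }

    // The filter in the topology forwards only completed games
    public boolean isComplete() {
        return completed;
    }

    public int blockCount() {
        return blockTypes.size();
    }
}
```

Returning `this` from addBlock matches how the aggregator lambda returns the updated aggregate on each incoming block.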

Game post-processing

All that needs to be done here is to read completed games from the games topic, validate them, and persist them in MongoDB. All of this is done by another Kafka Streams topology running in the background. Game validation is done by the gameAggregateHandler.

StreamsBuilder streamsBuilder = new StreamsBuilder();
// stream definition (reading from the games topic, validating with
// gameAggregateHandler and persisting to MongoDB) omitted here

Topology topology = streamsBuilder.build();
KafkaStreams kafkaStreams = new KafkaStreams(topology, props);

We wanted to create a reactive application that stays resilient under a high traffic load. Thanks to Kafka Streams, which guarantees that within a single topology only one event is handled at a time, the application was able to handle all incoming messages quickly and reliably.


To validate whether Kafka was the right choice, we verified three cases:

  • Blocking API
  • Non-blocking API and Thread Pool
  • Kafka Streams

Tests were performed using the Gatling tool. During the test, about 15k requests were sent. It was not a real-life scenario, but it was designed to highlight the differences between the approaches. Let’s see the results.

Blocking API

Blocking API has the easiest implementation, which works in a simple manner:

  • When a GameStart message is received, a new Game model is created and persisted in MongoDB
  • When a new NextLevel block with a game GUID is received, the game model is read from MongoDB and the block is appended to the existing blocks list
  • When a GameEnd message is received, the game is marked as completed and is displayed on the dashboard

All of the above operations are executed in the request thread, so the request is blocked until the operation completes. Looking at the response-time diagrams, about 60% of responses arrive after 1200 ms. Not great – it is an unacceptable delay that makes fluid gameplay impossible.
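The effect can be illustrated with a toy handler; the class, method, and the in-memory stand-in for MongoDB are all hypothetical, but the shape is the same: the read-append-write runs entirely on the request thread, which cannot respond until it finishes.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the blocking variant: every operation, including the
// store round trip, happens on the calling (request) thread.
public class BlockingGameHandler {
    // Stand-in for MongoDB; in the real application this is a remote call
    private final Map<String, List<String>> store = new HashMap<>();

    public int nextLevel(String gameGuid, String block) {
        List<String> blocks = store.computeIfAbsent(gameGuid, k -> new ArrayList<>());
        blocks.add(block); // append to the existing blocks list
        return blocks.size(); // the response is only sent after the write completes
    }
}
```

With a remote database, each such call adds the full database latency to the response time, which is exactly what the Gatling results show.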

Non-blocking API and Thread Pool

A non-blocking API is similar to a blocking one, but instead of executing all the operations in the request thread, a thread pool with 10 threads is used to handle the GameStart, NextLevel, and GameEnd blocks. This improved response times significantly: more than 75% of responses were received in less than 800 ms.
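A sketch of that thread-pool variant, with a fixed pool of 10 workers as in the article (the class and method names are assumptions): the request thread only submits the task and is free again immediately.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Non-blocking variant: work is handed off to a fixed pool of 10 threads,
// so the request thread returns without waiting for persistence.
public class PooledGameHandler {
    private final ExecutorService pool = Executors.newFixedThreadPool(10);

    // Hypothetical handler: the returned Future completes once the
    // background persistence work is done.
    public Future<String> nextLevel(String gameGuid) {
        return pool.submit(() -> {
            // stand-in for the MongoDB read-append-write
            return "processed:" + gameGuid;
        });
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

The pool size caps concurrency at 10 background operations, which is why response times improve but still degrade once the pool saturates under load.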

Kafka Streams

The implementation with Kafka uses the request thread only to send data to Kafka (which is super fast); all data processing is executed in the background, using Kafka topologies. This keeps the work done in the request thread to a minimum, and it turns out to be the fastest solution: almost 100% of the requests completed in less than 800 ms.

The Response Time Distribution diagram shows that 50% of responses completed almost immediately.

A blocking API is not the best choice when the application must respond as quickly as possible. It did work, but the non-blocking API proved much more efficient. Most importantly, the application that uses Kafka handles almost half of the requests immediately, and almost 100% of requests complete within 800 ms.

About the author

Wojciech Marusarz

Software Engineer

Wojciech enjoys working with small teams where the quality of the code and the project's direction are essential. In the long run, this allows him to have a broad understanding of the subject, develop personally and look for challenges. He deals with programming in Java and Kotlin. Additionally, Wojciech is interested in Big Data tools, making him a perfect candidate for various Data-Intensive Application implementations.

This article is a part of the Zero Legacy series (36 articles).
