RSocket can make service communication more responsive

Piotr Kubowicz
March 24, 2020

RSocket is a new reactive network protocol backed by the Linux Foundation and the maintainers of Spring Framework. It claims to be specially designed to address the shortcomings of HTTP as a means of connecting applications. Let’s see in which areas you can expect improvement and what kind of systems will benefit significantly from using RSocket.

This article will cover:

  • visualisations showing why streaming-based communication is important
  • using Server-Sent Events for communication between microservices
  • differences in the handling of request cancellation
  • a short overview of RSocket protocol itself
  • the impact of introducing RSocket or SSE to a typical application based on Spring Framework

Reactive stack is not inherently faster #

You may be tempted to think that if you rewrite a plain old blocking application to a fully reactive stack, the communication will automatically get faster. It’s not that simple.

Using something like Spring WebFlux will allow your application to use server resources more effectively, i.e. handle more concurrent requests on the same hardware. Requests may complete faster because the server won’t be as overloaded as with a blocking stack. The communication itself, however, won’t change that much: while you will have a reactive flow of data inside your application code, the transmission over the network won’t behave in a very responsive way.

If a client requests multiple elements from a server with a REST HTTP request, the client will need to wait until the server ‘collects’ all the results, no matter how reactive the client and server code is. It can be visualised as follows:

The server does not send anything until it collects all 20 results, even though the code is reactive

This does not look very effective. If you want to shorten the client wait time, you need to change the communication method.

Streaming responses #

A more responsive way would be to send results one by one, as soon as each becomes available. In such a scenario, the client can start processing responses sooner. You can achieve this using the RSocket protocol:

With RSocket the flow is smooth and everything finishes faster

RSocket isn’t the only way to achieve such an improvement. HTTP with Server-Sent Events will behave very similarly.

SSE appeared as a mechanism for streaming updates from the server to the browser, but it can be applied to server-to-server communication as well. Since it is a text protocol, you can observe what the communication looks like.

While a conventional HTTP REST endpoint sends one big JSON array in an “all or nothing” manner, an SSE endpoint won’t return an array. Instead, it will send individual elements as JSON objects prefixed with ‘data:’:
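The framing can be illustrated with a tiny helper (a sketch of the wire format only, not Spring’s actual encoder):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the SSE wire format: each element becomes its own event,
// a "data:" line terminated by a blank line, instead of one JSON array.
public class SseFraming {
    static String toSse(List<String> jsonElements) {
        return jsonElements.stream()
                .map(json -> "data:" + json + "\n\n")
                .collect(Collectors.joining());
    }

    public static void main(String[] args) {
        System.out.print(toSse(List.of("{\"id\":1}", "{\"id\":2}")));
        // data:{"id":1}
        //
        // data:{"id":2}
    }
}
```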

cURL gradually receiving data sent with Server-Sent Events

Handling cancellation #

So far we have considered improving the situation from the client’s point of view. However, using a proper communication method will not only shorten the wait for data but may also save server resources.

There are interaction types where a client queries a server but does not need all returned results. Let’s look at a very simplified case:

First, consider what happens when ‘plain’ HTTP is used. Here a reactive streams library (Project Reactor) will take care of canceling the reactive stream originating from the network connection to the server. This frees some resources on the client side, but none on the server side — the cancellation signal arrives too late, after the server has already finished all the work:

The server produces all 50 results even though the client cancels the request after consuming the 9th result.

In contrast, when you stream responses using RSocket or Server-Sent Events, the server does not waste time and resources producing results that are not needed:

RSocket transmits the client’s cancellation and the server produces just 9 results.

This may result in a database connection being released sooner, less memory allocated for results to be sent to the client (and later garbage-collected) and so on.

RSocket protocol #

RSocket aims to bring reactive streams semantics to both server-server and server-browser communication and allow for performance optimizations not possible with HTTP.

There are three transport options for RSocket: TCP is the typical choice for server-to-server communication, WebSocket is more useful for server-to-browser communication, and Aeron (UDP-based) can be used where throughput is really critical.

RSocket offers four interaction types. ‘Request-response’ and ‘request-stream’ are similar to HTTP REST interactions, where a server returns either one result or a stream of results to a client. ‘Fire-and-forget’ involves no response at all, which allows for certain optimizations. The ‘channel’ interaction allows for bidirectional communication.

The protocol behaves like a reactive stream because its frames carry a dedicated ‘request’ field, which holds client demand for elements and is the equivalent of a Subscription.request() call in reactive streams. When a client starts communication by sending a REQUEST_STREAM or REQUEST_CHANNEL frame, that frame carries the initial demand. A server is allowed to return only as many results as the sum of the ‘request’ values it has received so far. A client can increase the number by sending a REQUEST_N frame.

The client asks for 3 elements when sending the requests, and later asks for 7 more. The interaction ends with a payload frame including 'complete' bit.
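The exchange above can be sketched with the JDK’s built-in java.util.concurrent.Flow types — a toy synchronous publisher that honours demand accounting, not the actual RSocket wire protocol:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Flow;

// Toy sketch of RSocket-style demand accounting using the JDK Flow API:
// the 'server' may only emit as many elements as the total demand signalled
// so far, mirroring REQUEST_STREAM(3) followed by REQUEST_N(7).
public class DemandDemo {

    // A synchronous publisher producing integers 0..total-1, bounded by demand.
    static Flow.Publisher<Integer> boundedProducer(int total) {
        return subscriber -> subscriber.onSubscribe(new Flow.Subscription() {
            int emitted = 0;
            long demand = 0;
            boolean emitting = false;

            @Override
            public void request(long n) {          // REQUEST_STREAM / REQUEST_N
                demand += n;
                if (emitting) return;              // demand added mid-emission
                emitting = true;
                while (demand > 0 && emitted < total) {
                    demand--;
                    subscriber.onNext(emitted++);  // one payload frame each
                }
                if (emitted == total) subscriber.onComplete();
                emitting = false;
            }

            @Override
            public void cancel() { demand = 0; }   // CANCEL frame: stop producing
        });
    }

    static List<Integer> run() {
        List<Integer> received = new ArrayList<>();
        boundedProducer(10).subscribe(new Flow.Subscriber<>() {
            Flow.Subscription sub;
            @Override public void onSubscribe(Flow.Subscription s) {
                sub = s;
                s.request(3);                      // initial demand: 3 elements
            }
            @Override public void onNext(Integer item) {
                received.add(item);
                if (received.size() == 3) sub.request(7);  // ask for 7 more
            }
            @Override public void onError(Throwable t) { }
            @Override public void onComplete() { }
        });
        return received;
    }

    public static void main(String[] args) {
        System.out.println(run());  // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```

Only 3 elements flow until the subscriber signals more demand; the remaining 7 arrive after the second request, and completion follows the last payload.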

With RSocket you can make the server produce data at the same cadence as the client requests it. This is because ‘request’ subscription calls in the reactive stream on the client side are transmitted in RSocket frames and trigger the same ‘request’ subscription calls on the server side.

request(6) call on the client side causes request(6) call on the server side

RSocket has some extra features like allowing a server to stop clients from sending too many requests (using LEASE frame) or allowing a client to resume transmission — but both are optional and may not be present in all implementations.

The Java implementation is closely coupled with the Project Reactor library, exposing Reactor’s implementations of the reactive stream types:
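In abridged form, the core io.rsocket.RSocket interface looks roughly like this (the real interface has default implementations and an additional metadata method):

```java
// Abridged sketch of rsocket-java's central interface: every interaction
// type is expressed with Reactor's Mono/Flux rather than raw callbacks.
public interface RSocket {
    Mono<Void> fireAndForget(Payload payload);
    Mono<Payload> requestResponse(Payload payload);
    Flux<Payload> requestStream(Payload payload);
    Flux<Payload> requestChannel(Publisher<Payload> payloads);
}
```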

The Java implementation is also based on Netty and makes use of its zero-copy decoding capabilities.

RSocket is still a new thing. Its development started around 2015. In 2018 Spring Framework developers decided to add support for RSocket, and the work finished only in mid-2019. The protocol version has been 0.2 since 2017, which according to its creators should be considered a ‘1.0 Release Candidate’. The JavaScript implementation remains at version 0.0.19. There is not as much documentation and troubleshooting advice as for technologies that have been on the market for a long time.

Changing code to support streaming #

There is a huge difference between introducing Server-Sent Events to a project and doing the same with RSocket.

SSE is little more than a long HTTP response, with a different MIME type and a different way of encoding elements. If your application is implemented in Spring WebFlux and has a ‘conventional’ endpoint like:
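For instance (the Item type and itemService are illustrative names, not from the original project):

```java
// A conventional reactive endpoint: the whole Flux is serialized
// as a single JSON array in the response.
@GetMapping("/items")
public Flux<Item> items() {
    return itemService.findAll();
}
```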

you can expose an SSE-enabled one using a slightly different annotation, and the framework will take care of the rest:
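A sketch of such an endpoint, with the same illustrative names — the produces attribute switches the response to the text/event-stream MIME type:

```java
// The same handler, but elements are now written out one by one
// as Server-Sent Events instead of a single JSON array.
@GetMapping(path = "/items", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<Item> items() {
    return itemService.findAll();
}
```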

Similarly, changes to the client code aren’t significant. While a normal reactive HTTP client call looks like this:
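For example, with Spring’s WebClient (again with an illustrative Item type):

```java
// Plain reactive HTTP call: the body is decoded as a Flux of elements.
Flux<Item> items = webClient.get()
        .uri("/items")
        .retrieve()
        .bodyToFlux(Item.class);
```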

SSE requires only a bit of additional unpacking:
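One possible shape, unwrapping Spring’s ServerSentEvent wrapper (illustrative names as before):

```java
// SSE variant: request the event-stream MIME type and unwrap each
// ServerSentEvent to get at the payload it carries.
Flux<Item> items = webClient.get()
        .uri("/items")
        .accept(MediaType.TEXT_EVENT_STREAM)
        .retrieve()
        .bodyToFlux(new ParameterizedTypeReference<ServerSentEvent<Item>>() {})
        .map(ServerSentEvent::data);
```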

If you look at a simple counterpart of HTTP endpoint implemented in RSocket, it may seem similar:
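A sketch using spring-messaging’s @MessageMapping (the route name and types are illustrative):

```java
// RSocket responder: the handler is bound to a route, not a URL path.
@MessageMapping("items")
public Flux<Item> items() {
    return itemService.findAll();
}
```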

Even so, RSocket works quite differently from HTTP and is designed as a message-based protocol. Spring Framework supports RSocket through the spring-messaging library. Resources exposed with RSocket aren’t endpoints with paths; they are ‘routes’. The code may look similar, but it uses different annotations, and the machinery handling those annotations is very different.

As we saw at the beginning, typical tools like cURL can be used to peek into SSE communication. RSocket hasn’t grown such a mature ecosystem yet. Fortunately, there is a command-line tool called rsc that can be treated as a cURL equivalent.
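For example, assuming an RSocket server listening locally on port 7000 and exposing an ‘items’ route (address and route are hypothetical):

```shell
# Open a request-stream interaction against the 'items' route over TCP
rsc --stream --route items tcp://localhost:7000
```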

Conclusion #

Reactive web frameworks allow writing less resource-hungry clients and servers. However, if you want to speed up end-to-end processing, you need to couple those frameworks with network protocols that support streaming, so that clients can start processing right from the first element, and not after receiving all the results at once, as is the case with traditional HTTP.

Currently, there are two solutions: Server-Sent Events and RSocket. Both support streaming results as they become available and canceling an ongoing request that is no longer needed.

SSE is easier to introduce, as it is just a way of using HTTP. Code handling it is not much different from code using ‘traditional’ HTTP communication. Yet it may be hard to achieve really high performance with it: there is no backpressure control and no way to use a compact and low-overhead encoding. For example, if you want to encode data with Protocol Buffers instead of JSON, SSE will transmit the payload encoded with Base64, as it is a text-based protocol.

RSocket gives you more options to tune for high performance, as it was designed for efficiency and does not need to follow HTTP semantics. It transparently transmits backpressure between client and server. It is a binary protocol, so sending Protocol Buffers-based payloads won’t get any text encoding penalty as with Base64 and SSE. On the other hand, it is a new technology, not yet stable and well-established. Code handling RSocket communication won’t look like the code using HTTP. Consider it if your system will be put under heavy load and your team is ready to put much effort into learning new things.

Source code for the demonstrated client-server interactions can be found at
