Blocking code in Ratpack

Ratpack is built on top of Netty and is designed for asynchronous I/O. This design is what gives Ratpack its high performance and low resource overhead. As a result, when building applications with Ratpack you will be using an asynchronous programming model, and you need to understand the difference between blocking and non-blocking code. If you don't recognise when you are writing blocking code, and know how to handle it in Ratpack, it can have dire consequences for your application's performance.

What is blocking code?

One of the most common forms of blocking code in your applications will be blocking I/O. This is where you ask a resource, like a database or a web service, for data and then wait, or "block", until you receive it, i.e. until the I/O is complete. What you are blocking is the thread the code is executing on: even though this thread is not actually doing anything, it cannot be used to do anything else. This causes system resources to sit idle, and when you have many blocking I/O operations happening it can mean the processor spends most of its time doing nothing but waiting for I/O to complete.

Some examples of blocking I/O are:

  • Accessing a database
  • Using REST/Web service APIs
  • Using messaging technologies like JMS or AMQP
  • Sending emails
  • Opening sockets directly yourself
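To make that concrete, here is a small, self-contained Java sketch (the names are illustrative, nothing here is Ratpack API) in which a reader thread blocks on an in-process pipe in exactly the way it would block on a socket or a database driver — the thread is parked, doing no useful work, until the data finally arrives:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.UncheckedIOException;

public class BlockingReadDemo {
    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);

        Thread reader = new Thread(() -> {
            try {
                long start = System.nanoTime();
                int b = in.read();  // blocks this thread until a byte arrives
                long waitedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.println("read " + b + " after ~" + waitedMs + " ms blocked");
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        reader.start();

        Thread.sleep(500);  // simulate a slow remote resource
        out.write(42);      // the "I/O" finally completes
        reader.join();
    }
}
```

While the reader thread sits inside `in.read()` it cannot service anything else — which is harmless with one spare thread, but disastrous when that thread is the one serving all your requests.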

For some forms of I/O, async drivers and clients are starting to emerge that allow non-blocking I/O. For MongoDB there is mongodb-async-driver, and there is some early activity around async drivers for MySQL and Postgres. Ratpack doesn't yet have its own async HTTP client, but there is an outstanding task for this.

Other forms of blocking code:

  • CPU intensive operations that block by virtue of the fact they take a long time to run.
  • Thread.sleep
  • Object.wait()
  • CountDownLatch.await() or any other blocking operation from java.util.concurrent
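The last two items are easy to demonstrate. In this minimal Java sketch a worker thread parks on a CountDownLatch; while it waits, the JVM reports it as WAITING — it holds a thread but performs no work:

```java
import java.util.concurrent.CountDownLatch;

public class LatchBlockDemo {
    public static void main(String[] args) throws Exception {
        CountDownLatch latch = new CountDownLatch(1);

        Thread worker = new Thread(() -> {
            try {
                latch.await();  // parks this thread; it can do nothing else
                System.out.println("latch released, worker resumes");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        Thread.sleep(200);  // give the worker time to reach await()
        // The worker is blocked, consuming a thread without doing any work
        System.out.println("worker state while waiting: " + worker.getState());
        latch.countDown();
        worker.join();
    }
}
```

This is the same state you would see in JConsole for a request-handling thread stuck behind blocking code.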

Why is blocking bad in Ratpack?

Unlike traditional synchronous programming models where there is a thread per request, a Ratpack application has a single thread that services ALL requests. It is therefore extremely important that you do not tie up this thread with blocking I/O or other blocking code, as you will be blocking all other incoming requests from being processed!

OK, I lied: it's not necessarily a single thread, but it's easiest to think of it that way. How many "application handling" threads there are is actually up to you. You can set a Ratpack property called mainThreads, or leave it at its default value of 0, in which case Netty will determine the optimal number of threads. At the time of writing this is 2 times the number of available processors. You can see how many application handling threads are in use by running JConsole and inspecting the Threads tab; all Ratpack application handling threads have the prefix "ratpack-group". For a more detailed understanding of how Netty works I recommend Netty in Action. The first chapter is available for free and has an excellent write-up of the Netty architecture and of blocking versus non-blocking I/O in the context of receiving incoming requests.
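As a quick sanity check of that default, the calculation Netty used at the time can be reproduced in plain Java (this just shows the arithmetic, it is not Netty code):

```java
public class DefaultThreadCount {
    public static void main(String[] args) {
        int processors = Runtime.getRuntime().availableProcessors();
        // Netty's default event loop size at the time of the post: 2 * processors
        int defaultMainThreads = 2 * processors;
        System.out.println("processors=" + processors
                + ", default main threads=" + defaultMainThreads);
        System.out.println("default is double the processors: "
                + (defaultMainThreads == processors * 2));
    }
}
```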

To show the devastating effect of blocking the main thread in Ratpack, I have prepared an exaggerated demo. I have an application with a handler that contains some blocking code; in this case it sleeps the thread for 10 seconds. The application is also configured with just 1 application handling thread, to simulate a single core on my multi-core machine.

ratpack {
  handlers {
    get("blocking") {
      sleep 10000  // block the request thread for 10 seconds
      render "completed blocking request"  // response body illustrative; original listing was truncated
    }
  }
}
I have a JMeter test that sends 100 concurrent requests and records the results. Running the test shows that each request takes 10 seconds longer than the request before it. Which makes sense, right? All the requests are sent at once; the first request sleeps for 10 seconds and blocks everything else from being processed. When it wakes and completes, the next request is picked up and its processing begins, but it has already been waiting for 10 seconds, so it takes 10 seconds plus the time it spent blocked. In general, each request's total time is 10 seconds multiplied by its position in the queue.

[Image: blocking-results]
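That queueing arithmetic can be spelled out in a few lines of Java — with a single handling thread, the i-th request (1-based) completes after i times the service time:

```java
public class SingleThreadQueueing {
    public static void main(String[] args) {
        int serviceSeconds = 10;
        // One handling thread means requests are served strictly one after
        // another, so request i finishes after i * serviceSeconds.
        for (int i : new int[]{1, 2, 100}) {
            System.out.println("request " + i + " completes after "
                    + (i * serviceSeconds) + " s");
        }
    }
}
```

Which matches the JMeter results: the 100th request takes a full 1000 seconds.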

This is obviously exaggerated to demonstrate a point, but it's clear that if you're not careful when dealing with blocking code you can cripple your application's performance. Fortunately, Ratpack has a nice way of dealing with blocking code.

How to handle blocking code in Ratpack

Ratpack has a background DSL that allows you to execute blocking code off the main thread. More details on its use can be found in the Ratpack manual and the API docs.

Simplistically, Ratpack has two task executor services: a main executor for application handling tasks and a blocking executor for blocking tasks. The blocking executor has its own thread pool, which creates new threads as needed but reuses previously constructed threads when they are available. You can see these threads easily in JConsole, as they are all prefixed with "ratpack-blocking-worker-". The blocking executor is a ListeningExecutorService: when Ratpack submits your blocking code to it for execution, it runs the code on one of the threads from its pool and returns a ListenableFuture. A ListenableFuture allows you to register call-backs with it and to specify which executor service runs each call-back when the ListenableFuture (the blocking code) completes. Ratpack adds an "onSuccess" call-back that executes back on the main executor and allows the original request to complete. More info on ListenableFuture can be found in the Guava documentation.

[Image: blocking-example]
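Here is a plain-Java sketch of that two-executor hand-off. To keep it self-contained I use the JDK's CompletableFuture in place of Guava's ListenableFuture, and names like slowDatabaseCall are made up — this shows the shape of the mechanism, not Ratpack's actual internals:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TwoExecutorSketch {
    // Stand-ins for Ratpack's executors: one "main" handling thread, plus a
    // cached pool for blocking work (new threads as needed, reused when idle).
    static final ExecutorService main = Executors.newSingleThreadExecutor();
    static final ExecutorService blocking = Executors.newCachedThreadPool();

    static void handleRequest() {
        CompletableFuture
            .supplyAsync(TwoExecutorSketch::slowDatabaseCall, blocking) // off the main thread
            .thenAcceptAsync(result ->                                  // "onSuccess" hops back
                System.out.println("rendered: " + result), main);       // to the main executor
    }

    static String slowDatabaseCall() {
        try { Thread.sleep(300); }  // pretend blocking I/O
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "row-1";
    }

    public static void main(String[] args) throws Exception {
        handleRequest();
        blocking.shutdown();
        blocking.awaitTermination(5, TimeUnit.SECONDS);
        main.shutdown();
        main.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

While the blocking call runs on the worker pool, the main thread is free to pick up other requests; the response is only rendered once the call-back hops back onto the main executor.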

To demonstrate this, my example also has a non-blocking handler. It also sleeps for 10 seconds, but this time off the main thread, using the background method.

ratpack {
  handlers {
    get("non-blocking") {
      background {
        sleep 10000  // still blocks, but on a background worker thread
      } then {
        render "completed non-blocking request"  // response body illustrative; original listing was truncated
      }
    }
  }
}
This time running the same JMeter test sending 100 concurrent requests shows that each request completes in 10 seconds.

[Image: non-blocking-results]

Another option

In the first example there is a single thread to service all incoming requests, but it is blocked from doing so. Another option would be to allocate more threads to process incoming requests, giving thread-per-request behaviour. Setting the mainThreads property to 100 and running the test again now results in an average response time of 10 seconds.

[Image: blocking-100-threads-results]
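The thread-per-request effect is easy to simulate in plain Java: give the pool as many threads as there are requests and every request "blocks" concurrently, so total wall time is roughly one service time rather than requests-times-service-time (scaled down here to 100 ms sleeps so it runs quickly):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPerRequestDemo {
    public static void main(String[] args) throws Exception {
        int requests = 100;
        ExecutorService pool = Executors.newFixedThreadPool(requests); // one thread per request
        CountDownLatch done = new CountDownLatch(requests);

        long start = System.nanoTime();
        for (int i = 0; i < requests; i++) {
            pool.submit(() -> {
                try { Thread.sleep(100); }  // every request blocks concurrently
                catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                done.countDown();
            });
        }
        done.await();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // ~100 ms total, not 100 requests * 100 ms = 10 000 ms done serially
        System.out.println("all " + requests + " requests finished in ~" + elapsedMs + " ms");
        System.out.println("faster than serial: " + (elapsedMs < requests * 100L));
        pool.shutdown();
    }
}
```

The cost, of course, is 100 parked threads and their stacks — which is exactly the resource overhead the asynchronous model avoids.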

Although this is a valid option, you do trade performance for readability. It isn't visible in this small example, but as you scale up with real-life workloads you will see a big difference. Depending on your application and expected load, however, it might be an option worth considering.


Ideally you won't have any blocking code in your Ratpack applications at all, so as to maximise performance. However, if you do have a requirement for blocking code, handling it in Ratpack is easy; you just need to recognise when you are blocking. You don't have to run your blocking code off the main thread, but if you don't you will trade performance for readability. Either way, the most important thing is to profile your application and load test it to ensure you are getting the performance you expect.

You can get the code, including the JMeter tests, on GitHub.

Additional Resources

Other frameworks built on top of Netty also provide good background detail on this concept.