How do you actually "manage" the maximum number of web threads with Spring 5 reactive programming?

With the classic Tomcat approach, you give your server a maximum number of threads it can use to process web requests. With the reactive programming paradigm and Reactor in Spring 5, we can scale vertically while making sure we block as little as possible.

It seems to me that this makes the application less manageable than the classic Tomcat approach, where you simply define the maximum number of concurrent requests. With a fixed maximum number of concurrent requests, it is easy to estimate the maximum memory your application will need and scale it accordingly. With Spring 5 reactive programming this seems more complicated.
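To make that estimation argument concrete, here is the back-of-envelope arithmetic for the thread-per-request model. The numbers are illustrative: Tomcat's default maxThreads of 200 and a typical 1 MB (-Xss1m) thread stack.

```java
public class ThreadMemoryEstimate {

    // Worst-case memory spent on request-thread stacks alone.
    static int worstCaseStackMb(int maxThreads, int stackSizeKb) {
        return maxThreads * stackSizeKb / 1024;
    }

    public static void main(String[] args) {
        int maxThreads = 200;   // Tomcat's default maxThreads
        int stackSizeKb = 1024; // a typical -Xss1m thread stack
        System.out.println("worst-case thread-stack memory: "
                + worstCaseStackMb(maxThreads, stackSizeKb) + " MB"); // prints 200 MB
    }
}
```

With an event loop the thread count no longer tracks the request count, so this simple bound disappears.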

When I talk about these new technologies with sysadmin friends, they respond with concern about applications running out of RAM, or even of OS-level threads. So how can we deal with this better?

2 answers


No blocking I/O at all

First of all, if you don't have any blocking operations, you shouldn't worry about how many threads to provide for handling concurrency. In that case a single worker handles all connections asynchronously and without blocking. We can then add workers that each handle their own connections without contention or shared state (each worker has its own queue of accepted connections and runs on its own processor), so the application scales better this way (a shared-nothing design).

Summary: in this case you control the maximum number of web threads the same way as before, via the application container configuration (Tomcat, WebSphere, etc.) or the equivalent setting for non-servlet servers such as Netty (or hybrid Undertow). The benefit: you can handle much more user requests with the same resource consumption.
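As a sketch of what that configuration looks like on the non-blocking side: Reactor Netty sizes its event loop from the CPU count by default and reads the reactor.netty.ioWorkerCount system property if you want to pin it. The default formula below matches Reactor Netty's documented behavior, but verify it for your version:

```java
public class EventLoopSizing {

    // Reactor Netty's default worker count: max(4, number of processors).
    static int defaultWorkers() {
        return Math.max(4, Runtime.getRuntime().availableProcessors());
    }

    public static void main(String[] args) {
        // Must be set before Reactor Netty initializes its loop resources.
        System.setProperty("reactor.netty.ioWorkerCount", "4");
        System.out.println("default would be: " + defaultWorkers());
        System.out.println("pinned to: " + System.getProperty("reactor.netty.ioWorkerCount"));
    }
}
```

A handful of event-loop threads is usually enough precisely because they never block.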

Blocking database and non-blocking web API (such as WebFlux over Netty)

In case we do have some blocking I/O, for instance communication with the database over blocking JDBC, the most appropriate way to maximize throughput and use the application efficiently is to use a dedicated thread pool for that I/O.

Thread pool requirements

First of all, we should create a thread pool with exactly the same number of workers as there are connections in the JDBC connection pool. That way we have exactly as many threads as can block waiting for a response, we use our resources as efficiently as possible, and no memory is wasted on thread stacks that could not do useful work anyway (in other words, the thread-per-connection model).
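The effect of that sizing rule can be demonstrated with plain java.util.concurrent (no Reactor needed): a fixed pool of N workers guarantees that at most N threads ever block at once, no matter how many tasks are submitted. The pool size of 4 below is an illustrative stand-in for your JDBC connection pool size:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedJdbcPool {
    static final int POOL_SIZE = 4; // match e.g. the HikariCP maximumPoolSize
    static final AtomicInteger active = new AtomicInteger();
    static final AtomicInteger maxActive = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService jdbcExecutor = Executors.newFixedThreadPool(POOL_SIZE);
        CountDownLatch done = new CountDownLatch(10);
        // Submit more tasks than there are "connections".
        for (int i = 0; i < 10; i++) {
            jdbcExecutor.submit(() -> {
                int now = active.incrementAndGet();
                maxActive.accumulateAndGet(now, Math::max);
                try { Thread.sleep(20); } catch (InterruptedException ignored) { }
                active.decrementAndGet();
                done.countDown();
            });
        }
        done.await();
        jdbcExecutor.shutdown();
        // Never exceeds POOL_SIZE, however many tasks were queued.
        System.out.println("peak concurrent blocked threads: " + maxActive.get());
    }
}
```

Excess requests wait in the executor's queue instead of spawning extra blocked threads, which is exactly the memory bound the question asks for.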

How to configure the thread pool according to the connection pool size

Since access to these settings varies by database and JDBC driver, it is best to externalize the configuration into a dedicated property, which in turn means it can be adjusted by devops or a sysadmin. The thread pool configuration (in our example, a Project Reactor 3 Scheduler) might look like this:

import java.util.concurrent.ForkJoinPool;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

@Configuration
public class ReactorJdbcSchedulerConfig {

    @Value("${my.awesome.scheduler-size}")
    int schedulerSize;

    @Bean
    public Scheduler jdbcScheduler() {
        return Schedulers.fromExecutor(new ForkJoinPool(schedulerSize));
        // Equivalently, with a Spring ThreadPoolTaskExecutor:
        // ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        // taskExecutor.setCorePoolSize(schedulerSize);
        // taskExecutor.setMaxPoolSize(schedulerSize);
        // taskExecutor.setQueueCapacity(schedulerSize);
        // taskExecutor.initialize();
        // return Schedulers.fromExecutor(taskExecutor);
    }
}
...

    @Autowired
    Scheduler jdbcScheduler;

    public Mono<?> myJdbcInteractionIsolated(String id) {
         return Mono.fromCallable(() -> jpaRepo.findById(id)) // blocking JDBC call
                    .subscribeOn(jdbcScheduler)               // run the blocking call on the dedicated pool
                    .publishOn(Schedulers.single());          // hand the result off for further processing
    }
...

As you can see, with this technique we can delegate the thread pool configuration to an external team (sysadmins, for instance) and let them manage the memory consumed by the Java threads that get created.

Keep the I/O thread pool only for I/O work

This means the I/O threads should be used only for operations that block while waiting. In turn, this means that once a thread has received its pending response, it must hand the processing of the result over to another thread.

This is why, in the code snippet above, .publishOn comes right after .subscribeOn.

So, to summarize: with this technique we let an external team control the size of the application by adjusting the thread pool size and the connection pool size accordingly. All processing of the results is done on a single thread, so there is no excessive, uncontrolled memory consumption.
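The subscribeOn/publishOn hand-off described above has a plain java.util.concurrent analogue, which may make the thread movement easier to see (blockingQuery below is a hypothetical stand-in for a JDBC call):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HandOffSketch {

    // Stand-in for a blocking JDBC call.
    static String blockingQuery(String id) {
        try { Thread.sleep(20); } catch (InterruptedException ignored) { }
        return "row-" + id;
    }

    static String run() {
        ExecutorService jdbcPool = Executors.newFixedThreadPool(4);       // ~ jdbcScheduler
        ExecutorService processing = Executors.newSingleThreadExecutor(); // ~ Schedulers.single()
        try {
            return CompletableFuture
                    .supplyAsync(() -> blockingQuery("42"), jdbcPool) // ~ subscribeOn: block on the I/O pool
                    .thenApplyAsync(String::toUpperCase, processing)  // ~ publishOn: continue elsewhere
                    .join();
        } finally {
            jdbcPool.shutdown();
            processing.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints ROW-42
    }
}
```

The I/O thread is returned to the pool as soon as the blocking call completes; everything after the hand-off runs on the processing thread.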

Finally: blocking API (Spring MVC) and blocking I/O (database access)

In this case there is no need for the reactive paradigm, since you gain nothing from it. First of all, reactive programming requires a particular mind shift, especially in understanding and using the functional style of reactive libraries such as RxJava or Project Reactor. In turn, for unprepared users it adds complexity and causes a lot of "what is going on here ****?". So, with blocking operations at both ends, you should think twice about whether you really need reactive programming here.

Plus, there is no magic for free. Reactive extensions come with a lot of internal complexity, and by using all the .map, .flatMap, etc. magic without end-to-end non-blocking, asynchronous communication, you can lose overall performance and increase memory consumption instead of gaining anything.

This means good old imperative programming will be more appropriate here, and it will be much easier to manage your application's memory footprint with good old Tomcat configuration.



Can you try this:

import java.util.concurrent.Executor;

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.AsyncConfigurer;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig implements AsyncConfigurer {
    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
        taskExecutor.setCorePoolSize(15);   // threads kept alive when idle
        taskExecutor.setMaxPoolSize(100);   // upper bound under load
        taskExecutor.setQueueCapacity(100); // tasks buffered before the pool grows
        taskExecutor.initialize();
        return taskExecutor;
    }
}

This works for @Async processing in Spring 4, but I'm not sure it will work in Spring 5 with the reactive stack.
