Throttling API requests in Java

I want to add a way to throttle the number of requests coming to each API from a specific client; in other words, I want to limit the number of API requests per client.

I am using Dropwizard as the framework. Can anyone recommend a way to achieve this? I need something that will work in a distributed system.

+3




5 answers


The simplest approach would be to use a servlet Filter mapped to all API calls in web.xml. Assuming your clients send an API key identifying them in an HTTP header, you can implement the filter like this:

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.*;

public class MyThrottlingFilter implements Filter {

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpreq = (HttpServletRequest) req;
        String apiKey = httpreq.getHeader("API_KEY");

        if (invocationLimitNotReached(apiKey)) {
            chain.doFilter(req, res);                    // under the limit: let the call through
        } else {
            ((HttpServletResponse) res).sendError(429);  // 429 Too Many Requests
        }
    }

    private boolean invocationLimitNotReached(String apiKey) {
        // Look up and update the per-client request count here.
        return true;
    }

    public void init(FilterConfig cfg) { }
    public void destroy() { }
}

and then register it like this:



<filter>
    <filter-name>MyThrottlingFilter</filter-name>
    <filter-class>com.my.throttler.MyThrottlingFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>MyThrottlingFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

Of course, identifying your clients may be trickier if you use other authentication methods, but the general idea stays the same.
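
For a single node, a minimal sketch of what invocationLimitNotReached could look like is a fixed-window counter kept in memory (the class name and the limit of 100 requests per minute are made up for illustration, and this alone does not cover the distributed case):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class InMemoryRateLimiter {

    private static final int LIMIT_PER_MINUTE = 100;  // hypothetical limit

    // One counter per client per one-minute window; only valid on a single node.
    private final ConcurrentMap<String, AtomicInteger> counters = new ConcurrentHashMap<>();

    public boolean invocationLimitNotReached(String apiKey) {
        String window = apiKey + ":" + (System.currentTimeMillis() / 60_000);
        AtomicInteger count = counters.computeIfAbsent(window, k -> new AtomicInteger());
        return count.incrementAndGet() <= LIMIT_PER_MINUTE;
    }
}

Old windows are never evicted here, so a real version would clean them up periodically or use a cache with expiry.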

+5




Do you really want this kind of logic inside your application? An external load balancer might be the better choice.

You can try HAProxy and keep all the throttling logic outside of your application.



The big advantage of this approach is that you don't have to rebuild and redeploy the application whenever the throttling requirements change. In addition, restarting HAProxy takes much less time than restarting a typical Java application.
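
As a rough sketch of what that could look like (the port, backend addresses and the limit of 100 requests per minute are all invented), HAProxy can track the request rate per API key in a stick table and reject clients that exceed it:

frontend api
    bind *:8080
    # Track the per-API-key request rate over a one-minute window.
    stick-table type string len 64 size 100k expire 1m store http_req_rate(1m)
    http-request track-sc0 req.hdr(API_KEY)
    # Reject clients above 100 requests/minute with 429 Too Many Requests.
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
    default_backend dropwizard

backend dropwizard
    server app1 10.0.0.1:8080 check
    server app2 10.0.0.2:8080 check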

+4




I think an interceptor like HandlerInterceptor would also solve this.
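
For completeness, a sketch of that approach with Spring MVC's HandlerInterceptor (this assumes Spring 5+, where the interface's other methods have default implementations; the class name and the 429 message are made up):

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.HandlerInterceptor;

public class ThrottlingInterceptor implements HandlerInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler)
            throws Exception {
        String apiKey = request.getHeader("API_KEY");
        if (invocationLimitNotReached(apiKey)) {
            return true;                                 // let the request reach the resource
        }
        response.sendError(429, "Rate limit exceeded");  // stop the chain here
        return false;
    }

    private boolean invocationLimitNotReached(String apiKey) {
        // Same counting logic as in the filter-based answer.
        return true;
    }
}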

+1




If you absolutely must have it inside Dropwizard, then I would do what npe suggests. The change needed is to share the "rate" state through an external store, for example Redis.

So, in npe's example, invocationLimitNotReached would check the Redis host to figure out what the current rate is (perhaps it keeps a list of all in-flight requests) and whether adding the current request would push it over that threshold.

If adding the current request does not exceed the allowed rate, you add the request to the list, and when the request completes you remove it from the Redis list. The entries can be timestamped and expired (or stored under keys with a TTL), so if a Dropwizard instance that was serving 20 requests suddenly disappears, its stale entries are eventually dropped from the "currently in progress" list.
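
A minimal sketch of that idea using the Jedis client (the key names, the limit of 20 in-flight requests, the 30-second staleness window, and the Redis address are all assumptions; the zcard/zadd pair is not atomic, so a strict implementation would wrap it in a Lua script or a transaction):

import redis.clients.jedis.Jedis;

public class RedisInFlightLimiter {

    private static final int MAX_IN_FLIGHT = 20;        // hypothetical per-client limit
    private static final long STALE_AFTER_MS = 30_000;  // entries from crashed instances are dropped after this

    private final Jedis jedis = new Jedis("localhost", 6379);  // assumed Redis host

    /** Registers the request if the client is under its limit; returns false otherwise. */
    public boolean tryAcquire(String apiKey, String requestId) {
        String key = "inflight:" + apiKey;
        long now = System.currentTimeMillis();
        // Drop stale entries; this stands in for the per-entry expiry described above.
        jedis.zremrangeByScore(key, 0, now - STALE_AFTER_MS);
        if (jedis.zcard(key) >= MAX_IN_FLIGHT) {
            return false;
        }
        jedis.zadd(key, now, requestId);  // score = timestamp of the request
        return true;
    }

    /** Removes the request from the "currently in progress" set once it completes. */
    public void release(String apiKey, String requestId) {
        jedis.zrem("inflight:" + apiKey, requestId);
    }
}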

0




Building on npe's answer, you can make this work in a distributed setup by storing the number of hits per client API key in a central store such as Redis, which the invocationLimitNotReached(apiKey) method can then consult to detect limit violations.

The hard part, of course, is figuring out how to "expire" the number of hits that fall outside your limit window.
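
One common way to handle that (just a sketch, with an assumed limit and Redis host) is a fixed window: keep one counter key per client per time window, increment it with INCR, and give the key a TTL slightly longer than the window so old counts expire on their own:

import redis.clients.jedis.Jedis;

public class RedisFixedWindowLimiter {

    private static final int LIMIT_PER_MINUTE = 100;  // hypothetical limit

    private final Jedis jedis = new Jedis("localhost", 6379);  // assumed Redis host

    /** Counts one hit and reports whether the client is still within its limit. */
    public boolean invocationLimitNotReached(String apiKey) {
        // One counter per client per one-minute window, e.g. "hits:abc123:29000123".
        String key = "hits:" + apiKey + ":" + (System.currentTimeMillis() / 60_000);
        long hits = jedis.incr(key);
        jedis.expire(key, 120);  // the counter disappears shortly after its window ends
        return hits <= LIMIT_PER_MINUTE;
    }
}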

0








