How to handle HTTP requests larger than 1024 bytes in Netty?
I am writing a simple reverse proxy with Netty. Since I don't want to deal with raw bytes, I added an HTTP request decoder and an object aggregator to the channel pipeline, along with a response encoder and, finally, my own handler, something like this:
ChannelPipeline p = ch.pipeline();
p.addLast(new HttpRequestDecoder());
p.addLast(new HttpObjectAggregator(MAX_CONTENT_LENGTH));
p.addLast(new HttpResponseEncoder());
p.addLast(new FrontendHandler(...));
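For context, this pipeline code runs inside a ChannelInitializer, and auto-read is disabled on the accepted channels so reads have to be requested explicitly. A rough sketch of the surrounding bootstrap (Netty 4.x assumed; group and handler names are illustrative):

```java
// Sketch of the server bootstrap around the pipeline shown above.
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
 .channel(NioServerSocketChannel.class)
 // With AUTO_READ off, nothing is read from the socket until
 // someone calls ctx.read() / channel.read() explicitly.
 .childOption(ChannelOption.AUTO_READ, false)
 .childHandler(new ChannelInitializer<SocketChannel>() {
     @Override
     protected void initChannel(SocketChannel ch) {
         ChannelPipeline p = ch.pipeline();
         p.addLast(new HttpRequestDecoder());
         p.addLast(new HttpObjectAggregator(MAX_CONTENT_LENGTH));
         p.addLast(new HttpResponseEncoder());
         p.addLast(new FrontendHandler(...));
     }
 });
```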
My FrontendHandler extends SimpleChannelInboundHandler<HttpRequest>, so it has
// Start reading as soon as a request comes in...
public void channelActive(ChannelHandlerContext ctx) {
ctx.read();
}
and a
protected void channelRead0(ChannelHandlerContext ctx, HttpRequest request) {
// copy request, fix headers, forward to backend server
}
What happens is that the server hangs whenever a request larger than 1024 bytes comes in (for example, one that contains several cookies). Through trial and error I found that if I set ChannelOption.AUTO_READ on the channel, everything works fine, so it looks like my original code fails to call ctx.read() somewhere, but I have no idea where. If I add something like
@Override
public void channelReadComplete(ChannelHandlerContext ctx) {
ctx.read();
}
then I get exceptions inside channelRead0, apparently because it is now handling a still-incomplete HTTP request, which defeats the purpose of using the object aggregator in the first place. What am I missing?
I don't know whether HttpObjectAggregator can handle chunked messages. You could try HttpChunkAggregator instead, something like:
pipeline.addLast("decoder", new HttpRequestDecoder(4096, 4096, 100*1024*1024));
pipeline.addLast("aggregator", new HttpChunkAggregator(100*1024*1024));
pipeline.addLast("encoder", new HttpResponseEncoder());
pipeline.addLast("hndlr", new FrontendHandler(...));
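Conceptually, all an aggregator does is buffer incoming body chunks until the declared content length has arrived, and only then pass one complete message downstream; until then, the last handler sees nothing at all. A minimal, Netty-free sketch of that idea (the class and method names here are made up for illustration, not Netty API):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: mimics what an HTTP aggregator does conceptually.
// Buffers body chunks until the declared content length is reached,
// then emits the complete body as a single message.
class BodyAggregator {
    private final int expectedLength;               // from the Content-Length header
    private final List<byte[]> chunks = new ArrayList<>();
    private int buffered = 0;

    BodyAggregator(int expectedLength) {
        this.expectedLength = expectedLength;
    }

    /** Returns the full body once all chunks have arrived, or null while incomplete. */
    byte[] offer(byte[] chunk) {
        chunks.add(chunk);
        buffered += chunk.length;
        if (buffered < expectedLength) {
            return null;                            // still waiting for more reads
        }
        byte[] full = new byte[buffered];
        int pos = 0;
        for (byte[] c : chunks) {
            System.arraycopy(c, 0, full, pos, c.length);
            pos += c.length;
        }
        return full;
    }
}

public class Demo {
    public static void main(String[] args) {
        BodyAggregator agg = new BodyAggregator(10);
        // First chunk alone is too short, so nothing is emitted yet.
        System.out.println(agg.offer("hello ".getBytes(StandardCharsets.UTF_8)) == null);
        // Second chunk completes the body and the whole message comes out at once.
        byte[] full = agg.offer("world".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(full, StandardCharsets.UTF_8));
    }
}
```

This is also why calling ctx.read() only once is not enough in the question's setup: each read may deliver just one chunk, and someone has to keep requesting reads until the aggregator has seen the whole body.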