Netty as a high-performance HTTP server handling ~2-3 million requests/sec
We are trying to solve the problem of handling a huge volume of HTTP POST requests, but with the Netty server below I was only able to handle ~50K requests/sec, which is far too low.
My question is: how do I configure this server to handle more than 1.5 million requests/second?
Netty4 server
// Configure the server.
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
    ServerBootstrap b = new ServerBootstrap();
    b.option(ChannelOption.SO_BACKLOG, 1024);
    b.group(bossGroup, workerGroup)
     .channel(NioServerSocketChannel.class)
     .handler(new LoggingHandler(LogLevel.INFO))
     .childHandler(new HttpServerInitializer(sslCtx));

    Channel ch = b.bind(PORT).sync().channel();
    System.err.println("Open your web browser and navigate to " +
            (SSL ? "https" : "http") + "://127.0.0.1:" + PORT + '/');
    ch.closeFuture().sync();
} finally {
    bossGroup.shutdownGracefully();
    workerGroup.shutdownGracefully();
}
Initializer
public class HttpServerInitializer extends ChannelInitializer<SocketChannel> {

    private final SslContext sslCtx;

    public HttpServerInitializer(SslContext sslCtx) {
        this.sslCtx = sslCtx;
    }

    @Override
    public void initChannel(SocketChannel ch) {
        ChannelPipeline p = ch.pipeline();
        if (sslCtx != null) {
            p.addLast(sslCtx.newHandler(ch.alloc()));
        }
        p.addLast(new HttpServerCodec());
        p.addLast("aggregator", new HttpObjectAggregator(Integer.MAX_VALUE));
        p.addLast(new HttpServerHandler());
    }
}
Handler
public class HttpServerHandler extends ChannelInboundHandlerAdapter {

    private static final String CONTENT = "SUCCESS";

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof HttpRequest) {
            HttpRequest req = (HttpRequest) msg;
            final FullHttpRequest fReq = (FullHttpRequest) req;
            Charset utf8 = CharsetUtil.UTF_8;
            final ByteBuf buf = fReq.content();
            String in = buf.toString(utf8);
            System.out.println(" In ==> " + in);
            buf.release();

            if (HttpHeaders.is100ContinueExpected(req)) {
                ctx.write(new DefaultFullHttpResponse(HTTP_1_1, CONTINUE));
            }

            boolean keepAlive = HttpHeaders.isKeepAlive(req);
            FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, OK,
                    Unpooled.wrappedBuffer(CONTENT.getBytes()));
            response.headers().set(CONTENT_TYPE, "text/plain");
            response.headers().set(CONTENT_LENGTH, response.content().readableBytes());

            if (!keepAlive) {
                ctx.write(response).addListener(ChannelFutureListener.CLOSE);
            } else {
                response.headers().set(CONNECTION, Values.KEEP_ALIVE);
                ctx.write(response);
            }
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
1 answer
Your question is very general. However, I will try to give you an answer regarding settings optimization and code improvements.
Code problems:

- System.out.println(" In ==> " + in); - you shouldn't use this in a high-load handler. Why? Because the code inside println is synchronized and thus penalizes your performance.
- You are doing two casts, to HttpRequest and then to FullHttpRequest. You only need the latter; a cleaned-up channelRead is sketched after this list.
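For illustration, a minimal sketch of channelRead with both points applied (method body only; it assumes the same static imports and CONTENT field as your HttpServerHandler, and that the HttpObjectAggregator is still in the pipeline so the message is a FullHttpRequest):

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    if (msg instanceof FullHttpRequest) {
        // One instanceof check and one cast; no console output on the hot path.
        final FullHttpRequest req = (FullHttpRequest) msg;
        if (HttpHeaders.is100ContinueExpected(req)) {
            ctx.write(new DefaultFullHttpResponse(HTTP_1_1, CONTINUE));
        }
        boolean keepAlive = HttpHeaders.isKeepAlive(req);
        FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, OK,
                Unpooled.wrappedBuffer(CONTENT.getBytes()));
        response.headers().set(CONTENT_TYPE, "text/plain");
        response.headers().set(CONTENT_LENGTH, response.content().readableBytes());
        if (!keepAlive) {
            ctx.write(response).addListener(ChannelFutureListener.CLOSE);
        } else {
            response.headers().set(CONNECTION, Values.KEEP_ALIVE);
            ctx.write(response);
        }
        req.release(); // the aggregated request is reference-counted
    }
}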
Netty issues in the code:

- You need to use the native epoll transport (assuming your server runs on Linux). This gives roughly +30% out of the box; see the bootstrap sketch after this list.
- You need to use the native OpenSSL bindings (netty-tcnative). This gives roughly another +20%; also shown in the sketch below.
- EventLoopGroup bossGroup = new NioEventLoopGroup(); - you need to size bossGroup and workerGroup correctly. This depends on your test scenarios, and since you have not provided any information about your test cases, I cannot give you concrete numbers here.
- new HttpObjectAggregator(Integer.MAX_VALUE) - you don't really need this handler in your code, so you can remove it for better performance.
- new HttpServerHandler() - you don't need to create this handler for every channel. Since it holds no state, a single instance can be used in all pipelines; look up @Sharable in Netty. The reworked handler at the end of this answer shows this.
- new LoggingHandler(LogLevel.INFO) - you don't need this handler for high-load tests, since it logs a lot. Add your own logging only where needed.
- buf.toString(utf8) - this is very wrong. You are converting the incoming bytes to a String, but the data has already been decoded by Netty's HttpServerCodec, so you are doing the work twice here.
- Unpooled.wrappedBuffer(CONTENT.getBytes()) - you allocate the same constant payload for every request, and thus do unnecessary work on every request. You can create the ByteBuf once and retain()/duplicate() it as needed (see the final sketch below).
- ctx.write(response) - you can use ctx.write(response, ctx.voidPromise()) to allocate less, since no ChannelPromise is created for the write.
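A bootstrap sketch with the first three points applied. To be explicit about the assumptions: the Epoll* classes need Linux plus the netty-transport-native-epoll dependency, SslProvider.OPENSSL needs netty-tcnative on the classpath, and certFile/keyFile as well as the thread counts are placeholders, not recommendations:

// Native epoll transport (Linux only; requires netty-transport-native-epoll).
EventLoopGroup bossGroup = new EpollEventLoopGroup(1);    // one acceptor thread is usually enough
EventLoopGroup workerGroup = new EpollEventLoopGroup(16); // placeholder: tune to cores and load

// OpenSSL-backed TLS (requires netty-tcnative); certFile/keyFile are placeholders.
SslContext sslCtx = SslContextBuilder.forServer(certFile, keyFile)
        .sslProvider(SslProvider.OPENSSL)
        .build();

ServerBootstrap b = new ServerBootstrap();
b.option(ChannelOption.SO_BACKLOG, 1024);
b.group(bossGroup, workerGroup)
 .channel(EpollServerSocketChannel.class) // epoll counterpart of NioServerSocketChannel
 .childHandler(new HttpServerInitializer(sslCtx));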
That's not all. However, fixing the above issues would be a good start.
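Finally, to make the @Sharable, buffer, and void-promise points concrete, here is one possible reworking of the handler. This is a sketch, not the only way to do it: it uses the same static imports as your code, keeps the aggregator in the pipeline, and assumes keep-alive connections (100-continue and connection-close handling elided):

@Sharable // stateless, so a single instance can be added to every pipeline
public class HttpServerHandler extends ChannelInboundHandlerAdapter {

    // Built once instead of wrappedBuffer(...) per request; unreleasable so
    // the outbound pipeline cannot free the shared bytes.
    private static final ByteBuf CONTENT_BUF = Unpooled.unreleasableBuffer(
            Unpooled.copiedBuffer("SUCCESS", CharsetUtil.UTF_8));

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof FullHttpRequest) {
            final FullHttpRequest req = (FullHttpRequest) msg;
            FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, OK,
                    CONTENT_BUF.duplicate()); // shares the bytes, no copy per request
            response.headers().set(CONTENT_TYPE, "text/plain");
            response.headers().set(CONTENT_LENGTH, response.content().readableBytes());
            response.headers().set(CONNECTION, Values.KEEP_ALIVE);
            ctx.write(response, ctx.voidPromise()); // no ChannelPromise allocated per write
            req.release();
        }
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }
}

The initializer would then add one shared instance everywhere, e.g. a private static final HttpServerHandler HANDLER = new HttpServerHandler(); field and p.addLast(HANDLER); instead of p.addLast(new HttpServerHandler());.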