Grizzly HTTP Server - accepting only one connection at a time

I have a Grizzly HTTP server with async handling. It serializes my requests and handles only one at a time, even though I added async support.

The HttpHandler is mapped to path "/" on port 7777.

Behavior seen when hitting http://localhost:7777 from two browsers at the same time: the second call waits until the first one completes. I want the second HTTP call to run concurrently with the first.

EDIT: my project's GitHub link

Below are the classes.

GrizzlyMain.java

package com.grizzly;

import java.io.IOException;
import java.net.URI;

import javax.ws.rs.core.UriBuilder;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.nio.transport.TCPNIOTransport;
import org.glassfish.grizzly.strategies.WorkerThreadIOStrategy;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;

import com.grizzly.http.IHttpHandler;
import com.grizzly.http.IHttpServerFactory;

public class GrizzlyMain {

    private static HttpServer httpServer;

    private static void startHttpServer(int port) throws IOException {
        URI uri = getBaseURI(port);

        httpServer = IHttpServerFactory.createHttpServer(uri,
            new IHttpHandler(null));

        TCPNIOTransport transport = getListener(httpServer).getTransport();

        ThreadPoolConfig config = ThreadPoolConfig.defaultConfig()
                .setPoolName("worker-thread-").setCorePoolSize(6).setMaxPoolSize(6)
                .setQueueLimit(-1)/* same as default */;

        transport.configureBlocking(false);
        transport.setSelectorRunnersCount(3);
        transport.setWorkerThreadPoolConfig(config);
        transport.setIOStrategy(WorkerThreadIOStrategy.getInstance());
        transport.setTcpNoDelay(true);

        System.out.println("Blocking Transport(T/F): " + transport.isBlocking());
        System.out.println("Num SelectorRunners: "
            + transport.getSelectorRunnersCount());
        System.out.println("Num WorkerThreads: "
            + transport.getWorkerThreadPoolConfig().getCorePoolSize());

        httpServer.start();
        System.out.println("Server Started @" + uri.toString());
    }

    public static void main(String[] args) throws InterruptedException,
        IOException, InstantiationException, IllegalAccessException,
        ClassNotFoundException {
        startHttpServer(7777);

        System.out.println("Press any key to stop the server...");
        System.in.read();
    }

    private static NetworkListener getListener(HttpServer httpServer) {
        return httpServer.getListeners().iterator().next();
    }

    private static URI getBaseURI(int port) {
        return UriBuilder.fromUri("http://0.0.0.0/").port(port).build();
    }

}


HttpHandler (with native async support)

package com.grizzly.http;

import java.io.IOException;
import java.util.Date;
import java.util.concurrent.ExecutorService;

import javax.ws.rs.core.Application;

import org.glassfish.grizzly.http.server.HttpHandler;
import org.glassfish.grizzly.http.server.Request;
import org.glassfish.grizzly.http.server.Response;
import org.glassfish.grizzly.http.util.HttpStatus;
import org.glassfish.grizzly.threadpool.GrizzlyExecutorService;
import org.glassfish.grizzly.threadpool.ThreadPoolConfig;
import org.glassfish.jersey.server.ApplicationHandler;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.spi.Container;

import com.grizzly.Utils;

/**
 * Jersey {@code Container} implementation based on Grizzly
 * {@link org.glassfish.grizzly.http.server.HttpHandler}.
 *
 * @author Jakub Podlesak (jakub.podlesak at oracle.com)
 * @author Libor Kramolis (libor.kramolis at oracle.com)
 * @author Marek Potociar (marek.potociar at oracle.com)
 */
public final class IHttpHandler extends HttpHandler implements Container {

    private static int reqNum = 0;

    final ExecutorService executorService = GrizzlyExecutorService
            .createInstance(ThreadPoolConfig.defaultConfig().copy()
                    .setCorePoolSize(4).setMaxPoolSize(4));

    private volatile ApplicationHandler appHandler;

    /**
     * Create a new Grizzly HTTP container.
     *
     * @param application
     *          JAX-RS / Jersey application to be deployed on Grizzly HTTP
     *          container.
     */
    public IHttpHandler(final Application application) {
    }

    @Override
    public void start() {
        super.start();
    }

    @Override
    public void service(final Request request, final Response response) {
        System.out.println("\nREQ_ID: " + reqNum++);
        System.out.println("THREAD_ID: " + Utils.getThreadName());

        response.suspend();
        // Instruct Grizzly to not flush response, once we exit service(...) method

        executorService.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println("Executor Service Current THREAD_ID: "
                            + Utils.getThreadName());
                    Thread.sleep(25 * 1000);
                } catch (Exception e) {
                    response.setStatus(HttpStatus.INTERNAL_SERVER_ERROR_500);
                } finally {
                    String content = updateResponse(response);
                    System.out.println("Response resumed > " + content);
                    response.resume();
                }
            }
        });
    }

    @Override
    public ApplicationHandler getApplicationHandler() {
        return appHandler;
    }

    @Override
    public void destroy() {
        super.destroy();
        appHandler = null;
    }

    // Auto-generated stuff
    @Override
    public ResourceConfig getConfiguration() {
        return null;
    }

    @Override
    public void reload() {

    }

    @Override
    public void reload(ResourceConfig configuration) {
    }

    private String updateResponse(final Response response) {
        String data = null;
        try {
            data = new Date().toLocaleString();
            response.getWriter().write(data);
        } catch (IOException e) {
            data = "Unknown error from our server";
            response.setStatus(500, data);
        }

        return data;
    }

}


IHttpServerFactory.java

package com.grizzly.http;

import java.net.URI;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.grizzly.http.server.NetworkListener;
import org.glassfish.grizzly.http.server.ServerConfiguration;

/**
 * @author smc
 */
public class IHttpServerFactory {

    private static final int DEFAULT_HTTP_PORT = 80;

    public static HttpServer createHttpServer(URI uri, IHttpHandler handler) {

        final String host = uri.getHost() == null ? NetworkListener.DEFAULT_NETWORK_HOST
            : uri.getHost();
        final int port = uri.getPort() == -1 ? DEFAULT_HTTP_PORT : uri.getPort();

        final NetworkListener listener = new NetworkListener("IGrizzly", host, port);
        listener.setSecure(false);

        final HttpServer server = new HttpServer();
        server.addListener(listener);

        final ServerConfiguration config = server.getServerConfiguration();
        if (handler != null) {
            config.addHttpHandler(handler, uri.getPath());
        }

        config.setPassTraceRequest(true);
        return server;
    }
}




1 answer


The problem seems to be that the browser is waiting for the first request to complete, so this is a client-side issue rather than a server-side one. It disappears if you test from two different browser processes, or even open two different paths (say localhost:7777/foo and localhost:7777/bar) in the same browser process (note: the query string counts as part of the path for this purpose).

Why it happens

Connections in HTTP/1.1 are persistent by default, i.e. browsers reuse the same TCP connection over and over to speed things up. However, this does not mean that all requests to the same domain are serialized: in fact, the connection pool is allocated on a per-hostname basis (source). Unfortunately, requests for the same path are effectively queued (at least in Firefox and Chrome). I think this is a mechanism browsers use to protect server resources (and thereby the user).



Real-world apps don't suffer from this because different resources are deployed at different URLs.

DISCLAIMER: I wrote this answer based on my observations and some educated guessing. I think this is what is actually happening, but a tool like Wireshark should be used to monitor the TCP stream before claiming it with certainty.
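Short of a packet capture, a quick way to take the browser out of the equation is a small multi-threaded client. The sketch below is a self-contained stand-in (it spins up a JDK `com.sun.net.httpserver.HttpServer` with an artificial 2-second delay instead of the Grizzly server from the question, which is an assumption for testability); two concurrent requests completing in ~2s rather than ~4s show the server side is handling them in parallel:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrencyCheck {
    public static void main(String[] args) throws Exception {
        // Stand-in server: each request sleeps 2s before answering.
        // Handlers run on a thread pool, so requests can overlap.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.setExecutor(Executors.newFixedThreadPool(4));
        server.start();
        int port = server.getAddress().getPort();

        // Fire two requests concurrently; each client thread opens
        // its own TCP connection, so no browser-style queuing applies.
        ExecutorService clients = Executors.newFixedThreadPool(2);
        Callable<Integer> call = () -> {
            HttpURLConnection c = (HttpURLConnection)
                new URL("http://localhost:" + port + "/").openConnection();
            int code = c.getResponseCode();
            c.disconnect();
            return code;
        };
        long start = System.nanoTime();
        Future<Integer> a = clients.submit(call);
        Future<Integer> b = clients.submit(call);
        a.get();
        b.get();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // If the requests ran in parallel, total time is ~2s, not ~4s.
        System.out.println("elapsedMs < 3500: " + (elapsedMs < 3500));
        clients.shutdown();
        server.stop(0);
    }
}
```

Pointing the same two-threaded client at the Grizzly server on port 7777 (with its 25-second sleep) should likewise show both "Executor Service Current THREAD_ID" lines printed immediately, one per worker thread.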
