Nginx 502 on EB worker preventing the SQS visibility timeout from being respected

I have the same symptoms as https://forums.aws.amazon.com/message.jspa?messageID=580990#580990, but on the preconfigured Python Docker EB platform (i.e. the visibility timeout is not respected). First, my queue visibility timeout (configured in both EB and SQS) is 1800 seconds.
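
For the record, this is how the queue side of that setting can be verified from the CLI (the queue URL below is a placeholder, not my real one):

aws sqs get-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-worker-queue \
  --attribute-names VisibilityTimeout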

I get a 502 after 60 seconds, since my messages take over 60 seconds to process (and after those 60 seconds the queue of course retries the message, because it received a 502). I tried the .ebextensions proxy.conf solution mentioned in the linked thread, to no avail.
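
For reference, what I tried was along these lines (the file name and the exact timeout values here are my reconstruction, not copied verbatim from the thread):

# .ebextensions/01-proxy-timeouts.config
files:
  "/etc/nginx/conf.d/proxy-timeouts.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      # raise nginx's proxy timeouts past the 60s default
      proxy_connect_timeout 1800s;
      proxy_send_timeout 1800s;
      proxy_read_timeout 1800s;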

My /var/log/nginx/access.log gives:

127.0.0.1 - - [18/May/2015:08:56:58 +0000] "POST /scrape-emails HTTP/1.1" 502 172 "-" "aws-sqsd/2.0"

My /var/log/nginx/error.log gives:

2015/05/18 08:56:58 [error] 12465#0: *32 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: , request: "POST /scrape-emails HTTP/1.1", upstream: "http://172.17.0.4:8080/scrape-emails", host: "localhost"

My /var/log/aws-sqsd/default.log gives:

2015-05-18T08:56:58Z http-err: 8240b585-61c3-4fba-b99a-265ace312308 (1) 502 - 60.050

My /etc/nginx/nginx.conf looks like this:

# Elastic Beanstalk Nginx Configuration File

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log;

pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    access_log    /var/log/nginx/access.log;

    include       /etc/nginx/conf.d/*.conf;
    include       /etc/nginx/sites-enabled/*;
}

I used to get 504s after 60 seconds, but adding the following to /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf (which is included by /etc/nginx/nginx.conf) got rid of them (they were replaced by 502s):

map $http_upgrade $connection_upgrade {
    default     "upgrade";
    ""          "";
}

server {
    listen 80;

    location / {
        proxy_pass          http://docker;
        proxy_http_version  1.1;

        proxy_set_header    Connection      $connection_upgrade;
        proxy_set_header    Upgrade     $http_upgrade;
        proxy_set_header    Host            $host;
        proxy_set_header    X-Real-IP       $remote_addr;
        proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffers 8 16k;
        proxy_buffer_size 32k;
        proxy_connect_timeout 1800s;
        proxy_send_timeout 1800s;
        proxy_read_timeout 1800s;
    }
}

I have literally tried setting every relevant parameter from http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers to values between 60 and 1800 seconds.

I noticed that the uWSGI log says: "your mercy for graceful operations on workers is 60 seconds". Could this be the problem, and how can I fix it if it is? If not, how do I stop the 502s?
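
That log line, together with the "upstream prematurely closed connection" error above, suggests the container's uWSGI process is being killed after 60 seconds rather than nginx timing out. If so, raising the limits in the container's uWSGI config might help; a sketch, assuming the container reads a uwsgi.ini you control:

; uwsgi.ini (hypothetical location; adjust to the container setup)
[uwsgi]
; allow a single request to run as long as the SQS visibility timeout
harakiri = 1800
; raise the "mercy for graceful operations on workers" from its 60-second default
worker-reload-mercy = 1800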

Also, I added the following to /etc/nginx/uwsgi_params, to no avail:

uwsgi_read_timeout 1800s;
uwsgi_send_timeout 1800s;
uwsgi_connect_timeout 1800s;
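
(In hindsight, these uwsgi_* directives belong to nginx's ngx_http_uwsgi_module and only take effect when nginx reaches the app via uwsgi_pass; with the proxy_pass setup above they do nothing. They would only matter in a hypothetical config like this:)

# only relevant if nginx spoke the uwsgi protocol to the app
location / {
    include uwsgi_params;
    uwsgi_pass 172.17.0.4:8080;  # hypothetical uwsgi address
    uwsgi_read_timeout 1800s;
    uwsgi_send_timeout 1800s;
    uwsgi_connect_timeout 1800s;
}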


After editing the nginx config files (over SSH), I always "restart application servers" from the EB web console and then test.

Any ideas on how to get rid of the 502s and have the visibility timeout honored while a message is being processed?

1 answer


Here's what I've come up with so far. I don't know if this is a sanctioned way to access the queue visibility timeout, but it seems to do the trick in my Ruby worker environment:

packages:
  yum:
    jq: []

commands:
  match_nginx_timeout_to_sqs_timeout:
    command: |
      # read the worker environment's visibility timeout out of the stack metadata
      VISIBILITY_TIMEOUT=$(
        /opt/aws/bin/cfn-get-metadata --region `{"Ref": "AWS::Region"}` --stack `{"Ref": "AWS::StackName"}` \
          --resource AWSEBBeanstalkMetadata --key AWS::ElasticBeanstalk::Ext |
          jq -r '.Parameters.AWSEBVisibilityTimeout'
      )
      # mirror it into nginx so the proxy doesn't cut long requests off early
      if [[ -n "${VISIBILITY_TIMEOUT}" ]]; then
        echo "proxy_read_timeout ${VISIBILITY_TIMEOUT}s;" > /etc/nginx/conf.d/worker.conf
        service nginx restart
      fi
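
With a visibility timeout of 1800, for example, this writes a one-line /etc/nginx/conf.d/worker.conf that nginx picks up through its conf.d include:

proxy_read_timeout 1800s;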




I actually had a secondary use for this data, so I split it out into a properties-cache file as well. See https://github.com/Safecast/ingest/pull/43/files for details.

I get the impression that updating the visibility timeout from the Beanstalk UI will not propagate to this value until the next deployment, but I'm fine with that, since it won't change very often for a given environment anyway.
