Elasticsearch Unreachable: Connection refused

I have an ELK stack running in Docker containers inside a virtual machine.
I can curl documents into ES and they show up in Kibana just fine.
I can read files with Logstash and output them to stdout.
But Logstash cannot send data to ES.
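
For reference, the kind of manual check that works from the VM host looks like this (the index and document names here are just placeholders):

curl -XPUT 'http://localhost:9200/test-index/doc/1' -H 'Content-Type: application/json' -d '{"message": "hello"}'
curl 'http://localhost:9200/test-index/_search?pretty'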

docker-compose.yml

version: '2'
services:
  elasticsearch:
    image: elasticsearch
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      xpack.security.enabled: "false"
      xpack.monitoring.enabled: "false"
      xpack.graph.enabled: "false"
      xpack.watcher.enabled: "false"
    networks:
      - elk

  logstash:
    image: docker.elastic.co/logstash/logstash:5.3.2
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      - ./data:/usr/share/data
    ports:
      - "5000:5000"
      - "9600:9600"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    # kibana config

networks:
  elk:
    driver: bridge

      

logstash.yml

(enabling or disabling X-Pack doesn't seem to matter)

#log.level: debug
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.url: http://127.0.0.1:9200
xpack.monitoring.enabled: false

      

pipeline.conf

input {
    # some files
}
filter {
    # csv filter
}
output {
    if ([type] == "GPS") {
        stdout { }
        elasticsearch {
            hosts => [ "127.0.0.1:9200" ]
            index => "GPS"
            template_overwrite => true
            user => "logstash_system"
            password => "changeme"
        }
    }
}

      

Output of docker-compose up

elasticsearch_1  [2017-05-03T05:06:29,821][INFO ][o.e.n.Node               ] initialized
elasticsearch_1  [2017-05-03T05:06:29,821][INFO ][o.e.n.Node               ] [SzDlw_j] starting ...
elasticsearch_1  [2017-05-03T05:06:30,352][WARN ][i.n.u.i.MacAddressUtil   ] Failed to find a usable hardware address from the network interfaces; using random bytes: 74:70:60:88:e2:f8:41:36
elasticsearch_1  [2017-05-03T05:06:30,722][INFO ][o.e.t.TransportService   ] [SzDlw_j] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
elasticsearch_1  [2017-05-03T05:06:30,744][WARN ][o.e.b.BootstrapChecks    ] [SzDlw_j] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
elasticsearch_1  [2017-05-03T05:06:34,027][INFO ][o.e.c.s.ClusterService   ] [SzDlw_j] new_master {SzDlw_j}{SzDlw_jiQsWcXN00BZfZHQ}{ELB1oDR4SlW6atQIM88hfg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
elasticsearch_1  [2017-05-03T05:06:34,176][INFO ][o.e.h.n.Netty4HttpServerTransport] [SzDlw_j] publish_address {172.18.0.2:9200}, bound_addresses {[::]:9200}
elasticsearch_1  [2017-05-03T05:06:34,179][INFO ][o.e.n.Node               ] [SzDlw_j] started
elasticsearch_1  [2017-05-03T05:06:34,832][INFO ][o.e.g.GatewayService     ] [SzDlw_j] recovered [1] indices into cluster_state
elasticsearch_1  [2017-05-03T05:06:36,519][INFO ][o.e.c.r.a.AllocationService] [SzDlw_j] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
logstash_1       Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1       [2017-05-03T05:06:55,865][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
logstash_1       [2017-05-03T05:06:55,887][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
logstash_1       [2017-05-03T05:06:56,131][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x7b0aac02 URL:http://127.0.0.1:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
logstash_1       [2017-05-03T05:06:56,140][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
logstash_1       [2017-05-03T05:06:56,147][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://127.0.0.1:9200/, :error_message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
logstash_1       [2017-05-03T05:06:56,148][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:287:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:273:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:363:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:272:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:280:in `get'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:83:in `get_version'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:16:in `get_es_version'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:20:in `get_es_major_version'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/common.rb:54:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-6.3.0-java/lib/logstash/outputs/elasticsearch/common.rb:21:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:8:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:41:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:257:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:268:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:268:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:277:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:207:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:389:in `start_pipeline'"]}
logstash_1       [2017-05-03T05:06:56,149][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x6d11738 URL://127.0.0.1:9200>]}
logstash_1       [2017-05-03T05:06:56,163][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}
logstash_1       [2017-05-03T05:06:56,703][INFO ][logstash.pipeline        ] Pipeline main started
logstash_1       2017-05-03T05:06:56.776Z b68ff5316e02 time,lat,lon,elevation,accuracy,bearing,speed
logstash_1       2017-05-03T05:06:56.778Z b68ff5316e02 2014-09-14T00:26:23Z,98.404222,99.999021,9.599976,44.000000,0.000000,0.000000
logstash_1       2017-05-03T05:06:56.779Z b68ff5316e02 2014-09-14T00:48:45Z,98.404297,99.999338,9.000000,35.000000,102.300003,0.500000
logstash_1       [2017-05-03T05:06:57,035][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
logstash_1       [2017-05-03T05:06:57,058][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
logstash_1       [2017-05-03T05:06:57,069][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>2}
logstash_1       [2017-05-03T05:06:59,085][WARN ][logstash.outputs.elasticsearch] UNEXPECTED POOL ERROR {:e=>#<LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError: No Available connections>}
logstash_1       [2017-05-03T05:06:59,086][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>4}
logstash_1       [2017-05-03T05:07:01,146][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://127.0.0.1:9200/, :path=>"/"}
logstash_1       [2017-05-03T05:07:01,158][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x7b0aac02 URL:http://127.0.0.1:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://127.0.0.1:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

      

Then it keeps cycling through "sending a bulk request", "running a health check", "attempting to resurrect the connection" and "no available connections".

With log.level: debug enabled, after a while I also repeatedly see:

Error, cannot retrieve cgroups information {:exception=>"Errno::ENOENT", :message=>"No such file or directory - /sys/fs/cgroup/cpuacct/system.slice/docker-<containerId>.scope/cpuacct.usage"}





1 answer


One of Docker's goals is isolation: inside a container, 127.0.0.1 refers to that container itself, not to the host or to the other containers (a quick check is shown after the list below).

Here you have three containers:

  • elasticsearch
  • logstash
  • kibana
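
On the elk bridge network from the compose file, each service can reach the others by its service name. A quick way to confirm this from inside the Logstash container (assuming curl is available in the logstash image):

docker-compose exec logstash curl http://elasticsearch:9200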

You have to change your Logstash config to use elasticsearch instead of 127.0.0.1, since that is the hostname by which the elasticsearch container is reachable on the elk bridge network you defined.
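
For example, a minimal sketch of the change ("elasticsearch" is simply the service name from docker-compose.yml; as a side note, Elasticsearch index names must be lowercase):

pipeline.conf

output {
    if ([type] == "GPS") {
        stdout { }
        elasticsearch {
            hosts => [ "elasticsearch:9200" ]
            index => "gps"              # Elasticsearch index names must be lowercase
            template_overwrite => true
            user => "logstash_system"
            password => "changeme"
        }
    }
}

logstash.yml

xpack.monitoring.elasticsearch.url: http://elasticsearch:9200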









