Logs not being flushed to Elasticsearch container through Fluentd

I have a local setup running two containers –

  • One for Elasticsearch (set up for development as detailed here – https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html). This I run as directed in the article using – docker run -p 9200:9200 -e "http.host=" -e "transport.host=" docker.elastic.co/elasticsearch/elasticsearch:5.4.1

  • Another as a Fluentd aggregator (using this base image – https://hub.docker.com/r/fluent/fluentd/). My fluent.conf for testing purposes is as follows:

    <source>
        @type forward
        port 24224
    </source>
    <match **>
        @type elasticsearch
        host    # Verified internal IP address of the ES container
        port 9200
        user elastic
        password changeme
        index_name fluentd
        buffer_type memory
        flush_interval 60
        retry_limit 17
        retry_wait 1.0
        include_tag_key true
        tag_key docker.test
        reconnect_on_error true
    </match>

    This I start with the command – docker run -p 24224:24224 -v /data:/fluentd/log vg/fluentd:latest
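One thing worth keeping in mind about this configuration: with `buffer_type memory` and `flush_interval 60`, Fluentd holds records in an in-memory buffer and writes them to Elasticsearch at most once per interval, so a record can sit unseen for up to a minute. A minimal sketch of interval-based flushing (illustrative only, not Fluentd's actual implementation):

```python
import time

class IntervalBuffer:
    """Hold records in memory and flush when the interval elapses —
    a toy model of Fluentd's buffer_type memory + flush_interval."""

    def __init__(self, flush_interval, sink, clock=time.monotonic):
        self.flush_interval = flush_interval
        self.sink = sink          # callable that receives a flushed batch
        self.clock = clock        # injectable clock, for testing
        self.buffer = []
        self.last_flush = clock()

    def append(self, record):
        self.buffer.append(record)
        self.maybe_flush()

    def maybe_flush(self):
        # Flush only when the interval has elapsed and there is data
        if self.buffer and self.clock() - self.last_flush >= self.flush_interval:
            self.sink(self.buffer)
            self.buffer = []
            self.last_flush = self.clock()

# Usage with a fake clock, showing a record only leaves the buffer
# after the interval has passed:
now = [0.0]
flushed = []
buf = IntervalBuffer(60, flushed.append, clock=lambda: now[0])
buf.append({"msg": "hello"})   # buffered, not flushed yet
now[0] = 61.0
buf.maybe_flush()              # interval elapsed -> batch flushed
```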

    When I run my processes (that generate logs) and run these two containers, I see the following towards the end of stdout for the Fluentd container –

    2017-06-15 12:16:33 +0000 [info]: Connection opened to Elasticsearch cluster => {:host=>"", :port=>9200, :scheme=>"http", :user=>"elastic", :password=>"obfuscated"}

    However, beyond this, I see no logs. When I browse to http://localhost:9200, I only see the Elasticsearch welcome message.

    I know the logs are reaching the Fluentd container, because when I change fluent.conf to redirect to a file, I see all the logs as expected. What am I doing wrong in my setup of Elasticsearch? How can I see all the indices laid out correctly in my browser / through Kibana?

  • One solution collected from the web for “Logs not being flushed to Elasticsearch container through Fluentd”

    It seems that you are on the right track. First, check the indices that were created in Elasticsearch:

    curl 'localhost:9200/_cat/indices?v'


    There you can see each index name. So pick one and search within it:

    curl 'localhost:9200/INDEXNAME/_search'

    Docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html
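If you would rather script these two checks, a small sketch that extracts index names from the tabular `_cat/indices?v` output and pulls documents out of a `_search` response (the sample data below is illustrative, not from the poster's cluster):

```python
import json

def parse_cat_indices(text):
    """Extract index names from the tabular output of /_cat/indices?v.
    The first line is the header; 'index' is the name column."""
    lines = text.strip().splitlines()
    col = lines[0].split().index("index")
    return [line.split()[col] for line in lines[1:]]

def extract_hits(search_response):
    """Pull the _source documents out of a /_search JSON response."""
    return [hit["_source"] for hit in search_response["hits"]["hits"]]

# Illustrative sample output (column layout matches _cat/indices?v)
sample_cat = """health status index   uuid         pri rep docs.count docs.deleted store.size pri.store.size
yellow open   fluentd abc123xyz456   5   1         42            0      1.2mb          1.2mb
"""
print(parse_cat_indices(sample_cat))   # -> ['fluentd']

# Illustrative sample /_search response body
sample_search = json.loads(
    '{"hits": {"total": 1, "hits": '
    '[{"_index": "fluentd", "_source": {"message": "hello"}}]}}'
)
print(extract_hits(sample_search))     # -> [{'message': 'hello'}]
```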

    However, I recommend using Kibana for a better experience. Just start it; by default it looks for an Elasticsearch instance on localhost. In Kibana's configuration, enter the index name you now know, and start to play with it.
