Logs not being flushed to Elasticsearch container through Fluentd

I have a local setup running two containers:

  • One for Elasticsearch (set up for development as detailed here: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html). I run it as directed in the article:

        docker run -p 9200:9200 -e "http.host=" -e "transport.host=" docker.elastic.co/elasticsearch/elasticsearch:5.4.1

  • Another as a Fluentd aggregator (using this base image: https://hub.docker.com/r/fluent/fluentd/). My fluent.conf for testing purposes is as follows:

    <source>
        @type forward
        port 24224
    </source>
    <match **>
        @type elasticsearch
        host    # Verified internal IP address of the ES container
        port 9200
        user elastic
        password changeme
        index_name fluentd
        buffer_type memory
        flush_interval 60
        retry_limit 17
        retry_wait 1.0
        include_tag_key true
        tag_key docker.test
        reconnect_on_error true
    </match>

    I start this container with:

        docker run -p 24224:24224 -v /data:/fluentd/log vg/fluentd:latest
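    One way to confirm end to end that records reach Fluentd's forward port is to send a throwaway log line through Docker's fluentd logging driver. This is a sketch, not part of the original setup; the tag value is an assumption and only needs to match the `<match **>` pattern in fluent.conf:

        # Hypothetical smoke test: emit one log record into Fluentd's
        # forward input on localhost:24224 via Docker's fluentd log driver.
        docker run --rm \
          --log-driver=fluentd \
          --log-opt fluentd-address=localhost:24224 \
          --log-opt tag=docker.test \
          alpine echo "hello fluentd"

    If this record shows up in Elasticsearch after the 60-second flush interval, the forwarding pipeline itself is healthy and the problem is elsewhere.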

    When I run my processes (which generate logs) alongside these two containers, I see the following towards the end of the Fluentd container's stdout:

    2017-06-15 12:16:33 +0000 [info]: Connection opened to Elasticsearch cluster => {:host=>"", :port=>9200, :scheme=>"http", :user=>"elastic", :password=>"obfuscated"}

    However, beyond this, I see no logs. When I open http://localhost:9200 I only see the Elasticsearch welcome message.

    I know the logs are reaching the Fluentd container, because when I change fluent.conf to redirect to a file, I see all the logs as expected. What am I doing wrong in my Elasticsearch setup? How can I see all the indexes laid out correctly in my browser / through Kibana?
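    A useful sanity check here (an addition, assuming the default `elastic`/`changeme` credentials that ship with the 5.x X-Pack image) is to confirm from the host that Elasticsearch accepts those credentials at all:

        # Check cluster health with the same credentials fluent.conf uses;
        # a 401 here would explain why nothing is being indexed.
        curl -u elastic:changeme 'http://localhost:9200/_cluster/health?pretty'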

One solution:

    It seems that you are on the right track. First check which indexes were created in Elasticsearch:

    curl 'localhost:9200/_cat/indices?v'


    There you can see each index name. Pick one and search within it:

    curl 'localhost:9200/INDEXNAME/_search'

    Docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html

    However, I recommend using Kibana for a better experience. Start it, and by default it looks for Elasticsearch on localhost. In Kibana's configuration screen, enter the index name you found above and start exploring.
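    As a sketch, Kibana 5.4.1 can be started as a third container; the `ELASTICSEARCH_URL` value below is an assumption — point it at the Elasticsearch address as seen from the Kibana container:

        # Start Kibana and tell it where Elasticsearch lives
        # (replace <es-container-ip> with your ES container's address).
        docker run -p 5601:5601 \
          -e ELASTICSEARCH_URL=http://<es-container-ip>:9200 \
          docker.elastic.co/kibana/kibana:5.4.1

    Kibana's UI is then reachable at http://localhost:5601, where you can define the index pattern (e.g. the `fluentd` index from fluent.conf) and browse the documents.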
