Automatically create UDP input for Graylog2 server running in Docker?

We’re running a Graylog2 server in a Docker container in our development environment. It works like a charm apart from the fact that we have to re-create the UDP input every time we launch the container.

Has anyone figured out a convenient way to automatically create Graylog2 inputs?

6 Solutions

    We use an auto-loaded content pack in a newly created Docker container.

    Dockerfile:

    FROM graylog2/server:latest

    # Ship the content pack into the image's content pack directory
    COPY udp-input-graylog.json /usr/share/graylog/data/contentpacks

    # Enable the content pack loader and apply the pack on first startup
    ENV GRAYLOG_CONTENT_PACKS_AUTO_LOAD udp-input-graylog.json
    ENV GRAYLOG_CONTENT_PACKS_LOADER_ENABLED true
    ENV GRAYLOG_CONTENT_PACKS_DIR data/contentpacks
    

    udp-input-graylog.json:

    {
      "name":"UDP GELF input on 12201",
      "description":"Adds a global UDP GELF input on port 12201",
      "category":"Inputs",
      "inputs":[
        {
          "title":"udp input",
          "configuration":{
            "override_source":null,
            "recv_buffer_size":262144,
            "bind_address":"0.0.0.0",
            "port":12201,
            "decompress_size_limit":8388608
          },
          "static_fields":{},
          "type":"org.graylog2.inputs.gelf.udp.GELFUDPInput",
          "global":true,
          "extractors":[]
        }
      ],
      "streams":[],
      "outputs":[],
      "dashboards":[],
      "grok_patterns":[]
    }
    

    In the end, this did the trick for me: I ended up inserting the relevant configuration directly into MongoDB.

    https://github.com/kimble/graylog2-docker
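    If you go this route, the write itself can be scripted. Below is a minimal sketch, not a supported API: it assumes Graylog's defaults (a database named graylog and an inputs collection) and mirrors the field layout of the content-pack JSON above. Verify both against your own instance before relying on it.

    import datetime

    from pymongo import MongoClient

    # Connection string, database name, and collection name are assumptions
    # based on Graylog defaults; adjust them to match your deployment.
    client = MongoClient("mongodb://localhost:27017")
    inputs = client["graylog"]["inputs"]

    # Insert a global GELF UDP input, mirroring the content-pack JSON above.
    inputs.insert_one({
        "title": "udp input",
        "type": "org.graylog2.inputs.gelf.udp.GELFUDPInput",
        "global": True,
        "configuration": {
            "bind_address": "0.0.0.0",
            "port": 12201,
            "recv_buffer_size": 262144,
            "override_source": None,
            "decompress_size_limit": 8388608,
        },
        "creator_user_id": "admin",            # assumed; Graylog records the creating user
        "created_at": datetime.datetime.utcnow(),
    })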

    I use Ansible to start and prepare Graylog2 in containers, and I simply create a global UDP input by calling the Graylog2 REST API (after the Graylog2 auto-configuration has finished):

    - name: create graylog global udp input for receiving logs
      uri:
        url: "http://{{ ipv4_address }}:9000/api/system/inputs"
        method: POST
        user: "{{ graylog_admin }}"
        password: "{{ graylog_pwd }}"
        force_basic_auth: yes
        status_code: 201
        body_format: json
        body:
          title: "xxx global input"
          type: "org.graylog2.inputs.gelf.udp.GELFUDPInput"
          global: true
          configuration:
            bind_address: "0.0.0.0"
            port: 12201
            recv_buffer_size: 262144
            override_source: null
            decompress_size_limit: 8388608
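
    The same REST call works from any HTTP client, so it is easy to test outside Ansible. Here is a minimal sketch using Python's requests library; the host, port, and credentials are placeholders:

    import requests

    # POST the same input definition the Ansible task sends; Graylog
    # answers 201 Created when the input is accepted.
    resp = requests.post(
        "http://127.0.0.1:9000/api/system/inputs",
        auth=("admin", "password"),  # placeholder credentials, sent as basic auth
        json={
            "title": "xxx global input",
            "type": "org.graylog2.inputs.gelf.udp.GELFUDPInput",
            "global": True,
            "configuration": {
                "bind_address": "0.0.0.0",
                "port": 12201,
                "recv_buffer_size": 262144,
                "override_source": None,
                "decompress_size_limit": 8388608,
            },
        },
    )
    resp.raise_for_status()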
    


    We have a Puppet solution for this (Graylog2 v2.2.2). Basically, enable content packs in server.conf and list the relevant file(s) that will hold your JSON content (see the UDP input above for a nice example). Then it is simply a matter of a Puppet file resource that places the pack on the Graylog server in the configured directory (the default is /usr/share/graylog-server/contentpacks).

    The content pack will be loaded on the first run of Graylog.

    It's a nice way of getting a lot of configuration in.
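
    For reference, the server.conf side of this looks roughly as follows. This is a sketch assuming the Graylog 2.x option names that correspond to the GRAYLOG_* environment variables in the first answer; check your server.conf for the exact keys:

    content_packs_loader_enabled = true
    content_packs_dir = /usr/share/graylog-server/contentpacks
    content_packs_auto_load = udp-input-graylog.json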

    After messing around with this for some time, I’ve come up with a kludge. It’s probably not the best way to do this, but documentation is sparse and it seems to work (however, see the caveats at the end of this post).

    The problem is that every time you restart your Graylog2 container, the server generates a new unique node ID. The inputs that you define through the web interface are tied to a specific node ID. Thus, every time the node ID changes, the inputs that you’ve defined become useless.

    The solution has two parts:

    1. Ensure that MongoDB stores its data somewhere persistent – either in a data container or in a directory mounted from the host file system (a sketch follows after the Dockerfile below).

    2. Force the container to use the same node ID every time. I did this by extending the sjoerdmulder/graylog2 image:

    Here’s the Dockerfile:

    FROM sjoerdmulder/graylog2
    
    # set a Graylog2 node ID
    #
    RUN echo "mynodeid" > /opt/graylog2-server/server-node-id
    RUN chmod 0444 /opt/graylog2-server/server-node-id
    

    This will write “mynodeid” to the appropriate file and write-protect it by changing the permissions. The server then starts up normally and loads the appropriate node ID. Now you can go into the web interface, create your input, and be confident that the next time you start up your container, it will still be there.
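
    For part 1, here is a hedged docker-compose sketch (not from the original answer) that mounts MongoDB's data directory from the host. The internal path /data/db is MongoDB's stock default and an assumption here; check where the image you run actually keeps its data:

    version: "2"
    services:
      graylog:
        build: .                      # the Dockerfile above
        ports:
          - "9000:9000"               # web interface
          - "12201:12201/udp"         # GELF UDP input
        volumes:
          - ./mongodb-data:/data/db   # persist MongoDB data between container restarts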

    Important note:

    I’m not using a cluster. I’m just running a single instance of Graylog2 accepting input from a single web application. I have no idea what this will do when part of a cluster.

    Another option is to create your inputs as “global” – that way, regardless of the node ID generated for your current instance, you will get back all of your configured global inputs.
