Import broker definitions into Dockerized RabbitMQ

I have a RabbitMQ broker with some exchanges and queues already defined. I know I can export and import these definitions via the HTTP API. I want to Dockerize it, and have all the broker definitions imported when it starts.

Ideally, this would be as easy as it is via the API. I could write a series of rabbitmqctl commands, but with a lot of definitions this could take quite some time. Also, every change somebody else makes through the web interface would have to be added to the script by hand.
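For reference, the round-trip through the management HTTP API is just two curl calls. This sketch assumes the default guest:guest credentials and the management plugin listening on localhost:15672; the definitions.json filename is made up:

```shell
# Export the broker's current definitions (exchanges, queues, bindings,
# users, vhosts, ...) to a JSON file.
curl -sf -u guest:guest http://localhost:15672/api/definitions \
    -o definitions.json

# Import that file into a (possibly different) broker.
curl -sf -u guest:guest -H "content-type:application/json" \
    -X POST -d @definitions.json \
    http://localhost:15672/api/definitions
```

Both calls require a running broker with the management plugin enabled, so they cannot run before the server is up.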

I have managed to do what I want by writing a script that delays a curl request and then starts the server, but this seems error-prone and really not elegant. Are there any better ways to do definition importing/exporting, or is this the best that can be done?

My Dockerfile:

    FROM rabbitmq:management
    LABEL description="Rabbit image" version="0.0.1"
    ADD /
    ADD rabbit_e6f2965776b0_2015-7-14.json /rabbit_config.json
    CMD ["/"]

And the startup script it runs:

    sleep 10 && curl -i -u guest:guest -d @/rabbit_config.json -H "content-type:application/json" http://localhost:15672/api/definitions -X POST &
    rabbitmq-server $@
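The fixed sleep is fragile: if the server takes longer than 10 seconds to boot, the import silently fails. A more robust sketch polls the management API first. This is an assumption-laden illustration, not RabbitMQ's own tooling: the script name init.sh, the "serve" argument, and the retry_until helper are all invented here; the guest:guest credentials, /rabbit_config.json path, and /api/definitions endpoint come from the question.

```shell
#!/bin/sh
# init.sh (hypothetical name): wait for RabbitMQ's management API to come
# up, import the definitions, and keep the server in the foreground.

# retry_until CMD...: re-run CMD until it succeeds, sleeping 1s between
# attempts; give up with a non-zero status after $RETRIES tries (default 30).
retry_until() {
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge "${RETRIES:-30}" ] && return 1
        sleep 1
    done
}

# Only start the broker when invoked with the (made-up) "serve" argument,
# so the helper above can be exercised on its own.
if [ "${1:-}" = "serve" ]; then
    (
        # Block until the management API answers, then POST the definitions.
        retry_until curl -sf -u guest:guest \
            http://localhost:15672/api/overview >/dev/null &&
        curl -sf -u guest:guest -H "content-type:application/json" \
            -X POST -d @/rabbit_config.json \
            http://localhost:15672/api/definitions
    ) &
    # Keep the server as the container's foreground process.
    exec rabbitmq-server
fi
```

The Dockerfile's last line would then become something like `CMD ["/init.sh", "serve"]`. It is also worth checking the management plugin's load_definitions configuration setting, which newer RabbitMQ releases provide to load a definitions file at boot and sidestep the race entirely.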

2 Solutions for "Import broker definitions into Dockerized RabbitMQ"

    You could start your container with RabbitMQ, configure the resources (queues, exchanges, bindings) and then commit your configured container as a new image. This image can be used to start new containers.


I am not sure that this is an option, but the absolute easiest way to handle this situation is to periodically create a new, empty RabbitMQ container and have it join the first container to form a RabbitMQ cluster. The queue and exchange configuration will be copied over to the second container.

Then, you can stop the container and use docker commit to create a versioned image of it in your Docker repository. Since the commit only saves the changes you have made on top of the base image, you would not have to worry about re-importing the configuration each time: just pull the latest image to get the latest configuration!
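The configure-then-commit workflow both solutions describe can be sketched as a short shell session; the container and image names here are invented for illustration:

```shell
# Start a throwaway broker from the stock image (names are made up).
docker run -d --name rabbit-setup rabbitmq:management

# ... configure exchanges, queues and bindings via the web UI on
# port 15672, or by POSTing a definitions file as in the question ...

# Freeze the configured container as a new, versioned image.
docker stop rabbit-setup
docker commit rabbit-setup my-repo/rabbitmq:0.0.2
docker rm rabbit-setup

# Fresh containers now start with the definitions already in place.
docker run -d --name rabbit my-repo/rabbitmq:0.0.2
```

One caveat worth verifying before relying on this: docker commit does not capture data stored in volumes, and the official rabbitmq image declares a VOLUME for /var/lib/rabbitmq, where the broker keeps its database, so the committed image may not actually contain the imported definitions.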
