Import broker definitions into Dockerized RabbitMQ

I have a RabbitMQ broker with some exchanges and queues already defined. I know I can export and import these definitions via the HTTP API. I want to Dockerize it, and have all the broker definitions imported when it starts.
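For reference, the round trip through the management HTTP API looks roughly like this. This is a sketch assuming the default `guest`/`guest` credentials, the default management port 15672, and the `rabbit_config.json` file name used below; the actual `curl` calls are commented out because they need a running broker:

```shell
# Sketch of exporting and re-importing broker definitions through the
# management HTTP API. Host, credentials, and file name are assumptions.
host="http://localhost:15672"
defs="rabbit_config.json"

# Export the current definitions (exchanges, queues, bindings, users, ...):
# curl -u guest:guest "$host/api/definitions" -o "$defs"

# Import them into another broker:
# curl -u guest:guest -H "content-type:application/json" \
#      -X POST -d @"$defs" "$host/api/definitions"

echo "definitions endpoint: $host/api/definitions"
```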

Ideally, this would be as easy as it is via the API. I could write a series of rabbitmqctl commands, but with a lot of definitions that could take quite some time. Also, every change somebody else makes through the web interface would have to be copied over by hand.

I have managed to do what I want by writing a script that sleeps, sends a curl request, and then starts the server, but this seems error-prone and not very elegant. Are there better ways to import/export definitions, or is this the best that can be done?

My Dockerfile:

    FROM rabbitmq:management
    LABEL description="Rabbit image" version="0.0.1"
    ADD /
    ADD rabbit_e6f2965776b0_2015-7-14.json /rabbit_config.json
    CMD ["/"]

And the startup script it runs:

    sleep 10 && curl -i -u guest:guest -d @/rabbit_config.json -H "content-type:application/json" http://localhost:15672/api/definitions -X POST &
    rabbitmq-server $@
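Instead of a fixed `sleep 10`, the entrypoint can poll the management API until it answers before POSTing the definitions. A sketch, assuming the same `guest`/`guest` credentials and `/rabbit_config.json` path as above (the script layout and function names are hypothetical, not from the original):

```shell
#!/bin/sh
# Hypothetical entrypoint: start the server, wait for the management API
# to come up, then import the definitions. Credentials and paths are
# assumptions based on the Dockerfile above.

# Poll the management API until it responds, for up to ~30 seconds.
wait_for_api() {
    tries=0
    until curl -fsS -u guest:guest http://localhost:15672/api/overview >/dev/null 2>&1; do
        tries=$((tries + 1))
        [ "$tries" -ge 30 ] && return 1
        sleep 1
    done
    return 0
}

# POST the exported definitions file to the definitions endpoint.
import_definitions() {
    curl -fsS -u guest:guest -H "content-type:application/json" \
         -X POST -d @/rabbit_config.json http://localhost:15672/api/definitions
}

# Only start the server when invoked with "start", so the functions
# above can be sourced or tested in isolation.
if [ "${1:-}" = "start" ]; then
    ( wait_for_api && import_definitions ) &
    exec rabbitmq-server
fi
```

Polling removes the race where a slow server start makes the fixed sleep too short, and the bounded retry count keeps a broken server from hanging the import loop forever.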

2 Solutions for “Import broker definitions into Dockerized RabbitMQ”

You could start a container with RabbitMQ, configure the resources (queues, exchanges, bindings), and then commit the configured container as a new image. That image can then be used to start new containers.


I am not sure whether this is an option for you, but the easiest way to handle this situation is to periodically create a new, empty RabbitMQ container and have it join the first container as part of a RabbitMQ cluster. The queue configuration will be copied over to the second container.

Then you can stop that container and use docker commit to create a versioned image of it in your Docker repository. The commit saves only the changes you have made on top of the base image, and it means you no longer have to re-import the configuration each time: pulling the latest image gives you the latest configuration.
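The commit workflow described above can be sketched as a shell session. Container and image names here are made up for illustration, and the docker commands themselves are commented out because they need a running Docker daemon:

```shell
# Sketch of the commit-based workflow; all names are hypothetical.
container="rabbit-seed"
image="myrepo/rabbitmq-configured:0.0.2"

# 1. Run a broker and configure it (web UI, API, or rabbitmqctl):
# docker run -d --name "$container" -p 15672:15672 rabbitmq:management

# 2. Freeze the configured container as a versioned image:
# docker stop "$container"
# docker commit "$container" "$image"

# 3. New containers start with the configuration baked in:
# docker run -d "$image"

echo "committed image: $image"
```

One caveat worth checking: `docker commit` does not capture paths declared as volumes in the base image, so verify that the broker state you care about actually ends up in the committed layers.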
