How to run a Redis server AND another application inside Docker?

I created a Django application which runs inside a Docker container. I needed to run background tasks from the Django application, so I used Celery, with Redis as the broker.
If I install Redis in the Docker image (Ubuntu 14.04):

RUN apt-get update && apt-get -y install redis-server
RUN pip install redis

The Redis server is not launched: the Django application throws an exception because the connection to port 6379 is refused. If I start Redis manually, it works.

If I start the Redis server with the following command, the build hangs:

    RUN redis-server

If I try to tweak the previous line, it does not work either:

    RUN nohup redis-server &

So my question is: is there a way to start Redis in the background and have it restart when the Docker container is restarted?

The Docker “last command” (CMD) is already used by:

    CMD uwsgi --http --module mymodule.wsgi

3 Answers

    RUN commands only add new image layers; they are executed at build time, not when the container runs.

    Use CMD instead. You can combine multiple commands by externalizing them into a shell script that is invoked by CMD:


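    For example, the Dockerfile part might look like this (a sketch; start.sh is an assumed name and location for the script, not given in the original answer):

        COPY start.sh /start.sh
        RUN chmod +x /start.sh
        CMD ["/start.sh"]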
    In the script you write the following:

        #!/bin/sh
        # Start Redis in the background, then run uwsgi as the foreground process
        nohup redis-server &
        uwsgi --http --module mymodule.wsgi

    Use supervisord, which can control both processes. The conf file might look like this:

        [program:redis]
        command=/usr/bin/redis-server /srv/redis/redis.conf
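    A fuller sketch (program names and file paths here are assumptions, not from the original answer) would also keep supervisord itself in the foreground so the container does not exit:

        [supervisord]
        ; run in the foreground so the container stays alive
        nodaemon=true

        [program:redis]
        command=/usr/bin/redis-server /srv/redis/redis.conf

        [program:uwsgi]
        command=uwsgi --http --module mymodule.wsgi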

    When you run a Docker container, there is always a single top-level process. When you fire up your laptop, that top-level process is an “init” script, systemd or the like. A Docker image has an ENTRYPOINT directive: this is the top-level process that runs in your Docker container, with anything else you want to run being a child of it. In order to run Django, a Celery worker, and Redis all inside a single Docker container, you would have to run a process that starts all three of them as child processes. As explained by Milan, you could set up a Supervisor configuration to do it, and launch supervisord as your parent process.
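    In Dockerfile terms, that might look like this (a sketch; it assumes supervisor is installed from apt and the config is copied into the image):

        RUN apt-get update && apt-get -y install supervisor
        COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
        ENTRYPOINT ["/usr/bin/supervisord", "-n"]

    The -n flag keeps supervisord in the foreground, which is what Docker expects of its top-level process.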

    Another option is to actually boot the init system inside the container. This will get you very close to what you want, since it will basically run things as though you had a full-scale virtual machine. However, you lose many of the benefits of containerization by doing that 🙂

    The simplest way altogether is to run several containers using Docker Compose. A container for Django, one for your Celery worker, and another for Redis (and one for your data store as well?) is pretty easy to set up that way. For example (service names like web and worker are illustrative)…

        # docker-compose.yml
        web:
          image: myapp
          command: uwsgi --http --module mymodule.wsgi
          links:
            - redis
            - mysql
        worker:
          image: myapp
          command: celery worker -A myapp.celery
          links:
            - redis
            - mysql
        redis:
          image: redis
        mysql:
          image: mysql

    This would give you four containers for your four top-level processes. Inside your app containers, redis and mysql would be reachable under the DNS names “redis” and “mysql”, so instead of pointing at “localhost” you’d point at “redis”.
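    In the Django settings, that might look like this (a sketch; BROKER_URL is the classic Celery 3.x setting name, and database 0 is an assumption):

        # settings.py
        BROKER_URL = 'redis://redis:6379/0'

    Then docker-compose up starts all four containers together.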

    There is a lot of good info in the Docker Compose docs.
