How to run a Redis server AND another application inside Docker?

I created a Django application which runs inside a Docker container. I needed to run background tasks from the Django application, so I used Celery, with Redis as the Celery broker.
If I install Redis in the Docker image (Ubuntu 14.04):

    RUN apt-get update && apt-get -y install redis-server
    RUN pip install redis

The Redis server is not launched: the Django application throws an exception because the connection to port 6379 is refused. If I start Redis manually, it works.

If I start the Redis server with the following command, it hangs:

    RUN redis-server

If I try to tweak the previous line, it does not work either:

    RUN nohup redis-server &

So my question is: is there a way to start Redis in the background and have it restart when the Docker container is restarted?

The Docker “last command” is already taken by:

    CMD uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
3 Solutions to “How to run a Redis server AND another application inside Docker?”

RUN commands only add new image layers; they are executed at build time, not when the container runs.

Use CMD instead. You can combine multiple commands by moving them into a shell script that CMD invokes:

    CMD start.sh

In the start.sh script, write the following:

    #!/bin/bash
    # Launch Redis in the background, then run uwsgi as the
    # container's foreground process.
    nohup redis-server &
    uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
    
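A minimal sketch of wiring this into the Dockerfile (the /usr/local/bin path is an assumption; any directory on the PATH works):

    # Copy the script into the image and make it executable,
    # so the shell-form CMD above can find it on the PATH.
    COPY start.sh /usr/local/bin/start.sh
    RUN chmod +x /usr/local/bin/start.sh
    CMD start.sh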

Alternatively, use supervisord, which can control both processes. The conf file might look like this:

    ...
    [program:redis]
    command=/usr/bin/redis-server /srv/redis/redis.conf
    stdout_logfile=/var/log/supervisor/redis-server.log
    stderr_logfile=/var/log/supervisor/redis-server_err.log
    autorestart=true
    
    [program:nginx]
    command=/usr/sbin/nginx
    stdout_events_enabled=true
    stderr_events_enabled=true
    

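A hedged sketch of running supervisord as the container's top-level process (assuming the Ubuntu 14.04 base image from the question and a supervisord.conf saved next to the Dockerfile):

    RUN apt-get update && apt-get -y install supervisor
    COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
    # -n keeps supervisord in the foreground so the container stays alive
    CMD ["/usr/bin/supervisord", "-n"]
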
When you run a Docker container, there is always a single top-level process. When you fire up your laptop, that top-level process is an init script, systemd, or the like. A Docker image has an ENTRYPOINT directive; this is the top-level process that runs in your Docker container, and anything else you want to run is a child of it. In order to run Django, a Celery worker, and Redis all inside a single Docker container, you would have to run a process that starts all three of them as children. As explained by Milan, you could set up a Supervisor configuration to do it and launch supervisord as your parent process.

Another option is to actually boot the init system. This gets you very close to what you want, since it basically runs things as though you had a full-scale virtual machine. However, you lose many of the benefits of containerization by doing that 🙂

The simplest way altogether is to run several containers using Docker Compose: a container for Django, one for your Celery worker, and another for Redis (and one for your data store as well?). That is pretty easy to set up. For example…

    # docker-compose.yml
    web:
        image: myapp
        command: uwsgi --http 0.0.0.0:8000 --module mymodule.wsgi
        links:
          - redis
          - mysql
    celeryd:
        image: myapp
        command: celery worker -A myapp.celery
        links:
          - redis
          - mysql
    redis:
        image: redis
    mysql:
        image: mysql
    

This would give you four containers for your four top-level processes. redis and mysql would be exposed inside your app containers under the DNS names “redis” and “mysql”, so instead of pointing at “localhost” you would point at “redis”.
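
A minimal sketch of what that looks like on the Django side (Celery 3.x-style BROKER_URL setting; the database name and user are placeholders):

    # settings.py
    # "redis" and "mysql" resolve to the linked containers, not localhost.
    BROKER_URL = 'redis://redis:6379/0'

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'myapp',   # placeholder
            'USER': 'root',    # placeholder
            'HOST': 'mysql',   # the linked container
        }
    }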

There is a lot of good info in the Docker Compose docs.
