Using Docker Containers

I am in the process of learning how to use Docker and now have it installed on an Ubuntu 14.04 box. What I want to be able to do is easily switch between a combination of stacks. Typical stacks:

  • Ubuntu + MariaDB + Apache + PHP
  • CentOS + ditto
  • Ubuntu + MongoDB + Nginx + PHP

From my reading of the docs thus far I believe that I can do this in two ways:

    1. Loading separate containers for each piece – in the sense of 1 for Ubuntu, 1 for MariaDB, 1 for Apache + PHP – and linking them together
    2. Defining one container for the whole lot – i.e. one container per distro + db + server…

    What I don’t quite get yet is this – when I work with such a system and the DB is subjected to changes, I would like to have those changes in place the next time I reuse the same configuration. Would this require that I save the container as a tar archive and then load it later when required? In that case it would make sense to have at least those containers that are liable to be modified by the user as separate linked containers?

    Finally – suppose I have got the full stack up and running (be it as separate linked containers or as one mega container). And now I browse to the IP address where it is all installed. The base Ubuntu box has no web server installed. Will I reach the Apache instance running inside the Docker container automatically or do I somehow need to tell the system of the need to do this?

    I am a Docker newbie so some of my questions are probably rather naive. Nevertheless I would much appreciate any help.

  • 2 solutions for “Using Docker Containers”

    My 2 cents on the matter: you should work with separate linked containers – that’s simply the Docker way. One container typically hosts one app (such as the database or the web server).
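    As a sketch of that “one app per container” setup, you could start the database first and then link the web container to it (the image names `my/mariadb` and `my/apache` here are placeholders, not real images):

```shell
# Start the database container and give it a name to link against
docker run -d --name db my/mariadb

# Start the web container, linked to the database as hostname "db"
docker run -d --name web --link db:db my/apache
```

    Inside the `web` container, the database is then reachable under the alias `db`.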

    When you work with an app that requires persistent data, such as a database, the way to go is to mount volumes into the Docker container. This can be achieved via the -v flag of the docker run command.

    docker run -v /some/local/dir:/some/dir/in/container my/mariadb

    This means that the data in the container folder /some/dir/in/container is mapped to a local folder on the host system, so when you restart the container the data is still available. There are other best practices that can be used, such as data volume containers and the --volumes-from flag. All this is described in the Docker docs and the docker run reference.
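    The data-volume-container pattern mentioned above could look roughly like this (the image name `my/mariadb` and the volume path are assumptions for illustration):

```shell
# Create a container whose only job is to own a volume
docker run -v /var/lib/mysql --name dbdata busybox true

# Run the database with the volumes of "dbdata" mounted in
docker run -d --volumes-from dbdata my/mariadb
```

    The database container can then be removed and recreated while the data lives on in the `dbdata` container’s volume.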

    If you start a container with a web server (in your case Apache), the EXPOSE instruction can be used to expose e.g. port 80 on the container. To link that to the host system, a port mapping is required via -p or -P. The -p flag can be used like this:

    docker run -p 80:80 my/apache

    The command above maps port 80 on the host to port 80 in the container. You can also bind to a specific host interface (for example, the loopback interface) using the -p flag. More info on port mapping can be found in the Docker docs and also under the Linking Containers section.
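    Binding to a specific host interface looks like this (again, `my/apache` is a placeholder image name):

```shell
# Map container port 80 to host port 8080, but only on the loopback interface,
# so the server is reachable from the host itself and not from the network
docker run -d -p 127.0.0.1:8080:80 my/apache
```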

    Loading separate containers for each of the above and linking them together

    This will lead to 3 Dockerfiles, each with an EXPOSE instruction, so that, when your containers are up on your computer, if you browse to
    http://localhost:1234 (as an example) you will reach your first container (Ubuntu + MariaDB + Apache + PHP), and with
    http://localhost:2345 you will reach CentOS + ditto, and so on.
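    The build-and-run flow for the separate stacks could be sketched like this (the image tags, directory names, and host ports here are all made up for illustration – each directory is assumed to contain a Dockerfile that EXPOSEs port 80):

```shell
# Build and run the Ubuntu-based stack, mapped to host port 1234
docker build -t stack/ubuntu-mariadb ./ubuntu-mariadb
docker run -d -p 1234:80 stack/ubuntu-mariadb

# Build and run the CentOS-based stack, mapped to host port 2345
docker build -t stack/centos-mariadb ./centos-mariadb
docker run -d -p 2345:80 stack/centos-mariadb
```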

    Have a look at the Docker networking docs, and to find the IP address of a running container, look at

    docker inspect --format '{{ .NetworkSettings.IPAddress }}' container
