Can you share Docker containers?

I have been trying to figure out why one might choose to add every “step” of their setup to a Dockerfile, which builds an image that creates your container in a certain state.

The alternative in my mind is to just create a container from a simple base image like ubuntu and then (via shell input) configure your container the way you’d like.

But can you share containers? If you can only share images with Docker then I’d understand why one would want every step of their container setup listed in a Dockerfile.

The reason I ask is that I imagine there is some amount of headache involved in porting shell commands, file changes for configs, etc. to correct Dockerfile syntax and having them work correctly. But as a novice with Docker I could be overestimating the difficulty of that task.

EDIT: I suppose another valid reason for having the Dockerfile with each setup step is documentation of the initial state of the container, as opposed to being given a container in a certain state but not necessarily having a way to know everything that was done to it since the base image.

One solution collected from the web for “Can you share Docker containers?”

    But can you share containers? If you can only share images with Docker then I’d understand why one would want every step of their container setup listed in a Dockerfile.

    Strictly speaking, no. However, you can create a new image from an existing container using the docker commit command:

    $ docker commit <container-name> <image-name>
    

    This command creates a new image from the existing container. You can then push that image to and pull it from registries, export and import it, and create new containers from it.
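    As a sketch, the workflow might look like the following (the container name `mycontainer`, the registry account `myuser`, and the tag `v1` are assumptions for illustration; they are not from the original question):

    ```shell
    # Snapshot the container's current filesystem into a new image.
    # Note: docker commit captures filesystem changes, but not data
    # stored in volumes.
    docker commit mycontainer myuser/myimage:v1

    # Push the image to a registry so others can pull it
    docker push myuser/myimage:v1

    # On another machine: pull the image and start a fresh container from it
    docker pull myuser/myimage:v1
    docker run -it myuser/myimage:v1 /bin/bash
    ```

    Alternatively, `docker save` / `docker load` can move the committed image between hosts as a tar archive without going through a registry.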

    The reason I ask is that I imagine there is some amount of headache involved in porting shell commands, file changes for configs, etc. to correct Dockerfile syntax and having them work correctly. But as a novice with Docker I could be overestimating the difficulty of that task.

    If you’re already using some other mechanism for automated configuration, you can integrate your existing automation into the Docker build. For instance, if you already configure your machines with shell scripts, simply add build steps to your Dockerfile that copy your install scripts into the image and execute them. In principle, this also works with configuration-management tools like Puppet, Salt and others.
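    For example, a minimal Dockerfile wrapping an existing provisioning script might look like this (the script name `setup.sh` and the base image tag are assumptions for illustration):

    ```dockerfile
    # Start from the same base image you would have configured by hand
    FROM ubuntu:22.04

    # Copy the existing provisioning script into the image
    COPY setup.sh /tmp/setup.sh

    # Run it during the build, then remove it from the final image
    RUN chmod +x /tmp/setup.sh \
        && /tmp/setup.sh \
        && rm /tmp/setup.sh
    ```

    Built with `docker build -t myimage .`, this reuses the script unchanged, so little or no porting to Dockerfile syntax is needed.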

    EDIT: I suppose another valid reason for having the Dockerfile with each setup step is documentation of the initial state of the container, as opposed to being given a container in a certain state but not necessarily having a way to know everything that was done to it since the base image.

    True. As mentioned in the comments, there are clear advantages to having an automated and reproducible build of your image. If you build your containers manually and then create an image with docker commit, you don’t necessarily know how to re-build the image at a later point in time (which may become necessary when you want to release a new version of your application or re-build the image on top of an updated base image).
