How to deploy a self-building Docker image that makes changes to itself with respect to the local environment?

As a quick recap: Docker lets you define a specific service, its run environment, and its configuration in a plain text file and run it all from the cozy confines of a Linux terminal. Docker images are save points made of stacked layers; they are built either from a Dockerfile or committed from a container (which itself needed a base image to start from anyway). A Dockerfile automates the image build process: it rolls all the commands and actions you want every new container spawned from the image to have into one file.
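As a minimal sketch of that workflow (the base image, package, and file names here are placeholders for illustration, not taken from the question):

```dockerfile
# Start from an official base image
FROM ubuntu:22.04

# Install a dependency the service needs (placeholder package name)
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# Copy the service's files into the image (placeholder paths)
COPY app.sh /usr/local/bin/app.sh

# Default command for every container created from this image
CMD ["/usr/local/bin/app.sh"]
```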

Now this is great and all, but I want to take it a step further. Building images, especially ones with dependencies, is cumbersome because (1) the commands you need may not exist in the default OS image, or (2) the image ships with a lot of other commands you don't need.

Now, in my head I feel like this is possible, but I can't quite make the connection yet. My goal is to have a Dockerfile build itself from scratch (literally the scratch image) and adapt accordingly: copy in any desired dependency (an RPM, say), install it, find its startup command, and relay every dependency needed to successfully create and run the image, flaw-free, back to the Dockerfile. In a programming sense:

    FROM scratch
    COPY package.rpm /package.rpm
    RUN *desired cmds*

    Run errors are fed back into a file. The file is searched against the current OS for the dependencies needed, which are returned to the RUN command.

    CMD *service start up*

    As for that CMD, we would run the service, get its status, and filter its startup commands back into the CMD portion.

    The problem, though, is that I don't believe I can use Docker to these ends. Doing a docker build, retaining its errors, and feeding them back into the next build seems challenging. I wish Docker came equipped with this, because otherwise my only option seems to be a script, which wreaks havoc on the portability factor.
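For what it's worth, the capture-and-feed-back step can be roughed out in a shell script. This is only a sketch under stated assumptions: the function name and log format are invented here, and a real error-to-dependency mapping would be far more involved:

```shell
#!/bin/sh
# Sketch of the error-feedback idea: capture a build's stderr to a
# log file, then extract "command not found" style errors so the
# missing packages could be added back into the Dockerfile's RUN line.

missing_from_log() {
    # Print the unique command names the build reported as missing
    grep -o '[a-z0-9_-]*: not found' "$1" | cut -d: -f1 | sort -u
}

# In real use the log would come from:
#   docker build -t myimage . 2> build.log
# Here we simulate a failed build's log for illustration:
printf '/bin/sh: rpm: not found\n/bin/sh: curl: not found\n' > build.log

missing_from_log build.log
```

The extracted names would then have to be mapped to packages by hand (or by another script) before the next build attempt, which is exactly the portability problem described above.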

    Any ideas?

  2 Solutions:

    Docker isn’t going to offer you painless builds. Docker doesn’t know what you want.

    You have several options here:

    A simple example (docker-compose.yml):

    web:
      build: .
      volumes:
        - "app:/src/app"
      ports:
        - "3030:3000"

    To use it:

    docker-compose up

    Docker compose will then:

    1. Name the service web
    2. Build using the current working directory as the build context
    3. Mount the app directory at /src/app in the container
    4. Expose container port 3000 as port 3030 to the outside world.

    Note that you can also run an existing image you found via Kitematic (which reads from Docker Hub): replace the build: . line in the example above with image: node:latest and it will run a NodeJS container.

    Docker Compose is very similar to the docker command line, and the Compose documentation is a good reference when writing docker-compose.yml files.

    • If you’re looking for an epic solution, then I would recommend something like Mesosphere for an enterprise Docker environment.

    There are other solutions you could also look into like Google’s Kubernetes and Apache Mesos, but the learning curve will increase.

    I also noticed you were mucking with IPs; while I haven't used it myself, from what I hear Weave greatly simplifies the networking aspect of Docker, which is definitely not Docker's strong suit.

    Sounds more like a job for a provisioning system à la Ansible, Chef, or Puppet to me. I know some people use those to create images if you have to stay in Docker-land.
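For example, a minimal Ansible playbook along those lines might look like this (the host name, package, and paths are hypothetical; it assumes Ansible's docker connection plugin, which runs tasks inside an already-running container):

```yaml
# Sketch: provision dependencies inside a running container named
# "mycontainer" instead of baking them into a Dockerfile.
- hosts: mycontainer
  connection: docker
  tasks:
    - name: Install the service's dependencies
      apt:
        name: curl
        state: present

    - name: Copy the service's startup script into place
      copy:
        src: app.sh
        dest: /usr/local/bin/app.sh
        mode: "0755"
```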
