How to deploy a self-building Docker image to make changes to itself in respect to the local environment?

As a quick recap, Docker serves as a way to capture code or configuration changes for a specific web service or run environment, somewhat like lightweight virtual machines, all from the cozy confines of a Linux terminal and a text file. Docker images are save points made of layers of code; they are built either from Dockerfiles or committed from running containers (which themselves need a base image to start from anyway). A Dockerfile automates the image build process: it rolls all the commands and actions you want every new container spawned from the image to have into one file.
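To make that concrete, here is a minimal Dockerfile sketch. The base image, package name, and service command are placeholders I made up for illustration, not anything from the question:

```dockerfile
# Hypothetical example; names are placeholders.
# Base image: each instruction below adds one layer to the image.
FROM centos:7
# Copy a local package into the image.
COPY app.rpm /tmp/app.rpm
# Install it, then remove the installer file to keep the layer small.
RUN yum install -y /tmp/app.rpm && rm /tmp/app.rpm
# Default command that containers run at startup.
CMD ["myservice", "--foreground"]
```

Running `docker build -t myimage .` in a directory containing this file and `app.rpm` would produce a reusable image.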

Now this is great and all, but I want to take this a step further. Building images, especially those with dependencies, is cumbersome because (1) you have to rely on commands that are not present in the default OS image, or (2) the base image carries a lot of other useless commands that are not needed.

Now, in my head I feel like this is possible, but I can't make the connection just yet. My desire is to get a Dockerfile to build itself from scratch (literally the `scratch` image) and adjust itself accordingly. It would copy in any desired dependency, such as an RPM, install it, find its startup command, and relay every dependency needed to successfully create and run the image without flaw back to the Dockerfile. In a programming sense:

    FROM scratch
    COPY package.rpm /
    RUN *desired cmds*
    

    Run errors are fed back into a file. The file searches the current OS for the needed dependencies and returns them to the RUN command.

    CMD *service start up*
    

    As for that CMD, we would run the service, get its status, and filter its startup commands back into the CMD portion.

    The problem, though, is that I don't believe I can use Docker to these ends. Doing a docker build, retaining its errors, and feeding them back into the build again seems challenging. I wish Docker came equipped with this, because it seems my only chance of performing such a task would be through a script, which wreaks havoc on portability.

    Any ideas?
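    For what it's worth, the error-feedback loop described above can at least be sketched as a wrapper script. This is only a sketch under assumptions: `retry_build`, the log file, and the grep pattern are all invented here, and a real version would still need logic to map missing commands to packages and rewrite the Dockerfile between attempts.

    ```shell
    #!/usr/bin/env bash
    # Hypothetical sketch of a build-retry loop; not a Docker feature.
    # retry_build runs a build command up to $1 times, capturing stderr in $2;
    # between attempts it surfaces log lines that look like missing dependencies.
    retry_build() {
        local max=$1 log=$2
        shift 2
        local try
        for try in $(seq 1 "$max"); do
            if "$@" 2>"$log"; then
                echo "build succeeded on attempt $try"
                return 0
            fi
            # A real implementation would map these hits to rpm/apt packages
            # and edit the Dockerfile's RUN line before the next attempt.
            grep -E 'command not found|No such file' "$log" || true
        done
        echo "build still failing after $max attempts; see $log" >&2
        return 1
    }
    ```

    You would invoke it as, e.g., `retry_build 3 build-errors.log docker build -t myimage .` — but note this only automates the retry; inferring the right packages from error text is the genuinely hard part.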

2 Solutions for “How to deploy a self-building Docker image to make changes to itself in respect to the local environment?”

    Docker isn’t going to offer you painless builds. Docker doesn’t know what you want.

    You have several options here:

    A simple example:

    web:
      build: .
      volumes:
        - "./app:/src/app"
      ports:
        - "3030:3000"
    

    To use it:

    docker-compose up

    Docker compose will then:

    1. Name the service web
    2. Build using the current working directory as the build context
    3. Mount the app directory to /src/app in the container
    4. Expose container port 3000 as 3030 to the outside world.

    Note that instead of building locally you can also point at an image you found via Kitematic (which reads from registry.hub.docker.com): replace the build: . line in the example above with image: node:latest and Compose will pull a NodeJS image instead of building one.
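    Strictly speaking, pulling a prebuilt image uses the image: key rather than build:. Assuming the same compose file format as the example above, that variant would look something like:

    ```yaml
    web:
      image: node:latest   # pulled from the registry instead of built locally
      volumes:
        - "./app:/src/app"
      ports:
        - "3030:3000"      # host port 3030 -> container port 3000
    ```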

    Docker Compose is very similar to the docker command line. You can use https://lorry.io/ for help generating the docker-compose.yml files.

    • If you’re looking for an epic solution, then I would recommend something like Mesosphere for an enterprise Docker environment.

    There are other solutions you could also look into like Google’s Kubernetes and Apache Mesos, but the learning curve will increase.

    I also noticed you were mucking with IPs, and while I haven't used it myself, from what I hear Weave greatly simplifies the networking aspect of Docker, which is definitely not Docker's strong suit.

    This sounds more like a job for a provisioning system à la Ansible, Chef, or Puppet to me. I know some people use those to create images if you have to stay in Docker-land.
