Reuse inherited image's CMD or ENTRYPOINT

How can I include my own shell script CMD on container start/restart/attach, without removing the CMD used by an inherited image?

I am using the following, which executes my script fine but appears to overwrite the php image's CMD:

  FROM php
    
    COPY start.sh /usr/local/bin
    
    CMD ["/usr/local/bin/start.sh"]
    

    What should I do differently? I would like to avoid copy/pasting the ENTRYPOINT or CMD of the parent image, but maybe that’s not a good approach.

  • 2 Solutions collected from the web for “Reuse inherited image's CMD or ENTRYPOINT”

    As mentioned in the comments, there’s no built-in solution to this: from within a Dockerfile, you can’t see the value of the current CMD or ENTRYPOINT. A run-parts solution is nice if you control the upstream base image and include that code there, allowing downstream images to add their own changes. But Docker has one inherent property that causes problems with this approach: a container runs a single command, and that command needs to stay in the foreground. So if the upstream image’s command kicks off first, it stays running and never gives your later steps a chance to execute, and you’re left juggling the order in which commands run to ensure that exactly one long-running command is eventually started and does not exit.

    My personal preference is a much simpler, hardcoded option: add my own command or entrypoint, and make the last step of my script exec the upstream command. You will still need to manually identify the script name to call from the upstream Dockerfile. But now in your start.sh, you would have:

    #!/bin/sh
    
    # run various pieces of initialization code here
    # ...
    
    # kick off the upstream command:
    exec /upstream-entrypoint.sh "$@"
    

    By using an exec call, you transfer pid 1 to the upstream entrypoint so that signals get handled correctly. And the trailing "$@" passes through any command line arguments. You can use set to adjust the value of $@ if there are some args you want to process and extract in your own start.sh script.
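    For instance, here is a sketch of that `set` technique. The `--debug` flag, the simulated arguments, and the upstream path are illustrative, not part of the original answer:

```shell
#!/bin/sh
# Demo: pretend the container was started with these arguments.
set -- --debug foo bar

# Consume our own hypothetical --debug flag, then rebuild "$@" with `set --`
# so only the remaining arguments reach the upstream entrypoint.
DEBUG=false
rest=""
for a in "$@"; do
  if [ "$a" = "--debug" ]; then
    DEBUG=true
  else
    rest="$rest $a"   # simple sketch: breaks on args containing spaces
  fi
done
set -- $rest          # "$@" now holds only: foo bar

echo "debug=$DEBUG remaining: $*"
# exec /upstream-entrypoint.sh "$@"   # hand off as before
```

    Note the word-splitting caveat: a production script would need a more careful rebuild (for example with `eval`) if arguments can contain whitespace.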

    If the base image is not yours, you unfortunately have to call the parent command manually.
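    For example, with the question’s Dockerfile and the official php image (whose entrypoint script is docker-php-entrypoint and whose default command is php -a), the manual wiring might look like:

```dockerfile
FROM php

COPY start.sh /usr/local/bin/

ENTRYPOINT ["/usr/local/bin/start.sh"]
# Setting ENTRYPOINT in a child image resets any CMD inherited from the
# parent, so re-declare it here.
CMD ["php", "-a"]
```

    and inside start.sh the hand-off would be `exec docker-php-entrypoint "$@"` rather than the generic /upstream-entrypoint.sh shown above.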

    If you own the parent image, you can try what the people at camptocamp suggest here.

    They basically use a generic script as an entry point that calls run-parts on a directory. What that does is run all scripts in that directory in lexicographic order. So when you extend an image, you just have to put your new scripts in that same folder.

    However, that means you’ll have to maintain order by prefixing your scripts which could potentially get out of hand. (Imagine the parent image decides to add a new script later…).

    Anyway, that could work.
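    A minimal sketch of such an entrypoint (assuming you control the base image; the /docker-entrypoint.d path and fragment names are illustrative):

```shell
#!/bin/sh
# run-parts-style entrypoint: child images drop *.sh fragments into
# /docker-entrypoint.d, and they run in lexicographic order
# (10-migrate.sh before 20-seed.sh) before the real CMD starts.
set -e

run_init_parts() {
  for f in "$1"/*.sh; do
    [ -e "$f" ] || continue   # glob matched nothing: empty or missing dir
    echo "entrypoint: running $f"
    . "$f"                    # source each fragment in order
  done
}

run_init_parts /docker-entrypoint.d

exec "$@"   # finally hand off to the image's CMD
```

    Sourcing the fragments (rather than executing them) lets later fragments see variables set by earlier ones, which is one reasonable design choice here.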

    Update #1

    There is a long discussion on this docker-compose issue about provisioning after container run. One suggestion is to wrap your docker run or docker-compose command in a shell script and then run docker exec for your other commands.

    If you’d like to use that approach, you basically keep the parent CMD as the run command and you place yours as a docker exec after your docker run.
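    A sketch of such a wrapper script. The image name, container name, and provision.sh path are illustrative, and the DOCKER variable is only there so the CLI can be stubbed out:

```shell
#!/bin/sh
# Wrapper approach: keep the parent image's CMD as the container's main
# process, then layer extra provisioning on top with `docker exec`.
set -e
DOCKER="${DOCKER:-docker}"

run_and_provision() {
  # Start detached; the parent image's CMD stays pid 1 inside the container.
  "$DOCKER" run -d --name "$1" "$2"

  # Once it is up, run the extra steps the Dockerfile could not express.
  "$DOCKER" exec "$1" /usr/local/bin/provision.sh
}

# Typical call, commented out so the sketch has no side effects:
# run_and_provision app my-php-image
```

    In practice you would also want a readiness check (polling the service, or `docker inspect` on health status) between the run and the exec, rather than assuming the main process is immediately ready.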
