Docker – issue command from one linked container to another

I’m trying to set up a primitive CI/CD pipeline using 2 Docker containers — I’ll call them jenkins and node-app. My aim is for the jenkins container to run a job upon commit to a GitHub repo (that’s done). That job should run a deploy.sh script on the node-app container. Therefore, when a developer commits to GitHub, jenkins picks up the commit, then kicks off a job including automated tests (in the future) followed by a deployment on node-app.
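The question never shows what deploy.sh contains. As a hedged sketch (the app path comes from the Dockerfile below; the git remote/branch and the process-management approach are my assumptions, not the asker's script), it might look something like this. The snippet writes the script to /tmp so it can be syntax-checked without actually deploying anything:

```shell
# Hypothetical deploy.sh for the node-app container -- a sketch only.
# The app path matches the Dockerfile's WORKDIR; the git branch and the
# way the process is restarted are assumptions.
cat > /tmp/deploy.sh <<'EOF'
#!/bin/sh
set -e
cd /usr/src/final-exercise                 # WORKDIR from the Dockerfile
git pull origin master                     # branch name assumed
npm install                                # refresh dependencies
pkill -f 'node .*app.js' || true           # stop the old process, if any
nohup node app.js > /tmp/app.log 2>&1 &    # restart the app
EOF
chmod +x /tmp/deploy.sh
sh -n /tmp/deploy.sh && echo "deploy.sh syntax OK"
```

On node-app itself the script would live in the app directory (e.g. /usr/src/final-exercise/deploy.sh), which is what the answer's `docker exec` command assumes.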

The jenkins container is built from the latest jenkins image (Dockerfile).

The node-app container’s Dockerfile is:

    FROM node:latest
    EXPOSE 80
    WORKDIR /usr/src/final-exercise
    ADD . /usr/src/final-exercise
    
    RUN apt-get update -y
    RUN apt-get install -y nodejs npm
    RUN cd /usr/src/final-exercise && npm install
    
    CMD ["node", "/usr/src/final-exercise/app.js"]
    

    jenkins and node-app are linked using Docker Compose, and that docker-compose.yml file contains (updated, thanks to @alkis):

    node-app:
      container_name: node-app
      build: .
      ports:
       - 80:80
      links:
       - jenkins
    jenkins:
      container_name: jenkins
      image: jenkins
      ports:
       - 8080:8080
      volumes:
       - /home/ec2-user/final-exercise:/var/jenkins
    

    The containers are built using docker-compose up -d and start as expected. docker ps yields (updated):

    CONTAINER ID        IMAGE                    COMMAND                  CREATED                 STATUS              PORTS                               NAMES
    69e52b216d48        finalexercise_node-app   "node /usr/src/final-"   3 hours ago         Up 3 hours          0.0.0.0:80->80/tcp                  node-app
    5f7e779e5fbd        jenkins                  "/bin/tini -- /usr/lo"   3 hours ago         Up 3 hours          0.0.0.0:8080->8080/tcp, 50000/tcp   jenkins
    

    I can ping jenkins from node-app and vice versa.

    Is this even possible? If not, am I making an architectural mistake here?

    Thank you very much in advance, I appreciate it!

    EDIT:
    I’ve stumbled upon nsenter, which makes it easy to enter a container’s shell using this and this. However, both assume that the origin (in their case the host machine, in my case the jenkins container) has Docker installed in order to find the PID of the destination container. I can nsenter into node-app from the host, but still no luck from jenkins.

One solution:

    node-app:
      build: .
      ports:
       - 80:80
      links:
       - finalexercise_jenkins_1
    
    jenkins:
      image: jenkins
      ports:
       - 8080:8080
      volumes:
       - /home/ec2-user/final-exercise:/var/jenkins
    

    Try the above. You are linking by the image name, but you must use the container name.
    In your case, since you don’t explicitly specify a container name, it gets auto-generated like this:

    finalexercise : the folder where your docker-compose.yml is located
    node-app      : the service name in the yml
    1             : you only have one container with the prefix finalexercise_node-app; if you started a second one, its name would be finalexercise_node-app_2
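
That naming scheme can be reconstructed mechanically. Assuming, as in the question, that the compose project folder is called finalexercise, the generated name comes out as:

```shell
# Rebuild the name docker-compose (v1 naming scheme) auto-generates:
# <project>_<service>_<index>
project=finalexercise   # folder containing docker-compose.yml
service=node-app        # service key in the yml
index=1                 # first (and only) instance of the service
name="${project}_${service}_${index}"
echo "$name"   # finalexercise_node-app_1
```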
    

    The yml file would then look like this:

    node-app:
      build: .
      container_name: my-node-app
      ports:
       - 80:80
      links:
       - my-jenkins
    
    jenkins:
      image: jenkins
      container_name: my-jenkins
      ports:
       - 8080:8080
      volumes:
       - /home/ec2-user/final-exercise:/var/jenkins
    

    Of course you can specify a container name for node-app as well (as above), so each side has a constant name to use for communication.

    Update

    To test, open a bash shell in the jenkins container:

    docker exec -it my-jenkins bash
    

    Then try to ping my-node-app, or telnet to the specific port (ping takes a hostname only, no port):

    ping my-node-app
    

    Or you could

    telnet my-node-app 80
    

    Update

    What you want to do is easily accomplished with the exec command.
    From your host, execute this first, so you are sure it works:

    docker exec -i <container_name> ./deploy.sh
    

    If the above works, then your problem reduces to executing the same command from a container. As it is, you can’t do that, since the container issuing the command (jenkins) has no access to your host’s Docker installation (which not only recognises the command, but controls the container you need access to).

    I haven’t used either of them, but I know of two solutions:

    1. Use this official guide to gain access to your host’s docker daemon and issue docker commands from your containers as if you were doing it from your host.
    2. Mount the docker binary and socket into the container, so the container acts as if it is the host (every command will be executed by the docker daemon of your host, since it’s shared).
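
For the second option, the jenkins service's volumes might be extended like this (a sketch only: the docker client path /usr/bin/docker varies by host, and the jenkins user typically needs root or docker-group rights on the socket):

```yaml
jenkins:
  image: jenkins
  container_name: my-jenkins
  ports:
   - 8080:8080
  volumes:
   - /home/ec2-user/final-exercise:/var/jenkins
   - /var/run/docker.sock:/var/run/docker.sock  # host's Docker daemon socket
   - /usr/bin/docker:/usr/bin/docker            # host's docker client (path may differ)
```

With that in place, a `docker exec node-app ./deploy.sh` run inside my-jenkins is carried out by the host’s daemon, against the host’s containers.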

    This thread from SO gives some more insight about this issue.
