Is it possible to execute CMD in the middle of a Dockerfile?

I am installing hadoop-0.20.2 using Docker. I have two Dockerfiles: one for the Java installation and another for the Hadoop installation. I start the services with the CMD instruction:

 cmd ["path/to/start-all.sh"]

Now I want to write a third Dockerfile which executes an example Map-Reduce job. But there is a problem:

The third Dockerfile depends on the second (Hadoop) Dockerfile, e.g.:

     FROM sec_doc_file
    
     RUN /bin/hadoop fs -mkdir input
    

This requires the Hadoop services to be running, but they are only started when a container built from the second image is run. I want to start them as part of the third Dockerfile, before starting the MR job. Is that possible? If so, please provide an example. If not, what could be the other possibilities?

    # something like:
    FROM sec_doc_file
    # start services here
    RUN /bin/hadoop fs -mkdir input
    # continue with the Map-Reduce job

One solution:

The Docker image you use as a base for the new container is a base for files, not for processes that are supposed to be running. Each RUN creates a new AUFS layer from the filesystem changes its command produces, but any services started in an earlier RUN are no longer running when the next instruction executes. So, if you need a service to be up while performing some setup during docker build, you have to start it and do the setup within a single RUN instruction (concatenating the commands, or using a custom script). Example:

    FROM Gops/sec_doc_file
    RUN path/to/start-all.sh && /bin/hadoop fs -mkdir input
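
The "custom script" variant works the same way: copy a script into the image and invoke it from a single RUN, so that the services and the setup commands share one build step. A minimal sketch, where setup-hdfs.sh is a hypothetical name and the paths echo the placeholders above:

    #!/bin/sh
    # setup-hdfs.sh -- runs entirely inside one RUN step
    set -e
    path/to/start-all.sh           # bring the daemons up for this layer only
    /bin/hadoop fs -mkdir input    # setup that needs the running services
    path/to/stop-all.sh            # shut down cleanly before the layer is committed

and in the Dockerfile:

    FROM Gops/sec_doc_file
    ADD setup-hdfs.sh /tmp/setup-hdfs.sh
    RUN chmod +x /tmp/setup-hdfs.sh && /tmp/setup-hdfs.sh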
    

So, to set up HDFS folders and files during docker build, you'd need to start the HDFS daemons and perform the actions you want within the same RUN command:

    # source the environment, start HDFS, and create the folder in a single layer
    RUN . /etc/hadoop/hadoop-env.sh && \
        /opt/hadoop/sbin/start-dfs.sh && \
        /opt/hadoop/bin/hdfs dfs -mkdir input
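
As for other possibilities: if the HDFS setup does not have to be baked into the image, you can defer it to container start instead. Keep the third Dockerfile free of service-dependent RUN steps and let its CMD start the daemons and then submit the job. A sketch under the same path assumptions, with run-job.sh and the examples jar location being hypothetical:

    FROM Gops/sec_doc_file
    ADD run-job.sh /opt/run-job.sh
    # nothing here needs running services at build time
    CMD ["/opt/run-job.sh"]

where run-job.sh could be:

    #!/bin/sh
    # start the daemons, then run the example MR job
    /opt/hadoop/sbin/start-dfs.sh
    /opt/hadoop/bin/hdfs dfs -mkdir input
    /opt/hadoop/bin/hadoop jar /opt/hadoop/hadoop-examples.jar wordcount input output

This keeps the build free of service dependencies, at the cost of starting the services every time the container runs.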
    