Is it possible to execute CMD in the middle of a Dockerfile?

I am installing hadoop-0.20.2 using Docker. I have two Dockerfiles: one for the Java installation and another for the Hadoop installation. I start the services with the CMD instruction:

    CMD ["path/to/start-all.sh"]
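
For context, the second (Hadoop) Dockerfile might end with something like the sketch below. The base image name and the Hadoop path are assumptions, not taken from the question; the point is that CMD only runs when a container is started, never during the build:

    FROM java_doc_file
    # ... hadoop installation steps here ...
    # CMD executes at container start time, not at build time
    CMD ["/opt/hadoop/bin/start-all.sh"]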

Now I want to write a third Dockerfile which executes an example MapReduce job. But there is a problem:

The third Dockerfile depends on the second (Hadoop) Dockerfile, e.g.:

     FROM sec_doc_file
    
     RUN /bin/hadoop fs -mkdir input
    

This requires the Hadoop services, but they are only started when a container is run from the second image. I want to start them as part of the third Dockerfile, before the MapReduce job runs. Is that possible? If so, please provide an example. If not, what are the alternatives?

     # something like
     FROM sec_doc_file
     # start the Hadoop services
     RUN /bin/hadoop fs -mkdir input
     # continue with the MapReduce job

One solution:

The Docker image you use as a base for the new container is a base for files, not for running processes. To do what you want, you need to start the required process(es) during docker build and run your setup commands while they are up. Each RUN instruction creates a new AUFS layer, but it does not carry over any services started by a previous RUN. So, if you need a service to be up in order to perform some setup during docker build, you have to start it and use it within a single RUN instruction (concatenating commands or calling a custom script). Example:

    FROM Gops/sec_doc_file
    RUN path/to/start-all.sh && /bin/hadoop fs -mkdir input
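
To see where things execute (a minimal illustration; the image tag hadoop-input is made up):

    docker build -t hadoop-input .   # the RUN line above executes here, during the build
    docker run hadoop-input          # the image's CMD (start-all.sh) only runs now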
    

So, for setting up HDFS folders and files during docker build, you need to start the HDFS daemons and perform the desired actions within the same RUN instruction:

    RUN . /etc/hadoop/hadoop-env.sh && \
        /opt/hadoop/sbin/start-dfs.sh && \
        /opt/hadoop/bin/hdfs dfs -mkdir input
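
Putting this together for the original question, the third Dockerfile could start the daemons, prepare HDFS, and run an example job in one RUN instruction. This is only a sketch under assumptions: the sec_doc_file base image, the /opt/hadoop layout, and the examples jar name vary by Hadoop version and are not confirmed by the question:

    FROM sec_doc_file
    # Daemons started inside a RUN live only for that single instruction,
    # so every step that needs them is chained into one RUN.
    RUN . /etc/hadoop/hadoop-env.sh && \
        /opt/hadoop/sbin/start-all.sh && \
        /opt/hadoop/bin/hdfs dfs -mkdir -p input && \
        /opt/hadoop/bin/hdfs dfs -put /opt/hadoop/etc/hadoop/*.xml input && \
        /opt/hadoop/bin/hadoop jar /opt/hadoop/hadoop-*examples*.jar wordcount input output

Note that the services die when this RUN step finishes; a container started from the resulting image still needs a CMD (e.g. start-all.sh) to bring them up at run time.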
    