How to deploy Spark so that it achieves the highest resource utilization

I have 10 servers (16 GB of memory and 8 cores each) and want to deploy Hadoop and Spark. Can you tell me which plan gives the highest resource utilization?

  1. direct deployment on the physical servers;

  2. install OpenStack and deploy the environment into virtual machines;

  3. use Docker, e.g. Spark on Docker.

I know resource utilization depends on the usage scenario; what I actually want to know are the advantages and disadvantages of the three plans above.

Thank you.

  • One solution, collected from the web, for "How to deploy Spark so that it achieves the highest resource utilization"

    For the highest resource utilization, deploying a single resource manager for both Spark and Hadoop is the best way to go. There are two options for that:

    • Deploy a Hadoop cluster with YARN, since Spark can run on YARN (see the sketch after this list).
    • Deploy an Apache Mesos cluster and run both Hadoop jobs and Spark on it.
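
    As a rough illustration of the first option, here is a minimal Scala sketch of a Spark application configured to run on YARN. The sizing numbers (4 GB / 2-core executors, 27 executor instances) are illustrative assumptions for 16 GB / 8-core nodes, not values from the original answer, and in practice they would normally be passed to spark-submit or set in spark-defaults.conf rather than hard-coded.

```scala
import org.apache.spark.sql.SparkSession

object YarnSizingSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical sizing for 16 GB / 8-core nodes: leave roughly 4 GB and
    // 2 cores per node for the OS, HDFS DataNode, and YARN NodeManager,
    // then pack about three 4 GB / 2-core executors onto each node.
    val spark = SparkSession.builder()
      .appName("yarn-sizing-sketch")
      .master("yarn")                           // let YARN schedule the executors
      .config("spark.executor.memory", "4g")
      .config("spark.executor.cores", "2")
      .config("spark.executor.instances", "27") // ~3 executors on each of 9 worker nodes
      .getOrCreate()

    // Trivial job, just to confirm the executors come up and do work.
    val n = spark.sparkContext.parallelize(1 to 1000000).count()
    println(s"count = $n")

    spark.stop()
  }
}
```

    The idea behind this kind of sizing is to keep a little headroom on every node for the Hadoop daemons while letting YARN pack the rest with executors; if several jobs share the cluster, Spark's dynamic allocation (spark.dynamicAllocation.enabled) can further improve utilization by releasing idle executors.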

    Isolating the Spark cluster from the Hadoop cluster provides no advantage over this, and causes higher overhead and lower resource utilization.

    Docker is also a good open platform for developers and sysadmins to build, ship, and run distributed applications.