How to deploy Spark so that it achieves the highest resource utilization

I have 10 servers (16 GB memory, 8 cores each) and want to deploy Hadoop and Spark on them. Can you tell me which plan makes the best use of the resources?

  1. deploy directly on the physical servers;

  2. install OpenStack and deploy the environment in virtual machines;

  3. use Docker, for example Spark on Docker.

I know resource utilization depends on the usage scenario; what I actually want to know is the advantages and disadvantages of the three plans above.

Thank you.

One solution collected from the web:

    For the highest resource utilization, deploying a single resource manager for both Spark and Hadoop is the best way to go. There are two options for that; a short submission sketch follows the list:

    • Deploy the Hadoop cluster with YARN, since Spark can run on YARN.
    • Deploy an Apache Mesos cluster and run both Hadoop jobs and Spark on it.
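
    A minimal PySpark sketch of the YARN option (assumptions: Hadoop/YARN already runs on the ten nodes, HADOOP_CONF_DIR points at its configuration, and pyspark is installed on the submitting machine); the executor sizing is only illustrative for 16 GB / 8-core nodes, and the Mesos master host is a placeholder:

```python
# Sketch: one SparkSession submitted to YARN, the shared resource manager.
# Assumptions: YARN/HDFS daemons are running, HADOOP_CONF_DIR is set, and
# pyspark is installed where this script runs.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("yarn-utilization-sketch")
    .master("yarn")                               # one resource manager for Hadoop and Spark jobs
    .config("spark.executor.instances", "20")     # 2 executors per node x 10 nodes
    .config("spark.executor.cores", "3")          # ~6 of the 8 cores per node for Spark
    .config("spark.executor.memory", "5g")        # with 1g overhead: ~12 GB of 16 GB per node,
    .config("spark.executor.memoryOverhead", "1g")# leaving headroom for OS and Hadoop daemons
    .getOrCreate()
)

# The Mesos option changes only the master URL, e.g.:
#   .master("mesos://mesos-master.example.com:5050")   # placeholder host name

# Trivial job to confirm executors come up across the cluster.
print(spark.range(1000000).count())
spark.stop()
```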

    Isolating the Spark cluster from the Hadoop cluster provides no advantage over this; it only adds overhead and lowers resource utilization.

    Docker is still a good open platform for developers and sysadmins to build, ship, and run such distributed applications.