How to expose Spark Driver behind dockerized Apache Zeppelin?

I am currently building a custom docker container from a plain distribution with Apache Zeppelin + Spark 2.x inside.

My Spark jobs will run in a remote cluster and I am using yarn-client as master.

When I run a notebook and try to print sc.version, the program gets stuck. If I go to the remote resource manager, an application has been created and accepted, but in the logs I can read:

    INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable

My understanding of the situation is that the cluster is unable to talk to the driver in the container, but I don't know how to solve this issue.

I am currently using the following configuration (sketched as commands right after this list):

  • spark.driver.port set to PORT1 and option -p PORT1:PORT1 passed to the container
  • spark.driver.host set to the IP of the container
  • SPARK_LOCAL_IP set to the IP of the container
  • spark.ui.port set to PORT2 and option -p PORT2:PORT2 passed to the container
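
For reference, this is roughly what that setup looks like as a docker run command plus interpreter properties. Everything below is a sketch: the image name, the container IP, and PORT1/PORT2 are placeholders, not values from a working configuration (8080 is Zeppelin's default web UI port).

    # Sketch only: image name, container IP and PORT1/PORT2 are placeholders.
    docker run -d \
      -p 8080:8080 \
      -p PORT1:PORT1 \
      -p PORT2:PORT2 \
      -e SPARK_LOCAL_IP=<container-ip> \
      my-zeppelin-spark-image

and, in the Zeppelin Spark interpreter settings (or spark-defaults.conf):

    master             yarn-client
    spark.driver.port  PORT1
    spark.ui.port      PORT2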

I have the feeling I should set SPARK_LOCAL_IP to the host IP instead, but if I do so, the Spark UI is unable to start, which blocks the process one step earlier.

Thanks in advance for any ideas / advice!

One solution collected from the web for "How to expose Spark Driver behind dockerized Apache Zeppelin?"

Good question! First of all, as you know, Apache Zeppelin runs interpreters in separate processes.

[Apache Zeppelin architecture diagram]

In your case, the Spark interpreter JVM process hosts a SparkContext and serves as the SparkDriver for the yarn-client deployment mode. According to the Apache Spark documentation, this process inside the container needs to be able to communicate back and forth with the YARN ApplicationMaster and with all the Spark worker machines of the cluster.

[Apache Spark architecture diagram]

This implies that you have to have a number of ports open and manually forwarded between the container and the host machine. Here is an example of a project at ZEPL doing a similar job, where it took us 7 ports to get the job done.
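
As a rough sketch of what that looks like: the property names below are standard Spark 2.x settings, but the concrete port numbers are arbitrary placeholders, and depending on your Spark and Zeppelin versions additional ports may be needed. The idea is to pin the normally random driver-side ports, advertise the host's address to the cluster, and publish the same ports on the container.

    # spark-defaults.conf / Zeppelin Spark interpreter settings -- sketch only
    # Advertise the host's address so YARN and the workers can reach the driver:
    spark.driver.host                <host-machine-ip>
    # Spark 2.1+: bind inside the container while advertising the host address:
    spark.driver.bindAddress         0.0.0.0
    # Pin the normally random ports so they can be published:
    spark.driver.port                51001
    # Spark 2.1+; on earlier 2.x releases use spark.blockManager.port instead:
    spark.driver.blockManager.port   51002
    spark.ui.port                    4040

    # Publish exactly the same ports on the host (image name is a placeholder):
    docker run -d \
      -p 51001:51001 \
      -p 51002:51002 \
      -p 4040:4040 \
      my-zeppelin-spark-image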

Another approach can be to run Docker networking in host mode (though it apparently does not work on OS X, due to a recent bug).
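
In that case the container shares the host's network stack, so the driver's ports are reachable without any -p mappings; for example (image name is again a placeholder):

    # Host networking (Linux): no explicit port publishing needed.
    docker run -d --net=host my-zeppelin-spark-image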
