How to expose Spark Driver behind dockerized Apache Zeppelin?

I am currently building a custom docker container from a plain distribution with Apache Zeppelin + Spark 2.x inside.

My Spark jobs will run in a remote cluster and I am using yarn-client as master.

    When I run a notebook and try to print sc.version, the program gets stuck. If I go to the remote resource manager, an application has been created and accepted, but in the logs I can read:

    INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable

    My understanding of the situation is that the cluster is unable to talk to the driver in the container but I don’t know how to solve this issue.

    I am currently using the following configuration:

    • spark.driver.port set to PORT1 and option -p PORT1:PORT1 passed to the container
    • spark.driver.host set to (ip of the container)
    • SPARK_LOCAL_IP set to (ip of the container)
    • spark.ui.port set to PORT2 and option -p PORT2:PORT2 passed to the container
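    For reference, my setup corresponds roughly to the following docker run invocation (image name and port numbers are placeholders for my actual values):

    ```shell
    # Placeholder port values; PORT1/PORT2 are whatever free ports were chosen.
    PORT1=33001   # spark.driver.port
    PORT2=4040    # spark.ui.port

    # Publish the Zeppelin UI plus the two Spark ports configured above.
    # "zeppelin-spark" stands in for my custom image name.
    docker run -d \
      -p 8080:8080 \
      -p "$PORT1:$PORT1" \
      -p "$PORT2:$PORT2" \
      zeppelin-spark
    ```
    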

    I have the feeling I should change SPARK_LOCAL_IP to the host IP, but if I do so, the Spark UI is unable to start, which blocks the process one step earlier.

    Thanks in advance for any ideas / advice!

One solution for “How to expose Spark Driver behind dockerized Apache Zeppelin?”

    Good question! First of all, as you know, Apache Zeppelin runs its interpreters in separate processes.

    Apache Zeppelin architecture diagram

    In your case, the Spark interpreter JVM process hosts a SparkContext and serves as the Spark driver for the yarn-client deployment mode. According to the Apache Spark documentation, this process inside the container needs to be able to communicate back and forth with the YARN ApplicationMaster and with every Spark worker machine in the cluster.

    Apache Spark architecture diagram

    This implies that you have to have a number of ports open and manually forwarded between the container and the host machine. Here is an example of a project at ZEPL doing a similar job, where it took us 7 ports to get it done.
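    As a sketch, the idea is to pin the normally random ports to fixed values in the Spark configuration and then publish those same ports from the container. The property names below (spark.driver.port, spark.driver.blockManager.port, spark.ui.port) exist in Spark 2.x; the chosen port numbers and the image name are arbitrary examples:

    ```shell
    # In spark-defaults.conf (or as Zeppelin Spark interpreter properties),
    # pin the ports so Docker can forward them:
    #   spark.driver.port               33001
    #   spark.driver.blockManager.port  33002
    #   spark.ui.port                   4040
    #
    # Then publish exactly the same ports when starting the container
    # ("zeppelin-spark" is a placeholder image name):
    docker run -d \
      -p 33001:33001 \
      -p 33002:33002 \
      -p 4040:4040 \
      zeppelin-spark
    ```

    Depending on your Spark version and features in use (e.g. the file and broadcast servers in older 2.x releases), more ports may need the same treatment, which is how the count grows to 7.
    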

    Another approach could be to run Docker networking in host mode (though it apparently does not work on OS X, due to a recent bug).
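    With host networking the container shares the host's network stack, so no -p forwarding is needed and SPARK_LOCAL_IP can simply be the host's IP. A minimal sketch (again, "zeppelin-spark" stands in for your image):

    ```shell
    # --net=host: the container binds directly to the host's interfaces,
    # so the driver ports are reachable from the YARN cluster without
    # any explicit port publishing.
    docker run -d --net=host zeppelin-spark
    ```
    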
