How to run the Spark shell with Spark on Mesos?

Is there a way to run the example (SparkPi) from the Spark shell, or to send Spark jobs through the shell to a Mesos cluster? spark-submit does not currently support cluster-mode deployment to Mesos, but I want to achieve something similar, so that the driver also hosts an executor.

1 answer

1) You can connect the Spark shell (and spark-submit) to a Mesos cluster by passing a mesos:// master URL:

./bin/spark-shell -h

Usage: ./bin/spark-shell [options]
Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
...
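
For example, assuming a Mesos master at mesos-master:5050 (a placeholder address), the shell can be pointed at the cluster like this:

./bin/spark-shell --master mesos://mesos-master:5050

The same --master URL works for spark-submit, so anything tested interactively in the shell can later be submitted as a job against the same cluster.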


2) Is there a way to run the example (SparkPi) from the Spark shell?

In short: yes, but this will probably only work in Spark 2.0.

The SparkPi implementation in Spark 1.6 tries to create a new SparkContext, while the Spark shell has already created one, and this causes problems:

https://github.com/apache/spark/blob/branch-1.6/examples/src/main/scala/org/apache/spark/examples/SparkPi.scala



val conf = new SparkConf().setAppName("Spark Pi")
val spark = new SparkContext(conf)
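
In the 1.6 shell, calling SparkPi.main therefore tends to fail, because the shell's sc already exists. As a workaround, a minimal sketch of the same Monte Carlo estimate can be typed straight into the shell against the existing sc (this mirrors what SparkPi computes, not the example's actual code):

// Monte Carlo estimate of Pi using the shell's existing SparkContext (sc)
val slices = 100
val n = 100000 * slices
// Count random points in the unit square that fall inside the unit circle
val count = sc.parallelize(1 to n, slices).map { _ =>
  val x = math.random * 2 - 1
  val y = math.random * 2 - 1
  if (x * x + y * y <= 1) 1 else 0
}.reduce(_ + _)
println(s"Pi is roughly ${4.0 * count / n}")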


The implementation in Spark 2.0 reuses the existing SparkSession if one is already running: https://github.com/apache/spark/blob/branch-2.0/examples/src/main/scala/org/apache/spark/examples/SparkPi.scala

val spark = SparkSession
  .builder
  .appName("Spark Pi")
  .getOrCreate()
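
Because getOrCreate returns the session that is already active, calling it inside the 2.0 shell should simply hand back the shell's own spark object instead of building a second context. A quick check (the expected result is an assumption about how getOrCreate resolves the active session, not output copied from a real run):

scala> import org.apache.spark.sql.SparkSession
scala> SparkSession.builder.getOrCreate() eq spark   // expected: true, it is the shell's session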


So how do you run SparkPi from the shell? Like this:

./bin/spark-shell --jars ./examples/jars/spark-examples_2.11-2.0.0.jar 
scala> org.apache.spark.examples.SparkPi.main(Array("100"))
Pi is roughly 3.1413147141314712                              
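
To do the same against Mesos, combine this with the --master flag from part 1 (the master address below is a placeholder for your own Mesos master):

./bin/spark-shell --master mesos://mesos-master:5050 --jars ./examples/jars/spark-examples_2.11-2.0.0.jar
scala> org.apache.spark.examples.SparkPi.main(Array("100"))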

