Getting java.net.BindException when trying to run Spark master on EC2 node with public IP

I am trying to run a Spark master for a standalone cluster on an EC2 node. The CLI command I'm using looks like this:

JAVA_HOME=<location of my JDK install> \
java -cp <spark install dir>/sbin/../conf/:<spark install dir>/lib/spark-assembly-1.4.0-hadoop2.6.0.jar:<spark install dir>/lib/datanucleus-core-3.2.10.jar:<spark install dir>/lib/datanucleus-api-jdo-3.2.6.jar:<spark install dir>/lib/datanucleus-rdbms-3.2.9.jar \
-Xms512m -Xmx512m -XX:MaxPermSize=128m \
org.apache.spark.deploy.master.Master --port 7077 --webui-port 8080 --host 54.xx.xx.xx

Note that I am specifying the --host argument; I want my Spark master to listen on a specific IP. The host I specify (i.e. 54.xx.xx.xx) is the public IP for my EC2 node. I confirmed that nothing else was listening on port 7077 and that my EC2 security group had all ports open. I have also double-checked that the public IP is correct.
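For what it's worth, here is roughly how I verified that (assuming the standard ss and ip utilities are installed on the node):

# confirm nothing else is listening on the master port
sudo ss -tlnp | grep 7077

# list the addresses actually configured on the instance's network interfaces
ip addr show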

When I use --host 54.xx.xx.xx I get the following error:

15/07/27 17:04:09 ERROR NettyTransport: failed to bind to /54.xx.xx.xx:7093, shutting down Netty transport
Exception in thread "main" java.net.BindException: Failed to bind to: /54.xx.xx.xx:7093: Service 'sparkMaster' failed after 16 retries!
    at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
    at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:393)
    at akka.remote.transport.netty.NettyTransport$$anonfun$listen$1.apply(NettyTransport.scala:389)
    at scala.util.Success$$anonfun$map$1.apply(Try.scala:206)
    at scala.util.Try$.apply(Try.scala:161)
    at scala.util.Success.map(Try.scala:206)
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
    at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
    at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

The error doesn't happen if I omit the --host argument, and it doesn't happen if I use --host 10.0.xx.xx, where 10.0.xx.xx is my EC2 private IP.

Why won't Spark bind to the EC2 public address?





1 answer


Try setting the environment variable SPARK_LOCAL_IP=54.xx.xx.xx.
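For example (just a sketch; either export it in the shell before re-running the same java command, or put it in conf/spark-env.sh if you use the sbin launch scripts):

# one-off: export it, then re-run the java command from the question
export SPARK_LOCAL_IP=54.xx.xx.xx

# or make it persistent for the sbin/ launch scripts via conf/spark-env.sh
echo 'export SPARK_LOCAL_IP=54.xx.xx.xx' >> <spark install dir>/conf/spark-env.sh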



Please refer to the first answer to a similar SO question here.









