ActorNotFound exception trying to start Spark 1.3.1 on Windows 7
We've hit a road block trying to figure out why Spark 1.3.1 isn't working for my colleague on his Windows 7 laptop. I have pretty much the same setup and everything works fine for me.
I searched for the error message but couldn't find a resolution.
Below is the exception message (after running the vanilla 1.3.1 install, pre-built for Hadoop 2.4):
akka.actor.ActorInitializationException: exception during creation
at akka.actor.ActorInitializationException$.apply(Actor.scala:164)
at akka.actor.ActorCell.create(ActorCell.scala:596)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: akka.actor.ActorNotFound: Actor not found for: ActorSelection[Anchor(akka://sparkDriver/deadLetters), Path(/)]
at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:65)
at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:63)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:110)
at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
at scala.concurrent.impl.Promise$DefaultPromise.scala$concurrent$impl$Promise$DefaultPromise$$dispatchOrAddCallback(Promise.scala:280)
at scala.concurrent.impl.Promise$DefaultPromise.onComplete(Promise.scala:270)
at akka.actor.ActorSelection.resolveOne(ActorSelection.scala:63)
at akka.actor.ActorSelection.resolveOne(ActorSelection.scala:80)
at org.apache.spark.util.AkkaUtils$.makeDriverRef(AkkaUtils.scala:221)
at org.apache.spark.executor.Executor.startDriverHeartbeater(Executor.scala:393)
at org.apache.spark.executor.Executor.<init>(Executor.scala:119)
at org.apache.spark.scheduler.local.LocalActor.<init>(LocalBackend.scala:58)
at org.apache.spark.scheduler.local.LocalBackend$$anonfun$start$1.apply(LocalBackend.scala:107)
at org.apache.spark.scheduler.local.LocalBackend$$anonfun$start$1.apply(LocalBackend.scala:107)
at akka.actor.TypedCreatorFunctionConsumer.produce(Props.scala:343)
at akka.actor.Props.newActor(Props.scala:252)
at akka.actor.ActorCell.newActor(ActorCell.scala:552)
at akka.actor.ActorCell.create(ActorCell.scala:578)
... 9 more
Related problems:
Searching the internet for this error did not bring up many results, which suggests this is a rare and specific problem:
- I saw this error, but for Linux, not Windows: http://apache-spark-user-list.1001560.n3.nabble.com/Actor-not-found-td22265.html
- This thread also doesn't offer any resolution: https://groups.google.com/a/lists.datastax.com/forum/#!topic/spark-connector-user/UqCYeUpgGCU
My guess is that this is due to some name resolution / IP conflict, but I'm not sure.
More details
- JDK 1.7 64 bit, Windows 7 64 bit, Spark 1.3.1, prebuilt for Hadoop 2.4
- We ruled out all firewall-related issues: we looked at all blocked traffic and it wasn't there.
- We tried "run as administrator" with no luck.
- We tried both spark-submit and spark-shell; even the simplest Spark "Hello World" didn't work.
- We got the UI at localhost:4040; the job is marked as started but waits forever. For example, sc.parallelize(List(1,2,3)).count() never finishes.
- No errors found in the logs.
- The only difference I noticed between my system and my friend's: when I ping localhost I get 127.0.0.1; when he does it, he gets ::1. Not sure if this is related, but I saw a Spark issue about IPv6 problems that was only fixed in 1.4: https://issues.apache.org/jira/browse/SPARK-6440 Could that be it?
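To compare how the JVM on each machine resolves localhost (which is ultimately what the Akka-based driver relies on), a quick diagnostic like the following can help. This is just a sketch I'd run on both machines, not part of Spark itself:

```scala
import java.net.InetAddress

object LoopbackCheck {
  def main(args: Array[String]): Unit = {
    // The single address the JVM picks for "localhost" -- this is what a
    // process binding to "localhost" would typically end up using.
    val preferred = InetAddress.getByName("localhost")
    println(s"preferred: ${preferred.getHostAddress}")

    // All addresses "localhost" resolves to; an IPv6 result
    // (0:0:0:0:0:0:0:1, i.e. ::1) listed first on one machine but not the
    // other would match the ping difference we observed.
    InetAddress.getAllByName("localhost").foreach { a =>
      println(s"candidate: ${a.getHostAddress}")
    }
  }
}
```

If the two machines print different preferred addresses, that would point squarely at a resolution difference (hosts file or IPv6 stack) rather than at Spark itself.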
I'm sure this is a network / security / permissions issue, but we can't pinpoint it.
Any ideas where to look next?
Upgrading to Spark 1.4.0 seems to have fixed this issue.
It may be related to https://issues.apache.org/jira/browse/SPARK-6440 but I can't tell for sure.
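For anyone stuck on 1.3.x who can't upgrade immediately, one workaround worth trying (an untested sketch on my part, based on the IPv6 theory above) is forcing the driver JVM onto the IPv4 stack:

```scala
// Assumption: the hang is the IPv6 issue tracked in SPARK-6440, so we force
// the JVM to prefer IPv4. From the command line this would be:
//
//   spark-shell --driver-java-options "-Djava.net.preferIPv4Stack=true"
//
// The same property set programmatically, before the SparkContext (and hence
// the driver's ActorSystem) is created:
System.setProperty("java.net.preferIPv4Stack", "true")

// With this set, "localhost" should resolve to 127.0.0.1 rather than ::1.
println(System.getProperty("java.net.preferIPv4Stack"))
```

I haven't verified this fixes the hang, but it directly targets the only difference we found between the two machines.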