Apache Spark application stays in SUBMITTED state in a standalone cluster

A Spark application stays in the SUBMITTED state even though there are enough cores and memory available.

A Spark 1.2.1 cluster with the standalone cluster manager has:

Total Nodes: 3
Total Cores: 8
Total Memory: 14 GB

The first application uses 1 GB of memory and 2 cores, so the cluster still has 13 GB of memory and 6 cores free. But after submitting another application, it goes into the SUBMITTED state and waits for the first application to finish.

Why doesn't the second application start right away? Why does it wait for the first application to finish?
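For reference, here is a minimal sketch of how an application in this scenario might be submitted with explicit per-application resource caps. spark.cores.max and spark.executor.memory are standard Spark settings; the master URL, object name, and job logic below are placeholders, not details taken from the question:

import org.apache.spark.{SparkConf, SparkContext}

object FirstApp {
  def main(args: Array[String]): Unit = {
    // Cap this application's share of the standalone cluster so that
    // cores and memory remain free for a second application.
    val conf = new SparkConf()
      .setAppName("first-app")                // placeholder name
      .setMaster("spark://master-host:7077")  // placeholder master URL
      .set("spark.cores.max", "2")            // claim at most 2 of the 8 cores
      .set("spark.executor.memory", "1g")     // 1 GB per executor
    val sc = new SparkContext(conf)
    // ... job logic ...
    sc.stop()
  }
}

The same caps can be passed on the spark-submit command line, e.g. --conf spark.cores.max=2 --executor-memory 1g. Note that in standalone mode an application that sets no spark.cores.max claims every available core by default, which is one common reason a later application sits waiting in the SUBMITTED state.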

hadoop apache-spark

No one has answered this question yet

Check out similar questions:

192
What are workers, executors, cores in the Spark Standalone cluster?
12
Spark standalone cluster: unable to submit an application programmatically → java.io.InvalidClassException
11
How to allocate more executors per worker in standalone cluster mode?
11
Submitting jobs to the Spark EC2 cluster remotely
7
Submitted job stuck pending (TaskSchedulerImpl: Initial job has not accepted any resources)
4
Spark Standalone Mode multiple shell (application) sessions
3
Why is Spark Standalone not using all available cores?
0
Submitting a Spark query to a standalone cluster
0
Running a Spark job server with multiple workers in a standalone Spark cluster
0
Spark standalone cluster does not execute jobs in FIFO order
