Apache Spark application stuck in SUBMITTED state on a standalone cluster

An Apache Spark application stays in the SUBMITTED state even though there are enough cores and memory available.

A Spark 1.2.1 cluster running the standalone cluster manager has:

Total nodes: 3
Total cores: 8
Total memory: 14 GB

The first application uses 1 GB of memory and 2 cores, so the cluster still has 13 GB of memory and 6 cores free. But when a second application is submitted, it goes into the SUBMITTED state and waits for the first application to finish. A minimal sketch of how the first application is configured is shown below.
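For illustration, here is a minimal sketch of a Spark 1.2.1 application configured to take only the resources described above (the application name, master URL, and the exact way the limits are set are assumptions for the example, not taken from the original post):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object FirstApp {
  def main(args: Array[String]): Unit = {
    // Cap this application at 2 cores total and 1 GB per executor,
    // leaving 6 cores / 13 GB for other applications on the cluster.
    val conf = new SparkConf()
      .setAppName("first-app")                // hypothetical name
      .setMaster("spark://master-host:7077")  // hypothetical standalone master URL
      .set("spark.cores.max", "2")            // total cores this app may use
      .set("spark.executor.memory", "1g")     // memory per executor

    val sc = new SparkContext(conf)
    // Trivial job, just to keep the application (and its resources) alive.
    println(sc.parallelize(1 to 100).sum())
    sc.stop()
  }
}
```

Note that if `spark.cores.max` is left unset, a standalone-mode application is given all available cores by default (`spark.deploy.defaultCores`), which by itself would leave nothing for a second application; the sketch assumes the limit was set explicitly.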

Why doesn't the second application start right away? Why does it wait for the first application to finish?
