Spark: Why do some tasks show zero input size? How is the input size calculated?
I am running the same Spark job twice: once with one executor and once with 8 executors.
A (1 executor)
spark-submit --class com.SmallfilesResearchProcess --master yarn-cluster --queue xxx --executor-memory 1g --driver-memory 6g --num-executors 1 --conf spark.executor.cores=8 --conf spark.logLineage=true /tmp/hadoop-tools-1.2-SNAPSHOT-spark.jar coalesce.num.partitions=32
Here are the results for case A from the Spark UI:
Executor ID ▴ Address Task Time Total Tasks Failed Tasks Succeeded Tasks Input Size / Records
1 machine:xxx 2.0 min 32 0 32 404.5 MB / 525572
Total Time Across All Tasks: 2.0 min
Locality Level Summary: Node local: 1; Rack local: 31
Input Size / Records: 404.5 MB / 525572
B (8 executors)
spark-submit --class com.SmallfilesResearchProcess --master yarn-cluster --queue xxx --executor-memory 1g --driver-memory 6g --num-executors 8 --conf spark.executor.cores=8 --conf spark.logLineage=true /tmp/hadoop-tools-1.2-SNAPSHOT-spark.jar coalesce.num.partitions=32
Here are the results for case B from the Spark UI:
Executor ID ▴ Address Task Time Total Tasks Failed Tasks Succeeded Tasks Input Size / Records
1 machine:xxxx 22 s 4 0 4 37.0 MB / 63106
2 machine:xxxx 25 s 4 0 4 0.0 B / 64068
3 machine:xxxx 27 s 4 0 4 0.0 B / 65045
4 machine:xxxx 22 s 4 0 4 38.1 MB / 64255
5 machine:xxxx 27 s 5 0 5 52.3 MB / 82091
6 machine:xxxx 22 s 5 0 5 49.1 MB / 79232
7 machine:xxxx 19 s 3 0 3 0.0 B / 48337
8 machine:xxxx 22 s 3 0 3 0.0 B / 59438
Total Time Across All Tasks: 2.8 min
Locality Level Summary: Node local: 4; Rack local: 28
Input Size / Records: 176.5 MB / 525572
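My understanding is that the per-executor "Input Size / Records" column is just the sum of each task's input metrics (bytesRead / recordsRead), which would explain how an executor can report 0.0 B yet a nonzero record count if bytesRead is not reported for its tasks. A toy sketch of that aggregation (the numbers below are made up, not taken from the runs above):

```python
# Hypothetical per-task input metrics (bytesRead, recordsRead) for one executor.
# Assumption: the UI's "Input Size / Records" column is the per-task sum.
tasks = [
    (10_485_760, 16000),  # task 0: 10 MB read
    (0, 15000),           # task 1: bytesRead not reported, records still counted
    (20_971_520, 32000),  # task 2: 20 MB read
]

total_bytes = sum(b for b, _ in tasks)
total_records = sum(r for _, r in tasks)
print(f"Input Size / Records: {total_bytes / 1024**2:.1f} MB / {total_records}")
# -> Input Size / Records: 30.0 MB / 63000
```

If that model is right, the totals in case B would simply be missing the bytes for the tasks that reported 0.0 B.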
Question 1: How is the input size calculated? I expected it to be the same in both cases, since the record count is identical (525572), yet case B reports 176.5 MB while case A reports 404.5 MB.
Question 2: Why is the input size 0.0 B for some executors in case B, even though they processed records?