sum(idle time)/sum(allocated cpu time)
cores -> milliseconds spent at that level of core usage
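The two stats above (the idle-time ratio and the cores-to-milliseconds distribution) can be sketched as below. This is an illustrative sketch, not Sparklint's actual code; the sample data and the function name `core_usage_stats` are made up.

```python
# Illustrative sketch (not Sparklint's implementation): build the
# "cores -> milliseconds" distribution and the idle-time ratio
# from hypothetical (cores_in_use, duration_ms) interval samples.
from collections import defaultdict

def core_usage_stats(samples, allocated_cores):
    """samples: list of (cores_in_use, duration_ms) intervals."""
    distribution = defaultdict(int)  # cores -> ms spent at that usage level
    for cores, ms in samples:
        distribution[cores] += ms
    # idle ratio = sum(idle core time) / sum(allocated cpu time)
    allocated_cpu_time = sum(allocated_cores * ms for _, ms in samples)
    idle_cpu_time = sum((allocated_cores - cores) * ms for cores, ms in samples)
    idle_ratio = idle_cpu_time / allocated_cpu_time
    return dict(distribution), idle_ratio

dist, idle = core_usage_stats([(0, 100), (2, 300), (4, 600)], allocated_cores=4)
print(dist)   # {0: 100, 2: 300, 4: 600}
print(idle)   # 0.25
```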
how many cores are currently being used
How tasks are allocated on each executor
ExecutorId -> [TaskInfo]
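The `ExecutorId -> [TaskInfo]` mapping amounts to grouping running tasks by executor. A minimal sketch, assuming a simplified `TaskInfo` shape (the fields below are illustrative, not Spark's real class):

```python
# Hypothetical sketch of the ExecutorId -> [TaskInfo] mapping;
# the TaskInfo fields are illustrative, not Spark's real TaskInfo.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TaskInfo:
    task_id: int
    executor_id: str
    locality: str

def tasks_by_executor(tasks):
    """Group currently running tasks by the executor running them."""
    grouped = defaultdict(list)
    for t in tasks:
        grouped[t.executor_id].append(t)
    return dict(grouped)

tasks = [TaskInfo(1, "exec-1", "PROCESS_LOCAL"),
         TaskInfo(2, "exec-1", "NODE_LOCAL"),
         TaskInfo(3, "exec-2", "ANY")]
grouped = tasks_by_executor(tasks)
print(len(grouped["exec-1"]))  # 2
```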
Get the number of cores allocated to each executor
ExecutorId -> [ExecutorInfo]
If the app is using the fair scheduler, return all the pools being used. Otherwise, empty
the number of milliseconds during which the app is totally idle after the first task is submitted
Time when the last event was processed
the identifier for a series of stages
for each locality level, the cumulative task metrics of the tasks run at that level
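The per-locality stat above is a cumulative aggregation keyed by locality level. A sketch under assumed inputs (the `(locality_level, run_time_ms)` tuple shape and the metric names are illustrative):

```python
# Illustrative sketch: accumulate task metrics per locality level.
# Input shape (locality_level, run_time_ms) is assumed, not Sparklint's.
from collections import defaultdict

def cumulative_metrics_by_locality(finished_tasks):
    """finished_tasks: iterable of (locality_level, run_time_ms)."""
    totals = defaultdict(lambda: {"task_count": 0, "run_time_ms": 0})
    for locality, run_time in finished_tasks:
        totals[locality]["task_count"] += 1
        totals[locality]["run_time_ms"] += run_time
    return dict(totals)

m = cumulative_metrics_by_locality([("PROCESS_LOCAL", 120),
                                    ("PROCESS_LOCAL", 80),
                                    ("ANY", 400)])
print(m["PROCESS_LOCAL"])  # {'task_count': 2, 'run_time_ms': 200}
```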
the highest allocated cores throughout the app history
the highest concurrent tasks throughout the app history
the highest core usage throughout the app history
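The three "highest ... throughout the app history" stats are running maxima maintained as events arrive. A minimal sketch (the class and field names are made up, not Sparklint's API):

```python
# Illustrative running-maximum tracker for the three peak stats:
# peak allocated cores, peak concurrent tasks, peak core usage.
class PeakTracker:
    def __init__(self):
        self.max_allocated_cores = 0
        self.max_concurrent_tasks = 0
        self.max_core_usage = 0

    def on_sample(self, allocated_cores, concurrent_tasks, core_usage):
        # Each peak is updated independently as new samples arrive.
        self.max_allocated_cores = max(self.max_allocated_cores, allocated_cores)
        self.max_concurrent_tasks = max(self.max_concurrent_tasks, concurrent_tasks)
        self.max_core_usage = max(self.max_core_usage, core_usage)

tracker = PeakTracker()
tracker.on_sample(allocated_cores=8, concurrent_tasks=6, core_usage=5)
tracker.on_sample(allocated_cores=8, concurrent_tasks=10, core_usage=7)
print(tracker.max_concurrent_tasks)  # 10
```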
filter RDDs that have been referenced more than this many times
rdd information
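The reference-count filter above can be sketched as a simple threshold over RDD info records; the dict shape here is an assumption, not Sparklint's actual RDD info structure.

```python
# Illustrative sketch of filtering RDD info by reference count;
# the record shape is made up for the example.
def frequently_referenced_rdds(rdd_infos, min_references):
    """Keep only RDDs referenced more than `min_references` times."""
    return [r for r in rdd_infos if r["reference_count"] > min_references]

rdds = [{"name": "lookup_table", "reference_count": 5},
        {"name": "raw_input", "reference_count": 1}]
print(frequently_referenced_rdds(rdds, min_references=2))
# [{'name': 'lookup_table', 'reference_count': 5}]
```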
Can be different from getCurrentCores,
since one task can sometimes use more than one core
how many tasks are currently being executed.
average load; the time information is already contained inside CoreUsage
the number of milliseconds between when the Sparklint listener was created and when it received the first task info
An analyzer is responsible for providing useful stats after processing Spark event logs
6/13/16.