The JobManager actor supervises jobs running in a single SparkContext, as well as shared metadata.
It creates a SparkContext (or a StreamingContext, etc., depending on the factory class).
It also creates and supervises a JobResultActor and a JobStatusActor, although an existing
JobResultActor can be passed in instead.
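
A minimal wiring sketch in Scala follows. The exact constructor of spark.jobserver.JobManagerActor varies across job server versions (real versions take additional arguments such as a DAO actor ref), so the Props line below is an assumption to adapt; the Akka and Typesafe Config calls themselves are standard.

  import akka.actor.{ActorSystem, Props}
  import com.typesafe.config.ConfigFactory

  object StartContextSketch extends App {
    val system = ActorSystem("job-server")

    // Per-context settings; see the contextConfig block below for the full list.
    val contextConfig = ConfigFactory.parseString(
      """num-cpu-cores = 2
        |memory-per-node = 512m
        |context-factory = "spark.jobserver.context.DefaultSparkContextFactory"
      """.stripMargin)

    // Assumption: a single-Config constructor. Once started, the manager
    // creates the SparkContext and its JobResultActor / JobStatusActor children.
    val manager = system.actorOf(
      Props(classOf[spark.jobserver.JobManagerActor], contextConfig),
      name = "context-manager")
  }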
contextConfig
  num-cpu-cores = 4        # Total # of CPU cores to allocate across the cluster
  memory-per-node = 512m   # -Xmx style memory string for total memory to use for executor on one node
  dependent-jar-uris = ["local://opt/foo/my-foo-lib.jar"]
                           # URIs for dependent jars to load for entire context
  context-factory = "spark.jobserver.context.DefaultSparkContextFactory"
  spark.mesos.coarse = true  # per-context, rather than per-job, resource allocation
  rdd-ttl = 24 h           # time-to-live for RDDs in a SparkContext; omit to keep RDDs forever
  is-adhoc = false         # true if context is an ad-hoc context
  context.name = "sql"     # Name of context
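
Since contextConfig is plain HOCON, it can be built and inspected with the Typesafe Config library. A minimal sketch, with values mirroring the example above:

  import com.typesafe.config.ConfigFactory
  import java.util.concurrent.TimeUnit

  val contextConfig = ConfigFactory.parseString(
    """num-cpu-cores = 4
      |memory-per-node = 512m
      |dependent-jar-uris = ["local://opt/foo/my-foo-lib.jar"]
      |context-factory = "spark.jobserver.context.DefaultSparkContextFactory"
      |rdd-ttl = 24 h
      |is-adhoc = false
      |context.name = "sql"
    """.stripMargin)

  val cores   = contextConfig.getInt("num-cpu-cores")                 // 4
  val jars    = contextConfig.getStringList("dependent-jar-uris")     // one entry
  val ttlHrs  = contextConfig.getDuration("rdd-ttl", TimeUnit.HOURS)  // 24
  val factory = contextConfig.getString("context-factory")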
global configuration
  spark {
    jobserver {
      max-jobs-per-context = 16  # Number of jobs that can be run simultaneously per context
    }
  }
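
The global setting is read from the merged application config at the path documented above; a short sketch (the fallback value here is illustrative, not the shipped default):

  import com.typesafe.config.ConfigFactory

  val conf = ConfigFactory.load()
  val maxJobs =
    if (conf.hasPath("spark.jobserver.max-jobs-per-context"))
      conf.getInt("spark.jobserver.max-jobs-per-context")
    else 16  // illustrative fallback; check reference.conf for the real default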
Linear Supertypes
InstrumentedActor, ActorMetrics, Slf4jLogging, ActorStack, Actor, AnyRef, Any