public class SparkJobInfo extends AbstractModel
Fields inherited from class com.tencentcloudapi.common.AbstractModel: skipSign
Constructor and Description
---
`SparkJobInfo()`
`SparkJobInfo(SparkJobInfo source)` NOTE: Any ambiguous key set via `.set("AnyKey", "value")` will be a shallow copy, and any explicit key, i.e. `Foo`, set via `.setFoo("value")` will be a deep copy.
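The copy-constructor note above can be illustrated with a minimal stand-in class. This is a sketch, not the real SDK model: `ModelSketch`, its fields, and `CopySemanticsDemo` are hypothetical names used only to mirror the documented semantics (explicit keys deep-copied, ambiguous `.set(...)` keys shared by reference).

```java
import java.util.HashMap;

// Hypothetical stand-in (NOT the real SDK class) mirroring the documented
// copy-constructor semantics of SparkJobInfo(SparkJobInfo source).
class ModelSketch {
    private String jobName;                       // explicit key -> deep copy
    private HashMap<String, Object> customParams; // ambiguous keys -> shallow copy

    ModelSketch() {
        this.customParams = new HashMap<>();
    }

    // Copy constructor: explicit fields are copied by value,
    // the generic parameter map is shared by reference.
    ModelSketch(ModelSketch source) {
        if (source.jobName != null) {
            this.jobName = new String(source.jobName); // deep copy
        }
        this.customParams = source.customParams;       // shallow copy
    }

    void setJobName(String jobName) { this.jobName = jobName; }
    String getJobName() { return jobName; }
    void set(String key, Object value) { customParams.put(key, value); }
    Object get(String key) { return customParams.get(key); }
}

public class CopySemanticsDemo {
    public static void main(String[] args) {
        ModelSketch original = new ModelSketch();
        original.setJobName("nightly-etl");
        original.set("AnyKey", "v1");

        ModelSketch copy = new ModelSketch(original);

        // Mutating an ambiguous key on the source leaks into the copy
        // (shared map), while the explicit field stays independent.
        original.set("AnyKey", "v2");
        original.setJobName("other-job");

        System.out.println(copy.get("AnyKey"));   // v2
        System.out.println(copy.getJobName());    // nightly-etl
    }
}
```

In other words, only explicitly typed fields are safe from later mutation of the source object; values set through the generic `.set("AnyKey", ...)` path remain shared.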
Modifier and Type | Method and Description
---|---
String | `getAppPythonFiles()` Get Note: This returned value has been disused.
String | `getCmdArgs()` Get Command line parameters of the Spark job, separated by spaces
String | `getCurrentTaskId()` Get Last task ID of the Spark job
String | `getDataEngine()` Get Engine name
String | `getDataEngineClusterType()` Get `spark_emr_livy` indicates to create an EMR cluster.
String | `getDataEngineImageVersion()` Get `Spark 3.2-EMR` indicates to use the Spark 3.2 image.
Long | `getDataEngineStatus()` Get Engine status.
String | `getDataSource()` Get Data source name. Note: This field may return null, indicating that no valid values can be obtained.
String | `getEni()` Get This field has been disused.
Long | `getIsInherit()` Get Whether the task resource configuration is inherited from the cluster template.
String | `getIsLocal()` Get Whether the program package is uploaded locally.
String | `getIsLocalArchives()` Get Archives: Dependency upload method.
String | `getIsLocalFiles()` Get Whether the dependency file is uploaded locally.
String | `getIsLocalJars()` Get Whether the dependency JAR packages are uploaded locally.
String | `getIsLocalPythonFiles()` Get PySpark: Dependency upload method.
Boolean | `getIsSessionStarted()` Get Whether the task runs with the session SQLs.
String | `getJobArchives()` Get Archives: Dependency resources. Note: This field may return null, indicating that no valid values can be obtained.
String | `getJobConf()` Get Native Spark configurations, separated by line breaks
Long | `getJobCreateTime()` Get Spark job creation time
String | `getJobCreator()` Get Spark job creator
String | `getJobDriverSize()` Get Driver resource size of the Spark job
Long | `getJobExecutorMaxNumbers()` Get The specified executor count (max), which defaults to 1.
Long | `getJobExecutorNums()` Get Number of Spark job executors
String | `getJobExecutorSize()` Get Executor resource size of the Spark job
String | `getJobFile()` Get Program package path
String | `getJobFiles()` Get Dependency files of the Spark job, separated by commas
String | `getJobId()` Get Spark job ID
String | `getJobJars()` Get Dependency JAR packages of the Spark job, separated by commas
Long | `getJobMaxAttempts()` Get Maximum number of retries of the Spark flow task
String | `getJobName()` Get Spark job name
String | `getJobPythonFiles()` Get PySpark: Python dependency, which can be in .py, .zip, or .egg format.
Long | `getJobStatus()` Get Last status of the Spark job
Long | `getJobType()` Get Spark job type.
Long | `getJobUpdateTime()` Get Spark job update time
String | `getMainClass()` Get Main class of Spark job execution
Long | `getRoleArn()` Get Role ID
String | `getSessionId()` Get The ID of the associated Data Lake Compute query script.
String | `getSparkImage()` Get The Spark image version.
String | `getSparkImageVersion()` Get The image version.
StreamingStatistics | `getStreamingStat()` Get Spark streaming job statistics. Note: This field may return null, indicating that no valid values can be obtained.
Long | `getTaskNum()` Get Number of tasks running or ready to run under the current job. Note: This field may return null, indicating that no valid values can be obtained.
void | `setAppPythonFiles(String AppPythonFiles)` Set Note: This returned value has been disused.
void | `setCmdArgs(String CmdArgs)` Set Command line parameters of the Spark job, separated by spaces
void | `setCurrentTaskId(String CurrentTaskId)` Set Last task ID of the Spark job
void | `setDataEngine(String DataEngine)` Set Engine name
void | `setDataEngineClusterType(String DataEngineClusterType)` Set `spark_emr_livy` indicates to create an EMR cluster.
void | `setDataEngineImageVersion(String DataEngineImageVersion)` Set `Spark 3.2-EMR` indicates to use the Spark 3.2 image.
void | `setDataEngineStatus(Long DataEngineStatus)` Set Engine status.
void | `setDataSource(String DataSource)` Set Data source name. Note: This field may return null, indicating that no valid values can be obtained.
void | `setEni(String Eni)` Set This field has been disused.
void | `setIsInherit(Long IsInherit)` Set Whether the task resource configuration is inherited from the cluster template.
void | `setIsLocal(String IsLocal)` Set Whether the program package is uploaded locally.
void | `setIsLocalArchives(String IsLocalArchives)` Set Archives: Dependency upload method.
void | `setIsLocalFiles(String IsLocalFiles)` Set Whether the dependency file is uploaded locally.
void | `setIsLocalJars(String IsLocalJars)` Set Whether the dependency JAR packages are uploaded locally.
void | `setIsLocalPythonFiles(String IsLocalPythonFiles)` Set PySpark: Dependency upload method.
void | `setIsSessionStarted(Boolean IsSessionStarted)` Set Whether the task runs with the session SQLs.
void | `setJobArchives(String JobArchives)` Set Archives: Dependency resources. Note: This field may return null, indicating that no valid values can be obtained.
void | `setJobConf(String JobConf)` Set Native Spark configurations, separated by line breaks
void | `setJobCreateTime(Long JobCreateTime)` Set Spark job creation time
void | `setJobCreator(String JobCreator)` Set Spark job creator
void | `setJobDriverSize(String JobDriverSize)` Set Driver resource size of the Spark job
void | `setJobExecutorMaxNumbers(Long JobExecutorMaxNumbers)` Set The specified executor count (max), which defaults to 1.
void | `setJobExecutorNums(Long JobExecutorNums)` Set Number of Spark job executors
void | `setJobExecutorSize(String JobExecutorSize)` Set Executor resource size of the Spark job
void | `setJobFile(String JobFile)` Set Program package path
void | `setJobFiles(String JobFiles)` Set Dependency files of the Spark job, separated by commas
void | `setJobId(String JobId)` Set Spark job ID
void | `setJobJars(String JobJars)` Set Dependency JAR packages of the Spark job, separated by commas
void | `setJobMaxAttempts(Long JobMaxAttempts)` Set Maximum number of retries of the Spark flow task
void | `setJobName(String JobName)` Set Spark job name
void | `setJobPythonFiles(String JobPythonFiles)` Set PySpark: Python dependency, which can be in .py, .zip, or .egg format.
void | `setJobStatus(Long JobStatus)` Set Last status of the Spark job
void | `setJobType(Long JobType)` Set Spark job type.
void | `setJobUpdateTime(Long JobUpdateTime)` Set Spark job update time
void | `setMainClass(String MainClass)` Set Main class of Spark job execution
void | `setRoleArn(Long RoleArn)` Set Role ID
void | `setSessionId(String SessionId)` Set The ID of the associated Data Lake Compute query script.
void | `setSparkImage(String SparkImage)` Set The Spark image version.
void | `setSparkImageVersion(String SparkImageVersion)` Set The image version.
void | `setStreamingStat(StreamingStatistics StreamingStat)` Set Spark streaming job statistics. Note: This field may return null, indicating that no valid values can be obtained.
void | `setTaskNum(Long TaskNum)` Set Number of tasks running or ready to run under the current job. Note: This field may return null, indicating that no valid values can be obtained.
void | `toMap(HashMap<String,String> map, String prefix)` Internal implementation; normal users should not use it.
Methods inherited from class com.tencentcloudapi.common.AbstractModel:
any, fromJsonString, getBinaryParams, getMultipartRequestParams, getSkipSign, set, setParamArrayObj, setParamArraySimple, setParamObj, setParamSimple, setSkipSign, toJsonString
public SparkJobInfo()
public SparkJobInfo(SparkJobInfo source)
public String getJobId()
public void setJobId(String JobId)
JobId
- Spark job ID
public String getJobName()
public void setJobName(String JobName)
JobName
- Spark job name
public Long getJobType()
public void setJobType(Long JobType)
JobType
- Spark job type. Valid values: `1` (batch job), `2` (streaming job).
public String getDataEngine()
public void setDataEngine(String DataEngine)
DataEngine
- Engine name
public String getEni()
public void setEni(String Eni)
Eni
- This field has been disused. Use the `Datasource` field instead.
public String getIsLocal()
public void setIsLocal(String IsLocal)
IsLocal
- Whether the program package is uploaded locally. Valid values: `cos`, `lakefs`.
public String getJobFile()
public void setJobFile(String JobFile)
JobFile
- Program package path
public Long getRoleArn()
public void setRoleArn(Long RoleArn)
RoleArn
- Role ID
public String getMainClass()
public void setMainClass(String MainClass)
MainClass
- Main class of Spark job execution
public String getCmdArgs()
public void setCmdArgs(String CmdArgs)
CmdArgs
- Command line parameters of the Spark job, separated by spaces
public String getJobConf()
public void setJobConf(String JobConf)
JobConf
- Native Spark configurations, separated by line breaks
public String getIsLocalJars()
public void setIsLocalJars(String IsLocalJars)
IsLocalJars
- Whether the dependency JAR packages are uploaded locally. Valid values: `cos`, `lakefs`.
public String getJobJars()
public void setJobJars(String JobJars)
JobJars
- Dependency JAR packages of the Spark job, separated by commas
public String getIsLocalFiles()
public void setIsLocalFiles(String IsLocalFiles)
IsLocalFiles
- Whether the dependency file is uploaded locally. Valid values: `cos`, `lakefs`.
public String getJobFiles()
public void setJobFiles(String JobFiles)
JobFiles
- Dependency files of the Spark job, separated by commas
public String getJobDriverSize()
public void setJobDriverSize(String JobDriverSize)
JobDriverSize
- Driver resource size of the Spark job
public String getJobExecutorSize()
public void setJobExecutorSize(String JobExecutorSize)
JobExecutorSize
- Executor resource size of the Spark job
public Long getJobExecutorNums()
public void setJobExecutorNums(Long JobExecutorNums)
JobExecutorNums
- Number of Spark job executors
public Long getJobMaxAttempts()
public void setJobMaxAttempts(Long JobMaxAttempts)
JobMaxAttempts
- Maximum number of retries of the Spark flow task
public String getJobCreator()
public void setJobCreator(String JobCreator)
JobCreator
- Spark job creator
public Long getJobCreateTime()
public void setJobCreateTime(Long JobCreateTime)
JobCreateTime
- Spark job creation time
public Long getJobUpdateTime()
public void setJobUpdateTime(Long JobUpdateTime)
JobUpdateTime
- Spark job update time
public String getCurrentTaskId()
public void setCurrentTaskId(String CurrentTaskId)
CurrentTaskId
- Last task ID of the Spark job
public Long getJobStatus()
public void setJobStatus(Long JobStatus)
JobStatus
- Last status of the Spark job
public StreamingStatistics getStreamingStat()
public void setStreamingStat(StreamingStatistics StreamingStat)
StreamingStat
- Spark streaming job statistics
Note: This field may return null, indicating that no valid values can be obtained.
public String getDataSource()
public void setDataSource(String DataSource)
DataSource
- Data source name
Note: This field may return null, indicating that no valid values can be obtained.
public String getIsLocalPythonFiles()
public void setIsLocalPythonFiles(String IsLocalPythonFiles)
IsLocalPythonFiles
- PySpark: Dependency upload method. 1: cos; 2: lakefs (this method needs to be used in the console but cannot be called through APIs).
Note: This field may return null, indicating that no valid values can be obtained.
public String getAppPythonFiles()
public void setAppPythonFiles(String AppPythonFiles)
AppPythonFiles
- Note: This returned value has been disused.
Note: This field may return null, indicating that no valid values can be obtained.
public String getIsLocalArchives()
public void setIsLocalArchives(String IsLocalArchives)
IsLocalArchives
- Archives: Dependency upload method. 1: cos; 2: lakefs (this method needs to be used in the console but cannot be called through APIs).
Note: This field may return null, indicating that no valid values can be obtained.
public String getJobArchives()
public void setJobArchives(String JobArchives)
JobArchives
- Archives: Dependency resources
Note: This field may return null, indicating that no valid values can be obtained.
public String getSparkImage()
public void setSparkImage(String SparkImage)
SparkImage
- The Spark image version.
Note: This field may return null, indicating that no valid values can be obtained.
public String getJobPythonFiles()
public void setJobPythonFiles(String JobPythonFiles)
JobPythonFiles
- PySpark: Python dependency, which can be in .py, .zip, or .egg format. Multiple files should be separated by commas.
Note: This field may return null, indicating that no valid values can be obtained.
public Long getTaskNum()
public void setTaskNum(Long TaskNum)
TaskNum
- Number of tasks running or ready to run under the current job
Note: This field may return null, indicating that no valid values can be obtained.
public Long getDataEngineStatus()
public void setDataEngineStatus(Long DataEngineStatus)
DataEngineStatus
- Engine status. `-100` (default value): unknown; `-2` to `11`: normal.
Note: This field may return null, indicating that no valid values can be obtained.
public Long getJobExecutorMaxNumbers()
public void setJobExecutorMaxNumbers(Long JobExecutorMaxNumbers)
JobExecutorMaxNumbers
- The specified executor count (max), which defaults to 1. This parameter applies if the "Dynamic" mode is selected. If the "Dynamic" mode is not selected, the executor count is equal to `JobExecutorNums`.
Note: This field may return null, indicating that no valid values can be obtained.
public String getSparkImageVersion()
public void setSparkImageVersion(String SparkImageVersion)
SparkImageVersion
- The image version.
Note: This field may return null, indicating that no valid values can be obtained.
public String getSessionId()
public void setSessionId(String SessionId)
SessionId
- The ID of the associated Data Lake Compute query script.
Note: This field may return null, indicating that no valid values can be obtained.
public String getDataEngineClusterType()
public void setDataEngineClusterType(String DataEngineClusterType)
DataEngineClusterType
- `spark_emr_livy` indicates to create an EMR cluster.
Note: This field may return null, indicating that no valid values can be obtained.
public String getDataEngineImageVersion()
public void setDataEngineImageVersion(String DataEngineImageVersion)
DataEngineImageVersion
- `Spark 3.2-EMR` indicates to use the Spark 3.2 image.
Note: This field may return null, indicating that no valid values can be obtained.
public Long getIsInherit()
public void setIsInherit(Long IsInherit)
IsInherit
- Whether the task resource configuration is inherited from the cluster template. Valid values: `0` (default): No; `1`: Yes.
Note: This field may return null, indicating that no valid values can be obtained.
public Boolean getIsSessionStarted()
public void setIsSessionStarted(Boolean IsSessionStarted)
IsSessionStarted
- Whether the task runs with the session SQLs. Valid values: `false` for no and `true` for yes.
Note: This field may return null, indicating that no valid values can be obtained.

Copyright © 2023. All rights reserved.
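The `JobExecutorMaxNumbers` behavior described above (the ceiling applies only when the "Dynamic" mode is selected, defaults to 1, and otherwise the executor count equals `JobExecutorNums`) can be sketched as a small helper. This is a hypothetical illustration, not part of the SDK: `ExecutorCeiling`, `effectiveMaxExecutors`, and the `dynamicMode` flag are invented names.

```java
// Hypothetical helper (not part of the SDK) mirroring the documented
// JobExecutorMaxNumbers semantics.
public class ExecutorCeiling {
    static long effectiveMaxExecutors(boolean dynamicMode, long jobExecutorNums,
                                      Long jobExecutorMaxNumbers) {
        if (!dynamicMode) {
            // Without the "Dynamic" mode, the executor count equals JobExecutorNums.
            return jobExecutorNums;
        }
        // With dynamic allocation, the ceiling defaults to 1 when unset.
        return (jobExecutorMaxNumbers != null) ? jobExecutorMaxNumbers : 1L;
    }

    public static void main(String[] args) {
        System.out.println(effectiveMaxExecutors(false, 4, 10L)); // 4
        System.out.println(effectiveMaxExecutors(true, 4, 10L));  // 10
        System.out.println(effectiveMaxExecutors(true, 4, null)); // 1
    }
}
```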