public class ModifySparkAppRequest extends AbstractModel
Fields inherited from class AbstractModel: header, skipSign
Constructor and Description |
---|
ModifySparkAppRequest() |
ModifySparkAppRequest(ModifySparkAppRequest source) NOTE: Any ambiguous key set via .set("AnyKey", "value") will be a shallow copy, and any explicit key, i.e. Foo, set via .setFoo("value") will be a deep copy. |
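The copy constructor's shallow-vs-deep distinction matters when a request is cloned and then mutated. A minimal sketch of the deep-copy side (the package path is the usual one for this SDK's DLC models and is an assumption here):

```java
import com.tencentcloudapi.dlc.v20210125.models.ModifySparkAppRequest;

public class CopyExample {
    public static void main(String[] args) {
        ModifySparkAppRequest original = new ModifySparkAppRequest();
        // Explicit key set via a typed setter: deep-copied by the copy constructor.
        original.setAppName("etl-job");

        ModifySparkAppRequest copy = new ModifySparkAppRequest(original);
        original.setAppName("renamed-job");

        // The copy keeps the value captured at copy time.
        System.out.println(copy.getAppName()); // prints "etl-job"
    }
}
```

Keys set only through the generic `set("AnyKey", "value")` would, per the note above, be shared (shallow) between the two objects.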
Modifier and Type | Method and Description |
---|---|
String | getAppArchives() Get The dependency archives of the Spark job, separated by comma, with tar.gz, .tgz, and .tar formats supported. |
String | getAppConf() Get Spark configurations separated by line break |
String | getAppDriverSize() Get The driver size. |
Long | getAppExecutorMaxNumbers() Get The specified executor count (max), which defaults to 1. |
Long | getAppExecutorNums() Get Number of Spark job executors |
String | getAppExecutorSize() Get The executor size. |
String | getAppFile() Get The path of the Spark job package. |
String | getAppFiles() Get The dependency files of the Spark job (files other than JAR and ZIP packages), separated by comma. |
String | getAppJars() Get The dependency JAR packages of the Spark JAR job (JAR packages), separated by comma. |
String | getAppName() Get The Spark job name. |
String | getAppPythonFiles() Get The PySpark dependencies (Python files), separated by comma, with .py, .zip, and .egg formats supported. |
Long | getAppType() Get The Spark job type. |
String | getCmdArgs() Get The input parameters of the Spark job, separated by comma. |
String | getDataEngine() Get The data engine executing the Spark job. |
String | getDataSource() Get Data source name |
String | getEni() Get This field has been disused. |
Long | getIsInherit() Get Whether to inherit the task resource configuration from the cluster configuration template. |
String | getIsLocal() Get The source of the Spark job package. |
String | getIsLocalArchives() Get The source of the dependency archives of the Spark job. |
String | getIsLocalFiles() Get The source of the dependency files of the Spark job. |
String | getIsLocalJars() Get The source of the dependency JAR packages of the Spark job. |
String | getIsLocalPythonFiles() Get The source of the PySpark dependencies. |
Boolean | getIsSessionStarted() Get Whether to run the task with the session SQLs. |
String | getMainClass() Get The main class of the Spark job. |
Long | getMaxRetries() Get The maximum number of retries, valid for Spark streaming tasks only. |
Long | getRoleArn() Get The data access policy (CAM role arn). |
String | getSessionId() Get The associated Data Lake Compute query script. |
String | getSparkAppId() Get The Spark job ID. |
String | getSparkImage() Get The Spark image version. |
String | getSparkImageVersion() Get The Spark image version name. |
void | setAppArchives(String AppArchives) Set The dependency archives of the Spark job, separated by comma, with tar.gz, .tgz, and .tar formats supported. |
void | setAppConf(String AppConf) Set Spark configurations separated by line break |
void | setAppDriverSize(String AppDriverSize) Set The driver size. |
void | setAppExecutorMaxNumbers(Long AppExecutorMaxNumbers) Set The specified executor count (max), which defaults to 1. |
void | setAppExecutorNums(Long AppExecutorNums) Set Number of Spark job executors |
void | setAppExecutorSize(String AppExecutorSize) Set The executor size. |
void | setAppFile(String AppFile) Set The path of the Spark job package. |
void | setAppFiles(String AppFiles) Set The dependency files of the Spark job (files other than JAR and ZIP packages), separated by comma. |
void | setAppJars(String AppJars) Set The dependency JAR packages of the Spark JAR job (JAR packages), separated by comma. |
void | setAppName(String AppName) Set The Spark job name. |
void | setAppPythonFiles(String AppPythonFiles) Set The PySpark dependencies (Python files), separated by comma, with .py, .zip, and .egg formats supported. |
void | setAppType(Long AppType) Set The Spark job type. |
void | setCmdArgs(String CmdArgs) Set The input parameters of the Spark job, separated by comma. |
void | setDataEngine(String DataEngine) Set The data engine executing the Spark job. |
void | setDataSource(String DataSource) Set Data source name |
void | setEni(String Eni) Set This field has been disused. |
void | setIsInherit(Long IsInherit) Set Whether to inherit the task resource configuration from the cluster configuration template. |
void | setIsLocal(String IsLocal) Set The source of the Spark job package. |
void | setIsLocalArchives(String IsLocalArchives) Set The source of the dependency archives of the Spark job. |
void | setIsLocalFiles(String IsLocalFiles) Set The source of the dependency files of the Spark job. |
void | setIsLocalJars(String IsLocalJars) Set The source of the dependency JAR packages of the Spark job. |
void | setIsLocalPythonFiles(String IsLocalPythonFiles) Set The source of the PySpark dependencies. |
void | setIsSessionStarted(Boolean IsSessionStarted) Set Whether to run the task with the session SQLs. |
void | setMainClass(String MainClass) Set The main class of the Spark job. |
void | setMaxRetries(Long MaxRetries) Set The maximum number of retries, valid for Spark streaming tasks only. |
void | setRoleArn(Long RoleArn) Set The data access policy (CAM role arn). |
void | setSessionId(String SessionId) Set The associated Data Lake Compute query script. |
void | setSparkAppId(String SparkAppId) Set The Spark job ID. |
void | setSparkImage(String SparkImage) Set The Spark image version. |
void | setSparkImageVersion(String SparkImageVersion) Set The Spark image version name. |
void | toMap(HashMap<String,String> map, String prefix) Internal implementation, normal users should not use it. |
Methods inherited from class AbstractModel: any, fromJsonString, getBinaryParams, GetHeader, getMultipartRequestParams, getSkipSign, set, SetHeader, setParamArrayObj, setParamArraySimple, setParamObj, setParamSimple, setSkipSign, toJsonString
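Putting the pieces together, a typical modification flow fills in the typed setters and submits the request through the SDK client. The sketch below is hedged: the `DlcClient` class, its `ModifySparkApp` method name, and all IDs, paths, and region values are assumptions following the usual Tencent Cloud SDK pattern rather than facts from this page.

```java
import com.tencentcloudapi.common.Credential;
import com.tencentcloudapi.dlc.v20210125.DlcClient;
import com.tencentcloudapi.dlc.v20210125.models.ModifySparkAppRequest;

public class ModifySparkAppExample {
    public static void main(String[] args) throws Exception {
        ModifySparkAppRequest req = new ModifySparkAppRequest();
        req.setSparkAppId("batch_xxxxxxxx");          // placeholder job ID
        req.setAppName("daily-etl");
        req.setAppType(1L);                           // 1 = Spark JAR job, 2 = streaming
        req.setDataEngine("my-spark-engine");         // placeholder engine name
        req.setIsLocal("cos");                        // job package stored in COS
        req.setAppFile("cosn://bucket/jobs/etl.jar"); // placeholder path
        req.setMainClass("com.example.EtlMain");
        req.setAppDriverSize("small");                // 1 CU driver
        req.setAppExecutorSize("medium");             // 2 CU executors
        req.setAppExecutorNums(2L);

        // toJsonString() is inherited from AbstractModel; useful for inspection.
        System.out.println(req.toJsonString());

        Credential cred = new Credential("SECRET_ID", "SECRET_KEY");
        DlcClient client = new DlcClient(cred, "ap-guangzhou");
        client.ModifySparkApp(req);                   // assumed client method name
    }
}
```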
public ModifySparkAppRequest()
public ModifySparkAppRequest(ModifySparkAppRequest source)
public String getAppName()
public void setAppName(String AppName)
AppName
- The Spark job name.
public Long getAppType()
public void setAppType(Long AppType)
AppType
- The Spark job type. Valid values: `1` for Spark JAR job and `2` for Spark streaming job.
public String getDataEngine()
public void setDataEngine(String DataEngine)
DataEngine
- The data engine executing the Spark job.
public String getAppFile()
public void setAppFile(String AppFile)
AppFile
- The path of the Spark job package.
public Long getRoleArn()
public void setRoleArn(Long RoleArn)
RoleArn
- The data access policy (CAM role arn).
public String getAppDriverSize()
public void setAppDriverSize(String AppDriverSize)
AppDriverSize
- The driver size. Valid values: `small` (default, 1 CU), `medium` (2 CUs), `large` (4 CUs), and `xlarge` (8 CUs).
public String getAppExecutorSize()
public void setAppExecutorSize(String AppExecutorSize)
AppExecutorSize
- The executor size. Valid values: `small` (default, 1 CU), `medium` (2 CUs), `large` (4 CUs), and `xlarge` (8 CUs).
public Long getAppExecutorNums()
public void setAppExecutorNums(Long AppExecutorNums)
AppExecutorNums
- Number of Spark job executors.
public String getSparkAppId()
public void setSparkAppId(String SparkAppId)
SparkAppId
- The Spark job ID.
public String getEni()
public void setEni(String Eni)
Eni
- This field has been disused. Use the `Datasource` field instead.
public String getIsLocal()
public void setIsLocal(String IsLocal)
IsLocal
- The source of the Spark job package. Valid values: `cos` for COS and `lakefs` for the local system (for use in the console, but this method does not support direct API calls).
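Each `IsLocal*` flag pairs with a corresponding path field. A short sketch under that assumption (bucket paths and the `cosn://` scheme are placeholders):

```java
import com.tencentcloudapi.dlc.v20210125.models.ModifySparkAppRequest;

public class PackageSourceExample {
    public static void main(String[] args) {
        ModifySparkAppRequest req = new ModifySparkAppRequest();
        req.setIsLocal("cos");                        // main package lives in COS
        req.setAppFile("cosn://bucket/jobs/etl.jar"); // placeholder path
        req.setIsLocalJars("cos");                    // dependency JARs also from COS
        // Dependency lists are comma-separated, per the field docs.
        req.setAppJars("cosn://bucket/dep-a.jar,cosn://bucket/dep-b.jar");
    }
}
```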
public String getMainClass()
public void setMainClass(String MainClass)
MainClass
- The main class of the Spark job.
public String getAppConf()
public void setAppConf(String AppConf)
AppConf
- Spark configurations, separated by line breaks.
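Since `AppConf` is a single string with one configuration per line, it is convenient to assemble it with `String.join`. The `key=value` shape of each line is an assumption; this page only fixes the line-break separator:

```java
import com.tencentcloudapi.dlc.v20210125.models.ModifySparkAppRequest;

public class AppConfExample {
    public static void main(String[] args) {
        ModifySparkAppRequest req = new ModifySparkAppRequest();
        // One configuration item per line.
        req.setAppConf(String.join("\n",
                "spark.sql.shuffle.partitions=64",
                "spark.serializer=org.apache.spark.serializer.KryoSerializer"));
        System.out.println(req.getAppConf());
    }
}
```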
public String getIsLocalJars()
public void setIsLocalJars(String IsLocalJars)
IsLocalJars
- The source of the dependency JAR packages of the Spark job. Valid values: `cos` for COS and `lakefs` for the local system (for use in the console, but this method does not support direct API calls).
public String getAppJars()
public void setAppJars(String AppJars)
AppJars
- The dependency JAR packages of the Spark JAR job (JAR packages), separated by comma.
public String getIsLocalFiles()
public void setIsLocalFiles(String IsLocalFiles)
IsLocalFiles
- The source of the dependency files of the Spark job. Valid values: `cos` for COS and `lakefs` for the local system (for use in the console, but this method does not support direct API calls).
public String getAppFiles()
public void setAppFiles(String AppFiles)
AppFiles
- The dependency files of the Spark job (files other than JAR and ZIP packages), separated by comma.
public String getIsLocalPythonFiles()
public void setIsLocalPythonFiles(String IsLocalPythonFiles)
IsLocalPythonFiles
- The source of the PySpark dependencies. Valid values: `cos` for COS and `lakefs` for the local system (for use in the console, but this method does not support direct API calls).
public String getAppPythonFiles()
public void setAppPythonFiles(String AppPythonFiles)
AppPythonFiles
- The PySpark dependencies (Python files), separated by comma, with .py, .zip, and .egg formats supported.
public String getCmdArgs()
public void setCmdArgs(String CmdArgs)
CmdArgs
- The input parameters of the Spark job, separated by comma.
public Long getMaxRetries()
public void setMaxRetries(Long MaxRetries)
MaxRetries
- The maximum number of retries, valid for Spark streaming tasks only.
public String getDataSource()
public void setDataSource(String DataSource)
DataSource
- Data source name.
public String getIsLocalArchives()
public void setIsLocalArchives(String IsLocalArchives)
IsLocalArchives
- The source of the dependency archives of the Spark job. Valid values: `cos` for COS and `lakefs` for the local system (for use in the console, but this method does not support direct API calls).
public String getAppArchives()
public void setAppArchives(String AppArchives)
AppArchives
- The dependency archives of the Spark job, separated by comma, with tar.gz, .tgz, and .tar formats supported.
public String getSparkImage()
public void setSparkImage(String SparkImage)
SparkImage
- The Spark image version.
public String getSparkImageVersion()
public void setSparkImageVersion(String SparkImageVersion)
SparkImageVersion
- The Spark image version name.
public Long getAppExecutorMaxNumbers()
public void setAppExecutorMaxNumbers(Long AppExecutorMaxNumbers)
AppExecutorMaxNumbers
- The specified executor count (max), which defaults to 1. This parameter applies if the "Dynamic" mode is selected. If the "Dynamic" mode is not selected, the executor count is equal to `AppExecutorNums`.
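In other words, `AppExecutorNums` is the baseline count and `AppExecutorMaxNumbers` only takes effect when the engine's "Dynamic" mode is on; how that mode is toggled is not specified on this page. A minimal sketch:

```java
import com.tencentcloudapi.dlc.v20210125.models.ModifySparkAppRequest;

public class ExecutorScalingExample {
    public static void main(String[] args) {
        ModifySparkAppRequest req = new ModifySparkAppRequest();
        req.setAppExecutorNums(2L);       // count used when "Dynamic" mode is off
        req.setAppExecutorMaxNumbers(8L); // upper bound honored only in "Dynamic" mode
    }
}
```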
public String getSessionId()
public void setSessionId(String SessionId)
SessionId
- The associated Data Lake Compute query script.
public Long getIsInherit()
public void setIsInherit(Long IsInherit)
IsInherit
- Whether to inherit the task resource configuration from the cluster configuration template. Valid values: `0` (default): No; `1`: Yes.
public Boolean getIsSessionStarted()
public void setIsSessionStarted(Boolean IsSessionStarted)
IsSessionStarted
- Whether to run the task with the session SQLs. Valid values: `false` for no and `true` for yes.