String jobName
The name of a job to be run.
Map<K,V> arguments
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
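The replacement semantics described above can be sketched as a per-key override of the job's default arguments. This is a hypothetical illustration (the argument names and bucket are made up), not an SDK call:

```python
# Hypothetical sketch: arguments supplied when the trigger fires replace the
# job definition's default arguments key by key for that run.
default_arguments = {"--TempDir": "s3://example-bucket/tmp", "--job-language": "python"}
trigger_arguments = {"--TempDir": "s3://example-bucket/run-tmp"}

# Trigger-supplied keys win; defaults survive for keys the trigger omits.
effective_arguments = {**default_arguments, **trigger_arguments}
```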
Integer timeout
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
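A minimal sketch of this precedence (a hypothetical helper, not part of the SDK):

```python
DEFAULT_TIMEOUT_MINUTES = 2880  # service default: 48 hours

def effective_timeout(action_timeout, job_timeout):
    # The timeout on the trigger action overrides the parent job's timeout;
    # if neither is set, the 2,880-minute default applies.
    if action_timeout is not None:
        return action_timeout
    if job_timeout is not None:
        return job_timeout
    return DEFAULT_TIMEOUT_MINUTES
```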
String securityConfiguration
The name of the SecurityConfiguration structure to be used with this action.
NotificationProperty notificationProperty
Specifies configuration properties of a job run notification.
String crawlerName
The name of the crawler to be used with this action.
String catalogId
The ID of the catalog in which the partition is to be created. Currently, this should be the Amazon Web Services account ID.
String databaseName
The name of the metadata database in which the partition is to be created.
String tableName
The name of the metadata table in which the partition is to be created.
List<E> partitionInputList
A list of PartitionInput structures that define the partitions to be created.
String catalogId
The ID of the Data Catalog where the partition to be deleted resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which the table in question resides.
String tableName
The name of the table that contains the partitions to be deleted.
List<E> partitionsToDelete
A list of PartitionInput structures that define the partitions to be deleted.
String catalogId
The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which the tables to delete reside. For Hive compatibility, this name is entirely lowercase.
List<E> tablesToDelete
A list of the tables to delete.
String transactionId
The transaction ID at which to delete the table contents.
String catalogId
The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.
String tableName
The name of the table. For Hive compatibility, this name is entirely lowercase.
List<E> versionIds
A list of the IDs of versions to be deleted. A VersionId is a string representation of an integer. Each version is incremented by 1.
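Since a VersionId is just the string form of an integer that increments by 1, the relationship can be illustrated with a small (hypothetical) helper:

```python
def next_version_id(version_id: str) -> str:
    # A VersionId is a string representation of an integer; each new table
    # version increments it by 1.
    return str(int(version_id) + 1)
```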
List<E> names
A list of blueprint names.
Boolean includeBlueprint
Specifies whether or not to include the blueprint in the response.
Boolean includeParameterSpec
Specifies whether or not to include the parameters, as a JSON string, for the blueprint in the response.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
List<E> partitionsToGet
A list of partition values identifying the partitions to retrieve.
String jobName
The name of the job definition that is used in the job run in question.
String jobRunId
The JobRunId of the job run in question.
ErrorDetail errorDetail
Specifies details about the error that was encountered.
List<E> successfulSubmissions
A list of the JobRuns that were successfully submitted for stopping.
List<E> errors
A list of the errors that were encountered in trying to stop JobRuns
, including the
JobRunId
for which each error was encountered and details about the error.
List<E> partitionValueList
A list of values defining the partitions.
ErrorDetail errorDetail
The details about the batch update partition error.
String catalogId
The ID of the catalog in which the partition is to be updated. Currently, this should be the Amazon Web Services account ID.
String databaseName
The name of the metadata database in which the partition is to be updated.
String tableName
The name of the metadata table in which the partition is to be updated.
List<E> entries
A list of up to 100 BatchUpdatePartitionRequestEntry objects to update.
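Because each request accepts at most 100 entries, a larger update is typically split into batches first. A hypothetical chunking helper:

```python
MAX_ENTRIES_PER_REQUEST = 100  # BatchUpdatePartition limit described above

def chunk_entries(entries, size=MAX_ENTRIES_PER_REQUEST):
    # Split a long list of entries into request-sized batches.
    return [entries[i:i + size] for i in range(0, len(entries), size)]
```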
List<E> partitionValueList
A list of values defining the partitions.
PartitionInput partitionInput
The structure used to update a partition.
String name
The name of the blueprint.
String description
The description of the blueprint.
Date createdOn
The date and time the blueprint was registered.
Date lastModifiedOn
The date and time the blueprint was last modified.
String parameterSpec
A JSON string that indicates the list of parameter specifications for the blueprint.
String blueprintLocation
Specifies the path in Amazon S3 where the blueprint is published.
String blueprintServiceLocation
Specifies a path in Amazon S3 where the blueprint is copied when you call CreateBlueprint/UpdateBlueprint to register the blueprint in Glue.
String status
The status of the blueprint registration.
Creating — The blueprint registration is in progress.
Active — The blueprint has been successfully registered.
Updating — An update to the blueprint registration is in progress.
Failed — The blueprint registration failed.
String errorMessage
An error message.
LastActiveDefinition lastActiveDefinition
When there are multiple versions of a blueprint and the latest version has some errors, this attribute indicates the last successful blueprint definition that is available with the service.
String blueprintName
The name of the blueprint.
String runId
The run ID for this blueprint run.
String workflowName
The name of a workflow that is created as a result of a successful blueprint run. If a blueprint run has an error, no workflow is created.
String state
The state of the blueprint run. Possible values are:
Running — The blueprint run is in progress.
Succeeded — The blueprint run completed successfully.
Failed — The blueprint run failed and rollback is complete.
Rolling Back — The blueprint run failed and rollback is in progress.
Date startedOn
The date and time that the blueprint run started.
Date completedOn
The date and time that the blueprint run completed.
String errorMessage
Indicates any errors that are seen while running the blueprint.
String rollbackErrorMessage
If there are any errors while creating the entities of a workflow, Glue tries to roll back and delete the entities created up to that point. This attribute indicates any errors seen while trying to delete those entities.
String parameters
The blueprint parameters as a string. You must provide a value for each key that is required by the parameter spec defined in Blueprint$ParameterSpec.
String roleArn
The role ARN. This role will be assumed by the Glue service and will be used to create the workflow and other entities of a workflow.
GrokClassifier grokClassifier
A classifier that uses grok.
XMLClassifier xMLClassifier
A classifier for XML content.
JsonClassifier jsonClassifier
A classifier for JSON content.
CsvClassifier csvClassifier
A classifier for comma-separated values (CSV).
String columnName
The name of the column that failed.
ErrorDetail error
An error message with the reason for the failure of an operation.
String columnName
The name of the column that the statistics belong to.
String columnType
The data type of the column.
Date analyzedTime
The timestamp of when column statistics were generated.
ColumnStatisticsData statisticsData
A ColumnStatisticsData object that contains the statistics data values.
String type
The type of column statistics data.
BooleanColumnStatisticsData booleanColumnStatisticsData
Boolean column statistics data.
DateColumnStatisticsData dateColumnStatisticsData
Date column statistics data.
DecimalColumnStatisticsData decimalColumnStatisticsData
Decimal column statistics data.
DoubleColumnStatisticsData doubleColumnStatisticsData
Double column statistics data.
LongColumnStatisticsData longColumnStatisticsData
Long column statistics data.
StringColumnStatisticsData stringColumnStatisticsData
String column statistics data.
BinaryColumnStatisticsData binaryColumnStatisticsData
Binary column statistics data.
ColumnStatistics columnStatistics
The ColumnStatistics of the column.
ErrorDetail error
An error message with the reason for the failure of an operation.
String logicalOperator
A logical operator.
String jobName
The name of the job whose JobRuns this condition applies to, and on which this trigger waits.
String state
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED, STOPPED, FAILED, and TIMEOUT. The only crawler states that a trigger can listen for are SUCCEEDED, FAILED, and CANCELLED.
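The listenable states can be captured as simple sets for client-side validation (a hypothetical helper, not part of the SDK):

```python
# States a trigger condition can listen for, per the description above.
JOB_CONDITION_STATES = {"SUCCEEDED", "STOPPED", "FAILED", "TIMEOUT"}
CRAWLER_CONDITION_STATES = {"SUCCEEDED", "FAILED", "CANCELLED"}

def is_listenable_state(state, *, crawler=False):
    # Check whether a trigger condition may listen for the given state.
    allowed = CRAWLER_CONDITION_STATES if crawler else JOB_CONDITION_STATES
    return state in allowed
```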
String crawlerName
The name of the crawler to which this condition applies.
String crawlState
The state of the crawler to which this condition applies.
Long numTruePositives
The number of matches in the data that the transform correctly found, in the confusion matrix for your transform.
Long numFalsePositives
The number of nonmatches in the data that the transform incorrectly classified as a match, in the confusion matrix for your transform.
Long numTrueNegatives
The number of nonmatches in the data that the transform correctly rejected, in the confusion matrix for your transform.
Long numFalseNegatives
The number of matches in the data that the transform didn't find, in the confusion matrix for your transform.
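The four counts above form a standard confusion matrix, from which the usual quality metrics can be derived. An illustrative computation (not an SDK call):

```python
def transform_metrics(num_true_positives, num_false_positives,
                      num_true_negatives, num_false_negatives):
    # Derive precision, recall, F1, and accuracy from the confusion matrix.
    precision = num_true_positives / (num_true_positives + num_false_positives)
    recall = num_true_positives / (num_true_positives + num_false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    total = (num_true_positives + num_false_positives
             + num_true_negatives + num_false_negatives)
    accuracy = (num_true_positives + num_true_negatives) / total
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}
```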
String name
The name of the connection definition.
String description
The description of the connection.
String connectionType
The type of the connection. Currently, SFTP is not supported.
List<E> matchCriteria
A list of criteria that can be used in selecting this connection.
Map<K,V> connectionProperties
These key-value pairs define parameters for the connection:
HOST - The host URI: either the fully qualified domain name (FQDN) or the IPv4 address of the database host.
PORT - The port number, between 1024 and 65535, of the port on which the database host is listening for database connections.
USER_NAME - The name under which to log in to the database. The value string for USER_NAME is "USERNAME".
PASSWORD - A password, if one is used, for the user name.
ENCRYPTED_PASSWORD - When you enable connection password protection by setting ConnectionPasswordEncryption in the Data Catalog encryption settings, this field stores the encrypted password.
JDBC_DRIVER_JAR_URI - The Amazon Simple Storage Service (Amazon S3) path of the JAR file that contains the JDBC driver to use.
JDBC_DRIVER_CLASS_NAME - The class name of the JDBC driver to use.
JDBC_ENGINE - The name of the JDBC engine to use.
JDBC_ENGINE_VERSION - The version of the JDBC engine to use.
CONFIG_FILES - (Reserved for future use.)
INSTANCE_ID - The instance ID to use.
JDBC_CONNECTION_URL - The URL for connecting to a JDBC data source.
JDBC_ENFORCE_SSL - A Boolean string (true, false) specifying whether Secure Sockets Layer (SSL) with hostname matching is enforced for the JDBC connection on the client. The default is false.
CUSTOM_JDBC_CERT - An Amazon S3 location specifying the customer's root certificate. Glue uses this root certificate to validate the customer's certificate when connecting to the customer database. Glue only handles X.509 certificates. The certificate provided must be DER-encoded and supplied in base64-encoded PEM format.
SKIP_CUSTOM_JDBC_CERT_VALIDATION - By default, this is false. Glue validates the Signature algorithm and Subject Public Key Algorithm for the customer certificate. The only permitted algorithms for the Signature algorithm are SHA256withRSA, SHA384withRSA, or SHA512withRSA. For the Subject Public Key Algorithm, the key length must be at least 2048. You can set the value of this property to true to skip Glue's validation of the customer certificate.
CUSTOM_JDBC_CERT_STRING - A custom JDBC certificate string that is used for domain match or distinguished name match to prevent a man-in-the-middle attack. In Oracle Database, this is used as the SSL_SERVER_CERT_DN; in Microsoft SQL Server, this is used as the hostNameInCertificate.
CONNECTION_URL - The URL for connecting to a general (non-JDBC) data source.
KAFKA_BOOTSTRAP_SERVERS - A comma-separated list of host and port pairs that are the addresses of the Apache Kafka brokers in a Kafka cluster to which a Kafka client will connect and bootstrap itself.
KAFKA_SSL_ENABLED - Whether to enable or disable SSL on an Apache Kafka connection. The default value is "true".
KAFKA_CUSTOM_CERT - The Amazon S3 URL for the private CA cert file (.pem format). The default is an empty string.
KAFKA_SKIP_CUSTOM_CERT_VALIDATION - Whether to skip the validation of the CA cert file. Glue validates three algorithms: SHA256withRSA, SHA384withRSA, and SHA512withRSA. The default value is "false".
SECRET_ID - The secret ID used for the secret manager of credentials.
CONNECTOR_URL - The connector URL for a MARKETPLACE or CUSTOM connection.
CONNECTOR_TYPE - The connector type for a MARKETPLACE or CUSTOM connection.
CONNECTOR_CLASS_NAME - The connector class name for a MARKETPLACE or CUSTOM connection.
KAFKA_CLIENT_KEYSTORE - The Amazon S3 location of the client keystore file for Kafka client-side authentication (optional).
KAFKA_CLIENT_KEYSTORE_PASSWORD - The password to access the provided keystore (optional).
KAFKA_CLIENT_KEY_PASSWORD - A keystore can consist of multiple keys, so this is the password to access the client key to be used with the Kafka server-side key (optional).
ENCRYPTED_KAFKA_CLIENT_KEYSTORE_PASSWORD - The encrypted version of the Kafka client keystore password (if the user has the Glue encrypt passwords setting selected).
ENCRYPTED_KAFKA_CLIENT_KEY_PASSWORD - The encrypted version of the Kafka client key password (if the user has the Glue encrypt passwords setting selected).
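As an illustration of the JDBC subset of these keys, a connectionProperties map might look like the following. All values here are made-up placeholders, not real endpoints or credentials:

```python
# Illustrative connectionProperties for a JDBC connection; host, port,
# database name, and credentials are placeholder values.
jdbc_connection_properties = {
    "JDBC_CONNECTION_URL": "jdbc:postgresql://example-host:5432/exampledb",
    "USERNAME": "example_user",      # value string for the USER_NAME key, per above
    "PASSWORD": "example_password",
    "JDBC_ENFORCE_SSL": "false",     # default: SSL with hostname matching not enforced
}
```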
PhysicalConnectionRequirements physicalConnectionRequirements
A map of physical connection requirements, such as virtual private cloud (VPC) and SecurityGroup, that are needed to make this connection successfully.
Date creationTime
The time that this connection definition was created.
Date lastUpdatedTime
The last time that this connection definition was updated.
String lastUpdatedBy
The user, group, or role that last updated this connection definition.
String name
The name of the connection.
String description
The description of the connection.
String connectionType
The type of the connection. Currently, these types are supported:
JDBC - Designates a connection to a database through Java Database Connectivity (JDBC).
KAFKA - Designates a connection to an Apache Kafka streaming platform.
MONGODB - Designates a connection to a MongoDB document database.
NETWORK - Designates a network connection to a data source within an Amazon Virtual Private Cloud environment (Amazon VPC).
MARKETPLACE - Uses configuration settings contained in a connector purchased from Amazon Web Services Marketplace to read from and write to data stores that are not natively supported by Glue.
CUSTOM - Uses configuration settings contained in a custom connector to read from and write to data stores that are not natively supported by Glue.
SFTP is not supported.
List<E> matchCriteria
A list of criteria that can be used in selecting this connection.
Map<K,V> connectionProperties
These key-value pairs define parameters for the connection.
PhysicalConnectionRequirements physicalConnectionRequirements
A map of physical connection requirements, such as virtual private cloud (VPC) and SecurityGroup, that are needed to successfully make this connection.
Boolean returnConnectionPasswordEncrypted
When the ReturnConnectionPasswordEncrypted flag is set to "true", passwords remain encrypted in the responses of GetConnection and GetConnections. This encryption takes effect independently of catalog encryption.
String awsKmsKeyId
A KMS key that is used to encrypt the connection password.
If connection password protection is enabled, the caller of CreateConnection and UpdateConnection needs at least kms:Encrypt permission on the specified KMS key to encrypt passwords before storing them in the Data Catalog.
You can set the decrypt permission to enable or restrict access on the password key according to your security requirements.
String state
The state of the crawler.
Date startedOn
The date and time on which the crawl started.
Date completedOn
The date and time on which the crawl completed.
String errorMessage
The error message associated with the crawl.
String logGroup
The log group associated with the crawl.
String logStream
The log stream associated with the crawl.
String name
The name of the crawler.
String role
The Amazon Resource Name (ARN) of an IAM role that's used to access customer resources, such as Amazon Simple Storage Service (Amazon S3) data.
CrawlerTargets targets
A collection of targets to crawl.
String databaseName
The name of the database in which the crawler's output is stored.
String description
A description of the crawler.
List<E> classifiers
A list of UTF-8 strings that specify the custom classifiers that are associated with the crawler.
RecrawlPolicy recrawlPolicy
A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.
SchemaChangePolicy schemaChangePolicy
The policy that specifies update and delete behaviors for the crawler.
LineageConfiguration lineageConfiguration
A configuration that specifies whether data lineage is enabled for the crawler.
String state
Indicates whether the crawler is running, or whether a run is pending.
String tablePrefix
The prefix added to the names of tables that are created.
Schedule schedule
For scheduled crawlers, the schedule when the crawler runs.
Long crawlElapsedTime
If the crawler is running, contains the total time elapsed since the last crawl began.
Date creationTime
The time that the crawler was created.
Date lastUpdated
The time that the crawler was last updated.
LastCrawlInfo lastCrawl
The status of the last crawl, and potentially error information if an error occurred.
Long version
The version of the crawler.
String configuration
Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Include and Exclude Patterns.
String crawlerSecurityConfiguration
The name of the SecurityConfiguration structure to be used by this crawler.
String crawlerName
The name of the crawler.
Double timeLeftSeconds
The estimated time left to complete a running crawl.
Boolean stillEstimating
True if the crawler is still estimating how long it will take to complete this run.
Double lastRuntimeSeconds
The duration of the crawler's most recent run, in seconds.
Double medianRuntimeSeconds
The median duration of this crawler's runs, in seconds.
Integer tablesCreated
The number of tables created by this crawler.
Integer tablesUpdated
The number of tables updated by this crawler.
Integer tablesDeleted
The number of tables deleted by this crawler.
List<E> s3Targets
Specifies Amazon Simple Storage Service (Amazon S3) targets.
List<E> jdbcTargets
Specifies JDBC targets.
List<E> mongoDBTargets
Specifies Amazon DocumentDB or MongoDB targets.
List<E> dynamoDBTargets
Specifies Amazon DynamoDB targets.
List<E> catalogTargets
Specifies Glue Data Catalog targets.
String name
Returns the name of the blueprint that was registered.
CreateGrokClassifierRequest grokClassifier
A GrokClassifier object specifying the classifier to create.
CreateXMLClassifierRequest xMLClassifier
An XMLClassifier object specifying the classifier to create.
CreateJsonClassifierRequest jsonClassifier
A JsonClassifier object specifying the classifier to create.
CreateCsvClassifierRequest csvClassifier
A CsvClassifier object specifying the classifier to create.
String catalogId
The ID of the Data Catalog in which to create the connection. If none is provided, the Amazon Web Services account ID is used by default.
ConnectionInput connectionInput
A ConnectionInput object defining the connection to create.
Map<K,V> tags
The tags you assign to the connection.
String name
Name of the new crawler.
String role
The IAM role or Amazon Resource Name (ARN) of an IAM role used by the new crawler to access customer resources.
String databaseName
The Glue database where results are written, such as: arn:aws:daylight:us-east-1::database/sometable/*.
String description
A description of the new crawler.
CrawlerTargets targets
A collection of targets to crawl.
String schedule
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
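The cron() expression format can be wrapped in a small helper to make the field order explicit (a hypothetical convenience function, not part of the SDK):

```python
def daily_schedule(hour, minute):
    # Glue schedule expressions use cron(minutes hours day-of-month month
    # day-of-week year); '?' leaves day-of-week unspecified for a daily run.
    return f"cron({minute} {hour} * * ? *)"
```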
List<E> classifiers
A list of custom classifiers that the user has registered. By default, all built-in classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.
String tablePrefix
The table prefix used for catalog tables that are created.
SchemaChangePolicy schemaChangePolicy
The policy for the crawler's update and deletion behavior.
RecrawlPolicy recrawlPolicy
A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.
LineageConfiguration lineageConfiguration
Specifies data lineage configuration settings for the crawler.
String configuration
Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Configuring a Crawler.
String crawlerSecurityConfiguration
The name of the SecurityConfiguration structure to be used by this crawler.
Map<K,V> tags
The tags to use with this crawler request. You may use tags to limit access to the crawler. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
String name
The name of the classifier.
String delimiter
A custom symbol to denote what separates each column entry in the row.
String quoteSymbol
A custom symbol to denote what combines content into a single column value. Must be different from the column delimiter.
String containsHeader
Indicates whether the CSV file contains a header.
List<E> header
A list of strings representing column names.
Boolean disableValueTrimming
Specifies not to trim values before identifying the type of column values. The default value is true.
Boolean allowSingleColumn
Enables the processing of files that contain only one column.
String catalogId
The ID of the Data Catalog in which to create the database. If none is provided, the Amazon Web Services account ID is used by default.
DatabaseInput databaseInput
The metadata for the database.
String endpointName
The name to be assigned to the new DevEndpoint.
String roleArn
The IAM role for the DevEndpoint.
List<E> securityGroupIds
Security group IDs for the security groups to be used by the new DevEndpoint.
String subnetId
The subnet ID for the new DevEndpoint to use.
String publicKey
The public key to be used by this DevEndpoint for authentication. This attribute is provided for backward compatibility because the recommended attribute to use is public keys.
List<E> publicKeys
A list of public keys to be used by the development endpoints for authentication. The use of this attribute is preferred over a single public key because the public keys allow you to have a different private key per client.
If you previously created an endpoint with a public key, you must remove that key to be able to set a list of public keys. Call the UpdateDevEndpoint API with the public key content in the deletePublicKeys attribute, and the list of new keys in the addPublicKeys attribute.
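The key-rotation step described above amounts to sending both lists in one update. Sketched as a plain payload dict (attribute names follow this description; this is illustrative only, not an actual SDK invocation):

```python
def rotate_public_keys(endpoint_name, old_key, new_keys):
    # Remove the legacy single public key and install a per-client key list
    # in one UpdateDevEndpoint-shaped payload (illustrative shape only).
    return {
        "endpointName": endpoint_name,
        "deletePublicKeys": [old_key],
        "addPublicKeys": list(new_keys),
    }
```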
Integer numberOfNodes
The number of Glue Data Processing Units (DPUs) to allocate to this DevEndpoint.
String workerType
The type of predefined worker that is allocated to the development endpoint. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk) and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk) and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
Known issue: when a development endpoint is created with the G.2X WorkerType configuration, the Spark drivers for the development endpoint will run on 4 vCPU, 16 GB of memory, and a 64 GB disk.
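The per-worker resources above can be tabulated for quick reference (values transcribed from the description; Standard's DPU mapping is not stated here, so it is omitted):

```python
# Per-worker resources as described above.
WORKER_SPECS = {
    "Standard": {"vcpu": 4, "memory_gb": 16, "disk_gb": 50, "executors": 2},
    "G.1X": {"dpu": 1, "vcpu": 4, "memory_gb": 16, "disk_gb": 64, "executors": 1},
    "G.2X": {"dpu": 2, "vcpu": 8, "memory_gb": 32, "disk_gb": 128, "executors": 1},
}
```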
String glueVersion
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Development endpoints that are created without specifying a Glue version default to Glue 0.9.
You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.
Integer numberOfWorkers
The number of workers of a defined workerType that are allocated to the development endpoint.
The maximum number of workers you can define is 299 for G.1X, and 149 for G.2X.
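Those per-type caps can be enforced client-side before calling the API (a hypothetical validation helper, not part of the SDK):

```python
MAX_WORKERS = {"G.1X": 299, "G.2X": 149}  # caps described above

def check_number_of_workers(worker_type, number_of_workers):
    # Raise if the requested worker count exceeds the documented maximum.
    limit = MAX_WORKERS.get(worker_type)
    if limit is not None and number_of_workers > limit:
        raise ValueError(f"{worker_type} supports at most {limit} workers")
    return number_of_workers
```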
String extraPythonLibsS3Path
The paths to one or more Python libraries in an Amazon S3 bucket that should be loaded in your DevEndpoint. Multiple values must be complete paths separated by a comma.
You can only use pure Python libraries with a DevEndpoint. Libraries that rely on C extensions, such as the pandas Python data analysis library, are not yet supported.
String extraJarsS3Path
The path to one or more Java .jar files in an S3 bucket that should be loaded in your DevEndpoint.
String securityConfiguration
The name of the SecurityConfiguration structure to be used with this DevEndpoint.
Map<K,V> tags
The tags to use with this DevEndpoint. You may use tags to limit access to the DevEndpoint. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
Map<K,V> arguments
A map of arguments used to configure the DevEndpoint.
String endpointName
The name assigned to the new DevEndpoint.
String status
The current status of the new DevEndpoint.
List<E> securityGroupIds
The security groups assigned to the new DevEndpoint.
String subnetId
The subnet ID assigned to the new DevEndpoint.
String roleArn
The Amazon Resource Name (ARN) of the role assigned to the new DevEndpoint.
String yarnEndpointAddress
The address of the YARN endpoint used by this DevEndpoint.
Integer zeppelinRemoteSparkInterpreterPort
The Apache Zeppelin port for the remote Apache Spark interpreter.
Integer numberOfNodes
The number of Glue Data Processing Units (DPUs) allocated to this DevEndpoint.
String workerType
The type of predefined worker that is allocated to the development endpoint. May be a value of Standard, G.1X, or G.2X.
String glueVersion
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Integer numberOfWorkers
The number of workers of a defined workerType that are allocated to the development endpoint.
String availabilityZone
The AWS Availability Zone where this DevEndpoint is located.
String vpcId
The ID of the virtual private cloud (VPC) used by this DevEndpoint.
String extraPythonLibsS3Path
The paths to one or more Python libraries in an S3 bucket that will be loaded in your DevEndpoint.
String extraJarsS3Path
Path to one or more Java .jar files in an S3 bucket that will be loaded in your DevEndpoint.
String failureReason
The reason for a current failure in this DevEndpoint.
String securityConfiguration
The name of the SecurityConfiguration structure being used with this DevEndpoint.
Date createdTimestamp
The point in time at which this DevEndpoint was created.
Map<K,V> arguments
The map of arguments used to configure this DevEndpoint.
Valid arguments are:
"--enable-glue-datacatalog": ""
You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.
String classification
An identifier of the data format that the classifier matches, such as Twitter, JSON, Omniture logs, Amazon CloudWatch Logs, and so on.
String name
The name of the new classifier.
String grokPattern
The grok pattern used by this classifier.
String customPatterns
Optional custom grok patterns used by this classifier.
String name
The name you assign to this job definition. It must be unique in your account.
String description
Description of the job being defined.
String logUri
This field is reserved for future use.
String role
The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
ExecutionProperty executionProperty
An ExecutionProperty specifying the maximum number of concurrent runs allowed for this job.
JobCommand command
The JobCommand that runs this job.
Map<K,V> defaultArguments
The default arguments for this job.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
Map<K,V> nonOverridableArguments
Non-overridable arguments for this job, specified as name-value pairs.
ConnectionsList connections
The connections used for this job.
Integer maxRetries
The maximum number of times to retry this job if it fails.
Integer allocatedCapacity
This parameter is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) to allocate to this Job. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Integer timeout
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
Double maxCapacity
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job (JobCommand.Name="glueetl") or an Apache Spark streaming ETL job (JobCommand.Name="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
For Glue version 2.0 jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.
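The per-job-type DPU rules above can be sketched as a local pre-flight check. This is a hypothetical helper (not part of any Glue SDK) that validates a candidate MaxCapacity value against the documented constraints for each JobCommand.Name:

```python
# Hypothetical sketch: validate a MaxCapacity value against the documented
# per-job-type rules. Not an SDK call; purely local arithmetic.
ALLOWED_PYTHONSHELL = {0.0625, 1.0}

def validate_max_capacity(command_name: str, max_capacity: float) -> bool:
    """Return True if max_capacity is valid for the given JobCommand.Name."""
    if command_name == "pythonshell":
        # Python shell jobs: exactly 0.0625 or 1 DPU.
        return max_capacity in ALLOWED_PYTHONSHELL
    if command_name in ("glueetl", "gluestreaming"):
        # Spark ETL and streaming jobs: whole DPUs only, from 2 to 100.
        return float(max_capacity).is_integer() and 2 <= max_capacity <= 100
    return False
```

For example, `validate_max_capacity("glueetl", 2.5)` fails the fractional-DPU rule, while `validate_max_capacity("pythonshell", 0.0625)` passes.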
String securityConfiguration
The name of the SecurityConfiguration
structure to be used with this job.
Map<K,V> tags
The tags to use with this job. You may use tags to limit access to the job. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
NotificationProperty notificationProperty
Specifies configuration properties of a job notification.
String glueVersion
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated when a job runs.
The maximum number of workers you can define is 299 for G.1X and 149 for G.2X.
String workerType
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard
worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2
executors per worker.
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and
provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and
provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
String name
The unique name that was provided for this job definition.
String name
The name of the classifier.
String jsonPath
A JsonPath
string defining the JSON data for the classifier to classify. Glue supports a subset of
JsonPath, as described in Writing JsonPath
Custom Classifiers.
String name
The unique name that you give the transform when you create it.
String description
A description of the machine learning transform that is being defined. The default is an empty string.
List<E> inputRecordTables
A list of Glue table definitions used by the transform.
TransformParameters parameters
The algorithmic parameters that are specific to the transform type used. Conditionally dependent on the transform type.
String role
The name or Amazon Resource Name (ARN) of the IAM role with the required permissions. The required permissions include both Glue service role permissions to Glue resources, and Amazon S3 permissions required by the transform.
This role needs Glue service role permissions to allow access to resources in Glue. See Attach a Policy to IAM Users That Access Glue.
This role needs permission to your Amazon Simple Storage Service (Amazon S3) sources, targets, temporary directory, scripts, and any libraries used by the task run for this transform.
String glueVersion
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
Double maxCapacity
The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
MaxCapacity is a mutually exclusive option with NumberOfWorkers and WorkerType.
If either NumberOfWorkers or WorkerType is set, then MaxCapacity cannot be set.
If MaxCapacity is set, then neither NumberOfWorkers nor WorkerType can be set.
If WorkerType is set, then NumberOfWorkers is required (and vice versa).
MaxCapacity and NumberOfWorkers must both be at least 1.
When the WorkerType field is set to a value other than Standard, the MaxCapacity field is set automatically and becomes read-only.
String workerType
The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard
worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2
executors per worker.
For the G.1X
worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1
executor per worker.
For the G.2X
worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1
executor per worker.
MaxCapacity is a mutually exclusive option with NumberOfWorkers and WorkerType.
If either NumberOfWorkers or WorkerType is set, then MaxCapacity cannot be set.
If MaxCapacity is set, then neither NumberOfWorkers nor WorkerType can be set.
If WorkerType is set, then NumberOfWorkers is required (and vice versa).
MaxCapacity and NumberOfWorkers must both be at least 1.
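The mutual-exclusivity rules above can be expressed as a small local check. This is an illustrative sketch under the stated constraints (the function name is an assumption, not SDK behavior):

```python
# Hypothetical pre-flight check for the documented capacity parameter rules:
# MaxCapacity cannot be combined with WorkerType/NumberOfWorkers, and
# WorkerType and NumberOfWorkers must be set together.
def check_capacity_params(max_capacity=None, worker_type=None,
                          number_of_workers=None):
    """Return an error string, or None if the combination is allowed."""
    if max_capacity is not None and (
            worker_type is not None or number_of_workers is not None):
        return "MaxCapacity cannot be combined with WorkerType or NumberOfWorkers"
    if (worker_type is None) != (number_of_workers is None):
        return "WorkerType and NumberOfWorkers must be set together"
    return None
```

Calling it with `max_capacity=10.0` alone, or with `worker_type="G.1X", number_of_workers=5`, returns None; mixing the two styles returns an error string.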
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated when this task runs.
If WorkerType
is set, then NumberOfWorkers
is required (and vice versa).
Integer timeout
The timeout of the task run for this transform in minutes. This is the maximum time that a task run for this
transform can consume resources before it is terminated and enters TIMEOUT
status. The default is
2,880 minutes (48 hours).
Integer maxRetries
The maximum number of times to retry a task for this transform after a task run fails.
Map<K,V> tags
The tags to use with this machine learning transform. You may use tags to limit access to the machine learning transform. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
TransformEncryption transformEncryption
The encryption-at-rest settings of the transform that apply to accessing user data. Machine learning transforms can access user data encrypted in Amazon S3 using KMS.
String transformId
A unique identifier that is generated for the transform.
String catalogId
The catalog ID where the table resides.
String databaseName
Specifies the name of a database in which you want to create a partition index.
String tableName
Specifies the name of a table in which you want to create a partition index.
PartitionIndex partitionIndex
Specifies a PartitionIndex
structure to create a partition index in an existing table.
String catalogId
The Amazon Web Services account ID of the catalog in which the partition is to be created.
String databaseName
The name of the metadata database in which the partition is to be created.
String tableName
The name of the metadata table in which the partition is to be created.
PartitionInput partitionInput
A PartitionInput
structure defining the partition to be created.
String registryName
Name of the registry to be created, with a maximum length of 255 characters. It may contain only letters, numbers, hyphens, underscores, dollar signs, or hash marks. No whitespace.
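The naming rule above can be captured in a regular expression. This is a hedged sketch (the pattern and function name are assumptions for illustration, not part of the Glue API):

```python
import re

# Hypothetical validator for the registry/schema naming rule: at most 255
# characters, drawn only from letters, numbers, hyphen, underscore, dollar
# sign, and hash mark; no whitespace.
NAME_RE = re.compile(r"^[A-Za-z0-9_$#-]{1,255}$")

def is_valid_registry_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None
```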
String description
A description of the registry. If a description is not provided, no default value is applied.
Map<K,V> tags
Amazon Web Services tags that contain a key-value pair and may be searched by console, command line, or API.
RegistryId registryId
This is a wrapper shape to contain the registry identity fields. If this is not provided, the default registry
will be used. The ARN format for the same will be:
arn:aws:glue:us-east-2:<customer id>:registry/default-registry:random-5-letter-id
.
String schemaName
Name of the schema to be created, with a maximum length of 255 characters. It may contain only letters, numbers, hyphens, underscores, dollar signs, or hash marks. No whitespace.
String dataFormat
The data format of the schema definition. Currently AVRO
and JSON
are supported.
String compatibility
The compatibility mode of the schema. The possible values are:
NONE: No compatibility mode applies. You can use this choice in development scenarios or if you do not know the compatibility mode that you want to apply to schemas. Any new version added will be accepted without undergoing a compatibility check.
DISABLED: This compatibility choice prevents versioning for a particular schema. You can use this choice to prevent future versioning of a schema.
BACKWARD: This compatibility choice is recommended as it allows data receivers to read both the current and one previous schema version. This means that for instance, a new schema version cannot drop data fields or change the type of these fields, so they can't be read by readers using the previous version.
BACKWARD_ALL: This compatibility choice allows data receivers to read both the current and all previous schema versions. You can use this choice when you need to delete fields or add optional fields, and check compatibility against all previous schema versions.
FORWARD: This compatibility choice allows data receivers to read both the current and one next schema version, but not necessarily later versions. You can use this choice when you need to add fields or delete optional fields, but only check compatibility against the last schema version.
FORWARD_ALL: This compatibility choice allows data receivers to read data written by producers of any new registered schema. You can use this choice when you need to add fields or delete optional fields, and check compatibility against all previous schema versions.
FULL: This compatibility choice allows data receivers to read data written by producers using the previous or next version of the schema, but not necessarily earlier or later versions. You can use this choice when you need to add or remove optional fields, but only check compatibility against the last schema version.
FULL_ALL: This compatibility choice allows data receivers to read data written by producers using all previous schema versions. You can use this choice when you need to add or remove optional fields, and check compatibility against all previous schema versions.
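One way to make the compatibility modes concrete is to ask, for a reader of the current schema version, which older versions it is still expected to handle. The sketch below is an illustrative simplification (names and the version-index model are assumptions, not Glue API behavior):

```python
# Illustrative sketch: given a compatibility mode and the 1-based index of
# the current schema version, return which previous versions a reader of the
# current version is expected to handle under that mode.
def readable_previous_versions(mode: str, current: int) -> list:
    if mode in ("NONE", "DISABLED", "FORWARD", "FORWARD_ALL"):
        return []                        # no guarantee about older versions
    if mode in ("BACKWARD", "FULL"):
        # One version back only.
        return [current - 1] if current > 1 else []
    if mode in ("BACKWARD_ALL", "FULL_ALL"):
        return list(range(1, current))   # all previous versions
    raise ValueError("unknown mode: %s" % mode)
```

For instance, under BACKWARD a reader of version 3 handles version 2; under BACKWARD_ALL it handles versions 1 and 2.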
String description
An optional description of the schema. If a description is not provided, no automatic default value is applied.
Map<K,V> tags
Amazon Web Services tags that contain a key-value pair and may be searched by console, command line, or API. If specified, follows the Amazon Web Services tags-on-create pattern.
String schemaDefinition
The schema definition using the DataFormat
setting for SchemaName
.
String registryName
The name of the registry.
String registryArn
The Amazon Resource Name (ARN) of the registry.
String schemaName
The name of the schema.
String schemaArn
The Amazon Resource Name (ARN) of the schema.
String description
A description of the schema if specified when created.
String dataFormat
The data format of the schema definition. Currently AVRO
and JSON
are supported.
String compatibility
The schema compatibility mode.
Long schemaCheckpoint
The version number of the checkpoint (the last time the compatibility mode was changed).
Long latestSchemaVersion
The latest version of the schema associated with the returned schema definition.
Long nextSchemaVersion
The next version of the schema associated with the returned schema definition.
String schemaStatus
The status of the schema.
Map<K,V> tags
The tags for the schema.
String schemaVersionId
The unique identifier of the first schema version.
String schemaVersionStatus
The status of the first schema version created.
String name
The name for the new security configuration.
EncryptionConfiguration encryptionConfiguration
The encryption configuration for the new security configuration.
String catalogId
The ID of the Data Catalog in which to create the Table
. If none is supplied, the Amazon Web
Services account ID is used by default.
String databaseName
The catalog database in which to create the new table. For Hive compatibility, this name is entirely lowercase.
TableInput tableInput
The TableInput
object that defines the metadata table to create in the catalog.
List<E> partitionIndexes
A list of partition indexes, PartitionIndex
structures, to create in the table.
String transactionId
The ID of the transaction.
String name
The name of the trigger.
String workflowName
The name of the workflow associated with the trigger.
String type
The type of the new trigger.
String schedule
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
This field is required when the trigger type is SCHEDULED.
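Glue schedules use a six-field cron format: cron(Minutes Hours Day-of-month Month Day-of-week Year). A small formatting helper (hypothetical, not part of any SDK) makes the field order explicit:

```python
# Hypothetical helper that formats the six-field cron expression used by
# Glue trigger schedules: cron(Minutes Hours Day-of-month Month Day-of-week Year).
# Note that Day-of-month and Day-of-week cannot both be "*"; one is
# typically "?" as in the documented example.
def glue_cron(minute="*", hour="*", day_of_month="*", month="*",
              day_of_week="?", year="*"):
    return "cron(%s %s %s %s %s %s)" % (
        minute, hour, day_of_month, month, day_of_week, year)
```

For the example above, `glue_cron(minute=15, hour=12)` produces `cron(15 12 * * ? *)`.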
Predicate predicate
A predicate to specify when the new trigger should fire.
This field is required when the trigger type is CONDITIONAL
.
List<E> actions
The actions initiated by this trigger when it fires.
String description
A description of the new trigger.
Boolean startOnCreation
Set to true
to start SCHEDULED
and CONDITIONAL
triggers when created. True
is not supported for ON_DEMAND
triggers.
Map<K,V> tags
The tags to use with this trigger. You may use tags to limit access to the trigger. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
EventBatchingCondition eventBatchingCondition
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
String name
The name of the trigger.
String catalogId
The ID of the Data Catalog in which to create the function. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which to create the function.
UserDefinedFunctionInput functionInput
A FunctionInput
object that defines the function to create in the Data Catalog.
String name
The name to be assigned to the workflow. It should be unique within your account.
String description
A description of the workflow.
Map<K,V> defaultRunProperties
A collection of properties to be used as part of each execution of the workflow.
Map<K,V> tags
The tags to be used with this workflow.
Integer maxConcurrentRuns
You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.
String name
The name of the workflow which was provided as part of the request.
String classification
An identifier of the data format that the classifier matches.
String name
The name of the classifier.
String rowTag
The XML tag designating the element that contains each record in an XML document being parsed. This can't
identify a self-closing element (closed by />
). An empty row element that contains only
attributes can be parsed as long as it ends with a closing tag (for example,
<row item_a="A" item_b="B"></row>
is okay, but
<row item_a="A" item_b="B" />
is not).
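The self-closing restriction above can be checked locally before pointing a classifier at a document. This is a hedged sketch: a naive regex test (an assumption for illustration; it does not handle `/` inside attribute values or a full XML parse):

```python
import re

# Naive check for the rowTag restriction: a self-closing element such as
# <row ... /> cannot serve as a row element, while <row ...></row> can.
SELF_CLOSING = re.compile(r"<\s*\w+[^>]*/\s*>")

def is_valid_row_element(xml_fragment: str) -> bool:
    """True if the fragment uses an explicit closing tag rather than />."""
    return SELF_CLOSING.search(xml_fragment) is None
```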
String name
The name of the classifier.
Date creationTime
The time that this classifier was registered.
Date lastUpdated
The time that this classifier was last updated.
Long version
The version of this classifier.
String delimiter
A custom symbol to denote what separates each column entry in the row.
String quoteSymbol
A custom symbol to denote what combines content into a single column value. It must be different from the column delimiter.
String containsHeader
Indicates whether the CSV file contains a header.
List<E> header
A list of strings representing column names.
Boolean disableValueTrimming
Specifies not to trim values before identifying the type of column values. The default value is true
.
Boolean allowSingleColumn
Enables the processing of files that contain only one column.
String name
The name of the database. For Hive compatibility, this is folded to lowercase when it is stored.
String description
A description of the database.
String locationUri
The location of the database (for example, an HDFS path).
Map<K,V> parameters
These key-value pairs define parameters and properties of the database.
Date createTime
The time at which the metadata database was created in the catalog.
List<E> createTableDefaultPermissions
Creates a set of default permissions on the table for principals.
DatabaseIdentifier targetDatabase
A DatabaseIdentifier
structure that describes a target database for resource linking.
String catalogId
The ID of the Data Catalog in which the database resides.
String name
The name of the database. For Hive compatibility, this is folded to lowercase when it is stored.
String description
A description of the database.
String locationUri
The location of the database (for example, an HDFS path).
Map<K,V> parameters
These key-value pairs define parameters and properties of the database.
List<E> createTableDefaultPermissions
Creates a set of default permissions on the table for principals.
DatabaseIdentifier targetDatabase
A DatabaseIdentifier
structure that describes a target database for resource linking.
EncryptionAtRest encryptionAtRest
Specifies the encryption-at-rest configuration for the Data Catalog.
ConnectionPasswordEncryption connectionPasswordEncryption
When connection password protection is enabled, the Data Catalog uses a customer-provided key to encrypt the
password as part of CreateConnection
or UpdateConnection
and store it in the
ENCRYPTED_PASSWORD
field in the connection properties. You can enable catalog encryption or only
password encryption.
String dataLakePrincipalIdentifier
An identifier for the Lake Formation principal.
DecimalNumber minimumValue
The lowest value in the column.
DecimalNumber maximumValue
The highest value in the column.
Long numberOfNulls
The number of null values in the column.
Long numberOfDistinctValues
The number of distinct values in a column.
ByteBuffer unscaledValue
The unscaled numeric value.
Integer scale
The scale that determines where the decimal point falls in the unscaled value.
String name
The name of the blueprint to delete.
String name
Returns the name of the blueprint that was deleted.
String name
Name of the classifier to remove.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
List<E> partitionValues
A list of partition values identifying the partition.
String columnName
Name of the column.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
String columnName
The name of the column.
String name
The name of the crawler to remove.
String endpointName
The name of the DevEndpoint
.
String jobName
The name of the job definition to delete.
String jobName
The name of the job definition that was deleted.
String transformId
The unique identifier of the transform to delete.
String transformId
The unique identifier of the transform that was deleted.
String catalogId
The catalog ID where the table resides.
String databaseName
Specifies the name of a database from which you want to delete a partition index.
String tableName
Specifies the name of a table from which you want to delete a partition index.
String indexName
The name of the partition index to be deleted.
String catalogId
The ID of the Data Catalog where the partition to be deleted resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which the table in question resides.
String tableName
The name of the table that contains the partition to be deleted.
List<E> partitionValues
The values that define the partition.
RegistryId registryId
This is a wrapper structure that may contain the registry name and Amazon Resource Name (ARN).
SchemaId schemaId
This is a wrapper structure that may contain the schema name and Amazon Resource Name (ARN).
String name
The name of the security configuration to delete.
String catalogId
The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which the table resides. For Hive compatibility, this name is entirely lowercase.
String name
The name of the table to be deleted. For Hive compatibility, this name is entirely lowercase.
String transactionId
The transaction ID at which to delete the table contents.
String catalogId
The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.
String tableName
The name of the table. For Hive compatibility, this name is entirely lowercase.
String versionId
The ID of the table version to be deleted. A VersionID
is a string representation of an integer.
Each version is incremented by 1.
String name
The name of the trigger to delete.
String name
The name of the trigger that was deleted.
String catalogId
The ID of the Data Catalog where the function to be deleted is located. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the function is located.
String functionName
The name of the function definition to be deleted.
String name
Name of the workflow to be deleted.
String name
Name of the workflow specified in input.
String endpointName
The name of the DevEndpoint
.
String roleArn
The Amazon Resource Name (ARN) of the IAM role used in this DevEndpoint
.
List<E> securityGroupIds
A list of security group identifiers used in this DevEndpoint
.
String subnetId
The subnet ID for this DevEndpoint
.
String yarnEndpointAddress
The YARN endpoint address used by this DevEndpoint
.
String privateAddress
A private IP address to access the DevEndpoint
within a VPC if the DevEndpoint
is
created within one. The PrivateAddress
field is present only when you create the
DevEndpoint
within your VPC.
Integer zeppelinRemoteSparkInterpreterPort
The Apache Zeppelin port for the remote Apache Spark interpreter.
String publicAddress
The public IP address used by this DevEndpoint
. The PublicAddress
field is present only
when you create a non-virtual private cloud (VPC) DevEndpoint
.
String status
The current status of this DevEndpoint
.
String workerType
The type of predefined worker that is allocated to the development endpoint. Accepts a value of Standard, G.1X, or G.2X.
For the Standard
worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2
executors per worker.
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and
provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and
provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
Known issue: when a development endpoint is created with the G.2X
WorkerType
configuration, the Spark drivers for the development endpoint will run on 4 vCPU, 16 GB of memory, and a 64 GB
disk.
String glueVersion
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Development endpoints that are created without specifying a Glue version default to Glue 0.9.
You can specify a version of Python support for development endpoints by using the Arguments
parameter in the CreateDevEndpoint
or UpdateDevEndpoint
APIs. If no arguments are
provided, the version defaults to Python 2.
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated to the development endpoint.
The maximum number of workers you can define is 299 for G.1X and 149 for G.2X.
Integer numberOfNodes
The number of Glue Data Processing Units (DPUs) allocated to this DevEndpoint
.
String availabilityZone
The AWS Availability Zone where this DevEndpoint
is located.
String vpcId
The ID of the virtual private cloud (VPC) used by this DevEndpoint
.
String extraPythonLibsS3Path
The paths to one or more Python libraries in an Amazon S3 bucket that should be loaded in your
DevEndpoint
. Multiple values must be complete paths separated by a comma.
You can only use pure Python libraries with a DevEndpoint
. Libraries that rely on C extensions, such
as the pandas Python data analysis library, are not currently supported.
String extraJarsS3Path
The path to one or more Java .jar
files in an S3 bucket that should be loaded in your
DevEndpoint
.
You can only use pure Java/Scala libraries with a DevEndpoint
.
String failureReason
The reason for a current failure in this DevEndpoint
.
String lastUpdateStatus
The status of the last update.
Date createdTimestamp
The point in time at which this DevEndpoint was created.
Date lastModifiedTimestamp
The point in time at which this DevEndpoint
was last modified.
String publicKey
The public key to be used by this DevEndpoint
for authentication. This attribute is provided for
backward compatibility because the recommended attribute to use is public keys.
List<E> publicKeys
A list of public keys to be used by the DevEndpoints
for authentication. Using this attribute is
preferred over a single public key because the public keys allow you to have a different private key per client.
If you previously created an endpoint with a public key, you must remove that key to be able to set a list of
public keys. Call the UpdateDevEndpoint
API operation with the public key content in the
deletePublicKeys
attribute, and the list of new keys in the addPublicKeys
attribute.
String securityConfiguration
The name of the SecurityConfiguration
structure to be used with this DevEndpoint
.
Map<K,V> arguments
A map of arguments used to configure the DevEndpoint
.
Valid arguments are:
"--enable-glue-datacatalog": ""
You can specify a version of Python support for development endpoints by using the Arguments
parameter in the CreateDevEndpoint
or UpdateDevEndpoint
APIs. If no arguments are
provided, the version defaults to Python 2.
String extraPythonLibsS3Path
The paths to one or more Python libraries in an Amazon Simple Storage Service (Amazon S3) bucket that should be
loaded in your DevEndpoint
. Multiple values must be complete paths separated by a comma.
You can only use pure Python libraries with a DevEndpoint
. Libraries that rely on C extensions, such
as the pandas Python data analysis library, are not currently supported.
String extraJarsS3Path
The path to one or more Java .jar
files in an S3 bucket that should be loaded in your
DevEndpoint
.
You can only use pure Java/Scala libraries with a DevEndpoint
.
String path
The name of the DynamoDB table to crawl.
Boolean scanAll
Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.
A value of true
means to scan all records, while a value of false
means to sample the
records. If no value is specified, the value defaults to true
.
Double scanRate
The percentage of the configured read capacity units to use by the Glue crawler. Read capacity units is a term defined by DynamoDB, and is a numeric value that acts as rate limiter for the number of reads that can be performed on that table per second.
The valid values are null or a value between 0.1 and 1.5. A null value is used when the user does not provide a value, and defaults to 0.5 of the configured Read Capacity Units (for provisioned tables), or 0.25 of the maximum configured Read Capacity Units (for tables using on-demand mode).
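The defaulting and range rules above reduce to simple arithmetic. This is an illustrative sketch (the helper name is an assumption, not crawler behavior):

```python
# Illustrative arithmetic for scanRate: resolve the documented defaults
# (0.5 of configured RCU for provisioned tables, 0.25 for on-demand) and
# enforce the valid 0.1-1.5 range, then compute the reads-per-second budget.
def effective_read_rate(configured_rcu, scan_rate=None, on_demand=False):
    if scan_rate is None:
        scan_rate = 0.25 if on_demand else 0.5   # documented defaults
    if not (0.1 <= scan_rate <= 1.5):
        raise ValueError("scanRate must be between 0.1 and 1.5")
    return configured_rcu * scan_rate
```

For a provisioned table with 100 configured RCUs and no scanRate set, the crawler's budget works out to 50 reads per second.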
List<E> s3Encryption
The encryption configuration for Amazon Simple Storage Service (Amazon S3) data.
CloudWatchEncryption cloudWatchEncryption
The encryption configuration for Amazon CloudWatch.
JobBookmarksEncryption jobBookmarksEncryption
The encryption configuration for job bookmarks.
String transformType
The type of machine learning transform.
FindMatchesMetrics findMatchesMetrics
The evaluation metrics for the find matches algorithm.
Integer maxConcurrentRuns
The maximum number of concurrent runs allowed for the job. The default is 1. An error is returned when this threshold is reached. The maximum value you can specify is controlled by a service limit.
String outputS3Path
The Amazon Simple Storage Service (Amazon S3) path where you will export the labels.
Double areaUnderPRCurve
The area under the precision/recall curve (AUPRC) is a single number measuring the overall quality of the transform, that is independent of the choice made for precision vs. recall. Higher values indicate that you have a more attractive precision vs. recall tradeoff.
For more information, see Precision and recall in Wikipedia.
Double precision
The precision metric indicates how often your transform is correct when it predicts a match. Specifically, it measures how well the transform finds true positives from the total true positives possible.
For more information, see Precision and recall in Wikipedia.
Double recall
The recall metric indicates that for an actual match, how often your transform predicts the match. Specifically, it measures how well the transform finds true positives from the total records in the source data.
For more information, see Precision and recall in Wikipedia.
Double f1
The maximum F1 metric indicates the transform's accuracy between 0 and 1, where 1 is the best accuracy.
For more information, see F1 score in Wikipedia.
ConfusionMatrix confusionMatrix
The confusion matrix shows you what your transform is predicting accurately and what types of errors it is making.
For more information, see Confusion matrix in Wikipedia.
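The precision, recall, and F1 metrics described above all derive from confusion-matrix counts. A minimal sketch of that arithmetic (function names are assumptions for illustration):

```python
# Standard definitions of the evaluation metrics above, computed from raw
# confusion-matrix counts: tp = true positives, fp = false positives,
# fn = false negatives.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1(tp, fp, fn):
    # Harmonic mean of precision and recall.
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)
```

With 8 true positives, 2 false positives, and 8 false negatives, precision is 0.8 and recall is 0.5.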
List<E> columnImportances
A list of ColumnImportance
structures containing column importance metrics, sorted in order of
descending importance.
String primaryKeyColumnName
The name of a column that uniquely identifies rows in the source table. Used to help identify matching records.
Double precisionRecallTradeoff
The value selected when tuning your transform for a balance between precision and recall. A value of 0.5 means no preference; a value of 1.0 means a bias purely for precision, and a value of 0.0 means a bias for recall. Because this is a tradeoff, choosing values close to 1.0 means very low recall, and choosing values close to 0.0 results in very low precision.
The precision metric indicates how often your model is correct when it predicts a match.
The recall metric indicates that for an actual match, how often your model predicts the match.
Double accuracyCostTradeoff
The value that is selected when tuning your transform for a balance between accuracy and cost. A value of 0.5
means that the system balances accuracy and cost concerns. A value of 1.0 means a bias purely for accuracy, which
typically results in a higher cost, sometimes substantially higher. A value of 0.0 means a bias purely for cost,
which results in a less accurate FindMatches
transform, sometimes with unacceptable accuracy.
Accuracy measures how well the transform finds true positives and true negatives. Increasing accuracy requires more machine resources and cost. But it also results in increased recall.
Cost measures how many compute resources, and thus money, are consumed to run the transform.
Boolean enforceProvidedLabels
The value to switch on or off to force the output to match the provided labels from users. If the value is True, the find matches transform forces the output to match the provided labels, and the results override the normal conflation results. If the value is False, the find matches transform does not ensure that all the provided labels are respected, and the results rely on the trained model. Note that setting this value to True may increase the conflation execution time.
Blueprint blueprint
Returns a Blueprint object.
BlueprintRun blueprintRun
Returns a BlueprintRun object.
String catalogId
The ID of the catalog to migrate. Currently, this should be the Amazon Web Services account ID.
CatalogImportStatus importStatus
The status of the specified catalog migration.
String name
Name of the classifier to retrieve.
Classifier classifier
The requested classifier.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
List<E> partitionValues
A list of partition values identifying the partition.
List<E> columnNames
A list of the column names.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
List<E> columnNames
A list of the column names.
String catalogId
The ID of the Data Catalog in which the connection resides. If none is provided, the Amazon Web Services account ID is used by default.
String name
The name of the connection definition to retrieve.
Boolean hidePassword
Allows you to retrieve the connection metadata without returning the password. For instance, the AWS Glue console uses this flag to retrieve the connection, and does not display the password. Set this parameter when the caller might not have permission to use the KMS key to decrypt the password, but it does have permission to access the rest of the connection properties.
Connection connection
The requested connection definition.
String catalogId
The ID of the Data Catalog in which the connections reside. If none is provided, the Amazon Web Services account ID is used by default.
GetConnectionsFilter filter
A filter that controls which connections are returned.
Boolean hidePassword
Allows you to retrieve the connection metadata without returning the password. For instance, the AWS Glue console uses this flag to retrieve the connection, and does not display the password. Set this parameter when the caller might not have permission to use the KMS key to decrypt the password, but it does have permission to access the rest of the connection properties.
String nextToken
A continuation token, if this is a continuation call.
Integer maxResults
The maximum number of connections to return in one response.
String name
The name of the crawler to retrieve metadata for.
Crawler crawler
The metadata for the specified crawler.
Database database
The definition of the specified database in the Data Catalog.
String catalogId
The ID of the Data Catalog from which to retrieve Databases. If none is provided, the Amazon Web Services account ID is used by default.
String nextToken
A continuation token, if this is a continuation call.
Integer maxResults
The maximum number of databases to return in one response.
String resourceShareType
Allows you to specify that you want to list the databases shared with your account. The allowable values are
FOREIGN
or ALL
.
If set to FOREIGN
, will list the databases shared with your account.
If set to ALL
, will list the databases shared with your account, as well as the databases in yor
local account.
String catalogId
The ID of the Data Catalog to retrieve the security configuration for. If none is provided, the Amazon Web Services account ID is used by default.
DataCatalogEncryptionSettings dataCatalogEncryptionSettings
The requested security configuration.
String pythonScript
The Python script to transform.
String endpointName
Name of the DevEndpoint to retrieve information for.
DevEndpoint devEndpoint
A DevEndpoint definition.
JobBookmarkEntry jobBookmarkEntry
A structure that defines a point that a job can resume processing.
String jobName
The name of the job definition to retrieve.
Job job
The requested job definition.
JobRun jobRun
The requested job-run metadata.
CatalogEntry source
Specifies the source table.
List<E> sinks
A list of target tables.
Location location
Parameters for the mapping.
String transformId
The unique identifier of the transform associated with this task run.
String taskRunId
The unique run identifier associated with this run.
String status
The status for this task run.
String logGroupName
The names of the log groups that are associated with the task run.
TaskRunProperties properties
The list of properties that are associated with the task run.
String errorString
The error strings that are associated with the task run.
Date startedOn
The date and time when this task run started.
Date lastModifiedOn
The date and time when this task run was last modified.
Date completedOn
The date and time when this task run was completed.
Integer executionTime
The amount of time (in seconds) that the task run consumed resources.
String transformId
The unique identifier of the machine learning transform.
String nextToken
A token for pagination of the results. The default is empty.
Integer maxResults
The maximum number of results to return.
TaskRunFilterCriteria filter
The filter criteria, in the TaskRunFilterCriteria structure, for the task run.
TaskRunSortCriteria sort
The sorting criteria, in the TaskRunSortCriteria structure, for the task run.
String transformId
The unique identifier of the transform, generated at the time that the transform was created.
String transformId
The unique identifier of the transform, generated at the time that the transform was created.
String name
The unique name given to the transform when it was created.
String description
A description of the transform.
String status
The last known status of the transform (to indicate whether it can be used or not). One of "NOT_READY", "READY", or "DELETING".
Date createdOn
The date and time when the transform was created.
Date lastModifiedOn
The date and time when the transform was last modified.
List<E> inputRecordTables
A list of Glue table definitions used by the transform.
TransformParameters parameters
The configuration parameters that are specific to the algorithm used.
EvaluationMetrics evaluationMetrics
The latest evaluation metrics.
Integer labelCount
The number of labels available for this transform.
List<E> schema
The Map<Column, Type> object that represents the schema that this transform accepts. Has an upper bound of 100 columns.
String role
The name or Amazon Resource Name (ARN) of the IAM role with the required permissions.
String glueVersion
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
Double maxCapacity
The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
When the WorkerType field is set to a value other than Standard, the MaxCapacity field is set automatically and becomes read-only.
String workerType
The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory, a 64 GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory, a 128 GB disk, and 1 executor per worker.
Integer numberOfWorkers
The number of workers of a defined workerType that are allocated when this task runs.
Integer timeout
The timeout for a task run for this transform in minutes. This is the maximum time that a task run for this transform can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
Integer maxRetries
The maximum number of times to retry a task for this transform after a task run fails.
TransformEncryption transformEncryption
The encryption-at-rest settings of the transform that apply to accessing user data. Machine learning transforms can access user data encrypted in Amazon S3 using KMS.
String nextToken
A paginated token to offset the results.
Integer maxResults
The maximum number of results to return.
TransformFilterCriteria filter
The filter transformation criteria.
TransformSortCriteria sort
The sorting criteria.
String catalogId
The catalog ID where the table resides.
String databaseName
Specifies the name of a database from which you want to retrieve partition indexes.
String tableName
Specifies the name of a table for which you want to retrieve the partition indexes.
String nextToken
A continuation token, included if this is a continuation call.
String catalogId
The ID of the Data Catalog where the partition in question resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partition resides.
String tableName
The name of the partition's table.
List<E> partitionValues
The values that define the partition.
Partition partition
The requested information, in the form of a Partition
object.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
String expression
An expression that filters the partitions to be returned.
The expression uses SQL syntax similar to the SQL WHERE filter clause. The SQL statement parser JSQLParser parses the expression.
Operators: The following are the operators that you can use in the Expression API call:
=: Checks whether the values of the two operands are equal; if yes, then the condition becomes true.
Example: Assume 'variable a' holds 10 and 'variable b' holds 20.
(a = b) is not true.
< >: Checks whether the values of two operands are equal; if the values are not equal, then the condition becomes true.
Example: (a < > b) is true.
>: Checks whether the value of the left operand is greater than the value of the right operand; if yes, then the condition becomes true.
Example: (a > b) is not true.
<: Checks whether the value of the left operand is less than the value of the right operand; if yes, then the condition becomes true.
Example: (a < b) is true.
>=: Checks whether the value of the left operand is greater than or equal to the value of the right operand; if yes, then the condition becomes true.
Example: (a >= b) is not true.
<=: Checks whether the value of the left operand is less than or equal to the value of the right operand; if yes, then the condition becomes true.
Example: (a <= b) is true.
Logical operators.
Supported Partition Key Types: The following are the supported partition keys.
string
date
timestamp
int
bigint
long
tinyint
smallint
decimal
If a type that is not valid is encountered, an exception is thrown.
The following list shows the valid operators on each type. When you define a crawler, the partitionKey type is created as a STRING, to be compatible with the catalog partitions.
Sample API Call:
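A sketch of a GetPartitions request that filters on partition keys, assuming the boto3 Glue client. The database, table, and partition key names here are hypothetical:

```python
# Request shape for glue.get_partitions; "year" is assumed to be an int
# partition key and "region" a string partition key of the example table.
request = {
    "DatabaseName": "sales_db",
    "TableName": "events",
    # SQL-like WHERE clause over the partition keys.
    "Expression": "year = 2021 AND region = 'us-east-1'",
    "MaxResults": 100,
}
# response = boto3.client("glue").get_partitions(**request)
```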
String nextToken
A continuation token, if this is not the first call to retrieve these partitions.
Segment segment
The segment of the table's partitions to scan in this request.
Integer maxResults
The maximum number of partitions to return in a single response.
Boolean excludeColumnSchema
When true, specifies not returning the partition column schema. Useful when you are interested only in other partition attributes such as partition values or location. This approach avoids the problem of a large response by not returning duplicate data.
String transactionId
The transaction ID at which to read the partition contents.
Date queryAsOfTime
The time as of when to read the partition contents. If not set, the most recent transaction commit time will be used. Cannot be specified along with TransactionId.
List<E> mapping
The list of mappings from a source table to target tables.
CatalogEntry source
The source table.
List<E> sinks
The target tables.
Location location
The parameters for the mapping.
String language
The programming language of the code to perform the mapping.
Map<K,V> additionalPlanOptionsMap
A map to hold additional optional key-value parameters.
Currently, these key-value pairs are supported:
inferSchema — Specifies whether to set inferSchema to true or false for the default script generated by a Glue job. For example, to set inferSchema to true, pass the following key-value pair:
--additional-plan-options-map '{"inferSchema":"true"}'
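The map is plain string-to-string JSON; a minimal sketch of building the flag shown above:

```python
import json

# The additional plan options map holds string keys and string values; here
# it is serialized for the documented --additional-plan-options-map flag.
additional_plan_options = {"inferSchema": "true"}
cli_flag = ("--additional-plan-options-map '"
            + json.dumps(additional_plan_options, separators=(",", ":"))
            + "'")
```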
RegistryId registryId
This is a wrapper structure that may contain the registry name and Amazon Resource Name (ARN).
String registryName
The name of the registry.
String registryArn
The Amazon Resource Name (ARN) of the registry.
String description
A description of the registry.
String status
The status of the registry.
String createdTime
The date and time the registry was created.
String updatedTime
The date and time the registry was updated.
String resourceArn
The ARN of the Glue resource for which to retrieve the resource policy. If not supplied, the Data Catalog resource policy is returned. Use GetResourcePolicies to view all existing resource policies. For more information see Specifying Glue Resource ARNs.
String policyInJson
Contains the requested policy document, in JSON format.
String policyHash
Contains the hash value associated with this policy.
Date createTime
The date and time at which the policy was created.
Date updateTime
The date and time at which the policy was last updated.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. One of SchemaArn or SchemaName has to be provided.
SchemaId$SchemaName: The name of the schema. One of SchemaArn or SchemaName has to be provided.
String schemaDefinition
The definition of the schema for which schema details are required.
String schemaVersionId
The schema ID of the schema version.
String schemaArn
The Amazon Resource Name (ARN) of the schema.
String dataFormat
The data format of the schema definition. Currently only AVRO and JSON are supported.
String status
The status of the schema version.
String createdTime
The date and time the schema was created.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.
SchemaId$SchemaName: The name of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.
String registryName
The name of the registry.
String registryArn
The Amazon Resource Name (ARN) of the registry.
String schemaName
The name of the schema.
String schemaArn
The Amazon Resource Name (ARN) of the schema.
String description
A description of the schema, if one was specified when the schema was created.
String dataFormat
The data format of the schema definition. Currently AVRO and JSON are supported.
String compatibility
The compatibility mode of the schema.
Long schemaCheckpoint
The version number of the checkpoint (the last time the compatibility mode was changed).
Long latestSchemaVersion
The latest version of the schema associated with the returned schema definition.
Long nextSchemaVersion
The next version of the schema associated with the returned schema definition.
String schemaStatus
The status of the schema.
String createdTime
The date and time the schema was created.
String updatedTime
The date and time the schema was updated.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.
SchemaId$SchemaName: The name of the schema. Either SchemaArn or SchemaName and RegistryName has to be provided.
String schemaVersionId
The SchemaVersionId of the schema version. This field is required for fetching by schema ID. Either this or the SchemaId wrapper has to be provided.
SchemaVersionNumber schemaVersionNumber
The version number of the schema.
String schemaVersionId
The SchemaVersionId of the schema version.
String schemaDefinition
The schema definition for the schema ID.
String dataFormat
The data format of the schema definition. Currently AVRO and JSON are supported.
String schemaArn
The Amazon Resource Name (ARN) of the schema.
Long versionNumber
The version number of the schema.
String status
The status of the schema version.
String createdTime
The date and time the schema version was created.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. One of SchemaArn or SchemaName has to be provided.
SchemaId$SchemaName: The name of the schema. One of SchemaArn or SchemaName has to be provided.
SchemaVersionNumber firstSchemaVersionNumber
The first of the two schema versions to be compared.
SchemaVersionNumber secondSchemaVersionNumber
The second of the two schema versions to be compared.
String schemaDiffType
Refers to SYNTAX_DIFF, which is the currently supported diff type.
String diff
The difference between schemas as a string in JsonPatch format.
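A sketch of what such a JsonPatch diff string might contain; the field name and position in this payload are illustrative, not a real response:

```python
import json

# Hypothetical diff between two schema versions: version two added an
# "email" field. The API returns this JsonPatch document as one string.
diff = json.dumps([
    {"op": "add", "path": "/fields/2",
     "value": {"name": "email", "type": "string"}},
])
patch = json.loads(diff)  # parse the string back into patch operations
```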
String name
The name of the security configuration to retrieve.
SecurityConfiguration securityConfiguration
The requested security configuration.
String catalogId
The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.
String name
The name of the table for which to retrieve the definition. For Hive compatibility, this name is entirely lowercase.
String transactionId
The transaction ID at which to read the table contents.
Date queryAsOfTime
The time as of when to read the table contents. If not set, the most recent transaction commit time will be used. Cannot be specified along with TransactionId.
Table table
The Table object that defines the specified table.
String catalogId
The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The database in the catalog whose tables to list. For Hive compatibility, this name is entirely lowercase.
String expression
A regular expression pattern. If present, only those tables whose names match the pattern are returned.
String nextToken
A continuation token, included if this is a continuation call.
Integer maxResults
The maximum number of tables to return in a single response.
String transactionId
The transaction ID at which to read the table contents.
Date queryAsOfTime
The time as of when to read the table contents. If not set, the most recent transaction commit time will be used. Cannot be specified along with TransactionId.
String catalogId
The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.
String tableName
The name of the table. For Hive compatibility, this name is entirely lowercase.
String versionId
The ID value of the table version to be retrieved. A VersionID is a string representation of an integer. Each version is incremented by 1.
TableVersion tableVersion
The requested table version.
String catalogId
The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.
String tableName
The name of the table. For Hive compatibility, this name is entirely lowercase.
String nextToken
A continuation token, if this is not the first call.
Integer maxResults
The maximum number of table versions to return in one response.
String resourceArn
The Amazon Resource Name (ARN) of the resource for which to retrieve tags.
String name
The name of the trigger to retrieve.
Trigger trigger
The requested trigger definition.
String nextToken
A continuation token, if this is a continuation call.
String dependentJobName
The name of the job to retrieve triggers for. The trigger that can start this job is returned, and if there is no such trigger, all triggers are returned.
Integer maxResults
The maximum size of the response.
String catalogId
The ID of the Data Catalog where the function to be retrieved is located. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the function is located.
String functionName
The name of the function.
UserDefinedFunction userDefinedFunction
The requested function definition.
String catalogId
The ID of the Data Catalog where the functions to be retrieved are located. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the functions are located. If none is provided, functions from all the databases across the catalog will be returned.
String pattern
An optional function-name pattern string that filters the function definitions returned.
String nextToken
A continuation token, if this is a continuation call.
Integer maxResults
The maximum number of functions to return in one response.
Workflow workflow
The resource metadata for the workflow.
WorkflowRun run
The requested workflow run metadata.
String name
Name of the workflow whose metadata of runs should be returned.
Boolean includeGraph
Specifies whether to include the workflow graph in response or not.
String nextToken
A continuation token, if this is a continuation call.
Integer maxResults
The maximum number of workflow runs to be included in the response.
String policyInJson
Contains the requested policy document, in JSON format.
String policyHash
Contains the hash value associated with this policy.
Date createTime
The date and time at which the policy was created.
Date updateTime
The date and time at which the policy was last updated.
String databaseName
A database name in the Glue Data Catalog.
String tableName
A table name in the Glue Data Catalog.
String catalogId
A unique identifier for the Glue Data Catalog.
String connectionName
The name of the connection to the Glue Data Catalog.
String name
The name of the classifier.
String classification
An identifier of the data format that the classifier matches, such as Twitter, JSON, Omniture logs, and so on.
Date creationTime
The time that this classifier was registered.
Date lastUpdated
The time that this classifier was last updated.
Long version
The version of this classifier.
String grokPattern
The grok pattern applied to a data store by this classifier. For more information, see built-in patterns in Writing Custom Classifiers.
String customPatterns
Optional custom grok patterns defined by this classifier. For more information, see custom patterns in Writing Custom Classifiers.
String catalogId
The ID of the catalog to import. Currently, this should be the Amazon Web Services account ID.
String connectionName
The name of the connection to use to connect to the JDBC target.
String path
The path of the JDBC target.
List<E> exclusions
A list of glob patterns used to exclude from the crawl. For more information, see Catalog Tables with a Crawler.
String name
The name you assign to this job definition.
String description
A description of the job.
String logUri
This field is reserved for future use.
String role
The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
Date createdOn
The time and date that this job definition was created.
Date lastModifiedOn
The last point in time when this job definition was modified.
ExecutionProperty executionProperty
An ExecutionProperty specifying the maximum number of concurrent runs allowed for this job.
JobCommand command
The JobCommand that runs this job.
Map<K,V> defaultArguments
The default arguments for this job, specified as name-value pairs.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
Map<K,V> nonOverridableArguments
Non-overridable arguments for this job, specified as name-value pairs.
ConnectionsList connections
The connections used for this job.
Integer maxRetries
The maximum number of times to retry this job after a JobRun fails.
Integer allocatedCapacity
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to runs of this job. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Integer timeout
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
Double maxCapacity
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job (JobCommand.Name="glueetl") or Apache Spark streaming ETL job (JobCommand.Name="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
For Glue version 2.0 jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.
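The per-job-type allocation rules above can be sketched as a validation helper (illustrative only; the service performs its own validation):

```python
def validate_max_capacity(command_name: str, max_capacity: float) -> bool:
    """Check a MaxCapacity value against the documented per-job-type rules."""
    if command_name == "pythonshell":
        # Python shell jobs allow exactly 0.0625 or 1 DPU.
        return max_capacity in (0.0625, 1.0)
    if command_name in ("glueetl", "gluestreaming"):
        # Spark ETL and streaming jobs: 2-100 DPUs, no fractional values.
        return max_capacity.is_integer() and 2 <= max_capacity <= 100
    return False
```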
String workerType
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
Integer numberOfWorkers
The number of workers of a defined workerType that are allocated when a job runs.
The maximum number of workers you can define is 299 for G.1X, and 149 for G.2X.
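The worker-type resource figures and worker-count caps documented above, collected into a lookup table (a convenience sketch, not part of the API):

```python
# Per-worker resources for each predefined worker type, per this reference.
WORKER_TYPES = {
    "Standard": {"vcpu": 4, "memory_gb": 16, "disk_gb": 50, "executors": 2},
    "G.1X":     {"vcpu": 4, "memory_gb": 16, "disk_gb": 64, "executors": 1},
    "G.2X":     {"vcpu": 8, "memory_gb": 32, "disk_gb": 128, "executors": 1},
}
# Documented maximum NumberOfWorkers per worker type.
MAX_WORKERS = {"G.1X": 299, "G.2X": 149}
```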
String securityConfiguration
The name of the SecurityConfiguration structure to be used with this job.
NotificationProperty notificationProperty
Specifies configuration properties of a job notification.
String glueVersion
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
String jobName
The name of the job in question.
Integer version
The version of the job.
Integer run
The run ID number.
Integer attempt
The attempt ID number.
String previousRunId
The unique run identifier associated with the previous job run.
String runId
The run ID number.
String jobBookmark
The bookmark itself.
String name
The name of the job command. For an Apache Spark ETL job, this must be glueetl. For a Python shell job, it must be pythonshell. For an Apache Spark streaming ETL job, this must be gluestreaming.
String scriptLocation
Specifies the Amazon Simple Storage Service (Amazon S3) path to a script that runs a job.
String pythonVersion
The Python version being used to run a Python shell job. Allowed values are 2 or 3.
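The three JobCommand fields above fit together as in this sketch, assuming the boto3 request shape; the bucket and script path are hypothetical:

```python
# A JobCommand for a Python shell job. Name must be one of
# glueetl | pythonshell | gluestreaming, per the field description above.
command = {
    "Name": "pythonshell",
    "ScriptLocation": "s3://my-bucket/scripts/job.py",  # hypothetical path
    "PythonVersion": "3",
}
# e.g. boto3.client("glue").create_job(Name="my-job", Role=..., Command=command)
```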
String id
The ID of this job run.
Integer attempt
The number of the attempt to run this job.
String previousRunId
The ID of the previous run of this job. For example, the JobRunId specified in the StartJobRun action.
String triggerName
The name of the trigger that started this job run.
String jobName
The name of the job definition being used in this run.
Date startedOn
The date and time at which this job run was started.
Date lastModifiedOn
The last time that this job run was modified.
Date completedOn
The date and time that this job run completed.
String jobRunState
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Map<K,V> arguments
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
String errorMessage
An error message associated with this job run.
List<E> predecessorRuns
A list of predecessors to this job run.
Integer allocatedCapacity
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Integer executionTime
The amount of time (in seconds) that the job run consumed resources.
Integer timeout
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
Double maxCapacity
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job (JobCommand.Name="glueetl"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
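The allocation rules above can be sketched as a small validation helper. This is a hypothetical function illustrating the documented constraints, not part of any Glue SDK:

```python
# Hypothetical helper illustrating the MaxCapacity rules described above;
# not a Glue SDK function.
def validate_max_capacity(command_name, max_capacity):
    """Return True if max_capacity is an allowed DPU value for the job type."""
    if command_name == "pythonshell":
        # Python shell jobs: either 0.0625 or 1 DPU.
        return max_capacity in (0.0625, 1)
    if command_name == "glueetl":
        # Spark ETL jobs: whole DPUs from 2 to 100; no fractional allocation.
        return float(max_capacity).is_integer() and 2 <= max_capacity <= 100
    return False
```

For example, `validate_max_capacity("glueetl", 2.5)` fails because Spark ETL jobs cannot have a fractional DPU allocation.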
String workerType
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory, a 64 GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory, a 128 GB disk, and 1 executor per worker.
Integer numberOfWorkers
The number of workers of a defined workerType that are allocated when a job runs.
The maximum number of workers you can define is 299 for G.1X, and 149 for G.2X.
String securityConfiguration
The name of the SecurityConfiguration structure to be used with this job run.
String logGroupName
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS.
This name can be /aws-glue/jobs/, in which case the default encryption is NONE. If you add a role name and SecurityConfiguration name (in other words, /aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/), then that security configuration is used to encrypt the log group.
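The naming convention above can be sketched as follows; the role and security configuration names are placeholders, not real resources:

```python
# Sketch of the secure log-group naming convention described above.
def secure_log_group_name(role_name=None, security_configuration_name=None):
    """Build the CloudWatch log group name for a Glue job run."""
    if role_name and security_configuration_name:
        # The security configuration named here is used to encrypt the group.
        return f"/aws-glue/jobs-{role_name}-{security_configuration_name}/"
    # Without both names, logs go to the default group (encryption NONE).
    return "/aws-glue/jobs/"
```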
NotificationProperty notificationProperty
Specifies configuration properties of a job run notification.
String glueVersion
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
String description
Description of the job being defined.
String logUri
This field is reserved for future use.
String role
The name or Amazon Resource Name (ARN) of the IAM role associated with this job (required).
ExecutionProperty executionProperty
An ExecutionProperty specifying the maximum number of concurrent runs allowed for this job.
JobCommand command
The JobCommand that runs this job (required).
Map<K,V> defaultArguments
The default arguments for this job.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
Map<K,V> nonOverridableArguments
Non-overridable arguments for this job, specified as name-value pairs.
ConnectionsList connections
The connections used for this job.
Integer maxRetries
The maximum number of times to retry this job if it fails.
Integer allocatedCapacity
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) to allocate to this job. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Integer timeout
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
Double maxCapacity
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job (JobCommand.Name="glueetl") or Apache Spark streaming ETL job (JobCommand.Name="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
For Glue version 2.0 jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.
String workerType
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
Integer numberOfWorkers
The number of workers of a defined workerType that are allocated when a job runs.
The maximum number of workers you can define is 299 for G.1X, and 149 for G.2X.
String securityConfiguration
The name of the SecurityConfiguration structure to be used with this job.
NotificationProperty notificationProperty
Specifies the configuration properties of a job notification.
String glueVersion
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for jobs of type Spark.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
String name
The name of the classifier.
Date creationTime
The time that this classifier was registered.
Date lastUpdated
The time that this classifier was last updated.
Long version
The version of this classifier.
String jsonPath
A JsonPath string defining the JSON data for the classifier to classify. Glue supports a subset of JsonPath, as described in Writing JsonPath Custom Classifiers.
String outputS3Path
The Amazon Simple Storage Service (Amazon S3) path where you will generate the labeling set.
String description
The description of the blueprint.
Date lastModifiedOn
The date and time the blueprint was last modified.
String parameterSpec
A JSON string specifying the parameters for the blueprint.
String blueprintLocation
Specifies a path in Amazon S3 where the blueprint is published by the Glue developer.
String blueprintServiceLocation
Specifies a path in Amazon S3 where the blueprint is copied when you create or update the blueprint.
String status
Status of the last crawl.
String errorMessage
If an error occurred, the error information about the last crawl.
String logGroup
The log group for the last crawl.
String logStream
The log stream for the last crawl.
String messagePrefix
The prefix for a message about this crawl.
Date startTime
The time at which the crawl started.
String crawlerLineageSettings
Specifies whether data lineage is enabled for the crawler. Valid values are:
ENABLE: enables data lineage for the crawler
DISABLE: disables data lineage for the crawler
String nextToken
A continuation token, if this is a continuation request.
Integer maxResults
The maximum size of a list to return.
TransformFilterCriteria filter
A TransformFilterCriteria used to filter the machine learning transforms.
TransformSortCriteria sort
A TransformSortCriteria used to sort the machine learning transforms.
Map<K,V> tags
Specifies to return only these tagged resources.
RegistryId registryId
A wrapper structure that may contain the registry name and Amazon Resource Name (ARN).
Integer maxResults
Maximum number of results required per page. If the value is not supplied, this will be defaulted to 25 per page.
String nextToken
A continuation token, if this is a continuation call.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn, or SchemaName and RegistryName, has to be provided.
SchemaId$SchemaName: The name of the schema. Either SchemaArn, or SchemaName and RegistryName, has to be provided.
Integer maxResults
Maximum number of results required per page. If the value is not supplied, this will be defaulted to 25 per page.
String nextToken
A continuation token, if this is a continuation call.
String nextToken
A continuation token, if this is a continuation request.
String dependentJobName
The name of the job for which to retrieve triggers. The trigger that can start this job is returned. If there is no such trigger, all triggers are returned.
Integer maxResults
The maximum size of a list to return.
Map<K,V> tags
Specifies to return only these tagged resources.
String transformId
The unique transform ID that is generated for the machine learning transform. The ID is guaranteed to be unique and does not change.
String name
A user-defined name for the machine learning transform. Names are not guaranteed unique and can be changed at any time.
String description
A user-defined, long-form description text for the machine learning transform. Descriptions are not guaranteed to be unique and can be changed at any time.
String status
The current status of the machine learning transform.
Date createdOn
A timestamp. The time and date that this machine learning transform was created.
Date lastModifiedOn
A timestamp. The last point in time when this machine learning transform was modified.
List<E> inputRecordTables
A list of Glue table definitions used by the transform.
TransformParameters parameters
A TransformParameters object. You can use parameters to tune (customize) the behavior of the machine learning transform by specifying what data it learns from and your preference on various tradeoffs (such as precision vs. recall, or accuracy vs. cost).
EvaluationMetrics evaluationMetrics
An EvaluationMetrics object. Evaluation metrics provide an estimate of the quality of your machine learning transform.
Integer labelCount
A count identifier for the labeling files generated by Glue for this transform. As you create a better transform, you can iteratively download, label, and upload the labeling file.
List<E> schema
A map of key-value pairs representing the columns and data types that this transform can run against. Has an upper bound of 100 columns.
String role
The name or Amazon Resource Name (ARN) of the IAM role with the required permissions. The required permissions include both Glue service role permissions to Glue resources, and Amazon S3 permissions required by the transform.
This role needs Glue service role permissions to allow access to resources in Glue. See Attach a Policy to IAM Users That Access Glue.
This role needs permission to your Amazon Simple Storage Service (Amazon S3) sources, targets, temporary directory, scripts, and any libraries used by the task run for this transform.
String glueVersion
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
Double maxCapacity
The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
MaxCapacity is a mutually exclusive option with NumberOfWorkers and WorkerType.
If either NumberOfWorkers or WorkerType is set, then MaxCapacity cannot be set.
If MaxCapacity is set, then neither NumberOfWorkers nor WorkerType can be set.
If WorkerType is set, then NumberOfWorkers is required (and vice versa).
MaxCapacity and NumberOfWorkers must both be at least 1.
When the WorkerType field is set to a value other than Standard, the MaxCapacity field is set automatically and becomes read-only.
String workerType
The type of predefined worker that is allocated when a task of this transform runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory, a 64 GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory, a 128 GB disk, and 1 executor per worker.
MaxCapacity is a mutually exclusive option with NumberOfWorkers and WorkerType.
If either NumberOfWorkers or WorkerType is set, then MaxCapacity cannot be set.
If MaxCapacity is set, then neither NumberOfWorkers nor WorkerType can be set.
If WorkerType is set, then NumberOfWorkers is required (and vice versa).
MaxCapacity and NumberOfWorkers must both be at least 1.
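The mutual-exclusivity rules above can be captured in a small pre-flight check. This is an illustrative sketch of the documented constraints, not a Glue API call:

```python
# Hypothetical pre-flight check for the capacity rules described above;
# not part of the Glue SDK.
def check_capacity_settings(max_capacity=None, worker_type=None,
                            number_of_workers=None):
    """Raise ValueError if the combination of capacity fields is invalid."""
    if max_capacity is not None and (worker_type is not None
                                     or number_of_workers is not None):
        # MaxCapacity is mutually exclusive with WorkerType/NumberOfWorkers.
        raise ValueError("MaxCapacity cannot be combined with "
                         "WorkerType or NumberOfWorkers")
    if (worker_type is None) != (number_of_workers is None):
        # If one is set, the other is required (and vice versa).
        raise ValueError("WorkerType and NumberOfWorkers must be set together")
```

For example, passing `max_capacity=10` together with `worker_type="G.1X"` would be rejected.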
Integer numberOfWorkers
The number of workers of a defined workerType that are allocated when a task of the transform runs.
If WorkerType is set, then NumberOfWorkers is required (and vice versa).
Integer timeout
The timeout in minutes of the machine learning transform.
Integer maxRetries
The maximum number of times to retry after an MLTaskRun of the machine learning transform fails.
TransformEncryption transformEncryption
The encryption-at-rest settings of the transform that apply to accessing user data. Machine learning transforms can access user data encrypted in Amazon S3 using KMS.
String mlUserDataEncryptionMode
The encryption mode applied to user data. Valid values are:
DISABLED: encryption is disabled
SSEKMS: use of server-side encryption with Key Management Service (SSE-KMS) for user data stored in Amazon S3.
String kmsKeyId
The ID for the customer-provided KMS key.
String connectionName
The name of the connection to use to connect to the Amazon DocumentDB or MongoDB target.
String path
The path of the Amazon DocumentDB or MongoDB target (database/collection).
Boolean scanAll
Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.
A value of true means to scan all records, while a value of false means to sample the records. If no value is specified, the value defaults to true.
String type
The type of Glue component represented by the node.
String name
The name of the Glue component represented by the node.
String uniqueId
The unique Id assigned to the node within the workflow.
TriggerNodeDetails triggerDetails
Details of the Trigger when the node represents a Trigger.
JobNodeDetails jobDetails
Details of the Job when the node represents a Job.
CrawlerNodeDetails crawlerDetails
Details of the crawler when the node represents a crawler.
Integer notifyDelayAfter
After a job run starts, the number of minutes to wait before sending a job run delay notification.
List<E> values
The values of the partition.
String databaseName
The name of the catalog database in which to create the partition.
String tableName
The name of the database table in which to create the partition.
Date creationTime
The time at which the partition was created.
Date lastAccessTime
The last time at which the partition was accessed.
StorageDescriptor storageDescriptor
Provides information about the physical location where the partition is stored.
Map<K,V> parameters
These key-value pairs define partition parameters.
Date lastAnalyzedTime
The last time at which column statistics were computed for this partition.
String catalogId
The ID of the Data Catalog in which the partition resides.
List<E> partitionValues
The values that define the partition.
ErrorDetail errorDetail
The details about the partition error.
String indexName
The name of the partition index.
List<E> keys
A list of one or more keys, as KeySchemaElement structures, for the partition index.
String indexStatus
The status of the partition index.
The possible statuses are:
CREATING: The index is being created. When an index is in a CREATING state, the index or its table cannot be deleted.
ACTIVE: The index creation succeeds.
FAILED: The index creation fails.
DELETING: The index is deleted from the list of indexes.
List<E> backfillErrors
A list of errors that can occur when registering partition indexes for an existing table.
List<E> values
The values of the partition. Although this parameter is not required by the SDK, you must specify this parameter for a valid input.
The values for the keys for the new partition must be passed as an array of String objects that must be ordered in the same order as the partition keys appearing in the Amazon S3 prefix. Otherwise Glue will add the values to the wrong keys.
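The ordering requirement above can be illustrated as follows; the table and key names are hypothetical:

```python
# Partition values must be ordered to match the table's partition keys,
# otherwise Glue will associate the values with the wrong keys.
partition_keys = ["year", "month", "day"]            # order defined on the table
new_partition = {"day": "09", "year": "2023", "month": "07"}

# Build the Values array in partition-key order, not dict order.
values = [new_partition[key] for key in partition_keys]
```

Here `values` comes out as `["2023", "07", "09"]`, matching the key order rather than the order the fields happened to be supplied in.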
Date lastAccessTime
The last time at which the partition was accessed.
StorageDescriptor storageDescriptor
Provides information about the physical location where the partition is stored.
Map<K,V> parameters
These key-value pairs define partition parameters.
Date lastAnalyzedTime
The last time at which column statistics were computed for this partition.
String subnetId
The subnet ID used by the connection.
List<E> securityGroupIdList
The security group ID list used by the connection.
String availabilityZone
The connection's Availability Zone. This field is redundant because the specified subnet implies the Availability Zone to be used. Currently the field must be populated, but it will be deprecated in the future.
DataLakePrincipal principal
The principal who is granted permissions.
List<E> permissions
The permissions that are granted to the principal.
String catalogId
The ID of the Data Catalog to set the security configuration for. If none is provided, the Amazon Web Services account ID is used by default.
DataCatalogEncryptionSettings dataCatalogEncryptionSettings
The security configuration to set.
String policyInJson
Contains the policy document to set, in JSON format.
String resourceArn
Do not use. For internal use only.
String policyHashCondition
The hash value returned when the previous policy was set using PutResourcePolicy. Its purpose is to prevent concurrent modifications of a policy. Do not use this parameter if no previous policy has been set.
String policyExistsCondition
A value of MUST_EXIST is used to update a policy. A value of NOT_EXIST is used to create a new policy. If a value of NONE or a null value is used, the call does not depend on the existence of a policy.
String enableHybrid
If 'TRUE', indicates that you are using both methods to grant cross-account access to Data Catalog resources:
By directly updating the resource policy with PutResourcePolicy
By using the Grant permissions command on the Amazon Web Services Management Console.
Must be set to 'TRUE' if you have already used the Management Console to grant cross-account access, otherwise the call fails. Default is 'FALSE'.
String policyHash
A hash of the policy that has just been set. This must be included in a subsequent call that overwrites or updates this policy.
SchemaId schemaId
The unique ID for the schema.
SchemaVersionNumber schemaVersionNumber
The version number of the schema.
String schemaVersionId
The unique version ID of the schema version.
MetadataKeyValuePair metadataKeyValue
The metadata key's corresponding value.
String schemaArn
The Amazon Resource Name (ARN) for the schema.
String schemaName
The name for the schema.
String registryName
The name for the registry.
Boolean latestVersion
The latest version of the schema.
Long versionNumber
The version number of the schema.
String schemaVersionId
The unique version ID of the schema version.
String metadataKey
The metadata key.
String metadataValue
The value of the metadata key.
SchemaId schemaId
A wrapper structure that may contain the schema name and Amazon Resource Name (ARN).
SchemaVersionNumber schemaVersionNumber
The version number of the schema.
String schemaVersionId
The unique version ID of the schema version.
List<E> metadataList
Search key-value pairs for metadata. If they are not provided, all the metadata information will be fetched.
Integer maxResults
Maximum number of results required per page. If the value is not supplied, this will be defaulted to 25 per page.
String nextToken
A continuation token, if this is a continuation call.
Map<K,V> metadataInfoMap
A map of a metadata key and associated values.
String schemaVersionId
The unique version ID of the schema version.
String nextToken
A continuation token for paginating the returned list of tokens, returned if the current segment of the list is not the last.
String recrawlBehavior
Specifies whether to crawl the entire dataset again or to crawl only folders that were added since the last crawler run.
A value of CRAWL_EVERYTHING specifies crawling the entire dataset again.
A value of CRAWL_NEW_FOLDERS_ONLY specifies crawling only folders that were added since the last crawler run.
A value of CRAWL_EVENT_MODE specifies crawling only the changes identified by Amazon S3 events.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn, or SchemaName and RegistryName, has to be provided.
SchemaId$SchemaName: The name of the schema. Either SchemaArn, or SchemaName and RegistryName, has to be provided.
String schemaDefinition
The schema definition using the DataFormat setting for the SchemaName.
String registryName
The name of the registry.
String registryArn
The Amazon Resource Name (ARN) of the registry.
String description
A description of the registry.
String status
The status of the registry.
String createdTime
The date the registry was created.
String updatedTime
The date the registry was updated.
SchemaId schemaId
A wrapper structure that may contain the schema name and Amazon Resource Name (ARN).
SchemaVersionNumber schemaVersionNumber
The version number of the schema.
String schemaVersionId
The unique version ID of the schema version.
MetadataKeyValuePair metadataKeyValue
The value of the metadata key.
String schemaArn
The Amazon Resource Name (ARN) of the schema.
String schemaName
The name of the schema.
String registryName
The name of the registry.
Boolean latestVersion
The latest version of the schema.
Long versionNumber
The version number of the schema.
String schemaVersionId
The version ID for the schema version.
String metadataKey
The metadata key.
String metadataValue
The value of the metadata key.
JobBookmarkEntry jobBookmarkEntry
The reset bookmark entry.
String path
The path to the Amazon S3 target.
List<E> exclusions
A list of glob patterns used to exclude objects from the crawl. For more information, see Catalog Tables with a Crawler.
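To make the exclusion idea concrete, the sketch below approximates glob-based filtering with Python's fnmatch. Note that fnmatch's semantics (for example, how * and ** treat path separators) are not identical to Glue's exclude-pattern syntax, so this is illustrative only; the patterns and keys are hypothetical:

```python
from fnmatch import fnmatch

# Hypothetical exclusion patterns in a glob style similar to the one
# crawler targets accept.
exclusions = ["*.tmp", "logs/**"]

def is_excluded(key):
    """Return True if the S3 key matches any exclusion pattern."""
    return any(fnmatch(key, pattern) for pattern in exclusions)
```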
String connectionName
The name of a connection which allows a job or crawler to access data in Amazon S3 within an Amazon Virtual Private Cloud environment (Amazon VPC).
Integer sampleSize
Sets the number of files in each leaf folder to be crawled when crawling sample files in a dataset. If not set, all the files are crawled. A valid value is an integer between 1 and 249.
String eventQueueArn
A valid Amazon SQS ARN. For example, arn:aws:sqs:region:account:sqs.
String dlqEventQueueArn
A valid Amazon dead-letter SQS ARN. For example, arn:aws:sqs:region:account:deadLetterQueue.
String scheduleExpression
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
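A tiny helper can build schedule strings in the cron(...) form shown above. This is an illustrative sketch, not a Glue SDK function:

```python
# Build a daily schedule string in the cron(minutes hours day-of-month
# month day-of-week year) form used by these schedules.
def daily_schedule(hour, minute):
    """Cron expression for a run every day at the given UTC time."""
    return f"cron({minute} {hour} * * ? *)"
```

For example, `daily_schedule(12, 15)` yields the `cron(15 12 * * ? *)` expression from the description above.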
String state
The state of the schedule.
String schemaArn
The Amazon Resource Name (ARN) of the schema. One of SchemaArn or SchemaName has to be provided.
String schemaName
The name of the schema. One of SchemaArn or SchemaName has to be provided.
String registryName
The name of the schema registry that contains the schema.
String registryName
The name of the registry where the schema resides.
String schemaName
The name of the schema.
String schemaArn
The Amazon Resource Name (ARN) for the schema.
String description
A description for the schema.
String schemaStatus
The status of the schema.
String createdTime
The date and time that a schema was created.
String updatedTime
The date and time that a schema was updated.
SchemaId schemaId
A structure that contains schema identity fields. Either this or the SchemaVersionId has to be provided.
String schemaVersionId
The unique ID assigned to a version of the schema. Either this or the SchemaId has to be provided.
Long schemaVersionNumber
The version number of the schema.
Long versionNumber
The version number of the schema.
ErrorDetails errorDetails
The details of the error for the schema version.
String schemaArn
The Amazon Resource Name (ARN) of the schema.
String schemaVersionId
The unique identifier of the schema version.
Long versionNumber
The version number of the schema.
String status
The status of the schema version.
String createdTime
The date and time the schema version was created.
String catalogId
A unique identifier, consisting of account_id.
String nextToken
A continuation token, included if this is a continuation call.
List<E> filters
A list of key-value pairs, and a comparator used to filter the search results. Returns all entities matching the predicate.
The Comparator member of the PropertyPredicate struct is used only for time fields, and can be omitted for other field types. Also, when comparing string values, such as when Key=Name, a fuzzy match algorithm is used. The Key field (for example, the value of the Name field) is split on certain punctuation characters (for example, -, :, #) into tokens. Then each token is exact-match compared with the Value member of PropertyPredicate. For example, if Key=Name and Value=link, tables named customer-link and xx-link-yy are returned, but xxlinkyy is not returned.
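The fuzzy match described above can be approximated as tokenization plus exact comparison. The exact set of punctuation characters Glue splits on is not specified here, so the character class below is an assumption for illustration:

```python
import re

# Approximation of the fuzzy match: split the field value on punctuation
# into tokens, then exact-match each token against the search value.
# The punctuation set [-:#] is illustrative, not Glue's documented set.
def fuzzy_match(field_value, search_value):
    tokens = re.split(r"[-:#\s]+", field_value)
    return search_value in tokens
```

This reproduces the example in the text: "customer-link" and "xx-link-yy" match "link", while "xxlinkyy" does not, because it never splits into a standalone "link" token.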
String searchText
A string used for a text search.
Specifying a value in quotes filters based on an exact match to the value.
List<E> sortCriteria
A list of criteria for sorting the results by a field name, in an ascending or descending order.
Integer maxResults
The maximum number of tables to return in a single response.
String resourceShareType
Allows you to specify that you want to search the tables shared with your account. The allowable values are FOREIGN or ALL.
If set to FOREIGN, will search the tables shared with your account.
If set to ALL, will search the tables shared with your account, as well as the tables in your local account.
String name
The name of the security configuration.
Date createdTimeStamp
The time at which this security configuration was created.
EncryptionConfiguration encryptionConfiguration
The encryption configuration associated with this security configuration.
List<E> skewedColumnNames
A list of names of columns that contain skewed values.
List<E> skewedColumnValues
A list of values that appear so frequently as to be considered skewed.
Map<K,V> skewedColumnValueLocationMaps
A mapping of skewed values to the columns that contain them.
String runId
The run ID for this blueprint run.
String name
Name of the crawler to start.
String crawlerName
Name of the crawler to schedule.
String taskRunId
The unique identifier for the task run.
String taskRunId
The unique identifier for the task run.
String jobName
The name of the job definition to use.
String jobRunId
The ID of a previous JobRun to retry.
Map<K,V> arguments
The job arguments specifically for this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
Integer allocatedCapacity
This field is deprecated. Use MaxCapacity
instead.
The number of Glue data processing units (DPUs) to allocate to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Integer timeout
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours). This overrides the timeout value set in the parent job.
Double maxCapacity
The number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Do not set Max Capacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job or an Apache Spark ETL job:
When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job (JobCommand.Name="glueetl"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
String securityConfiguration
The name of the SecurityConfiguration structure to be used with this job run.
NotificationProperty notificationProperty
Specifies configuration properties of a job run notification.
String workerType
The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory, a 64 GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory, a 128 GB disk, and 1 executor per worker.
Integer numberOfWorkers
The number of workers of a defined workerType that are allocated when a job runs.
The maximum number of workers you can define is 299 for G.1X, and 149 for G.2X.
String jobRunId
The ID assigned to this job run.
String transformId
The unique identifier of the machine learning transform.
String taskRunId
The unique identifier associated with this run.
String taskRunId
The unique run identifier that is associated with this task run.
String name
The name of the trigger to start.
String name
The name of the trigger that was started.
String name
The name of the workflow to start.
String runId
An Id for the new run.
String name
Name of the crawler to stop.
String crawlerName
Name of the crawler whose schedule state to set.
String name
The name of the trigger to stop.
String name
The name of the trigger that was stopped.
List<E> columns
A list of the Columns in the table.
String location
The physical location of the table. By default, this takes the form of the warehouse location, followed by the database location in the warehouse, followed by the table name.
String inputFormat
The input format: SequenceFileInputFormat (binary), or TextInputFormat, or a custom format.
String outputFormat
The output format: SequenceFileOutputFormat (binary), or IgnoreKeyTextOutputFormat, or a custom format.
Boolean compressed
True if the data in the table is compressed, or False if not.
Integer numberOfBuckets
Must be specified if the table contains any dimension columns.
SerDeInfo serdeInfo
The serialization/deserialization (SerDe) information.
List<E> bucketColumns
A list of reducer grouping columns, clustering columns, and bucketing columns in the table.
List<E> sortColumns
A list specifying the sort order of each bucket in the table.
Map<K,V> parameters
The user-supplied properties in key-value form.
SkewedInfo skewedInfo
The information about values that appear frequently in a column (skewed values).
Boolean storedAsSubDirectories
True if the table data is stored in subdirectories, or False if not.
SchemaReference schemaReference
An object that references a schema stored in the Glue Schema Registry.
When creating a table, you can pass an empty list of columns for the schema, and instead use a schema reference.
Long maximumLength
The size of the longest string in the column.
Double averageLength
The average string length in the column.
Long numberOfNulls
The number of null values in the column.
Long numberOfDistinctValues
The number of distinct values in a column.
String name
The table name. For Hive compatibility, this must be entirely lowercase.
String databaseName
The name of the database where the table metadata resides. For Hive compatibility, this must be all lowercase.
String description
A description of the table.
String owner
The owner of the table.
Date createTime
The time when the table definition was created in the Data Catalog.
Date updateTime
The last time that the table was updated.
Date lastAccessTime
The last time that the table was accessed. This is usually taken from HDFS, and might not be reliable.
Date lastAnalyzedTime
The last time that column statistics were computed for this table.
Integer retention
The retention time for this table.
StorageDescriptor storageDescriptor
A storage descriptor containing information about the physical storage of this table.
List<E> partitionKeys
A list of columns by which the table is partitioned. Only primitive types are supported as partition keys.
When you create a table used by Amazon Athena and you do not specify any partitionKeys, you must at least set the value of partitionKeys to an empty list. For example: "PartitionKeys": []
String viewOriginalText
If the table is a view, the original text of the view; otherwise null.
String viewExpandedText
If the table is a view, the expanded text of the view; otherwise null.
String tableType
The type of this table (EXTERNAL_TABLE, VIRTUAL_VIEW, etc.).
Map<K,V> parameters
These key-value pairs define properties associated with the table.
String createdBy
The person or entity who created the table.
Boolean isRegisteredWithLakeFormation
Indicates whether the table has been registered with Lake Formation.
TableIdentifier targetTable
A TableIdentifier structure that describes a target table for resource linking.
String catalogId
The ID of the Data Catalog in which the table resides.
String tableName
The name of the table. For Hive compatibility, this must be entirely lowercase.
ErrorDetail errorDetail
The details about the error.
String name
The table name. For Hive compatibility, this is folded to lowercase when it is stored.
String description
A description of the table.
String owner
The table owner.
Date lastAccessTime
The last time that the table was accessed.
Date lastAnalyzedTime
The last time that column statistics were computed for this table.
Integer retention
The retention time for this table.
StorageDescriptor storageDescriptor
A storage descriptor containing information about the physical storage of this table.
List<E> partitionKeys
A list of columns by which the table is partitioned. Only primitive types are supported as partition keys.
When you create a table used by Amazon Athena and you do not specify any partitionKeys, you must at least set the value of partitionKeys to an empty list. For example: "PartitionKeys": []
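As a concrete illustration of the Athena requirement above, here is a minimal TableInput fragment for an unpartitioned table; the table name and type are placeholders:

```python
# TableInput fragment for an unpartitioned table used by Athena:
# PartitionKeys must still be present, as an empty list.
table_input = {
    "Name": "events",                # placeholder; lowercase for Hive compatibility
    "TableType": "EXTERNAL_TABLE",
    "PartitionKeys": [],             # required even when the table has no partitions
}
```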
String viewOriginalText
If the table is a view, the original text of the view; otherwise null.
String viewExpandedText
If the table is a view, the expanded text of the view; otherwise null.
String tableType
The type of this table (EXTERNAL_TABLE, VIRTUAL_VIEW, etc.).
Map<K,V> parameters
These key-value pairs define properties associated with the table.
TableIdentifier targetTable
A TableIdentifier structure that describes a target table for resource linking.
String tableName
The name of the table in question.
String versionId
The ID value of the version in question. A VersionID is a string representation of an integer. Each version is incremented by 1.
ErrorDetail errorDetail
The details about the error.
String resourceArn
The ARN of the Glue resource to which to add the tags. For more information about Glue resource ARNs, see the Glue ARN string pattern.
Map<K,V> tagsToAdd
Tags to add to this resource.
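A sketch of TagResource parameters as a boto3-style dict; the ARN below is a hypothetical job ARN following the Glue ARN pattern mentioned above, and the tag values are placeholders:

```python
# TagResource parameters. The ARN and tag key-value pairs are
# placeholders for illustration only.
tag_resource_params = {
    "ResourceArn": "arn:aws:glue:us-east-1:123456789012:job/my-etl-job",
    "TagsToAdd": {"team": "data-eng", "env": "prod"},
}
```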
String transformId
The unique identifier for the transform.
String taskRunId
The unique identifier for this task run.
String status
The current status of the requested task run.
String logGroupName
The name of the log group for secure logging associated with this task run.
TaskRunProperties properties
Specifies configuration properties associated with this task run.
String errorString
The error strings associated with this task run.
Date startedOn
The date and time that this task run started.
Date lastModifiedOn
The last point in time that the requested task run was updated.
Date completedOn
The last point in time that the requested task run was completed.
Integer executionTime
The amount of time (in seconds) that the task run consumed resources.
String taskType
The type of task run.
ImportLabelsTaskRunProperties importLabelsTaskRunProperties
The configuration properties for an importing labels task run.
ExportLabelsTaskRunProperties exportLabelsTaskRunProperties
The configuration properties for an exporting labels task run.
LabelingSetGenerationTaskRunProperties labelingSetGenerationTaskRunProperties
The configuration properties for a labeling set generation task run.
FindMatchesTaskRunProperties findMatchesTaskRunProperties
The configuration properties for a find matches task run.
MLUserDataEncryption mlUserDataEncryption
An MLUserDataEncryption object containing the encryption mode and customer-provided KMS key ID.
String taskRunSecurityConfigurationName
The name of the security configuration.
String name
A unique transform name that is used to filter the machine learning transforms.
String transformType
The type of machine learning transform that is used to filter the machine learning transforms.
String status
Filters the list of machine learning transforms by the last known status of the transforms (to indicate whether a transform can be used or not). One of "NOT_READY", "READY", or "DELETING".
String glueVersion
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
Date createdBefore
The time and date before which the transforms were created.
Date createdAfter
The time and date after which the transforms were created.
Date lastModifiedBefore
Filter on transforms last modified before this date.
Date lastModifiedAfter
Filter on transforms last modified after this date.
List<E> schema
Filters on datasets with a specific schema. The Map<Column, Type> object is an array of key-value pairs representing the schema this transform accepts, where Column is the name of a column, and Type is the type of the data, such as an integer or string. Has an upper bound of 100 columns.
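The schema filter described above can be sketched as a list of column name/type pairs. This assumes the pairs use Name and DataType keys; the status value and column names are placeholders:

```python
# A transform filter with a Schema criterion: an array of column
# name/type pairs, capped at 100 entries per the description above.
transform_filter = {
    "Status": "READY",
    "Schema": [
        {"Name": "customer_id", "DataType": "bigint"},
        {"Name": "email", "DataType": "string"},
    ],
}
assert len(transform_filter["Schema"]) <= 100
```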
String transformType
The type of machine learning transform.
For information about the types of machine learning transforms, see Creating Machine Learning Transforms.
FindMatchesParameters findMatchesParameters
The parameters for the find matches algorithm.
String name
The name of the trigger.
String workflowName
The name of the workflow associated with the trigger.
String id
Reserved for future use.
String type
The type of trigger that this is.
String state
The current state of the trigger.
String description
A description of this trigger.
String schedule
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
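Putting the trigger fields above together, a schedule-based trigger definition might look like the following boto3-style sketch; the trigger and job names are placeholders:

```python
# CreateTrigger parameters for a scheduled trigger that starts a job
# every day at 12:15 UTC. All names are placeholders.
create_trigger_params = {
    "Name": "daily-etl-trigger",
    "Type": "SCHEDULED",
    "Schedule": "cron(15 12 * * ? *)",
    "Actions": [{"JobName": "my-etl-job"}],
    "StartOnCreation": True,
}
```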
List<E> actions
The actions initiated by this trigger.
Predicate predicate
The predicate of this trigger, which defines when it will fire.
EventBatchingCondition eventBatchingCondition
A batch condition that must be met (a specified number of events received, or the batch time window expired) before the EventBridge event trigger fires.
Trigger trigger
The information of the trigger represented by the trigger node.
String name
Reserved for future use.
String description
A description of this trigger.
String schedule
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
List<E> actions
The actions initiated by this trigger.
Predicate predicate
The predicate of this trigger, which defines when it will fire.
EventBatchingCondition eventBatchingCondition
A batch condition that must be met (a specified number of events received, or the batch time window expired) before the EventBridge event trigger fires.
String name
Returns the name of the blueprint that was updated.
UpdateGrokClassifierRequest grokClassifier
A GrokClassifier object with updated fields.
UpdateXMLClassifierRequest xMLClassifier
An XMLClassifier object with updated fields.
UpdateJsonClassifierRequest jsonClassifier
A JsonClassifier object with updated fields.
UpdateCsvClassifierRequest csvClassifier
A CsvClassifier object with updated fields.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
List<E> partitionValues
A list of partition values identifying the partition.
List<E> columnStatisticsList
A list of the column statistics.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
List<E> columnStatisticsList
A list of the column statistics.
String catalogId
The ID of the Data Catalog in which the connection resides. If none is provided, the Amazon Web Services account ID is used by default.
String name
The name of the connection definition to update.
ConnectionInput connectionInput
A ConnectionInput object that redefines the connection in question.
String name
Name of the new crawler.
String role
The IAM role or Amazon Resource Name (ARN) of an IAM role that is used by the new crawler to access customer resources.
String databaseName
The Glue database where results are stored, such as: arn:aws:daylight:us-east-1::database/sometable/*.
String description
A description of the new crawler.
CrawlerTargets targets
A list of targets to crawl.
String schedule
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
List<E> classifiers
A list of custom classifiers that the user has registered. By default, all built-in classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.
String tablePrefix
The table prefix used for catalog tables that are created.
SchemaChangePolicy schemaChangePolicy
The policy for the crawler's update and deletion behavior.
RecrawlPolicy recrawlPolicy
A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.
LineageConfiguration lineageConfiguration
Specifies data lineage configuration settings for the crawler.
String configuration
Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Configuring a Crawler.
String crawlerSecurityConfiguration
The name of the SecurityConfiguration structure to be used by this crawler.
String crawlerName
The name of the crawler whose schedule to update.
String schedule
The updated cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
String name
The name of the classifier.
String delimiter
A custom symbol to denote what separates each column entry in the row.
String quoteSymbol
A custom symbol to denote what combines content into a single column value. It must be different from the column delimiter.
String containsHeader
Indicates whether the CSV file contains a header.
List<E> header
A list of strings representing column names.
Boolean disableValueTrimming
Specifies not to trim values before identifying the type of column values. The default value is true.
Boolean allowSingleColumn
Enables the processing of files that contain only one column.
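The CSV classifier fields above can be combined into a request sketch like the following; the classifier name, delimiter, and header values are placeholders:

```python
# CsvClassifier portion of an UpdateClassifier request, combining the
# fields described above. All values are placeholders.
csv_classifier = {
    "Name": "pipe-delimited",
    "Delimiter": "|",
    "QuoteSymbol": '"',               # must differ from the delimiter
    "ContainsHeader": "PRESENT",      # or "ABSENT" / "UNKNOWN"
    "Header": ["id", "name", "email"],
    "DisableValueTrimming": False,
    "AllowSingleColumn": False,
}
assert csv_classifier["QuoteSymbol"] != csv_classifier["Delimiter"]
```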
String catalogId
The ID of the Data Catalog in which the metadata database resides. If none is provided, the Amazon Web Services account ID is used by default.
String name
The name of the database to update in the catalog. For Hive compatibility, this is folded to lowercase.
DatabaseInput databaseInput
A DatabaseInput object specifying the new definition of the metadata database in the catalog.
String endpointName
The name of the DevEndpoint to be updated.
String publicKey
The public key for the DevEndpoint to use.
List<E> addPublicKeys
The list of public keys for the DevEndpoint to use.
List<E> deletePublicKeys
The list of public keys to be deleted from the DevEndpoint.
DevEndpointCustomLibraries customLibraries
Custom Python or Java libraries to be loaded in the DevEndpoint.
Boolean updateEtlLibraries
True if the list of custom libraries to be loaded in the development endpoint needs to be updated, or False otherwise.
List<E> deleteArguments
The list of argument keys to be deleted from the map of arguments used to configure the DevEndpoint.
Map<K,V> addArguments
The map of arguments to add to the map of arguments used to configure the DevEndpoint.
Valid arguments are:
"--enable-glue-datacatalog": ""
You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.
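A sketch of the two argument parameters in an UpdateDevEndpoint request. Only the documented "--enable-glue-datacatalog" argument is shown as an addition; the key being deleted is a hypothetical placeholder:

```python
# Argument maps for an UpdateDevEndpoint request. The added argument is
# the documented one above; the deleted key is a placeholder.
add_arguments = {"--enable-glue-datacatalog": ""}
delete_arguments = ["--some-obsolete-key"]   # hypothetical key to remove
```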
String name
The name of the GrokClassifier.
String classification
An identifier of the data format that the classifier matches, such as Twitter, JSON, Omniture logs, Amazon CloudWatch Logs, and so on.
String grokPattern
The grok pattern used by this classifier.
String customPatterns
Optional custom grok patterns used by this classifier.
String jobName
Returns the name of the updated job definition.
String name
The name of the classifier.
String jsonPath
A JsonPath string defining the JSON data for the classifier to classify. Glue supports a subset of JsonPath, as described in Writing JsonPath Custom Classifiers.
String transformId
A unique identifier that was generated when the transform was created.
String name
The unique name that you gave the transform when you created it.
String description
A description of the transform. The default is an empty string.
TransformParameters parameters
The configuration parameters that are specific to the transform type (algorithm) used. Conditionally dependent on the transform type.
String role
The name or Amazon Resource Name (ARN) of the IAM role with the required permissions.
String glueVersion
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
Double maxCapacity
The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
When the WorkerType field is set to a value other than Standard, the MaxCapacity field is set automatically and becomes read-only.
String workerType
The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory, a 50 GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory, a 64 GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory, a 128 GB disk, and 1 executor per worker.
Integer numberOfWorkers
The number of workers of a defined workerType that are allocated when this task runs.
Integer timeout
The timeout for a task run for this transform, in minutes. This is the maximum time that a task run for this transform can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
Integer maxRetries
The maximum number of times to retry a task for this transform after a task run fails.
String transformId
The unique identifier for the transform that was updated.
String catalogId
The ID of the Data Catalog where the partition to be updated resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which the table in question resides.
String tableName
The name of the table in which the partition to be updated is located.
List<E> partitionValueList
List of partition key values that define the partition to update.
PartitionInput partitionInput
The new partition object to update the partition to.
The Values property can't be changed. If you want to change the partition key values for a partition, delete and recreate the partition.
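Because the Values property is immutable, changing partition key values means deleting and recreating the partition. A sketch of that flow with boto3-style calls; the client object, database, and table names are placeholders:

```python
# Delete-and-recreate flow for changing partition key values, as the
# text above describes. `glue` is assumed to be a Glue client.
def move_partition(glue, database, table, old_values, new_partition_input):
    # Remove the partition identified by its current key values...
    glue.delete_partition(
        DatabaseName=database,
        TableName=table,
        PartitionValues=old_values,
    )
    # ...then recreate it with the new PartitionInput (new Values).
    glue.create_partition(
        DatabaseName=database,
        TableName=table,
        PartitionInput=new_partition_input,
    )
```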
RegistryId registryId
This is a wrapper structure that may contain the registry name and Amazon Resource Name (ARN).
String description
A description of the registry. If description is not provided, this field will not be updated.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. One of SchemaArn or SchemaName has to be provided.
SchemaId$SchemaName: The name of the schema. One of SchemaArn or SchemaName has to be provided.
SchemaVersionNumber schemaVersionNumber
The version number required for checkpointing. One of VersionNumber or Compatibility has to be provided.
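The two wrapper structures can be sketched as follows; the schema and registry names are placeholders, and pairing RegistryName with SchemaName is an assumption about the SchemaId shape:

```python
# SchemaId identifying a schema by name rather than ARN, and a
# SchemaVersionNumber for checkpointing. Values are placeholders.
schema_id = {"SchemaName": "clickstream", "RegistryName": "default-registry"}
schema_version_number = {"VersionNumber": 3}
```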
String compatibility
The new compatibility setting for the schema.
String description
The new description for the schema.
String catalogId
The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which the table resides. For Hive compatibility, this name is entirely lowercase.
TableInput tableInput
An updated TableInput object to define the metadata table in the catalog.
Boolean skipArchive
By default, UpdateTable always creates an archived version of the table before updating it. However, if skipArchive is set to true, UpdateTable does not create the archived version.
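The skipArchive behavior above can be sketched as an UpdateTable request; the database and table names are placeholders:

```python
# UpdateTable parameters with SkipArchive set, suppressing the default
# archived table version. Names are placeholders.
update_table_params = {
    "DatabaseName": "analytics",
    "TableInput": {"Name": "events"},
    "SkipArchive": True,
}
```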
String transactionId
The transaction ID at which to update the table contents.
String name
The name of the trigger to update.
TriggerUpdate triggerUpdate
The new values with which to update the trigger.
Trigger trigger
The resulting trigger definition.
String catalogId
The ID of the Data Catalog where the function to be updated is located. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the function to be updated is located.
String functionName
The name of the function.
UserDefinedFunctionInput functionInput
A FunctionInput object that redefines the function in the Data Catalog.
String name
Name of the workflow to be updated.
String description
The description of the workflow.
Map<K,V> defaultRunProperties
A collection of properties to be used as part of each execution of the workflow.
Integer maxConcurrentRuns
You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.
String name
The name of the workflow that was specified in the input.
String name
The name of the classifier.
String classification
An identifier of the data format that the classifier matches.
String rowTag
The XML tag designating the element that contains each record in an XML document being parsed. This cannot identify a self-closing element (closed by />). An empty row element that contains only attributes can be parsed as long as it ends with a closing tag (for example, <row item_a="A" item_b="B"></row> is okay, but <row item_a="A" item_b="B" /> is not).
String functionName
The name of the function.
String databaseName
The name of the catalog database that contains the function.
String className
The Java class that contains the function code.
String ownerName
The owner of the function.
String ownerType
The owner type.
Date createTime
The time at which the function was created.
List<E> resourceUris
The resource URIs for the function.
String catalogId
The ID of the Data Catalog in which the function resides.
String functionName
The name of the function.
String className
The Java class that contains the function code.
String ownerName
The owner of the function.
String ownerType
The owner type.
List<E> resourceUris
The resource URIs for the function.
String name
The name of the workflow.
String description
A description of the workflow.
Map<K,V> defaultRunProperties
A collection of properties to be used as part of each execution of the workflow. The run properties are made available to each job in the workflow. A job can modify the properties for the next jobs in the flow.
Date createdOn
The date and time when the workflow was created.
Date lastModifiedOn
The date and time when the workflow was last modified.
WorkflowRun lastRun
The information about the last execution of the workflow.
WorkflowGraph graph
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Integer maxConcurrentRuns
You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.
BlueprintDetails blueprintDetails
This structure indicates the details of the blueprint that this particular workflow is created from.
String name
Name of the workflow that was run.
String workflowRunId
The ID of this workflow run.
String previousRunId
The ID of the previous workflow run.
Map<K,V> workflowRunProperties
The workflow run properties which were set during the run.
Date startedOn
The date and time when the workflow run was started.
Date completedOn
The date and time when the workflow run completed.
String status
The status of the workflow run.
String errorMessage
This error message describes any error that may have occurred in starting the workflow run. Currently the only error message is "Concurrent runs exceeded for workflow: foo."
WorkflowRunStatistics statistics
The statistics of the run.
WorkflowGraph graph
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
StartingEventBatchCondition startingEventBatchCondition
The batch condition that started the workflow run.
Integer totalActions
Total number of Actions in the workflow run.
Integer timeoutActions
Total number of Actions that timed out.
Integer failedActions
Total number of Actions that have failed.
Integer stoppedActions
Total number of Actions that have stopped.
Integer succeededActions
Total number of Actions that have succeeded.
Integer runningActions
Total number of Actions in the running state.
String name
The name of the classifier.
String classification
An identifier of the data format that the classifier matches.
Date creationTime
The time that this classifier was registered.
Date lastUpdated
The time that this classifier was last updated.
Long version
The version of this classifier.
String rowTag
The XML tag designating the element that contains each record in an XML document being parsed. This can't identify a self-closing element (closed by />). An empty row element that contains only attributes can be parsed as long as it ends with a closing tag (for example, <row item_a="A" item_b="B"></row> is okay, but <row item_a="A" item_b="B" /> is not).