String jobName
The name of a job to be run.
Map<K,V> arguments
The job arguments used when this trigger fires. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the key-value pairs that Glue consumes to set up your job, see the Special Parameters Used by Glue topic in the developer guide.
Integer timeout
The JobRun
timeout in minutes. This is the maximum time that a job run can consume resources before
it is terminated and enters TIMEOUT
status. The default is 2,880 minutes (48 hours). This overrides
the timeout value set in the parent job.
String securityConfiguration
The name of the SecurityConfiguration
structure to be used with this action.
NotificationProperty notificationProperty
Specifies configuration properties of a job run notification.
String crawlerName
The name of the crawler to be used with this action.
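To illustrate how these action fields fit together, the following is a minimal sketch (assuming the AWS SDK for Java v2 Glue client; the job name, argument key, and schedule are placeholders) that creates a scheduled trigger whose action overrides one job argument and the job timeout:

import java.util.Map;
import software.amazon.awssdk.services.glue.GlueClient;
import software.amazon.awssdk.services.glue.model.Action;
import software.amazon.awssdk.services.glue.model.CreateTriggerRequest;
import software.amazon.awssdk.services.glue.model.TriggerType;

public class CreateScheduledTriggerExample {
    public static void main(String[] args) {
        try (GlueClient glue = GlueClient.create()) {
            // The action that fires when the trigger runs: overrides one default
            // job argument and the parent job's timeout for this run only.
            Action action = Action.builder()
                    .jobName("my-etl-job")                                        // hypothetical job name
                    .arguments(Map.of("--input_path", "s3://my-bucket/input/"))   // hypothetical argument
                    .timeout(60)                                                  // minutes; overrides the parent job's timeout
                    .build();

            glue.createTrigger(CreateTriggerRequest.builder()
                    .name("nightly-trigger")
                    .type(TriggerType.SCHEDULED)
                    .schedule("cron(15 12 * * ? *)")   // every day at 12:15 UTC
                    .actions(action)
                    .startOnCreation(true)
                    .build());
        }
    }
}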
List<E> column
Specifies the column on the data set on which the aggregation function will be applied.
String aggFunc
Specifies the aggregation function to apply.
Possible aggregation functions include: avg, countDistinct, count, first, last, kurtosis, max, min, skewness, stddev_samp, stddev_pop, sum, sumDistinct, var_samp, var_pop.
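For example, a single aggregation that sums one column could be expressed as follows (a sketch only, assuming the AWS SDK for Java v2 model classes; the column name is a placeholder):

import software.amazon.awssdk.services.glue.model.AggregateOperation;

public class AggregateOperationExample {
    public static void main(String[] args) {
        // Apply the sum function to the hypothetical "price" column.
        AggregateOperation op = AggregateOperation.builder()
                .column("price")   // the column the aggregation function is applied to
                .aggFunc("sum")    // one of the functions listed above
                .build();
        System.out.println(op);
    }
}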
String accessType
The access type for the Redshift connection. Can be a direct connection or a catalog connection.
String sourceType
The source type to specify whether a specific table is the source or a custom query.
Option connection
The Glue connection to the Redshift cluster.
Option schema
The Redshift schema name when working with a direct connection.
Option table
The Redshift table name when working with a direct connection.
Option catalogDatabase
The name of the Glue Data Catalog database when working with a data catalog.
Option catalogTable
The Glue Data Catalog table name when working with a data catalog.
String catalogRedshiftSchema
The Redshift schema name when working with a data catalog.
String catalogRedshiftTable
The database table to read from.
String tempDir
The Amazon S3 path where temporary data can be staged when copying out of the database.
Option iamRole
Optional. The IAM role name to use when connecting to Amazon S3. If left blank, the IAM role defaults to the role on the job.
List<E> advancedOptions
Optional values when connecting to the Redshift cluster.
String sampleQuery
The SQL used to fetch the data from a Redshift source when the SourceType is 'query'.
String preAction
The SQL used before a MERGE or APPEND with upsert is run.
String postAction
The SQL used after a MERGE or APPEND with upsert is run.
String action
Specifies how writing to a Redshift cluster will occur.
String tablePrefix
Specifies the prefix to a table.
Boolean upsert
Specifies whether an upsert is performed on Redshift sinks when doing an APPEND.
String mergeAction
The action used to determine how a MERGE in a Redshift sink will be handled.
String mergeWhenMatched
The action used to determine how a MERGE in a Redshift sink will be handled when an existing record matches a new record.
String mergeWhenNotMatched
The action used to determine how a MERGE in a Redshift sink will be handled when an existing record doesn't match a new record.
String mergeClause
The SQL used in a custom merge to deal with matching records.
String crawlerConnection
Specifies the name of the connection that is associated with the catalog table used.
List<E> tableSchema
The array of schema output for a given node.
String stagingTable
The name of the temporary staging table that is used when doing a MERGE or APPEND with upsert.
List<E> selectedColumns
The list of column names used to determine a matching record when doing a MERGE or APPEND with upsert.
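Putting several of these fields together, a direct-connection Redshift source node's data might look like the following sketch (assuming the AWS SDK for Java v2 model classes; connection, schema, table, and bucket names are placeholders):

import software.amazon.awssdk.services.glue.model.AmazonRedshiftNodeData;
import software.amazon.awssdk.services.glue.model.Option;

public class RedshiftNodeDataExample {
    public static void main(String[] args) {
        // A direct connection that reads a single Redshift table.
        AmazonRedshiftNodeData data = AmazonRedshiftNodeData.builder()
                .accessType("direct")                                                  // direct connection rather than catalog
                .sourceType("table")                                                   // read a table rather than a custom query
                .connection(Option.builder().value("my-redshift-connection").build())
                .schema(Option.builder().value("public").build())
                .table(Option.builder().value("sales").build())
                .tempDir("s3://my-bucket/redshift-temp/")                              // staging location for the copy
                .build();
        System.out.println(data);
    }
}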
String name
The name of the Amazon Redshift source.
AmazonRedshiftNodeData data
Specifies the data of the Amazon Redshift source node.
String name
The name of the Amazon Redshift target.
AmazonRedshiftNodeData data
Specifies the data of the Amazon Redshift target node.
List<E> inputs
The nodes that are inputs to the data target.
String name
The name of the data source.
String connectionName
The name of the connection that is associated with the connector.
String connectorName
The name of a connector that assists with accessing the data store in Glue Studio.
String connectionType
The type of connection, such as marketplace.athena or custom.athena, designating a connection to an Amazon Athena data store.
String connectionTable
The name of the table in the data source.
String schemaName
The name of the CloudWatch log group to read from. For example, /aws-glue/jobs/output.
List<E> outputSchemas
Specifies the data schema for the custom Athena source.
String name
The name of your data target.
List<E> inputs
The nodes that are inputs to the data target.
String database
The database that contains the table you want to use as the target. This database must already exist in the Data Catalog.
String table
The table that defines the schema of your output data. This table must already exist in the Data Catalog.
String catalogId
The ID of the catalog in which the partition is to be created. Currently, this should be the Amazon Web Services account ID.
String databaseName
The name of the metadata database in which the partition is to be created.
String tableName
The name of the metadata table in which the partition is to be created.
List<E> partitionInputList
A list of PartitionInput
structures that define the partitions to be created.
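A minimal request might look like the following sketch (assuming the AWS SDK for Java v2 Glue client; database, table, and partition values are placeholders):

import java.util.List;
import software.amazon.awssdk.services.glue.GlueClient;
import software.amazon.awssdk.services.glue.model.BatchCreatePartitionRequest;
import software.amazon.awssdk.services.glue.model.PartitionInput;

public class BatchCreatePartitionExample {
    public static void main(String[] args) {
        try (GlueClient glue = GlueClient.create()) {
            // Two partitions keyed on a single date-style partition column.
            List<PartitionInput> partitions = List.of(
                    PartitionInput.builder().values("2024-01-01").build(),
                    PartitionInput.builder().values("2024-01-02").build());

            glue.batchCreatePartition(BatchCreatePartitionRequest.builder()
                    .databaseName("my_database")
                    .tableName("my_table")
                    .partitionInputList(partitions)
                    .build());
        }
    }
}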
String catalogId
The ID of the Data Catalog where the partition to be deleted resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which the table in question resides.
String tableName
The name of the table that contains the partitions to be deleted.
List<E> partitionsToDelete
A list of PartitionInput
structures that define the partitions to be deleted.
String catalogId
The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which the tables to delete reside. For Hive compatibility, this name is entirely lowercase.
List<E> tablesToDelete
A list of the tables to delete.
String transactionId
The transaction ID at which to delete the table contents.
String catalogId
The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.
String tableName
The name of the table. For Hive compatibility, this name is entirely lowercase.
List<E> versionIds
A list of the IDs of versions to be deleted. A VersionId
is a string representation of an integer.
Each version is incremented by 1.
List<E> names
A list of blueprint names.
Boolean includeBlueprint
Specifies whether or not to include the blueprint in the response.
Boolean includeParameterSpec
Specifies whether or not to include the parameters, as a JSON string, for the blueprint in the response.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
List<E> partitionsToGet
A list of partition values identifying the partitions to retrieve.
ErrorDetail error
An ErrorDetail
object containing code and message details about the error.
String catalogId
The Catalog ID of the table.
String databaseName
The name of the database in the catalog in which the table resides.
String tableName
The name of the table.
String type
The type of table optimizer.
String jobName
The name of the job definition that is used in the job run in question.
String jobRunId
The JobRunId
of the job run in question.
ErrorDetail errorDetail
Specifies details about the error that was encountered.
List<E> successfulSubmissions
A list of the JobRuns that were successfully submitted for stopping.
List<E> errors
A list of the errors that were encountered in trying to stop JobRuns
, including the
JobRunId
for which each error was encountered and details about the error.
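For example, stopping two runs of a job and inspecting both result lists might look like this sketch (assuming the AWS SDK for Java v2 Glue client; the job name and run IDs are placeholders):

import software.amazon.awssdk.services.glue.GlueClient;
import software.amazon.awssdk.services.glue.model.BatchStopJobRunRequest;
import software.amazon.awssdk.services.glue.model.BatchStopJobRunResponse;

public class BatchStopJobRunExample {
    public static void main(String[] args) {
        try (GlueClient glue = GlueClient.create()) {
            BatchStopJobRunResponse response = glue.batchStopJobRun(BatchStopJobRunRequest.builder()
                    .jobName("my-etl-job")
                    .jobRunIds("jr_0123456789abcdef", "jr_fedcba9876543210")   // hypothetical run IDs
                    .build());

            // Successful submissions were accepted for stopping; errors carry the run ID and details.
            response.successfulSubmissions()
                    .forEach(s -> System.out.println("Stopping run " + s.jobRunId()));
            response.errors()
                    .forEach(e -> System.out.println("Could not stop " + e.jobRunId()
                            + ": " + e.errorDetail().errorMessage()));
        }
    }
}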
String catalogId
The Catalog ID of the table.
String databaseName
The name of the database in the catalog in which the table resides.
String tableName
The name of the table.
TableOptimizer tableOptimizer
A TableOptimizer
object that contains details on the configuration and last run of a table optimizer.
List<E> partitionValueList
A list of values defining the partitions.
ErrorDetail errorDetail
The details about the batch update partition error.
String catalogId
The ID of the catalog in which the partition is to be updated. Currently, this should be the Amazon Web Services account ID.
String databaseName
The name of the metadata database in which the partition is to be updated.
String tableName
The name of the metadata table in which the partition is to be updated.
List<E> entries
A list of up to 100 BatchUpdatePartitionRequestEntry
objects to update.
List<E> partitionValueList
A list of values defining the partitions.
PartitionInput partitionInput
The structure used to update a partition.
String name
The name of the blueprint.
String description
The description of the blueprint.
Date createdOn
The date and time the blueprint was registered.
Date lastModifiedOn
The date and time the blueprint was last modified.
String parameterSpec
A JSON string that indicates the list of parameter specifications for the blueprint.
String blueprintLocation
Specifies the path in Amazon S3 where the blueprint is published.
String blueprintServiceLocation
Specifies a path in Amazon S3 where the blueprint is copied when you call
CreateBlueprint/UpdateBlueprint
to register the blueprint in Glue.
String status
The status of the blueprint registration.
Creating — The blueprint registration is in progress.
Active — The blueprint has been successfully registered.
Updating — An update to the blueprint registration is in progress.
Failed — The blueprint registration failed.
String errorMessage
An error message.
LastActiveDefinition lastActiveDefinition
When there are multiple versions of a blueprint and the latest version has some errors, this attribute indicates the last successful blueprint definition that is available with the service.
String blueprintName
The name of the blueprint.
String runId
The run ID for this blueprint run.
String workflowName
The name of a workflow that is created as a result of a successful blueprint run. If a blueprint run has an error, there will not be a workflow created.
String state
The state of the blueprint run. Possible values are:
Running — The blueprint run is in progress.
Succeeded — The blueprint run completed successfully.
Failed — The blueprint run failed and rollback is complete.
Rolling Back — The blueprint run failed and rollback is in progress.
Date startedOn
The date and time that the blueprint run started.
Date completedOn
The date and time that the blueprint run completed.
String errorMessage
Indicates any errors that are seen while running the blueprint.
String rollbackErrorMessage
If there are any errors while creating the entities of a workflow, we try to roll back the created entities until that point and delete them. This attribute indicates the errors seen while trying to delete the entities that are created.
String parameters
The blueprint parameters as a string. You will have to provide a value for each key that is required from the
parameter spec that is defined in the Blueprint$ParameterSpec
.
String roleArn
The role ARN. This role will be assumed by the Glue service and will be used to create the workflow and other entities of a workflow.
String runId
The unique run identifier associated with this run.
String runId
The unique run identifier associated with this run.
String name
The name of the Delta Lake data source.
String database
The name of the database to read from.
String table
The name of the table in the database to read from.
Map<K,V> additionalDeltaOptions
Specifies additional connection options.
List<E> outputSchemas
Specifies the data schema for the Delta Lake source.
String name
The name of the Hudi data source.
String database
The name of the database to read from.
String table
The name of the table in the database to read from.
Map<K,V> additionalHudiOptions
Specifies additional connection options.
List<E> outputSchemas
Specifies the data schema for the Hudi source.
String name
The name of the data store.
Integer windowSize
The amount of time to spend processing each micro batch.
Boolean detectSchema
Whether to automatically determine the schema from the incoming data.
String table
The name of the table in the database to read from.
String database
The name of the database to read from.
KafkaStreamingSourceOptions streamingOptions
Specifies the streaming options.
StreamingDataPreviewOptions dataPreviewOptions
Specifies options related to data preview for viewing a sample of your data.
String name
The name of the data source.
Integer windowSize
The amount of time to spend processing each micro batch.
Boolean detectSchema
Whether to automatically determine the schema from the incoming data.
String table
The name of the table in the database to read from.
String database
The name of the database to read from.
KinesisStreamingSourceOptions streamingOptions
Additional options for the Kinesis streaming data source.
StreamingDataPreviewOptions dataPreviewOptions
Additional options for data preview.
String databaseName
The name of the database to be synchronized.
List<E> tables
A list of the tables to be synchronized.
String connectionName
The name of the connection for an Amazon S3-backed Data Catalog table to be a target of the crawl when using a
Catalog
connection type paired with a NETWORK
Connection type.
String eventQueueArn
A valid Amazon SQS ARN. For example, arn:aws:sqs:region:account:sqs
.
String dlqEventQueueArn
A valid Amazon dead-letter SQS ARN. For example, arn:aws:sqs:region:account:deadLetterQueue
.
GrokClassifier grokClassifier
A classifier that uses grok
.
XMLClassifier xMLClassifier
A classifier for XML content.
JsonClassifier jsonClassifier
A classifier for JSON content.
CsvClassifier csvClassifier
A classifier for comma-separated values (CSV).
AthenaConnectorSource athenaConnectorSource
Specifies a connector to an Amazon Athena data source.
JDBCConnectorSource jDBCConnectorSource
Specifies a connector to a JDBC data source.
SparkConnectorSource sparkConnectorSource
Specifies a connector to an Apache Spark data source.
CatalogSource catalogSource
Specifies a data store in the Glue Data Catalog.
RedshiftSource redshiftSource
Specifies an Amazon Redshift data store.
S3CatalogSource s3CatalogSource
Specifies an Amazon S3 data store in the Glue Data Catalog.
S3CsvSource s3CsvSource
Specifies a comma-separated values (CSV) data store stored in Amazon S3.
S3JsonSource s3JsonSource
Specifies a JSON data store stored in Amazon S3.
S3ParquetSource s3ParquetSource
Specifies an Apache Parquet data store stored in Amazon S3.
RelationalCatalogSource relationalCatalogSource
Specifies a relational catalog data store in the Glue Data Catalog.
DynamoDBCatalogSource dynamoDBCatalogSource
Specifies a DynamoDB Catalog data store in the Glue Data Catalog.
JDBCConnectorTarget jDBCConnectorTarget
Specifies a data target that writes to Amazon S3 in Apache Parquet columnar storage.
SparkConnectorTarget sparkConnectorTarget
Specifies a target that uses an Apache Spark connector.
BasicCatalogTarget catalogTarget
Specifies a target that uses a Glue Data Catalog table.
RedshiftTarget redshiftTarget
Specifies a target that uses Amazon Redshift.
S3CatalogTarget s3CatalogTarget
Specifies a data target that writes to Amazon S3 using the Glue Data Catalog.
S3GlueParquetTarget s3GlueParquetTarget
Specifies a data target that writes to Amazon S3 in Apache Parquet columnar storage.
S3DirectTarget s3DirectTarget
Specifies a data target that writes to Amazon S3.
ApplyMapping applyMapping
Specifies a transform that maps data property keys in the data source to data property keys in the data target. You can rename keys, modify the data types for keys, and choose which keys to drop from the dataset.
SelectFields selectFields
Specifies a transform that chooses the data property keys that you want to keep.
DropFields dropFields
Specifies a transform that chooses the data property keys that you want to drop.
RenameField renameField
Specifies a transform that renames a single data property key.
Spigot spigot
Specifies a transform that writes samples of the data to an Amazon S3 bucket.
Join join
Specifies a transform that joins two datasets into one dataset using a comparison phrase on the specified data property keys. You can use inner, outer, left, right, left semi, and left anti joins.
SplitFields splitFields
Specifies a transform that splits data property keys into two DynamicFrames
. The output is a
collection of DynamicFrames
: one with selected data property keys, and one with the remaining data
property keys.
SelectFromCollection selectFromCollection
Specifies a transform that chooses one DynamicFrame from a collection of DynamicFrames. The output is the selected DynamicFrame.
FillMissingValues fillMissingValues
Specifies a transform that locates records in the dataset that have missing values and adds a new field with a value determined by imputation. The input data set is used to train the machine learning model that determines what the missing value should be.
Filter filter
Specifies a transform that splits a dataset into two, based on a filter condition.
CustomCode customCode
Specifies a transform that uses custom code you provide to perform the data transformation. The output is a collection of DynamicFrames.
SparkSQL sparkSQL
Specifies a transform where you enter a SQL query using Spark SQL syntax to transform the data. The output is a
single DynamicFrame
.
DirectKinesisSource directKinesisSource
Specifies a direct Amazon Kinesis data source.
DirectKafkaSource directKafkaSource
Specifies an Apache Kafka data store.
CatalogKinesisSource catalogKinesisSource
Specifies a Kinesis data source in the Glue Data Catalog.
CatalogKafkaSource catalogKafkaSource
Specifies an Apache Kafka data store in the Data Catalog.
DropNullFields dropNullFields
Specifies a transform that removes columns from the dataset if all values in the column are 'null'. By default, Glue Studio will recognize null objects, but some values such as empty strings, strings that are "null", -1 integers or other placeholders such as zeros, are not automatically recognized as nulls.
Merge merge
Specifies a transform that merges a DynamicFrame
with a staging DynamicFrame
based on
the specified primary keys to identify records. Duplicate records (records with the same primary keys) are not
de-duplicated.
Union union
Specifies a transform that combines the rows from two or more datasets into a single result.
PIIDetection pIIDetection
Specifies a transform that identifies, removes or masks PII data.
Aggregate aggregate
Specifies a transform that groups rows by chosen fields and computes the aggregated value by the specified function.
DropDuplicates dropDuplicates
Specifies a transform that removes rows of repeating data from a data set.
GovernedCatalogTarget governedCatalogTarget
Specifies a data target that writes to a governed catalog.
GovernedCatalogSource governedCatalogSource
Specifies a data source in a governed Data Catalog.
MicrosoftSQLServerCatalogSource microsoftSQLServerCatalogSource
Specifies a Microsoft SQL Server data source in the Glue Data Catalog.
MySQLCatalogSource mySQLCatalogSource
Specifies a MySQL data source in the Glue Data Catalog.
OracleSQLCatalogSource oracleSQLCatalogSource
Specifies an Oracle data source in the Glue Data Catalog.
PostgreSQLCatalogSource postgreSQLCatalogSource
Specifies a PostgreSQL data source in the Glue Data Catalog.
MicrosoftSQLServerCatalogTarget microsoftSQLServerCatalogTarget
Specifies a target that uses Microsoft SQL Server.
MySQLCatalogTarget mySQLCatalogTarget
Specifies a target that uses MySQL.
OracleSQLCatalogTarget oracleSQLCatalogTarget
Specifies a target that uses Oracle SQL.
PostgreSQLCatalogTarget postgreSQLCatalogTarget
Specifies a target that uses PostgreSQL.
DynamicTransform dynamicTransform
Specifies a custom visual transform created by a user.
EvaluateDataQuality evaluateDataQuality
Specifies your data quality evaluation criteria.
S3CatalogHudiSource s3CatalogHudiSource
Specifies a Hudi data source that is registered in the Glue Data Catalog. The data source must be stored in Amazon S3.
CatalogHudiSource catalogHudiSource
Specifies a Hudi data source that is registered in the Glue Data Catalog.
S3HudiSource s3HudiSource
Specifies a Hudi data source stored in Amazon S3.
S3HudiCatalogTarget s3HudiCatalogTarget
Specifies a target that writes to a Hudi data source in the Glue Data Catalog.
S3HudiDirectTarget s3HudiDirectTarget
Specifies a target that writes to a Hudi data source in Amazon S3.
DirectJDBCSource directJDBCSource
S3CatalogDeltaSource s3CatalogDeltaSource
Specifies a Delta Lake data source that is registered in the Glue Data Catalog. The data source must be stored in Amazon S3.
CatalogDeltaSource catalogDeltaSource
Specifies a Delta Lake data source that is registered in the Glue Data Catalog.
S3DeltaSource s3DeltaSource
Specifies a Delta Lake data source stored in Amazon S3.
S3DeltaCatalogTarget s3DeltaCatalogTarget
Specifies a target that writes to a Delta Lake data source in the Glue Data Catalog.
S3DeltaDirectTarget s3DeltaDirectTarget
Specifies a target that writes to a Delta Lake data source in Amazon S3.
AmazonRedshiftSource amazonRedshiftSource
Specifies a source that reads from a data source in Amazon Redshift.
AmazonRedshiftTarget amazonRedshiftTarget
Specifies a target that writes to a data target in Amazon Redshift.
EvaluateDataQualityMultiFrame evaluateDataQualityMultiFrame
Specifies your data quality evaluation criteria. Allows multiple inputs and returns a collection of DynamicFrames.
Recipe recipe
Specifies a Glue DataBrew recipe node.
SnowflakeSource snowflakeSource
Specifies a Snowflake data source.
SnowflakeTarget snowflakeTarget
Specifies a target that writes to a Snowflake data source.
ConnectorDataSource connectorDataSource
Specifies a source generated with standard connection options.
ConnectorDataTarget connectorDataTarget
Specifies a target generated with standard connection options.
String columnName
The name of the column that failed.
ErrorDetail error
An error message with the reason for the failure of an operation.
String columnName
The name of the column that the statistics belong to.
String columnType
The data type of the column.
Date analyzedTime
The timestamp of when column statistics were generated.
ColumnStatisticsData statisticsData
A ColumnStatisticsData
object that contains the statistics data values.
String type
The type of column statistics data.
BooleanColumnStatisticsData booleanColumnStatisticsData
Boolean column statistics data.
DateColumnStatisticsData dateColumnStatisticsData
Date column statistics data.
DecimalColumnStatisticsData decimalColumnStatisticsData
Decimal column statistics data. UnscaledValues within are Base64-encoded binary objects storing big-endian, two's complement representations of the decimal's unscaled value.
DoubleColumnStatisticsData doubleColumnStatisticsData
Double column statistics data.
LongColumnStatisticsData longColumnStatisticsData
Long column statistics data.
StringColumnStatisticsData stringColumnStatisticsData
String column statistics data.
BinaryColumnStatisticsData binaryColumnStatisticsData
Binary column statistics data.
ColumnStatistics columnStatistics
The ColumnStatistics
of the column.
ErrorDetail error
An error message with the reason for the failure of an operation.
String customerId
The Amazon Web Services account ID.
String columnStatisticsTaskRunId
The identifier for the particular column statistics task run.
String databaseName
The database where the table resides.
String tableName
The name of the table for which column statistics are generated.
List<E> columnNameList
A list of the column names. If none is supplied, all column names for the table will be used by default.
String catalogID
The ID of the Data Catalog where the table resides. If none is supplied, the Amazon Web Services account ID is used by default.
String role
The IAM role that the service assumes to generate statistics.
Double sampleSize
The percentage of rows used to generate statistics. If none is supplied, the entire table will be used to generate stats.
String securityConfiguration
Name of the security configuration that is used to encrypt CloudWatch logs for the column stats task run.
Integer numberOfWorkers
The number of workers used to generate column statistics. The job is preconfigured to autoscale up to 25 instances.
String workerType
The type of workers being used for generating stats. The default is g.1x
.
String status
The status of the task run.
Date creationTime
The time that this task was created.
Date lastUpdated
The last point in time when this task was modified.
Date startTime
The start time of the task.
Date endTime
The end time of the task.
String errorMessage
The error message for the job.
Double dPUSeconds
The calculated DPU usage in seconds for all autoscaled workers.
String logicalOperator
A logical operator.
String jobName
The name of the job whose JobRuns
this condition applies to, and on which this trigger waits.
String state
The condition state. Currently, the only job states that a trigger can listen for are SUCCEEDED
,
STOPPED
, FAILED
, and TIMEOUT
. The only crawler states that a trigger can
listen for are SUCCEEDED
, FAILED
, and CANCELLED
.
String crawlerName
The name of the crawler to which this condition applies.
String crawlState
The state of the crawler to which this condition applies.
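A conditional trigger ties these conditions to actions through a predicate. The following is a minimal sketch (assuming the AWS SDK for Java v2 Glue client; job names are placeholders) that starts a downstream job only after an upstream job run succeeds:

import software.amazon.awssdk.services.glue.GlueClient;
import software.amazon.awssdk.services.glue.model.Action;
import software.amazon.awssdk.services.glue.model.Condition;
import software.amazon.awssdk.services.glue.model.CreateTriggerRequest;
import software.amazon.awssdk.services.glue.model.Predicate;
import software.amazon.awssdk.services.glue.model.TriggerType;

public class ConditionalTriggerExample {
    public static void main(String[] args) {
        try (GlueClient glue = GlueClient.create()) {
            // Wait for the upstream job run to reach the SUCCEEDED state.
            Condition condition = Condition.builder()
                    .logicalOperator("EQUALS")
                    .jobName("upstream-job")
                    .state("SUCCEEDED")
                    .build();

            glue.createTrigger(CreateTriggerRequest.builder()
                    .name("on-upstream-success")
                    .type(TriggerType.CONDITIONAL)
                    .predicate(Predicate.builder().conditions(condition).build())
                    .actions(Action.builder().jobName("downstream-job").build())
                    .startOnCreation(true)
                    .build());
        }
    }
}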
Long numTruePositives
The number of matches in the data that the transform correctly found, in the confusion matrix for your transform.
Long numFalsePositives
The number of nonmatches in the data that the transform incorrectly classified as a match, in the confusion matrix for your transform.
Long numTrueNegatives
The number of nonmatches in the data that the transform correctly rejected, in the confusion matrix for your transform.
Long numFalseNegatives
The number of matches in the data that the transform didn't find, in the confusion matrix for your transform.
String name
The name of the connection definition.
String description
The description of the connection.
String connectionType
The type of the connection. Currently, SFTP is not supported.
List<E> matchCriteria
A list of criteria that can be used in selecting this connection.
Map<K,V> connectionProperties
These key-value pairs define parameters for the connection:
HOST
- The host URI: either the fully qualified domain name (FQDN) or the IPv4 address of the
database host.
PORT
- The port number, between 1024 and 65535, of the port on which the database host is listening
for database connections.
USER_NAME
- The name under which to log in to the database. The value string for
USER_NAME
is "USERNAME
".
PASSWORD
- A password, if one is used, for the user name.
ENCRYPTED_PASSWORD
- When you enable connection password protection by setting
ConnectionPasswordEncryption
in the Data Catalog encryption settings, this field stores the
encrypted password.
JDBC_DRIVER_JAR_URI
- The Amazon Simple Storage Service (Amazon S3) path of the JAR file that
contains the JDBC driver to use.
JDBC_DRIVER_CLASS_NAME
- The class name of the JDBC driver to use.
JDBC_ENGINE
- The name of the JDBC engine to use.
JDBC_ENGINE_VERSION
- The version of the JDBC engine to use.
CONFIG_FILES
- (Reserved for future use.)
INSTANCE_ID
- The instance ID to use.
JDBC_CONNECTION_URL
- The URL for connecting to a JDBC data source.
JDBC_ENFORCE_SSL
- A Boolean string (true, false) specifying whether Secure Sockets Layer (SSL) with
hostname matching is enforced for the JDBC connection on the client. The default is false.
CUSTOM_JDBC_CERT
- An Amazon S3 location specifying the customer's root certificate. Glue uses this
root certificate to validate the customer’s certificate when connecting to the customer database. Glue only
handles X.509 certificates. The certificate provided must be DER-encoded and supplied in Base64 encoding PEM
format.
SKIP_CUSTOM_JDBC_CERT_VALIDATION
- By default, this is false
. Glue validates the
Signature algorithm and Subject Public Key Algorithm for the customer certificate. The only permitted algorithms
for the Signature algorithm are SHA256withRSA, SHA384withRSA or SHA512withRSA. For the Subject Public Key
Algorithm, the key length must be at least 2048. You can set the value of this property to true
to
skip Glue’s validation of the customer certificate.
CUSTOM_JDBC_CERT_STRING
- A custom JDBC certificate string which is used for domain match or
distinguished name match to prevent a man-in-the-middle attack. In Oracle database, this is used as the
SSL_SERVER_CERT_DN
; in Microsoft SQL Server, this is used as the hostNameInCertificate
.
CONNECTION_URL
- The URL for connecting to a general (non-JDBC) data source.
SECRET_ID
- The secret ID used for the secret manager of credentials.
CONNECTOR_URL
- The connector URL for a MARKETPLACE or CUSTOM connection.
CONNECTOR_TYPE
- The connector type for a MARKETPLACE or CUSTOM connection.
CONNECTOR_CLASS_NAME
- The connector class name for a MARKETPLACE or CUSTOM connection.
KAFKA_BOOTSTRAP_SERVERS
- A comma-separated list of host and port pairs that are the addresses of
the Apache Kafka brokers in a Kafka cluster to which a Kafka client will connect and bootstrap itself.
KAFKA_SSL_ENABLED
- Whether to enable or disable SSL on an Apache Kafka connection. Default value is
"true".
KAFKA_CUSTOM_CERT
- The Amazon S3 URL for the private CA cert file (.pem format). The default is an
empty string.
KAFKA_SKIP_CUSTOM_CERT_VALIDATION
- Whether to skip the validation of the CA cert file or not. Glue
validates for three algorithms: SHA256withRSA, SHA384withRSA and SHA512withRSA. Default value is "false".
KAFKA_CLIENT_KEYSTORE
- The Amazon S3 location of the client keystore file for Kafka client side
authentication (Optional).
KAFKA_CLIENT_KEYSTORE_PASSWORD
- The password to access the provided keystore (Optional).
KAFKA_CLIENT_KEY_PASSWORD
- A keystore can consist of multiple keys, so this is the password to
access the client key to be used with the Kafka server side key (Optional).
ENCRYPTED_KAFKA_CLIENT_KEYSTORE_PASSWORD
- The encrypted version of the Kafka client keystore
password (if the user has the Glue encrypt passwords setting selected).
ENCRYPTED_KAFKA_CLIENT_KEY_PASSWORD
- The encrypted version of the Kafka client key password (if the
user has the Glue encrypt passwords setting selected).
KAFKA_SASL_MECHANISM
- "SCRAM-SHA-512"
, "GSSAPI"
, or
"AWS_MSK_IAM"
. These are the supported SASL Mechanisms.
KAFKA_SASL_SCRAM_USERNAME
- A plaintext username used to authenticate with the "SCRAM-SHA-512"
mechanism.
KAFKA_SASL_SCRAM_PASSWORD
- A plaintext password used to authenticate with the "SCRAM-SHA-512"
mechanism.
ENCRYPTED_KAFKA_SASL_SCRAM_PASSWORD
- The encrypted version of the Kafka SASL SCRAM password (if the
user has the Glue encrypt passwords setting selected).
KAFKA_SASL_SCRAM_SECRETS_ARN
- The Amazon Resource Name of a secret in Amazon Web Services Secrets
Manager.
KAFKA_SASL_GSSAPI_KEYTAB
- The S3 location of a Kerberos keytab
file. A keytab stores
long-term keys for one or more principals. For more information, see MIT Kerberos Documentation: Keytab.
KAFKA_SASL_GSSAPI_KRB5_CONF
- The S3 location of a Kerberos krb5.conf
file. A krb5.conf
stores Kerberos configuration information, such as the location of the KDC server. For more information, see MIT Kerberos Documentation:
krb5.conf.
KAFKA_SASL_GSSAPI_SERVICE
- The Kerberos service name, as set with
sasl.kerberos.service.name
in your Kafka Configuration.
KAFKA_SASL_GSSAPI_PRINCIPAL
- The name of the Kerberos principal used by Glue. For more information,
see Kafka Documentation:
Configuring Kafka Brokers.
PhysicalConnectionRequirements physicalConnectionRequirements
A map of physical connection requirements, such as virtual private cloud (VPC) and SecurityGroup
,
that are needed to make this connection successfully.
Date creationTime
The time that this connection definition was created.
Date lastUpdatedTime
The last time that this connection definition was updated.
String lastUpdatedBy
The user, group, or role that last updated this connection definition.
String name
The name of the connection. A connection will not function as expected without a name.
String description
The description of the connection.
String connectionType
The type of the connection. Currently, these types are supported:
JDBC
- Designates a connection to a database through Java Database Connectivity (JDBC).
JDBC
Connections use the following ConnectionParameters.
Required: All of (HOST
, PORT
, JDBC_ENGINE
) or
JDBC_CONNECTION_URL
.
Required: All of (USERNAME
, PASSWORD
) or SECRET_ID
.
Optional: JDBC_ENFORCE_SSL
, CUSTOM_JDBC_CERT
, CUSTOM_JDBC_CERT_STRING
,
SKIP_CUSTOM_JDBC_CERT_VALIDATION
. These parameters are used to configure SSL with JDBC.
KAFKA
- Designates a connection to an Apache Kafka streaming platform.
KAFKA
Connections use the following ConnectionParameters.
Required: KAFKA_BOOTSTRAP_SERVERS
.
Optional: KAFKA_SSL_ENABLED
, KAFKA_CUSTOM_CERT
,
KAFKA_SKIP_CUSTOM_CERT_VALIDATION
. These parameters are used to configure SSL with
KAFKA
.
Optional: KAFKA_CLIENT_KEYSTORE
, KAFKA_CLIENT_KEYSTORE_PASSWORD
,
KAFKA_CLIENT_KEY_PASSWORD
, ENCRYPTED_KAFKA_CLIENT_KEYSTORE_PASSWORD
,
ENCRYPTED_KAFKA_CLIENT_KEY_PASSWORD
. These parameters are used to configure TLS client configuration
with SSL in KAFKA
.
Optional: KAFKA_SASL_MECHANISM
. Can be specified as SCRAM-SHA-512
, GSSAPI
,
or AWS_MSK_IAM
.
Optional: KAFKA_SASL_SCRAM_USERNAME
, KAFKA_SASL_SCRAM_PASSWORD
,
ENCRYPTED_KAFKA_SASL_SCRAM_PASSWORD
. These parameters are used to configure SASL/SCRAM-SHA-512
authentication with KAFKA
.
Optional: KAFKA_SASL_GSSAPI_KEYTAB
, KAFKA_SASL_GSSAPI_KRB5_CONF
,
KAFKA_SASL_GSSAPI_SERVICE
, KAFKA_SASL_GSSAPI_PRINCIPAL
. These parameters are used to
configure SASL/GSSAPI authentication with KAFKA
.
MONGODB
- Designates a connection to a MongoDB document database.
MONGODB
Connections use the following ConnectionParameters.
Required: CONNECTION_URL
.
Required: All of (USERNAME
, PASSWORD
) or SECRET_ID
.
NETWORK
- Designates a network connection to a data source within an Amazon Virtual Private Cloud
environment (Amazon VPC).
NETWORK
Connections do not require ConnectionParameters. Instead, provide a
PhysicalConnectionRequirements.
MARKETPLACE
- Uses configuration settings contained in a connector purchased from Amazon Web
Services Marketplace to read from and write to data stores that are not natively supported by Glue.
MARKETPLACE
Connections use the following ConnectionParameters.
Required: CONNECTOR_TYPE
, CONNECTOR_URL
, CONNECTOR_CLASS_NAME
,
CONNECTION_URL
.
Required for JDBC
CONNECTOR_TYPE
connections: All of (USERNAME
,
PASSWORD
) or SECRET_ID
.
CUSTOM
- Uses configuration settings contained in a custom connector to read from and write to data
stores that are not natively supported by Glue.
SFTP
is not supported.
For more information about how optional ConnectionProperties are used to configure features in Glue, consult Glue connection properties.
For more information about how optional ConnectionProperties are used to configure features in Glue Studio, consult Using connectors and connections.
List<E> matchCriteria
A list of criteria that can be used in selecting this connection.
Map<K,V> connectionProperties
These key-value pairs define parameters for the connection.
PhysicalConnectionRequirements physicalConnectionRequirements
A map of physical connection requirements, such as virtual private cloud (VPC) and SecurityGroup
,
that are needed to successfully make this connection.
Boolean returnConnectionPasswordEncrypted
When the ReturnConnectionPasswordEncrypted
flag is set to "true", passwords remain encrypted in the
responses of GetConnection
and GetConnections
. This encryption takes effect
independently from catalog encryption.
String awsKmsKeyId
A KMS key that is used to encrypt the connection password.
If connection password protection is enabled, the caller of CreateConnection
and
UpdateConnection
needs at least kms:Encrypt
permission on the specified KMS key, to
encrypt passwords before storing them in the Data Catalog.
You can set the decrypt permission to enable or restrict access on the password key according to your security requirements.
String name
The name of this source node.
String connectionType
The connectionType
, as provided to the underlying Glue library. This node type supports the
following connection types:
opensearch
azuresql
azurecosmos
bigquery
saphana
teradata
vertica
Map<K,V> data
A map specifying connection options for the node. You can find standard connection options for the corresponding connection type in the Connection parameters section of the Glue documentation.
List<E> outputSchemas
Specifies the data schema for this source.
String name
The name of this target node.
String connectionType
The connectionType
, as provided to the underlying Glue library. This node type supports the
following connection types:
opensearch
azuresql
azurecosmos
bigquery
saphana
teradata
vertica
Map<K,V> data
A map specifying connection options for the node. You can find standard connection options for the corresponding connection type in the Connection parameters section of the Glue documentation.
List<E> inputs
The nodes that are inputs to the data target.
String state
The state of the crawler.
Date startedOn
The date and time on which the crawl started.
Date completedOn
The date and time on which the crawl completed.
String errorMessage
The error message associated with the crawl.
String logGroup
The log group associated with the crawl.
String logStream
The log stream associated with the crawl.
String name
The name of the crawler.
String role
The Amazon Resource Name (ARN) of an IAM role that's used to access customer resources, such as Amazon Simple Storage Service (Amazon S3) data.
CrawlerTargets targets
A collection of targets to crawl.
String databaseName
The name of the database in which the crawler's output is stored.
String description
A description of the crawler.
List<E> classifiers
A list of UTF-8 strings that specify the custom classifiers that are associated with the crawler.
RecrawlPolicy recrawlPolicy
A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.
SchemaChangePolicy schemaChangePolicy
The policy that specifies update and delete behaviors for the crawler.
LineageConfiguration lineageConfiguration
A configuration that specifies whether data lineage is enabled for the crawler.
String state
Indicates whether the crawler is running, or whether a run is pending.
String tablePrefix
The prefix added to the names of tables that are created.
Schedule schedule
For scheduled crawlers, the schedule when the crawler runs.
Long crawlElapsedTime
If the crawler is running, contains the total time elapsed since the last crawl began.
Date creationTime
The time that the crawler was created.
Date lastUpdated
The time that the crawler was last updated.
LastCrawlInfo lastCrawl
The status of the last crawl, and potentially error information if an error occurred.
Long version
The version of the crawler.
String configuration
Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Setting crawler configuration options.
String crawlerSecurityConfiguration
The name of the SecurityConfiguration
structure to be used by this crawler.
LakeFormationConfiguration lakeFormationConfiguration
Specifies whether the crawler should use Lake Formation credentials instead of the IAM role credentials.
String crawlId
A UUID identifier for each crawl.
String state
The state of the crawl.
Date startTime
The date and time on which the crawl started.
Date endTime
The date and time on which the crawl ended.
String summary
A run summary for the specific crawl in JSON. Contains the catalog tables and partitions that were added, updated, or deleted.
String errorMessage
If an error occurred, the error message associated with the crawl.
String logGroup
The log group associated with the crawl.
String logStream
The log stream associated with the crawl.
String messagePrefix
The prefix for a CloudWatch message about this crawl.
Double dPUHour
The number of data processing units (DPU) used in hours for the crawl.
String crawlerName
The name of the crawler.
Double timeLeftSeconds
The estimated time left to complete a running crawl.
Boolean stillEstimating
True if the crawler is still estimating how long it will take to complete this run.
Double lastRuntimeSeconds
The duration of the crawler's most recent run, in seconds.
Double medianRuntimeSeconds
The median duration of this crawler's runs, in seconds.
Integer tablesCreated
The number of tables created by this crawler.
Integer tablesUpdated
The number of tables updated by this crawler.
Integer tablesDeleted
The number of tables deleted by this crawler.
List<E> s3Targets
Specifies Amazon Simple Storage Service (Amazon S3) targets.
List<E> jdbcTargets
Specifies JDBC targets.
List<E> mongoDBTargets
Specifies Amazon DocumentDB or MongoDB targets.
List<E> dynamoDBTargets
Specifies Amazon DynamoDB targets.
List<E> catalogTargets
Specifies Glue Data Catalog targets.
List<E> deltaTargets
Specifies Delta data store targets.
List<E> icebergTargets
Specifies Apache Iceberg data store targets.
List<E> hudiTargets
Specifies Apache Hudi data store targets.
String fieldName
A key used to filter the crawler runs for a specified crawler. Valid values for each of the field names are:
CRAWL_ID
: A string representing the UUID identifier for a crawl.
STATE
: A string representing the state of the crawl.
START_TIME
and END_TIME
: The epoch timestamp in milliseconds.
DPU_HOUR
: The number of data processing unit (DPU) hours used for the crawl.
String filterOperator
A defined comparator that operates on the value. The available operators are:
GT
: Greater than.
GE
: Greater than or equal to.
LT
: Less than.
LE
: Less than or equal to.
EQ
: Equal to.
NE
: Not equal to.
String fieldValue
The value provided for comparison on the crawl field.
String name
Returns the name of the blueprint that was registered.
CreateGrokClassifierRequest grokClassifier
A GrokClassifier
object specifying the classifier to create.
CreateXMLClassifierRequest xMLClassifier
An XMLClassifier
object specifying the classifier to create.
CreateJsonClassifierRequest jsonClassifier
A JsonClassifier
object specifying the classifier to create.
CreateCsvClassifierRequest csvClassifier
A CsvClassifier
object specifying the classifier to create.
String catalogId
The ID of the Data Catalog in which to create the connection. If none is provided, the Amazon Web Services account ID is used by default.
ConnectionInput connectionInput
A ConnectionInput
object defining the connection to create.
Map<K,V> tags
The tags you assign to the connection.
String name
Name of the new crawler.
String role
The IAM role or Amazon Resource Name (ARN) of an IAM role used by the new crawler to access customer resources.
String databaseName
The Glue database where results are written, such as:
arn:aws:daylight:us-east-1::database/sometable/*
.
String description
A description of the new crawler.
CrawlerTargets targets
A collection of targets to crawl.
String schedule
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
List<E> classifiers
A list of custom classifiers that the user has registered. By default, all built-in classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.
String tablePrefix
The table prefix used for catalog tables that are created.
SchemaChangePolicy schemaChangePolicy
The policy for the crawler's update and deletion behavior.
RecrawlPolicy recrawlPolicy
A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.
LineageConfiguration lineageConfiguration
Specifies data lineage configuration settings for the crawler.
LakeFormationConfiguration lakeFormationConfiguration
Specifies Lake Formation configuration settings for the crawler.
String configuration
Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Setting crawler configuration options.
String crawlerSecurityConfiguration
The name of the SecurityConfiguration
structure to be used by this crawler.
Map<K,V> tags
The tags to use with this crawler request. You may use tags to limit access to the crawler. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
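Taken together, a minimal crawler creation request might look like the following sketch (assuming the AWS SDK for Java v2 Glue client; the role, bucket, database, and crawler names are placeholders):

import software.amazon.awssdk.services.glue.GlueClient;
import software.amazon.awssdk.services.glue.model.CrawlerTargets;
import software.amazon.awssdk.services.glue.model.CreateCrawlerRequest;
import software.amazon.awssdk.services.glue.model.S3Target;

public class CreateCrawlerExample {
    public static void main(String[] args) {
        try (GlueClient glue = GlueClient.create()) {
            glue.createCrawler(CreateCrawlerRequest.builder()
                    .name("sales-data-crawler")
                    .role("GlueCrawlerRole")                         // IAM role the crawler assumes
                    .databaseName("sales_catalog")                   // Data Catalog database for results
                    .targets(CrawlerTargets.builder()
                            .s3Targets(S3Target.builder().path("s3://my-bucket/sales/").build())
                            .build())
                    .schedule("cron(15 12 * * ? *)")                 // every day at 12:15 UTC
                    .tablePrefix("raw_")
                    .build());
        }
    }
}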
String name
The name of the classifier.
String delimiter
A custom symbol to denote what separates each column entry in the row.
String quoteSymbol
A custom symbol to denote what combines content into a single column value. Must be different from the column delimiter.
String containsHeader
Indicates whether the CSV file contains a header.
List<E> header
A list of strings representing column names.
Boolean disableValueTrimming
Specifies not to trim values before identifying the type of column values. The default value is true.
Boolean allowSingleColumn
Enables the processing of files that contain only one column.
Boolean customDatatypeConfigured
Enables the configuration of custom datatypes.
List<E> customDatatypes
Creates a list of supported custom datatypes.
String serde
Sets the SerDe for processing CSV in the classifier, which will be applied in the Data Catalog. Valid values are
OpenCSVSerDe
, LazySimpleSerDe
, and None
. You can specify the
None
value when you want the crawler to do the detection.
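For instance, registering a classifier for pipe-delimited files with a known header might look like the following sketch (assuming the AWS SDK for Java v2 Glue client; the classifier name and column names are placeholders):

import software.amazon.awssdk.services.glue.GlueClient;
import software.amazon.awssdk.services.glue.model.CreateClassifierRequest;
import software.amazon.awssdk.services.glue.model.CreateCsvClassifierRequest;

public class CreateCsvClassifierExample {
    public static void main(String[] args) {
        try (GlueClient glue = GlueClient.create()) {
            glue.createClassifier(CreateClassifierRequest.builder()
                    .csvClassifier(CreateCsvClassifierRequest.builder()
                            .name("pipe-delimited-csv")
                            .delimiter("|")                    // column delimiter
                            .quoteSymbol("\"")                 // must differ from the delimiter
                            .containsHeader("PRESENT")         // the files carry a header row
                            .header("id", "name", "amount")    // expected column names
                            .build())
                    .build());
        }
    }
}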
String name
A name for the custom pattern that allows it to be retrieved or deleted later. This name must be unique per Amazon Web Services account.
String regexString
A regular expression string that is used for detecting sensitive data in a custom pattern.
List<E> contextWords
A list of context words. If none of these context words are found within the vicinity of the regular expression, the data will not be detected as sensitive data.
If no context words are passed only a regular expression is checked.
Map<K,V> tags
A list of tags applied to the custom entity type.
String name
The name of the custom pattern you created.
String catalogId
The ID of the Data Catalog in which to create the database. If none is provided, the Amazon Web Services account ID is used by default.
DatabaseInput databaseInput
The metadata for the database.
Map<K,V> tags
The tags you assign to the database.
String name
A unique name for the data quality ruleset.
String description
A description of the data quality ruleset.
String ruleset
A Data Quality Definition Language (DQDL) ruleset. For more information, see the Glue developer guide.
Map<K,V> tags
A list of tags applied to the data quality ruleset.
DataQualityTargetTable targetTable
A target table associated with the data quality ruleset.
String clientToken
Used for idempotency and is recommended to be set to a random ID (such as a UUID) to avoid creating or starting multiple instances of the same resource.
String name
A unique name for the data quality ruleset.
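As an illustration, creating a ruleset with two DQDL rules against a catalog table might look like the following sketch (assuming the AWS SDK for Java v2 Glue client; the ruleset, database, table, and column names are placeholders):

import software.amazon.awssdk.services.glue.GlueClient;
import software.amazon.awssdk.services.glue.model.CreateDataQualityRulesetRequest;
import software.amazon.awssdk.services.glue.model.DataQualityTargetTable;

public class CreateDataQualityRulesetExample {
    public static void main(String[] args) {
        try (GlueClient glue = GlueClient.create()) {
            glue.createDataQualityRuleset(CreateDataQualityRulesetRequest.builder()
                    .name("orders-quality-checks")
                    .description("Basic completeness checks for the orders table")
                    .ruleset("Rules = [ RowCount > 0, IsComplete \"order_id\" ]")   // DQDL ruleset
                    .targetTable(DataQualityTargetTable.builder()
                            .databaseName("sales_catalog")
                            .tableName("orders")
                            .build())
                    .build());
        }
    }
}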
String endpointName
The name to be assigned to the new DevEndpoint
.
String roleArn
The IAM role for the DevEndpoint
.
List<E> securityGroupIds
Security group IDs for the security groups to be used by the new DevEndpoint
.
String subnetId
The subnet ID for the new DevEndpoint
to use.
String publicKey
The public key to be used by this DevEndpoint
for authentication. This attribute is provided for
backward compatibility because the recommended attribute to use is public keys.
List<E> publicKeys
A list of public keys to be used by the development endpoints for authentication. The use of this attribute is preferred over a single public key because the public keys allow you to have a different private key per client.
If you previously created an endpoint with a public key, you must remove that key to be able to set a list of
public keys. Call the UpdateDevEndpoint
API with the public key content in the
deletePublicKeys
attribute, and the list of new keys in the addPublicKeys
attribute.
Integer numberOfNodes
The number of Glue Data Processing Units (DPUs) to allocate to this DevEndpoint
.
String workerType
The type of predefined worker that is allocated to the development endpoint. Accepts a value of Standard, G.1X, or G.2X.
For the Standard
worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2
executors per worker.
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and
provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and
provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
Known issue: when a development endpoint is created with the G.2X
WorkerType
configuration, the Spark drivers for the development endpoint will run on 4 vCPU, 16 GB of memory, and a 64 GB
disk.
String glueVersion
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Development endpoints that are created without specifying a Glue version default to Glue 0.9.
You can specify a version of Python support for development endpoints by using the Arguments
parameter in the CreateDevEndpoint
or UpdateDevEndpoint
APIs. If no arguments are
provided, the version defaults to Python 2.
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated to the development endpoint.
The maximum number of workers you can define is 299 for G.1X
, and 149 for G.2X
.
String extraPythonLibsS3Path
The paths to one or more Python libraries in an Amazon S3 bucket that should be loaded in your
DevEndpoint
. Multiple values must be complete paths separated by a comma.
You can only use pure Python libraries with a DevEndpoint
. Libraries that rely on C extensions, such
as the pandas Python data analysis library, are not yet supported.
String extraJarsS3Path
The path to one or more Java .jar
files in an S3 bucket that should be loaded in your
DevEndpoint
.
String securityConfiguration
The name of the SecurityConfiguration
structure to be used with this DevEndpoint
.
Map<K,V> tags
The tags to use with this DevEndpoint. You may use tags to limit access to the DevEndpoint. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
Map<K,V> arguments
A map of arguments used to configure the DevEndpoint
.
String endpointName
The name assigned to the new DevEndpoint
.
String status
The current status of the new DevEndpoint
.
List<E> securityGroupIds
The security groups assigned to the new DevEndpoint
.
String subnetId
The subnet ID assigned to the new DevEndpoint
.
String roleArn
The Amazon Resource Name (ARN) of the role assigned to the new DevEndpoint
.
String yarnEndpointAddress
The address of the YARN endpoint used by this DevEndpoint
.
Integer zeppelinRemoteSparkInterpreterPort
The Apache Zeppelin port for the remote Apache Spark interpreter.
Integer numberOfNodes
The number of Glue Data Processing Units (DPUs) allocated to this DevEndpoint.
String workerType
The type of predefined worker that is allocated to the development endpoint. May be a value of Standard, G.1X, or G.2X.
String glueVersion
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated to the development endpoint.
String availabilityZone
The Amazon Web Services Availability Zone where this DevEndpoint
is located.
String vpcId
The ID of the virtual private cloud (VPC) used by this DevEndpoint
.
String extraPythonLibsS3Path
The paths to one or more Python libraries in an S3 bucket that will be loaded in your DevEndpoint
.
String extraJarsS3Path
Path to one or more Java .jar
files in an S3 bucket that will be loaded in your
DevEndpoint
.
String failureReason
The reason for a current failure in this DevEndpoint
.
String securityConfiguration
The name of the SecurityConfiguration
structure being used with this DevEndpoint
.
Date createdTimestamp
The point in time at which this DevEndpoint
was created.
Map<K,V> arguments
The map of arguments used to configure this DevEndpoint
.
Valid arguments are:
"--enable-glue-datacatalog": ""
You can specify a version of Python support for development endpoints by using the Arguments
parameter in the CreateDevEndpoint
or UpdateDevEndpoint
APIs. If no arguments are
provided, the version defaults to Python 2.
String classification
An identifier of the data format that the classifier matches, such as Twitter, JSON, Omniture logs, Amazon CloudWatch Logs, and so on.
String name
The name of the new classifier.
String grokPattern
The grok pattern used by this classifier.
String customPatterns
Optional custom grok patterns used by this classifier.
String name
The name you assign to this job definition. It must be unique in your account.
String description
Description of the job being defined.
String logUri
This field is reserved for future use.
String role
The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
ExecutionProperty executionProperty
An ExecutionProperty
specifying the maximum number of concurrent runs allowed for this job.
JobCommand command
The JobCommand
that runs this job.
Map<K,V> defaultArguments
The default arguments for every run of this job, specified as name-value pairs.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
Map<K,V> nonOverridableArguments
Arguments for this job that are not overridden when providing job arguments in a job run, specified as name-value pairs.
ConnectionsList connections
The connections used for this job.
Integer maxRetries
The maximum number of times to retry this job if it fails.
Integer allocatedCapacity
This parameter is deprecated. Use MaxCapacity
instead.
The number of Glue data processing units (DPUs) to allocate to this Job. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Integer timeout
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated
and enters TIMEOUT
status. The default is 2,880 minutes (48 hours).
Double maxCapacity
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity
. Instead, you should specify a
Worker type
and the Number of workers
.
Do not set MaxCapacity
if using WorkerType
and NumberOfWorkers
.
The value that can be allocated for MaxCapacity
depends on whether you are running a Python shell
job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job (JobCommand.Name
="pythonshell"), you can allocate either 0.0625
or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job (JobCommand.Name
="glueetl") or Apache Spark streaming ETL
job (JobCommand.Name
="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs.
This job type cannot have a fractional DPU allocation.
String securityConfiguration
The name of the SecurityConfiguration
structure to be used with this job.
Map<K,V> tags
The tags to use with this job. You may use tags to limit access to the job. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
NotificationProperty notificationProperty
Specifies configuration properties of a job notification.
String glueVersion
In Spark jobs, GlueVersion
determines the versions of Apache Spark and Python that Glue supports in
a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion
to 4.0
or greater. However, the versions of Ray, Python
and additional libraries available in your Ray job are determined by the Runtime
parameter of the
Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated when a job runs.
String workerType
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk
(approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads such
as data transforms, joins, and queries, and offers a scalable and cost-effective way to run most jobs.
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk
(approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads such
as data transforms, joins, and queries, and offers a scalable and cost-effective way to run most jobs.
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk
(approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available
only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio),
US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo),
Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk
(approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available
only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the
G.4X
worker type.
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk
(approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for low volume
streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X
worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk
(approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
Map<K,V> codeGenConfigurationNodes
The representation of a directed acyclic graph on which both the Glue Studio visual component and Glue Studio code generation is based.
String executionClass
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl
will be allowed to set
ExecutionClass
to FLEX
. The flexible execution class is available for Spark jobs.
SourceControlDetails sourceControlDetails
The details for a source control configuration for a job, allowing synchronization of job artifacts to or from a remote repository.
String name
The unique name that was provided for this job definition.
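A minimal sketch of how the job-definition fields above might be assembled into a create-job call with the AWS SDK for Java v2. The job name, role, script location, and argument values are placeholders, not values taken from this reference.

    import java.util.Map;
    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.CreateJobRequest;
    import software.amazon.awssdk.services.glue.model.JobCommand;

    public class CreateJobExample {
        public static void main(String[] args) {
            try (GlueClient glue = GlueClient.create()) {
                CreateJobRequest request = CreateJobRequest.builder()
                        .name("nightly-etl")                                        // must be unique in the account
                        .role("arn:aws:iam::123456789012:role/GlueJobRole")         // placeholder IAM role
                        .command(JobCommand.builder()
                                .name("glueetl")                                    // Apache Spark ETL job type
                                .scriptLocation("s3://my-bucket/scripts/etl.py")    // placeholder script path
                                .pythonVersion("3")
                                .build())
                        .glueVersion("4.0")
                        .workerType("G.1X")                                         // with WorkerType/NumberOfWorkers, do not set MaxCapacity
                        .numberOfWorkers(10)
                        .timeout(2880)                                              // minutes; the documented default
                        .maxRetries(1)
                        .defaultArguments(Map.of("--TempDir", "s3://my-bucket/tmp/")) // placeholder job argument
                        .build();
                String jobName = glue.createJob(request).name();
                System.out.println("Created job: " + jobName);
            }
        }
    }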
String name
The name of the classifier.
String jsonPath
A JsonPath
string defining the JSON data for the classifier to classify. Glue supports a subset of
JsonPath, as described in Writing JsonPath
Custom Classifiers.
String name
The unique name that you give the transform when you create it.
String description
A description of the machine learning transform that is being defined. The default is an empty string.
List<E> inputRecordTables
A list of Glue table definitions used by the transform.
TransformParameters parameters
The algorithmic parameters that are specific to the transform type used. Conditionally dependent on the transform type.
String role
The name or Amazon Resource Name (ARN) of the IAM role with the required permissions. The required permissions include both Glue service role permissions to Glue resources, and Amazon S3 permissions required by the transform.
This role needs Glue service role permissions to allow access to resources in Glue. See Attach a Policy to IAM Users That Access Glue.
This role needs permission to your Amazon Simple Storage Service (Amazon S3) sources, targets, temporary directory, scripts, and any libraries used by the task run for this transform.
String glueVersion
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
Double maxCapacity
The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
MaxCapacity
is a mutually exclusive option with NumberOfWorkers
and
WorkerType
.
If either NumberOfWorkers
or WorkerType
is set, then MaxCapacity
cannot be
set.
If MaxCapacity
is set then neither NumberOfWorkers
or WorkerType
can be
set.
If WorkerType
is set, then NumberOfWorkers
is required (and vice versa).
MaxCapacity
and NumberOfWorkers
must both be at least 1.
When the WorkerType
field is set to a value other than Standard
, the
MaxCapacity
field is set automatically and becomes read-only.
String workerType
The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard
worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2
executors per worker.
For the G.1X
worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1
executor per worker.
For the G.2X
worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1
executor per worker.
MaxCapacity
is a mutually exclusive option with NumberOfWorkers
and
WorkerType
.
If either NumberOfWorkers
or WorkerType
is set, then MaxCapacity
cannot be
set.
If MaxCapacity
is set then neither NumberOfWorkers
or WorkerType
can be
set.
If WorkerType
is set, then NumberOfWorkers
is required (and vice versa).
MaxCapacity
and NumberOfWorkers
must both be at least 1.
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated when this task runs.
If WorkerType
is set, then NumberOfWorkers
is required (and vice versa).
Integer timeout
The timeout of the task run for this transform in minutes. This is the maximum time that a task run for this
transform can consume resources before it is terminated and enters TIMEOUT
status. The default is
2,880 minutes (48 hours).
Integer maxRetries
The maximum number of times to retry a task for this transform after a task run fails.
Map<K,V> tags
The tags to use with this machine learning transform. You may use tags to limit access to the machine learning transform. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
TransformEncryption transformEncryption
The encryption-at-rest settings of the transform that apply to accessing user data. Machine learning transforms can access user data encrypted in Amazon S3 using KMS.
String transformId
A unique identifier that is generated for the transform.
String catalogId
The catalog ID where the table resides.
String databaseName
Specifies the name of a database in which you want to create a partition index.
String tableName
Specifies the name of a table in which you want to create a partition index.
PartitionIndex partitionIndex
Specifies a PartitionIndex
structure to create a partition index in an existing table.
String catalogId
The Amazon Web Services account ID of the catalog in which the partition is to be created.
String databaseName
The name of the metadata database in which the partition is to be created.
String tableName
The name of the metadata table in which the partition is to be created.
PartitionInput partitionInput
A PartitionInput
structure defining the partition to be created.
String registryName
Name of the registry to be created, with a maximum length of 255 characters. It may only contain letters, numbers, hyphens, underscores, dollar signs, or hash marks. No whitespace.
String description
A description of the registry. If description is not provided, there will not be any default value for this.
Map<K,V> tags
Amazon Web Services tags that contain a key value pair and may be searched by console, command line, or API.
RegistryId registryId
This is a wrapper shape to contain the registry identity fields. If this is not provided, the default registry
will be used. The ARN format for the same will be:
arn:aws:glue:us-east-2:<customer id>:registry/default-registry:random-5-letter-id
.
String schemaName
Name of the schema to be created, with a maximum length of 255 characters. It may only contain letters, numbers, hyphens, underscores, dollar signs, or hash marks. No whitespace.
String dataFormat
The data format of the schema definition. Currently AVRO
, JSON
and
PROTOBUF
are supported.
String compatibility
The compatibility mode of the schema. The possible values are:
NONE: No compatibility mode applies. You can use this choice in development scenarios or if you do not know the compatibility mode that you want to apply to schemas. Any new version added will be accepted without undergoing a compatibility check.
DISABLED: This compatibility choice prevents versioning for a particular schema. You can use this choice to prevent future versioning of a schema.
BACKWARD: This compatibility choice is recommended as it allows data receivers to read both the current and one previous schema version. This means that for instance, a new schema version cannot drop data fields or change the type of these fields, so they can't be read by readers using the previous version.
BACKWARD_ALL: This compatibility choice allows data receivers to read both the current and all previous schema versions. You can use this choice when you need to delete fields or add optional fields, and check compatibility against all previous schema versions.
FORWARD: This compatibility choice allows data receivers to read both the current and one next schema version, but not necessarily later versions. You can use this choice when you need to add fields or delete optional fields, but only check compatibility against the last schema version.
FORWARD_ALL: This compatibility choice allows data receivers to read data written by producers of any new registered schema. You can use this choice when you need to add fields or delete optional fields, and check compatibility against all previous schema versions.
FULL: This compatibility choice allows data receivers to read data written by producers using the previous or next version of the schema, but not necessarily earlier or later versions. You can use this choice when you need to add or remove optional fields, but only check compatibility against the last schema version.
FULL_ALL: This compatibility choice allows data receivers to read data written by producers using all previous schema versions. You can use this choice when you need to add or remove optional fields, and check compatibility against all previous schema versions.
String description
An optional description of the schema. If description is not provided, there will not be any automatic default value for this.
Map<K,V> tags
Amazon Web Services tags that contain a key value pair and may be searched by console, command line, or API. If specified, follows the Amazon Web Services tags-on-create pattern.
String schemaDefinition
The schema definition using the DataFormat
setting for SchemaName
.
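A minimal sketch of combining these fields into a schema-creation call with the AWS SDK for Java v2. The registry name, schema name, and Avro definition are placeholders.

    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.CreateSchemaRequest;
    import software.amazon.awssdk.services.glue.model.RegistryId;

    public class CreateSchemaExample {
        public static void main(String[] args) {
            String avroDefinition = "{\"type\":\"record\",\"name\":\"Order\",\"fields\":"
                    + "[{\"name\":\"order_id\",\"type\":\"string\"}]}";              // placeholder Avro schema
            try (GlueClient glue = GlueClient.create()) {
                CreateSchemaRequest request = CreateSchemaRequest.builder()
                        .registryId(RegistryId.builder().registryName("my-registry").build()) // omit to use the default registry
                        .schemaName("orders")                                        // placeholder schema name
                        .dataFormat("AVRO")                                          // AVRO, JSON, or PROTOBUF
                        .compatibility("BACKWARD")                                   // recommended compatibility mode
                        .schemaDefinition(avroDefinition)
                        .build();
                System.out.println(glue.createSchema(request).schemaArn());
            }
        }
    }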
String registryName
The name of the registry.
String registryArn
The Amazon Resource Name (ARN) of the registry.
String schemaName
The name of the schema.
String schemaArn
The Amazon Resource Name (ARN) of the schema.
String description
A description of the schema if specified when created.
String dataFormat
The data format of the schema definition. Currently AVRO
, JSON
and
PROTOBUF
are supported.
String compatibility
The schema compatibility mode.
Long schemaCheckpoint
The version number of the checkpoint (the last time the compatibility mode was changed).
Long latestSchemaVersion
The latest version of the schema associated with the returned schema definition.
Long nextSchemaVersion
The next version of the schema associated with the returned schema definition.
String schemaStatus
The status of the schema.
Map<K,V> tags
The tags for the schema.
String schemaVersionId
The unique identifier of the first schema version.
String schemaVersionStatus
The status of the first schema version created.
String name
The name for the new security configuration.
EncryptionConfiguration encryptionConfiguration
The encryption configuration for the new security configuration.
String id
The ID of the session request.
String description
The description of the session.
String role
The IAM Role ARN
SessionCommand command
The SessionCommand
that runs the job.
Integer timeout
The number of minutes before the session times out. The default for Spark ETL jobs is 48 hours (2,880 minutes), which is the maximum session lifetime for this job type. Consult the documentation for other job types.
Integer idleTimeout
The number of idle minutes before the session times out. The default for Spark ETL jobs is the value of Timeout. Consult the documentation for other job types.
Map<K,V> defaultArguments
A map of key-value pairs. The maximum is 75 pairs.
ConnectionsList connections
The connections to use for the session.
Double maxCapacity
The number of Glue data processing units (DPUs) that can be allocated when the job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB memory.
Integer numberOfWorkers
The number of workers of a defined WorkerType
to use for the session.
String workerType
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, or G.8X for Spark jobs. Accepts the value Z.2X for Ray notebooks.
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk
(approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads such
as data transforms, joins, and queries, and offers a scalable and cost-effective way to run most jobs.
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk
(approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads such
as data transforms, joins, and queries, and offers a scalable and cost-effective way to run most jobs.
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk
(approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available
only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio),
US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo),
Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk
(approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available
only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the
G.4X
worker type.
For the Z.2X
worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk
(approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
String securityConfiguration
The name of the SecurityConfiguration structure to be used with the session.
String glueVersion
The Glue version determines the versions of Apache Spark and Python that Glue supports. The GlueVersion must be greater than 2.0.
Map<K,V> tags
The map of key value pairs (tags) belonging to the session.
String requestOrigin
The origin of the request.
Session session
Returns the session object in the response.
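A minimal sketch of creating an interactive session from these fields with the AWS SDK for Java v2. The session ID and IAM role ARN are placeholders.

    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.CreateSessionRequest;
    import software.amazon.awssdk.services.glue.model.SessionCommand;

    public class CreateSessionExample {
        public static void main(String[] args) {
            try (GlueClient glue = GlueClient.create()) {
                CreateSessionRequest request = CreateSessionRequest.builder()
                        .id("my-spark-session")                                     // placeholder session ID
                        .role("arn:aws:iam::123456789012:role/GlueSessionRole")     // placeholder IAM role ARN
                        .command(SessionCommand.builder()
                                .name("glueetl")                                    // Spark session
                                .pythonVersion("3")
                                .build())
                        .glueVersion("4.0")                                         // must be greater than 2.0
                        .workerType("G.1X")
                        .numberOfWorkers(5)
                        .idleTimeout(60)                                            // minutes of idle time before timeout
                        .build();
                System.out.println(glue.createSession(request).session().id());
            }
        }
    }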
String catalogId
The Catalog ID of the table.
String databaseName
The name of the database in the catalog in which the table resides.
String tableName
The name of the table.
String type
The type of table optimizer. Currently, the only valid value is compaction
.
TableOptimizerConfiguration tableOptimizerConfiguration
A TableOptimizerConfiguration
object representing the configuration of a table optimizer.
String catalogId
The ID of the Data Catalog in which to create the Table
. If none is supplied, the Amazon Web
Services account ID is used by default.
String databaseName
The catalog database in which to create the new table. For Hive compatibility, this name is entirely lowercase.
TableInput tableInput
The TableInput
object that defines the metadata table to create in the catalog.
List<E> partitionIndexes
A list of partition indexes, PartitionIndex
structures, to create in the table.
String transactionId
The ID of the transaction.
OpenTableFormatInput openTableFormatInput
Specifies an OpenTableFormatInput
structure when creating an open format table.
String name
The name of the trigger.
String workflowName
The name of the workflow associated with the trigger.
String type
The type of the new trigger.
String schedule
A cron
expression used to specify the schedule (see Time-Based Schedules for
Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify:
cron(15 12 * * ? *)
.
This field is required when the trigger type is SCHEDULED.
Predicate predicate
A predicate to specify when the new trigger should fire.
This field is required when the trigger type is CONDITIONAL
.
List<E> actions
The actions initiated by this trigger when it fires.
String description
A description of the new trigger.
Boolean startOnCreation
Set to true
to start SCHEDULED
and CONDITIONAL
triggers when created. True
is not supported for ON_DEMAND
triggers.
Map<K,V> tags
The tags to use with this trigger. You may use tags to limit access to the trigger. For more information about tags in Glue, see Amazon Web Services Tags in Glue in the developer guide.
EventBatchingCondition eventBatchingCondition
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
String name
The name of the trigger.
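A minimal sketch of a scheduled trigger built from these fields with the AWS SDK for Java v2, reusing the cron example above. The trigger name and the job it starts are placeholders.

    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.Action;
    import software.amazon.awssdk.services.glue.model.CreateTriggerRequest;

    public class CreateTriggerExample {
        public static void main(String[] args) {
            try (GlueClient glue = GlueClient.create()) {
                CreateTriggerRequest request = CreateTriggerRequest.builder()
                        .name("daily-1215-utc")                         // placeholder trigger name
                        .type("SCHEDULED")                              // schedule is required for this trigger type
                        .schedule("cron(15 12 * * ? *)")                // every day at 12:15 UTC, as shown above
                        .actions(Action.builder()
                                .jobName("nightly-etl")                 // placeholder job to start
                                .build())
                        .startOnCreation(true)                          // supported for SCHEDULED and CONDITIONAL triggers
                        .build();
                glue.createTrigger(request);
            }
        }
    }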
String catalogId
The ID of the Data Catalog in which to create the function. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which to create the function.
UserDefinedFunctionInput functionInput
A FunctionInput
object that defines the function to create in the Data Catalog.
String name
The name to be assigned to the workflow. It should be unique within your account.
String description
A description of the workflow.
Map<K,V> defaultRunProperties
A collection of properties to be used as part of each execution of the workflow.
Map<K,V> tags
The tags to be used with this workflow.
Integer maxConcurrentRuns
You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.
String name
The name of the workflow which was provided as part of the request.
String classification
An identifier of the data format that the classifier matches.
String name
The name of the classifier.
String rowTag
The XML tag designating the element that contains each record in an XML document being parsed. This can't
identify a self-closing element (closed by />
). An empty row element that contains only
attributes can be parsed as long as it ends with a closing tag (for example,
<row item_a="A" item_b="B"></row>
is okay, but
<row item_a="A" item_b="B" />
is not).
String name
The name of the classifier.
Date creationTime
The time that this classifier was registered.
Date lastUpdated
The time that this classifier was last updated.
Long version
The version of this classifier.
String delimiter
A custom symbol to denote what separates each column entry in the row.
String quoteSymbol
A custom symbol to denote what combines content into a single column value. It must be different from the column delimiter.
String containsHeader
Indicates whether the CSV file contains a header.
List<E> header
A list of strings representing column names.
Boolean disableValueTrimming
Specifies not to trim values before identifying the type of column values. The default value is true
.
Boolean allowSingleColumn
Enables the processing of files that contain only one column.
Boolean customDatatypeConfigured
Enables the custom datatype to be configured.
List<E> customDatatypes
A list of custom datatypes including "BINARY", "BOOLEAN", "DATE", "DECIMAL", "DOUBLE", "FLOAT", "INT", "LONG", "SHORT", "STRING", "TIMESTAMP".
String serde
Sets the SerDe for processing CSV in the classifier, which will be applied in the Data Catalog. Valid values are
OpenCSVSerDe
, LazySimpleSerDe
, and None
. You can specify the
None
value when you want the crawler to do the detection.
String name
The name of the transform node.
List<E> inputs
The data inputs identified by their node names.
String code
The custom code that is used to perform the data transformation.
String className
The name defined for the custom code node class.
List<E> outputSchemas
Specifies the data schema for the custom code transform.
String name
A name for the custom pattern that allows it to be retrieved or deleted later. This name must be unique per Amazon Web Services account.
String regexString
A regular expression string that is used for detecting sensitive data in a custom pattern.
List<E> contextWords
A list of context words. If none of these context words are found within the vicinity of the regular expression, the data will not be detected as sensitive data.
If no context words are passed, only a regular expression is checked.
String name
The name of the database. For Hive compatibility, this is folded to lowercase when it is stored.
String description
A description of the database.
String locationUri
The location of the database (for example, an HDFS path).
Map<K,V> parameters
These key-value pairs define parameters and properties of the database.
Date createTime
The time at which the metadata database was created in the catalog.
List<E> createTableDefaultPermissions
Creates a set of default permissions on the table for principals. Used by Lake Formation. Not used in the normal course of Glue operations.
DatabaseIdentifier targetDatabase
A DatabaseIdentifier
structure that describes a target database for resource linking.
String catalogId
The ID of the Data Catalog in which the database resides.
FederatedDatabase federatedDatabase
A FederatedDatabase
structure that references an entity outside the Glue Data Catalog.
String name
The name of the database. For Hive compatibility, this is folded to lowercase when it is stored.
String description
A description of the database.
String locationUri
The location of the database (for example, an HDFS path).
Map<K,V> parameters
These key-value pairs define parameters and properties of the database.
List<E> createTableDefaultPermissions
Creates a set of default permissions on the table for principals. Used by Lake Formation. Not used in the normal course of Glue operations.
DatabaseIdentifier targetDatabase
A DatabaseIdentifier
structure that describes a target database for resource linking.
FederatedDatabase federatedDatabase
A FederatedDatabase
structure that references an entity outside the Glue Data Catalog.
EncryptionAtRest encryptionAtRest
Specifies the encryption-at-rest configuration for the Data Catalog.
ConnectionPasswordEncryption connectionPasswordEncryption
When connection password protection is enabled, the Data Catalog uses a customer-provided key to encrypt the
password as part of CreateConnection
or UpdateConnection
and store it in the
ENCRYPTED_PASSWORD
field in the connection properties. You can enable catalog encryption or only
password encryption.
String dataLakePrincipalIdentifier
An identifier for the Lake Formation principal.
String name
The name of the data quality analyzer.
String description
A description of the data quality analyzer.
String evaluationMessage
An evaluation message.
Map<K,V> evaluatedMetrics
A map of metrics associated with the evaluation of the analyzer.
Double actualValue
The actual value of the data quality metric.
Double expectedValue
The expected value of the data quality metric according to the analysis of historical data.
Double lowerLimit
The lower limit of the data quality metric value according to the analysis of historical data.
Double upperLimit
The upper limit of the data quality metric value according to the analysis of historical data.
String description
A description of the data quality observation.
MetricBasedObservation metricBasedObservation
An object of type MetricBasedObservation
representing the observation that is based on evaluated
data quality metrics.
String resultId
A unique result ID for the data quality result.
Double score
An aggregate data quality score. Represents the ratio of rules that passed to the total number of rules.
DataSource dataSource
The table associated with the data quality result, if any.
String rulesetName
The name of the ruleset associated with the data quality result.
String evaluationContext
In the context of a job in Glue Studio, each node in the canvas is typically assigned some sort of name, and data
quality nodes will have names. In the case of multiple nodes, the evaluationContext
can
differentiate the nodes.
Date startedOn
The date and time when this data quality run started.
Date completedOn
The date and time when this data quality run completed.
String jobName
The job name associated with the data quality result, if any.
String jobRunId
The job run ID associated with the data quality result, if any.
String rulesetEvaluationRunId
The unique run ID for the ruleset evaluation for this data quality result.
List<E> ruleResults
A list of DataQualityRuleResult
objects representing the results for each rule.
List<E> analyzerResults
A list of DataQualityAnalyzerResult
objects representing the results for each analyzer.
List<E> observations
A list of DataQualityObservation
objects representing the observations generated after evaluating
the rules and analyzers.
String resultId
The unique result ID for this data quality result.
DataSource dataSource
The table name associated with the data quality result.
String jobName
The job name associated with the data quality result.
String jobRunId
The job run ID associated with the data quality result.
Date startedOn
The time that the run started for this data quality result.
DataSource dataSource
Filter results by the specified data source. For example, retrieving all results for a Glue table.
String jobName
Filter results by the specified job name.
String jobRunId
Filter results by the specified job run ID.
Date startedAfter
Filter results by runs that started after this time.
Date startedBefore
Filter results by runs that started before this time.
String runId
The unique run identifier associated with this run.
String status
The status for this run.
Date startedOn
The date and time when this run started.
DataSource dataSource
The data source (Glue table) associated with the recommendation run.
DataSource dataSource
Filter based on a specified data source (Glue table).
Date startedBefore
Filter based on time for results started before provided time.
Date startedAfter
Filter based on time for results started after provided time.
String name
The name of the data quality rule.
String description
A description of the data quality rule.
String evaluationMessage
An evaluation message.
String result
A pass or fail status for the rule.
Map<K,V> evaluatedMetrics
A map of metrics associated with the evaluation of the rule.
String runId
The unique run identifier associated with this run.
String status
The status for this run.
Date startedOn
The date and time when the run started.
DataSource dataSource
The data source (a Glue table) associated with the run.
DataSource dataSource
Filter based on a data source (a Glue table) associated with the run.
Date startedBefore
Filter results by runs that started before this time.
Date startedAfter
Filter results by runs that started after this time.
String name
The name of the ruleset filter criteria.
String description
The description of the ruleset filter criteria.
Date createdBefore
Filter on rulesets created before this date.
Date createdAfter
Filter on rulesets created after this date.
Date lastModifiedBefore
Filter on rulesets last modified before this date.
Date lastModifiedAfter
Filter on rulesets last modified after this date.
DataQualityTargetTable targetTable
The name and database name of the target table.
String name
The name of the data quality ruleset.
String description
A description of the data quality ruleset.
Date createdOn
The date and time the data quality ruleset was created.
Date lastModifiedOn
The date and time the data quality ruleset was last modified.
DataQualityTargetTable targetTable
An object representing a Glue table.
String recommendationRunId
When a ruleset was created from a recommendation run, this run ID is generated to link the two together.
Integer ruleCount
The number of rules in the ruleset.
GlueTable glueTable
A Glue table.
DecimalNumber minimumValue
The lowest value in the column.
DecimalNumber maximumValue
The highest value in the column.
Long numberOfNulls
The number of null values in the column.
Long numberOfDistinctValues
The number of distinct values in a column.
ByteBuffer unscaledValue
The unscaled numeric value.
Integer scale
The scale that determines where the decimal point falls in the unscaled value.
String name
The name of the blueprint to delete.
String name
Returns the name of the blueprint that was deleted.
String name
Name of the classifier to remove.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
List<E> partitionValues
A list of partition values identifying the partition.
String columnName
Name of the column.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
String columnName
The name of the column.
String name
The name of the crawler to remove.
String name
The name of the custom pattern that you want to delete.
String name
The name of the custom pattern you deleted.
String name
A name for the data quality ruleset.
String endpointName
The name of the DevEndpoint
.
String jobName
The name of the job definition to delete.
String jobName
The name of the job definition that was deleted.
String transformId
The unique identifier of the transform to delete.
String transformId
The unique identifier of the transform that was deleted.
String catalogId
The catalog ID where the table resides.
String databaseName
Specifies the name of a database from which you want to delete a partition index.
String tableName
Specifies the name of a table from which you want to delete a partition index.
String indexName
The name of the partition index to be deleted.
String catalogId
The ID of the Data Catalog where the partition to be deleted resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which the table in question resides.
String tableName
The name of the table that contains the partition to be deleted.
List<E> partitionValues
The values that define the partition.
RegistryId registryId
This is a wrapper structure that may contain the registry name and Amazon Resource Name (ARN).
SchemaId schemaId
This is a wrapper structure that may contain the schema name and Amazon Resource Name (ARN).
String name
The name of the security configuration to delete.
String id
Returns the ID of the deleted session.
String catalogId
The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which the table resides. For Hive compatibility, this name is entirely lowercase.
String name
The name of the table to be deleted. For Hive compatibility, this name is entirely lowercase.
String transactionId
The transaction ID at which to delete the table contents.
String catalogId
The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.
String tableName
The name of the table. For Hive compatibility, this name is entirely lowercase.
String versionId
The ID of the table version to be deleted. A VersionID
is a string representation of an integer.
Each version is incremented by 1.
String name
The name of the trigger to delete.
String name
The name of the trigger that was deleted.
String catalogId
The ID of the Data Catalog where the function to be deleted is located. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the function is located.
String functionName
The name of the function definition to be deleted.
String name
Name of the workflow to be deleted.
String name
Name of the workflow specified in input.
List<E> deltaTables
A list of the Amazon S3 paths to the Delta tables.
String connectionName
The name of the connection to use to connect to the Delta table target.
Boolean writeManifest
Specifies whether to write the manifest files to the Delta table path.
Boolean createNativeDeltaTable
Specifies whether the crawler will create native tables, to allow integration with query engines that support querying of the Delta transaction log directly.
String endpointName
The name of the DevEndpoint
.
String roleArn
The Amazon Resource Name (ARN) of the IAM role used in this DevEndpoint
.
List<E> securityGroupIds
A list of security group identifiers used in this DevEndpoint
.
String subnetId
The subnet ID for this DevEndpoint
.
String yarnEndpointAddress
The YARN endpoint address used by this DevEndpoint
.
String privateAddress
A private IP address to access the DevEndpoint
within a VPC if the DevEndpoint
is
created within one. The PrivateAddress
field is present only when you create the
DevEndpoint
within your VPC.
Integer zeppelinRemoteSparkInterpreterPort
The Apache Zeppelin port for the remote Apache Spark interpreter.
String publicAddress
The public IP address used by this DevEndpoint
. The PublicAddress
field is present only
when you create a non-virtual private cloud (VPC) DevEndpoint
.
String status
The current status of this DevEndpoint
.
String workerType
The type of predefined worker that is allocated to the development endpoint. Accepts a value of Standard, G.1X, or G.2X.
For the Standard
worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2
executors per worker.
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and
provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and
provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.
Known issue: when a development endpoint is created with the G.2X
WorkerType
configuration, the Spark drivers for the development endpoint will run on 4 vCPU, 16 GB of memory, and a 64 GB
disk.
String glueVersion
Glue version determines the versions of Apache Spark and Python that Glue supports. The Python version indicates the version supported for running your ETL scripts on development endpoints.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Development endpoints that are created without specifying a Glue version default to Glue 0.9.
You can specify a version of Python support for development endpoints by using the Arguments
parameter in the CreateDevEndpoint
or UpdateDevEndpoint
APIs. If no arguments are
provided, the version defaults to Python 2.
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated to the development endpoint.
The maximum number of workers you can define is 299 for G.1X and 149 for G.2X.
Integer numberOfNodes
The number of Glue Data Processing Units (DPUs) allocated to this DevEndpoint
.
String availabilityZone
The Amazon Web Services Availability Zone where this DevEndpoint
is located.
String vpcId
The ID of the virtual private cloud (VPC) used by this DevEndpoint
.
String extraPythonLibsS3Path
The paths to one or more Python libraries in an Amazon S3 bucket that should be loaded in your
DevEndpoint
. Multiple values must be complete paths separated by a comma.
You can only use pure Python libraries with a DevEndpoint
. Libraries that rely on C extensions, such
as the pandas Python data analysis library, are not currently supported.
String extraJarsS3Path
The path to one or more Java .jar
files in an S3 bucket that should be loaded in your
DevEndpoint
.
You can only use pure Java/Scala libraries with a DevEndpoint
.
String failureReason
The reason for a current failure in this DevEndpoint
.
String lastUpdateStatus
The status of the last update.
Date createdTimestamp
The point in time at which this DevEndpoint was created.
Date lastModifiedTimestamp
The point in time at which this DevEndpoint
was last modified.
String publicKey
The public key to be used by this DevEndpoint
for authentication. This attribute is provided for
backward compatibility because the recommended attribute to use is public keys.
List<E> publicKeys
A list of public keys to be used by the DevEndpoints
for authentication. Using this attribute is
preferred over a single public key because the public keys allow you to have a different private key per client.
If you previously created an endpoint with a public key, you must remove that key to be able to set a list of
public keys. Call the UpdateDevEndpoint
API operation with the public key content in the
deletePublicKeys
attribute, and the list of new keys in the addPublicKeys
attribute.
String securityConfiguration
The name of the SecurityConfiguration
structure to be used with this DevEndpoint
.
Map<K,V> arguments
A map of arguments used to configure the DevEndpoint
.
Valid arguments are:
"--enable-glue-datacatalog": ""
You can specify a version of Python support for development endpoints by using the Arguments
parameter in the CreateDevEndpoint
or UpdateDevEndpoint
APIs. If no arguments are
provided, the version defaults to Python 2.
String extraPythonLibsS3Path
The paths to one or more Python libraries in an Amazon Simple Storage Service (Amazon S3) bucket that should be
loaded in your DevEndpoint
. Multiple values must be complete paths separated by a comma.
You can only use pure Python libraries with a DevEndpoint
. Libraries that rely on C extensions, such
as the pandas Python data analysis library, are not currently supported.
String extraJarsS3Path
The path to one or more Java .jar
files in an S3 bucket that should be loaded in your
DevEndpoint
.
You can only use pure Java/Scala libraries with a DevEndpoint
.
String name
The name of the JDBC source connection.
String database
The database of the JDBC source connection.
String table
The table of the JDBC source connection.
String connectionName
The connection name of the JDBC source.
String connectionType
The connection type of the JDBC source.
String redshiftTmpDir
The temp directory of the JDBC Redshift source.
String name
The name of the data store.
KafkaStreamingSourceOptions streamingOptions
Specifies the streaming options.
Integer windowSize
The amount of time to spend processing each micro batch.
Boolean detectSchema
Whether to automatically determine the schema from the incoming data.
StreamingDataPreviewOptions dataPreviewOptions
Specifies options related to data preview for viewing a sample of your data.
String name
The name of the data source.
Integer windowSize
The amount of time to spend processing each micro batch.
Boolean detectSchema
Whether to automatically determine the schema from the incoming data.
KinesisStreamingSourceOptions streamingOptions
Additional options for the Kinesis streaming data source.
StreamingDataPreviewOptions dataPreviewOptions
Additional options for data preview.
Boolean enableUpdateCatalog
Whether to use the specified update behavior when the crawler finds a changed schema.
String updateBehavior
The update behavior when the crawler finds a changed schema.
String table
Specifies the table in the database that the schema change policy applies to.
String database
Specifies the database that the schema change policy applies to.
String evaluationContext
The context of the evaluation.
String resultsS3Prefix
The Amazon S3 prefix prepended to the results.
Boolean cloudWatchMetricsEnabled
Enable metrics for your data quality results.
Boolean resultsPublishingEnabled
Enable publishing for your data quality results.
String stopJobOnFailureTiming
When to stop the job if your data quality evaluation fails. Options are Immediate or AfterDataLoad.
String name
The name of the transform node.
List<E> inputs
The data inputs identified by their node names.
NullCheckBoxList nullCheckBoxList
A structure that represents whether certain values are recognized as null values for removal.
List<E> nullTextList
A structure that specifies a list of NullValueField structures that represent a custom null value, such as zero or another value, being used as a null placeholder unique to the dataset.
The DropNullFields
transform removes custom null values only if both the value of the null
placeholder and the datatype match the data.
String name
Specifies the name of the dynamic transform.
String transformName
Specifies the name of the dynamic transform as it appears in the Glue Studio visual editor.
List<E> inputs
Specifies the inputs for the dynamic transform that are required.
List<E> parameters
Specifies the parameters of the dynamic transform.
String functionName
Specifies the name of the function of the dynamic transform.
String path
Specifies the path of the dynamic transform source and config files.
String version
This field is not used and will be deprecated in future release.
List<E> outputSchemas
Specifies the data schema for the dynamic transform.
String path
The name of the DynamoDB table to crawl.
Boolean scanAll
Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.
A value of true
means to scan all records, while a value of false
means to sample the
records. If no value is specified, the value defaults to true
.
Double scanRate
The percentage of the configured read capacity units to use by the Glue crawler. Read capacity units is a term defined by DynamoDB, and is a numeric value that acts as a rate limiter for the number of reads that can be performed on that table per second.
The valid values are null or a value between 0.1 and 1.5. A null value is used when the user does not provide a value, and defaults to 0.5 of the configured Read Capacity Unit (for provisioned tables), or 0.25 of the maximum configured Read Capacity Unit (for tables using on-demand mode).
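A short sketch of how these DynamoDB target fields might be expressed with the AWS SDK for Java v2. The table name and rates are placeholders, and the exact model class and builder method names (DynamoDBTarget, dynamoDBTargets) are assumptions that should be checked against the SDK version in use.

    import software.amazon.awssdk.services.glue.model.CrawlerTargets;
    import software.amazon.awssdk.services.glue.model.DynamoDBTarget;

    public class DynamoDbTargetExample {
        public static void main(String[] args) {
            // Sample half of the provisioned read capacity instead of scanning every record.
            DynamoDBTarget target = DynamoDBTarget.builder()
                    .path("orders")           // name of the DynamoDB table to crawl (placeholder)
                    .scanAll(false)           // sample rows rather than scanning all records
                    .scanRate(0.5)            // fraction of configured read capacity units to use
                    .build();
            CrawlerTargets targets = CrawlerTargets.builder()
                    .dynamoDBTargets(target)  // attach to a crawler definition via CreateCrawler/UpdateCrawler
                    .build();
            System.out.println(targets);
        }
    }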
List<E> s3Encryption
The encryption configuration for Amazon Simple Storage Service (Amazon S3) data.
CloudWatchEncryption cloudWatchEncryption
The encryption configuration for Amazon CloudWatch.
JobBookmarksEncryption jobBookmarksEncryption
The encryption configuration for job bookmarks.
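A minimal sketch of assembling this encryption configuration into a security configuration with the AWS SDK for Java v2. The configuration name and KMS key ARN are placeholders.

    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.CloudWatchEncryption;
    import software.amazon.awssdk.services.glue.model.CreateSecurityConfigurationRequest;
    import software.amazon.awssdk.services.glue.model.EncryptionConfiguration;
    import software.amazon.awssdk.services.glue.model.JobBookmarksEncryption;
    import software.amazon.awssdk.services.glue.model.S3Encryption;

    public class CreateSecurityConfigurationExample {
        public static void main(String[] args) {
            String kmsKeyArn = "arn:aws:kms:us-east-1:123456789012:key/placeholder";  // placeholder key ARN
            EncryptionConfiguration encryption = EncryptionConfiguration.builder()
                    .s3Encryption(S3Encryption.builder()
                            .s3EncryptionMode("SSE-KMS")
                            .kmsKeyArn(kmsKeyArn)
                            .build())
                    .cloudWatchEncryption(CloudWatchEncryption.builder()
                            .cloudWatchEncryptionMode("SSE-KMS")
                            .kmsKeyArn(kmsKeyArn)
                            .build())
                    .jobBookmarksEncryption(JobBookmarksEncryption.builder()
                            .jobBookmarksEncryptionMode("CSE-KMS")
                            .kmsKeyArn(kmsKeyArn)
                            .build())
                    .build();
            try (GlueClient glue = GlueClient.create()) {
                glue.createSecurityConfiguration(CreateSecurityConfigurationRequest.builder()
                        .name("kms-everywhere")                       // placeholder security configuration name
                        .encryptionConfiguration(encryption)
                        .build());
            }
        }
    }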
Boolean fromFederationSource
Indicates whether or not the exception relates to a federated source.
String name
The name of the data quality evaluation.
List<E> inputs
The inputs of your data quality evaluation.
String ruleset
The ruleset for your data quality evaluation.
String output
The output of your data quality evaluation.
DQResultsPublishingOptions publishingOptions
Options to configure how your results are published.
DQStopJobOnFailureOptions stopJobOnFailureOptions
Options to configure how your job will stop if your data quality evaluation fails.
String name
The name of the data quality evaluation.
List<E> inputs
The inputs of your data quality evaluation. The first input in this list is the primary data source.
Map<K,V> additionalDataSources
The aliases of all data sources except primary.
String ruleset
The ruleset for your data quality evaluation.
DQResultsPublishingOptions publishingOptions
Options to configure how your results are published.
Map<K,V> additionalOptions
Options to configure runtime behavior of the transform.
DQStopJobOnFailureOptions stopJobOnFailureOptions
Options to configure how your job will stop if your data quality evaluation fails.
String transformType
The type of machine learning transform.
FindMatchesMetrics findMatchesMetrics
The evaluation metrics for the find matches algorithm.
Integer maxConcurrentRuns
The maximum number of concurrent runs allowed for the job. The default is 1. An error is returned when this threshold is reached. The maximum value you can specify is controlled by a service limit.
String outputS3Path
The Amazon Simple Storage Service (Amazon S3) path where you will export the labels.
String associatedGlueResource
The associated Glue resource already exists.
String federationSourceErrorCode
The error code of the problem.
String name
The name of the transform node.
List<E> inputs
The data inputs identified by their node names.
String imputedPath
A JSON path to a variable in the data structure for the dataset that is imputed.
String filledPath
A JSON path to a variable in the data structure for the dataset that is filled.
Double areaUnderPRCurve
The area under the precision/recall curve (AUPRC) is a single number measuring the overall quality of the transform, that is independent of the choice made for precision vs. recall. Higher values indicate that you have a more attractive precision vs. recall tradeoff.
For more information, see Precision and recall in Wikipedia.
Double precision
The precision metric indicates how often your transform is correct when it predicts a match. Specifically, it measures how well the transform finds true positives from the total true positives possible.
For more information, see Precision and recall in Wikipedia.
Double recall
The recall metric indicates, for an actual match, how often your transform predicts the match. Specifically, it measures how well the transform finds true positives from the total records in the source data.
For more information, see Precision and recall in Wikipedia.
Double f1
The maximum F1 metric indicates the transform's accuracy between 0 and 1, where 1 is the best accuracy.
For more information, see F1 score in Wikipedia.
ConfusionMatrix confusionMatrix
The confusion matrix shows you what your transform is predicting accurately and what types of errors it is making.
For more information, see Confusion matrix in Wikipedia.
List<E> columnImportances
A list of ColumnImportance
structures containing column importance metrics, sorted in order of
descending importance.
String primaryKeyColumnName
The name of a column that uniquely identifies rows in the source table. Used to help identify matching records.
Double precisionRecallTradeoff
The value selected when tuning your transform for a balance between precision and recall. A value of 0.5 means no preference; a value of 1.0 means a bias purely for precision, and a value of 0.0 means a bias for recall. Because this is a tradeoff, choosing values close to 1.0 means very low recall, and choosing values close to 0.0 results in very low precision.
The precision metric indicates how often your model is correct when it predicts a match.
The recall metric indicates, for an actual match, how often your model predicts the match.
Double accuracyCostTradeoff
The value that is selected when tuning your transform for a balance between accuracy and cost. A value of 0.5
means that the system balances accuracy and cost concerns. A value of 1.0 means a bias purely for accuracy, which
typically results in a higher cost, sometimes substantially higher. A value of 0.0 means a bias purely for cost,
which results in a less accurate FindMatches
transform, sometimes with unacceptable accuracy.
Accuracy measures how well the transform finds true positives and true negatives. Increasing accuracy requires more machine resources and cost. But it also results in increased recall.
Cost measures how many compute resources, and thus money, are consumed to run the transform.
Boolean enforceProvidedLabels
The value to switch on or off to force the output to match the provided labels from users. If the value is
True
, the find matches
transform forces the output to match the provided labels. The
results override the normal conflation results. If the value is False
, the find matches
transform does not ensure all the labels provided are respected, and the results rely on the trained model.
Note that setting this value to true may increase the conflation execution time.
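A short sketch of setting these tuning fields on a FindMatchesParameters object with the AWS SDK for Java v2. The key column and tradeoff values are illustrative, not recommendations from this reference.

    import software.amazon.awssdk.services.glue.model.FindMatchesParameters;
    import software.amazon.awssdk.services.glue.model.TransformParameters;

    public class FindMatchesTuningExample {
        public static void main(String[] args) {
            FindMatchesParameters findMatches = FindMatchesParameters.builder()
                    .primaryKeyColumnName("customer_id")   // column that uniquely identifies source rows (placeholder)
                    .precisionRecallTradeoff(0.9)          // lean toward precision; values near 0.0 favor recall
                    .accuracyCostTradeoff(0.5)             // balance accuracy against compute cost
                    .enforceProvidedLabels(true)           // force output to match user-provided labels
                    .build();
            TransformParameters parameters = TransformParameters.builder()
                    .transformType("FIND_MATCHES")
                    .findMatchesParameters(findMatches)    // pass to CreateMLTransform or UpdateMLTransform
                    .build();
            System.out.println(parameters);
        }
    }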
Blueprint blueprint
Returns a Blueprint
object.
BlueprintRun blueprintRun
Returns a BlueprintRun
object.
String catalogId
The ID of the catalog to migrate. Currently, this should be the Amazon Web Services account ID.
CatalogImportStatus importStatus
The status of the specified catalog migration.
String name
Name of the classifier to retrieve.
Classifier classifier
The requested classifier.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
List<E> partitionValues
A list of partition values identifying the partition.
List<E> columnNames
A list of the column names.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
List<E> columnNames
A list of the column names.
String columnStatisticsTaskRunId
The identifier for the particular column statistics task run.
ColumnStatisticsTaskRun columnStatisticsTaskRun
A ColumnStatisticsTaskRun
object representing the details of the column stats run.
String catalogId
The ID of the Data Catalog in which the connection resides. If none is provided, the Amazon Web Services account ID is used by default.
String name
The name of the connection definition to retrieve.
Boolean hidePassword
Allows you to retrieve the connection metadata without returning the password. For instance, the Glue console uses this flag to retrieve the connection, and does not display the password. Set this parameter when the caller might not have permission to use the KMS key to decrypt the password, but it does have permission to access the rest of the connection properties.
Connection connection
The requested connection definition.
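A minimal sketch of retrieving a connection without its password, as described above, using the AWS SDK for Java v2. The connection name is a placeholder.

    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.GetConnectionRequest;

    public class GetConnectionExample {
        public static void main(String[] args) {
            try (GlueClient glue = GlueClient.create()) {
                GetConnectionRequest request = GetConnectionRequest.builder()
                        .name("redshift-prod")       // placeholder connection name
                        .hidePassword(true)          // omit the decrypted password from the response
                        .build();
                System.out.println(glue.getConnection(request).connection().name());
            }
        }
    }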
String catalogId
The ID of the Data Catalog in which the connections reside. If none is provided, the Amazon Web Services account ID is used by default.
GetConnectionsFilter filter
A filter that controls which connections are returned.
Boolean hidePassword
Allows you to retrieve the connection metadata without returning the password. For instance, the Glue console uses this flag to retrieve the connection, and does not display the password. Set this parameter when the caller might not have permission to use the KMS key to decrypt the password, but it does have permission to access the rest of the connection properties.
String nextToken
A continuation token, if this is a continuation call.
Integer maxResults
The maximum number of connections to return in one response.
String name
The name of the crawler to retrieve metadata for.
Crawler crawler
The metadata for the specified crawler.
String name
The name of the custom pattern that you want to retrieve.
String name
The name of the custom pattern that you retrieved.
String regexString
A regular expression string that is used for detecting sensitive data in a custom pattern.
List<E> contextWords
A list of context words if specified when you created the custom pattern. If none of these context words are found within the vicinity of the regular expression the data will not be detected as sensitive data.
Database database
The definition of the specified database in the Data Catalog.
String catalogId
The ID of the Data Catalog from which to retrieve Databases
. If none is provided, the Amazon Web
Services account ID is used by default.
String nextToken
A continuation token, if this is a continuation call.
Integer maxResults
The maximum number of databases to return in one response.
String resourceShareType
Allows you to specify that you want to list the databases shared with your account. The allowable values are
FEDERATED
, FOREIGN
or ALL
.
If set to FEDERATED
, will list the federated databases (referencing an external entity) shared with
your account.
If set to FOREIGN
, will list the databases shared with your account.
If set to ALL
, will list the databases shared with your account, as well as the databases in your
local account.
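A minimal sketch of listing shared and local databases with the AWS SDK for Java v2, using the ALL value described above; pagination handling is omitted.

    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.GetDatabasesRequest;

    public class GetDatabasesExample {
        public static void main(String[] args) {
            try (GlueClient glue = GlueClient.create()) {
                GetDatabasesRequest request = GetDatabasesRequest.builder()
                        .resourceShareType("ALL")    // shared databases plus those in the local account
                        .maxResults(50)
                        .build();
                glue.getDatabases(request).databaseList()
                        .forEach(db -> System.out.println(db.name()));
            }
        }
    }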
String catalogId
The ID of the Data Catalog to retrieve the security configuration for. If none is provided, the Amazon Web Services account ID is used by default.
DataCatalogEncryptionSettings dataCatalogEncryptionSettings
The requested security configuration.
String pythonScript
The Python script to transform.
String resultId
A unique result ID for the data quality result.
String resultId
A unique result ID for the data quality result.
Double score
An aggregate data quality score. Represents the ratio of rules that passed to the total number of rules.
DataSource dataSource
The table associated with the data quality result, if any.
String rulesetName
The name of the ruleset associated with the data quality result.
String evaluationContext
In the context of a job in Glue Studio, each node in the canvas is typically assigned a name, and data quality nodes have names as well. In the case of multiple nodes, the evaluationContext can differentiate the nodes.
Date startedOn
The date and time when the run for this data quality result started.
Date completedOn
The date and time when the run for this data quality result was completed.
String jobName
The job name associated with the data quality result, if any.
String jobRunId
The job run ID associated with the data quality result, if any.
String rulesetEvaluationRunId
The unique run ID associated with the ruleset evaluation.
List<E> ruleResults
A list of DataQualityRuleResult
objects representing the results for each rule.
List<E> analyzerResults
A list of DataQualityAnalyzerResult
objects representing the results for each analyzer.
List<E> observations
A list of DataQualityObservation
objects representing the observations generated after evaluating
the rules and analyzers.
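To show how these result fields are typically consumed, here is a hedged boto3 sketch; the result ID is a placeholder:
    import boto3

    glue = boto3.client("glue")

    # "dqresult-..." is a hypothetical result ID returned by a ruleset evaluation run.
    result = glue.get_data_quality_result(ResultId="dqresult-1234567890abcdef")

    print("Score:", result["Score"])
    for rule in result.get("RuleResults", []):
        print(rule["Name"], rule["Result"])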
String runId
The unique run identifier associated with this run.
String runId
The unique run identifier associated with this run.
DataSource dataSource
The data source (a Glue table) associated with this run.
String role
An IAM role supplied to encrypt the results of the run.
Integer numberOfWorkers
The number of G.1X
workers to be used in the run. The default is 5.
Integer timeout
The timeout for a run in minutes. This is the maximum time that a run can consume resources before it is
terminated and enters TIMEOUT
status. The default is 2,880 minutes (48 hours).
String status
The status for this run.
String errorString
The error strings that are associated with the run.
Date startedOn
The date and time when this run started.
Date lastModifiedOn
A timestamp. The last point in time when this data quality rule recommendation run was modified.
Date completedOn
The date and time when this run was completed.
Integer executionTime
The amount of time (in seconds) that the run consumed resources.
String recommendedRuleset
When a start rule recommendation run completes, it creates a recommended ruleset (a set of rules). This member has those rules in Data Quality Definition Language (DQDL) format.
String createdRulesetName
The name of the ruleset that was created by the run.
String runId
The unique run identifier associated with this run.
String runId
The unique run identifier associated with this run.
DataSource dataSource
The data source (a Glue table) associated with this evaluation run.
String role
An IAM role supplied to encrypt the results of the run.
Integer numberOfWorkers
The number of G.1X
workers to be used in the run. The default is 5.
Integer timeout
The timeout for a run in minutes. This is the maximum time that a run can consume resources before it is
terminated and enters TIMEOUT
status. The default is 2,880 minutes (48 hours).
DataQualityEvaluationRunAdditionalRunOptions additionalRunOptions
Additional run options you can specify for an evaluation run.
String status
The status for this run.
String errorString
The error strings that are associated with the run.
Date startedOn
The date and time when this run started.
Date lastModifiedOn
A timestamp. The last point in time when this data quality ruleset evaluation run was modified.
Date completedOn
The date and time when this run was completed.
Integer executionTime
The amount of time (in seconds) that the run consumed resources.
List<E> rulesetNames
A list of ruleset names for the run.
List<E> resultIds
A list of result IDs for the data quality results for the run.
Map<K,V> additionalDataSources
A map of reference strings to additional data sources you can specify for an evaluation run.
String name
The name of the ruleset.
String name
The name of the ruleset.
String description
A description of the ruleset.
String ruleset
A Data Quality Definition Language (DQDL) ruleset. For more information, see the Glue developer guide.
DataQualityTargetTable targetTable
The name and database name of the target table.
Date createdOn
A timestamp. The time and date that this data quality ruleset was created.
Date lastModifiedOn
A timestamp. The last point in time when this data quality ruleset was modified.
String recommendationRunId
When a ruleset was created from a recommendation run, this run ID is generated to link the two together.
String endpointName
Name of the DevEndpoint
to retrieve information for.
DevEndpoint devEndpoint
A DevEndpoint
definition.
JobBookmarkEntry jobBookmarkEntry
A structure that defines a point that a job can resume processing.
String jobName
The name of the job definition to retrieve.
Job job
The requested job definition.
JobRun jobRun
The requested job-run metadata.
CatalogEntry source
Specifies the source table.
List<E> sinks
A list of target tables.
Location location
Parameters for the mapping.
String transformId
The unique identifier of the task run.
String taskRunId
The unique run identifier associated with this run.
String status
The status for this task run.
String logGroupName
The names of the log groups that are associated with the task run.
TaskRunProperties properties
The list of properties that are associated with the task run.
String errorString
The error strings that are associated with the task run.
Date startedOn
The date and time when this task run started.
Date lastModifiedOn
The date and time when this task run was last modified.
Date completedOn
The date and time when this task run was completed.
Integer executionTime
The amount of time (in seconds) that the task run consumed resources.
String transformId
The unique identifier of the machine learning transform.
String nextToken
A token for pagination of the results. The default is empty.
Integer maxResults
The maximum number of results to return.
TaskRunFilterCriteria filter
The filter criteria, in the TaskRunFilterCriteria
structure, for the task run.
TaskRunSortCriteria sort
The sorting criteria, in the TaskRunSortCriteria
structure, for the task run.
String transformId
The unique identifier of the transform, generated at the time that the transform was created.
String transformId
The unique identifier of the transform, generated at the time that the transform was created.
String name
The unique name given to the transform when it was created.
String description
A description of the transform.
String status
The last known status of the transform (to indicate whether it can be used or not). One of "NOT_READY", "READY", or "DELETING".
Date createdOn
The date and time when the transform was created.
Date lastModifiedOn
The date and time when the transform was last modified.
List<E> inputRecordTables
A list of Glue table definitions used by the transform.
TransformParameters parameters
The configuration parameters that are specific to the algorithm used.
EvaluationMetrics evaluationMetrics
The latest evaluation metrics.
Integer labelCount
The number of labels available for this transform.
List<E> schema
The Map<Column, Type>
object that represents the schema that this transform accepts. Has an
upper bound of 100 columns.
String role
The name or Amazon Resource Name (ARN) of the IAM role with the required permissions.
String glueVersion
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
Double maxCapacity
The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
When the WorkerType
field is set to a value other than Standard
, the
MaxCapacity
field is set automatically and becomes read-only.
String workerType
The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard
worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2
executors per worker.
For the G.1X
worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1
executor per worker.
For the G.2X
worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1
executor per worker.
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated when this task runs.
Integer timeout
The timeout for a task run for this transform in minutes. This is the maximum time that a task run for this
transform can consume resources before it is terminated and enters TIMEOUT
status. The default is
2,880 minutes (48 hours).
Integer maxRetries
The maximum number of times to retry a task for this transform after a task run fails.
TransformEncryption transformEncryption
The encryption-at-rest settings of the transform that apply to accessing user data. Machine learning transforms can access user data encrypted in Amazon S3 using KMS.
String nextToken
A paginated token to offset the results.
Integer maxResults
The maximum number of results to return.
TransformFilterCriteria filter
The filter transformation criteria.
TransformSortCriteria sort
The sorting criteria.
String catalogId
The catalog ID where the table resides.
String databaseName
Specifies the name of a database from which you want to retrieve partition indexes.
String tableName
Specifies the name of a table for which you want to retrieve the partition indexes.
String nextToken
A continuation token, included if this is a continuation call.
String catalogId
The ID of the Data Catalog where the partition in question resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partition resides.
String tableName
The name of the partition's table.
List<E> partitionValues
The values that define the partition.
Partition partition
The requested information, in the form of a Partition
object.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
String expression
An expression that filters the partitions to be returned.
The expression uses SQL syntax similar to the SQL WHERE
filter clause. The SQL statement parser JSQLParser parses the expression.
Operators: The following are the operators that you can use in the Expression
API call:
= : Checks whether the values of the two operands are equal; if yes, then the condition becomes true.
Example: Assume 'variable a' holds 10 and 'variable b' holds 20.
(a = b) is not true.
< > : Checks whether the values of two operands are equal; if the values are not equal, then the condition becomes true.
Example: (a < > b) is true.
> : Checks whether the value of the left operand is greater than the value of the right operand; if yes, then the condition becomes true.
Example: (a > b) is not true.
< : Checks whether the value of the left operand is less than the value of the right operand; if yes, then the condition becomes true.
Example: (a < b) is true.
>= : Checks whether the value of the left operand is greater than or equal to the value of the right operand; if yes, then the condition becomes true.
Example: (a >= b) is not true.
<= : Checks whether the value of the left operand is less than or equal to the value of the right operand; if yes, then the condition becomes true.
Example: (a <= b) is true.
Logical operators.
Supported Partition Key Types: The following are the supported partition keys.
string
date
timestamp
int
bigint
long
tinyint
smallint
decimal
If a type that is not valid is encountered, an exception is thrown.
The following list shows the valid operators on each type. When you define a crawler, the
partitionKey
type is created as a STRING
, to be compatible with the catalog partitions.
Sample API Call:
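As an illustrative boto3 sketch of such a call; the database, table, and partition keys are assumptions, not values from this reference:
    import boto3

    glue = boto3.client("glue")

    paginator = glue.get_paginator("get_partitions")
    # Filter partitions using the SQL-like expression syntax described above.
    pages = paginator.paginate(
        DatabaseName="sales_db",                       # assumed database name
        TableName="orders",                            # assumed table name
        Expression="year = '2023' AND month >= '06'",  # assumed partition keys
        ExcludeColumnSchema=True,
    )
    for page in pages:
        for partition in page["Partitions"]:
            print(partition["Values"])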
String nextToken
A continuation token, if this is not the first call to retrieve these partitions.
Segment segment
The segment of the table's partitions to scan in this request.
Integer maxResults
The maximum number of partitions to return in a single response.
Boolean excludeColumnSchema
When true, specifies not returning the partition column schema. Useful when you are interested only in other partition attributes such as partition values or location. This approach avoids the problem of a large response by not returning duplicate data.
String transactionId
The transaction ID at which to read the partition contents.
Date queryAsOfTime
The time as of when to read the partition contents. If not set, the most recent transaction commit time will be
used. Cannot be specified along with TransactionId
.
List<E> mapping
The list of mappings from a source table to target tables.
CatalogEntry source
The source table.
List<E> sinks
The target tables.
Location location
The parameters for the mapping.
String language
The programming language of the code to perform the mapping.
Map<K,V> additionalPlanOptionsMap
A map to hold additional optional key-value parameters.
Currently, these key-value pairs are supported:
inferSchema
 —  Specifies whether to set inferSchema
to true or false for the default
script generated by a Glue job. For example, to set inferSchema
to true, pass the following key
value pair:
--additional-plan-options-map '{"inferSchema":"true"}'
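The corresponding boto3 request field is AdditionalPlanOptionsMap. A hedged sketch follows, with the source names assumed and the mapping taken from a prior GetMapping call:
    import boto3

    glue = boto3.client("glue")

    source = {"DatabaseName": "sales_db", "TableName": "orders"}  # assumed names

    # The mapping is usually obtained from get_mapping for the same source.
    mapping = glue.get_mapping(Source=source)["Mapping"]

    plan = glue.get_plan(
        Mapping=mapping,
        Source=source,
        Language="PYTHON",
        AdditionalPlanOptionsMap={"inferSchema": "true"},
    )
    print(plan["PythonScript"])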
RegistryId registryId
This is a wrapper structure that may contain the registry name and Amazon Resource Name (ARN).
String registryName
The name of the registry.
String registryArn
The Amazon Resource Name (ARN) of the registry.
String description
A description of the registry.
String status
The status of the registry.
String createdTime
The date and time the registry was created.
String updatedTime
The date and time the registry was updated.
String resourceArn
The ARN of the Glue resource for which to retrieve the resource policy. If not supplied, the Data Catalog
resource policy is returned. Use GetResourcePolicies
to view all existing resource policies. For
more information see Specifying Glue Resource
ARNs.
String policyInJson
Contains the requested policy document, in JSON format.
String policyHash
Contains the hash value associated with this policy.
Date createTime
The date and time at which the policy was created.
Date updateTime
The date and time at which the policy was last updated.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. One of SchemaArn
or
SchemaName
has to be provided.
SchemaId$SchemaName: The name of the schema. One of SchemaArn
or SchemaName
has to be
provided.
String schemaDefinition
The definition of the schema for which schema details are required.
String schemaVersionId
The schema ID of the schema version.
String schemaArn
The Amazon Resource Name (ARN) of the schema.
String dataFormat
The data format of the schema definition. Currently AVRO
, JSON
and
PROTOBUF
are supported.
String status
The status of the schema version.
String createdTime
The date and time the schema was created.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn
or
SchemaName
and RegistryName
has to be provided.
SchemaId$SchemaName: The name of the schema. Either SchemaArn
or SchemaName
and
RegistryName
has to be provided.
String registryName
The name of the registry.
String registryArn
The Amazon Resource Name (ARN) of the registry.
String schemaName
The name of the schema.
String schemaArn
The Amazon Resource Name (ARN) of the schema.
String description
A description of the schema, if one was specified when the schema was created.
String dataFormat
The data format of the schema definition. Currently AVRO
, JSON
and
PROTOBUF
are supported.
String compatibility
The compatibility mode of the schema.
Long schemaCheckpoint
The version number of the checkpoint (the last time the compatibility mode was changed).
Long latestSchemaVersion
The latest version of the schema associated with the returned schema definition.
Long nextSchemaVersion
The next version of the schema associated with the returned schema definition.
String schemaStatus
The status of the schema.
String createdTime
The date and time the schema was created.
String updatedTime
The date and time the schema was updated.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn
or
SchemaName
and RegistryName
has to be provided.
SchemaId$SchemaName: The name of the schema. Either SchemaArn
or SchemaName
and
RegistryName
has to be provided.
String schemaVersionId
The SchemaVersionId
of the schema version. This field is required for fetching by schema ID. Either
this or the SchemaId
wrapper has to be provided.
SchemaVersionNumber schemaVersionNumber
The version number of the schema.
String schemaVersionId
The SchemaVersionId
of the schema version.
String schemaDefinition
The schema definition for the schema ID.
String dataFormat
The data format of the schema definition. Currently AVRO
, JSON
and
PROTOBUF
are supported.
String schemaArn
The Amazon Resource Name (ARN) of the schema.
Long versionNumber
The version number of the schema.
String status
The status of the schema version.
String createdTime
The date and time the schema version was created.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. One of SchemaArn
or
SchemaName
has to be provided.
SchemaId$SchemaName: The name of the schema. One of SchemaArn
or SchemaName
has to be
provided.
SchemaVersionNumber firstSchemaVersionNumber
The first of the two schema versions to be compared.
SchemaVersionNumber secondSchemaVersionNumber
The second of the two schema versions to be compared.
String schemaDiffType
Refers to SYNTAX_DIFF
, which is the currently supported diff type.
String diff
The difference between schemas as a string in JsonPatch format.
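A minimal boto3 sketch of comparing two schema versions; the registry and schema names are assumptions:
    import boto3

    glue = boto3.client("glue")

    diff = glue.get_schema_versions_diff(
        SchemaId={"SchemaName": "orders", "RegistryName": "analytics"},  # assumed names
        FirstSchemaVersionNumber={"VersionNumber": 1},
        SecondSchemaVersionNumber={"VersionNumber": 2},
        SchemaDiffType="SYNTAX_DIFF",
    )
    print(diff["Diff"])  # JsonPatch-formatted difference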
String name
The name of the security configuration to retrieve.
SecurityConfiguration securityConfiguration
The requested security configuration.
Session session
The session object is returned in the response.
Statement statement
Returns the statement.
String catalogId
The Catalog ID of the table.
String databaseName
The name of the database in the catalog in which the table resides.
String tableName
The name of the table.
TableOptimizer tableOptimizer
The optimizer associated with the specified table.
String catalogId
The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.
String name
The name of the table for which to retrieve the definition. For Hive compatibility, this name is entirely lowercase.
String transactionId
The transaction ID at which to read the table contents.
Date queryAsOfTime
The time as of when to read the table contents. If not set, the most recent transaction commit time will be used.
Cannot be specified along with TransactionId
.
Table table
The Table
object that defines the specified table.
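A hedged boto3 sketch of reading a table definition as of a past point in time; the database, table, and timestamp are assumptions:
    import boto3
    from datetime import datetime, timezone

    glue = boto3.client("glue")

    # Read the table definition as it existed at the given time.
    response = glue.get_table(
        DatabaseName="sales_db",     # assumed database name
        Name="orders",               # assumed table name
        QueryAsOfTime=datetime(2023, 6, 1, tzinfo=timezone.utc),
    )
    print(response["Table"]["StorageDescriptor"]["Location"])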
String catalogId
The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The database in the catalog whose tables to list. For Hive compatibility, this name is entirely lowercase.
String expression
A regular expression pattern. If present, only those tables whose names match the pattern are returned.
String nextToken
A continuation token, included if this is a continuation call.
Integer maxResults
The maximum number of tables to return in a single response.
String transactionId
The transaction ID at which to read the table contents.
Date queryAsOfTime
The time as of when to read the table contents. If not set, the most recent transaction commit time will be used.
Cannot be specified along with TransactionId
.
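A minimal boto3 sketch that pages through the tables in a database whose names match a pattern; the names are assumptions:
    import boto3

    glue = boto3.client("glue")

    paginator = glue.get_paginator("get_tables")
    # Expression is a regular-expression pattern applied to table names.
    for page in paginator.paginate(DatabaseName="sales_db", Expression="raw_.*"):
        for table in page["TableList"]:
            print(table["Name"])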
String catalogId
The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.
String tableName
The name of the table. For Hive compatibility, this name is entirely lowercase.
String versionId
The ID value of the table version to be retrieved. A VersionID
is a string representation of an
integer. Each version is incremented by 1.
TableVersion tableVersion
The requested table version.
String catalogId
The ID of the Data Catalog where the tables reside. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The database in the catalog in which the table resides. For Hive compatibility, this name is entirely lowercase.
String tableName
The name of the table. For Hive compatibility, this name is entirely lowercase.
String nextToken
A continuation token, if this is not the first call.
Integer maxResults
The maximum number of table versions to return in one response.
String resourceArn
The Amazon Resource Name (ARN) of the resource for which to retrieve tags.
String name
The name of the trigger to retrieve.
Trigger trigger
The requested trigger definition.
String nextToken
A continuation token, if this is a continuation call.
String dependentJobName
The name of the job to retrieve triggers for. The trigger that can start this job is returned, and if there is no such trigger, all triggers are returned.
Integer maxResults
The maximum size of the response.
String catalogId
The catalog ID where the partition resides.
String databaseName
(Required) Specifies the name of a database that contains the partition.
String tableName
(Required) Specifies the name of a table that contains the partition.
List<E> partitionValues
(Required) A list of partition key values.
AuditContext auditContext
A structure containing Lake Formation audit context information.
List<E> supportedPermissionTypes
(Required) A list of supported permission types.
Partition partition
A Partition object containing the partition metadata.
List<E> authorizedColumns
A list of column names that the user has been granted access to.
Boolean isRegisteredWithLakeFormation
A Boolean value that indicates whether the partition location is registered with Lake Formation.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is provided, the AWS account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the table that contains the partition.
String expression
An expression that filters the partitions to be returned.
The expression uses SQL syntax similar to the SQL WHERE
filter clause. The SQL statement parser JSQLParser parses the expression.
Operators: The following are the operators that you can use in the Expression
API call:
= : Checks whether the values of the two operands are equal; if yes, then the condition becomes true.
Example: Assume 'variable a' holds 10 and 'variable b' holds 20.
(a = b) is not true.
< > : Checks whether the values of two operands are equal; if the values are not equal, then the condition becomes true.
Example: (a < > b) is true.
> : Checks whether the value of the left operand is greater than the value of the right operand; if yes, then the condition becomes true.
Example: (a > b) is not true.
< : Checks whether the value of the left operand is less than the value of the right operand; if yes, then the condition becomes true.
Example: (a < b) is true.
>= : Checks whether the value of the left operand is greater than or equal to the value of the right operand; if yes, then the condition becomes true.
Example: (a >= b) is not true.
<= : Checks whether the value of the left operand is less than or equal to the value of the right operand; if yes, then the condition becomes true.
Example: (a <= b) is true.
Logical operators.
Supported Partition Key Types: The following are the supported partition keys.
string
date
timestamp
int
bigint
long
tinyint
smallint
decimal
If a type that is not valid is encountered, an exception is thrown.
AuditContext auditContext
A structure containing Lake Formation audit context information.
List<E> supportedPermissionTypes
A list of supported permission types.
String nextToken
A continuation token, if this is not the first call to retrieve these partitions.
Segment segment
The segment of the table's partitions to scan in this request.
Integer maxResults
The maximum number of partitions to return in a single response.
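A hedged boto3 sketch of this call with Lake Formation filtering applied; all identifiers are assumptions:
    import boto3

    glue = boto3.client("glue")

    response = glue.get_unfiltered_partitions_metadata(
        CatalogId="123456789012",                   # assumed account ID
        DatabaseName="sales_db",                    # assumed database name
        TableName="orders",                         # assumed table name
        Expression="year = '2023'",                 # assumed partition key
        SupportedPermissionTypes=["COLUMN_PERMISSION"],
        AuditContext={"AdditionalAuditContext": "example query"},
    )
    for unfiltered in response["UnfilteredPartitions"]:
        print(unfiltered["Partition"]["Values"])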
String catalogId
The catalog ID where the table resides.
String databaseName
(Required) Specifies the name of a database that contains the table.
String name
(Required) Specifies the name of a table for which you are requesting metadata.
AuditContext auditContext
A structure containing Lake Formation audit context information.
List<E> supportedPermissionTypes
(Required) A list of supported permission types.
Table table
A Table object containing the table metadata.
List<E> authorizedColumns
A list of column names that the user has been granted access to.
Boolean isRegisteredWithLakeFormation
A Boolean value that indicates whether the partition location is registered with Lake Formation.
List<E> cellFilters
A list of column row filters.
String catalogId
The ID of the Data Catalog where the function to be retrieved is located. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the function is located.
String functionName
The name of the function.
UserDefinedFunction userDefinedFunction
The requested function definition.
String catalogId
The ID of the Data Catalog where the functions to be retrieved are located. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the functions are located. If none is provided, functions from all the databases across the catalog will be returned.
String pattern
An optional function-name pattern string that filters the function definitions returned.
String nextToken
A continuation token, if this is a continuation call.
Integer maxResults
The maximum number of functions to return in one response.
Workflow workflow
The resource metadata for the workflow.
WorkflowRun run
The requested workflow run metadata.
String name
The name of the workflow whose run metadata should be returned.
Boolean includeGraph
Specifies whether to include the workflow graph in response or not.
String nextToken
A continuation token, if this is a continuation call.
Integer maxResults
The maximum number of workflow runs to be included in the response.
String policyInJson
Contains the requested policy document, in JSON format.
String policyHash
Contains the hash value associated with this policy.
Date createTime
The date and time at which the policy was created.
Date updateTime
The date and time at which the policy was last updated.
String databaseName
A database name in the Glue Data Catalog.
String tableName
A table name in the Glue Data Catalog.
String catalogId
A unique identifier for the Glue Data Catalog.
String connectionName
The name of the connection to the Glue Data Catalog.
Map<K,V> additionalOptions
Additional options for the table. Currently there are two keys supported:
pushDownPredicate
: to filter on partitions without having to list and read all the files in your
dataset.
catalogPartitionPredicate
: to use server-side partition pruning using partition indexes in the Glue
Data Catalog.
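In a Glue ETL script these options correspond to the push_down_predicate and additional_options parameters of GlueContext; a minimal sketch with assumed database, table, and predicate values:
    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glue_context = GlueContext(SparkContext.getOrCreate())

    # Prune partitions at read time (pushDownPredicate) and server-side
    # via partition indexes (catalogPartitionPredicate).
    frame = glue_context.create_dynamic_frame.from_catalog(
        database="sales_db",       # assumed database name
        table_name="orders",       # assumed table name
        push_down_predicate="year = '2023'",
        additional_options={"catalogPartitionPredicate": "month >= '06'"},
    )
    print(frame.count())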
String name
The name of the data store.
String database
The database to read from.
String table
The database table to read from.
String partitionPredicate
Partitions satisfying this predicate are deleted. Files within the retention period in these partitions are not
deleted. Set to ""
– empty by default.
S3SourceAdditionalOptions additionalOptions
Specifies additional connection options.
String name
The name of the data target.
List<E> inputs
The nodes that are inputs to the data target.
List<E> partitionKeys
Specifies native partitioning using a sequence of keys.
String table
The name of the table in the database to write to.
String database
The name of the database to write to.
CatalogSchemaChangePolicy schemaChangePolicy
A policy that specifies update behavior for the governed catalog.
String name
The name of the classifier.
String classification
An identifier of the data format that the classifier matches, such as Twitter, JSON, Omniture logs, and so on.
Date creationTime
The time that this classifier was registered.
Date lastUpdated
The time that this classifier was last updated.
Long version
The version of this classifier.
String grokPattern
The grok pattern applied to a data store by this classifier. For more information, see built-in patterns in Writing Custom Classifiers.
String customPatterns
Optional custom grok patterns defined by this classifier. For more information, see custom patterns in Writing Custom Classifiers.
List<E> paths
An array of Amazon S3 location strings for Hudi, each indicating the root folder in which the metadata files for a Hudi table reside. The Hudi folder may be located in a child folder of the root folder.
The crawler will scan all folders underneath a path for a Hudi folder.
String connectionName
The name of the connection to use to connect to the Hudi target. If your Hudi files are stored in buckets that require VPC authorization, you can set their connection properties here.
List<E> exclusions
A list of glob patterns used to exclude from the crawl. For more information, see Catalog Tables with a Crawler.
Integer maximumTraversalDepth
The maximum depth of Amazon S3 paths that the crawler can traverse to discover the Hudi metadata folder in your Amazon S3 path. Used to limit the crawler run time.
List<E> paths
One or more Amazon S3 paths that contain Iceberg metadata folders as s3://bucket/prefix.
String connectionName
The name of the connection to use to connect to the Iceberg target.
List<E> exclusions
A list of glob patterns used to exclude from the crawl. For more information, see Catalog Tables with a Crawler.
Integer maximumTraversalDepth
The maximum depth of Amazon S3 paths that the crawler can traverse to discover the Iceberg metadata folder in your Amazon S3 path. Used to limit the crawler run time.
String catalogId
The ID of the catalog to import. Currently, this should be the Amazon Web Services account ID.
Boolean fromFederationSource
Indicates whether or not the exception relates to a federated source.
String filterPredicate
Extra condition clause to filter data from source. For example:
BillingCity='Mountain View'
When using a query instead of a table name, you should validate that the query works with the specified
filterPredicate
.
String partitionColumn
The name of an integer column that is used for partitioning. This option works only when it's included with
lowerBound
, upperBound
, and numPartitions
. This option works the same way
as in the Spark SQL JDBC reader.
Long lowerBound
The minimum value of partitionColumn
that is used to decide partition stride.
Long upperBound
The maximum value of partitionColumn
that is used to decide partition stride.
Long numPartitions
The number of partitions. This value, along with lowerBound
(inclusive) and upperBound
(exclusive), form partition strides for generated WHERE
clause expressions that are used to split
the partitionColumn
.
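Because these options behave like the Spark SQL JDBC reader, a plain PySpark sketch illustrates the effect; the URL, table, column, and bounds are assumptions:
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Each partition reads one stride of order_id values between the bounds.
    df = (
        spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://host:5432/sales")  # assumed URL
        .option("dbtable", "orders")                          # assumed table
        .option("partitionColumn", "order_id")                # assumed integer column
        .option("lowerBound", "1")
        .option("upperBound", "1000000")
        .option("numPartitions", "10")
        .option("user", "example_user")
        .option("password", "example_password")
        .load()
    )
    print(df.rdd.getNumPartitions())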
List<E> jobBookmarkKeys
The name of the job bookmark keys on which to sort.
String jobBookmarkKeysSortOrder
Specifies an ascending or descending sort order.
Map<K,V> dataTypeMapping
Custom data type mapping that builds a mapping from a JDBC data type to an Glue data type. For example, the
option "dataTypeMapping":{"FLOAT":"STRING"}
maps data fields of JDBC type FLOAT
into
the Java String
type by calling the ResultSet.getString()
method of the driver, and
uses it to build the Glue record. The ResultSet
object is implemented by each driver, so the
behavior is specific to the driver you use. Refer to the documentation for your JDBC driver to understand how the
driver performs the conversions.
String name
The name of the data source.
String connectionName
The name of the connection that is associated with the connector.
String connectorName
The name of a connector that assists with accessing the data store in Glue Studio.
String connectionType
The type of connection, such as marketplace.jdbc or custom.jdbc, designating a connection to a JDBC data store.
JDBCConnectorOptions additionalOptions
Additional connection options for the connector.
String connectionTable
The name of the table in the data source.
String query
The table or SQL query to get the data from. You can specify either ConnectionTable
or
query
, but not both.
List<E> outputSchemas
Specifies the data schema for the custom JDBC source.
String name
The name of the data target.
List<E> inputs
The nodes that are inputs to the data target.
String connectionName
The name of the connection that is associated with the connector.
String connectionTable
The name of the table in the data target.
String connectorName
The name of a connector that will be used.
String connectionType
The type of connection, such as marketplace.jdbc or custom.jdbc, designating a connection to a JDBC data target.
Map<K,V> additionalOptions
Additional connection options for the connector.
List<E> outputSchemas
Specifies the data schema for the JDBC target.
String connectionName
The name of the connection to use to connect to the JDBC target.
String path
The path of the JDBC target.
List<E> exclusions
A list of glob patterns used to exclude from the crawl. For more information, see Catalog Tables with a Crawler.
List<E> enableAdditionalMetadata
Specify a value of RAWTYPES
or COMMENTS
to enable additional metadata in table
responses. RAWTYPES
provides the native-level datatype. COMMENTS
provides comments
associated with a column or table in the database.
If you do not need additional metadata, keep the field empty.
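A hedged boto3 sketch of a crawler whose JDBC target requests this additional metadata; the crawler name, role, database, connection, and path are assumptions:
    import boto3

    glue = boto3.client("glue")

    glue.create_crawler(
        Name="jdbc-crawler",                                      # assumed crawler name
        Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",    # assumed role
        DatabaseName="sales_db",                                  # assumed catalog database
        Targets={
            "JdbcTargets": [
                {
                    "ConnectionName": "my-jdbc-connection",       # assumed connection
                    "Path": "sales/%",                            # assumed path
                    "EnableAdditionalMetadata": ["COMMENTS", "RAWTYPES"],
                }
            ]
        },
    )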
String name
The name you assign to this job definition.
String description
A description of the job.
String logUri
This field is reserved for future use.
String role
The name or Amazon Resource Name (ARN) of the IAM role associated with this job.
Date createdOn
The time and date that this job definition was created.
Date lastModifiedOn
The last point in time when this job definition was modified.
ExecutionProperty executionProperty
An ExecutionProperty
specifying the maximum number of concurrent runs allowed for this job.
JobCommand command
The JobCommand
that runs this job.
Map<K,V> defaultArguments
The default arguments for every run of this job, specified as name-value pairs.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
Map<K,V> nonOverridableArguments
Arguments for this job that are not overridden when providing job arguments in a job run, specified as name-value pairs.
ConnectionsList connections
The connections used for this job.
Integer maxRetries
The maximum number of times to retry this job after a JobRun fails.
Integer allocatedCapacity
This field is deprecated. Use MaxCapacity
instead.
The number of Glue data processing units (DPUs) allocated to runs of this job. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Integer timeout
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated
and enters TIMEOUT
status. The default is 2,880 minutes (48 hours).
Double maxCapacity
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0 or later jobs, you cannot specify a Maximum capacity
. Instead, you should
specify a Worker type
and the Number of workers
.
Do not set MaxCapacity
if using WorkerType
and NumberOfWorkers
.
The value that can be allocated for MaxCapacity
depends on whether you are running a Python shell
job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job (JobCommand.Name
="pythonshell"), you can allocate either 0.0625
or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job (JobCommand.Name
="glueetl") or Apache Spark streaming ETL
job (JobCommand.Name
="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs.
This job type cannot have a fractional DPU allocation.
String workerType
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk
(approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads such
as data transforms, joins, and queries, offering a scalable and cost-effective way to run most jobs.
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk
(approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads such
as data transforms, joins, and queries, offering a scalable and cost-effective way to run most jobs.
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk
(approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available
only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio),
US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo),
Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk
(approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available
only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the
G.4X
worker type.
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk
(approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for low volume
streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X
worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of memory) with 128 GB disk
(approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated when a job runs.
String securityConfiguration
The name of the SecurityConfiguration
structure to be used with this job.
NotificationProperty notificationProperty
Specifies configuration properties of a job notification.
String glueVersion
In Spark jobs, GlueVersion
determines the versions of Apache Spark and Python that Glue makes available in
a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion
to 4.0
or greater. However, the versions of Ray, Python
and additional libraries available in your Ray job are determined by the Runtime
parameter of the
Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
Map<K,V> codeGenConfigurationNodes
The representation of a directed acyclic graph on which both the Glue Studio visual component and Glue Studio code generation is based.
String executionClass
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl
will be allowed to set
ExecutionClass
to FLEX
. The flexible execution class is available for Spark jobs.
SourceControlDetails sourceControlDetails
The details for a source control configuration for a job, allowing synchronization of job artifacts to or from a remote repository.
String jobName
The name of the job in question.
Integer version
The version of the job.
Integer run
The run ID number.
Integer attempt
The attempt ID number.
String previousRunId
The unique run identifier associated with the previous job run.
String runId
The run ID number.
String jobBookmark
The bookmark itself.
String name
The name of the job command. For an Apache Spark ETL job, this must be glueetl
. For a Python shell
job, it must be pythonshell
. For an Apache Spark streaming ETL job, this must be
gluestreaming
. For a Ray job, this must be glueray
.
String scriptLocation
Specifies the Amazon Simple Storage Service (Amazon S3) path to a script that runs a job.
String pythonVersion
The Python version being used to run a Python shell job. Allowed values are 2 or 3.
String runtime
In Ray jobs, Runtime is used to specify the versions of Ray, Python and additional libraries available in your environment. This field is not used in other job types. For supported runtime environment values, see Working with Ray jobs in the Glue Developer Guide.
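Tying the command fields together, here is a hedged boto3 sketch of creating a Spark ETL job; the job name, role, and script path are assumptions:
    import boto3

    glue = boto3.client("glue")

    glue.create_job(
        Name="nightly-etl",                                          # assumed job name
        Role="arn:aws:iam::123456789012:role/GlueJobRole",           # assumed role
        Command={
            "Name": "glueetl",                                       # Spark ETL job
            "ScriptLocation": "s3://example-bucket/scripts/etl.py",  # assumed path
            "PythonVersion": "3",
        },
        GlueVersion="4.0",
        WorkerType="G.1X",
        NumberOfWorkers=10,
        Timeout=2880,
    )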
String id
The ID of this job run.
Integer attempt
The number of the attempt to run this job.
String previousRunId
The ID of the previous run of this job. For example, the JobRunId
specified in the
StartJobRun
action.
String triggerName
The name of the trigger that started this job run.
String jobName
The name of the job definition being used in this run.
Date startedOn
The date and time at which this job run was started.
Date lastModifiedOn
The last time that this job run was modified.
Date completedOn
The date and time that this job run completed.
String jobRunState
The current state of the job run. For more information about the statuses of jobs that have terminated abnormally, see Glue Job Run Statuses.
Map<K,V> arguments
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
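A minimal boto3 sketch of starting a run with overriding arguments; the job name and the custom argument are placeholders:
    import boto3

    glue = boto3.client("glue")

    # Arguments supplied here replace the job's default arguments for this run only.
    run = glue.start_job_run(
        JobName="nightly-etl",                      # assumed job name
        Arguments={"--target_date": "2023-06-01"},  # hypothetical script argument
        WorkerType="G.1X",
        NumberOfWorkers=10,
    )
    print(run["JobRunId"])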
String errorMessage
An error message associated with this job run.
List<E> predecessorRuns
A list of predecessors to this job run.
Integer allocatedCapacity
This field is deprecated. Use MaxCapacity
instead.
The number of Glue data processing units (DPUs) allocated to this JobRun. From 2 to 100 DPUs can be allocated; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Integer executionTime
The amount of time (in seconds) that the job run consumed resources.
Integer timeout
The JobRun
timeout in minutes. This is the maximum time that a job run can consume resources before
it is terminated and enters TIMEOUT
status. This value overrides the timeout value set in the parent
job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
Double maxCapacity
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity
. Instead, you should specify a
Worker type
and the Number of workers
.
Do not set MaxCapacity
if using WorkerType
and NumberOfWorkers
.
The value that can be allocated for MaxCapacity
depends on whether you are running a Python shell
job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job (JobCommand.Name
="pythonshell"), you can allocate either 0.0625
or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job (JobCommand.Name
="glueetl") or Apache Spark streaming ETL
job (JobCommand.Name
="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs.
This job type cannot have a fractional DPU allocation.
String workerType
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk
(approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads such
as data transforms, joins, and queries, offering a scalable and cost-effective way to run most jobs.
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk
(approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads such
as data transforms, joins, and queries, offering a scalable and cost-effective way to run most jobs.
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk
(approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available
only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio),
US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo),
Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk
(approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available
only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the
G.4X
worker type.
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk
(approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for low volume
streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X
worker type, each worker maps to 2 M-DPU (8vCPUs, 64 GB of memory) with 128 GB disk
(approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated when a job runs.
String securityConfiguration
The name of the SecurityConfiguration
structure to be used with this job run.
String logGroupName
The name of the log group for secure logging that can be server-side encrypted in Amazon CloudWatch using KMS.
This name can be /aws-glue/jobs/
, in which case the default encryption is NONE
. If you
add a role name and SecurityConfiguration
name (in other words,
/aws-glue/jobs-yourRoleName-yourSecurityConfigurationName/
), then that security configuration is
used to encrypt the log group.
NotificationProperty notificationProperty
Specifies configuration properties of a job run notification.
String glueVersion
In Spark jobs, GlueVersion
determines the versions of Apache Spark and Python that Glue makes available in
a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion
to 4.0
or greater. However, the versions of Ray, Python
and additional libraries available in your Ray job are determined by the Runtime
parameter of the
Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
Double dPUSeconds
This field populates only for Auto Scaling job runs, and represents the total time each executor ran during the
lifecycle of a job run in seconds, multiplied by a DPU factor (1 for G.1X
, 2 for G.2X
,
or 0.25 for G.025X
workers). This value may be different than the
executionEngineRuntime
* MaxCapacity
as in the case of Auto Scaling jobs, as the number
of executors running at a given time may be less than the MaxCapacity
. Therefore, it is possible
that the value of DPUSeconds
is less than executionEngineRuntime
*
MaxCapacity
.
String executionClass
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl
will be allowed to set
ExecutionClass
to FLEX
. The flexible execution class is available for Spark jobs.
String description
Description of the job being defined.
String logUri
This field is reserved for future use.
String role
The name or Amazon Resource Name (ARN) of the IAM role associated with this job (required).
ExecutionProperty executionProperty
An ExecutionProperty
specifying the maximum number of concurrent runs allowed for this job.
JobCommand command
The JobCommand
that runs this job (required).
Map<K,V> defaultArguments
The default arguments for every run of this job, specified as name-value pairs.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
Map<K,V> nonOverridableArguments
Arguments for this job that are not overridden when providing job arguments in a job run, specified as name-value pairs.
ConnectionsList connections
The connections used for this job.
Integer maxRetries
The maximum number of times to retry this job if it fails.
Integer allocatedCapacity
This field is deprecated. Use MaxCapacity
instead.
The number of Glue data processing units (DPUs) to allocate to this job. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Integer timeout
The job timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated
and enters TIMEOUT
status. The default is 2,880 minutes (48 hours).
Double maxCapacity
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity
. Instead, you should specify a
Worker type
and the Number of workers
.
Do not set MaxCapacity
if using WorkerType
and NumberOfWorkers
.
The value that can be allocated for MaxCapacity
depends on whether you are running a Python shell
job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job (JobCommand.Name
="pythonshell"), you can allocate either 0.0625
or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job (JobCommand.Name
="glueetl") or Apache Spark streaming ETL
job (JobCommand.Name
="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs.
This job type cannot have a fractional DPU allocation.
String workerType
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the G.1X
worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk
(approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads such
as data transforms, joins, and queries, offering a scalable and cost-effective way to run most jobs.
For the G.2X
worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk
(approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads such
as data transforms, joins, and queries, offering a scalable and cost-effective way to run most jobs.
For the G.4X
worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk
(approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available
only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio),
US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo),
Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
For the G.8X
worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk
(approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose
workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available
only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the
G.4X
worker type.
For the G.025X
worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk
(approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for low volume
streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X
worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk
(approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated when a job runs.
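As a rough illustration of how these sizing fields fit together, the following boto3 sketch creates one Spark job sized with WorkerType and NumberOfWorkers (leaving MaxCapacity unset) and one Python shell job sized with MaxCapacity. The job names, script locations, and role are placeholders, and this is an assumption-level example rather than the canonical way to call the API.

import boto3

glue = boto3.client("glue")

# Spark ETL job: size it with WorkerType/NumberOfWorkers and do not set MaxCapacity.
glue.create_job(
    Name="example-spark-job",                                   # placeholder name
    Role="arn:aws:iam::123456789012:role/GlueJobRole",          # placeholder role
    Command={"Name": "glueetl", "ScriptLocation": "s3://example-bucket/scripts/etl.py"},
    GlueVersion="4.0",
    WorkerType="G.1X",
    NumberOfWorkers=10,
    Timeout=2880,
)

# Python shell job: worker types do not apply, so size it with MaxCapacity (0.0625 or 1 DPU).
glue.create_job(
    Name="example-pythonshell-job",
    Role="arn:aws:iam::123456789012:role/GlueJobRole",
    Command={"Name": "pythonshell", "ScriptLocation": "s3://example-bucket/scripts/task.py", "PythonVersion": "3.9"},
    MaxCapacity=0.0625,
)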
String securityConfiguration
The name of the SecurityConfiguration
structure to be used with this job.
NotificationProperty notificationProperty
Specifies the configuration properties of a job notification.
String glueVersion
In Spark jobs, GlueVersion
determines the versions of Apache Spark and Python that Glue supports in
a job. The Python version indicates the version supported for jobs of type Spark.
Ray jobs should set GlueVersion
to 4.0
or greater. However, the versions of Ray, Python
and additional libraries available in your Ray job are determined by the Runtime
parameter of the
Job command.
For more information about the available Glue versions and corresponding Spark and Python versions, see Glue version in the developer guide.
Jobs that are created without specifying a Glue version default to Glue 0.9.
Map<K,V> codeGenConfigurationNodes
The representation of a directed acyclic graph on which both the Glue Studio visual component and Glue Studio code generation is based.
String executionClass
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl
will be allowed to set
ExecutionClass
to FLEX
. The flexible execution class is available for Spark jobs.
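A minimal sketch of starting an existing Glue 3.0+ glueetl job on the flexible execution class with boto3; the job name and run argument are placeholders, not values taken from this reference.

import boto3

glue = boto3.client("glue")

# Run an existing Spark job on the FLEX execution class.
glue.start_job_run(
    JobName="example-spark-job",        # placeholder job name
    ExecutionClass="FLEX",
    Arguments={"--ENV": "dev"},         # optional run-time arguments consumed by the script
)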
SourceControlDetails sourceControlDetails
The details for a source control configuration for a job, allowing synchronization of job artifacts to or from a remote repository.
String name
The name of the classifier.
Date creationTime
The time that this classifier was registered.
Date lastUpdated
The time that this classifier was last updated.
Long version
The version of this classifier.
String jsonPath
A JsonPath
string defining the JSON data for the classifier to classify. Glue supports a subset of
JsonPath, as described in Writing JsonPath
Custom Classifiers.
String bootstrapServers
A list of bootstrap server URLs, for example, as
b-1.vpc-test-2.o4q88o.c6.kafka.us-east-1.amazonaws.com:9094
. This option must be specified in the
API call or defined in the table metadata in the Data Catalog.
String securityProtocol
The protocol used to communicate with brokers. The possible values are "SSL"
or
"PLAINTEXT"
.
String connectionName
The name of the connection.
String topicName
The topic name as specified in Apache Kafka. You must specify at least one of "topicName"
,
"assign"
or "subscribePattern"
.
String assign
The specific TopicPartitions
to consume. You must specify at least one of "topicName"
,
"assign"
or "subscribePattern"
.
String subscribePattern
A Java regex string that identifies the topic list to subscribe to. You must specify at least one of
"topicName"
, "assign"
or "subscribePattern"
.
String classification
An optional classification.
String delimiter
Specifies the delimiter character.
String startingOffsets
The starting position in the Kafka topic to read data from. The possible values are "earliest"
or
"latest"
. The default value is "latest"
.
String endingOffsets
The end point when a batch query is ended. Possible values are either "latest"
or a JSON string that
specifies an ending offset for each TopicPartition
.
Long pollTimeoutMs
The timeout in milliseconds to poll data from Kafka in Spark job executors. The default value is 512
.
Integer numRetries
The number of times to retry before failing to fetch Kafka offsets. The default value is 3
.
Long retryIntervalMs
The time in milliseconds to wait before retrying to fetch Kafka offsets. The default value is 10
.
Long maxOffsetsPerTrigger
The rate limit on the maximum number of offsets that are processed per trigger interval. The specified total
number of offsets is proportionally split across topicPartitions
of different volumes. The default
value is null, which means that the consumer reads all offsets until the known latest offset.
Integer minPartitions
The desired minimum number of partitions to read from Kafka. The default value is null, which means that the number of Spark partitions is equal to the number of Kafka partitions.
Boolean includeHeaders
Whether to include the Kafka headers. When the option is set to "true", the data output will contain an
additional column named "glue_streaming_kafka_headers" with type
Array[Struct(key: String, value: String)]
. The default value is "false". This option is available in
Glue version 3.0 or later only.
String addRecordTimestamp
When this option is set to 'true', the data output will contain an additional column named "__src_timestamp" that indicates the time when the corresponding record was received by the topic. The default value is 'false'. This option is supported in Glue version 4.0 or later.
String emitConsumerLagMetrics
When this option is set to 'true', for each batch, it emits to CloudWatch the metrics for the duration between the oldest record received by the topic and the time it arrives in Glue. The metric's name is "glue.driver.streaming.maxConsumerLagInMs". The default value is 'false'. This option is supported in Glue version 4.0 or later.
Date startingTimestamp
The timestamp of the record in the Kafka topic to start reading data from. The possible values are a timestamp
string in UTC format of the pattern yyyy-mm-ddTHH:MM:SSZ
(where Z represents a UTC timezone offset
with a +/-. For example: "2023-04-04T08:00:00+08:00").
Only one of StartingTimestamp
or StartingOffsets
must be set.
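Inside a Glue streaming job script, several of the Kafka options above can be passed as connection options on the source. The following awsglue sketch assumes the Glue runtime (where awsglue is available), and the connection and topic names are placeholders; treat the exact option set as an assumption to verify against the streaming documentation.

from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read from Kafka using a Data Catalog connection; option names mirror the fields above.
kafka_frame = glue_context.create_data_frame.from_options(
    connection_type="kafka",
    connection_options={
        "connectionName": "my-kafka-connection",   # placeholder connection
        "topicName": "example-topic",              # placeholder topic
        "startingOffsets": "earliest",
        "pollTimeoutMs": "512",
        "numRetries": "3",
        "classification": "json",
    },
    transformation_ctx="kafka_source",
)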
String endpointUrl
The URL of the Kinesis endpoint.
String streamName
The name of the Kinesis data stream.
String classification
An optional classification.
String delimiter
Specifies the delimiter character.
String startingPosition
The starting position in the Kinesis data stream to read data from. The possible values are "latest"
, "trim_horizon"
, "earliest"
, or a timestamp string in UTC format in the pattern
yyyy-mm-ddTHH:MM:SSZ
(where Z
represents a UTC timezone offset with a +/-. For example:
"2023-04-04T08:00:00-04:00"). The default value is "latest"
.
Note: Using a value that is a timestamp string in UTC format for "startingPosition" is supported only for Glue version 4.0 or later.
Long maxFetchTimeInMs
The maximum time spent in the job executor to fetch a record from the Kinesis data stream per shard, specified in
milliseconds (ms). The default value is 1000
.
Long maxFetchRecordsPerShard
The maximum number of records to fetch per shard in the Kinesis data stream. The default value is
100000
.
Long maxRecordPerRead
The maximum number of records to fetch from the Kinesis data stream in each getRecords operation. The default
value is 10000
.
Boolean addIdleTimeBetweenReads
Adds a time delay between two consecutive getRecords operations. The default value is "False"
. This
option is only configurable for Glue version 2.0 and above.
Long idleTimeBetweenReadsInMs
The minimum time delay between two consecutive getRecords operations, specified in ms. The default value is
1000
. This option is only configurable for Glue version 2.0 and above.
Long describeShardInterval
The minimum time interval between two ListShards API calls for your script to consider resharding. The default
value is 1s
.
Integer numRetries
The maximum number of retries for Kinesis Data Streams API requests. The default value is 3
.
Long retryIntervalMs
The cool-off time period (specified in ms) before retrying the Kinesis Data Streams API call. The default value
is 1000
.
Long maxRetryIntervalMs
The maximum cool-off time period (specified in ms) between two retries of a Kinesis Data Streams API call. The
default value is 10000
.
Boolean avoidEmptyBatches
Avoids creating an empty microbatch job by checking for unread data in the Kinesis data stream before the batch
is started. The default value is "False"
.
String streamArn
The Amazon Resource Name (ARN) of the Kinesis data stream.
String roleArn
The Amazon Resource Name (ARN) of the role to assume using AWS Security Token Service (AWS STS). This role must
have permissions for describe or read record operations for the Kinesis data stream. You must use this parameter
when accessing a data stream in a different account. Used in conjunction with "awsSTSSessionName"
.
String roleSessionName
An identifier for the session assuming the role using AWS STS. You must use this parameter when accessing a data
stream in a different account. Used in conjunction with "awsSTSRoleARN"
.
String addRecordTimestamp
When this option is set to 'true', the data output will contain an additional column named "__src_timestamp" that indicates the time when the corresponding record was received by the stream. The default value is 'false'. This option is supported in Glue version 4.0 or later.
String emitConsumerLagMetrics
When this option is set to 'true', for each batch, it emits to CloudWatch the metrics for the duration between the oldest record received by the stream and the time it arrives in Glue. The metric's name is "glue.driver.streaming.maxConsumerLagInMs". The default value is 'false'. This option is supported in Glue version 4.0 or later.
Date startingTimestamp
The timestamp of the record in the Kinesis data stream to start reading data from. The possible values are a
timestamp string in UTC format of the pattern yyyy-mm-ddTHH:MM:SSZ
(where Z represents a UTC
timezone offset with a +/-. For example: "2023-04-04T08:00:00+08:00").
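A companion sketch for the Kinesis options above, including the cross-account role options quoted in the field descriptions ("awsSTSRoleARN" and "awsSTSSessionName"). The stream and role ARNs are placeholders, and the example assumes the Glue streaming runtime.

from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read from a Kinesis stream in another account by assuming a role via AWS STS.
kinesis_frame = glue_context.create_data_frame.from_options(
    connection_type="kinesis",
    connection_options={
        "streamARN": "arn:aws:kinesis:us-east-1:111122223333:stream/example-stream",  # placeholder
        "startingPosition": "latest",
        "maxFetchTimeInMs": "1000",
        "awsSTSRoleARN": "arn:aws:iam::111122223333:role/CrossAccountKinesisRole",    # placeholder
        "awsSTSSessionName": "glue-streaming-session",
        "classification": "json",
    },
    transformation_ctx="kinesis_source",
)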
String outputS3Path
The Amazon Simple Storage Service (Amazon S3) path where you will generate the labeling set.
Boolean useLakeFormationCredentials
Specifies whether to use Lake Formation credentials for the crawler instead of the IAM role credentials.
String accountId
Required for cross-account crawls. For crawls in the same account as the target data, this can be left as null.
String description
The description of the blueprint.
Date lastModifiedOn
The date and time the blueprint was last modified.
String parameterSpec
A JSON string specifying the parameters for the blueprint.
String blueprintLocation
Specifies a path in Amazon S3 where the blueprint is published by the Glue developer.
String blueprintServiceLocation
Specifies a path in Amazon S3 where the blueprint is copied when you create or update the blueprint.
String status
Status of the last crawl.
String errorMessage
If an error occurred, the error information about the last crawl.
String logGroup
The log group for the last crawl.
String logStream
The log stream for the last crawl.
String messagePrefix
The prefix for a message about this crawl.
Date startTime
The time at which the crawl started.
String crawlerLineageSettings
Specifies whether data lineage is enabled for the crawler. Valid values are:
ENABLE: enables data lineage for the crawler
DISABLE: disables data lineage for the crawler
String crawlerName
The name of the crawler whose runs you want to retrieve.
Integer maxResults
The maximum number of results to return. The default is 20, and maximum is 100.
List<E> filters
Filters the crawls by the criteria you specify in a list of CrawlsFilter
objects.
String nextToken
A continuation token, if this is a continuation call.
DataQualityResultFilterCriteria filter
The filter criteria.
String nextToken
A paginated token to offset the results.
Integer maxResults
The maximum number of results to return.
DataQualityRuleRecommendationRunFilter filter
The filter criteria.
String nextToken
A paginated token to offset the results.
Integer maxResults
The maximum number of results to return.
DataQualityRulesetEvaluationRunFilter filter
The filter criteria.
String nextToken
A paginated token to offset the results.
Integer maxResults
The maximum number of results to return.
String nextToken
A continuation token, if this is a continuation request.
Integer maxResults
The maximum size of a list to return.
TransformFilterCriteria filter
A TransformFilterCriteria
used to filter the machine learning transforms.
TransformSortCriteria sort
A TransformSortCriteria
used to sort the machine learning transforms.
Map<K,V> tags
Specifies to return only these tagged resources.
RegistryId registryId
A wrapper structure that may contain the registry name and Amazon Resource Name (ARN).
Integer maxResults
Maximum number of results required per page. If the value is not supplied, this will be defaulted to 25 per page.
String nextToken
A continuation token, if this is a continuation call.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn
or
SchemaName
and RegistryName
has to be provided.
SchemaId$SchemaName: The name of the schema. Either SchemaArn
or SchemaName
and
RegistryName
has to be provided.
Integer maxResults
Maximum number of results required per page. If the value is not supplied, this will be defaulted to 25 per page.
String nextToken
A continuation token, if this is a continuation call.
String catalogId
The Catalog ID of the table.
String databaseName
The name of the database in the catalog in which the table resides.
String tableName
The name of the table.
String type
The type of table optimizer. Currently, the only valid value is compaction
.
Integer maxResults
The maximum number of optimizer runs to return on each call.
String nextToken
A continuation token, if this is a continuation call.
String catalogId
The Catalog ID of the table.
String databaseName
The name of the database in the catalog in which the table resides.
String tableName
The name of the table.
String nextToken
A continuation token for paginating the returned list of optimizer runs, returned if the current segment of the list is not the last.
List<E> tableOptimizerRuns
A list of the optimizer runs associated with a table.
String nextToken
A continuation token, if this is a continuation request.
String dependentJobName
The name of the job for which to retrieve triggers. The trigger that can start this job is returned. If there is no such trigger, all triggers are returned.
Integer maxResults
The maximum size of a list to return.
Map<K,V> tags
Specifies to return only these tagged resources.
String toKey
After the mapping is applied, the name that the column should have. Can be the same as FromPath
.
List<E> fromPath
The table or column to be modified.
String fromType
The type of the data to be modified.
String toType
The data type that the data is to be modified to.
Boolean dropped
If true, then the column is removed.
List<E> children
Only applicable to nested data structures. If you want to change the parent structure, but also one of its
children, you can fill out this data structure. It is also Mapping
, but its FromPath
will be the parent's FromPath
plus the FromPath
from this structure.
For the children part, suppose you have the structure:
{ "FromPath": "OuterStructure", "ToKey": "OuterStructure", "ToType": "Struct", "Dropped": false, "Chidlren": [{ "FromPath": "inner", "ToKey": "inner", "ToType": "Double", "Dropped": false, }] }
You can specify a Mapping
that looks like:
{ "FromPath": "OuterStructure", "ToKey": "OuterStructure", "ToType": "Struct", "Dropped": false, "Chidlren": [{ "FromPath": "inner", "ToKey": "inner", "ToType": "Double", "Dropped": false, }] }
String name
The name of the transform node.
List<E> inputs
The data inputs identified by their node names.
String source
The source DynamicFrame
that will be merged with a staging DynamicFrame
.
List<E> primaryKeys
The list of primary key fields to match records from the source and staging dynamic frames.
String metricName
The name of the data quality metric used for generating the observation.
DataQualityMetricValues metricValues
An object of type DataQualityMetricValues
representing the analysis of the data quality metric
value.
List<E> newRules
A list of new data quality rules generated as part of the observation based on the data quality metric value.
String transformId
The unique transform ID that is generated for the machine learning transform. The ID is guaranteed to be unique and does not change.
String name
A user-defined name for the machine learning transform. Names are not guaranteed unique and can be changed at any time.
String description
A user-defined, long-form description text for the machine learning transform. Descriptions are not guaranteed to be unique and can be changed at any time.
String status
The current status of the machine learning transform.
Date createdOn
A timestamp. The time and date that this machine learning transform was created.
Date lastModifiedOn
A timestamp. The last point in time when this machine learning transform was modified.
List<E> inputRecordTables
A list of Glue table definitions used by the transform.
TransformParameters parameters
A TransformParameters
object. You can use parameters to tune (customize) the behavior of the machine
learning transform by specifying what data it learns from and your preference on various tradeoffs (such as
precision vs. recall, or accuracy vs. cost).
EvaluationMetrics evaluationMetrics
An EvaluationMetrics
object. Evaluation metrics provide an estimate of the quality of your machine
learning transform.
Integer labelCount
A count identifier for the labeling files generated by Glue for this transform. As you create a better transform, you can iteratively download, label, and upload the labeling file.
List<E> schema
A map of key-value pairs representing the columns and data types that this transform can run against. Has an upper bound of 100 columns.
String role
The name or Amazon Resource Name (ARN) of the IAM role with the required permissions. The required permissions include both Glue service role permissions to Glue resources, and Amazon S3 permissions required by the transform.
This role needs Glue service role permissions to allow access to resources in Glue. See Attach a Policy to IAM Users That Access Glue.
This role needs permission to your Amazon Simple Storage Service (Amazon S3) sources, targets, temporary directory, scripts, and any libraries used by the task run for this transform.
String glueVersion
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
Double maxCapacity
The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
MaxCapacity
is a mutually exclusive option with NumberOfWorkers
and
WorkerType
.
If either NumberOfWorkers
or WorkerType
is set, then MaxCapacity
cannot be
set.
If MaxCapacity
is set then neither NumberOfWorkers
or WorkerType
can be
set.
If WorkerType
is set, then NumberOfWorkers
is required (and vice versa).
MaxCapacity
and NumberOfWorkers
must both be at least 1.
When the WorkerType
field is set to a value other than Standard
, the
MaxCapacity
field is set automatically and becomes read-only.
String workerType
The type of predefined worker that is allocated when a task of this transform runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard
worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2
executors per worker.
For the G.1X
worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1
executor per worker.
For the G.2X
worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1
executor per worker.
MaxCapacity
is a mutually exclusive option with NumberOfWorkers
and
WorkerType
.
If either NumberOfWorkers
or WorkerType
is set, then MaxCapacity
cannot be
set.
If MaxCapacity
is set then neither NumberOfWorkers
or WorkerType
can be
set.
If WorkerType
is set, then NumberOfWorkers
is required (and vice versa).
MaxCapacity
and NumberOfWorkers
must both be at least 1.
Integer numberOfWorkers
The number of workers of a defined workerType
that are allocated when a task of the transform runs.
If WorkerType
is set, then NumberOfWorkers
is required (and vice versa).
Integer timeout
The timeout in minutes of the machine learning transform.
Integer maxRetries
The maximum number of times to retry after an MLTaskRun
of the machine learning transform fails.
TransformEncryption transformEncryption
The encryption-at-rest settings of the transform that apply to accessing user data. Machine learning transforms can access user data encrypted in Amazon S3 using KMS.
String mlUserDataEncryptionMode
The encryption mode applied to user data. Valid values are:
DISABLED: encryption is disabled
SSEKMS: use of server-side encryption with Key Management Service (SSE-KMS) for user data stored in Amazon S3.
String kmsKeyId
The ID for the customer-provided KMS key.
String connectionName
The name of the connection to use to connect to the Amazon DocumentDB or MongoDB target.
String path
The path of the Amazon DocumentDB or MongoDB target (database/collection).
Boolean scanAll
Indicates whether to scan all the records, or to sample rows from the table. Scanning all the records can take a long time when the table is not a high throughput table.
A value of true
means to scan all records, while a value of false
means to sample the
records. If no value is specified, the value defaults to true
.
String type
The type of Glue component represented by the node.
String name
The name of the Glue component represented by the node.
String uniqueId
The unique Id assigned to the node within the workflow.
TriggerNodeDetails triggerDetails
Details of the Trigger when the node represents a Trigger.
JobNodeDetails jobDetails
Details of the Job when the node represents a Job.
CrawlerNodeDetails crawlerDetails
Details of the crawler when the node represents a crawler.
Integer notifyDelayAfter
After a job run starts, the number of minutes to wait before sending a job run delay notification.
IcebergInput icebergInput
Specifies an IcebergInput
structure that defines an Apache Iceberg metadata table.
List<E> values
The values of the partition.
String databaseName
The name of the catalog database in which to create the partition.
String tableName
The name of the database table in which to create the partition.
Date creationTime
The time at which the partition was created.
Date lastAccessTime
The last time at which the partition was accessed.
StorageDescriptor storageDescriptor
Provides information about the physical location where the partition is stored.
Map<K,V> parameters
These key-value pairs define partition parameters.
Date lastAnalyzedTime
The last time at which column statistics were computed for this partition.
String catalogId
The ID of the Data Catalog in which the partition resides.
List<E> partitionValues
The values that define the partition.
ErrorDetail errorDetail
The details about the partition error.
String indexName
The name of the partition index.
List<E> keys
A list of one or more keys, as KeySchemaElement
structures, for the partition index.
String indexStatus
The status of the partition index.
The possible statuses are:
CREATING: The index is being created. When an index is in a CREATING state, the index or its table cannot be deleted.
ACTIVE: The index creation succeeds.
FAILED: The index creation fails.
DELETING: The index is deleted from the list of indexes.
List<E> backfillErrors
A list of errors that can occur when registering partition indexes for an existing table.
List<E> values
The values of the partition. Although this parameter is not required by the SDK, you must specify this parameter for a valid input.
The values for the keys for the new partition must be passed as an array of String objects that must be ordered in the same order as the partition keys appearing in the Amazon S3 prefix. Otherwise Glue will add the values to the wrong keys.
Date lastAccessTime
The last time at which the partition was accessed.
StorageDescriptor storageDescriptor
Provides information about the physical location where the partition is stored.
Map<K,V> parameters
These key-value pairs define partition parameters.
Date lastAnalyzedTime
The last time at which column statistics were computed for this partition.
String subnetId
The subnet ID used by the connection.
List<E> securityGroupIdList
The security group ID list used by the connection.
String availabilityZone
The connection's Availability Zone. This field is redundant because the specified subnet implies the Availability Zone to be used. Currently the field must be populated, but it will be deprecated in the future.
String name
The name of the transform node.
List<E> inputs
The node ID inputs to the transform.
String piiType
Indicates the type of PIIDetection transform.
List<E> entityTypesToDetect
Indicates the types of entities the PIIDetection transform will identify as PII data.
PII type entities include: PERSON_NAME, DATE, USA_SNN, EMAIL, USA_ITIN, USA_PASSPORT_NUMBER, PHONE_NUMBER, BANK_ACCOUNT, IP_ADDRESS, MAC_ADDRESS, USA_CPT_CODE, USA_HCPCS_CODE, USA_NATIONAL_DRUG_CODE, USA_MEDICARE_BENEFICIARY_IDENTIFIER, USA_HEALTH_INSURANCE_CLAIM_NUMBER, CREDIT_CARD, USA_NATIONAL_PROVIDER_IDENTIFIER, USA_DEA_NUMBER, USA_DRIVING_LICENSE
String outputColumnName
Indicates the output column name that will contain any entity type detected in that row.
Double sampleFraction
Indicates the fraction of the data to sample when scanning for PII entities.
Double thresholdFraction
Indicates the fraction of the data that must match an entity type in order for a column to be identified as PII data.
String maskValue
Indicates the value that will replace the detected entity.
DataLakePrincipal principal
The principal who is granted permissions.
List<E> permissions
The permissions that are granted to the principal.
String catalogId
The ID of the Data Catalog to set the security configuration for. If none is provided, the Amazon Web Services account ID is used by default.
DataCatalogEncryptionSettings dataCatalogEncryptionSettings
The security configuration to set.
String policyInJson
Contains the policy document to set, in JSON format.
String resourceArn
Do not use. For internal use only.
String policyHashCondition
The hash value returned when the previous policy was set using PutResourcePolicy
. Its purpose is to
prevent concurrent modifications of a policy. Do not use this parameter if no previous policy has been set.
String policyExistsCondition
A value of MUST_EXIST
is used to update a policy. A value of NOT_EXIST
is used to
create a new policy. If a value of NONE
or a null value is used, the call does not depend on the
existence of a policy.
String enableHybrid
If 'TRUE'
, indicates that you are using both methods to grant cross-account access to Data Catalog
resources:
By directly updating the resource policy with PutResourcePolicy
By using the Grant permissions command on the Amazon Web Services Management Console.
Must be set to 'TRUE'
if you have already used the Management Console to grant cross-account access,
otherwise the call fails. Default is 'FALSE'.
String policyHash
A hash of the policy that has just been set. This must be included in a subsequent call that overwrites or updates this policy.
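A minimal boto3 sketch of setting a Data Catalog resource policy with the condition fields above; the policy statement, principal, and resources are placeholders, not a recommended policy.

import boto3
import json

glue = boto3.client("glue")

# Create a new policy; NOT_EXIST guards against overwriting an existing one.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},   # placeholder account
        "Action": "glue:GetTable",
        "Resource": "arn:aws:glue:us-east-1:123456789012:*",      # placeholder resources
    }],
}
response = glue.put_resource_policy(
    PolicyInJson=json.dumps(policy),
    PolicyExistsCondition="NOT_EXIST",
    EnableHybrid="TRUE",
)
policy_hash = response["PolicyHash"]  # pass as PolicyHashCondition on the next update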
SchemaId schemaId
The unique ID for the schema.
SchemaVersionNumber schemaVersionNumber
The version number of the schema.
String schemaVersionId
The unique version ID of the schema version.
MetadataKeyValuePair metadataKeyValue
The metadata key's corresponding value.
String schemaArn
The Amazon Resource Name (ARN) for the schema.
String schemaName
The name for the schema.
String registryName
The name for the registry.
Boolean latestVersion
The latest version of the schema.
Long versionNumber
The version number of the schema.
String schemaVersionId
The unique version ID of the schema version.
String metadataKey
The metadata key.
String metadataValue
The value of the metadata key.
SchemaId schemaId
A wrapper structure that may contain the schema name and Amazon Resource Name (ARN).
SchemaVersionNumber schemaVersionNumber
The version number of the schema.
String schemaVersionId
The unique version ID of the schema version.
List<E> metadataList
Search key-value pairs for metadata, if they are not provided all the metadata information will be fetched.
Integer maxResults
Maximum number of results required per page. If the value is not supplied, this will be defaulted to 25 per page.
String nextToken
A continuation token, if this is a continuation call.
Map<K,V> metadataInfoMap
A map of a metadata key and associated values.
String schemaVersionId
The unique version ID of the schema version.
String nextToken
A continuation token for paginating the returned list of tokens, returned if the current segment of the list is not the last.
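A brief sketch of querying metadata for a schema version with boto3, using the request and response fields described above; the schema, registry, and metadata values are placeholders.

import boto3

glue = boto3.client("glue")

# Look up metadata attached to version 1 of a schema in the Schema Registry.
response = glue.query_schema_version_metadata(
    SchemaId={"SchemaName": "example-schema", "RegistryName": "example-registry"},  # placeholders
    SchemaVersionNumber={"VersionNumber": 1},
    MetadataList=[{"MetadataKey": "owner", "MetadataValue": "data-platform"}],      # optional filter
    MaxResults=25,
)
for key, info in response["MetadataInfoMap"].items():
    print(key, info["MetadataValue"])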
String name
The name of the Glue Studio node.
List<E> inputs
The nodes that are inputs to the recipe node, identified by id.
RecipeReference recipeReference
A reference to the DataBrew recipe used by the node.
String recrawlBehavior
Specifies whether to crawl the entire dataset again or to crawl only folders that were added since the last crawler run.
A value of CRAWL_EVERYTHING
specifies crawling the entire dataset again.
A value of CRAWL_NEW_FOLDERS_ONLY
specifies crawling only folders that were added since the last
crawler run.
A value of CRAWL_EVENT_MODE
specifies crawling only the changes identified by Amazon S3 events.
String name
The name of the Amazon Redshift data store.
String database
The database to read from.
String table
The database table to read from.
String redshiftTmpDir
The Amazon S3 path where temporary data can be staged when copying out of the database.
String tmpDirIAMRole
The IAM role with permissions.
String name
The name of the data target.
List<E> inputs
The nodes that are inputs to the data target.
String database
The name of the database to write to.
String table
The name of the table in the database to write to.
String redshiftTmpDir
The Amazon S3 path where temporary data can be staged when copying out of the database.
String tmpDirIAMRole
The IAM role with permissions.
UpsertRedshiftTargetOptions upsertRedshiftOptions
The set of options to configure an upsert operation when writing to a Redshift target.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. Either SchemaArn
or
SchemaName
and RegistryName
has to be provided.
SchemaId$SchemaName: The name of the schema. Either SchemaArn
or SchemaName
and
RegistryName
has to be provided.
String schemaDefinition
The schema definition using the DataFormat
setting for the SchemaName
.
String registryName
The name of the registry.
String registryArn
The Amazon Resource Name (ARN) of the registry.
String description
A description of the registry.
String status
The status of the registry.
String createdTime
The date the registry was created.
String updatedTime
The date the registry was updated.
SchemaId schemaId
A wrapper structure that may contain the schema name and Amazon Resource Name (ARN).
SchemaVersionNumber schemaVersionNumber
The version number of the schema.
String schemaVersionId
The unique version ID of the schema version.
MetadataKeyValuePair metadataKeyValue
The value of the metadata key.
String schemaArn
The Amazon Resource Name (ARN) of the schema.
String schemaName
The name of the schema.
String registryName
The name of the registry.
Boolean latestVersion
The latest version of the schema.
Long versionNumber
The version number of the schema.
String schemaVersionId
The version ID for the schema version.
String metadataKey
The metadata key.
String metadataValue
The value of the metadata key.
String name
The name of the transform node.
List<E> inputs
The data inputs identified by their node names.
List<E> sourcePath
A JSON path to a variable in the data structure for the source data.
List<E> targetPath
A JSON path to a variable in the data structure for the target data.
JobBookmarkEntry jobBookmarkEntry
The reset bookmark entry.
String numberOfBytesCompacted
The number of bytes removed by the compaction job run.
String numberOfFilesCompacted
The number of files removed by the compaction job run.
String numberOfDpus
The number of DPU hours consumed by the job.
String jobDurationInHour
The duration of the job in hours.
Integer id
Returns the Id of the statement that was run.
String name
The name of the Delta Lake data source.
String database
The name of the database to read from.
String table
The name of the table in the database to read from.
Map<K,V> additionalDeltaOptions
Specifies additional connection options.
List<E> outputSchemas
Specifies the data schema for the Delta Lake source.
String name
The name of the Hudi data source.
String database
The name of the database to read from.
String table
The name of the table in the database to read from.
Map<K,V> additionalHudiOptions
Specifies additional connection options.
List<E> outputSchemas
Specifies the data schema for the Hudi source.
String name
The name of the data store.
String database
The database to read from.
String table
The database table to read from.
String partitionPredicate
Partitions satisfying this predicate are deleted. Files within the retention period in these partitions are not
deleted. Set to ""
– empty by default.
S3SourceAdditionalOptions additionalOptions
Specifies additional connection options.
String name
The name of the data target.
List<E> inputs
The nodes that are inputs to the data target.
List<E> partitionKeys
Specifies native partitioning using a sequence of keys.
String table
The name of the table in the database to write to.
String database
The name of the database to write to.
CatalogSchemaChangePolicy schemaChangePolicy
A policy that specifies update behavior for the crawler.
String name
The name of the data store.
List<E> paths
A list of the Amazon S3 paths to read from.
String compressionType
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension.
Possible values are "gzip"
and "bzip"
).
List<E> exclusions
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "[\"**.pdf\"]" excludes all PDF files.
String groupSize
The target group size in bytes. The default is computed based on the input data size and the size of your
cluster. When there are fewer than 50,000 input files, "groupFiles"
must be set to
"inPartition"
for this to take effect.
String groupFiles
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with
fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000
files, set this parameter to "none"
.
Boolean recurse
If set to true, recursively reads files in all subdirectories under the specified paths.
Integer maxBand
This option controls the duration in milliseconds after which the Amazon S3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
Integer maxFilesInBand
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
S3DirectSourceAdditionalOptions additionalOptions
Specifies additional connection options.
String separator
Specifies the delimiter character. The default is a comma: ",", but any other character can be specified.
String escaper
Specifies a character to use for escaping. This option is used only when reading CSV files. The default value is
none
. If enabled, the character which immediately follows is used as-is, except for a small set of
well-known escapes (\n
, \r
, \t
, and \0
).
String quoteChar
Specifies the character to use for quoting. The default is a double quote: '"'
. Set this to
-1
to turn off quoting entirely.
Boolean multiline
A Boolean value that specifies whether a single record can span multiple lines. This can occur when a field
contains a quoted new-line character. You must set this option to True if any record spans multiple lines. The
default value is False
, which allows for more aggressive file-splitting during parsing.
Boolean withHeader
A Boolean value that specifies whether to treat the first line as a header. The default value is
False
.
Boolean writeHeader
A Boolean value that specifies whether to write the header to output. The default value is True
.
Boolean skipFirst
A Boolean value that specifies whether to skip the first data line. The default value is False
.
Boolean optimizePerformance
A Boolean value that specifies whether to use the advanced SIMD CSV reader along with Apache Arrow based columnar memory formats. Only available in Glue version 3.0.
List<E> outputSchemas
Specifies the data schema for the S3 CSV source.
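In a job script, the S3 and CSV options above map onto connection_options and format_options of a DynamicFrame read. The following awsglue sketch assumes the Glue runtime; the bucket path and grouping values are placeholders chosen for illustration.

from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read CSV files from S3, grouping small files and treating the first line as a header.
csv_frame = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={
        "paths": ["s3://example-bucket/input/"],   # placeholder path
        "recurse": True,
        "groupFiles": "inPartition",
        "groupSize": "134217728",                  # ~128 MB target groups (illustrative)
        "exclusions": "[\"**.pdf\"]",
    },
    format="csv",
    format_options={
        "withHeader": True,
        "separator": ",",
        "quoteChar": "\"",
        "multiline": False,
    },
    transformation_ctx="s3_csv_source",
)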
String name
The name of the data target.
List<E> inputs
The nodes that are inputs to the data target.
List<E> partitionKeys
Specifies native partitioning using a sequence of keys.
String table
The name of the table in the database to write to.
String database
The name of the database to write to.
Map<K,V> additionalOptions
Specifies additional connection options for the connector.
CatalogSchemaChangePolicy schemaChangePolicy
A policy that specifies update behavior for the crawler.
String name
The name of the data target.
List<E> inputs
The nodes that are inputs to the data target.
List<E> partitionKeys
Specifies native partitioning using a sequence of keys.
String path
The Amazon S3 path of your Delta Lake data source to write to.
String compression
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension.
Possible values are "gzip"
and "bzip"
).
String format
Specifies the data output format for the target.
Map<K,V> additionalOptions
Specifies additional connection options for the connector.
DirectSchemaChangePolicy schemaChangePolicy
A policy that specifies update behavior for the crawler.
String name
The name of the Delta Lake source.
List<E> paths
A list of the Amazon S3 paths to read from.
Map<K,V> additionalDeltaOptions
Specifies additional connection options.
S3DirectSourceAdditionalOptions additionalOptions
Specifies additional options for the connector.
List<E> outputSchemas
Specifies the data schema for the Delta Lake source.
Long boundedSize
Sets the upper limit for the target size of the dataset in bytes that will be processed.
Long boundedFiles
Sets the upper limit for the target number of files that will be processed.
Boolean enableSamplePath
Sets option to enable a sample path.
String samplePath
If enabled, specifies the sample path.
String name
The name of the data target.
List<E> inputs
The nodes that are inputs to the data target.
List<E> partitionKeys
Specifies native partitioning using a sequence of keys.
String path
A single Amazon S3 path to write to.
String compression
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension.
Possible values are "gzip"
and "bzip"
).
String format
Specifies the data output format for the target.
DirectSchemaChangePolicy schemaChangePolicy
A policy that specifies update behavior for the crawler.
String name
The name of the data target.
List<E> inputs
The nodes that are inputs to the data target.
List<E> partitionKeys
Specifies native partitioning using a sequence of keys.
String path
A single Amazon S3 path to write to.
String compression
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension.
Possible values are "gzip"
and "bzip"
).
DirectSchemaChangePolicy schemaChangePolicy
A policy that specifies update behavior for the crawler.
String name
The name of the data target.
List<E> inputs
The nodes that are inputs to the data target.
List<E> partitionKeys
Specifies native partitioning using a sequence of keys.
String table
The name of the table in the database to write to.
String database
The name of the database to write to.
Map<K,V> additionalOptions
Specifies additional connection options for the connector.
CatalogSchemaChangePolicy schemaChangePolicy
A policy that specifies update behavior for the crawler.
String name
The name of the data target.
List<E> inputs
The nodes that are inputs to the data target.
String path
The Amazon S3 path of your Hudi data source to write to.
String compression
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension.
Possible values are "gzip"
and "bzip"
).
List<E> partitionKeys
Specifies native partitioning using a sequence of keys.
String format
Specifies the data output format for the target.
Map<K,V> additionalOptions
Specifies additional connection options for the connector.
DirectSchemaChangePolicy schemaChangePolicy
A policy that specifies update behavior for the crawler.
String name
The name of the Hudi source.
List<E> paths
A list of the Amazon S3 paths to read from.
Map<K,V> additionalHudiOptions
Specifies additional connection options.
S3DirectSourceAdditionalOptions additionalOptions
Specifies additional options for the connector.
List<E> outputSchemas
Specifies the data schema for the Hudi source.
String name
The name of the data store.
List<E> paths
A list of the Amazon S3 paths to read from.
String compressionType
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension.
Possible values are "gzip"
and "bzip"
).
List<E> exclusions
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "[\"**.pdf\"]" excludes all PDF files.
String groupSize
The target group size in bytes. The default is computed based on the input data size and the size of your
cluster. When there are fewer than 50,000 input files, "groupFiles"
must be set to
"inPartition"
for this to take effect.
String groupFiles
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with
fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000
files, set this parameter to "none"
.
Boolean recurse
If set to true, recursively reads files in all subdirectories under the specified paths.
Integer maxBand
This option controls the duration in milliseconds after which the Amazon S3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
Integer maxFilesInBand
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
S3DirectSourceAdditionalOptions additionalOptions
Specifies additional connection options.
String jsonPath
A JsonPath string defining the JSON data.
Boolean multiline
A Boolean value that specifies whether a single record can span multiple lines. This can occur when a field
contains a quoted new-line character. You must set this option to True if any record spans multiple lines. The
default value is False
, which allows for more aggressive file-splitting during parsing.
List<E> outputSchemas
Specifies the data schema for the S3 JSON source.
String name
The name of the data store.
List<E> paths
A list of the Amazon S3 paths to read from.
String compressionType
Specifies how the data is compressed. This is generally not necessary if the data has a standard file extension.
Possible values are "gzip"
and "bzip"
).
List<E> exclusions
A string containing a JSON list of Unix-style glob patterns to exclude. For example, "[\"**.pdf\"]" excludes all PDF files.
String groupSize
The target group size in bytes. The default is computed based on the input data size and the size of your
cluster. When there are fewer than 50,000 input files, "groupFiles"
must be set to
"inPartition"
for this to take effect.
String groupFiles
Grouping files is turned on by default when the input contains more than 50,000 files. To turn on grouping with
fewer than 50,000 files, set this parameter to "inPartition". To disable grouping when there are more than 50,000
files, set this parameter to "none"
.
Boolean recurse
If set to true, recursively reads files in all subdirectories under the specified paths.
Integer maxBand
This option controls the duration in milliseconds after which the Amazon S3 listing is likely to be consistent. Files with modification timestamps falling within the last maxBand milliseconds are tracked specially when using JobBookmarks to account for Amazon S3 eventual consistency. Most users don't need to set this option. The default is 900000 milliseconds, or 15 minutes.
Integer maxFilesInBand
This option specifies the maximum number of files to save from the last maxBand seconds. If this number is exceeded, extra files are skipped and only processed in the next job run.
S3DirectSourceAdditionalOptions additionalOptions
Specifies additional connection options.
List<E> outputSchemas
Specifies the data schema for the S3 Parquet source.
String path
The path to the Amazon S3 target.
List<E> exclusions
A list of glob patterns used to exclude from the crawl. For more information, see Catalog Tables with a Crawler.
String connectionName
The name of a connection which allows a job or crawler to access data in Amazon S3 within an Amazon Virtual Private Cloud environment (Amazon VPC).
Integer sampleSize
Sets the number of files in each leaf folder to be crawled when crawling sample files in a dataset. If not set, all the files are crawled. A valid value is an integer between 1 and 249.
String eventQueueArn
A valid Amazon SQS ARN. For example, arn:aws:sqs:region:account:sqs
.
String dlqEventQueueArn
A valid Amazon dead-letter SQS ARN. For example, arn:aws:sqs:region:account:deadLetterQueue
.
String scheduleExpression
A cron
expression used to specify the schedule (see Time-Based Schedules for
Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify:
cron(15 12 * * ? *)
.
String state
The state of the schedule.
String schemaArn
The Amazon Resource Name (ARN) of the schema. One of SchemaArn
or SchemaName
has to be
provided.
String schemaName
The name of the schema. One of SchemaArn
or SchemaName
has to be provided.
String registryName
The name of the schema registry that contains the schema.
String registryName
The name of the registry where the schema resides.
String schemaName
The name of the schema.
String schemaArn
The Amazon Resource Name (ARN) for the schema.
String description
A description for the schema.
String schemaStatus
The status of the schema.
String createdTime
The date and time that a schema was created.
String updatedTime
The date and time that a schema was updated.
SchemaId schemaId
A structure that contains schema identity fields. Either this or the SchemaVersionId
has to be
provided.
String schemaVersionId
The unique ID assigned to a version of the schema. Either this or the SchemaId
has to be provided.
Long schemaVersionNumber
The version number of the schema.
Long versionNumber
The version number of the schema.
ErrorDetails errorDetails
The details of the error for the schema version.
String schemaArn
The Amazon Resource Name (ARN) of the schema.
String schemaVersionId
The unique identifier of the schema version.
Long versionNumber
The version number of the schema.
String status
The status of the schema version.
String createdTime
The date and time the schema version was created.
String catalogId
A unique identifier, consisting of account_id
.
String nextToken
A continuation token, included if this is a continuation call.
List<E> filters
A list of key-value pairs, and a comparator used to filter the search results. Returns all entities matching the predicate.
The Comparator
member of the PropertyPredicate
struct is used only for time fields, and
can be omitted for other field types. Also, when comparing string values, such as when Key=Name
, a
fuzzy match algorithm is used. The Key
field (for example, the value of the Name
field)
is split on certain punctuation characters, for example, -, :, #, etc. into tokens. Then each token is
exact-match compared with the Value
member of PropertyPredicate
. For example, if
Key=Name
and Value=link
, tables named customer-link
and
xx-link-yy
are returned, but xxlinkyy
is not returned.
String searchText
A string used for a text search.
Specifying a value in quotes filters based on an exact match to the value.
List<E> sortCriteria
A list of criteria for sorting the results by a field name, in an ascending or descending order.
Integer maxResults
The maximum number of tables to return in a single response.
String resourceShareType
Allows you to specify that you want to search the tables shared with your account. The allowable values are
FOREIGN
or ALL
.
If set to FOREIGN
, will search the tables shared with your account.
If set to ALL
, will search the tables shared with your account, as well as the tables in your local
account.
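A short boto3 sketch of the fuzzy-match behavior described above; the search text is taken from the example in the text, while the sort field name is an illustrative assumption.

import boto3

glue = boto3.client("glue")

# Fuzzy-match tables whose Name contains the token "link", including shared tables.
response = glue.search_tables(
    SearchText="link",
    Filters=[{"Key": "Name", "Value": "link"}],        # Comparator is only needed for time fields
    SortCriteria=[{"FieldName": "UpdateTime", "Sort": "DESC"}],  # field name is an assumption
    ResourceShareType="ALL",
    MaxResults=50,
)
for table in response["TableList"]:
    print(table["DatabaseName"], table["Name"])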
String name
The name of the security configuration.
Date createdTimeStamp
The time at which this security configuration was created.
EncryptionConfiguration encryptionConfiguration
The encryption configuration associated with this security configuration.
String id
The ID of the session.
Date createdOn
The time and date when the session was created.
String status
The session status.
String errorMessage
The error message displayed during the session.
String description
The description of the session.
String role
The name or Amazon Resource Name (ARN) of the IAM role associated with the Session.
SessionCommand command
The command object. See SessionCommand.
Map<K,V> defaultArguments
A map array of key-value pairs. Max is 75 pairs.
ConnectionsList connections
The number of connections used for the session.
Double progress
The code execution progress of the session.
Double maxCapacity
The number of Glue data processing units (DPUs) that can be allocated when the job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB memory.
String securityConfiguration
The name of the SecurityConfiguration structure to be used with the session.
String glueVersion
The Glue version determines the versions of Apache Spark and Python that Glue supports. The GlueVersion must be greater than 2.0.
Integer numberOfWorkers
The number of workers of a defined WorkerType
to use for the session.
String workerType
The type of predefined worker that is allocated when a session runs. Accepts a value of G.1X
,
G.2X
, G.4X
, or G.8X
for Spark sessions. Accepts the value
Z.2X
for Ray sessions.
Date completedOn
The date and time that this session was completed.
Double executionTime
The total time the session ran for.
Double dPUSeconds
The DPUs consumed by the session (formula: ExecutionTime * MaxCapacity).
Integer idleTimeout
The number of idle minutes before the session times out.
List<E> skewedColumnNames
A list of names of columns that contain skewed values.
List<E> skewedColumnValues
A list of values that appear so frequently as to be considered skewed.
Map<K,V> skewedColumnValueLocationMaps
A mapping of skewed values to the columns that contain them.
String sourceType
Specifies how retrieved data is specified. Valid values: "table"
, "query"
.
Option connection
Specifies a Glue Data Catalog Connection to a Snowflake endpoint.
String schema
Specifies a Snowflake database schema for your node to use.
String table
Specifies a Snowflake table for your node to use.
String database
Specifies a Snowflake database for your node to use.
String tempDir
Not currently used.
Option iamRole
Not currently used.
Map<K,V> additionalOptions
Specifies additional options passed to the Snowflake connector. If options are specified elsewhere in this node, this will take precedence.
String sampleQuery
A SQL string used to retrieve data with the query
source type.
String preAction
A SQL string run before the Snowflake connector performs its standard actions.
String postAction
A SQL string run after the Snowflake connector performs its standard actions.
String action
Specifies what action to take when writing to a table with preexisting data. Valid values: append
,
merge
, truncate
, drop
.
Boolean upsert
Used when Action is append
. Specifies the resolution behavior when a row already exists. If true,
preexisting rows will be updated. If false, those rows will be inserted.
String mergeAction
Specifies a merge action. Valid values: simple
, custom
. If simple, merge behavior is
defined by MergeWhenMatched
and MergeWhenNotMatched
. If custom, defined by
MergeClause
.
String mergeWhenMatched
Specifies how to resolve records that match preexisting data when merging. Valid values: update
,
delete
.
String mergeWhenNotMatched
Specifies how to process records that do not match preexisting data when merging. Valid values:
insert
, none
.
String mergeClause
A SQL statement that specifies a custom merge behavior.
String stagingTable
The name of a staging table used when performing merge
or upsert append
actions. Data
is written to this table, then moved to table
by a generated postaction.
List<E> selectedColumns
Specifies the columns combined to identify a record when detecting matches for merges and upserts. A list of
structures with value
, label
and description
keys. Each structure
describes a column.
Boolean autoPushdown
Specifies whether automatic query pushdown is enabled. If pushdown is enabled, then when a query is run on Spark, if part of the query can be "pushed down" to the Snowflake server, it is pushed down. This improves performance of some queries.
List<E> tableSchema
Manually defines the target schema for the node. A list of structures with value
,
label
and description
keys. Each structure defines a column.
String name
The name of the Snowflake data source.
SnowflakeNodeData data
Configuration for the Snowflake data source.
List<E> outputSchemas
Specifies user-defined schemas for your output data.
String name
The name of the Snowflake target.
SnowflakeNodeData data
Specifies the data of the Snowflake target node.
List<E> inputs
The nodes that are inputs to the data target.
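To make the merge-related fields above concrete, here is a hypothetical sketch of a SnowflakeTarget node expressed as the Python dict you might pass inside CodeGenConfigurationNodes when creating a job. The PascalCase keys follow the API's naming convention, and every value (connection, database, schema, table, columns) is a placeholder; this is an assumption-level illustration, not a verified node layout.

snowflake_target_node = {
    "SnowflakeTarget": {
        "Name": "Snowflake sink",
        "Inputs": ["node-1"],                       # id of the upstream node (placeholder)
        "Data": {
            "Connection": {"Value": "snowflake-connection", "Label": "snowflake-connection"},
            "Database": "ANALYTICS",
            "Schema": "PUBLIC",
            "Table": "ORDERS",
            "Action": "merge",
            "MergeAction": "simple",
            "MergeWhenMatched": "update",
            "MergeWhenNotMatched": "insert",
            "SelectedColumns": [{"Value": "ORDER_ID", "Label": "ORDER_ID"}],
            "StagingTable": "ORDERS_STAGING",
            "AutoPushdown": True,
        },
    }
}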
String provider
The provider for the remote repository.
String repository
The name of the remote repository that contains the job artifacts.
String owner
The owner of the remote repository that contains the job artifacts.
String branch
An optional branch in the remote repository.
String folder
An optional folder in the remote repository.
String lastCommitId
The last commit ID for a commit in the remote repository.
String authStrategy
The type of authentication, which can be an authentication token stored in Amazon Web Services Secrets Manager, or a personal access token.
String authToken
The value of an authorization token.
String name
The name of the data source.
String connectionName
The name of the connection that is associated with the connector.
String connectorName
The name of a connector that assists with accessing the data store in Glue Studio.
String connectionType
The type of connection, such as marketplace.spark or custom.spark, designating a connection to an Apache Spark data store.
Map<K,V> additionalOptions
Additional connection options for the connector.
List<E> outputSchemas
Specifies data schema for the custom spark source.
String name
The name of the data target.
List<E> inputs
The nodes that are inputs to the data target.
String connectionName
The name of a connection for an Apache Spark connector.
String connectorName
The name of an Apache Spark connector.
String connectionType
The type of connection, such as marketplace.spark or custom.spark, designating a connection to an Apache Spark data store.
Map<K,V> additionalOptions
Additional connection options for the connector.
List<E> outputSchemas
Specifies the data schema for the custom spark target.
String name
The name of the transform node.
List<E> inputs
The data inputs identified by their node names. You can associate a table name with each input node to use in the SQL query. The name you choose must meet the Spark SQL naming restrictions.
String sqlQuery
A SQL query that must use Spark SQL syntax and return a single data set.
List<E> sqlAliases
A list of aliases. An alias allows you to specify what name to use in the SQL for a given input. For example, suppose you have a data source named "MyDataSource". If you specify From as MyDataSource and Alias as SqlName, then in your SQL you can run: select * from SqlName and that gets data from MyDataSource.
List<E> outputSchemas
Specifies the data schema for the SparkSQL transform.
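As an illustration of the alias mechanism described above, here is a minimal, hypothetical sketch (AWS SDK for Java v2) of a SparkSQL transform node whose query refers to the input "MyDataSource" through the alias SqlName; the node names and query are placeholders.

    import software.amazon.awssdk.services.glue.model.SparkSQL;
    import software.amazon.awssdk.services.glue.model.SqlAlias;

    SparkSQL sqlNode = SparkSQL.builder()
            .name("FilterRecentOrders")
            .inputs("MyDataSource")                        // node name of the upstream data source
            .sqlAliases(SqlAlias.builder()
                    .from("MyDataSource")
                    .alias("SqlName")
                    .build())
            .sqlQuery("select * from SqlName where order_date > '2023-01-01'")
            .build();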
String name
The name of the transform node.
List<E> inputs
The data inputs identified by their node names.
String path
A path in Amazon S3 where the transform will write a subset of records from the dataset to a JSON file in an Amazon S3 bucket.
Integer topk
Specifies a number of records to write starting from the beginning of the dataset.
Double prob
The probability (a decimal value with a maximum value of 1) of picking any given record. A value of 1 indicates that each row read from the dataset should be included in the sample output.
String runId
The run ID for this blueprint run.
String databaseName
The name of the database where the table resides.
String tableName
The name of the table for which to generate statistics.
List<E> columnNameList
A list of column names for which to generate statistics. If none is supplied, all column names for the table will be used by default.
String role
The IAM role that the service assumes to generate statistics.
Double sampleSize
The percentage of rows used to generate statistics. If none is supplied, the entire table will be used to generate stats.
String catalogID
The ID of the Data Catalog where the table resides. If none is supplied, the Amazon Web Services account ID is used by default.
String securityConfiguration
Name of the security configuration that is used to encrypt CloudWatch logs for the column stats task run.
String columnStatisticsTaskRunId
The identifier for the column statistics task run.
String name
Name of the crawler to start.
String crawlerName
Name of the crawler to schedule.
DataSource dataSource
The data source (Glue table) associated with this run.
String role
An IAM role supplied to encrypt the results of the run.
Integer numberOfWorkers
The number of G.1X workers to be used in the run. The default is 5.
Integer timeout
The timeout for a run in minutes. This is the maximum time that a run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
String createdRulesetName
A name for the ruleset.
String clientToken
Used for idempotency and is recommended to be set to a random ID (such as a UUID) to avoid creating or starting multiple instances of the same resource.
String runId
The unique run identifier associated with this run.
DataSource dataSource
The data source (Glue table) associated with this run.
String role
An IAM role supplied to encrypt the results of the run.
Integer numberOfWorkers
The number of G.1X workers to be used in the run. The default is 5.
Integer timeout
The timeout for a run in minutes. This is the maximum time that a run can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
String clientToken
Used for idempotency and is recommended to be set to a random ID (such as a UUID) to avoid creating or starting multiple instances of the same resource.
DataQualityEvaluationRunAdditionalRunOptions additionalRunOptions
Additional run options you can specify for an evaluation run.
List<E> rulesetNames
A list of ruleset names.
Map<K,V> additionalDataSources
A map of reference strings to additional data sources you can specify for an evaluation run.
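Taken together, the request fields above can be assembled as in the following hypothetical sketch (AWS SDK for Java v2); the database, table, ruleset, and role names are placeholders.

    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.DataSource;
    import software.amazon.awssdk.services.glue.model.GlueTable;
    import software.amazon.awssdk.services.glue.model.StartDataQualityRulesetEvaluationRunRequest;
    import software.amazon.awssdk.services.glue.model.StartDataQualityRulesetEvaluationRunResponse;

    GlueClient glue = GlueClient.create();

    StartDataQualityRulesetEvaluationRunResponse response = glue.startDataQualityRulesetEvaluationRun(
            StartDataQualityRulesetEvaluationRunRequest.builder()
                    .dataSource(DataSource.builder()
                            .glueTable(GlueTable.builder()
                                    .databaseName("sales_db")     // placeholder database
                                    .tableName("orders")          // placeholder table
                                    .build())
                            .build())
                    .role("arn:aws:iam::123456789012:role/GlueDataQualityRole") // placeholder role
                    .rulesetNames("orders_quality_ruleset")                     // placeholder ruleset
                    .numberOfWorkers(5)
                    .timeout(2880)
                    .build());

    String runId = response.runId();   // the unique run identifier returned for this run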
String runId
The unique run identifier associated with this run.
String taskRunId
The unique identifier for the task run.
String taskRunId
The unique identifier for the task run.
String jobName
The name of the job definition to use.
String jobRunId
The ID of a previous JobRun to retry.
Map<K,V> arguments
The job arguments associated with this run. For this job run, they replace the default arguments set in the job definition itself.
You can specify arguments here that your own job-execution script consumes, as well as arguments that Glue itself consumes.
Job arguments may be logged. Do not pass plaintext secrets as arguments. Retrieve secrets from a Glue Connection, Secrets Manager or other secret management mechanism if you intend to keep them within the Job.
For information about how to specify and consume your own Job arguments, see the Calling Glue APIs in Python topic in the developer guide.
For information about the arguments you can provide to this field when configuring Spark jobs, see the Special Parameters Used by Glue topic in the developer guide.
For information about the arguments you can provide to this field when configuring Ray jobs, see Using job parameters in Ray jobs in the developer guide.
Integer allocatedCapacity
This field is deprecated. Use MaxCapacity instead.
The number of Glue data processing units (DPUs) to allocate to this JobRun. You can allocate a minimum of 2 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
Integer timeout
The JobRun timeout in minutes. This is the maximum time that a job run can consume resources before it is terminated and enters TIMEOUT status. This value overrides the timeout value set in the parent job.
Streaming jobs do not have a timeout. The default for non-streaming jobs is 2,880 minutes (48 hours).
Double maxCapacity
For Glue version 1.0 or earlier jobs, using the standard worker type, the number of Glue data processing units (DPUs) that can be allocated when this job runs. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
For Glue version 2.0+ jobs, you cannot specify a Maximum capacity. Instead, you should specify a Worker type and the Number of workers.
Do not set MaxCapacity if using WorkerType and NumberOfWorkers.
The value that can be allocated for MaxCapacity depends on whether you are running a Python shell job, an Apache Spark ETL job, or an Apache Spark streaming ETL job:
When you specify a Python shell job (JobCommand.Name="pythonshell"), you can allocate either 0.0625 or 1 DPU. The default is 0.0625 DPU.
When you specify an Apache Spark ETL job (JobCommand.Name="glueetl") or Apache Spark streaming ETL job (JobCommand.Name="gluestreaming"), you can allocate from 2 to 100 DPUs. The default is 10 DPUs. This job type cannot have a fractional DPU allocation.
String securityConfiguration
The name of the SecurityConfiguration structure to be used with this job run.
NotificationProperty notificationProperty
Specifies configuration properties of a job run notification.
String workerType
The type of predefined worker that is allocated when a job runs. Accepts a value of G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.
For the G.1X worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, as it offers a scalable and cost-effective way to run most jobs.
For the G.2X worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of memory) with 128GB disk (approximately 77GB free), and provides 1 executor per worker. We recommend this worker type for workloads such as data transforms, joins, and queries, as it offers a scalable and cost-effective way to run most jobs.
For the G.4X worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB of memory) with 256GB disk (approximately 235GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).
For the G.8X worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB of memory) with 512GB disk (approximately 487GB free), and provides 1 executor per worker. We recommend this worker type for jobs whose workloads contain your most demanding transforms, aggregations, joins, and queries. This worker type is available only for Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions as supported for the G.4X worker type.
For the G.025X worker type, each worker maps to 0.25 DPU (2 vCPUs, 4 GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker. We recommend this worker type for low volume streaming jobs. This worker type is only available for Glue version 3.0 streaming jobs.
For the Z.2X worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB of memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray workers based on the autoscaler.
Integer numberOfWorkers
The number of workers of a defined workerType that are allocated when a job runs.
String executionClass
Indicates whether the job is run with a standard or flexible execution class. The standard execution class is ideal for time-sensitive workloads that require fast job startup and dedicated resources.
The flexible execution class is appropriate for time-insensitive jobs whose start and completion times may vary.
Only jobs with Glue version 3.0 and above and command type glueetl will be allowed to set ExecutionClass to FLEX. The flexible execution class is available for Spark jobs.
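A brief, hypothetical sketch (AWS SDK for Java v2) showing how the run-level fields above are typically combined in a StartJobRun request; the job name, argument keys, and values are placeholders.

    import java.util.Map;
    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.StartJobRunRequest;
    import software.amazon.awssdk.services.glue.model.StartJobRunResponse;

    GlueClient glue = GlueClient.create();

    StartJobRunResponse response = glue.startJobRun(StartJobRunRequest.builder()
            .jobName("my-etl-job")                                        // placeholder job name
            .arguments(Map.of("--input_path", "s3://my-bucket/input/"))  // placeholder argument
            .workerType("G.1X")                                          // use worker type + number of workers, not MaxCapacity
            .numberOfWorkers(10)
            .timeout(120)                                                // minutes; overrides the parent job's timeout
            .executionClass("STANDARD")                                  // or FLEX for Glue 3.0+ glueetl jobs
            .build());

    String jobRunId = response.jobRunId();   // the ID assigned to this job run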
String jobRunId
The ID assigned to this job run.
String transformId
The unique identifier of the machine learning transform.
String taskRunId
The unique identifier associated with this run.
String taskRunId
The unique run identifier that is associated with this task run.
String name
The name of the trigger to start.
String name
The name of the trigger that was started.
String runId
An Id for the new run.
Integer id
The ID of the statement.
String code
The execution code of the statement.
String state
The state while the request is actioned.
StatementOutput output
The output in JSON.
Double progress
The code execution progress.
Long startedOn
The unix time and date that the job definition was started.
Long completedOn
The unix time and date that the job definition was completed.
StatementOutputData data
The code execution output.
Integer executionCount
The execution count of the output.
String status
The status of the code execution output.
String errorName
The name of the error in the output.
String errorValue
The error value of the output.
List<E> traceback
The traceback of the output.
String textPlain
The code execution output in text format.
String name
Name of the crawler to stop.
String crawlerName
Name of the crawler whose schedule state to set.
String id
Returns the Id of the stopped session.
String name
The name of the trigger to stop.
String name
The name of the trigger that was stopped.
List<E> columns
A list of the Columns in the table.
String location
The physical location of the table. By default, this takes the form of the warehouse location, followed by the database location in the warehouse, followed by the table name.
List<E> additionalLocations
A list of locations that point to the path where a Delta table is located.
String inputFormat
The input format: SequenceFileInputFormat (binary), or TextInputFormat, or a custom format.
String outputFormat
The output format: SequenceFileOutputFormat (binary), or IgnoreKeyTextOutputFormat, or a custom format.
Boolean compressed
True if the data in the table is compressed, or False if not.
Integer numberOfBuckets
Must be specified if the table contains any dimension columns.
SerDeInfo serdeInfo
The serialization/deserialization (SerDe) information.
List<E> bucketColumns
A list of reducer grouping columns, clustering columns, and bucketing columns in the table.
List<E> sortColumns
A list specifying the sort order of each bucket in the table.
Map<K,V> parameters
The user-supplied properties in key-value form.
SkewedInfo skewedInfo
The information about values that appear frequently in a column (skewed values).
Boolean storedAsSubDirectories
True if the table data is stored in subdirectories, or False if not.
SchemaReference schemaReference
An object that references a schema stored in the Glue Schema Registry.
When creating a table, you can pass an empty list of columns for the schema, and instead use a schema reference.
Long maximumLength
The size of the longest string in the column.
Double averageLength
The average string length in the column.
Long numberOfNulls
The number of null values in the column.
Long numberOfDistinctValues
The number of distinct values in a column.
String name
The table name. For Hive compatibility, this must be entirely lowercase.
String databaseName
The name of the database where the table metadata resides. For Hive compatibility, this must be all lowercase.
String description
A description of the table.
String owner
The owner of the table.
Date createTime
The time when the table definition was created in the Data Catalog.
Date updateTime
The last time that the table was updated.
Date lastAccessTime
The last time that the table was accessed. This is usually taken from HDFS, and might not be reliable.
Date lastAnalyzedTime
The last time that column statistics were computed for this table.
Integer retention
The retention time for this table.
StorageDescriptor storageDescriptor
A storage descriptor containing information about the physical storage of this table.
List<E> partitionKeys
A list of columns by which the table is partitioned. Only primitive types are supported as partition keys.
When you create a table used by Amazon Athena, and you do not specify any partitionKeys, you must at least set the value of partitionKeys to an empty list. For example:
"PartitionKeys": []
String viewOriginalText
Included for Apache Hive compatibility. Not used in the normal course of Glue operations. If the table is a VIRTUAL_VIEW, certain Athena configuration encoded in base64.
String viewExpandedText
Included for Apache Hive compatibility. Not used in the normal course of Glue operations.
String tableType
The type of this table. Glue will create tables with the EXTERNAL_TABLE type. Other services, such as Athena, may create tables with additional table types.
Glue related table types:
EXTERNAL_TABLE: Hive compatible attribute - indicates a non-Hive managed table.
GOVERNED: Used by Lake Formation. The Glue Data Catalog understands GOVERNED.
Map<K,V> parameters
These key-value pairs define properties associated with the table.
String createdBy
The person or entity who created the table.
Boolean isRegisteredWithLakeFormation
Indicates whether the table has been registered with Lake Formation.
TableIdentifier targetTable
A TableIdentifier structure that describes a target table for resource linking.
String catalogId
The ID of the Data Catalog in which the table resides.
String versionId
The ID of the table version.
FederatedTable federatedTable
A FederatedTable structure that references an entity outside the Glue Data Catalog.
String tableName
The name of the table. For Hive compatibility, this must be entirely lowercase.
ErrorDetail errorDetail
The details about the error.
String name
The table name. For Hive compatibility, this is folded to lowercase when it is stored.
String description
A description of the table.
String owner
The table owner. Included for Apache Hive compatibility. Not used in the normal course of Glue operations.
Date lastAccessTime
The last time that the table was accessed.
Date lastAnalyzedTime
The last time that column statistics were computed for this table.
Integer retention
The retention time for this table.
StorageDescriptor storageDescriptor
A storage descriptor containing information about the physical storage of this table.
List<E> partitionKeys
A list of columns by which the table is partitioned. Only primitive types are supported as partition keys.
When you create a table used by Amazon Athena, and you do not specify any partitionKeys, you must at least set the value of partitionKeys to an empty list. For example:
"PartitionKeys": []
String viewOriginalText
Included for Apache Hive compatibility. Not used in the normal course of Glue operations. If the table is a VIRTUAL_VIEW, certain Athena configuration encoded in base64.
String viewExpandedText
Included for Apache Hive compatibility. Not used in the normal course of Glue operations.
String tableType
The type of this table. Glue will create tables with the EXTERNAL_TABLE type. Other services, such as Athena, may create tables with additional table types.
Glue related table types:
EXTERNAL_TABLE: Hive compatible attribute - indicates a non-Hive managed table.
GOVERNED: Used by Lake Formation. The Glue Data Catalog understands GOVERNED.
Map<K,V> parameters
These key-value pairs define properties associated with the table.
TableIdentifier targetTable
A TableIdentifier structure that describes a target table for resource linking.
String type
The type of table optimizer. Currently, the only valid value is compaction.
TableOptimizerConfiguration configuration
A TableOptimizerConfiguration object that was specified when creating or updating a table optimizer.
TableOptimizerRun lastRun
A TableOptimizerRun object representing the last run of the table optimizer.
String eventType
An event type representing the status of the table optimizer run.
Date startTimestamp
Represents the epoch timestamp at which the compaction job was started within Lake Formation.
Date endTimestamp
Represents the epoch timestamp at which the compaction job ended.
RunMetrics metrics
A RunMetrics object containing metrics for the optimizer run.
String error
An error that occurred during the optimizer run.
String tableName
The name of the table in question.
String versionId
The ID value of the version in question. A VersionID is a string representation of an integer. Each version is incremented by 1.
ErrorDetail errorDetail
The details about the error.
String resourceArn
The ARN of the Glue resource to which to add the tags. For more information about Glue resource ARNs, see the Glue ARN string pattern.
Map<K,V> tagsToAdd
Tags to add to this resource.
String transformId
The unique identifier for the transform.
String taskRunId
The unique identifier for this task run.
String status
The current status of the requested task run.
String logGroupName
The name of the log group for secure logging, associated with this task run.
TaskRunProperties properties
Specifies configuration properties associated with this task run.
String errorString
The list of error strings associated with this task run.
Date startedOn
The date and time that this task run started.
Date lastModifiedOn
The last point in time that the requested task run was updated.
Date completedOn
The last point in time that the requested task run was completed.
Integer executionTime
The amount of time (in seconds) that the task run consumed resources.
String taskType
The type of task run.
ImportLabelsTaskRunProperties importLabelsTaskRunProperties
The configuration properties for an importing labels task run.
ExportLabelsTaskRunProperties exportLabelsTaskRunProperties
The configuration properties for an exporting labels task run.
LabelingSetGenerationTaskRunProperties labelingSetGenerationTaskRunProperties
The configuration properties for a labeling set generation task run.
FindMatchesTaskRunProperties findMatchesTaskRunProperties
The configuration properties for a find matches task run.
String name
Specifies the name of the parameter in the config file of the dynamic transform.
String type
Specifies the parameter type in the config file of the dynamic transform.
String validationRule
Specifies the validation rule in the config file of the dynamic transform.
String validationMessage
Specifies the validation message in the config file of the dynamic transform.
List<E> value
Specifies the value of the parameter in the config file of the dynamic transform.
String listType
Specifies the list type of the parameter in the config file of the dynamic transform.
Boolean isOptional
Specifies whether the parameter is optional or not in the config file of the dynamic transform.
MLUserDataEncryption mlUserDataEncryption
An MLUserDataEncryption object containing the encryption mode and customer-provided KMS key ID.
String taskRunSecurityConfigurationName
The name of the security configuration.
String name
A unique transform name that is used to filter the machine learning transforms.
String transformType
The type of machine learning transform that is used to filter the machine learning transforms.
String status
Filters the list of machine learning transforms by the last known status of the transforms (to indicate whether a transform can be used or not). One of "NOT_READY", "READY", or "DELETING".
String glueVersion
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
Date createdBefore
The time and date before which the transforms were created.
Date createdAfter
The time and date after which the transforms were created.
Date lastModifiedBefore
Filter on transforms last modified before this date.
Date lastModifiedAfter
Filter on transforms last modified after this date.
List<E> schema
Filters on datasets with a specific schema. The Map<Column, Type> object is an array of key-value pairs representing the schema this transform accepts, where Column is the name of a column, and Type is the type of the data such as an integer or string. Has an upper bound of 100 columns.
String transformType
The type of machine learning transform.
For information about the types of machine learning transforms, see Creating Machine Learning Transforms.
FindMatchesParameters findMatchesParameters
The parameters for the find matches algorithm.
String name
The name of the trigger.
String workflowName
The name of the workflow associated with the trigger.
String id
Reserved for future use.
String type
The type of trigger that this is.
String state
The current state of the trigger.
String description
A description of this trigger.
String schedule
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
List<E> actions
The actions initiated by this trigger.
Predicate predicate
The predicate of this trigger, which defines when it will fire.
EventBatchingCondition eventBatchingCondition
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
Trigger trigger
The information of the trigger represented by the trigger node.
String name
Reserved for future use.
String description
A description of this trigger.
String schedule
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
List<E> actions
The actions initiated by this trigger.
Predicate predicate
The predicate of this trigger, which defines when it will fire.
EventBatchingCondition eventBatchingCondition
Batch condition that must be met (specified number of events received or batch time window expired) before EventBridge event trigger fires.
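A minimal, hypothetical sketch (AWS SDK for Java v2) of updating a scheduled trigger with the fields above; the trigger name, job name, and cron expression are placeholders.

    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.Action;
    import software.amazon.awssdk.services.glue.model.TriggerUpdate;
    import software.amazon.awssdk.services.glue.model.UpdateTriggerRequest;

    GlueClient glue = GlueClient.create();

    glue.updateTrigger(UpdateTriggerRequest.builder()
            .name("daily-orders-trigger")                        // placeholder trigger name
            .triggerUpdate(TriggerUpdate.builder()
                    .description("Runs the orders job every day at 12:15 UTC")
                    .schedule("cron(15 12 * * ? *)")             // cron expression, as shown above
                    .actions(Action.builder()
                            .jobName("my-etl-job")               // placeholder job name
                            .timeout(120)
                            .build())
                    .build())
            .build());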
Partition partition
The partition object.
List<E> authorizedColumns
The list of columns the user has permissions to access.
Boolean isRegisteredWithLakeFormation
A Boolean value indicating that the partition location is registered with Lake Formation.
String name
The name of the transform node.
List<E> inputs
The node ID inputs to the transform.
String unionType
Indicates the type of Union transform.
Specify ALL to join all rows from data sources to the resulting DynamicFrame. The resulting union does not remove duplicate rows.
Specify DISTINCT to remove duplicate rows in the resulting DynamicFrame.
String name
Returns the name of the blueprint that was updated.
UpdateGrokClassifierRequest grokClassifier
A GrokClassifier object with updated fields.
UpdateXMLClassifierRequest xMLClassifier
An XMLClassifier object with updated fields.
UpdateJsonClassifierRequest jsonClassifier
A JsonClassifier object with updated fields.
UpdateCsvClassifierRequest csvClassifier
A CsvClassifier object with updated fields.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
List<E> partitionValues
A list of partition values identifying the partition.
List<E> columnStatisticsList
A list of the column statistics.
String catalogId
The ID of the Data Catalog where the partitions in question reside. If none is supplied, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the partitions reside.
String tableName
The name of the partitions' table.
List<E> columnStatisticsList
A list of the column statistics.
String catalogId
The ID of the Data Catalog in which the connection resides. If none is provided, the Amazon Web Services account ID is used by default.
String name
The name of the connection definition to update.
ConnectionInput connectionInput
A ConnectionInput object that redefines the connection in question.
String name
Name of the new crawler.
String role
The IAM role or Amazon Resource Name (ARN) of an IAM role that is used by the new crawler to access customer resources.
String databaseName
The Glue database where results are stored, such as: arn:aws:daylight:us-east-1::database/sometable/*.
String description
A description of the new crawler.
CrawlerTargets targets
A list of targets to crawl.
String schedule
A cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
List<E> classifiers
A list of custom classifiers that the user has registered. By default, all built-in classifiers are included in a crawl, but these custom classifiers always override the default classifiers for a given classification.
String tablePrefix
The table prefix used for catalog tables that are created.
SchemaChangePolicy schemaChangePolicy
The policy for the crawler's update and deletion behavior.
RecrawlPolicy recrawlPolicy
A policy that specifies whether to crawl the entire dataset again, or to crawl only folders that were added since the last crawler run.
LineageConfiguration lineageConfiguration
Specifies data lineage configuration settings for the crawler.
LakeFormationConfiguration lakeFormationConfiguration
Specifies Lake Formation configuration settings for the crawler.
String configuration
Crawler configuration information. This versioned JSON string allows users to specify aspects of a crawler's behavior. For more information, see Setting crawler configuration options.
String crawlerSecurityConfiguration
The name of the SecurityConfiguration structure to be used by this crawler.
String crawlerName
The name of the crawler whose schedule to update.
String schedule
The updated cron expression used to specify the schedule (see Time-Based Schedules for Jobs and Crawlers). For example, to run something every day at 12:15 UTC, you would specify: cron(15 12 * * ? *).
String name
The name of the classifier.
String delimiter
A custom symbol to denote what separates each column entry in the row.
String quoteSymbol
A custom symbol to denote what combines content into a single column value. It must be different from the column delimiter.
String containsHeader
Indicates whether the CSV file contains a header.
List<E> header
A list of strings representing column names.
Boolean disableValueTrimming
Specifies not to trim values before identifying the type of column values. The default value is true.
Boolean allowSingleColumn
Enables the processing of files that contain only one column.
Boolean customDatatypeConfigured
Specifies the configuration of custom datatypes.
List<E> customDatatypes
Specifies a list of supported custom datatypes.
String serde
Sets the SerDe for processing CSV in the classifier, which will be applied in the Data Catalog. Valid values are OpenCSVSerDe, LazySimpleSerDe, and None. You can specify the None value when you want the crawler to do the detection.
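The CSV classifier fields above can be supplied through UpdateClassifier, as in this hypothetical sketch (AWS SDK for Java v2); the classifier name and column names are placeholders.

    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.UpdateClassifierRequest;
    import software.amazon.awssdk.services.glue.model.UpdateCsvClassifierRequest;

    GlueClient glue = GlueClient.create();

    glue.updateClassifier(UpdateClassifierRequest.builder()
            .csvClassifier(UpdateCsvClassifierRequest.builder()
                    .name("orders-csv-classifier")      // placeholder classifier name
                    .delimiter(",")
                    .quoteSymbol("\"")                  // must differ from the delimiter
                    .containsHeader("PRESENT")          // UNKNOWN | PRESENT | ABSENT
                    .header("order_id", "customer_id", "amount")  // placeholder column names
                    .serde("OpenCSVSerDe")              // OpenCSVSerDe | LazySimpleSerDe | None
                    .build())
            .build());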
String catalogId
The ID of the Data Catalog in which the metadata database resides. If none is provided, the Amazon Web Services account ID is used by default.
String name
The name of the database to update in the catalog. For Hive compatibility, this is folded to lowercase.
DatabaseInput databaseInput
A DatabaseInput object specifying the new definition of the metadata database in the catalog.
String endpointName
The name of the DevEndpoint to be updated.
String publicKey
The public key for the DevEndpoint to use.
List<E> addPublicKeys
The list of public keys for the DevEndpoint to use.
List<E> deletePublicKeys
The list of public keys to be deleted from the DevEndpoint.
DevEndpointCustomLibraries customLibraries
Custom Python or Java libraries to be loaded in the DevEndpoint.
Boolean updateEtlLibraries
True if the list of custom libraries to be loaded in the development endpoint needs to be updated, or False if otherwise.
List<E> deleteArguments
The list of argument keys to be deleted from the map of arguments used to configure the DevEndpoint.
Map<K,V> addArguments
The map of arguments to add to the map of arguments used to configure the DevEndpoint.
Valid arguments are:
"--enable-glue-datacatalog": ""
You can specify a version of Python support for development endpoints by using the Arguments parameter in the CreateDevEndpoint or UpdateDevEndpoint APIs. If no arguments are provided, the version defaults to Python 2.
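A short, hypothetical sketch (AWS SDK for Java v2) of the AddArguments usage described above; the endpoint name is a placeholder.

    import java.util.Map;
    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.UpdateDevEndpointRequest;

    GlueClient glue = GlueClient.create();

    glue.updateDevEndpoint(UpdateDevEndpointRequest.builder()
            .endpointName("my-dev-endpoint")                         // placeholder endpoint name
            .addArguments(Map.of("--enable-glue-datacatalog", ""))   // valid argument listed above
            .build());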
String name
The name of the GrokClassifier.
String classification
An identifier of the data format that the classifier matches, such as Twitter, JSON, Omniture logs, Amazon CloudWatch Logs, and so on.
String grokPattern
The grok pattern used by this classifier.
String customPatterns
Optional custom grok patterns used by this classifier.
String jobName
The name of the Glue job to be synchronized to or from the remote repository.
String provider
The provider for the remote repository. Possible values: GITHUB, AWS_CODE_COMMIT, GITLAB, BITBUCKET.
String repositoryName
The name of the remote repository that contains the job artifacts. For BitBucket providers, RepositoryName should include WorkspaceName. Use the format <WorkspaceName>/<RepositoryName>.
String repositoryOwner
The owner of the remote repository that contains the job artifacts.
String branchName
An optional branch in the remote repository.
String folder
An optional folder in the remote repository.
String commitId
A commit ID for a commit in the remote repository.
String authStrategy
The type of authentication, which can be an authentication token stored in Amazon Web Services Secrets Manager, or a personal access token.
String authToken
The value of the authorization token.
String jobName
The name of the Glue job.
String jobName
Returns the name of the updated job definition.
String name
The name of the classifier.
String jsonPath
A JsonPath string defining the JSON data for the classifier to classify. Glue supports a subset of JsonPath, as described in Writing JsonPath Custom Classifiers.
String transformId
A unique identifier that was generated when the transform was created.
String name
The unique name that you gave the transform when you created it.
String description
A description of the transform. The default is an empty string.
TransformParameters parameters
The configuration parameters that are specific to the transform type (algorithm) used. Conditionally dependent on the transform type.
String role
The name or Amazon Resource Name (ARN) of the IAM role with the required permissions.
String glueVersion
This value determines which version of Glue this machine learning transform is compatible with. Glue 1.0 is recommended for most customers. If the value is not set, the Glue compatibility defaults to Glue 0.9. For more information, see Glue Versions in the developer guide.
Double maxCapacity
The number of Glue data processing units (DPUs) that are allocated to task runs for this transform. You can allocate from 2 to 100 DPUs; the default is 10. A DPU is a relative measure of processing power that consists of 4 vCPUs of compute capacity and 16 GB of memory. For more information, see the Glue pricing page.
When the WorkerType field is set to a value other than Standard, the MaxCapacity field is set automatically and becomes read-only.
String workerType
The type of predefined worker that is allocated when this task runs. Accepts a value of Standard, G.1X, or G.2X.
For the Standard worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.
For the G.1X worker type, each worker provides 4 vCPU, 16 GB of memory and a 64GB disk, and 1 executor per worker.
For the G.2X worker type, each worker provides 8 vCPU, 32 GB of memory and a 128GB disk, and 1 executor per worker.
Integer numberOfWorkers
The number of workers of a defined workerType that are allocated when this task runs.
Integer timeout
The timeout for a task run for this transform in minutes. This is the maximum time that a task run for this transform can consume resources before it is terminated and enters TIMEOUT status. The default is 2,880 minutes (48 hours).
Integer maxRetries
The maximum number of times to retry a task for this transform after a task run fails.
String transformId
The unique identifier for the transform that was updated.
String catalogId
The ID of the Data Catalog where the partition to be updated resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which the table in question resides.
String tableName
The name of the table in which the partition to be updated is located.
List<E> partitionValueList
List of partition key values that define the partition to update.
PartitionInput partitionInput
The new partition object to update the partition to.
The Values property can't be changed. If you want to change the partition key values for a partition, delete and recreate the partition.
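Because the Values property cannot be changed, an update only replaces the rest of the partition definition, as in this hypothetical sketch (AWS SDK for Java v2); the names and key values are placeholders.

    import java.util.Map;
    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.PartitionInput;
    import software.amazon.awssdk.services.glue.model.UpdatePartitionRequest;

    GlueClient glue = GlueClient.create();

    glue.updatePartition(UpdatePartitionRequest.builder()
            .databaseName("sales_db")                 // placeholder database
            .tableName("orders")                      // placeholder table
            .partitionValueList("2023", "07")         // identifies the partition to update
            .partitionInput(PartitionInput.builder()
                    .values("2023", "07")             // must match the existing partition key values
                    .parameters(Map.of("classification", "parquet"))
                    .build())
            .build());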
RegistryId registryId
This is a wrapper structure that may contain the registry name and Amazon Resource Name (ARN).
String description
A description of the registry. If description is not provided, this field will not be updated.
SchemaId schemaId
This is a wrapper structure to contain schema identity fields. The structure contains:
SchemaId$SchemaArn: The Amazon Resource Name (ARN) of the schema. One of SchemaArn or SchemaName has to be provided.
SchemaId$SchemaName: The name of the schema. One of SchemaArn or SchemaName has to be provided.
SchemaVersionNumber schemaVersionNumber
Version number required for check pointing. One of VersionNumber or Compatibility has to be provided.
String compatibility
The new compatibility setting for the schema.
String description
The new description for the schema.
String jobName
The name of the Glue job to be synchronized to or from the remote repository.
String provider
The provider for the remote repository. Possible values: GITHUB, AWS_CODE_COMMIT, GITLAB, BITBUCKET.
String repositoryName
The name of the remote repository that contains the job artifacts. For BitBucket providers, RepositoryName should include WorkspaceName. Use the format <WorkspaceName>/<RepositoryName>.
String repositoryOwner
The owner of the remote repository that contains the job artifacts.
String branchName
An optional branch in the remote repository.
String folder
An optional folder in the remote repository.
String commitId
A commit ID for a commit in the remote repository.
String authStrategy
The type of authentication, which can be an authentication token stored in Amazon Web Services Secrets Manager, or a personal access token.
String authToken
The value of the authorization token.
String jobName
The name of the Glue job.
String catalogId
The Catalog ID of the table.
String databaseName
The name of the database in the catalog in which the table resides.
String tableName
The name of the table.
String type
The type of table optimizer. Currently, the only valid value is compaction.
TableOptimizerConfiguration tableOptimizerConfiguration
A TableOptimizerConfiguration object representing the configuration of a table optimizer.
String catalogId
The ID of the Data Catalog where the table resides. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database in which the table resides. For Hive compatibility, this name is entirely lowercase.
TableInput tableInput
An updated TableInput object to define the metadata table in the catalog.
Boolean skipArchive
By default, UpdateTable always creates an archived version of the table before updating it. However, if skipArchive is set to true, UpdateTable does not create the archived version.
String transactionId
The transaction ID at which to update the table contents.
String versionId
The version ID at which to update the table contents.
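A hypothetical sketch (AWS SDK for Java v2) combining the UpdateTable fields above, including the empty PartitionKeys convention noted earlier; the database and table names are placeholders.

    import java.util.Collections;
    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.Column;
    import software.amazon.awssdk.services.glue.model.TableInput;
    import software.amazon.awssdk.services.glue.model.UpdateTableRequest;

    GlueClient glue = GlueClient.create();

    glue.updateTable(UpdateTableRequest.builder()
            .databaseName("sales_db")                       // placeholder database (lowercase for Hive compatibility)
            .tableInput(TableInput.builder()
                    .name("orders")                         // placeholder table name
                    .description("Orders table, updated definition")
                    .partitionKeys(Collections.<Column>emptyList())  // "PartitionKeys": [] when there are no partitions
                    .build())
            .skipArchive(true)                              // do not create an archived version of the table
            .build());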
String name
The name of the trigger to update.
TriggerUpdate triggerUpdate
The new values with which to update the trigger.
Trigger trigger
The resulting trigger definition.
String catalogId
The ID of the Data Catalog where the function to be updated is located. If none is provided, the Amazon Web Services account ID is used by default.
String databaseName
The name of the catalog database where the function to be updated is located.
String functionName
The name of the function.
UserDefinedFunctionInput functionInput
A FunctionInput object that redefines the function in the Data Catalog.
String name
Name of the workflow to be updated.
String description
The description of the workflow.
Map<K,V> defaultRunProperties
A collection of properties to be used as part of each execution of the workflow.
Integer maxConcurrentRuns
You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.
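A brief, hypothetical sketch (AWS SDK for Java v2) of an UpdateWorkflow request using the fields above; the workflow name and property values are placeholders.

    import java.util.Map;
    import software.amazon.awssdk.services.glue.GlueClient;
    import software.amazon.awssdk.services.glue.model.UpdateWorkflowRequest;

    GlueClient glue = GlueClient.create();

    glue.updateWorkflow(UpdateWorkflowRequest.builder()
            .name("nightly-orders-workflow")                         // placeholder workflow name
            .description("Nightly ingest and transform of orders")
            .defaultRunProperties(Map.of("env", "prod"))             // made available to each job in the workflow
            .maxConcurrentRuns(1)                                    // limit to a single concurrent run
            .build());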
String name
The name of the workflow which was specified in input.
String name
The name of the classifier.
String classification
An identifier of the data format that the classifier matches.
String rowTag
The XML tag designating the element that contains each record in an XML document being parsed. This cannot identify a self-closing element (closed by />). An empty row element that contains only attributes can be parsed as long as it ends with a closing tag (for example, <row item_a="A" item_b="B"></row> is okay, but <row item_a="A" item_b="B" /> is not).
String functionName
The name of the function.
String databaseName
The name of the catalog database that contains the function.
String className
The Java class that contains the function code.
String ownerName
The owner of the function.
String ownerType
The owner type.
Date createTime
The time at which the function was created.
List<E> resourceUris
The resource URIs for the function.
String catalogId
The ID of the Data Catalog in which the function resides.
String functionName
The name of the function.
String className
The Java class that contains the function code.
String ownerName
The owner of the function.
String ownerType
The owner type.
List<E> resourceUris
The resource URIs for the function.
String name
The name of the workflow.
String description
A description of the workflow.
Map<K,V> defaultRunProperties
A collection of properties to be used as part of each execution of the workflow. The run properties are made available to each job in the workflow. A job can modify the properties for the next jobs in the flow.
Date createdOn
The date and time when the workflow was created.
Date lastModifiedOn
The date and time when the workflow was last modified.
WorkflowRun lastRun
The information about the last execution of the workflow.
WorkflowGraph graph
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
Integer maxConcurrentRuns
You can use this parameter to prevent unwanted multiple updates to data, to control costs, or in some cases, to prevent exceeding the maximum number of concurrent runs of any of the component jobs. If you leave this parameter blank, there is no limit to the number of concurrent workflow runs.
BlueprintDetails blueprintDetails
This structure indicates the details of the blueprint that this particular workflow is created from.
String name
Name of the workflow that was run.
String workflowRunId
The ID of this workflow run.
String previousRunId
The ID of the previous workflow run.
Map<K,V> workflowRunProperties
The workflow run properties which were set during the run.
Date startedOn
The date and time when the workflow run was started.
Date completedOn
The date and time when the workflow run completed.
String status
The status of the workflow run.
String errorMessage
This error message describes any error that may have occurred in starting the workflow run. Currently the only error message is "Concurrent runs exceeded for workflow: foo."
WorkflowRunStatistics statistics
The statistics of the run.
WorkflowGraph graph
The graph representing all the Glue components that belong to the workflow as nodes and directed connections between them as edges.
StartingEventBatchCondition startingEventBatchCondition
The batch condition that started the workflow run.
Integer totalActions
Total number of Actions in the workflow run.
Integer timeoutActions
Total number of Actions that timed out.
Integer failedActions
Total number of Actions that have failed.
Integer stoppedActions
Total number of Actions that have stopped.
Integer succeededActions
Total number of Actions that have succeeded.
Integer runningActions
Total number of Actions in running state.
Integer erroredActions
Indicates the count of job runs in the ERROR state in the workflow run.
Integer waitingActions
Indicates the count of job runs in WAITING state in the workflow run.
String name
The name of the classifier.
String classification
An identifier of the data format that the classifier matches.
Date creationTime
The time that this classifier was registered.
Date lastUpdated
The time that this classifier was last updated.
Long version
The version of this classifier.
String rowTag
The XML tag designating the element that contains each record in an XML document being parsed. This can't identify a self-closing element (closed by />). An empty row element that contains only attributes can be parsed as long as it ends with a closing tag (for example, <row item_a="A" item_b="B"></row> is okay, but <row item_a="A" item_b="B" /> is not).