AWS Data Pipeline activity objects.
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-copyactivity.html
The ID of the object. IDs must be unique within a pipeline definition.
The optional, user-defined label of the object. If you do not provide a name for an object in a pipeline definition, AWS Data Pipeline automatically duplicates the value of id.
The input data source.
The location for the output.
Required for AdpActivity
CSV Data Format
A comma-delimited data format where the column separator is a comma and the record separator is a newline character.
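As a minimal sketch (the id and column names are illustrative placeholders), a CSV data format object in a pipeline definition might look like:

  { "id" : "MyCsvFormat",
    "type" : "CSV",
    "column" : [ "Name STRING", "Score INT" ] }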
Custom Data Format
A custom data format defined by a combination of a certain column separator, record separator, and escape character.
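For example, a minimal Custom format definition (separators and columns are illustrative):

  { "id" : "MyCustomFormat",
    "type" : "Custom",
    "columnSeparator" : ",",
    "recordSeparator" : "\n",
    "column" : [ "Name STRING", "Score INT" ] }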
Defines AWS Data Pipeline Data Formats
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-dataformats.html
AWS Data Pipeline DataNode objects
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-datanodes.html
Each data pipeline can have a default object.
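For instance, a Default object can carry pipeline-wide settings such as roles and failure handling (the field values here are illustrative, not prescribed by this library):

  { "id" : "Default",
    "failureAndRerunMode" : "CASCADE",
    "role" : "DataPipelineDefaultRole",
    "resourceRole" : "DataPipelineDefaultResourceRole" }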
The base class of all AWS Data Pipeline objects.
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-pipeline-objects.html
AWS Data Pipeline database objects.
Ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-databases.html
A precondition to check that data exists in a DynamoDB table.
The DynamoDB table to check.
DynamoDBDataFormat
Applies a schema to a DynamoDB table to make it accessible by a Hive query. DynamoDBDataFormat is used with a HiveActivity object and a DynamoDBDataNode input and output. DynamoDBDataFormat requires that you specify all columns in your Hive query. For more flexibility to specify certain columns in a Hive query or Amazon S3 support, see DynamoDBExportDataFormat.
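A minimal sketch (the id and columns are illustrative):

  { "id" : "MyDynamoDBDataFormat",
    "type" : "DynamoDBDataFormat",
    "column" : [ "hash STRING", "range STRING" ] }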
DynamoDB DataNode
The DynamoDB table.
The AWS region where the DynamoDB table exists. It's used by HiveActivity when it performs staging for DynamoDB tables in Hive. For more information, see Using a Pipeline with Resources in Multiple Regions.
Applies a schema to a DynamoDB table to make it accessible by a Hive query.
Sets the rate of read operations to keep your DynamoDB provisioned throughput rate in the allocated range for your table. The value is a double between .1 and 1.0, inclusively. For more information, see Specifying Read and Write Requirements for Tables.
Sets the rate of write operations to keep your DynamoDB provisioned throughput rate in the allocated range for your table. The value is a double between .1 and 1.0, inclusively. For more information, see Specifying Read and Write Requirements for Tables.
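Putting these fields together, a DynamoDBDataNode might look like (table name, rate, and refs are illustrative):

  { "id" : "MyDynamoDBDataNode",
    "type" : "DynamoDBDataNode",
    "tableName" : "MyTable",
    "readThroughputPercent" : "0.5",
    "dataFormat" : { "ref" : "MyDynamoDBDataFormat" } }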
DynamoDBExportDataFormat
Applies a schema to a DynamoDB table to make it accessible by a Hive query. Use DynamoDBExportDataFormat with a HiveCopyActivity object and DynamoDBDataNode or S3DataNode input and output. DynamoDBExportDataFormat has the following benefits:
* Provides both DynamoDB and Amazon S3 support
* Allows you to filter data by certain columns in your Hive query
* Exports all attributes from DynamoDB even if you have a sparse schema
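A minimal sketch of the format itself (the id and columns are illustrative):

  { "id" : "MyExportDataFormat",
    "type" : "DynamoDBExportDataFormat",
    "column" : [ "id STRING", "timestamp BIGINT" ] }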
A precondition to check that the DynamoDB table exists.
The DynamoDB table to check.
An EC2 instance that will perform the work defined by a pipeline activity.
The type of EC2 instance to use for the resource pool. The default value is m1.small.
The AMI version to use for the EC2 instances. For more information, see Amazon Machine Images (AMIs).
The IAM role to use to create the EC2 instance.
The IAM role to use to control the resources that the EC2 instance can access.
The name of the key pair. If you launch an EC2 instance without specifying a key pair, you can't log on to it.
A region code to specify that the resource should run in a different region.
The Availability Zone in which to launch the EC2 instance.
The ID of the subnet to launch the instance into.
Indicates whether to assign a public IP address to an instance. An instance in a VPC can't access Amazon S3 unless it has a public IP address or a network address translation (NAT) instance with proper routing configuration. If the instance is in EC2-Classic or a default VPC, the default value is true. Otherwise, the default value is false.
The names of one or more security groups to use for the instances in the resource pool. By default, Amazon EC2 uses the default security group.
The IDs of one or more security groups to use for the instances in the resource pool. By default, Amazon EC2 uses the default security group.
The Spot Instance bid price for Ec2Resources: the maximum dollar amount for your Spot Instance bid, a decimal value between 0 and 20.00, exclusive.
On the last attempt to request a resource, this option makes a request for On-Demand Instances rather than Spot Instances. This ensures that if all previous attempts have failed, the last attempt is not interrupted in the middle by changes in the Spot market. The default value is True.
The amount of time to wait before terminating the resource.
Action to take when the resource fails.
Action to take when the task associated with this resource fails.
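As a sketch combining several of these fields (the key pair and security group names are placeholders):

  { "id" : "MyEC2Resource",
    "type" : "Ec2Resource",
    "instanceType" : "m1.small",
    "keyPair" : "my-key-pair",
    "securityGroups" : [ "test-group", "default" ],
    "terminateAfter" : "2 hours" }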
Runs an Amazon EMR job.
AWS Data Pipeline uses a different format for steps than Amazon EMR; for example, AWS Data Pipeline uses comma-separated arguments after the JAR name in the EmrActivity step field.
One or more steps for the cluster to run. To specify multiple steps, up to 255, add multiple step fields. Use comma-separated arguments after the JAR name; for example, "s3://example-bucket/MyWork.jar,arg1,arg2,arg3".
Shell scripts to be run before any steps are run. To specify multiple scripts, up to 255, add multiple preStepCommand fields.
Shell scripts to be run after all steps are finished. To specify multiple scripts, up to 255, add multiple postStepCommand fields.
The input data source.
The location for the output.
The Amazon EMR cluster on which to run this activity.
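For example, an EmrActivity with a single step in the comma-separated format described above (the bucket and cluster ids are placeholders):

  { "id" : "MyEmrActivity",
    "type" : "EmrActivity",
    "runsOn" : { "ref" : "MyEmrCluster" },
    "step" : "s3://example-bucket/MyWork.jar,arg1,arg2,arg3" }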
Represents the configuration of an Amazon EMR cluster. This object is used by EmrActivity to launch a cluster.
Checks whether a data node object exists.
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-hiveactivity.html
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-hivecopyactivity.html
A Hive SQL statement fragment that filters a subset of DynamoDB or Amazon S3 data to copy. The filter should only contain predicates and not begin with a WHERE clause, because AWS Data Pipeline adds it automatically.
An Amazon S3 path capturing the Hive script that ran after all the expressions in it were evaluated, including staging information. This script is stored for troubleshooting purposes.
The input data node. This must be S3DataNode or DynamoDBDataNode. If you use DynamoDBDataNode, specify a DynamoDBExportDataFormat.
The output data node. If input is S3DataNode, this must be DynamoDBDataNode. Otherwise, this can be S3DataNode or DynamoDBDataNode. If you use DynamoDBDataNode, specify a DynamoDBExportDataFormat.
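Combining these fields, a sketch of a HiveCopyActivity that copies filtered DynamoDB data to Amazon S3 (the ids and filter predicate are illustrative):

  { "id" : "MyHiveCopyActivity",
    "type" : "HiveCopyActivity",
    "input" : { "ref" : "MyDynamoDBDataNode" },
    "output" : { "ref" : "MyS3DataNode" },
    "filterSql" : "eventTime > unix_timestamp('2014-01-01 00:00:00')",
    "runsOn" : { "ref" : "MyEmrCluster" } }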
The host of the proxy which Task Runner clients use to connect to AWS services.
Port of the proxy host which the Task Runner clients use to connect to AWS services.
The username for the proxy.
The password for the proxy.
The Windows domain name for an NTLM proxy.
The Windows workgroup name for an NTLM proxy.
Defines a JDBC database.
The JDBC connection string to access the database.
The driver class to load before establishing the JDBC connection.
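A minimal sketch (the connection string, driver class, and credentials are illustrative, here for MySQL):

  { "id" : "MyJdbcDatabase",
    "type" : "JdbcDatabase",
    "connectionString" : "jdbc:mysql://hostname:3306/dbname",
    "jdbcDriverClass" : "com.mysql.jdbc.Driver",
    "username" : "user_name",
    "*password" : "my_password" }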
AdpParameter is a pipeline parameter definition.
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-pigactivity.html
A condition that must be met before the object can run. The activity cannot run until all its conditions are met.
Defines an Amazon RDS database.
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-redshiftcopyactivity.html
Required for AdpDataPipelineObject
Determines what AWS Data Pipeline does with pre-existing data in the target table that overlaps with rows in the data to be loaded. Valid values are KEEP_EXISTING, OVERWRITE_EXISTING, and TRUNCATE.
The SQL SELECT expression used to transform the input data.
Corresponds to the query_group setting in Amazon Redshift, which allows you to assign and prioritize concurrent activities based on their placement in queues. Amazon Redshift limits the number of simultaneous connections to 15.
Takes COPY parameters to pass to the Amazon Redshift data node.
The input data node. The data source can be Amazon S3, DynamoDB, or Amazon Redshift.
The output data node. The output location can be Amazon S3 or Amazon Redshift.
Required for AdpActivity
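A sketch tying these fields together (the ids are illustrative):

  { "id" : "MyRedshiftCopyActivity",
    "type" : "RedshiftCopyActivity",
    "input" : { "ref" : "MyS3DataNode" },
    "output" : { "ref" : "MyRedshiftDataNode" },
    "insertMode" : "KEEP_EXISTING",
    "runsOn" : { "ref" : "MyEc2Resource" } }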
Defines a data node using Amazon Redshift.
If you do not specify primaryKeys for a destination table in RedshiftCopyActivity, you can specify a list of columns using primaryKeys, which will act as a mergeKey. However, if you have an existing primaryKey defined in an Amazon Redshift table, this setting overrides the existing key.
Defines an Amazon Redshift database.
The identifier provided by the user when the Amazon Redshift cluster was created. For example, if the endpoint for your Amazon Redshift cluster is mydb.example.us-east-1.redshift.amazonaws.com, the correct clusterId value is mydb. In the Amazon Redshift console, this value is "Cluster Name".
The JDBC endpoint for connecting to an Amazon Redshift instance owned by an account different than the pipeline.
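A minimal sketch using the clusterId form (the database name and credentials are placeholders):

  { "id" : "MyRedshiftDatabase",
    "type" : "RedshiftDatabase",
    "clusterId" : "mydb",
    "databaseName" : "database_name",
    "username" : "user_name",
    "*password" : "my_password" }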
A reference to an existing AWS Data Pipeline object.
more details: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-pipeline-expressions.html
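For example, a schedule field referring to another pipeline object by its id:

  "schedule" : { "ref" : "CopyPeriod" }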
RegEx Data Format
A custom data format defined by a regular expression.
The regular expression to parse an S3 input file. inputRegEx provides a way to retrieve columns from relatively unstructured data in a file.
The column fields retrieved by inputRegEx, but referenced as %1, %2, %3, etc. using Java formatter syntax.
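A sketch of a RegEx format (the expression and columns are illustrative):

  { "id" : "MyRegExFormat",
    "type" : "RegEx",
    "inputRegEx" : "([^ ]*) ([^ ]*)",
    "outputFormat" : "%1$s %2$s",
    "column" : [ "host STRING", "user STRING" ] }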
Defines the AWS Data Pipeline Resources
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-resources.html
You must provide either a filePath or directoryPath value.
Checks whether a key exists in an Amazon S3 data node.
Amazon S3 key to check for existence.
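A minimal sketch (the key is a placeholder):

  { "id" : "MyS3KeyExists",
    "type" : "S3KeyExists",
    "s3Key" : "s3://example-bucket/mykey" }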
A precondition to check that the Amazon S3 objects with the given prefix (represented as a URI) are present.
The Amazon S3 prefix to check for existence of objects.
Defines the timing of a scheduled event, such as when an activity runs.
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-schedule.html
How often the pipeline should run. The format is "N [minutes|hours|days|weeks|months]", where N is a number followed by one of the time specifiers. For example, "15 minutes" runs the pipeline every 15 minutes. The minimum period is 15 minutes and the maximum period is 3 years.
The date and time at which to start the scheduled pipeline runs. The only valid value is FIRST_ACTIVATION_DATE_TIME, which is assumed to be the current date and time.
The date and time to start the scheduled runs. You must use either startDateTime or startAt but not both.
The date and time to end the scheduled runs. Must be a date and time later than the value of startDateTime or startAt. The default behavior is to schedule runs until the pipeline is shut down.
The number of times to execute the pipeline after it's activated. You can't use occurrences with endDateTime.
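Combining these fields, a schedule that runs hourly for three occurrences might look like (the start time is illustrative):

  { "id" : "MySchedule",
    "type" : "Schedule",
    "period" : "1 hours",
    "startDateTime" : "2014-03-01T00:00:00",
    "occurrences" : "3" }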
Runs a command on an EC2 node. You specify the input S3 location, output S3 location and the script/command.
The command to run. This value and any associated parameters must function in the environment from which you are running the Task Runner.
An Amazon S3 URI path for a file to download and run as a shell command. Only one scriptUri or command field should be present. scriptUri cannot use parameters; use command instead.
A list of arguments to pass to the shell script.
The Amazon S3 path that receives redirected output from the command. If you use the runsOn field, this must be an Amazon S3 path because of the transitory nature of the resource running your activity. However if you specify the workerGroup field, a local file path is permitted.
The path that receives redirected system error messages from the command. If you use the runsOn field, this must be an Amazon S3 path because of the transitory nature of the resource running your activity. However if you specify the workerGroup field, a local file path is permitted.
Determines whether staging is enabled, which allows your shell commands to have access to the staged-data variables, such as ${INPUT1_STAGING_DIR} and ${OUTPUT1_STAGING_DIR}.
The input data source.
The location for the output.
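A sketch of a staged ShellCommandActivity (the ids and command are illustrative):

  { "id" : "MyShellCommandActivity",
    "type" : "ShellCommandActivity",
    "runsOn" : { "ref" : "MyEc2Resource" },
    "stage" : "true",
    "command" : "cp ${INPUT1_STAGING_DIR}/part-0 ${OUTPUT1_STAGING_DIR}/",
    "input" : { "ref" : "MyS3Input" },
    "output" : { "ref" : "MyS3Output" } }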
A Unix/Linux shell command that can be run as a precondition.
The command to run. This value and any associated parameters must function in the environment from which you are running the Task Runner.
An Amazon S3 URI path for a file to download and run as a shell command. Only one scriptUri or command field should be present. scriptUri cannot use parameters; use command instead.
A list of arguments to pass to the shell script.
The Amazon S3 path that receives redirected output from the command. If you use the runsOn field, this must be an Amazon S3 path because of the transitory nature of the resource running your activity. However if you specify the workerGroup field, a local file path is permitted.
The Amazon S3 path that receives redirected system error messages from the command. If you use the runsOn field, this must be an Amazon S3 path because of the transitory nature of the resource running your activity. However if you specify the workerGroup field, a local file path is permitted.
Sends an Amazon SNS notification message when an activity fails or finishes successfully.
The subject line of the Amazon SNS notification message.
The body text of the Amazon SNS notification.
The destination Amazon SNS topic ARN for the message.
The IAM role to use to create the Amazon SNS alarm.
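Putting these fields together (the topic ARN, account number, and role are placeholders):

  { "id" : "MySnsAlarm",
    "type" : "SnsAlarm",
    "topicArn" : "arn:aws:sns:us-east-1:123456789012:example-topic",
    "subject" : "Pipeline notification",
    "message" : "Activity finished.",
    "role" : "DataPipelineDefaultRole" }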
Runs a SQL query on a database. You specify the input table where the SQL query is run and the output table where the results are stored. If the output table doesn't exist, this operation creates a new table with that name.
The SQL script to run. For example:
  insert into output select * from input where lastModified in range (?, ?)
Note that the script is not evaluated as an expression; in that situation, scriptArgument values are useful.
A list of variables for the script.
Note that scriptUri is deliberately missing from this implementation, as there does not seem to be any use case for it for now.
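For instance, a sketch of a SqlActivity whose script placeholders are filled from scriptArgument (the ids and argument expressions are illustrative):

  { "id" : "MySqlActivity",
    "type" : "SqlActivity",
    "database" : { "ref" : "MyDatabase" },
    "script" : "insert into output select * from input where lastModified in range (?, ?)",
    "scriptArgument" : [ "#{@scheduledStartTime}", "#{@scheduledEndTime}" ],
    "runsOn" : { "ref" : "MyEc2Resource" } }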
Example:
{ "id" : "Sql Table", "type" : "MySqlDataNode", "schedule" : { "ref" : "CopyPeriod" }, "table" : "adEvents", "selectQuery" : "select * from #{table} where eventTime >= '#{@scheduledStartTime.format('YYYY-MM-dd HH:mm:ss')}' and eventTime < '#{@scheduledEndTime.format('YYYY-MM-dd HH:mm:ss')}'" }
An action to trigger the cancellation of a pending or unfinished activity, resource, or data node. AWS Data Pipeline attempts to put the activity, resource, or data node into the CANCELLED state if it does not finish by the lateAfterTimeout value.
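A minimal sketch, referenced from an activity's onLateAction field (the id is illustrative):

  { "id" : "MyTerminateAction",
    "type" : "Terminate" }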
A delimited data format where the column separator is a tab character and the record separator is a newline character.
The structure of the data file. Use column names and data types separated by a space. For example:
[ "Name STRING", "Score INT", "DateOfBirth TIMESTAMP" ]
You can omit the data type when using STRING, which is the default. Valid data types: TINYINT, SMALLINT, INT, BIGINT, BOOLEAN, FLOAT, DOUBLE, STRING, TIMESTAMP
A character, for example "\", that instructs the parser to ignore the next character.
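A minimal sketch reusing the column structure from the example above:

  { "id" : "MyTsvFormat",
    "type" : "TSV",
    "column" : [ "Name STRING", "Score INT", "DateOfBirth TIMESTAMP" ] }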
Serializes an AWS Data Pipeline object to JSON.
AWS Data Pipeline activity objects.
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-activities.html