The timeout time interval for an object attempt. If an attempt does not complete within the start time plus this time interval, AWS Data Pipeline marks the attempt as failed and your retry settings determine the next steps taken.
One or more references to other Activities that must reach the FINISHED state before this activity will start.
Determines whether pipeline object failures and rerun commands cascade through pipeline object dependencies. Possible values are cascade and none.
An Amazon S3 path to capture the Pig script that ran after all the expressions in it were evaluated, including staging information. This script is stored for historical and troubleshooting purposes.
The ID of the object. IDs must be unique within a pipeline definition.
The input data source.
The time period in which the object run must start. If the object does not start within the scheduled start time plus this time interval, it is considered late.
The maximum number of concurrent active instances of a component. Re-runs do not count toward the number of active instances.
The maximum number of times to retry the action. The default value is 2, which results in 3 tries total (1 original attempt plus 2 retries). The maximum value is 5 (6 total attempts).
The optional, user-defined label of the object. If you do not provide a name for an object in a pipeline definition, AWS Data Pipeline automatically duplicates the value of id.
The SNS alarm to raise when the activity fails.
The SNS alarm to raise when the activity fails to start on time.
The SNS alarm to raise when the activity succeeds.
The location for the output.
A condition that must be met before the object can run. To specify multiple conditions, add multiple precondition fields. The activity cannot run until all its conditions are met.
The timeout duration between two retry attempts. The default is 10 minutes.
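The timing and retry fields described here (attemptTimeout, lateAfterTimeout, maximumRetries, retryDelay) are set together on the activity object in the pipeline definition. A minimal sketch, with an illustrative id and illustrative period values (AWS Data Pipeline expresses all field values, including numbers and durations, as strings):

```json
{
  "id": "MyPigActivity",
  "type": "PigActivity",
  "attemptTimeout": "1 hours",
  "lateAfterTimeout": "30 minutes",
  "maximumRetries": "3",
  "retryDelay": "10 minutes"
}
```

With these values, an attempt that has not completed one hour after it starts is marked failed, and up to three retries are made, ten minutes apart.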
The Pig script to run. You must specify either script or scriptUri.
The location of the Pig script to run (for example, s3://scriptLocation). You must specify either scriptUri or script.
The arguments to pass to the Pig script. You can use scriptVariable with script or scriptUri.
Determines whether staging is enabled, which allows your Pig script to access the staged-data tables, such as ${INPUT1} and ${OUTPUT1}.
The type of object. Use one of the predefined AWS Data Pipeline object types.
The worker group. This is used for routing tasks. If you provide a runsOn value and workerGroup exists, workerGroup is ignored.
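As a sketch of how the script, staging, and data-node fields fit together, a PigActivity object in a pipeline definition might look like the following. The ids, the script body, the scriptVariable value, and the referenced input, output, and runsOn objects are illustrative, not taken from this reference:

```json
{
  "id": "MyPigActivity",
  "type": "PigActivity",
  "stage": "true",
  "input": { "ref": "MyInputDataNode" },
  "output": { "ref": "MyOutputDataNode" },
  "runsOn": { "ref": "MyEmrCluster" },
  "script": "A = LOAD ${INPUT1}; B = FILTER A BY $pCondition; STORE B INTO ${OUTPUT1};",
  "scriptVariable": ["pCondition=col1 > 0"]
}
```

Because stage is true, the staged input and output tables are available to the script as ${INPUT1} and ${OUTPUT1}, while scriptVariable supplies additional parameter substitutions (here the hypothetical $pCondition).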
ref: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-pigactivity.html