Class WorkflowTemplateJobHadoopJobArgs


  • public final class WorkflowTemplateJobHadoopJobArgs
    extends com.pulumi.resources.ResourceArgs
    • Method Detail

      • archiveUris

        public java.util.Optional<com.pulumi.core.Output<java.util.List<java.lang.String>>> archiveUris()
        Returns:
        Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
      • args

        public java.util.Optional<com.pulumi.core.Output<java.util.List<java.lang.String>>> args()
        Returns:
        Optional. The arguments to pass to the driver. Do not include arguments, such as `--conf`, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
      • fileUris

        public java.util.Optional<com.pulumi.core.Output<java.util.List<java.lang.String>>> fileUris()
        Returns:
        Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
      • jarFileUris

        public java.util.Optional<com.pulumi.core.Output<java.util.List<java.lang.String>>> jarFileUris()
        Returns:
Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Hadoop driver and tasks.
      • mainClass

        public java.util.Optional<com.pulumi.core.Output<java.lang.String>> mainClass()
        Returns:
        The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in `jar_file_uris`.
      • mainJarFileUri

        public java.util.Optional<com.pulumi.core.Output<java.lang.String>> mainJarFileUri()
        Returns:
        The HCFS URI of the jar file that contains the main class.
      • properties

        public java.util.Optional<com.pulumi.core.Output<java.util.Map<java.lang.String,java.lang.String>>> properties()
        Returns:
        Optional. The properties to set on daemon config files. Property keys are specified in `prefix:property` format, for example `core:hadoop.tmp.dir`. The following are the supported prefixes and their mappings:

          • capacity-scheduler: `capacity-scheduler.xml`
          • core: `core-site.xml`
          • distcp: `distcp-default.xml`
          • hdfs: `hdfs-site.xml`
          • hive: `hive-site.xml`
          • mapred: `mapred-site.xml`
          • pig: `pig.properties`
          • spark: `spark-defaults.conf`
          • yarn: `yarn-site.xml`

        For more information, see https://cloud.google.com/dataproc/docs/concepts/cluster-properties.
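The prefix-to-file mapping above can be illustrated with a plain-Java sketch. The class and method names below (`HadoopJobPropertyPrefixes`, `targetFile`) are hypothetical helpers for illustration, not part of the Pulumi SDK; only the prefix table itself comes from the documentation above.

```java
import java.util.Map;

public class HadoopJobPropertyPrefixes {
    // Supported prefixes and the daemon config file each one maps to,
    // as listed in the properties() documentation above.
    static final Map<String, String> PREFIX_TO_FILE = Map.of(
        "capacity-scheduler", "capacity-scheduler.xml",
        "core", "core-site.xml",
        "distcp", "distcp-default.xml",
        "hdfs", "hdfs-site.xml",
        "hive", "hive-site.xml",
        "mapred", "mapred-site.xml",
        "pig", "pig.properties",
        "spark", "spark-defaults.conf",
        "yarn", "yarn-site.xml");

    // Resolve a `prefix:property` key to the config file it targets,
    // e.g. "core:hadoop.tmp.dir" resolves to core-site.xml.
    static String targetFile(String key) {
        int colon = key.indexOf(':');
        if (colon < 0) {
            throw new IllegalArgumentException("expected prefix:property, got " + key);
        }
        String file = PREFIX_TO_FILE.get(key.substring(0, colon));
        if (file == null) {
            throw new IllegalArgumentException("unsupported prefix in " + key);
        }
        return file;
    }

    public static void main(String[] args) {
        System.out.println(targetFile("core:hadoop.tmp.dir"));
    }
}
```

For example, a properties map containing the key `core:hadoop.tmp.dir` causes Dataproc to write that property into `core-site.xml` on the cluster nodes.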