Package com.google.cloud.bigquery
Interface LoadConfiguration.Builder
- All Known Implementing Classes: LoadJobConfiguration.Builder, WriteChannelConfiguration.Builder
- Enclosing interface: LoadConfiguration

public static interface LoadConfiguration.Builder
Method Summary

All methods return LoadConfiguration.Builder except build(), which returns LoadConfiguration.

- build()
- setAutodetect(Boolean autodetect): [Experimental] Sets automatic inference of the options and schema for CSV and JSON sources.
- setClustering(Clustering clustering): Sets the clustering specification for the destination table.
- setCreateDisposition(JobInfo.CreateDisposition createDisposition): Sets whether the job is allowed to create new tables.
- setDecimalTargetTypes(List<String> decimalTargetTypes): Defines the list of possible SQL data types to which the source decimal values are converted.
- setDestinationEncryptionConfiguration(EncryptionConfiguration encryptionConfiguration)
- setDestinationTable(TableId destinationTable): Sets the destination table to load the data into.
- setFormatOptions(FormatOptions formatOptions): Sets the source format, and possibly some parsing options, of the external data.
- setIgnoreUnknownValues(Boolean ignoreUnknownValues): Sets whether BigQuery should allow extra values that are not represented in the table schema.
- setMaxBadRecords(Integer maxBadRecords): Sets the maximum number of bad records that BigQuery can ignore when running the job.
- setNullMarker(String nullMarker): Sets the string that represents a null value in a CSV file.
- setSchema(Schema schema): Sets the schema for the destination table.
- setSchemaUpdateOptions(List<JobInfo.SchemaUpdateOption> schemaUpdateOptions): [Experimental] Sets options allowing the schema of the destination table to be updated as a side effect of the load job.
- setTimePartitioning(TimePartitioning timePartitioning): Sets the time partitioning specification for the destination table.
- setUseAvroLogicalTypes(Boolean useAvroLogicalTypes): If FormatOptions is set to AVRO, you can interpret logical types into their corresponding types (such as TIMESTAMP) instead of only using their raw types (such as INTEGER).
- setWriteDisposition(JobInfo.WriteDisposition writeDisposition): Sets the action that should occur if the destination table already exists.
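The builder methods above chain together. A minimal sketch using LoadJobConfiguration.Builder (one of the implementing classes listed above); the dataset, table, and Cloud Storage URI are hypothetical placeholders:

```java
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;

public class LoadConfigExample {
  public static void main(String[] args) {
    // Hypothetical destination table and source URI.
    TableId table = TableId.of("my_dataset", "my_table");

    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(table, "gs://my-bucket/data.csv")
            .setFormatOptions(FormatOptions.csv())          // source is CSV
            .setCreateDisposition(JobInfo.CreateDisposition.CREATE_IF_NEEDED)
            .setWriteDisposition(JobInfo.WriteDisposition.WRITE_APPEND)
            .setMaxBadRecords(10)                           // tolerate up to 10 bad rows
            .setIgnoreUnknownValues(true)                   // skip values not in the schema
            .build();

    System.out.println(config.getDestinationTable());
  }
}
```

The resulting configuration is passed to a load job (e.g. via JobInfo.of and BigQuery.create), which is outside the scope of this interface.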
-
Method Details
setDestinationTable

LoadConfiguration.Builder setDestinationTable(TableId destinationTable)

Sets the destination table to load the data into.

setDestinationEncryptionConfiguration

LoadConfiguration.Builder setDestinationEncryptionConfiguration(EncryptionConfiguration encryptionConfiguration)
setCreateDisposition

LoadConfiguration.Builder setCreateDisposition(JobInfo.CreateDisposition createDisposition)

Sets whether the job is allowed to create new tables.

See Also: JobInfo.CreateDisposition
setWriteDisposition

LoadConfiguration.Builder setWriteDisposition(JobInfo.WriteDisposition writeDisposition)

Sets the action that should occur if the destination table already exists.

See Also: JobInfo.WriteDisposition
setFormatOptions

LoadConfiguration.Builder setFormatOptions(FormatOptions formatOptions)

Sets the source format, and possibly some parsing options, of the external data. Supported formats are CSV, NEWLINE_DELIMITED_JSON and DATASTORE_BACKUP. If not specified, CSV format is assumed.
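As a sketch, the three supported formats correspond to factory methods on FormatOptions; CsvOptions additionally carries CSV parsing settings (the delimiter and header-row values below are illustrative):

```java
import com.google.cloud.bigquery.CsvOptions;
import com.google.cloud.bigquery.FormatOptions;

public class FormatOptionsExample {
  public static void main(String[] args) {
    FormatOptions csv = FormatOptions.csv();                // CSV (the default)
    FormatOptions json = FormatOptions.json();              // NEWLINE_DELIMITED_JSON
    FormatOptions backup = FormatOptions.datastoreBackup(); // DATASTORE_BACKUP

    // CSV with parsing options: skip one header row, split on ';'.
    CsvOptions csvOpts = CsvOptions.newBuilder()
        .setSkipLeadingRows(1)
        .setFieldDelimiter(";")
        .build();

    System.out.println(csv.getType() + " " + json.getType() + " " + backup.getType());
  }
}
```

Because CsvOptions extends FormatOptions, it can be passed directly to setFormatOptions.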
setNullMarker

LoadConfiguration.Builder setNullMarker(String nullMarker)

Sets the string that represents a null value in a CSV file. For example, if you specify "\N", BigQuery interprets "\N" as a null value when loading a CSV file. The default value is the empty string. If you set this property to a custom value, BigQuery throws an error if an empty string is present for all data types except for STRING and BYTE. For STRING and BYTE columns, BigQuery interprets the empty string as an empty value.
setMaxBadRecords

LoadConfiguration.Builder setMaxBadRecords(Integer maxBadRecords)

Sets the maximum number of bad records that BigQuery can ignore when running the job. If the number of bad records exceeds this value, an invalid error is returned in the job result. By default no bad record is ignored.
setSchema

LoadConfiguration.Builder setSchema(Schema schema)

Sets the schema for the destination table. The schema can be omitted if the destination table already exists, or if you're loading data from a Google Cloud Datastore backup (i.e. the DATASTORE_BACKUP format option).
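An explicit schema is built from Field entries. A minimal sketch with a hypothetical two-column table:

```java
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardSQLTypeName;

public class SchemaExample {
  public static void main(String[] args) {
    // Hypothetical schema: one STRING column, one INT64 column.
    Schema schema = Schema.of(
        Field.of("name", StandardSQLTypeName.STRING),
        Field.of("age", StandardSQLTypeName.INT64));

    System.out.println(schema.getFields().size());
  }
}
```

The resulting Schema object is what setSchema expects.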
setIgnoreUnknownValues

LoadConfiguration.Builder setIgnoreUnknownValues(Boolean ignoreUnknownValues)

Sets whether BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. By default unknown values are not allowed.
setSchemaUpdateOptions

LoadConfiguration.Builder setSchemaUpdateOptions(List<JobInfo.SchemaUpdateOption> schemaUpdateOptions)

[Experimental] Sets options allowing the schema of the destination table to be updated as a side effect of the load job. Schema update options are supported in two cases: when writeDisposition is WRITE_APPEND, and when writeDisposition is WRITE_TRUNCATE and the destination table is a partition of a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE will always overwrite the schema.
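The WRITE_APPEND case above can be sketched as follows; the table and URI are hypothetical, and the two options shown are the enum's ALLOW_FIELD_ADDITION and ALLOW_FIELD_RELAXATION values:

```java
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;
import java.util.Arrays;

public class SchemaUpdateExample {
  public static void main(String[] args) {
    // Hypothetical table/URI; WRITE_APPEND is one of the two supported cases.
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/new_data.json")
            .setWriteDisposition(JobInfo.WriteDisposition.WRITE_APPEND)
            .setSchemaUpdateOptions(Arrays.asList(
                JobInfo.SchemaUpdateOption.ALLOW_FIELD_ADDITION,    // new columns allowed
                JobInfo.SchemaUpdateOption.ALLOW_FIELD_RELAXATION)) // REQUIRED -> NULLABLE
            .build();

    System.out.println(config.getSchemaUpdateOptions());
  }
}
```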
setAutodetect

LoadConfiguration.Builder setAutodetect(Boolean autodetect)

[Experimental] Sets automatic inference of the options and schema for CSV and JSON sources.
setTimePartitioning

LoadConfiguration.Builder setTimePartitioning(TimePartitioning timePartitioning)

Sets the time partitioning specification for the destination table.
setClustering

LoadConfiguration.Builder setClustering(Clustering clustering)

Sets the clustering specification for the destination table.
setUseAvroLogicalTypes

LoadConfiguration.Builder setUseAvroLogicalTypes(Boolean useAvroLogicalTypes)

If FormatOptions is set to AVRO, you can interpret logical types into their corresponding types (such as TIMESTAMP) instead of only using their raw types (such as INTEGER). The value may be null.
setDecimalTargetTypes

LoadConfiguration.Builder setDecimalTargetTypes(List<String> decimalTargetTypes)

Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values.

Parameters:
decimalTargetTypes - decimalTargetType or null for none
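A sketch of the selection order described above, with a hypothetical Avro source: listing all three types lets BigQuery pick NUMERIC where precision and scale fit, falling back to BIGNUMERIC and then STRING:

```java
import com.google.cloud.bigquery.LoadJobConfiguration;
import com.google.cloud.bigquery.TableId;
import java.util.Arrays;

public class DecimalTargetExample {
  public static void main(String[] args) {
    // Hypothetical table/URI. Preference order is NUMERIC, then BIGNUMERIC, then STRING.
    LoadJobConfiguration config =
        LoadJobConfiguration.newBuilder(
                TableId.of("my_dataset", "my_table"), "gs://my-bucket/decimals.avro")
            .setDecimalTargetTypes(Arrays.asList("NUMERIC", "BIGNUMERIC", "STRING"))
            .build();

    System.out.println(config.getDecimalTargetTypes());
  }
}
```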
build

LoadConfiguration build()