Class BaseBigQueryStorageClient
All Implemented Interfaces:
com.google.api.gax.core.BackgroundResource, AutoCloseable
The BigQuery storage API can be used to read data stored in BigQuery.
The v1beta1 API is not yet officially deprecated, and will go through a full deprecation cycle (https://cloud.google.com/products#product-launch-stages) before the service is turned down. However, new code should use the v1 API going forward.
This class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  TableReferenceProto.TableReference tableReference =
      TableReferenceProto.TableReference.newBuilder().build();
  ProjectName parent = ProjectName.of("[PROJECT]");
  int requestedStreams = 1017221410;
  Storage.ReadSession response =
      baseBigQueryStorageClient.createReadSession(tableReference, parent, requestedStreams);
}
Note: close() needs to be called on the BaseBigQueryStorageClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().
The surface of this class includes several types of Java methods for each of the API's methods:
- A "flattened" method. With this type of method, the fields of the request type have been converted into function parameters. It may be the case that not all fields are available as parameters, and not every API method will have a flattened method entry point.
- A "request object" method. This type of method only takes one parameter, a request object, which must be constructed before the call. Not every API method will have a request object method.
- A "callable" method. This type of method takes no parameters and returns an immutable API callable object, which can be used to initiate calls to the service.
See the individual methods for example code.
Many parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.
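The resource name used as the `parent` of a read session follows the `projects/{project_id}` pattern. As a plain-Java illustration of what the generated format and parse helpers encode (a sketch, not the generated ProjectName class itself; the class and method names below are hypothetical):

```java
// Sketch of the resource-name convention; the generated ProjectName class
// exposes equivalent of(), format(), toString(), and parse() methods.
final class ProjectNameSketch {
    private static final String PREFIX = "projects/";

    // format: build "projects/{project_id}" from an identifier
    static String format(String projectId) {
        return PREFIX + projectId;
    }

    // parse: extract the identifier back out of a formatted name
    static String parse(String name) {
        if (!name.startsWith(PREFIX)) {
            throw new IllegalArgumentException("not a project resource name: " + name);
        }
        return name.substring(PREFIX.length());
    }

    public static void main(String[] args) {
        String name = format("my-project");
        System.out.println(name);         // projects/my-project
        System.out.println(parse(name));  // my-project
    }
}
```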
This class can be customized by passing in a custom instance of BaseBigQueryStorageSettings to create(). For example:
To customize credentials:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
BaseBigQueryStorageSettings baseBigQueryStorageSettings =
    BaseBigQueryStorageSettings.newBuilder()
        .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))
        .build();
BaseBigQueryStorageClient baseBigQueryStorageClient =
    BaseBigQueryStorageClient.create(baseBigQueryStorageSettings);
To customize the endpoint:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
BaseBigQueryStorageSettings baseBigQueryStorageSettings =
    BaseBigQueryStorageSettings.newBuilder().setEndpoint(myEndpoint).build();
BaseBigQueryStorageClient baseBigQueryStorageClient =
    BaseBigQueryStorageClient.create(baseBigQueryStorageSettings);
Please refer to the GitHub repository's samples for more quickstart code snippets.
-
Constructor Summary

Modifier | Constructor | Description
protected | BaseBigQueryStorageClient(BaseBigQueryStorageSettings settings) | Constructs an instance of BaseBigQueryStorageClient, using the given settings.
protected | BaseBigQueryStorageClient(BigQueryStorageStub stub) | Constructs an instance of BaseBigQueryStorageClient, using the given stub for making calls.
-
Method Summary

Modifier and Type | Method | Description
boolean | awaitTermination(long duration, TimeUnit unit) |
final Storage.BatchCreateReadSessionStreamsResponse | batchCreateReadSessionStreams(Storage.BatchCreateReadSessionStreamsRequest request) | Creates additional streams for a ReadSession.
final Storage.BatchCreateReadSessionStreamsResponse | batchCreateReadSessionStreams(Storage.ReadSession session, int requestedStreams) | Creates additional streams for a ReadSession.
final com.google.api.gax.rpc.UnaryCallable<Storage.BatchCreateReadSessionStreamsRequest,Storage.BatchCreateReadSessionStreamsResponse> | batchCreateReadSessionStreamsCallable() | Creates additional streams for a ReadSession.
final void | close() |
static final BaseBigQueryStorageClient | create() | Constructs an instance of BaseBigQueryStorageClient with default settings.
static final BaseBigQueryStorageClient | create(BaseBigQueryStorageSettings settings) | Constructs an instance of BaseBigQueryStorageClient, using the given settings.
static final BaseBigQueryStorageClient | create(BigQueryStorageStub stub) | Constructs an instance of BaseBigQueryStorageClient, using the given stub for making calls.
final Storage.ReadSession | createReadSession(Storage.CreateReadSessionRequest request) | Creates a new read session.
final Storage.ReadSession | createReadSession(TableReferenceProto.TableReference tableReference, ProjectName parent, int requestedStreams) | Creates a new read session.
final Storage.ReadSession | createReadSession(TableReferenceProto.TableReference tableReference, String parent, int requestedStreams) | Creates a new read session.
final com.google.api.gax.rpc.UnaryCallable<Storage.CreateReadSessionRequest,Storage.ReadSession> | createReadSessionCallable() | Creates a new read session.
final void | finalizeStream(Storage.FinalizeStreamRequest request) | Causes a single stream in a ReadSession to gracefully stop.
final void | finalizeStream(Storage.Stream stream) | Causes a single stream in a ReadSession to gracefully stop.
final com.google.api.gax.rpc.UnaryCallable<Storage.FinalizeStreamRequest,com.google.protobuf.Empty> | finalizeStreamCallable() | Causes a single stream in a ReadSession to gracefully stop.
final BaseBigQueryStorageSettings | getSettings() |
BigQueryStorageStub | getStub() |
boolean | isShutdown() |
boolean | isTerminated() |
final com.google.api.gax.rpc.ServerStreamingCallable<Storage.ReadRowsRequest,Storage.ReadRowsResponse> | readRowsCallable() | Reads rows from the table in the format prescribed by the read session.
void | shutdown() |
void | shutdownNow() |
final Storage.SplitReadStreamResponse | splitReadStream(Storage.SplitReadStreamRequest request) | Splits a given read stream into two Streams.
final Storage.SplitReadStreamResponse | splitReadStream(Storage.Stream originalStream) | Splits a given read stream into two Streams.
final com.google.api.gax.rpc.UnaryCallable<Storage.SplitReadStreamRequest,Storage.SplitReadStreamResponse> | splitReadStreamCallable() | Splits a given read stream into two Streams.
-
Constructor Details
-
BaseBigQueryStorageClient
Constructs an instance of BaseBigQueryStorageClient, using the given settings. This is protected so that it is easy to make a subclass, but otherwise, the static factory methods should be preferred.
Throws:
IOException
-
BaseBigQueryStorageClient
-
-
Method Details
-
create
public static final BaseBigQueryStorageClient create() throws IOException
Constructs an instance of BaseBigQueryStorageClient with default settings.
Throws:
IOException
-
create
public static final BaseBigQueryStorageClient create(BaseBigQueryStorageSettings settings) throws IOException
Constructs an instance of BaseBigQueryStorageClient, using the given settings. The channels are created based on the settings passed in, or defaults for any settings that are not set.
Throws:
IOException
-
create
Constructs an instance of BaseBigQueryStorageClient, using the given stub for making calls. This is for advanced usage; prefer using create(BaseBigQueryStorageSettings).
-
getSettings
-
getStub
-
createReadSession
public final Storage.ReadSession createReadSession(TableReferenceProto.TableReference tableReference, ProjectName parent, int requestedStreams)
Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.
A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.
Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  TableReferenceProto.TableReference tableReference =
      TableReferenceProto.TableReference.newBuilder().build();
  ProjectName parent = ProjectName.of("[PROJECT]");
  int requestedStreams = 1017221410;
  Storage.ReadSession response =
      baseBigQueryStorageClient.createReadSession(tableReference, parent, requestedStreams);
}
Parameters:
tableReference - Required. Reference to the table to read.
parent - Required. String of the form `projects/{project_id}` indicating the project this ReadSession is associated with. This is the project that will be billed for usage.
requestedStreams - Initial number of streams. If unset or 0, we will provide a value of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table and the maximum amount of parallelism allowed by the system. Streams must be read starting from offset 0.
Throws:
com.google.api.gax.rpc.ApiException - if the remote call fails
-
createReadSession
public final Storage.ReadSession createReadSession(TableReferenceProto.TableReference tableReference, String parent, int requestedStreams)
Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.
A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.
Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  TableReferenceProto.TableReference tableReference =
      TableReferenceProto.TableReference.newBuilder().build();
  String parent = ProjectName.of("[PROJECT]").toString();
  int requestedStreams = 1017221410;
  Storage.ReadSession response =
      baseBigQueryStorageClient.createReadSession(tableReference, parent, requestedStreams);
}
Parameters:
tableReference - Required. Reference to the table to read.
parent - Required. String of the form `projects/{project_id}` indicating the project this ReadSession is associated with. This is the project that will be billed for usage.
requestedStreams - Initial number of streams. If unset or 0, we will provide a value of streams so as to produce reasonable throughput. Must be non-negative. The number of streams may be lower than the requested number, depending on the amount of parallelism that is reasonable for the table and the maximum amount of parallelism allowed by the system. Streams must be read starting from offset 0.
Throws:
com.google.api.gax.rpc.ApiException - if the remote call fails
-
createReadSession
public final Storage.ReadSession createReadSession(Storage.CreateReadSessionRequest request)
Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.
A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.
Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.CreateReadSessionRequest request =
      Storage.CreateReadSessionRequest.newBuilder()
          .setTableReference(TableReferenceProto.TableReference.newBuilder().build())
          .setParent(ProjectName.of("[PROJECT]").toString())
          .setTableModifiers(TableReferenceProto.TableModifiers.newBuilder().build())
          .setRequestedStreams(1017221410)
          .setReadOptions(ReadOptions.TableReadOptions.newBuilder().build())
          .setFormat(Storage.DataFormat.forNumber(0))
          .setShardingStrategy(Storage.ShardingStrategy.forNumber(0))
          .build();
  Storage.ReadSession response = baseBigQueryStorageClient.createReadSession(request);
}
Parameters:
request - The request object containing all of the parameters for the API call.
Throws:
com.google.api.gax.rpc.ApiException - if the remote call fails
-
createReadSessionCallable
public final com.google.api.gax.rpc.UnaryCallable<Storage.CreateReadSessionRequest,Storage.ReadSession> createReadSessionCallable()
Creates a new read session. A read session divides the contents of a BigQuery table into one or more streams, which can then be used to read data from the table. The read session also specifies properties of the data to be read, such as a list of columns or a push-down filter describing the rows to be returned.
A particular row can be read by at most one stream. When the caller has reached the end of each stream in the session, then all the data in the table has been read.
Read sessions automatically expire 6 hours after they are created and do not require manual clean-up by the caller.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.CreateReadSessionRequest request =
      Storage.CreateReadSessionRequest.newBuilder()
          .setTableReference(TableReferenceProto.TableReference.newBuilder().build())
          .setParent(ProjectName.of("[PROJECT]").toString())
          .setTableModifiers(TableReferenceProto.TableModifiers.newBuilder().build())
          .setRequestedStreams(1017221410)
          .setReadOptions(ReadOptions.TableReadOptions.newBuilder().build())
          .setFormat(Storage.DataFormat.forNumber(0))
          .setShardingStrategy(Storage.ShardingStrategy.forNumber(0))
          .build();
  ApiFuture<Storage.ReadSession> future =
      baseBigQueryStorageClient.createReadSessionCallable().futureCall(request);
  // Do something.
  Storage.ReadSession response = future.get();
}
-
readRowsCallable
public final com.google.api.gax.rpc.ServerStreamingCallable<Storage.ReadRowsRequest,Storage.ReadRowsResponse> readRowsCallable()
Reads rows from the table in the format prescribed by the read session. Each response contains one or more table rows, up to a maximum of 10 MiB per response; read requests which attempt to read individual rows larger than this will fail.
Each request also returns a set of stream statistics reflecting the estimated total number of rows in the read stream. This number is computed based on the total table size and the number of active streams in the read session, and may change as other streams continue to read data.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.ReadRowsRequest request =
      Storage.ReadRowsRequest.newBuilder()
          .setReadPosition(Storage.StreamPosition.newBuilder().build())
          .build();
  ServerStream<Storage.ReadRowsResponse> stream =
      baseBigQueryStorageClient.readRowsCallable().call(request);
  for (Storage.ReadRowsResponse response : stream) {
    // Do something when a response is received.
  }
}
-
batchCreateReadSessionStreams
public final Storage.BatchCreateReadSessionStreamsResponse batchCreateReadSessionStreams(Storage.ReadSession session, int requestedStreams)
Creates additional streams for a ReadSession. This API can be used to dynamically adjust the parallelism of a batch processing task upwards by adding additional workers.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.ReadSession session = Storage.ReadSession.newBuilder().build();
  int requestedStreams = 1017221410;
  Storage.BatchCreateReadSessionStreamsResponse response =
      baseBigQueryStorageClient.batchCreateReadSessionStreams(session, requestedStreams);
}
Parameters:
session - Required. Must be a non-expired session obtained from a call to CreateReadSession. Only the name field needs to be set.
requestedStreams - Required. Number of new streams requested. Must be positive. Number of added streams may be less than this; see CreateReadSessionRequest for more information.
Throws:
com.google.api.gax.rpc.ApiException - if the remote call fails
-
batchCreateReadSessionStreams
public final Storage.BatchCreateReadSessionStreamsResponse batchCreateReadSessionStreams(Storage.BatchCreateReadSessionStreamsRequest request)
Creates additional streams for a ReadSession. This API can be used to dynamically adjust the parallelism of a batch processing task upwards by adding additional workers.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.BatchCreateReadSessionStreamsRequest request =
      Storage.BatchCreateReadSessionStreamsRequest.newBuilder()
          .setSession(Storage.ReadSession.newBuilder().build())
          .setRequestedStreams(1017221410)
          .build();
  Storage.BatchCreateReadSessionStreamsResponse response =
      baseBigQueryStorageClient.batchCreateReadSessionStreams(request);
}
Parameters:
request - The request object containing all of the parameters for the API call.
Throws:
com.google.api.gax.rpc.ApiException - if the remote call fails
-
batchCreateReadSessionStreamsCallable
public final com.google.api.gax.rpc.UnaryCallable<Storage.BatchCreateReadSessionStreamsRequest,Storage.BatchCreateReadSessionStreamsResponse> batchCreateReadSessionStreamsCallable()
Creates additional streams for a ReadSession. This API can be used to dynamically adjust the parallelism of a batch processing task upwards by adding additional workers.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.BatchCreateReadSessionStreamsRequest request =
      Storage.BatchCreateReadSessionStreamsRequest.newBuilder()
          .setSession(Storage.ReadSession.newBuilder().build())
          .setRequestedStreams(1017221410)
          .build();
  ApiFuture<Storage.BatchCreateReadSessionStreamsResponse> future =
      baseBigQueryStorageClient.batchCreateReadSessionStreamsCallable().futureCall(request);
  // Do something.
  Storage.BatchCreateReadSessionStreamsResponse response = future.get();
}
-
finalizeStream
public final void finalizeStream(Storage.Stream stream)
Causes a single stream in a ReadSession to gracefully stop. This API can be used to dynamically adjust the parallelism of a batch processing task downwards without losing data.
This API does not delete the stream -- it remains visible in the ReadSession, and any data processed by the stream is not released to other streams. However, no additional data will be assigned to the stream once this call completes. Callers must continue reading data on the stream until the end of the stream is reached so that data which has already been assigned to the stream will be processed.
This method will return an error if there are no other live streams in the Session, or if SplitReadStream() has been called on the given Stream.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.Stream stream = Storage.Stream.newBuilder().build();
  baseBigQueryStorageClient.finalizeStream(stream);
}
Parameters:
stream - Required. Stream to finalize.
Throws:
com.google.api.gax.rpc.ApiException - if the remote call fails
-
finalizeStream
public final void finalizeStream(Storage.FinalizeStreamRequest request)
Causes a single stream in a ReadSession to gracefully stop. This API can be used to dynamically adjust the parallelism of a batch processing task downwards without losing data.
This API does not delete the stream -- it remains visible in the ReadSession, and any data processed by the stream is not released to other streams. However, no additional data will be assigned to the stream once this call completes. Callers must continue reading data on the stream until the end of the stream is reached so that data which has already been assigned to the stream will be processed.
This method will return an error if there are no other live streams in the Session, or if SplitReadStream() has been called on the given Stream.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.FinalizeStreamRequest request =
      Storage.FinalizeStreamRequest.newBuilder()
          .setStream(Storage.Stream.newBuilder().build())
          .build();
  baseBigQueryStorageClient.finalizeStream(request);
}
Parameters:
request - The request object containing all of the parameters for the API call.
Throws:
com.google.api.gax.rpc.ApiException - if the remote call fails
-
finalizeStreamCallable
public final com.google.api.gax.rpc.UnaryCallable<Storage.FinalizeStreamRequest,com.google.protobuf.Empty> finalizeStreamCallable()
Causes a single stream in a ReadSession to gracefully stop. This API can be used to dynamically adjust the parallelism of a batch processing task downwards without losing data.
This API does not delete the stream -- it remains visible in the ReadSession, and any data processed by the stream is not released to other streams. However, no additional data will be assigned to the stream once this call completes. Callers must continue reading data on the stream until the end of the stream is reached so that data which has already been assigned to the stream will be processed.
This method will return an error if there are no other live streams in the Session, or if SplitReadStream() has been called on the given Stream.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.FinalizeStreamRequest request =
      Storage.FinalizeStreamRequest.newBuilder()
          .setStream(Storage.Stream.newBuilder().build())
          .build();
  ApiFuture<Empty> future =
      baseBigQueryStorageClient.finalizeStreamCallable().futureCall(request);
  // Do something.
  future.get();
}
-
splitReadStream
public final Storage.SplitReadStreamResponse splitReadStream(Storage.Stream originalStream)
Splits a given read stream into two Streams. These streams are referred to as the primary and the residual of the split. The original stream can still be read from in the same manner as before. Both of the returned streams can also be read from, and the total rows returned by both child streams will be the same as the rows read from the original stream.
Moreover, the two child streams will be allocated back to back in the original Stream. Concretely, it is guaranteed that for streams Original, Primary, and Residual, that Original[0-j] = Primary[0-j] and Original[j-n] = Residual[0-m] once the streams have been read to completion.
This method is guaranteed to be idempotent.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.Stream originalStream = Storage.Stream.newBuilder().build();
  Storage.SplitReadStreamResponse response =
      baseBigQueryStorageClient.splitReadStream(originalStream);
}
Parameters:
originalStream - Required. Stream to split.
Throws:
com.google.api.gax.rpc.ApiException - if the remote call fails
-
splitReadStream
public final Storage.SplitReadStreamResponse splitReadStream(Storage.SplitReadStreamRequest request)
Splits a given read stream into two Streams. These streams are referred to as the primary and the residual of the split. The original stream can still be read from in the same manner as before. Both of the returned streams can also be read from, and the total rows returned by both child streams will be the same as the rows read from the original stream.
Moreover, the two child streams will be allocated back to back in the original Stream. Concretely, it is guaranteed that for streams Original, Primary, and Residual, that Original[0-j] = Primary[0-j] and Original[j-n] = Residual[0-m] once the streams have been read to completion.
This method is guaranteed to be idempotent.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.SplitReadStreamRequest request =
      Storage.SplitReadStreamRequest.newBuilder()
          .setOriginalStream(Storage.Stream.newBuilder().build())
          .setFraction(-1653751294)
          .build();
  Storage.SplitReadStreamResponse response =
      baseBigQueryStorageClient.splitReadStream(request);
}
Parameters:
request - The request object containing all of the parameters for the API call.
Throws:
com.google.api.gax.rpc.ApiException - if the remote call fails
-
splitReadStreamCallable
public final com.google.api.gax.rpc.UnaryCallable<Storage.SplitReadStreamRequest,Storage.SplitReadStreamResponse> splitReadStreamCallable()
Splits a given read stream into two Streams. These streams are referred to as the primary and the residual of the split. The original stream can still be read from in the same manner as before. Both of the returned streams can also be read from, and the total rows returned by both child streams will be the same as the rows read from the original stream.
Moreover, the two child streams will be allocated back to back in the original Stream. Concretely, it is guaranteed that for streams Original, Primary, and Residual, that Original[0-j] = Primary[0-j] and Original[j-n] = Residual[0-m] once the streams have been read to completion.
This method is guaranteed to be idempotent.
Sample code:
// This snippet has been automatically generated and should be regarded as a code template only.
// It will require modifications to work:
// - It may require correct/in-range values for request initialization.
// - It may require specifying regional endpoints when creating the service client as shown in
// https://cloud.google.com/java/docs/setup#configure_endpoints_for_the_client_library
try (BaseBigQueryStorageClient baseBigQueryStorageClient = BaseBigQueryStorageClient.create()) {
  Storage.SplitReadStreamRequest request =
      Storage.SplitReadStreamRequest.newBuilder()
          .setOriginalStream(Storage.Stream.newBuilder().build())
          .setFraction(-1653751294)
          .build();
  ApiFuture<Storage.SplitReadStreamResponse> future =
      baseBigQueryStorageClient.splitReadStreamCallable().futureCall(request);
  // Do something.
  Storage.SplitReadStreamResponse response = future.get();
}
-
close
public final void close()
Specified by:
close in interface AutoCloseable
-
shutdown
public void shutdown()
Specified by:
shutdown in interface com.google.api.gax.core.BackgroundResource
-
isShutdown
public boolean isShutdown()
Specified by:
isShutdown in interface com.google.api.gax.core.BackgroundResource
-
isTerminated
public boolean isTerminated()
Specified by:
isTerminated in interface com.google.api.gax.core.BackgroundResource
-
shutdownNow
public void shutdownNow()
Specified by:
shutdownNow in interface com.google.api.gax.core.BackgroundResource
-
awaitTermination
public boolean awaitTermination(long duration, TimeUnit unit) throws InterruptedException
Specified by:
awaitTermination in interface com.google.api.gax.core.BackgroundResource
Throws:
InterruptedException
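The lifecycle methods above (shutdown, isShutdown, isTerminated, shutdownNow, awaitTermination) follow the same contract as java.util.concurrent.ExecutorService. A minimal sketch of the graceful-shutdown pattern, using an ExecutorService as a stand-in for a BackgroundResource (illustration only, not the generated client):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownSketch {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for any BackgroundResource-style object.
        ExecutorService resource = Executors.newFixedThreadPool(2);

        // Begin an orderly shutdown: no new work is accepted,
        // but work already submitted is allowed to finish.
        resource.shutdown();

        // Block up to the given duration for termination;
        // returns true once all work has completed.
        boolean terminated = resource.awaitTermination(5, TimeUnit.SECONDS);

        if (!terminated) {
            // The deadline passed; force shutdown.
            resource.shutdownNow();
        }
        System.out.println("isShutdown=" + resource.isShutdown());
    }
}
```

Note that close() on the client performs this cleanup for you, which is why the try-with-resources pattern in the samples above is usually sufficient.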
-