@InterfaceAudience.Public @InterfaceStability.Stable public class HTable extends Object implements HTableInterface
Used to communicate with a single HBase table. An implementation of HTableInterface. Instances of this class can be constructed directly, but it is encouraged that users get instances via HConnection and HConnectionManager. See the HConnectionManager class comment for an example.
This class is not thread safe for reads nor writes.
In case of writes (Put, Delete), the underlying write buffer can be corrupted if multiple threads contend over a single HTable instance.
In case of reads, some fields used by a Scan are shared among all threads, so the HTable implementation cannot guarantee safety even for a concurrent Get.
Instances of HTable passed the same Configuration instance will share connections to servers out on the cluster and to the zookeeper ensemble, as well as caches of region locations. This is usually a *good* thing and it is recommended to reuse the same configuration object for all your tables. This happens because they will all share the same underlying HConnection instance. See HConnectionManager for more on how this mechanism works.
HConnection will read most of the configuration it needs from the passed Configuration on initial construction. Thereafter, for settings such as hbase.client.pause, hbase.client.retries.number, and hbase.client.rpc.maxattempts, updating their values in the passed Configuration subsequent to HConnection construction will go unnoticed. To run with changed values, make a new HTable passing a new Configuration instance that has the new configuration.
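For example, a minimal sketch (the setting values and table name are illustrative):

```java
// An HConnection created earlier will not see these changes; build a new
// Configuration and a new HTable so the new values take effect.
Configuration conf = HBaseConfiguration.create();
conf.setInt("hbase.client.retries.number", 3);
conf.setInt("hbase.client.pause", 200);        // pause between retries, in ms
HTable table = new HTable(conf, "my_table");
```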
Note that this class implements the Closeable interface. When an HTable instance is no longer required, it *should* be closed in order to ensure that the underlying resources are promptly released. Please note that the close method can throw java.io.IOException, which must be handled.
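A minimal usage sketch (table, family, and qualifier names are illustrative, not part of the API):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HTableLifecycleExample {
  public static void main(String[] args) throws IOException {
    // Reusing one Configuration lets all HTable instances share the same
    // underlying HConnection, zookeeper connection, and region location cache.
    Configuration conf = HBaseConfiguration.create();

    // HTable is Closeable, so try-with-resources guarantees close() runs
    // and the underlying resources are promptly released.
    try (HTable table = new HTable(conf, TableName.valueOf("my_table"))) {
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value1"));
      table.put(put);
    } // close() may throw IOException; here it propagates from main
  }
}
```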
Modifier and Type | Field and Description |
---|---|
protected org.apache.hadoop.hbase.client.AsyncProcess<Object> | ap - The Async process for puts with autoflush set to false, or multiputs |
protected HConnection | connection |
protected long | currentWriteBufferSize |
protected int | scannerCaching |
protected List<Row> | writeAsyncBuffer |
Modifier | Constructor and Description |
---|---|
protected |
HTable()
For internal testing.
|
|
HTable(byte[] tableName,
HConnection connection,
ExecutorService pool)
Creates an object to access a HBase table.
|
|
HTable(org.apache.hadoop.conf.Configuration conf,
byte[] tableName)
Creates an object to access a HBase table.
|
|
HTable(org.apache.hadoop.conf.Configuration conf,
byte[] tableName,
ExecutorService pool)
Creates an object to access a HBase table.
|
|
HTable(org.apache.hadoop.conf.Configuration conf,
String tableName)
Creates an object to access a HBase table.
|
|
HTable(org.apache.hadoop.conf.Configuration conf,
TableName tableName)
Creates an object to access a HBase table.
|
|
HTable(org.apache.hadoop.conf.Configuration conf,
TableName tableName,
ExecutorService pool)
Creates an object to access a HBase table.
|
|
HTable(TableName tableName,
HConnection connection)
Creates an object to access a HBase table.
|
|
HTable(TableName tableName,
HConnection connection,
ExecutorService pool)
Creates an object to access a HBase table.
|
|
HTable(TableName tableName,
HConnection connection,
TableConfiguration tableConfig,
RpcRetryingCallerFactory rpcCallerFactory,
RpcControllerFactory rpcControllerFactory,
ExecutorService pool)
Creates an object to access a HBase table.
|
Modifier and Type | Method and Description |
---|---|
Result |
append(Append append)
Appends values to one or more columns within a single row.
|
Object[] |
batch(List<? extends Row> actions)
Deprecated.
If any exception is thrown by one of the actions, there is no way to
retrieve the partially executed results. Use
batch(List, Object[]) instead. |
void |
batch(List<? extends Row> actions,
Object[] results)
Method that does a batch call on Deletes, Gets, Puts, Increments, Appends and RowMutations.
|
<R> Object[] |
batchCallback(List<? extends Row> actions,
Batch.Callback<R> callback)
Deprecated.
If any exception is thrown by one of the actions, there is no way to
retrieve the partially executed results. Use
batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)
instead. |
<R> void |
batchCallback(List<? extends Row> actions,
Object[] results,
Batch.Callback<R> callback)
Same as
HTableInterface.batch(List, Object[]) , but with a callback. |
<R extends Message> |
batchCoprocessorService(Descriptors.MethodDescriptor methodDescriptor,
Message request,
byte[] startKey,
byte[] endKey,
R responsePrototype)
Creates an instance of the given
Service subclass for each table
region spanning the range from the startKey row to endKey row (inclusive), all
the invocations to the same region server will be batched into one call. |
<R extends Message> |
batchCoprocessorService(Descriptors.MethodDescriptor methodDescriptor,
Message request,
byte[] startKey,
byte[] endKey,
R responsePrototype,
Batch.Callback<R> callback)
Creates an instance of the given
Service subclass for each table
region spanning the range from the startKey row to endKey row (inclusive), all
the invocations to the same region server will be batched into one call. |
boolean |
checkAndDelete(byte[] row,
byte[] family,
byte[] qualifier,
byte[] value,
Delete delete)
Atomically checks if a row/family/qualifier value matches the expected
value.
|
boolean |
checkAndMutate(byte[] row,
byte[] family,
byte[] qualifier,
CompareFilter.CompareOp compareOp,
byte[] value,
RowMutations rm)
Atomically checks if a row/family/qualifier value matches the expected value.
If it does, it performs the row mutations.
|
boolean |
checkAndPut(byte[] row,
byte[] family,
byte[] qualifier,
byte[] value,
Put put)
Atomically checks if a row/family/qualifier value matches the expected
value.
|
void |
clearRegionCache()
Explicitly clears the region cache to fetch the latest value from META.
|
void |
close()
Releases any resources held or pending changes in internal buffers.
|
CoprocessorRpcChannel |
coprocessorService(byte[] row)
Creates and returns a
RpcChannel instance connected to the
table region containing the specified row. |
<T extends Service,R> |
coprocessorService(Class<T> service,
byte[] startKey,
byte[] endKey,
Batch.Call<T,R> callable)
Creates an instance of the given
Service subclass for each table
region spanning the range from the startKey row to endKey row (inclusive),
and invokes the passed Batch.Call.call(T)
method with each Service
instance. |
<T extends Service,R> |
coprocessorService(Class<T> service,
byte[] startKey,
byte[] endKey,
Batch.Call<T,R> callable,
Batch.Callback<R> callback)
Creates an instance of the given
Service subclass for each table
region spanning the range from the startKey row to endKey row (inclusive),
and invokes the passed Batch.Call.call(T)
method with each Service instance. |
void |
delete(Delete delete)
Deletes the specified cells/row.
|
void |
delete(List<Delete> deletes)
Deletes the specified cells/rows in bulk.
|
boolean |
exists(Get get)
Test for the existence of columns in the table, as specified by the Get.
|
Boolean[] |
exists(List<Get> gets)
Test for the existence of columns in the table, as specified by the Gets.
|
void |
flushCommits()
Executes all the buffered
Put operations. |
Result |
get(Get get)
Extracts certain cells from a given row.
|
Result[] |
get(List<Get> gets)
Extracts certain cells from the given rows, in batch.
|
org.apache.hadoop.conf.Configuration |
getConfiguration()
Returns the
Configuration object used by this instance. |
HConnection |
getConnection()
Deprecated.
This method will be changed from public to package protected.
|
static ThreadPoolExecutor |
getDefaultExecutor(org.apache.hadoop.conf.Configuration conf) |
byte[][] |
getEndKeys()
Gets the ending row key for every region in the currently open table.
|
static int |
getMaxKeyValueSize(org.apache.hadoop.conf.Configuration conf) |
TableName |
getName()
Gets the fully qualified table name instance of this table.
|
int |
getOperationTimeout() |
static boolean |
getRegionCachePrefetch(byte[] tableName)
Check whether region cache prefetch is enabled or not for the table.
|
static boolean |
getRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf,
byte[] tableName)
Check whether region cache prefetch is enabled or not for the table.
|
static boolean |
getRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf,
TableName tableName) |
static boolean |
getRegionCachePrefetch(TableName tableName) |
HRegionLocation |
getRegionLocation(byte[] row)
Finds the region on which the given row is being served.
|
HRegionLocation |
getRegionLocation(byte[] row,
boolean reload)
Finds the region on which the given row is being served.
|
HRegionLocation |
getRegionLocation(String row)
Find region location hosting passed row using cached info
|
NavigableMap<HRegionInfo,ServerName> |
getRegionLocations()
Gets all the regions and their address for this table.
|
List<HRegionLocation> |
getRegionsInRange(byte[] startKey,
byte[] endKey)
Get the corresponding regions for an arbitrary range of keys.
|
List<HRegionLocation> |
getRegionsInRange(byte[] startKey,
byte[] endKey,
boolean reload)
Get the corresponding regions for an arbitrary range of keys.
|
Result |
getRowOrBefore(byte[] row,
byte[] family)
Return the row that matches row exactly,
or the one that immediately precedes it.
|
ResultScanner |
getScanner(byte[] family)
Gets a scanner on the current table for the given family.
|
ResultScanner |
getScanner(byte[] family,
byte[] qualifier)
Gets a scanner on the current table for the given family and qualifier.
|
ResultScanner |
getScanner(Scan scan)
Returns a scanner on the current table as specified by the
Scan
object. |
int |
getScannerCaching()
Deprecated.
|
Pair<byte[][],byte[][]> |
getStartEndKeys()
Gets the starting and ending row keys for every region in the currently
open table.
|
byte[][] |
getStartKeys()
Gets the starting row key for every region in the currently open table.
|
HTableDescriptor |
getTableDescriptor()
Gets the
table descriptor for this table. |
byte[] |
getTableName()
Gets the name of this table.
|
List<Row> |
getWriteBuffer()
Deprecated.
since 0.96. This is an internal buffer that should not be read from nor written to.
|
long |
getWriteBufferSize()
Returns the maximum size in bytes of the write buffer for this HTable.
|
Result |
increment(Increment increment)
Increments one or more columns within a single row.
|
long |
incrementColumnValue(byte[] row,
byte[] family,
byte[] qualifier,
long amount)
|
long |
incrementColumnValue(byte[] row,
byte[] family,
byte[] qualifier,
long amount,
boolean writeToWAL)
Deprecated.
|
long |
incrementColumnValue(byte[] row,
byte[] family,
byte[] qualifier,
long amount,
Durability durability)
Atomically increments a column value.
|
boolean |
isAutoFlush()
Tells whether or not 'auto-flush' is turned on.
|
static boolean |
isTableEnabled(byte[] tableName)
Deprecated.
|
static boolean |
isTableEnabled(org.apache.hadoop.conf.Configuration conf,
byte[] tableName)
Deprecated.
|
static boolean |
isTableEnabled(org.apache.hadoop.conf.Configuration conf,
String tableName)
Deprecated.
|
static boolean |
isTableEnabled(org.apache.hadoop.conf.Configuration conf,
TableName tableName)
|
static boolean |
isTableEnabled(String tableName)
Deprecated.
|
static boolean |
isTableEnabled(TableName tableName)
Deprecated.
|
static void |
main(String[] args)
Run basic test.
|
void |
mutateRow(RowMutations rm)
Performs multiple mutations atomically on a single row.
|
void |
processBatch(List<? extends Row> list,
Object[] results)
Parameterized batch processing, allowing varying return types for different
Row implementations. |
<R> void |
processBatchCallback(List<? extends Row> list,
Object[] results,
Batch.Callback<R> callback)
Process a mixed batch of Get, Put and Delete actions.
|
void |
put(List<Put> puts)
Puts some data in the table, in batch.
|
void |
put(Put put)
Puts some data in the table.
|
void |
setAutoFlush(boolean autoFlush)
Deprecated.
|
void |
setAutoFlush(boolean autoFlush,
boolean clearBufferOnFail)
Turns 'auto-flush' on or off.
|
void |
setAutoFlushTo(boolean autoFlush)
Set the autoFlush behavior, without changing the value of
clearBufferOnFail |
void |
setOperationTimeout(int operationTimeout) |
static void |
setRegionCachePrefetch(byte[] tableName,
boolean enable)
Enable or disable region cache prefetch for the table.
|
static void |
setRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf,
byte[] tableName,
boolean enable)
Enable or disable region cache prefetch for the table.
|
static void |
setRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf,
TableName tableName,
boolean enable) |
static void |
setRegionCachePrefetch(TableName tableName,
boolean enable) |
void |
setScannerCaching(int scannerCaching)
Deprecated.
|
void |
setWriteBufferSize(long writeBufferSize)
Sets the size of the buffer in bytes.
|
String |
toString() |
void |
validatePut(Put put) |
static void |
validatePut(Put put,
int maxKeyValueSize) |
protected HConnection connection
protected long currentWriteBufferSize
protected int scannerCaching
protected org.apache.hadoop.hbase.client.AsyncProcess<Object> ap
public HTable(org.apache.hadoop.conf.Configuration conf, String tableName) throws IOException
Creates an object to access an HBase table. Shares resources (connections, zookeeper ensemble, region location cache) with other HTable instances created with the same conf instance, and uses an already-populated region cache if one is available, populated by any other HTable instances sharing this conf instance. Recommended.
Parameters: conf - Configuration object to use. tableName - Name of the table.
Throws: IOException - if a remote or network exception occurs.

public HTable(org.apache.hadoop.conf.Configuration conf, byte[] tableName) throws IOException
Creates an object to access an HBase table. Shares resources with other HTable instances created with the same conf instance, and uses an already-populated region cache if one is available. Recommended.
Parameters: conf - Configuration object to use. tableName - Name of the table.
Throws: IOException - if a remote or network exception occurs.

public HTable(org.apache.hadoop.conf.Configuration conf, TableName tableName) throws IOException
Creates an object to access an HBase table. Shares resources with other HTable instances created with the same conf instance, and uses an already-populated region cache if one is available. Recommended.
Parameters: conf - Configuration object to use. tableName - table name pojo.
Throws: IOException - if a remote or network exception occurs.

public HTable(TableName tableName, HConnection connection) throws IOException
Creates an object to access an HBase table that shares resources with other HTable instances created with the same connection instance. Use this constructor when the HConnection instance is externally managed.
Parameters: tableName - Name of the table. connection - HConnection to be used.
Throws: IOException - if a remote or network exception occurs.

public HTable(org.apache.hadoop.conf.Configuration conf, byte[] tableName, ExecutorService pool) throws IOException
Creates an object to access an HBase table. Shares resources with other HTable instances created with the same conf instance, and uses an already-populated region cache if one is available. Use this constructor when the ExecutorService is externally managed.
Parameters: conf - Configuration object to use. tableName - Name of the table. pool - ExecutorService to be used.
Throws: IOException - if a remote or network exception occurs.

public HTable(org.apache.hadoop.conf.Configuration conf, TableName tableName, ExecutorService pool) throws IOException
Creates an object to access an HBase table. Shares resources with other HTable instances created with the same conf instance, and uses an already-populated region cache if one is available. Use this constructor when the ExecutorService is externally managed.
Parameters: conf - Configuration object to use. tableName - Name of the table. pool - ExecutorService to be used.
Throws: IOException - if a remote or network exception occurs.

public HTable(byte[] tableName, HConnection connection, ExecutorService pool) throws IOException
Creates an object to access an HBase table that shares resources with other HTable instances created with the same connection instance. Use this constructor when the ExecutorService and HConnection instance are externally managed.
Parameters: tableName - Name of the table. connection - HConnection to be used. pool - ExecutorService to be used.
Throws: IOException - if a remote or network exception occurs.

public HTable(TableName tableName, HConnection connection, ExecutorService pool) throws IOException
Creates an object to access an HBase table that shares resources with other HTable instances created with the same connection instance. Use this constructor when the ExecutorService and HConnection instance are externally managed.
Parameters: tableName - Name of the table. connection - HConnection to be used. pool - ExecutorService to be used.
Throws: IOException - if a remote or network exception occurs.

public HTable(TableName tableName, HConnection connection, TableConfiguration tableConfig, RpcRetryingCallerFactory rpcCallerFactory, RpcControllerFactory rpcControllerFactory, ExecutorService pool) throws IOException
Creates an object to access an HBase table. Use this constructor when the ExecutorService and HConnection instance are externally managed.
Parameters: tableName - Name of the table. connection - HConnection to be used. tableConfig - table configuration. rpcCallerFactory - RPC caller factory. rpcControllerFactory - RPC controller factory. pool - ExecutorService to be used.
Throws: IOException - if a remote or network exception occurs.

protected HTable()
For internal testing.
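Putting the constructors that take externally managed resources together, a rough sketch (table name and pool size are illustrative; the caller stays responsible for shutting down the pool and closing the connection):

```java
Configuration conf = HBaseConfiguration.create();
HConnection connection = HConnectionManager.createConnection(conf);
ExecutorService pool = Executors.newFixedThreadPool(8);
try {
  HTable table = new HTable(TableName.valueOf("my_table"), connection, pool);
  try {
    // ... use the table ...
  } finally {
    table.close();      // closes this table, not the shared pool or connection
  }
} finally {
  pool.shutdown();
  connection.close();
}
```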
public static ThreadPoolExecutor getDefaultExecutor(org.apache.hadoop.conf.Configuration conf)
public static int getMaxKeyValueSize(org.apache.hadoop.conf.Configuration conf)
public org.apache.hadoop.conf.Configuration getConfiguration()
Configuration
object used by this instance.
The reference returned is not a copy, so any change made to it will affect this instance.
getConfiguration
in interface HTableInterface
@Deprecated public static boolean isTableEnabled(String tableName) throws IOException
HBaseAdmin.isTableEnabled(byte[])
tableName
- Name of table to check.true
if table is online.IOException
- if a remote or network exception occurs@Deprecated public static boolean isTableEnabled(byte[] tableName) throws IOException
HBaseAdmin.isTableEnabled(byte[])
tableName
- Name of table to check.true
if table is online.IOException
- if a remote or network exception occurs@Deprecated public static boolean isTableEnabled(TableName tableName) throws IOException
HBaseAdmin.isTableEnabled(byte[])
tableName
- Name of table to check.true
if table is online.IOException
- if a remote or network exception occurs@Deprecated public static boolean isTableEnabled(org.apache.hadoop.conf.Configuration conf, String tableName) throws IOException
HBaseAdmin.isTableEnabled(byte[])
conf
- The Configuration object to use.tableName
- Name of table to check.true
if table is online.IOException
- if a remote or network exception occurs@Deprecated public static boolean isTableEnabled(org.apache.hadoop.conf.Configuration conf, byte[] tableName) throws IOException
HBaseAdmin.isTableEnabled(byte[])
conf
- The Configuration object to use.tableName
- Name of table to check.true
if table is online.IOException
- if a remote or network exception occurs@Deprecated public static boolean isTableEnabled(org.apache.hadoop.conf.Configuration conf, TableName tableName) throws IOException
conf
- The Configuration object to use.tableName
- Name of table to check.true
if table is online.IOException
- if a remote or network exception occurspublic HRegionLocation getRegionLocation(String row) throws IOException
row
- Row to find.IOException
- if a remote or network exception occurspublic HRegionLocation getRegionLocation(byte[] row) throws IOException
row
- Row to find.IOException
- if a remote or network exception occurspublic HRegionLocation getRegionLocation(byte[] row, boolean reload) throws IOException
row
- Row to find.reload
- true to reload information or false to use cached informationIOException
- if a remote or network exception occurspublic byte[] getTableName()
getTableName
in interface HTableInterface
public TableName getName()
HTableInterface
getName
in interface HTableInterface
@Deprecated public HConnection getConnection()
@Deprecated public int getScannerCaching()
The default value comes from hbase.client.scanner.caching
.
@Deprecated public List<Row> getWriteBuffer()
@Deprecated public void setScannerCaching(int scannerCaching)
Scan.setCaching(int)
This will override the value specified by
hbase.client.scanner.caching
.
Increasing this value will reduce the amount of work needed each time
next()
is called on a scanner, at the expense of memory use
(since more rows will need to be maintained in memory by the scanners).
scannerCaching
- the number of rows a scanner will fetch at once.public HTableDescriptor getTableDescriptor() throws IOException
table descriptor
for this table.getTableDescriptor
in interface HTableInterface
IOException
- if a remote or network exception occurs.public byte[][] getStartKeys() throws IOException
This is mainly useful for the MapReduce integration.
IOException
- if a remote or network exception occurspublic byte[][] getEndKeys() throws IOException
This is mainly useful for the MapReduce integration.
IOException
- if a remote or network exception occurspublic Pair<byte[][],byte[][]> getStartEndKeys() throws IOException
This is mainly useful for the MapReduce integration.
IOException
- if a remote or network exception occurspublic NavigableMap<HRegionInfo,ServerName> getRegionLocations() throws IOException
This is mainly useful for the MapReduce integration.
IOException
- if a remote or network exception occurspublic List<HRegionLocation> getRegionsInRange(byte[] startKey, byte[] endKey) throws IOException
startKey
- Starting row in range, inclusiveendKey
- Ending row in range, exclusiveIOException
- if a remote or network exception occurspublic List<HRegionLocation> getRegionsInRange(byte[] startKey, byte[] endKey, boolean reload) throws IOException
startKey
- Starting row in range, inclusiveendKey
- Ending row in range, exclusivereload
- true to reload information or false to use cached informationIOException
- if a remote or network exception occurspublic Result getRowOrBefore(byte[] row, byte[] family) throws IOException
getRowOrBefore
in interface HTableInterface
row
- A row key.family
- Column family to include in the Result
.IOException
- if a remote or network exception occurs.public ResultScanner getScanner(Scan scan) throws IOException
Scan
object.
Note that the passed Scan's start row and caching properties may be changed.
getScanner
in interface HTableInterface
scan
- A configured Scan
object.IOException
- if a remote or network exception occurs.
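For example, a minimal scan sketch against an already constructed HTable named table (row range, family, and qualifier are illustrative):

```java
Scan scan = new Scan(Bytes.toBytes("startRow"), Bytes.toBytes("stopRow"));
scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
scan.setCaching(100);                 // rows fetched per RPC round trip
ResultScanner scanner = table.getScanner(scan);
try {
  for (Result result : scanner) {
    byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
    // ... process value ...
  }
} finally {
  scanner.close();                    // always release the scanner
}
```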
public ResultScanner getScanner(byte[] family) throws IOException
getScanner
in interface HTableInterface
family
- The column family to scan.IOException
- if a remote or network exception occurs.public ResultScanner getScanner(byte[] family, byte[] qualifier) throws IOException
getScanner
in interface HTableInterface
family
- The column family to scan.qualifier
- The column qualifier to scan.IOException
- if a remote or network exception occurs.public Result get(Get get) throws IOException
get
in interface HTableInterface
get
- The object that specifies what data to fetch and from which row.
If the row specified doesn't exist, the Result instance returned won't contain any KeyValue, as indicated by Result.isEmpty().
IOException
- if a remote or network exception occurs.
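A minimal sketch of a single-row read, assuming an open HTable named table (row, family, and qualifier are illustrative):

```java
Get get = new Get(Bytes.toBytes("row1"));
get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
Result result = table.get(get);
if (!result.isEmpty()) {                               // row existed
  byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
  System.out.println(Bytes.toString(value));
}
```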
public Result[] get(List<Get> gets) throws IOException
get
in interface HTableInterface
gets
- The objects that specify what data to fetch and from which rows.Result
instance returned won't
contain any KeyValue
, as indicated by Result.isEmpty()
.
If there are any failures even after retries, there will be a null in
the results array for those Gets, AND an exception will be thrown.IOException
- if a remote or network exception occurs.public void batch(List<? extends Row> actions, Object[] results) throws InterruptedException, IOException
HTableInterface.batch(java.util.List<? extends org.apache.hadoop.hbase.client.Row>, java.lang.Object[])
call, you will not necessarily be
guaranteed that the Get returns what the Put had put.batch
in interface HTableInterface
actions
- list of Get, Put, Delete, Increment, Append, RowMutations objectsresults
- Empty Object[], same size as actions. Provides access to partial
results, in case an exception is thrown. A null in the result array means that
the call for that action failed, even after retriesIOException
InterruptedException
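A sketch of a mixed batch, assuming an open HTable named table (row keys and column names are illustrative):

```java
List<Row> actions = new ArrayList<Row>();
Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v1"));
actions.add(put);
actions.add(new Get(Bytes.toBytes("row2")));
actions.add(new Delete(Bytes.toBytes("row3")));

Object[] results = new Object[actions.size()];  // must be the same size as actions
try {
  table.batch(actions, results);
} catch (IOException e) {
  // results still holds per-action outcomes; a null entry means that
  // action failed even after retries
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
}
```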
public Object[] batch(List<? extends Row> actions) throws InterruptedException, IOException
batch(List, Object[])
instead.HTableInterface.batch(List, Object[])
, but returns an array of
results instead of using a results parameter reference.batch
in interface HTableInterface
actions
- list of Get, Put, Delete, Increment, Append, RowMutations objectsIOException
InterruptedException
public <R> void batchCallback(List<? extends Row> actions, Object[] results, Batch.Callback<R> callback) throws IOException, InterruptedException
HTableInterface.batch(List, Object[])
, but with a callback.batchCallback
in interface HTableInterface
IOException
InterruptedException
public <R> Object[] batchCallback(List<? extends Row> actions, Batch.Callback<R> callback) throws IOException, InterruptedException
batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)
instead.HTableInterface.batch(List)
, but with a callback.batchCallback
in interface HTableInterface
IOException
InterruptedException
public void delete(Delete delete) throws IOException
delete
in interface HTableInterface
delete
- The object that specifies what to delete.IOException
- if a remote or network exception occurs.public void delete(List<Delete> deletes) throws IOException
delete
in interface HTableInterface
deletes
- List of things to delete. List gets modified by this
method (in particular it gets re-ordered, so the order in which the elements
are inserted in the list gives no guarantee as to the order in which the
Delete
s are executed).IOException
- if a remote or network exception occurs. In that case
the deletes
argument will contain the Delete
instances
that have not been successfully applied.
public void put(Put put) throws InterruptedIOException, RetriesExhaustedWithDetailsException
If isAutoFlush
is false, the update is buffered
until the internal buffer is full.
put
in interface HTableInterface
put
- The data to put.InterruptedIOException
RetriesExhaustedWithDetailsException
public void put(List<Put> puts) throws InterruptedIOException, RetriesExhaustedWithDetailsException
If isAutoFlush
is false, the update is buffered
until the internal buffer is full.
This can be used for group commit, or for submitting user defined batches. The writeBuffer will be periodically inspected while the List is processed, so depending on the List size the writeBuffer may flush not at all, or more than once.
put
in interface HTableInterface
puts
- The list of mutations to apply. The batch put is done by
aggregating the iteration of the Puts over the write buffer
at the client-side for a single RPC call.InterruptedIOException
RetriesExhaustedWithDetailsException
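A group-commit sketch, assuming an open HTable named table (row keys, column names, and the batch size are illustrative):

```java
List<Put> puts = new ArrayList<Put>();
for (int i = 0; i < 1000; i++) {
  Put put = new Put(Bytes.toBytes("row-" + i));
  put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value-" + i));
  puts.add(put);
}
table.put(puts);        // the write buffer may flush zero or more times here
table.flushCommits();   // barrier: push anything still sitting in the buffer
```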
public void mutateRow(RowMutations rm) throws IOException
mutateRow
in interface HTableInterface
rm
- object that specifies the set of mutations to perform atomically
IOException
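A sketch of an atomic multi-mutation on one row, assuming an open HTable named table (row and column names are illustrative):

```java
byte[] row = Bytes.toBytes("row1");
RowMutations mutations = new RowMutations(row);

Put put = new Put(row);
put.add(Bytes.toBytes("cf"), Bytes.toBytes("state"), Bytes.toBytes("done"));
Delete delete = new Delete(row);
delete.deleteColumns(Bytes.toBytes("cf"), Bytes.toBytes("pending"));

mutations.add(put);           // both mutations target the same row
mutations.add(delete);
table.mutateRow(mutations);   // applied atomically on the region server
```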
public Result append(Append append) throws IOException
This operation does not appear atomic to readers. Appends are done under a single row lock, so write operations to a row are synchronized, but readers do not take row locks so get and scan operations can see this operation partially completed.
append
in interface HTableInterface
append
- object that specifies the columns and amounts to be used
for the increment operationsIOException
- epublic Result increment(Increment increment) throws IOException
This operation does not appear atomic to readers. Increments are done under a single row lock, so write operations to a row are synchronized, but readers do not take row locks so get and scan operations can see this operation partially completed.
increment
in interface HTableInterface
increment
- object that specifies the columns and amounts to be used
for the increment operationsIOException
- epublic long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount) throws IOException
HTableInterface.incrementColumnValue(byte[], byte[], byte[], long, Durability)
The Durability
is defaulted to Durability.SYNC_WAL
.
incrementColumnValue
in interface HTableInterface
row
- The row that contains the cell to increment.family
- The column family of the cell to increment.qualifier
- The column qualifier of the cell to increment.amount
- The amount to increment the cell with (or decrement, if the
amount is negative).IOException
- if a remote or network exception occurs.@Deprecated public long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount, boolean writeToWAL) throws IOException
incrementColumnValue(byte[], byte[], byte[], long, Durability)
incrementColumnValue
in interface HTableInterface
IOException
public long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, long amount, Durability durability) throws IOException
amount
and
written to the specified column.
Setting durability to Durability.SKIP_WAL
means that in a fail
scenario you will lose any increments that have not been flushed.
incrementColumnValue
in interface HTableInterface
row
- The row that contains the cell to increment.family
- The column family of the cell to increment.qualifier
- The column qualifier of the cell to increment.amount
- The amount to increment the cell with (or decrement, if the
amount is negative).durability
- The persistence guarantee for this increment.
IOException
- if a remote or network exception occurs.
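A counter-update sketch, assuming an open HTable named table (row and column names are illustrative):

```java
long newCount = table.incrementColumnValue(
    Bytes.toBytes("row1"),
    Bytes.toBytes("cf"),
    Bytes.toBytes("hits"),
    1L,                        // amount; negative values decrement
    Durability.SYNC_WAL);      // SKIP_WAL risks losing unflushed increments
```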
public boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier, byte[] value, Put put) throws IOException
checkAndPut
in interface HTableInterface
row
- to checkfamily
- column family to checkqualifier
- column qualifier to checkvalue
- the expected valueput
- data to put if check succeeds.
IOException
- e
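A compare-and-set sketch, assuming an open HTable named table (row, column names, and values are illustrative):

```java
Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("cf"), Bytes.toBytes("state"), Bytes.toBytes("shipped"));

boolean applied = table.checkAndPut(
    Bytes.toBytes("row1"),
    Bytes.toBytes("cf"),
    Bytes.toBytes("state"),
    Bytes.toBytes("pending"),   // expected current value; put applies only on match
    put);
```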
public boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier, byte[] value, Delete delete) throws IOException
checkAndDelete
in interface HTableInterface
row
- to checkfamily
- column family to checkqualifier
- column qualifier to checkvalue
- the expected valuedelete
- data to delete if check succeedsIOException
- epublic boolean checkAndMutate(byte[] row, byte[] family, byte[] qualifier, CompareFilter.CompareOp compareOp, byte[] value, RowMutations rm) throws IOException
checkAndMutate
in interface HTableInterface
row
- to checkfamily
- column family to checkqualifier
- column qualifier to checkcompareOp
- the comparison operatorvalue
- the expected valuerm
- mutations to perform if check succeedsIOException
- epublic boolean exists(Get get) throws IOException
This will return true if the Get matches one or more keys, false if not.
This is a server-side call so it prevents any data from being transferred to the client.
exists
in interface HTableInterface
get
- the GetIOException
- epublic Boolean[] exists(List<Get> gets) throws IOException
This will return an array of booleans. Each value will be true if the related Get matches one or more keys, false if not.
This is a server-side call so it prevents any data from being transferred to the client.
exists
in interface HTableInterface
gets
- the GetsIOException
- epublic void flushCommits() throws InterruptedIOException, RetriesExhaustedWithDetailsException
Put
operations.
This method gets called once automatically for every Put
or batch
of Put
s (when put(List
is used) when
HTableInterface.isAutoFlush()
is true
.
flushCommits
in interface HTableInterface
InterruptedIOException
RetriesExhaustedWithDetailsException
public <R> void processBatchCallback(List<? extends Row> list, Object[] results, Batch.Callback<R> callback) throws IOException, InterruptedException
list
- The collection of actions.results
- An empty array, same size as list. If an exception is thrown,
you can test here for partial results, and to determine which actions
processed successfully.IOException
- if there are problems talking to META. Per-item
exceptions are stored in the results array.InterruptedException
public void processBatch(List<? extends Row> list, Object[] results) throws IOException, InterruptedException
Row
implementations.IOException
InterruptedException
public void close() throws IOException
HTableInterface
close
in interface Closeable
close
in interface AutoCloseable
close
in interface HTableInterface
IOException
- if a remote or network exception occurs.public void validatePut(Put put) throws IllegalArgumentException
IllegalArgumentException
public static void validatePut(Put put, int maxKeyValueSize) throws IllegalArgumentException
IllegalArgumentException
public boolean isAutoFlush()
isAutoFlush
in interface HTableInterface
true
if 'auto-flush' is enabled (default), meaning
Put
operations don't get buffered/delayed and are immediately
executed.@Deprecated public void setAutoFlush(boolean autoFlush)
setAutoFlush
in interface HTableInterface
autoFlush
- Whether or not to enable 'auto-flush'.public void setAutoFlushTo(boolean autoFlush)
clearBufferOnFail
setAutoFlushTo
in interface HTableInterface
public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail)
When enabled (default), Put
operations don't get buffered/delayed
and are immediately executed. Failed operations are not retried. This is
slower but safer.
Turning off autoFlush
means that multiple Put
s will be
accepted before any RPC is actually sent to do the write operations. If the
application dies before pending writes get flushed to HBase, data will be
lost.
When you turn autoFlush off, you should also consider the clearBufferOnFail
option. By default, asynchronous Put
requests will be retried on failure until successful. However, this can
pollute the writeBuffer and slow down batching performance. Additionally,
you may want to issue a number of Put requests and call
HTableInterface.flushCommits()
as a barrier. In both use cases, consider setting
clearBufferOnFail to true to erase the buffer after HTableInterface.flushCommits()
has been called, regardless of success.
In other words, if you call setAutoFlush(false), HBase will retry N times for each flushCommits, including the last one when closing the table. This is NOT recommended; most of the time you want to call setAutoFlush(false, true).
setAutoFlush
in interface HTableInterface
autoFlush
- Whether or not to enable 'auto-flush'.clearBufferOnFail
- Whether to keep Put failures in the writeBuffer. If autoFlush is true, then
the value of this parameter is ignored and clearBufferOnFail is set to true.
Setting clearBufferOnFail to false is deprecated since 0.96.
See also HTableInterface.flushCommits().
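A buffered-write sketch following that recommendation, assuming an open HTable named table and an Iterable<Put> named somePuts (both placeholders):

```java
table.setAutoFlush(false, true);              // buffer writes, clear buffer on failure
table.setWriteBufferSize(4 * 1024 * 1024);    // optional: 4 MB client-side buffer
for (Put put : somePuts) {
  table.put(put);                             // buffered until the buffer fills
}
table.flushCommits();                         // barrier: send remaining buffered Puts
```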
public long getWriteBufferSize()
The default value comes from the configuration parameter
hbase.client.write.buffer
.
getWriteBufferSize
in interface HTableInterface
public void setWriteBufferSize(long writeBufferSize) throws IOException
If the new size is less than the current amount of data in the write buffer, the buffer gets flushed.
setWriteBufferSize
in interface HTableInterface
writeBufferSize
- The new write buffer size, in bytes.IOException
- if a remote or network exception occurs.public static void setRegionCachePrefetch(byte[] tableName, boolean enable) throws IOException
tableName
- name of table to configure.enable
- Set to true to enable region cache prefetch. Or set to
false to disable it.IOException
public static void setRegionCachePrefetch(TableName tableName, boolean enable) throws IOException
IOException
public static void setRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf, byte[] tableName, boolean enable) throws IOException
conf
- The Configuration object to use.tableName
- name of table to configure.enable
- Set to true to enable region cache prefetch. Or set to
false to disable it.IOException
public static void setRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf, TableName tableName, boolean enable) throws IOException
IOException
public static boolean getRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf, byte[] tableName) throws IOException
conf
- The Configuration object to use.tableName
- name of table to checkIOException
public static boolean getRegionCachePrefetch(org.apache.hadoop.conf.Configuration conf, TableName tableName) throws IOException
IOException
public static boolean getRegionCachePrefetch(byte[] tableName) throws IOException
tableName
- name of table to checkIOException
public static boolean getRegionCachePrefetch(TableName tableName) throws IOException
IOException
public void clearRegionCache()
public CoprocessorRpcChannel coprocessorService(byte[] row)
RpcChannel
instance connected to the
table region containing the specified row. The row given does not actually have
to exist. Whichever region would contain the row based on start and end keys will
be used. Note that the row
parameter is also not passed to the
coprocessor handler registered for this protocol, unless the row
is separately passed as an argument in the service request. The parameter
here is only used to locate the region used to handle the call.
The obtained RpcChannel
instance can be used to access a published
coprocessor Service
using standard protobuf service invocations:
    CoprocessorRpcChannel channel = myTable.coprocessorService(rowkey);
    MyService.BlockingInterface service = MyService.newBlockingStub(channel);
    MyCallRequest request = MyCallRequest.newBuilder() ... .build();
    MyCallResponse response = service.myCall(null, request);
coprocessorService
in interface HTableInterface
row
- The row key used to identify the remote region locationpublic <T extends Service,R> Map<byte[],R> coprocessorService(Class<T> service, byte[] startKey, byte[] endKey, Batch.Call<T,R> callable) throws ServiceException, Throwable
Service
subclass for each table
region spanning the range from the startKey
row to endKey
row (inclusive),
and invokes the passed Batch.Call.call(T)
method with each Service
instance.coprocessorService
in interface HTableInterface
T
- the Service
subclass to connect toR
- Return type for the callable
parameter's
Batch.Call.call(T)
methodservice
- the protocol buffer Service
implementation to callstartKey
- start region selection with region containing this row. If null
, the
selection will start with the first table region.endKey
- select regions up to and including the region containing this row.
If null
, selection will continue through the last table region.callable
- this instance's
Batch.Call.call(T)
method will be invoked once per table region, using the Service
instance connected to that region.ServiceException
Throwable
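A sketch of this callable variant; MyService, MyRequest, and MyResponse stand in for a generated protobuf coprocessor Service and its messages (they are not part of HBase), while the HTable and Batch plumbing is the API described above:

```java
Map<byte[], MyResponse> results = table.coprocessorService(
    MyService.class,
    null,                       // null startKey: start with the first region
    null,                       // null endKey: continue through the last region
    new Batch.Call<MyService, MyResponse>() {
      public MyResponse call(MyService service) throws IOException {
        BlockingRpcCallback<MyResponse> done = new BlockingRpcCallback<MyResponse>();
        service.myCall(null, MyRequest.getDefaultInstance(), done);
        return done.get();      // one response per region
      }
    });
// The returned map has one entry per contacted region, keyed by region name.
```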
public <T extends Service,R> void coprocessorService(Class<T> service, byte[] startKey, byte[] endKey, Batch.Call<T,R> callable, Batch.Callback<R> callback) throws ServiceException, Throwable
Service
subclass for each table
region spanning the range from the startKey
row to endKey
row (inclusive),
and invokes the passed Batch.Call.call(T)
method with each Service
instance.
The given
Batch.Callback.update(byte[], byte[], Object)
method will be called with the return value from each region's
Batch.Call.call(T)
invocation.
coprocessorService
in interface HTableInterface
T
- the Service
subclass to connect toR
- Return type for the callable
parameter's
Batch.Call.call(T)
methodservice
- the protocol buffer Service
implementation to callstartKey
- start region selection with region containing this row. If null
, the
selection will start with the first table region.endKey
- select regions up to and including the region containing this row.
If null
, selection will continue through the last table region.callable
- this instance's
Batch.Call.call(T)
method
will be invoked once per table region, using the Service
instance
connected to that region.ServiceException
Throwable
public void setOperationTimeout(int operationTimeout)
public int getOperationTimeout()
public static void main(String[] args) throws IOException
args
- Pass table name and row and will get the content.IOException
public <R extends Message> Map<byte[],R> batchCoprocessorService(Descriptors.MethodDescriptor methodDescriptor, Message request, byte[] startKey, byte[] endKey, R responsePrototype) throws ServiceException, Throwable
Service
subclass for each table
region spanning the range from the startKey
row to endKey
row (inclusive), all
the invocations to the same region server will be batched into one call. The coprocessor
service is invoked according to the service instance, method name and parameters.batchCoprocessorService
in interface HTableInterface
R
- the response type for the coprocessor Service methodmethodDescriptor
- the descriptor for the protobuf service method to call.request
- the method call parametersstartKey
- start region selection with region containing this row. If null
, the
selection will start with the first table region.endKey
- select regions up to and including the region containing this row. If null
,
selection will continue through the last table region.responsePrototype
- the proto type of the response of the method in Service.ServiceException
Throwable
public <R extends Message> void batchCoprocessorService(Descriptors.MethodDescriptor methodDescriptor, Message request, byte[] startKey, byte[] endKey, R responsePrototype, Batch.Callback<R> callback) throws ServiceException, Throwable
Service
subclass for each table
region spanning the range from the startKey
row to endKey
row (inclusive), all
the invocations to the same region server will be batched into one call. The coprocessor
service is invoked according to the service instance, method name and parameters.
The given
Batch.Callback.update(byte[],byte[],Object)
method will be called with the return value from each region's invocation.
batchCoprocessorService
in interface HTableInterface
R
- the response type for the coprocessor Service methodmethodDescriptor
- the descriptor for the protobuf service method to call.request
- the method call parametersstartKey
- start region selection with region containing this row. If null
, the
selection will start with the first table region.endKey
- select regions up to and including the region containing this row. If null
,
selection will continue through the last table region.responsePrototype
- the proto type of the response of the method in Service.callback
- callback to invoke with the response for each regionServiceException
Throwable