Put operation to the list of mutations
Delete operation to the list of mutations
Put.add(byte[], byte[], byte[]).
Put.add(byte[], byte[], long, byte[]).
Put.add(byte[], ByteBuffer, long, ByteBuffer).
HTable.batch(List, Object[]) instead.
HTableInterface.batch(List, Object[]) instead.
Batch.Call.call(Object)
 result.
HTableInterface.batch(List, Object[]), but with a callback.
HTable.batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)
 instead.
HTableInterface.batch(List, Object[]), but with a callback.
HTableInterface.batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)
 instead.
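The batch entries above all funnel into HTableInterface.batch(List, Object[]). A minimal sketch of submitting a mixed list of Row operations is shown below; it assumes an existing cluster, and the table/column names ("t1", "cf", "q") are illustrative, not taken from this index.

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HConnection connection = HConnectionManager.createConnection(conf);
    // Table, row and column names below are illustrative assumptions.
    HTableInterface table = connection.getTable("t1");
    try {
      List<Row> actions = new ArrayList<Row>();
      actions.add(new Put(Bytes.toBytes("row1"))
          .add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
      actions.add(new Get(Bytes.toBytes("row2")));
      Object[] results = new Object[actions.size()];
      // results[i] receives the outcome of actions[i] (a Result or an exception).
      table.batch(actions, results);
    } finally {
      table.close();
      connection.close();
    }
  }
}
```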
Service subclass for each table
 region spanning the range from the startKey row to endKey row (inclusive), all
 the invocations to the same region server will be batched into one call.
Service subclass for each table
 region spanning the range from the startKey row to endKey row (inclusive), all
 the invocations to the same region server will be batched into one call.
Service subclass for each table
 region spanning the range from the startKey row to endKey row (inclusive), all
 the invocations to the same region server will be batched into one call.
Service subclass for each table
 region spanning the range from the startKey row to endKey row (inclusive), all
 the invocations to the same region server will be batched into one call.
Bytes.compareTo(byte[], byte[]).
RpcCallback implementation providing a Future-like BlockingRpcCallback.get() method, which
 will block until the instance's BlockingRpcCallback.run(Object) method has been called.
hbase:meta.
sn.
tableName
true to enable metrics collection of client requests.
ClientScanner.ClientScanner(Configuration, Scan, TableName)
Scan's start row may be changed.
ClientScanner.ClientScanner(Configuration, Scan, TableName, HConnection)
ClientScanner.ClientScanner(Configuration, Scan, TableName, HConnection, RpcRetryingCallerFactory, RpcControllerFactory) instead
Scan's start row may be changed.
Scan's start row may be changed.
Scan's start row may be changed.
HTablePool.closeTablePool(String).
ZKConfig.
DeserializationException to a more palatable KeeperException.
Services.
RpcChannel instance connected to the active master.
RpcChannel instance
 connected to the passed region server.
RpcChannel instance connected to the
 table region containing the specified row.
Service subclass for each table
 region spanning the range from the startKey row to endKey row (inclusive),
 and invokes the passed Batch.Call.call(T)
 method with each Service
 instance.
Service subclass for each table
 region spanning the range from the startKey row to endKey row (inclusive),
 and invokes the passed Batch.Call.call(T)
 method with each Service instance.
RpcChannel instance connected to the
 table region containing the specified row.
Service subclass for each table
 region spanning the range from the startKey row to endKey row (inclusive),
 and invokes the passed Batch.Call.call(T)
 method with each Service
 instance.
Service subclass for each table
 region spanning the range from the startKey row to endKey row (inclusive),
 and invokes the passed Batch.Call.call(T)
 method with each Service instance.
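The coprocessorService entries above describe creating one Service stub per table region between startKey and endKey and invoking Batch.Call.call(T) against each stub, with per-region-server batching where noted. A minimal sketch of the pattern follows; CountProtos and its CountService/CountRequest/CountResponse types are hypothetical protobuf-generated placeholders, not classes shipped with HBase.

```java
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.coprocessor.Batch;
import org.apache.hadoop.hbase.ipc.BlockingRpcCallback;

// CountProtos below stands in for a protobuf-generated coprocessor Service;
// it is a hypothetical placeholder, not a class shipped with HBase.
public class CoprocessorServiceSketch {
  public static Map<byte[], Long> countPerRegion(HTable table) throws Throwable {
    // One CountService stub is created per region in the [startKey, endKey] range
    // (null/null means every region); call() runs once for each stub.
    return table.coprocessorService(CountProtos.CountService.class, null, null,
        new Batch.Call<CountProtos.CountService, Long>() {
          @Override
          public Long call(CountProtos.CountService stub) throws IOException {
            BlockingRpcCallback<CountProtos.CountResponse> done =
                new BlockingRpcCallback<CountProtos.CountResponse>();
            // BlockingRpcCallback.get() blocks until run(Object) has been invoked.
            stub.getCount(null, CountProtos.CountRequest.getDefaultInstance(), done);
            return done.get().getCount();
          }
        });
  }
}
```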
conf instance.
conf instance.
conf instance.
conf instance.
RpcRetryingCaller
RemoteException.unwrapRemoteException() instead.
 In fact we should look into deprecating this whole class - St.Ack 2010929
HTableDescriptor.DURABILITY instead.
RpcControllerFactory to help override standard behavior of a PayloadCarryingRpcController.
RetryingCallable.
hbase:meta in ZooKeeper.
CompareFilter
Mutations
 Note that the items must be sorted in order of increasing durability.
Durability setting for the table.
HConnectable.connect(org.apache.hadoop.hbase.client.HConnection)
 implementation using a HConnection instance that lasts just for the
 duration of the invocation.
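A minimal sketch of the short-lived-connection pattern described in the HConnectable.connect entry above, assuming HConnectionManager.execute(HConnectable) as the entry point; the table-name lookup inside connect() is just an illustrative payload.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HConnectable;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;

public class HConnectableSketch {
  public static boolean isEnabled(final String tableName) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // The HConnection passed to connect() exists only for this invocation.
    return HConnectionManager.execute(new HConnectable<Boolean>(conf) {
      @Override
      public Boolean connect(HConnection connection) throws IOException {
        return connection.isTableEnabled(TableName.valueOf(tableName));
      }
    });
  }
}
```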
Filter that represents an ordered List of Filters
 which will be evaluated with a specified boolean operator FilterList.Operator.MUST_PASS_ALL
 (AND) or FilterList.Operator.MUST_PASS_ONE (OR).
Filters.
Filters.
Filters and an operator.
Filters and an operator.
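A minimal sketch of the FilterList usage described in the entries above, combining two filters under MUST_PASS_ALL and attaching the list to a Scan; the qualifier and value bytes are illustrative assumptions.

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.QualifierFilter;
import org.apache.hadoop.hbase.filter.ValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterListSketch {
  public static Scan buildScan() {
    // Both filters must pass: MUST_PASS_ALL behaves like AND, MUST_PASS_ONE like OR.
    FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL,
        new QualifierFilter(CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("q"))),
        new ValueFilter(CompareOp.GREATER_OR_EQUAL, new BinaryComparator(Bytes.toBytes("v"))));
    Scan scan = new Scan();
    scan.setFilter(filters);
    return scan;
  }
}
```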
Filter.filterKeyValue(Cell) calls.
FilterBase.filterKeyValue(Cell) can inherit this implementation that
 never filters a row.
HConnectionManager.HConnectionImplementation.refCount is.
Put operations.
Put operations.
hbase:meta, skipping regions from any
 tables in the specified set of disabled tables.
hbase:meta, skipping regions from any
 tables in the specified set of disabled tables.
hbase:meta.
hbase:meta.
hbase:meta table.
BlockingRpcCallback.run(Object) or null if a null value was
 passed.
Result.getColumnCells(byte[], byte[]) instead.
HColumnDescriptor of the column families
 of the table.
Result.getColumnLatestCell(byte[], byte[]) instead.
Result.getColumnLatestCell(byte[], int, int, byte[], int, int) instead.
Configuration object used by this instance.
Configuration object used by this instance.
HColumnDescriptor.configuration map.
HTableDescriptor.configuration map.
null if no exception
 was thrown.
HColumnDescriptor
 of all the column families of the table.
Mutation.getFamilyCellMap() instead.
HConstants.CATALOG_FAMILY and
 qualifier of the catalog table result.
Result.
MasterKeepAliveConnection to the active master
ServerName.
hbase:meta or null if location is
 not currently available.
PoolMap.PoolType.RoundRobin.
PoolMap.PoolType.RoundRobin or PoolMap.PoolType.ThreadLocal,
 otherwise default to the former.
SaslUtil.QualityOfProtection
 corresponding to the given stringQop value.
hbase:meta for passed table.
RpcClient.setRpcTimeout(int) and the given
 default timeout.
Scan
 object.
Scan
 object.
Scan.setCaching(int) and Scan.getCaching()
ClusterStatus.getServers()
ServerName from catalog table Result.
table descriptor for this table.
table descriptor for this table.
ClientScanner.getTable()
ColumnInterpreter.getValue(byte[], byte[], KeyValue), this method should be made abstract
ByteBuffer.
ByteBuffer.
HTableDescriptor.values map.
Writable.readFields(java.io.DataInput).
Writable.readFields(java.io.DataInput).
Mutation.getDurability() instead.
AccessControlClient.grant(Configuration, TableName, String, byte[], byte[], Permission.Action...)  instead.
HColumnDescriptor.HColumnDescriptor(String) and setters
HColumnDescriptor.HColumnDescriptor(String) and setters
HColumnDescriptor.HColumnDescriptor(String) and setters
HConnection instance based on the given Configuration.
HConnections.
-ROOT- or hbase:meta, if the table is read only, the maximum size of the memstore, when the region split should occur, coprocessors associated with it etc...
hbase:meta and -ROOT-.
hbase:meta and -ROOT-.
HConnectionManager.createConnection(Configuration).
HConnection.getTable(String).
HTableInterface.incrementColumnValue(byte[], byte[], byte[], long, Durability)
HTable.incrementColumnValue(byte[], byte[], byte[], long, Durability)
HTableInterface.incrementColumnValue(byte[], byte[], byte[], long, Durability)
HTableInterface.incrementColumnValue(byte[], byte[], byte[], long, Durability)
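A minimal sketch of the Durability-based incrementColumnValue overload referenced above; the table, row and column names are illustrative assumptions.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class IncrementSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "counters"); // table name is an assumption
    try {
      // Atomically add 1 to counters:row1, cf:hits, syncing the WAL before returning.
      long newValue = table.incrementColumnValue(Bytes.toBytes("row1"),
          Bytes.toBytes("cf"), Bytes.toBytes("hits"), 1L, Durability.SYNC_WAL);
      System.out.println("counter is now " + newValue);
    } finally {
      table.close();
    }
  }
}
```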
 hbase:meta  or  -ROOT- 
ZooKeeperProtos.Table.State#DISABLED.
ZooKeeperProtos.Table.State#DISABLING
 of ZooKeeperProtos.Table.State#DISABLED.
ZooKeeperProtos.Table.State#ENABLED.
 hbase:meta region.
hbase:meta table
 -ROOT-  region.
hbase.security.authentication is set to
 kerberos.
HBaseAdmin.isTableEnabled(byte[])
HBaseAdmin.isTableEnabled(byte[])
HBaseAdmin.isTableEnabled(byte[])
HBaseAdmin.isTableEnabled(byte[])
HBaseAdmin.isTableEnabled(byte[])
HBaseAdmin.isTableEnabled(org.apache.hadoop.hbase.TableName tableName)
Result.listCells()
ReplicationAdmin.listPeerConfigs()
Results in the cache.
ByteBuffer.
ByteBuffer.
zookeeper reference.
Services
 against the active master.
true if this permission matches the given column family at least.
hbase:meta.
hbase:meta table scanning logic.
r to dest.
MetricsConnection.CallStats for clients to attach to RPCs.
Bytes.compareTo(byte[], byte[]).
TokenUtil.obtainAndCacheToken(HConnection,User)
TokenUtil.obtainToken(HConnection)
TokenUtil.obtainTokenForJob(HConnection,User,Job)
TokenUtil.obtainTokenForJob(HConnection,JobConf,User)
ResultScanner.next().
src/main/protobuf definition files.
ParseFilter
IOException.
ServerName from bytes
 gotten from a call to ServerName.getVersionedBytes().
PoolMap maps a key to a collection of values, the elements
 of which are managed by a pool.
ReusablePool represents a PoolMap.Pool that builds on the LinkedList class.
str
Callable.call() invocation.
ProtobufUtil.PB_MAGIC, to flag what
 follows as a protobuf in hbase.
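A minimal sketch of working with the PB_MAGIC prefix mentioned above, using ProtobufUtil's prefix helpers; the payload bytes are a placeholder.

```java
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.util.Bytes;

public class PbMagicSketch {
  public static void main(String[] args) {
    byte[] payload = Bytes.toBytes("serialized-protobuf-bytes"); // placeholder payload
    // Prepend ProtobufUtil.PB_MAGIC so readers can tell the bytes are a protobuf.
    byte[] flagged = ProtobufUtil.prependPBMagic(payload);
    if (ProtobufUtil.isPBMagicPrefix(flagged)) {
      // Skip the magic prefix before handing the remainder to a protobuf parser.
      int offset = ProtobufUtil.lengthOfPBMagic();
      System.out.println("payload length without magic: " + (flagged.length - offset));
    }
  }
}
```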
HTableInterface.batch(java.util.List<? extends org.apache.hadoop.hbase.client.Row>, java.lang.Object[]) instead
Row implementations.
HTableInterface.batchCallback(java.util.List<? extends org.apache.hadoop.hbase.client.Row>, java.lang.Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback) instead
HTable.processBatchCallback(java.util.List<? extends org.apache.hadoop.hbase.client.Row>, java.lang.Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback) instead
Result.rawCells()
off
 Send a ping if timeout on read.
HColumnDescriptor.parseFrom(byte[]) instead.
HTableDescriptor.parseFrom(byte[]) instead.
CompareFilter implementations, such
 as RowFilter, QualifierFilter, and ValueFilter, for
 filtering based on the value of a given column.
Services against a given table region.
Callable.call().
Services against a given region server.
RemoteException with some extra information.
HTableDescriptor.values map
HTableDescriptor.values map
HTableDescriptor.values map
HColumnDescriptor.configuration map.
HTableDescriptor.configuration map
Get or Scan query.
Result.rawCells().
Result.create(List) instead.
Result.create(List) instead.
Result with some statistics about the server/region status.
RetriesExhaustedException is thrown when we have more information about which rows were causing which exceptions on what servers.
Scan's start row may be changed.
AccessControlClient.revoke(Configuration, TableName, String, byte[], byte[], Permission.Action...) instead
clusterId with the default SocketFactory
clusterId with the default SocketFactory
PayloadCarryingRpcController
RpcRetryingCaller
null in the case of an error.
ScannerCallable.ScannerCallable(HConnection, TableName, Scan, ScanMetrics, PayloadCarryingRpcController)
HTableInterface.setAutoFlushTo(boolean) for all other cases.
clearBufferOnFail
clearBufferOnFail
Charset to use to convert the row key to a String.
HColumnDescriptor.configuration map.
HTableDescriptor.configuration map.
Durability setting for the table.
Service client.
Mutation.setFamilyCellMap(NavigableMap) instead.
HColumnDescriptor.setKeepDeletedCells(KeepDeletedCells)
master znode or into the backup
 subdirectory of backup masters; switch off the passed in znode
 path.
hbase:meta in ZooKeeper to the
 specified server address.
 -ROOT-  or  hbase:meta  region.
ReplicationAdmin.setPeerTableCFs(String, Map)
 -ROOT-  region.
Scan.setCaching(int)
HConnection.
HTableDescriptor.values map
HTableDescriptor.values map
Mutation.setDurability(Durability) instead.
Filter that checks a single column value, but does not emit the
 tested column.
RpcRetryingCaller that will update the per-region stats for the call on return, if stats are available.
Callable.call() throws an exception and we are going to retry; take action to make it so we succeed on next call (clear caches, do relookup of locations, etc.).
IOException.
HRegionInfo.toByteArray() when writing to a stream and you want to use
 the pb mergeDelimitedFrom (w/o the delimiter, pb reads to EOF which may not be what you want).
ResultStatsUtil
hbase:meta is deployed and accessible.
hbase:meta if available and waits
 for up to the specified timeout if not immediately available.
hbase:meta.
WhileMatchFilter.filterAllRemaining() as soon
 as the wrapped filter's Filter.filterRowKey(byte[], int, int),
 Filter.filterKeyValue(org.apache.hadoop.hbase.Cell),
 Filter.filterRow() or
 Filter.filterAllRemaining() methods
 return true.
HColumnDescriptor.toByteArray() instead.
HRegionInfo.toByteArray() and
 HRegionInfo.toDelimitedByteArray()
MessageLite.toByteArray() instead.