public class ReadOptions extends RocksObject

Fields: nativeHandle_, owningHandle_
Constructor | Description
---|---
ReadOptions() |
ReadOptions(boolean verifyChecksums, boolean fillCache) |
ReadOptions(ReadOptions other) | Copy constructor.
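A minimal sketch of the constructors above (the class name and inline comments are mine; it assumes the rocksdbjni dependency is on the classpath, since ReadOptions wraps a native object):

```java
import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;

public class ReadOptionsConstructors {
    public static void main(String[] args) {
        RocksDB.loadLibrary(); // ensure the native library is loaded
        // ReadOptions owns a native handle: close it when done.
        // try-with-resources calls close() automatically.
        try (ReadOptions ro = new ReadOptions(/* verifyChecksums */ true,
                                              /* fillCache */ false);
             ReadOptions copy = new ReadOptions(ro)) { // copy constructor
            if (!copy.verifyChecksums() || copy.fillCache()) {
                throw new AssertionError("copy should preserve the source options");
            }
        }
    }
}
```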
Modifier and Type | Method and Description
---|---
boolean | autoPrefixMode(): When true, by default use total_order_seek = true, and RocksDB can selectively enable prefix seek mode if it won't generate a different result from total_order_seek, based on the seek key and iterator upper bound.
boolean | backgroundPurgeOnIteratorCleanup(): If true, when PurgeObsoleteFile is called in CleanupIteratorState, we schedule a background job in the flush job queue and delete obsolete files in background.
long | deadline(): Deadline for completing an API call (Get/MultiGet/Seek/Next for now) in microseconds.
protected void | disposeInternal(long handle)
boolean | fillCache(): Fill the cache when loading the block-based sst formatted db.
boolean | ignoreRangeDeletions(): If true, keys deleted using the DeleteRange() API will be visible to readers until they are naturally deleted during compaction.
long | ioTimeout(): A timeout in microseconds to be passed to the underlying FileSystem for reads.
Slice | iterateLowerBound(): Returns the smallest key at which the backward iterator can return an entry.
Slice | iterateUpperBound(): Returns the largest key at which the forward iterator can return an entry.
Slice | iterStartTs(): Timestamp of operation.
boolean | managed(): Deprecated. This option is not used anymore.
long | maxSkippableInternalKeys(): A threshold for the number of keys that can be skipped before failing an iterator seek as incomplete.
boolean | pinData(): Returns whether the blocks loaded by the iterator will be pinned in memory.
boolean | prefixSameAsStart(): Returns whether the iterator only iterates over the same prefix as the seek.
long | readaheadSize(): If non-zero, NewIterator will create a new table reader which performs reads of the given size.
ReadTier | readTier(): Returns the current read tier.
ReadOptions | setAutoPrefixMode(boolean mode): When true, by default use total_order_seek = true, and RocksDB can selectively enable prefix seek mode if it won't generate a different result from total_order_seek, based on the seek key and iterator upper bound.
ReadOptions | setBackgroundPurgeOnIteratorCleanup(boolean backgroundPurgeOnIteratorCleanup): If true, when PurgeObsoleteFile is called in CleanupIteratorState, we schedule a background job in the flush job queue and delete obsolete files in background.
ReadOptions | setDeadline(long deadlineTime): Deadline for completing an API call (Get/MultiGet/Seek/Next for now) in microseconds.
ReadOptions | setFillCache(boolean fillCache): Fill the cache when loading the block-based sst formatted db.
ReadOptions | setIgnoreRangeDeletions(boolean ignoreRangeDeletions): If true, keys deleted using the DeleteRange() API will be visible to readers until they are naturally deleted during compaction.
ReadOptions | setIoTimeout(long ioTimeout): A timeout in microseconds to be passed to the underlying FileSystem for reads.
ReadOptions | setIterateLowerBound(AbstractSlice<?> iterateLowerBound): Defines the smallest key at which the backward iterator can return an entry.
ReadOptions | setIterateUpperBound(AbstractSlice<?> iterateUpperBound): Defines the extent up to which the forward iterator can return entries.
ReadOptions | setIterStartTs(AbstractSlice<?> iterStartTs): Timestamp of operation.
ReadOptions | setManaged(boolean managed): Deprecated. This option is not used anymore.
ReadOptions | setMaxSkippableInternalKeys(long maxSkippableInternalKeys): A threshold for the number of keys that can be skipped before failing an iterator seek as incomplete.
ReadOptions | setPinData(boolean pinData): Keep the blocks loaded by the iterator pinned in memory as long as the iterator is not deleted. If used when reading from tables created with BlockBasedTableOptions::use_delta_encoding = false, the Iterator's property "rocksdb.iterator.is-key-pinned" is guaranteed to return 1.
ReadOptions | setPrefixSameAsStart(boolean prefixSameAsStart): Enforce that the iterator only iterates over the same prefix as the seek.
ReadOptions | setReadaheadSize(long readaheadSize): If non-zero, NewIterator will create a new table reader which performs reads of the given size.
ReadOptions | setReadTier(ReadTier readTier): Specify if this read request should process data that ALREADY resides on a particular cache.
ReadOptions | setSnapshot(Snapshot snapshot): If "snapshot" is non-nullptr, read as of the supplied snapshot (which must belong to the DB that is being read and which must not have been released).
ReadOptions | setTableFilter(AbstractTableFilter tableFilter): A callback to determine whether relevant keys for this scan exist in a given table based on the table's properties.
ReadOptions | setTailing(boolean tailing): Specify to create a tailing iterator, a special iterator that has a view of the complete database (i.e. it can also be used to read newly added data) and is optimized for sequential reads.
ReadOptions | setTimestamp(AbstractSlice<?> timestamp): Timestamp of operation.
ReadOptions | setTotalOrderSeek(boolean totalOrderSeek): Enable a total order seek regardless of index format (e.g. hash index) used in the table.
ReadOptions | setValueSizeSoftLimit(long valueSizeSoftLimit): Limits the maximum cumulative value size of the keys in a batch read through MultiGet.
ReadOptions | setVerifyChecksums(boolean verifyChecksums): If true, all data read from underlying storage will be verified against corresponding checksums.
Snapshot | snapshot(): Returns the currently assigned Snapshot instance.
boolean | tailing(): Specify to create a tailing iterator, a special iterator that has a view of the complete database (i.e. it can also be used to read newly added data) and is optimized for sequential reads.
Slice | timestamp(): Timestamp of operation.
boolean | totalOrderSeek(): Returns whether a total order seek will be used.
long | valueSizeSoftLimit(): Limits the maximum cumulative value size of the keys in a batch read through MultiGet.
boolean | verifyChecksums(): If true, all data read from underlying storage will be verified against corresponding checksums.
Methods inherited from class org.rocksdb.RocksObject: disposeInternal, getNativeHandle
Methods inherited from class org.rocksdb.AbstractImmutableNativeReference: close, disOwnNativeHandle, isOwningHandle
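Before the per-method details, a hedged sketch of a typical read path that combines several of the options above (the class name, temp directory, and key/value bytes are illustrative; it assumes the rocksdbjni dependency is available):

```java
import java.nio.file.Files;
import org.rocksdb.Options;
import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;

public class ReadOptionsUsage {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        final String path = Files.createTempDirectory("readoptions-demo").toString();
        try (Options opts = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(opts, path);
             ReadOptions ro = new ReadOptions()
                     .setVerifyChecksums(true)   // checksum every read
                     .setFillCache(false)) {     // don't pollute the block cache
            db.put("k".getBytes(), "v".getBytes());
            byte[] value = db.get(ro, "k".getBytes());
            if (!"v".equals(new String(value))) {
                throw new AssertionError("unexpected value");
            }
        }
    }
}
```

Setters return `this`, so options chain fluently as shown.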
public ReadOptions()

public ReadOptions(boolean verifyChecksums, boolean fillCache)
Parameters:
verifyChecksums - verification will be performed on every read when set to true
fillCache - if true, then fill-cache behavior will be performed

public ReadOptions(ReadOptions other)
Copy constructor.
Parameters:
other - The ReadOptions to copy.

public boolean verifyChecksums()

public ReadOptions setVerifyChecksums(boolean verifyChecksums)
Parameters:
verifyChecksums - if true, then checksum verification will be performed on every read.

public boolean fillCache()

public ReadOptions setFillCache(boolean fillCache)
Parameters:
fillCache - if true, then fill-cache behavior will be performed.

public Snapshot snapshot()
Returns the currently assigned Snapshot instance.
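A hedged sketch of snapshot reads: writes made after `getSnapshot()` are invisible to reads issued through a ReadOptions carrying that snapshot (the class name and temp directory are illustrative; assumes rocksdbjni is available):

```java
import java.nio.file.Files;
import org.rocksdb.Options;
import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.Snapshot;

public class SnapshotRead {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        final String path = Files.createTempDirectory("snapshot-demo").toString();
        try (Options opts = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(opts, path)) {
            db.put("k".getBytes(), "v1".getBytes());
            final Snapshot snapshot = db.getSnapshot();
            try (ReadOptions ro = new ReadOptions().setSnapshot(snapshot)) {
                db.put("k".getBytes(), "v2".getBytes()); // written after the snapshot
                final String seen = new String(db.get(ro, "k".getBytes()));
                if (!"v1".equals(seen)) { // snapshot read ignores the later write
                    throw new AssertionError("expected v1, saw " + seen);
                }
            } finally {
                db.releaseSnapshot(snapshot); // snapshots must be released
            }
        }
    }
}
```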
public ReadOptions setSnapshot(Snapshot snapshot)
If "snapshot" is non-nullptr, read as of the supplied snapshot (which must belong to the DB that is being read and which must not have been released). If "snapshot" is nullptr, use an implicit snapshot of the state at the beginning of this read operation.
Default: null
Parameters:
snapshot - Snapshot instance

public ReadTier readTier()
Returns the current read tier.
Default: ReadTier.READ_ALL_TIER

public ReadOptions setReadTier(ReadTier readTier)
Specify if this read request should process data that ALREADY resides on a particular cache. If the required data is not found at the specified cache, then a RocksDBException is thrown.
Parameters:
readTier - ReadTier instance

public boolean tailing()
Specify to create a tailing iterator, a special iterator that has a view of the complete database (i.e. it can also be used to read newly added data) and is optimized for sequential reads. Not supported in ROCKSDB_LITE mode!

public ReadOptions setTailing(boolean tailing)
Parameters:
tailing - if true, then tailing iterator will be enabled.

@Deprecated
public boolean managed()
Deprecated. This option is not used anymore.

@Deprecated
public ReadOptions setManaged(boolean managed)
Deprecated. This option is not used anymore.
Parameters:
managed - if true, then managed iterators will be enabled.

public boolean totalOrderSeek()
Returns whether a total order seek will be used.
public ReadOptions setTotalOrderSeek(boolean totalOrderSeek)
Enable a total order seek regardless of index format (e.g. hash index) used in the table.
Parameters:
totalOrderSeek - if true, then total order seek will be enabled.

public boolean prefixSameAsStart()

public ReadOptions setPrefixSameAsStart(boolean prefixSameAsStart)
Enforce that the iterator only iterates over the same prefix as the seek. This option is effective only for prefix seeks, i.e. the prefix_extractor is non-null for the column family and totalOrderSeek() is false. Unlike iterate_upper_bound, setPrefixSameAsStart(boolean) only works within a prefix but in both directions.
Parameters:
prefixSameAsStart - if true, then the iterator only iterates over the same prefix as the seek

public boolean pinData()
Returns whether the blocks loaded by the iterator will be pinned in memory.
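A sketch of prefix-bounded iteration with setPrefixSameAsStart (the class name, 4-byte prefix length, and keys are illustrative choices, not recommendations; assumes rocksdbjni is available):

```java
import java.nio.file.Files;
import org.rocksdb.Options;
import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;

public class PrefixIteration {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        final String path = Files.createTempDirectory("prefix-demo").toString();
        // A prefix extractor is required for prefix seeks; fixed 4-byte
        // prefixes here are just for illustration.
        try (Options opts = new Options()
                     .setCreateIfMissing(true)
                     .useFixedLengthPrefixExtractor(4);
             RocksDB db = RocksDB.open(opts, path)) {
            db.put("aaaa1".getBytes(), "x".getBytes());
            db.put("aaaa2".getBytes(), "y".getBytes());
            db.put("bbbb1".getBytes(), "z".getBytes());
            int count = 0;
            try (ReadOptions ro = new ReadOptions().setPrefixSameAsStart(true);
                 RocksIterator it = db.newIterator(ro)) {
                for (it.seek("aaaa".getBytes()); it.isValid(); it.next()) {
                    count++; // iteration stays within the "aaaa" prefix
                }
            }
            if (count != 2) {
                throw new AssertionError("expected 2 keys in prefix, got " + count);
            }
        }
    }
}
```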
public ReadOptions setPinData(boolean pinData)
Keep the blocks loaded by the iterator pinned in memory as long as the iterator is not deleted. If used when reading from tables created with BlockBasedTableOptions::use_delta_encoding = false, the Iterator's property "rocksdb.iterator.is-key-pinned" is guaranteed to return 1.
Parameters:
pinData - if true, the blocks loaded by the iterator will be pinned

public boolean backgroundPurgeOnIteratorCleanup()

public ReadOptions setBackgroundPurgeOnIteratorCleanup(boolean backgroundPurgeOnIteratorCleanup)
If true, when PurgeObsoleteFile is called in CleanupIteratorState, we schedule a background job in the flush job queue and delete obsolete files in background.
Parameters:
backgroundPurgeOnIteratorCleanup - true when PurgeObsoleteFile is called in CleanupIteratorState

public long readaheadSize()

public ReadOptions setReadaheadSize(long readaheadSize)
If non-zero, NewIterator will create a new table reader which performs reads of the given size.
Parameters:
readaheadSize - The readahead size in bytes

public long maxSkippableInternalKeys()

public ReadOptions setMaxSkippableInternalKeys(long maxSkippableInternalKeys)
A threshold for the number of keys that can be skipped before failing an iterator seek as incomplete.
Parameters:
maxSkippableInternalKeys - the number of keys that can be skipped before failing an iterator seek as incomplete.

public boolean ignoreRangeDeletions()

public ReadOptions setIgnoreRangeDeletions(boolean ignoreRangeDeletions)
If true, keys deleted using the DeleteRange() API will be visible to readers until they are naturally deleted during compaction.
Parameters:
ignoreRangeDeletions - true if keys deleted using the DeleteRange() API should be visible

public ReadOptions setIterateLowerBound(AbstractSlice<?> iterateLowerBound)
Defines the smallest key at which the backward iterator can return an entry. Once the bound is passed, AbstractRocksIterator.isValid() will be false. The lower bound is inclusive, i.e. the bound value is a valid entry.
If prefix_extractor is not null, the Seek target and iterate_lower_bound need to have the same prefix. This is because ordering is not guaranteed outside of prefix domain.
Default: null
Parameters:
iterateLowerBound - Slice representing the lower bound

public Slice iterateLowerBound()
Returns the smallest key at which the backward iterator can return an entry.

public ReadOptions setIterateUpperBound(AbstractSlice<?> iterateUpperBound)
Defines the extent up to which the forward iterator can return entries. Once the bound is reached, AbstractRocksIterator.isValid() will be false. The upper bound is exclusive, i.e. the bound value is not a valid entry.
If prefix_extractor is not null, the Seek target and iterate_upper_bound need to have the same prefix. This is because ordering is not guaranteed outside of prefix domain.
Default: null
Parameters:
iterateUpperBound - Slice representing the upper bound

public Slice iterateUpperBound()
Returns the largest key at which the forward iterator can return an entry.
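The inclusive-lower/exclusive-upper semantics can be sketched as follows (class name, temp directory, and single-letter keys are illustrative; assumes rocksdbjni is available):

```java
import java.nio.file.Files;
import org.rocksdb.Options;
import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;
import org.rocksdb.Slice;

public class BoundedIteration {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        final String path = Files.createTempDirectory("bounds-demo").toString();
        try (Options opts = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(opts, path)) {
            for (String k : new String[]{"a", "b", "c", "d"}) {
                db.put(k.getBytes(), k.getBytes());
            }
            // Lower bound "b" is inclusive, upper bound "d" is exclusive,
            // so the iterator should visit exactly "b" and "c".
            // Note the Slices must stay alive while the iterator is in use.
            try (Slice lower = new Slice("b".getBytes());
                 Slice upper = new Slice("d".getBytes());
                 ReadOptions ro = new ReadOptions()
                         .setIterateLowerBound(lower)
                         .setIterateUpperBound(upper);
                 RocksIterator it = db.newIterator(ro)) {
                StringBuilder seen = new StringBuilder();
                for (it.seekToFirst(); it.isValid(); it.next()) {
                    seen.append(new String(it.key()));
                }
                if (!"bc".equals(seen.toString())) {
                    throw new AssertionError("expected bc, got " + seen);
                }
            }
        }
    }
}
```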
public ReadOptions setTableFilter(AbstractTableFilter tableFilter)
A callback to determine whether relevant keys for this scan exist in a given table based on the table's properties.
Parameters:
tableFilter - the table filter for the callback.

public boolean autoPrefixMode()

public ReadOptions setAutoPrefixMode(boolean mode)
When true, by default use total_order_seek = true, and RocksDB can selectively enable prefix seek mode if it won't generate a different result from total_order_seek, based on the seek key and iterator upper bound.
Parameters:
mode - auto prefix mode

public Slice timestamp()
Timestamp of operation.
See Also: iterStartTs()
public ReadOptions setTimestamp(AbstractSlice<?> timestamp)
Timestamp of operation. Keys in the database are ordered as <key, timestamp> tuples. For an iterator, iter_start_ts is the lower bound (older) and timestamp serves as the upper bound. Versions of the same record that fall in the timestamp range will be returned. If iter_start_ts is nullptr, only the most recent version visible to timestamp is returned. The user-specified timestamp feature is still under active development, and the API is subject to change.
Default: null
Parameters:
timestamp - Slice representing the timestamp
See Also: setIterStartTs(AbstractSlice)

public Slice iterStartTs()
Returns the lower bound (older) timestamp for iteration; see setIterStartTs(AbstractSlice).

public ReadOptions setIterStartTs(AbstractSlice<?> iterStartTs)
For an iterator, iter_start_ts is the lower bound (older) and timestamp serves as the upper bound of the <key, timestamp> range. Versions of the same record that fall in the timestamp range will be returned. If iter_start_ts is nullptr, only the most recent version visible to timestamp is returned. The user-specified timestamp feature is still under active development, and the API is subject to change.
Default: null
Parameters:
iterStartTs - Reference to lower bound timestamp or null if there is no lower bound timestamp defined

public long deadline()
Deadline for completing an API call (Get/MultiGet/Seek/Next for now) in microseconds. It should be set to microseconds since epoch, i.e. gettimeofday or equivalent, plus the allowed duration in microseconds. The best way is to use env->NowMicros() + some timeout. This is best efforts: the call may exceed the deadline if there is IO involved and the file system doesn't support deadlines, or due to checking for the deadline periodically rather than for every key if processing a batch.

public ReadOptions setDeadline(long deadlineTime)
Deadline for completing an API call (Get/MultiGet/Seek/Next for now) in microseconds. It should be set to microseconds since epoch, i.e. gettimeofday or equivalent, plus the allowed duration in microseconds. The best way is to use env->NowMicros() + some timeout. This is best efforts: the call may exceed the deadline if there is IO involved and the file system doesn't support deadlines, or due to checking for the deadline periodically rather than for every key if processing a batch.
Parameters:
deadlineTime - deadline time in microseconds.

public long ioTimeout()
A timeout in microseconds to be passed to the underlying FileSystem for reads.

public ReadOptions setIoTimeout(long ioTimeout)
A timeout in microseconds to be passed to the underlying FileSystem for reads.
Parameters:
ioTimeout - time in microseconds.

public long valueSizeSoftLimit()
Limits the maximum cumulative value size of the keys in a batch read through MultiGet.
Default: std::numeric_limits<uint64_t>::max()

public ReadOptions setValueSizeSoftLimit(long valueSizeSoftLimit)
Limits the maximum cumulative value size of the keys in a batch read through MultiGet.
Default: std::numeric_limits<uint64_t>::max()
Parameters:
valueSizeSoftLimit - the maximum cumulative value size of the keys

protected final void disposeInternal(long handle)
Overrides:
disposeInternal in class RocksObject
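Finally, a hedged sketch of a MultiGet batch read, where options such as valueSizeSoftLimit apply (the class name, temp directory, and the 1 MiB limit are illustrative choices; assumes a rocksdbjni version that provides multiGetAsList):

```java
import java.nio.file.Files;
import java.util.Arrays;
import java.util.List;
import org.rocksdb.Options;
import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;

public class MultiGetSketch {
    public static void main(String[] args) throws Exception {
        RocksDB.loadLibrary();
        final String path = Files.createTempDirectory("multiget-demo").toString();
        try (Options opts = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(opts, path);
             ReadOptions ro = new ReadOptions()
                     // Soft cap on cumulative value bytes per MultiGet batch;
                     // 1 MiB is an illustrative number, not a recommendation.
                     .setValueSizeSoftLimit(1024 * 1024)) {
            db.put("k1".getBytes(), "v1".getBytes());
            db.put("k2".getBytes(), "v2".getBytes());
            List<byte[]> values = db.multiGetAsList(ro,
                    Arrays.asList("k1".getBytes(), "k2".getBytes()));
            if (!"v1".equals(new String(values.get(0)))) {
                throw new AssertionError("unexpected MultiGet result");
            }
        }
    }
}
```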