public class Options extends RocksObject implements DBOptionsInterface, ColumnFamilyOptionsInterface
Options are used during the creation of a RocksDB (i.e., RocksDB.open()). If the AbstractNativeReference.dispose() function is not called, then the object will be GC'd automatically and its native resources will be released as part of that process.

Fields: DEFAULT_COMPACTION_MEMTABLE_MEMORY_BUDGET, nativeHandle_
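A brief, hedged sketch of the lifecycle described above (the database path and settings are illustrative, not part of this Javadoc): construct an Options, open a RocksDB with it, and release the native resources explicitly instead of waiting for GC.

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class OptionsLifecycle {
  static { RocksDB.loadLibrary(); }          // load the native library once

  public static void main(final String[] args) {
    final Options options = new Options()
        .setCreateIfMissing(true);           // see setCreateIfMissing(boolean)
    RocksDB db = null;
    try {
      db = RocksDB.open(options, "/tmp/rocksdb-options-example");
      db.put("key".getBytes(), "value".getBytes());
    } catch (final RocksDBException e) {
      e.printStackTrace();
    } finally {
      if (db != null) {
        db.close();                          // close the DB first
      }
      options.dispose();                     // then free the native Options
    }
  }
}
```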
Constructor and Description |
---|
Options()
Construct options for opening a RocksDB.
|
Options(DBOptions dbOptions, ColumnFamilyOptions columnFamilyOptions)
Construct options for opening a RocksDB.
|
Modifier and Type | Method and Description |
---|---|
boolean |
adviseRandomOnOpen()
If set true, will hint the underlying file system that the file
access pattern is random, when an sst file is opened.
|
boolean |
allowMmapReads()
Allow the OS to mmap file for reading sst tables.
|
boolean |
allowMmapWrites()
Allow the OS to mmap file for writing.
|
boolean |
allowOsBuffer()
Data being read from file storage may be buffered in the OS
Default: true
|
long |
arenaBlockSize()
The size of one block in arena memory allocation.
|
int |
bloomLocality()
Control locality of bloom filter probes to improve cache miss rate.
|
long |
bytesPerSync()
Allows OS to incrementally sync files to disk while they are being
written, asynchronously, in the background.
|
CompactionStyle |
compactionStyle()
Compaction style for DB.
|
java.util.List<CompressionType> |
compressionPerLevel()
Return the currently set CompressionType per level. |
CompressionType |
compressionType()
Compress blocks using the specified compression algorithm.
|
boolean |
createIfMissing()
Return true if the create_if_missing flag is set to true.
|
boolean |
createMissingColumnFamilies()
Return true if the create_missing_column_families flag is set
to true.
|
Options |
createStatistics()
Creates statistics object which collects metrics about database operations.
|
java.lang.String |
dbLogDir()
Returns the directory of info log.
|
long |
deleteObsoleteFilesPeriodMicros()
The periodicity when obsolete files get deleted.
|
boolean |
disableAutoCompactions()
Disable automatic compactions.
|
boolean |
disableDataSync()
If true, then the contents of data files are not synced
to stable storage.
|
protected void |
disposeInternal(long handle) |
boolean |
errorIfExists()
If true, an error will be thrown during RocksDB.open() if the
database already exists.
|
int |
expandedCompactionFactor()
Maximum number of bytes in all compacted files.
|
boolean |
filterDeletes()
Use KeyMayExist API to filter deletes when this is true.
|
Env |
getEnv()
Returns the set RocksEnv instance.
|
double |
hardRateLimit()
Puts are delayed 1ms at a time when any level has a compaction score that
exceeds hard_rate_limit.
|
InfoLogLevel |
infoLogLevel()
Returns currently set log level.
|
long |
inplaceUpdateNumLocks()
Number of locks used for inplace update
Default: 10000, if inplace_update_support = true, else 0.
|
boolean |
inplaceUpdateSupport()
Allows thread-safe inplace updates.
|
boolean |
isFdCloseOnExec()
Disable child process inherit open files.
|
long |
keepLogFileNum()
Returns the maximum number of info log files to be kept.
|
boolean |
levelCompactionDynamicLevelBytes()
Return if
LevelCompactionDynamicLevelBytes is enabled. |
int |
levelZeroFileNumCompactionTrigger()
The number of files in level 0 to trigger compaction from level-0 to
level-1.
|
int |
levelZeroSlowdownWritesTrigger()
Soft limit on the number of level-0 files.
|
int |
levelZeroStopWritesTrigger()
Maximum number of level-0 files.
|
long |
logFileTimeToRoll()
Returns the time interval for the info log file to roll (in seconds).
|
long |
manifestPreallocationSize()
Number of bytes to preallocate (via fallocate) the manifest
files.
|
int |
maxBackgroundCompactions()
Returns the maximum number of concurrent background compaction jobs,
submitted to the default LOW priority thread pool.
|
int |
maxBackgroundFlushes()
Returns the maximum number of concurrent background flush jobs.
|
long |
maxBytesForLevelBase()
The upper-bound of the total size of level-1 files in bytes.
|
int |
maxBytesForLevelMultiplier()
The ratio between the total size of level-(L+1) files and the total
size of level-L files for all L.
|
int |
maxGrandparentOverlapFactor()
Control maximum bytes of overlaps in grandparent (i.e., level+2) before we
stop building a single file in a level->level+1 compaction.
|
long |
maxLogFileSize()
Returns the maximum size of an info log file.
|
long |
maxManifestFileSize()
Manifest file is rolled over on reaching this limit.
|
int |
maxMemCompactionLevel()
This does nothing anymore.
|
int |
maxOpenFiles()
Number of open files that can be used by the DB.
|
long |
maxSequentialSkipInIterations()
An iteration->Next() sequentially skips over keys with the same
user-key unless this option is set.
|
long |
maxSuccessiveMerges()
Maximum number of successive merge operations on a key in the memtable.
|
long |
maxTableFilesSizeFIFO()
FIFO compaction option.
|
long |
maxTotalWalSize()
Returns the max total wal size.
|
int |
maxWriteBufferNumber()
Returns maximum number of write buffers.
|
java.lang.String |
memTableFactoryName()
Returns the name of the current mem table representation.
|
int |
memtablePrefixBloomBits()
Returns the number of bits used in the prefix bloom filter.
|
int |
memtablePrefixBloomProbes()
The number of hash probes per key used in the mem-table.
|
int |
minPartialMergeOperands()
The number of partial merge operands to accumulate before partial
merge will be performed.
|
int |
minWriteBufferNumberToMerge()
The minimum number of write buffers that will be merged together
before writing to storage.
|
int |
numLevels()
If level-styled compaction is used, then this number determines
the total number of levels.
|
boolean |
optimizeFiltersForHits()
Returns the current state of the
optimize_filters_for_hits
setting. |
Options |
optimizeForPointLookup(long blockCacheSizeMb)
Use this if you don't need to keep the data sorted, i.e.
|
Options |
optimizeLevelStyleCompaction()
Default values for some parameters in ColumnFamilyOptions are not
optimized for heavy workloads and big datasets, which means you might
observe write stalls under some conditions.
|
Options |
optimizeLevelStyleCompaction(long memtableMemoryBudget)
Default values for some parameters in ColumnFamilyOptions are not
optimized for heavy workloads and big datasets, which means you might
observe write stalls under some conditions.
|
Options |
optimizeUniversalStyleCompaction()
Default values for some parameters in ColumnFamilyOptions are not
optimized for heavy workloads and big datasets, which means you might
observe write stalls under some conditions.
|
Options |
optimizeUniversalStyleCompaction(long memtableMemoryBudget)
Default values for some parameters in ColumnFamilyOptions are not
optimized for heavy workloads and big datasets, which means you might
observe write stalls under some conditions.
|
boolean |
paranoidChecks()
If true, the implementation will do aggressive checking of the
data it is processing and will stop early if it detects any
errors.
|
Options |
prepareForBulkLoad()
Set appropriate parameters for bulk loading.
|
boolean |
purgeRedundantKvsWhileFlush()
Purge duplicate/deleted keys when a memtable is flushed to storage.
|
int |
rateLimitDelayMaxMilliseconds()
The maximum time interval a put will be stalled when hard_rate_limit
is enforced.
|
Options |
setAdviseRandomOnOpen(boolean adviseRandomOnOpen)
If set true, will hint the underlying file system that the file
access pattern is random, when an sst file is opened.
|
Options |
setAllowMmapReads(boolean allowMmapReads)
Allow the OS to mmap file for reading sst tables.
|
Options |
setAllowMmapWrites(boolean allowMmapWrites)
Allow the OS to mmap file for writing.
|
Options |
setAllowOsBuffer(boolean allowOsBuffer)
Data being read from file storage may be buffered in the OS
Default: true
|
Options |
setArenaBlockSize(long arenaBlockSize)
The size of one block in arena memory allocation.
|
Options |
setBloomLocality(int bloomLocality)
Control locality of bloom filter probes to improve cache miss rate.
|
Options |
setBytesPerSync(long bytesPerSync)
Allows OS to incrementally sync files to disk while they are being
written, asynchronously, in the background.
|
Options |
setCompactionStyle(CompactionStyle compactionStyle)
Set compaction style for DB.
|
Options |
setComparator(AbstractComparator<? extends AbstractSlice<?>> comparator)
Use the specified comparator for key ordering.
|
Options |
setComparator(BuiltinComparator builtinComparator)
Set
BuiltinComparator to be used with RocksDB. |
Options |
setCompressionPerLevel(java.util.List<CompressionType> compressionLevels)
Different levels can have different compression
policies.
|
Options |
setCompressionType(CompressionType compressionType)
Compress blocks using the specified compression algorithm.
|
Options |
setCreateIfMissing(boolean flag)
If this value is set to true, then the database will be created
if it is missing during
RocksDB.open() . |
Options |
setCreateMissingColumnFamilies(boolean flag)
If true, missing column families will be automatically created
|
Options |
setDbLogDir(java.lang.String dbLogDir)
This specifies the info LOG dir.
|
Options |
setDeleteObsoleteFilesPeriodMicros(long micros)
The periodicity when obsolete files get deleted.
|
Options |
setDisableAutoCompactions(boolean disableAutoCompactions)
Disable automatic compactions.
|
Options |
setDisableDataSync(boolean disableDataSync)
If true, then the contents of manifest and data files are
not synced to stable storage.
|
Options |
setEnv(Env env)
Use the specified object to interact with the environment,
e.g.
|
Options |
setErrorIfExists(boolean errorIfExists)
If true, an error will be thrown during RocksDB.open() if the
database already exists.
|
Options |
setExpandedCompactionFactor(int expandedCompactionFactor)
Maximum number of bytes in all compacted files.
|
Options |
setFilterDeletes(boolean filterDeletes)
Use KeyMayExist API to filter deletes when this is true.
|
Options |
setHardRateLimit(double hardRateLimit)
Puts are delayed 1ms at a time when any level has a compaction score that
exceeds hard_rate_limit.
|
Options |
setIncreaseParallelism(int totalThreads)
By default, RocksDB uses only one background thread for flush and
compaction.
|
Options |
setInfoLogLevel(InfoLogLevel infoLogLevel)
Sets the RocksDB log level.
|
Options |
setInplaceUpdateNumLocks(long inplaceUpdateNumLocks)
Number of locks used for inplace update
Default: 10000, if inplace_update_support = true, else 0.
|
Options |
setInplaceUpdateSupport(boolean inplaceUpdateSupport)
Allows thread-safe inplace updates.
|
Options |
setIsFdCloseOnExec(boolean isFdCloseOnExec)
Disable child process inherit open files.
|
Options |
setKeepLogFileNum(long keepLogFileNum)
Specifies the maximum number of info log files to be kept.
|
Options |
setLevelCompactionDynamicLevelBytes(boolean enableLevelCompactionDynamicLevelBytes)
If
true , RocksDB will pick target size of each level
dynamically. |
Options |
setLevelZeroFileNumCompactionTrigger(int numFiles)
Number of files to trigger level-0 compaction.
|
Options |
setLevelZeroSlowdownWritesTrigger(int numFiles)
Soft limit on number of level-0 files.
|
Options |
setLevelZeroStopWritesTrigger(int numFiles)
Maximum number of level-0 files.
|
Options |
setLogFileTimeToRoll(long logFileTimeToRoll)
Specifies the time interval for the info log file to roll (in seconds).
|
Options |
setLogger(Logger logger)
Any internal progress/error information generated by
the db will be written to the Logger if it is non-nullptr,
or to a file stored in the same directory as the DB
contents if info_log is nullptr.
|
Options |
setManifestPreallocationSize(long size)
Number of bytes to preallocate (via fallocate) the manifest
files.
|
Options |
setMaxBackgroundCompactions(int maxBackgroundCompactions)
Specifies the maximum number of concurrent background compaction jobs,
submitted to the default LOW priority thread pool.
|
Options |
setMaxBackgroundFlushes(int maxBackgroundFlushes)
Specifies the maximum number of concurrent background flush jobs.
|
Options |
setMaxBytesForLevelBase(long maxBytesForLevelBase)
The upper-bound of the total size of level-1 files in bytes.
|
Options |
setMaxBytesForLevelMultiplier(int multiplier)
The ratio between the total size of level-(L+1) files and the total
size of level-L files for all L.
|
Options |
setMaxGrandparentOverlapFactor(int maxGrandparentOverlapFactor)
Control maximum bytes of overlaps in grandparent (i.e., level+2) before we
stop building a single file in a level->level+1 compaction.
|
Options |
setMaxLogFileSize(long maxLogFileSize)
Specifies the maximum size of an info log file.
|
Options |
setMaxManifestFileSize(long maxManifestFileSize)
Manifest file is rolled over on reaching this limit.
|
Options |
setMaxMemCompactionLevel(int maxMemCompactionLevel)
This does nothing anymore.
|
Options |
setMaxOpenFiles(int maxOpenFiles)
Number of open files that can be used by the DB.
|
Options |
setMaxSequentialSkipInIterations(long maxSequentialSkipInIterations)
An iteration->Next() sequentially skips over keys with the same
user-key unless this option is set.
|
Options |
setMaxSuccessiveMerges(long maxSuccessiveMerges)
Maximum number of successive merge operations on a key in the memtable.
|
Options |
setMaxTableFilesSizeFIFO(long maxTableFilesSize)
FIFO compaction option.
|
Options |
setMaxTotalWalSize(long maxTotalWalSize)
Once write-ahead logs exceed this size, we will start forcing the
flush of column families whose memtables are backed by the oldest live
WAL file (i.e.
|
Options |
setMaxWriteBufferNumber(int maxWriteBufferNumber)
The maximum number of write buffers that are built up in memory.
|
Options |
setMemTableConfig(MemTableConfig config)
Set the config for mem-table.
|
Options |
setMemtablePrefixBloomBits(int memtablePrefixBloomBits)
Sets the number of bits used in the prefix bloom filter.
|
Options |
setMemtablePrefixBloomProbes(int memtablePrefixBloomProbes)
The number of hash probes per key used in the mem-table.
|
Options |
setMergeOperator(MergeOperator mergeOperator)
Set the merge operator to be used for merging two different key/value
pairs that share the same key.
|
Options |
setMergeOperatorName(java.lang.String name)
Set the merge operator to be used for merging two merge operands
of the same key.
|
Options |
setMinPartialMergeOperands(int minPartialMergeOperands)
The number of partial merge operands to accumulate before partial
merge will be performed.
|
Options |
setMinWriteBufferNumberToMerge(int minWriteBufferNumberToMerge)
The minimum number of write buffers that will be merged together
before writing to storage.
|
Options |
setNumLevels(int numLevels)
Set the number of levels for this database
If level-styled compaction is used, then this number determines
the total number of levels.
|
Options |
setOptimizeFiltersForHits(boolean optimizeFiltersForHits)
This flag specifies that the implementation should optimize the filters
mainly for cases where keys are found rather than also optimize for keys
missed.
|
Options |
setParanoidChecks(boolean paranoidChecks)
If true, the implementation will do aggressive checking of the
data it is processing and will stop early if it detects any
errors.
|
Options |
setPurgeRedundantKvsWhileFlush(boolean purgeRedundantKvsWhileFlush)
Purge duplicate/deleted keys when a memtable is flushed to storage.
|
Options |
setRateLimitDelayMaxMilliseconds(int rateLimitDelayMaxMilliseconds)
The maximum time interval a put will be stalled when hard_rate_limit
is enforced.
|
Options |
setRateLimiterConfig(RateLimiterConfig config)
Use to control write rate of flush and compaction.
|
Options |
setSoftRateLimit(double softRateLimit)
Puts are delayed 0-1 ms when any level has a compaction score that exceeds
soft_rate_limit.
|
Options |
setSourceCompactionFactor(int sourceCompactionFactor)
Maximum number of bytes in all source files to be compacted in a
single compaction run.
|
Options |
setStatsDumpPeriodSec(int statsDumpPeriodSec)
if not zero, dump rocksdb.stats to LOG every stats_dump_period_sec
Default: 3600 (1 hour)
|
Options |
setTableCacheNumshardbits(int tableCacheNumshardbits)
Number of shards used for table cache.
|
Options |
setTableFormatConfig(TableFormatConfig config)
Set the config for table format.
|
Options |
setTargetFileSizeBase(long targetFileSizeBase)
The target file size for compaction.
|
Options |
setTargetFileSizeMultiplier(int multiplier)
targetFileSizeMultiplier defines the size ratio between a
level-L file and level-(L+1) file.
|
Options |
setUseAdaptiveMutex(boolean useAdaptiveMutex)
Use adaptive mutex, which spins in the user space before resorting
to kernel.
|
Options |
setUseFsync(boolean useFsync)
If true, then every store to stable storage will issue a fsync.
|
Options |
setVerifyChecksumsInCompaction(boolean verifyChecksumsInCompaction)
If true, compaction will verify checksum on every read that happens
as part of compaction
Default: true
|
Options |
setWalDir(java.lang.String walDir)
This specifies the absolute dir path for write-ahead logs (WAL).
|
Options |
setWalSizeLimitMB(long sizeLimitMB)
WalTtlSeconds() and walSizeLimitMB() affect how archived logs
will be deleted.
|
Options |
setWalTtlSeconds(long walTtlSeconds)
DBOptionsInterface.walTtlSeconds() and DBOptionsInterface.walSizeLimitMB() affect how archived logs
will be deleted. |
Options |
setWriteBufferSize(long writeBufferSize)
Amount of data to build up in memory (backed by an unsorted log
on disk) before converting to a sorted on-disk file.
|
double |
softRateLimit()
Puts are delayed 0-1 ms when any level has a compaction score that exceeds
soft_rate_limit.
|
int |
sourceCompactionFactor()
Maximum number of bytes in all source files to be compacted in a
single compaction run.
|
Statistics |
statisticsPtr()
Returns statistics object.
|
int |
statsDumpPeriodSec()
If not zero, dump rocksdb.stats to LOG every stats_dump_period_sec
Default: 3600 (1 hour)
|
int |
tableCacheNumshardbits()
Number of shards used for table cache.
|
java.lang.String |
tableFactoryName() |
long |
targetFileSizeBase()
The target file size for compaction.
|
int |
targetFileSizeMultiplier()
targetFileSizeMultiplier defines the size ratio between a
level-(L+1) file and level-L file.
|
boolean |
useAdaptiveMutex()
Use adaptive mutex, which spins in the user space before resorting
to kernel.
|
Options |
useCappedPrefixExtractor(int n)
Same as fixed length prefix extractor, except that when slice is
shorter than the fixed length, it will use the full key.
|
Options |
useFixedLengthPrefixExtractor(int n)
This prefix-extractor uses the first n bytes of a key as its prefix.
|
boolean |
useFsync()
If true, then every store to stable storage will issue a fsync.
|
boolean |
verifyChecksumsInCompaction()
If true, compaction will verify checksum on every read that happens
as part of compaction
Default: true
|
java.lang.String |
walDir()
Returns the path to the write-ahead-logs (WAL) directory.
|
long |
walSizeLimitMB()
DBOptionsInterface.walTtlSeconds() and walSizeLimitMB() affect how archived logs
will be deleted. |
long |
walTtlSeconds()
WalTtlSeconds() and walSizeLimitMB() affect how archived logs
will be deleted.
|
long |
writeBufferSize()
Returns the size of the write buffer.
|
disposeInternal
close, disOwnNativeHandle, isOwningHandle
dispose, finalize
public Options()
Construct options for opening a RocksDB. This constructor creates a rocksdb::Options on the C++ side.
public Options(DBOptions dbOptions, ColumnFamilyOptions columnFamilyOptions)
Construct options for opening a RocksDB.
dbOptions
- a DBOptions instance
columnFamilyOptions
- a ColumnFamilyOptions instance
public Options setIncreaseParallelism(int totalThreads)
DBOptionsInterface
By default, RocksDB uses only one background thread for flush and compaction. Calling this function will set it up such that a total of `total_threads` threads is used.
You almost definitely want to call this function if your system is bottlenecked by RocksDB.
setIncreaseParallelism
in interface DBOptionsInterface
totalThreads
- The total number of threads to be used by RocksDB.
A good value is the number of cores.
public Options setCreateIfMissing(boolean flag)
DBOptionsInterface
If this value is set to true, then the database will be created if it is missing during RocksDB.open().
Default: false
setCreateIfMissing
in interface DBOptionsInterface
flag
- a flag indicating whether to create the database if the database specified in the RocksDB.open(org.rocksdb.Options, String) operation is missing.
RocksDB.open(org.rocksdb.Options, String)
public Options setCreateMissingColumnFamilies(boolean flag)
DBOptionsInterface
If true, missing column families will be automatically created
Default: false
setCreateMissingColumnFamilies
in interface DBOptionsInterface
flag
- a flag indicating if missing column families shall be
created automatically.
public Options setEnv(Env env)
Use the specified object to interact with the environment. Default: Env.getDefault()
env
- Env instance.
public Env getEnv()
Returns the RocksEnv instance set in the Options.
public Options prepareForBulkLoad()
Set appropriate parameters for bulk loading. The reason that this is a function that returns "this" instead of a constructor is to enable chaining of multiple similar calls in the future.
All data will be in level 0 without any automatic compaction. It's recommended to manually call CompactRange(NULL, NULL) before reading from the database, because otherwise the read can be very slow.
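A minimal sketch of the bulk-loading pattern described above; the path, key/value data and record count are illustrative (assumes import org.rocksdb.*):

```java
static void bulkLoad(final String path) throws RocksDBException {
  final Options options = new Options()
      .setCreateIfMissing(true)
      .prepareForBulkLoad();                 // everything lands in level 0
  final RocksDB db = RocksDB.open(options, path);
  try {
    for (int i = 0; i < 1_000_000; i++) {
      db.put(("key" + i).getBytes(), ("value" + i).getBytes());
    }
    db.compactRange();                       // compact before serving reads
  } finally {
    db.close();
    options.dispose();
  }
}
```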
public boolean createIfMissing()
DBOptionsInterface
createIfMissing
in interface DBOptionsInterface
DBOptionsInterface.setCreateIfMissing(boolean)
public boolean createMissingColumnFamilies()
DBOptionsInterface
createMissingColumnFamilies
in interface DBOptionsInterface
DBOptionsInterface.setCreateMissingColumnFamilies(boolean)
public Options optimizeForPointLookup(long blockCacheSizeMb)
ColumnFamilyOptionsInterface
optimizeForPointLookup
in interface ColumnFamilyOptionsInterface
blockCacheSizeMb
- Block cache size in MB.
public Options optimizeLevelStyleCompaction()
ColumnFamilyOptionsInterface
Default values for some parameters in ColumnFamilyOptions are not optimized for heavy workloads and big datasets, which means you might observe write stalls under some conditions. As a starting point for tuning RocksDB options, use the following for level style compaction.
Make sure to also call IncreaseParallelism(), which will provide the biggest performance gains.
Note: we might use more memory than memtable_memory_budget during high write rate period
optimizeLevelStyleCompaction
in interface ColumnFamilyOptionsInterface
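A hedged sketch combining optimizeLevelStyleCompaction() with setIncreaseParallelism(), as the description above recommends; the thread count is just one reasonable choice:

```java
// assumes: import org.rocksdb.Options;
final Options options = new Options()
    .setCreateIfMissing(true)
    .setIncreaseParallelism(Runtime.getRuntime().availableProcessors())
    .optimizeLevelStyleCompaction();   // level-style tuning for heavy writes
```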
public Options optimizeLevelStyleCompaction(long memtableMemoryBudget)
ColumnFamilyOptionsInterface
Default values for some parameters in ColumnFamilyOptions are not optimized for heavy workloads and big datasets, which means you might observe write stalls under some conditions. As a starting point for tuning RocksDB options, use the following for level style compaction.
Make sure to also call IncreaseParallelism(), which will provide the biggest performance gains.
Note: we might use more memory than memtable_memory_budget during high write rate period
optimizeLevelStyleCompaction
in interface ColumnFamilyOptionsInterface
memtableMemoryBudget
- memory budget in bytes.
public Options optimizeUniversalStyleCompaction()
ColumnFamilyOptionsInterface
Default values for some parameters in ColumnFamilyOptions are not optimized for heavy workloads and big datasets, which means you might observe write stalls under some conditions. As a starting point for tuning RocksDB options, use the following for universal style compaction.
Universal style compaction is focused on reducing Write Amplification Factor for big data sets, but increases Space Amplification.
Make sure to also call IncreaseParallelism(), which will provide the biggest performance gains.
Note: we might use more memory than memtable_memory_budget during high write rate period
optimizeUniversalStyleCompaction
in interface ColumnFamilyOptionsInterface
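The memory-budget overload works the same way; a sketch with an illustrative 512 MB memtable budget:

```java
// assumes: import org.rocksdb.Options;
final long memtableMemoryBudget = 512L * 1024 * 1024;   // 512 MB, illustrative
final Options options = new Options()
    .setIncreaseParallelism(Runtime.getRuntime().availableProcessors())
    .optimizeUniversalStyleCompaction(memtableMemoryBudget);
```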
public Options optimizeUniversalStyleCompaction(long memtableMemoryBudget)
ColumnFamilyOptionsInterface
Default values for some parameters in ColumnFamilyOptions are not optimized for heavy workloads and big datasets, which means you might observe write stalls under some conditions. As a starting point for tuning RocksDB options, use the following for universal style compaction.
Universal style compaction is focused on reducing Write Amplification Factor for big data sets, but increases Space Amplification.
Make sure to also call IncreaseParallelism(), which will provide the biggest performance gains.
Note: we might use more memory than memtable_memory_budget during high write rate period
optimizeUniversalStyleCompaction
in interface ColumnFamilyOptionsInterface
memtableMemoryBudget
- memory budget in bytes.
public Options setComparator(BuiltinComparator builtinComparator)
ColumnFamilyOptionsInterface
BuiltinComparator
to be used with RocksDB.
Note: Comparator can be set once upon database creation.
Default: BytewiseComparator.
setComparator
in interface ColumnFamilyOptionsInterface
builtinComparator
- a BuiltinComparator
type.
public Options setComparator(AbstractComparator<? extends AbstractSlice<?>> comparator)
ColumnFamilyOptionsInterface
setComparator
in interface ColumnFamilyOptionsInterface
comparator
- java instance.
public Options setMergeOperatorName(java.lang.String name)
ColumnFamilyOptionsInterface
Set the merge operator to be used for merging two merge operands of the same key. The merge function is invoked during compaction and at lookup time, if multiple key/value pairs belonging to the same key are found in the database.
setMergeOperatorName
in interface ColumnFamilyOptionsInterface
name
- the name of the merge function, as defined by
the MergeOperators factory (see utilities/MergeOperators.h)
The merge function is specified by name and must be one of the
standard merge operators provided by RocksDB. The available
operators are "put", "uint64add", "stringappend" and "stringappendtest".
public Options setMergeOperator(MergeOperator mergeOperator)
ColumnFamilyOptionsInterface
Set the merge operator to be used for merging two different key/value pairs that share the same key. The merge function is invoked during compaction and at lookup time, if multiple key/value pairs belonging to the same key are found in the database.
setMergeOperator
in interface ColumnFamilyOptionsInterface
mergeOperator
- MergeOperator
instance.
public Options setWriteBufferSize(long writeBufferSize)
ColumnFamilyOptionsInterface
max_write_buffer_number
write buffers may be held in memory
at the same time, so you may wish to adjust this parameter
to control memory usage.
Also, a larger write buffer will result in a longer recovery time
the next time the database is opened.
Default: 4MB
setWriteBufferSize
in interface ColumnFamilyOptionsInterface
writeBufferSize
- the size of write buffer.
public long writeBufferSize()
ColumnFamilyOptionsInterface
writeBufferSize
in interface ColumnFamilyOptionsInterface
ColumnFamilyOptionsInterface.setWriteBufferSize(long)
public Options setMaxWriteBufferNumber(int maxWriteBufferNumber)
ColumnFamilyOptionsInterface
setMaxWriteBufferNumber
in interface ColumnFamilyOptionsInterface
maxWriteBufferNumber
- maximum number of write buffers.
public int maxWriteBufferNumber()
ColumnFamilyOptionsInterface
maxWriteBufferNumber
in interface ColumnFamilyOptionsInterface
ColumnFamilyOptionsInterface.setMaxWriteBufferNumber(int)
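A sketch of how the write-buffer settings above interact; the values are illustrative, and total memtable memory is roughly writeBufferSize * maxWriteBufferNumber:

```java
// assumes: import org.rocksdb.Options;
final Options options = new Options()
    .setWriteBufferSize(64L * 1024 * 1024)   // 64 MB per memtable
    .setMaxWriteBufferNumber(4)              // at most 4 memtables in memory
    .setMinWriteBufferNumberToMerge(2);      // merge 2 memtables before flush
```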
public boolean errorIfExists()
DBOptionsInterface
errorIfExists
in interface DBOptionsInterface
public Options setErrorIfExists(boolean errorIfExists)
DBOptionsInterface
setErrorIfExists
in interface DBOptionsInterface
errorIfExists
- if true, an exception will be thrown
during RocksDB.open()
if the database already exists.
RocksDB.open(org.rocksdb.Options, String)
public boolean paranoidChecks()
DBOptionsInterface
paranoidChecks
in interface DBOptionsInterface
public Options setParanoidChecks(boolean paranoidChecks)
DBOptionsInterface
setParanoidChecks
in interface DBOptionsInterface
paranoidChecks
- a flag to indicate whether paranoid-check
is on.
public int maxOpenFiles()
DBOptionsInterface
target_file_size_base
and target_file_size_multiplier
for level-based compaction. For universal-style compaction, you can usually
set it to -1.
maxOpenFiles
in interface DBOptionsInterface
public Options setMaxTotalWalSize(long maxTotalWalSize)
DBOptionsInterface
Once write-ahead logs exceed this size, we will start forcing the flush of column families whose memtables are backed by the oldest live WAL file (i.e. the ones that are causing all the space amplification).
If set to 0 (default), we will dynamically choose the WAL size limit to be [sum of all write_buffer_size * max_write_buffer_number] * 2
Default: 0
setMaxTotalWalSize
in interface DBOptionsInterface
maxTotalWalSize
- max total wal size.
public long maxTotalWalSize()
DBOptionsInterface
Returns the max total wal size. Once write-ahead logs exceed this size, we will start forcing the flush of column families whose memtables are backed by the oldest live WAL file (i.e. the ones that are causing all the space amplification).
If set to 0 (default), we will dynamically choose the WAL size limit to be [sum of all write_buffer_size * max_write_buffer_number] * 2
maxTotalWalSize
in interface DBOptionsInterface
public Options setMaxOpenFiles(int maxOpenFiles)
DBOptionsInterface
target_file_size_base
and target_file_size_multiplier
for level-based compaction. For universal-style compaction, you can usually
set it to -1.
Default: 5000
setMaxOpenFiles
in interface DBOptionsInterface
maxOpenFiles
- the maximum number of open files.
public boolean disableDataSync()
DBOptionsInterface
disableDataSync
in interface DBOptionsInterface
public Options setDisableDataSync(boolean disableDataSync)
DBOptionsInterface
If true, then the contents of manifest and data files are not synced to stable storage. Their contents remain in the OS buffers till the OS decides to flush them.
This option is good for bulk-loading of data.
Once the bulk-loading is complete, please issue a sync to the OS to flush all dirty buffers to stable storage.
Default: false
setDisableDataSync
in interface DBOptionsInterface
disableDataSync
- a boolean flag to specify whether to
disable data sync.
public boolean useFsync()
DBOptionsInterface
If true, then every store to stable storage will issue a fsync.
If false, then every store to stable storage will issue a fdatasync. This parameter should be set to true while storing data to filesystem like ext3 that can lose files after a reboot.
useFsync
in interface DBOptionsInterface
public Options setUseFsync(boolean useFsync)
DBOptionsInterface
If true, then every store to stable storage will issue a fsync.
If false, then every store to stable storage will issue a fdatasync. This parameter should be set to true while storing data to filesystem like ext3 that can lose files after a reboot.
Default: false
setUseFsync
in interface DBOptionsInterface
useFsync
- a boolean flag to specify whether to use fsync.
public java.lang.String dbLogDir()
DBOptionsInterface
dbLogDir
in interface DBOptionsInterface
public Options setDbLogDir(java.lang.String dbLogDir)
DBOptionsInterface
setDbLogDir
in interface DBOptionsInterface
dbLogDir
- the path to the info log directory.
public java.lang.String walDir()
DBOptionsInterface
walDir
in interface DBOptionsInterface
public Options setWalDir(java.lang.String walDir)
DBOptionsInterface
setWalDir
in interface DBOptionsInterface
walDir
- the path to the write-ahead-log directory.
public long deleteObsoleteFilesPeriodMicros()
DBOptionsInterface
deleteObsoleteFilesPeriodMicros
in interface DBOptionsInterface
public Options setDeleteObsoleteFilesPeriodMicros(long micros)
DBOptionsInterface
setDeleteObsoleteFilesPeriodMicros
in interface DBOptionsInterface
micros
- the time interval in micros.
public int maxBackgroundCompactions()
DBOptionsInterface
maxBackgroundCompactions
in interface DBOptionsInterface
Env.setBackgroundThreads(int)
,
Env.setBackgroundThreads(int, int)
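A sketch of raising the background-job limits discussed here; values are illustrative, and the LOW/HIGH priority thread pools may also need to be enlarged via Env.setBackgroundThreads (see the references above):

```java
// assumes: import org.rocksdb.Options;
final Options options = new Options()
    .setMaxBackgroundCompactions(4)   // jobs in the default LOW priority pool
    .setMaxBackgroundFlushes(2);      // concurrent flush jobs
```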
public Options createStatistics()
DBOptionsInterface
Creates statistics object which collects metrics about database operations. Statistics objects should not be shared between DB instances as it does not use any locks to prevent concurrent updates.
createStatistics
in interface DBOptionsInterface
RocksDB.open(org.rocksdb.Options, String)
public Statistics statisticsPtr()
DBOptionsInterface
Returns statistics object. Calls DBOptionsInterface.createStatistics() if C++ returns nullptr for statistics.
statisticsPtr
in interface DBOptionsInterface
DBOptionsInterface.createStatistics()
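A sketch of enabling statistics and reading them back; TickerType.BYTES_WRITTEN and the getTickerCount call are assumptions about the Statistics API of this RocksJava version:

```java
// assumes: import org.rocksdb.*;
static void printBytesWritten(final String path) throws RocksDBException {
  final Options options = new Options()
      .setCreateIfMissing(true)
      .createStatistics();                        // enable before open()
  final RocksDB db = RocksDB.open(options, path);
  try {
    db.put("k".getBytes(), "v".getBytes());
    final Statistics stats = options.statisticsPtr();
    // TickerType.BYTES_WRITTEN is an assumed ticker name
    System.out.println(stats.getTickerCount(TickerType.BYTES_WRITTEN));
  } finally {
    db.close();
    options.dispose();
  }
}
```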
public Options setMaxBackgroundCompactions(int maxBackgroundCompactions)
DBOptionsInterface
setMaxBackgroundCompactions
in interface DBOptionsInterface
maxBackgroundCompactions
- the maximum number of background
compaction jobs.
Env.setBackgroundThreads(int)
,
Env.setBackgroundThreads(int, int)
,
DBOptionsInterface.maxBackgroundFlushes()
public int maxBackgroundFlushes()
DBOptionsInterface
maxBackgroundFlushes
in interface DBOptionsInterface
Env.setBackgroundThreads(int)
,
Env.setBackgroundThreads(int, int)
public Options setMaxBackgroundFlushes(int maxBackgroundFlushes)
DBOptionsInterface
setMaxBackgroundFlushes
in interface DBOptionsInterface
maxBackgroundFlushes
- number of max concurrent flush jobs.
Env.setBackgroundThreads(int)
,
Env.setBackgroundThreads(int, int)
,
DBOptionsInterface.maxBackgroundCompactions()
public long maxLogFileSize()
DBOptionsInterface
maxLogFileSize
in interface DBOptionsInterface
public Options setMaxLogFileSize(long maxLogFileSize)
DBOptionsInterface
setMaxLogFileSize
in interface DBOptionsInterface
maxLogFileSize
- the maximum size of an info log file.
public long logFileTimeToRoll()
DBOptionsInterface
logFileTimeToRoll
in interface DBOptionsInterface
public Options setLogFileTimeToRoll(long logFileTimeToRoll)
DBOptionsInterface
setLogFileTimeToRoll
in interface DBOptionsInterface
logFileTimeToRoll
- the time interval in seconds.
public long keepLogFileNum()
DBOptionsInterface
keepLogFileNum
in interface DBOptionsInterface
public Options setKeepLogFileNum(long keepLogFileNum)
DBOptionsInterface
setKeepLogFileNum
in interface DBOptionsInterface
keepLogFileNum
- the maximum number of info log files to be kept.
public long maxManifestFileSize()
DBOptionsInterface
maxManifestFileSize
in interface DBOptionsInterface
public Options setMaxManifestFileSize(long maxManifestFileSize)
DBOptionsInterface
setMaxManifestFileSize
in interface DBOptionsInterface
maxManifestFileSize
- the size limit of a manifest file.
public Options setMaxTableFilesSizeFIFO(long maxTableFilesSize)
ColumnFamilyOptionsInterface
setMaxTableFilesSizeFIFO
in interface ColumnFamilyOptionsInterface
maxTableFilesSize
- the size limit of the total sum of table files.
public long maxTableFilesSizeFIFO()
ColumnFamilyOptionsInterface
maxTableFilesSizeFIFO
in interface ColumnFamilyOptionsInterface
public int tableCacheNumshardbits()
DBOptionsInterface
tableCacheNumshardbits
in interface DBOptionsInterface
public Options setTableCacheNumshardbits(int tableCacheNumshardbits)
DBOptionsInterface
setTableCacheNumshardbits
in interface DBOptionsInterface
tableCacheNumshardbits
- the number of shards.
public long walTtlSeconds()
DBOptionsInterface
walTtlSeconds
in interface DBOptionsInterface
DBOptionsInterface.walSizeLimitMB()
public Options setWalTtlSeconds(long walTtlSeconds)
DBOptionsInterface
DBOptionsInterface.walTtlSeconds()
and DBOptionsInterface.walSizeLimitMB()
affect how archived logs
will be deleted.
setWalTtlSeconds
in interface DBOptionsInterface
walTtlSeconds
- the ttl seconds.
DBOptionsInterface.setWalSizeLimitMB(long)
public long walSizeLimitMB()
DBOptionsInterface
DBOptionsInterface.walTtlSeconds()
and walSizeLimitMB()
affect how archived logs
will be deleted.
walSizeLimitMB
in interface DBOptionsInterface
DBOptionsInterface.walSizeLimitMB()
public Options setWalSizeLimitMB(long sizeLimitMB)
DBOptionsInterface
setWalSizeLimitMB
in interface DBOptionsInterface
sizeLimitMB
- size limit in mega-bytes.
DBOptionsInterface.setWalSizeLimitMB(long)
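A sketch of the WAL archive policy formed by these two settings; the one-hour TTL and 1 GB cap are illustrative:

```java
// assumes: import org.rocksdb.Options;
final Options options = new Options()
    .setWalTtlSeconds(60 * 60)    // archived WALs older than 1 hour are deleted
    .setWalSizeLimitMB(1024);     // archive is trimmed once it exceeds 1 GB
```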
public long manifestPreallocationSize()
DBOptionsInterface
manifestPreallocationSize
in interface DBOptionsInterface
public Options setManifestPreallocationSize(long size)
DBOptionsInterface
setManifestPreallocationSize
in interface DBOptionsInterface
size
- the size in bytes.
public boolean allowOsBuffer()
DBOptionsInterface
allowOsBuffer
in interface DBOptionsInterface
public Options setAllowOsBuffer(boolean allowOsBuffer)
DBOptionsInterface
setAllowOsBuffer
in interface DBOptionsInterface
allowOsBuffer
- if true, then OS buffering is allowed.
public boolean allowMmapReads()
DBOptionsInterface
allowMmapReads
in interface DBOptionsInterface
public Options setAllowMmapReads(boolean allowMmapReads)
DBOptionsInterface
setAllowMmapReads
in interface DBOptionsInterface
allowMmapReads
- true if mmap reads are allowed.
public boolean allowMmapWrites()
DBOptionsInterface
allowMmapWrites
in interface DBOptionsInterface
public Options setAllowMmapWrites(boolean allowMmapWrites)
DBOptionsInterface
setAllowMmapWrites
in interface DBOptionsInterface
allowMmapWrites
- true if mmap writes are allowed.
public boolean isFdCloseOnExec()
DBOptionsInterface
isFdCloseOnExec
in interface DBOptionsInterface
public Options setIsFdCloseOnExec(boolean isFdCloseOnExec)
DBOptionsInterface
setIsFdCloseOnExec
in interface DBOptionsInterface
isFdCloseOnExec
- true if child process inheriting open
files is disabled.
public int statsDumpPeriodSec()
DBOptionsInterface
statsDumpPeriodSec
in interface DBOptionsInterface
public Options setStatsDumpPeriodSec(int statsDumpPeriodSec)
DBOptionsInterface
setStatsDumpPeriodSec
in interface DBOptionsInterface
statsDumpPeriodSec
- time interval in seconds.
public boolean adviseRandomOnOpen()
DBOptionsInterface
adviseRandomOnOpen
in interface DBOptionsInterface
public Options setAdviseRandomOnOpen(boolean adviseRandomOnOpen)
DBOptionsInterface
setAdviseRandomOnOpen
in interface DBOptionsInterface
adviseRandomOnOpen
- true if hinting random access is on.
public boolean useAdaptiveMutex()
DBOptionsInterface
useAdaptiveMutex
in interface DBOptionsInterface
public Options setUseAdaptiveMutex(boolean useAdaptiveMutex)
DBOptionsInterface
setUseAdaptiveMutex
in interface DBOptionsInterface
useAdaptiveMutex
- true if adaptive mutex is used.
public long bytesPerSync()
DBOptionsInterface
bytesPerSync
in interface DBOptionsInterface
public Options setBytesPerSync(long bytesPerSync)
DBOptionsInterface
setBytesPerSync
in interface DBOptionsInterface
bytesPerSync
- size in bytes.
public Options setMemTableConfig(MemTableConfig config)
ColumnFamilyOptionsInterface
setMemTableConfig
in interface ColumnFamilyOptionsInterface
config
- the mem-table config.
public Options setRateLimiterConfig(RateLimiterConfig config)
DBOptionsInterface
setRateLimiterConfig
in interface DBOptionsInterface
config
- rate limiter config.
public Options setLogger(Logger logger)
DBOptionsInterface
Any internal progress/error information generated by the db will be written to the Logger if it is non-nullptr, or to a file stored in the same directory as the DB contents if info_log is nullptr.
Default: nullptr
setLogger
in interface DBOptionsInterface
logger
- Logger
instance.
public Options setInfoLogLevel(InfoLogLevel infoLogLevel)
DBOptionsInterface
Sets the RocksDB log level. Default level is INFO
setInfoLogLevel
in interface DBOptionsInterface
infoLogLevel
- log level to set.
public InfoLogLevel infoLogLevel()
DBOptionsInterface
Returns currently set log level.
infoLogLevel
in interface DBOptionsInterface
InfoLogLevel
instance.
public java.lang.String memTableFactoryName()
ColumnFamilyOptionsInterface
memTableFactoryName
in interface ColumnFamilyOptionsInterface
ColumnFamilyOptionsInterface.setTableFormatConfig(org.rocksdb.TableFormatConfig)
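A sketch of supplying a table format through setTableFormatConfig(TableFormatConfig), detailed below; the BlockBasedTableConfig.setBlockSize call and the 16 KB value are assumptions for illustration:

```java
// assumes: import org.rocksdb.*;
final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
    .setBlockSize(16 * 1024);                 // 16 KB data blocks, illustrative
final Options options = new Options()
    .setTableFormatConfig(tableConfig);
System.out.println(options.tableFactoryName()); // e.g. "BlockBasedTable"
```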
public Options setTableFormatConfig(TableFormatConfig config)
ColumnFamilyOptionsInterface
setTableFormatConfig
in interface ColumnFamilyOptionsInterface
config
- the table format config.
public java.lang.String tableFactoryName()
tableFactoryName
in interface ColumnFamilyOptionsInterface
public Options useFixedLengthPrefixExtractor(int n)
ColumnFamilyOptionsInterface
useFixedLengthPrefixExtractor
in interface ColumnFamilyOptionsInterface
n
- use the first n bytes of a key as its prefix.
public Options useCappedPrefixExtractor(int n)
ColumnFamilyOptionsInterface
useCappedPrefixExtractor
in interface ColumnFamilyOptionsInterface
n
- use the first n bytes of a key as its prefix.
public CompressionType compressionType()
ColumnFamilyOptionsInterface
compressionType
in interface ColumnFamilyOptionsInterface
public Options setCompressionPerLevel(java.util.List<CompressionType> compressionLevels)
ColumnFamilyOptionsInterface
Different levels can have different compression policies. There are cases where most lower levels would like to use quick compression algorithms while the higher levels (which have more data) use compression algorithms that have better compression but could be slower. This array, if non-empty, should have an entry for each level of the database; these override the value specified in the previous field 'compression'.
NOTICE: If level_compaction_dynamic_level_bytes=true, compression_per_level[0] still determines L0, but other elements of the array are based on the base level (the level L0 files are merged to), and may not match the level users see from the info log for metadata. If L0 files are merged to level n, then, for i>0, compression_per_level[i] determines the compression type for level n+i-1. For example, if we have 5 levels, and we determine to merge L0 data to L4 (which means L1..L3 will be empty), then the new files going to L4 use compression type compression_per_level[1]. If L0 is now merged to L2, data going to L2 will be compressed according to compression_per_level[1], L3 using compression_per_level[2] and L4 using compression_per_level[3]. Compaction for each level can change when data grows.
Default: empty
setCompressionPerLevel
in interface ColumnFamilyOptionsInterface
compressionLevels
- list of
CompressionType
instances.
public java.util.List<CompressionType> compressionPerLevel()
ColumnFamilyOptionsInterface
Return the currently set CompressionType per level.
See: ColumnFamilyOptionsInterface.setCompressionPerLevel(java.util.List)
compressionPerLevel
in interface ColumnFamilyOptionsInterface
CompressionType
instances.
public Options setCompressionType(CompressionType compressionType)
ColumnFamilyOptionsInterface
setCompressionType
in interface ColumnFamilyOptionsInterface
compressionType
- Compression Type.
public CompactionStyle compactionStyle()
ColumnFamilyOptionsInterface
compactionStyle
in interface ColumnFamilyOptionsInterface
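A sketch tying together setCompactionStyle(CompactionStyle), detailed below, with the per-level compression discussion above; which CompressionType constants are usable depends on how the native library was built:

```java
// assumes: import java.util.Arrays; import java.util.List; import org.rocksdb.*;
final List<CompressionType> perLevel = Arrays.asList(
    CompressionType.NO_COMPRESSION,        // hot, small levels: skip compression
    CompressionType.NO_COMPRESSION,
    CompressionType.SNAPPY_COMPRESSION,    // colder, larger levels: compress
    CompressionType.SNAPPY_COMPRESSION);
final Options options = new Options()
    .setCompactionStyle(CompactionStyle.LEVEL)
    .setCompressionType(CompressionType.SNAPPY_COMPRESSION)
    .setCompressionPerLevel(perLevel);
```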
public Options setCompactionStyle(CompactionStyle compactionStyle)
ColumnFamilyOptionsInterface
setCompactionStyle
in interface ColumnFamilyOptionsInterface
compactionStyle
- Compaction style.
public int numLevels()
ColumnFamilyOptionsInterface
numLevels
in interface ColumnFamilyOptionsInterface
public Options setNumLevels(int numLevels)
ColumnFamilyOptionsInterface
setNumLevels
in interface ColumnFamilyOptionsInterface
numLevels
- the number of levels.
public int levelZeroFileNumCompactionTrigger()
ColumnFamilyOptionsInterface
levelZeroFileNumCompactionTrigger
in interface ColumnFamilyOptionsInterface
public Options setLevelZeroFileNumCompactionTrigger(int numFiles)
ColumnFamilyOptionsInterface
setLevelZeroFileNumCompactionTrigger
in interface ColumnFamilyOptionsInterface
numFiles
- the number of files in level-0 to trigger compaction.
public int levelZeroSlowdownWritesTrigger()
ColumnFamilyOptionsInterface
levelZeroSlowdownWritesTrigger
in interface ColumnFamilyOptionsInterface
public Options setLevelZeroSlowdownWritesTrigger(int numFiles)
ColumnFamilyOptionsInterface
setLevelZeroSlowdownWritesTrigger
in interface ColumnFamilyOptionsInterface
numFiles
- soft limit on number of level-0 files.
public int levelZeroStopWritesTrigger()
ColumnFamilyOptionsInterface
levelZeroStopWritesTrigger
in interface ColumnFamilyOptionsInterface
public Options setLevelZeroStopWritesTrigger(int numFiles)
ColumnFamilyOptionsInterface
setLevelZeroStopWritesTrigger
in interface ColumnFamilyOptionsInterface
numFiles
- the hard limit of the number of level-0 files.
public int maxMemCompactionLevel()
ColumnFamilyOptionsInterface
maxMemCompactionLevel
in interface ColumnFamilyOptionsInterface
public Options setMaxMemCompactionLevel(int maxMemCompactionLevel)
ColumnFamilyOptionsInterface
setMaxMemCompactionLevel
in interface ColumnFamilyOptionsInterface
maxMemCompactionLevel
- Unused.
public long targetFileSizeBase()
ColumnFamilyOptionsInterface
targetFileSizeBase
in interface ColumnFamilyOptionsInterface
ColumnFamilyOptionsInterface.targetFileSizeMultiplier()
public Options setTargetFileSizeBase(long targetFileSizeBase)
ColumnFamilyOptionsInterface
setTargetFileSizeBase
in interface ColumnFamilyOptionsInterface
targetFileSizeBase
- the target size of a level-0 file.
ColumnFamilyOptionsInterface.setTargetFileSizeMultiplier(int)
public int targetFileSizeMultiplier()
ColumnFamilyOptionsInterface
targetFileSizeMultiplier
in interface ColumnFamilyOptionsInterface
public Options setTargetFileSizeMultiplier(int multiplier)
ColumnFamilyOptionsInterface
setTargetFileSizeMultiplier
in interface ColumnFamilyOptionsInterface
multiplier
- the size ratio between a level-(L+1) file
and level-L file.
public Options setMaxBytesForLevelBase(long maxBytesForLevelBase)
ColumnFamilyOptionsInterface
setMaxBytesForLevelBase
in interface ColumnFamilyOptionsInterface
maxBytesForLevelBase
- maximum bytes for level base.
ColumnFamilyOptionsInterface.setMaxBytesForLevelMultiplier(int)
public long maxBytesForLevelBase()
ColumnFamilyOptionsInterface
maxBytesForLevelBase
in interface ColumnFamilyOptionsInterface
ColumnFamilyOptionsInterface.maxBytesForLevelMultiplier()
public Options setLevelCompactionDynamicLevelBytes(boolean enableLevelCompactionDynamicLevelBytes)
ColumnFamilyOptionsInterface
If true
, RocksDB will pick target size of each level
dynamically. We will pick a base level b >= 1. L0 will be
directly merged into level b, instead of always into level 1.
Level 1 to b-1 need to be empty. We try to pick b and its target
size so that the target size of the base level falls in the range (max_bytes_for_level_base / max_bytes_for_level_multiplier, max_bytes_for_level_base]. At the same time max_bytes_for_level_multiplier and max_bytes_for_level_multiplier_additional are still satisfied.
With this option on, from an empty DB, we make last level the base
level, which means merging L0 data into the last level, until it exceeds
max_bytes_for_level_base. And then we make the second last level to be
base level, to start to merge L0 data to second last level, with its
target size to be 1/max_bytes_for_level_multiplier
of the last level's extra size. As the data accumulates further, we move the base level to the third last level, and so on.
For example, assume max_bytes_for_level_multiplier=10
,
num_levels=6
, and max_bytes_for_level_base=10MB
.
Target sizes of level 1 to 5 starts with:
[- - - - 10MB]
with the base level being level 5. Target sizes of level 1 to 4 are not applicable because they will not be used. Until the size of level 5 grows to more than 10MB, say 11MB, we make the base target level 4 and now the targets look like:
[- - - 1.1MB 11MB]
While data are accumulated, size targets are tuned based on actual data of level 5. When level 5 has 50MB of data, the target is like:
[- - - 5MB 50MB]
Until level 5's actual size is more than 100MB, say 101MB. Now if we keep level 4 to be the base level, its target size needs to be 10.1MB, which doesn't satisfy the target size range. So now we make level 3 the base level and the target sizes of the levels look like:
[- - 1.01MB 10.1MB 101MB]
In the same way, while level 5 further grows, all levels' targets grow, like
[- - 5MB 50MB 500MB]
Until level 5 exceeds 1000MB and becomes 1001MB, we make level 2 the base level and make levels' target sizes like this:
[- 1.001MB 10.01MB 100.1MB 1001MB]
and go on...
By doing it, we give max_bytes_for_level_multiplier
a priority
against max_bytes_for_level_base
, for a more predictable LSM tree
shape. It is useful to limit worst-case space amplification.
max_bytes_for_level_multiplier_additional
is ignored with
this flag on.
Turning this feature on or off for an existing DB can cause unexpected LSM tree structure so it's not recommended.
Caution: this option is experimental
Default: false
setLevelCompactionDynamicLevelBytes
in interface ColumnFamilyOptionsInterface
enableLevelCompactionDynamicLevelBytes
- boolean value indicating
if LevelCompactionDynamicLevelBytes
shall be enabled.
public boolean levelCompactionDynamicLevelBytes()
ColumnFamilyOptionsInterface
Return if LevelCompactionDynamicLevelBytes
is enabled.
For further information see
ColumnFamilyOptionsInterface.setLevelCompactionDynamicLevelBytes(boolean)
levelCompactionDynamicLevelBytes
in interface ColumnFamilyOptionsInterface
levelCompactionDynamicLevelBytes
is enabled.
public int maxBytesForLevelMultiplier()
ColumnFamilyOptionsInterface
maxBytesForLevelMultiplier
in interface ColumnFamilyOptionsInterface
ColumnFamilyOptionsInterface.maxBytesForLevelBase()
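A sketch of the level-sizing knobs from the setLevelCompactionDynamicLevelBytes discussion above; the 256 MB base and multiplier of 10 are illustrative:

```java
// assumes: import org.rocksdb.Options;
final Options options = new Options()
    .setMaxBytesForLevelBase(256L * 1024 * 1024)  // target total size of level-1
    .setMaxBytesForLevelMultiplier(10)            // each level ~10x the previous
    .setLevelCompactionDynamicLevelBytes(true);   // let RocksDB pick level targets
```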
public Options setMaxBytesForLevelMultiplier(int multiplier)
ColumnFamilyOptionsInterface
setMaxBytesForLevelMultiplier
in interface ColumnFamilyOptionsInterface
multiplier
- the ratio between the total size of level-(L+1)
files and the total size of level-L files for all L.
ColumnFamilyOptionsInterface.setMaxBytesForLevelBase(long)
public int expandedCompactionFactor()
ColumnFamilyOptionsInterface
expandedCompactionFactor
in interface ColumnFamilyOptionsInterface
ColumnFamilyOptionsInterface.sourceCompactionFactor()
public Options setExpandedCompactionFactor(int expandedCompactionFactor)
ColumnFamilyOptionsInterface
setExpandedCompactionFactor
in interface ColumnFamilyOptionsInterface
expandedCompactionFactor
- the maximum number of bytes in all
compacted files.
ColumnFamilyOptionsInterface.setSourceCompactionFactor(int)
public int sourceCompactionFactor()
ColumnFamilyOptionsInterface
sourceCompactionFactor
in interface ColumnFamilyOptionsInterface
ColumnFamilyOptionsInterface.expandedCompactionFactor()
public Options setSourceCompactionFactor(int sourceCompactionFactor)
ColumnFamilyOptionsInterface
setSourceCompactionFactor
in interface ColumnFamilyOptionsInterface
sourceCompactionFactor
- the maximum number of bytes in all
source files to be compacted in a single compaction run.
ColumnFamilyOptionsInterface.setExpandedCompactionFactor(int)
public int maxGrandparentOverlapFactor()
ColumnFamilyOptionsInterface
maxGrandparentOverlapFactor
in interface ColumnFamilyOptionsInterface
public Options setMaxGrandparentOverlapFactor(int maxGrandparentOverlapFactor)
ColumnFamilyOptionsInterface
setMaxGrandparentOverlapFactor
in interface ColumnFamilyOptionsInterface
maxGrandparentOverlapFactor
- maximum bytes of overlaps in
"grandparent" level.public double softRateLimit()
ColumnFamilyOptionsInterface
softRateLimit
in interface ColumnFamilyOptionsInterface
public Options setSoftRateLimit(double softRateLimit)
ColumnFamilyOptionsInterface
setSoftRateLimit
in interface ColumnFamilyOptionsInterface
softRateLimit
- the soft-rate-limit of a compaction score
for put delay.
public double hardRateLimit()
ColumnFamilyOptionsInterface
hardRateLimit
in interface ColumnFamilyOptionsInterface
public Options setHardRateLimit(double hardRateLimit)
ColumnFamilyOptionsInterface
setHardRateLimit
in interface ColumnFamilyOptionsInterface
hardRateLimit
- the hard-rate-limit of a compaction score for put
delay.
public int rateLimitDelayMaxMilliseconds()
ColumnFamilyOptionsInterface
rateLimitDelayMaxMilliseconds
in interface ColumnFamilyOptionsInterface
public Options setRateLimitDelayMaxMilliseconds(int rateLimitDelayMaxMilliseconds)
ColumnFamilyOptionsInterface
setRateLimitDelayMaxMilliseconds
in interface ColumnFamilyOptionsInterface
rateLimitDelayMaxMilliseconds
- the maximum time interval a put
will be stalled.
public long arenaBlockSize()
ColumnFamilyOptionsInterface
arenaBlockSize
in interface ColumnFamilyOptionsInterface
public Options setArenaBlockSize(long arenaBlockSize)
ColumnFamilyOptionsInterface
setArenaBlockSize
in interface ColumnFamilyOptionsInterface
arenaBlockSize
- the size of an arena block.
public boolean disableAutoCompactions()
ColumnFamilyOptionsInterface
disableAutoCompactions
in interface ColumnFamilyOptionsInterface
public Options setDisableAutoCompactions(boolean disableAutoCompactions)
ColumnFamilyOptionsInterface
setDisableAutoCompactions
in interface ColumnFamilyOptionsInterface
disableAutoCompactions
- true if auto-compactions are disabled.
public boolean purgeRedundantKvsWhileFlush()
ColumnFamilyOptionsInterface
purgeRedundantKvsWhileFlush
in interface ColumnFamilyOptionsInterface
public Options setPurgeRedundantKvsWhileFlush(boolean purgeRedundantKvsWhileFlush)
ColumnFamilyOptionsInterface
setPurgeRedundantKvsWhileFlush
in interface ColumnFamilyOptionsInterface
purgeRedundantKvsWhileFlush
- true if purging keys is disabled.
public boolean verifyChecksumsInCompaction()
ColumnFamilyOptionsInterface
verifyChecksumsInCompaction
in interface ColumnFamilyOptionsInterface
public Options setVerifyChecksumsInCompaction(boolean verifyChecksumsInCompaction)
ColumnFamilyOptionsInterface
setVerifyChecksumsInCompaction
in interface ColumnFamilyOptionsInterface
verifyChecksumsInCompaction
- true if compaction verifies
checksum on every read.
public boolean filterDeletes()
ColumnFamilyOptionsInterface
filterDeletes
in interface ColumnFamilyOptionsInterface
public Options setFilterDeletes(boolean filterDeletes)
ColumnFamilyOptionsInterface
setFilterDeletes
in interface ColumnFamilyOptionsInterface
filterDeletes
- true if filter-deletes behavior is on.
public long maxSequentialSkipInIterations()
ColumnFamilyOptionsInterface
maxSequentialSkipInIterations
in interface ColumnFamilyOptionsInterface
public Options setMaxSequentialSkipInIterations(long maxSequentialSkipInIterations)
ColumnFamilyOptionsInterface
setMaxSequentialSkipInIterations
in interface ColumnFamilyOptionsInterface
maxSequentialSkipInIterations
- the number of keys could
be skipped in an iteration.
public boolean inplaceUpdateSupport()
ColumnFamilyOptionsInterface
inplaceUpdateSupport
in interface ColumnFamilyOptionsInterface
public Options setInplaceUpdateSupport(boolean inplaceUpdateSupport)
ColumnFamilyOptionsInterface
setInplaceUpdateSupport
in interface ColumnFamilyOptionsInterface
inplaceUpdateSupport
- true if thread-safe inplace updates
are allowed.
public long inplaceUpdateNumLocks()
ColumnFamilyOptionsInterface
inplaceUpdateNumLocks
in interface ColumnFamilyOptionsInterface
public Options setInplaceUpdateNumLocks(long inplaceUpdateNumLocks)
ColumnFamilyOptionsInterface
setInplaceUpdateNumLocks
in interface ColumnFamilyOptionsInterface
inplaceUpdateNumLocks
- the number of locks used for
inplace updates.
public int memtablePrefixBloomBits()
ColumnFamilyOptionsInterface
memtablePrefixBloomBits
in interface ColumnFamilyOptionsInterface
ColumnFamilyOptionsInterface.useFixedLengthPrefixExtractor(int)
public Options setMemtablePrefixBloomBits(int memtablePrefixBloomBits)
ColumnFamilyOptionsInterface
setMemtablePrefixBloomBits
in interface ColumnFamilyOptionsInterface
memtablePrefixBloomBits
- the number of bits used in the
prefix bloom filter.
public int memtablePrefixBloomProbes()
ColumnFamilyOptionsInterface
memtablePrefixBloomProbes
in interface ColumnFamilyOptionsInterface
public Options setMemtablePrefixBloomProbes(int memtablePrefixBloomProbes)
ColumnFamilyOptionsInterface
setMemtablePrefixBloomProbes
in interface ColumnFamilyOptionsInterface
memtablePrefixBloomProbes
- the number of hash probes per key.
public int bloomLocality()
ColumnFamilyOptionsInterface
bloomLocality
in interface ColumnFamilyOptionsInterface
ColumnFamilyOptionsInterface.setMemtablePrefixBloomProbes(int)
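A sketch of pairing a prefix extractor with the memtable prefix bloom filter described above; the 8-byte prefix and bloom sizing are illustrative:

```java
// assumes: import org.rocksdb.Options;
final Options options = new Options()
    .useFixedLengthPrefixExtractor(8)      // first 8 bytes of each key
    .setMemtablePrefixBloomBits(8 * 1024)  // bits in the memtable prefix bloom
    .setMemtablePrefixBloomProbes(6)       // hash probes per key
    .setBloomLocality(1);                  // keep probes cache-local
```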
public Options setBloomLocality(int bloomLocality)
ColumnFamilyOptionsInterface
setBloomLocality
in interface ColumnFamilyOptionsInterface
bloomLocality
- the level of locality of bloom-filter probes.
public long maxSuccessiveMerges()
ColumnFamilyOptionsInterface
maxSuccessiveMerges
in interface ColumnFamilyOptionsInterface
public Options setMaxSuccessiveMerges(long maxSuccessiveMerges)
ColumnFamilyOptionsInterface
setMaxSuccessiveMerges
in interface ColumnFamilyOptionsInterface
maxSuccessiveMerges
- the maximum number of successive merges.
public int minWriteBufferNumberToMerge()
ColumnFamilyOptionsInterface
minWriteBufferNumberToMerge
in interface ColumnFamilyOptionsInterface
public Options setMinWriteBufferNumberToMerge(int minWriteBufferNumberToMerge)
ColumnFamilyOptionsInterface
setMinWriteBufferNumberToMerge
in interface ColumnFamilyOptionsInterface
minWriteBufferNumberToMerge
- the minimum number of write buffers
that will be merged together.
public int minPartialMergeOperands()
ColumnFamilyOptionsInterface
minPartialMergeOperands
in interface ColumnFamilyOptionsInterface
public Options setMinPartialMergeOperands(int minPartialMergeOperands)
ColumnFamilyOptionsInterface
setMinPartialMergeOperands
in interface ColumnFamilyOptionsInterface
minPartialMergeOperands
- min partial merge operands.
public Options setOptimizeFiltersForHits(boolean optimizeFiltersForHits)
ColumnFamilyOptionsInterface
This flag specifies that the implementation should optimize the filters mainly for cases where keys are found rather than also optimize for keys missed. This would be used in cases where the application knows that there are very few misses or the performance in the case of misses is not important.
For now, this flag allows us to not store filters for the last level, i.e. the largest level which contains data of the LSM store. For keys which are hits, the filters in this level are not useful because we will search for the data anyway.
NOTE: the filters in other levels are still useful even for key hit because they tell us whether to look in that level or go to the higher level.
Default: false
setOptimizeFiltersForHits
in interface ColumnFamilyOptionsInterface
optimizeFiltersForHits
- boolean value indicating if this flag is set.
public boolean optimizeFiltersForHits()
ColumnFamilyOptionsInterface
Returns the current state of the optimize_filters_for_hits
setting.
optimizeFiltersForHits
in interface ColumnFamilyOptionsInterface
optimize_filters_for_hits
was set.
protected final void disposeInternal(long handle)
disposeInternal
in class RocksObject