public interface AdvancedColumnFamilyOptionsInterface<T extends AdvancedColumnFamilyOptionsInterface<T>>

See also: AdvancedMutableColumnFamilyOptionsInterface
Taken from include/rocksdb/advanced_options.h

Modifier and Type | Method and Description
---|---
int | bloomLocality()<br>Control locality of bloom filter probes to improve cache miss rate.
CompactionOptionsFIFO | compactionOptionsFIFO()<br>The options for FIFO compaction style.
CompactionOptionsUniversal | compactionOptionsUniversal()<br>The options needed to support Universal Style compactions.
CompactionPriority | compactionPriority()<br>Get the compaction priority if level compaction is used for all levels.
CompactionStyle | compactionStyle()<br>Compaction style for DB.
java.util.List<CompressionType> | compressionPerLevel()<br>Return the currently set CompressionType instances, one per level.
boolean | forceConsistencyChecks()<br>In debug mode, RocksDB runs consistency checks on the LSM every time the LSM changes (Flush, Compaction, AddFile).
boolean | inplaceUpdateSupport()<br>Allows thread-safe in-place updates.
boolean | levelCompactionDynamicLevelBytes()<br>Return whether LevelCompactionDynamicLevelBytes is enabled.
long | maxCompactionBytes()<br>Control the maximum size of each compaction (not guaranteed).
int | maxWriteBufferNumberToMaintain()<br>The total maximum number of write buffers to maintain in memory, including copies of buffers that have already been flushed.
int | minWriteBufferNumberToMerge()<br>The minimum number of write buffers that will be merged together before writing to storage.
int | numLevels()<br>If level-styled compaction is used, this number determines the total number of levels.
boolean | optimizeFiltersForHits()<br>Returns the current state of the optimize_filters_for_hits setting.
T | setBloomLocality(int bloomLocality)<br>Control locality of bloom filter probes to improve cache miss rate.
T | setCompactionOptionsFIFO(CompactionOptionsFIFO compactionOptionsFIFO)<br>The options for FIFO compaction style.
T | setCompactionOptionsUniversal(CompactionOptionsUniversal compactionOptionsUniversal)<br>Set the options needed to support Universal Style compactions.
T | setCompactionPriority(CompactionPriority compactionPriority)<br>If compactionStyle() == CompactionStyle.LEVEL, for each level, which files are prioritized to be picked to compact.
ColumnFamilyOptionsInterface | setCompactionStyle(CompactionStyle compactionStyle)<br>Set compaction style for DB.
T | setCompressionPerLevel(java.util.List<CompressionType> compressionLevels)<br>Different levels can have different compression policies.
T | setForceConsistencyChecks(boolean forceConsistencyChecks)<br>In debug mode, RocksDB runs consistency checks on the LSM every time the LSM changes (Flush, Compaction, AddFile).
T | setInplaceUpdateSupport(boolean inplaceUpdateSupport)<br>Allows thread-safe in-place updates.
T | setLevelCompactionDynamicLevelBytes(boolean enableLevelCompactionDynamicLevelBytes)<br>If true, RocksDB will pick the target size of each level dynamically.
T | setMaxCompactionBytes(long maxCompactionBytes)<br>Maximum size of each compaction (not guaranteed).
T | setMaxWriteBufferNumberToMaintain(int maxWriteBufferNumberToMaintain)<br>The total maximum number of write buffers to maintain in memory, including copies of buffers that have already been flushed.
T | setMinWriteBufferNumberToMerge(int minWriteBufferNumberToMerge)<br>The minimum number of write buffers that will be merged together before writing to storage.
T | setNumLevels(int numLevels)<br>Set the number of levels for this database; if level-styled compaction is used, this number determines the total number of levels.
T | setOptimizeFiltersForHits(boolean optimizeFiltersForHits)<br>Specifies that the implementation should optimize the filters mainly for cases where keys are found, rather than also optimizing for keys missed.
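The recursive generic parameter `T` lets each setter return the concrete implementing type, so calls chain fluently. A minimal sketch (assuming the rocksdbjni artifact is on the classpath; `ColumnFamilyOptions` is the usual implementing class, and the values chosen are illustrative):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class FluentOptionsSketch {
  public static void main(String[] args) {
    RocksDB.loadLibrary(); // load the native library once per process
    // Because each setter returns the concrete options type, calls chain.
    try (final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
             .setNumLevels(5)
             .setMinWriteBufferNumberToMerge(2)
             .setOptimizeFiltersForHits(true)) {
      assert cfOpts.numLevels() == 5;
      assert cfOpts.optimizeFiltersForHits();
    }
  }
}
```

Note that the table above shows setCompactionStyle returning ColumnFamilyOptionsInterface rather than T, so it is safest to call it as a standalone statement rather than in the middle of a chain.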
T setMinWriteBufferNumberToMerge(int minWriteBufferNumberToMerge)

The minimum number of write buffers that will be merged together before writing to storage.

Parameters:
    minWriteBufferNumberToMerge - the minimum number of write buffers that will be merged together.

int minWriteBufferNumberToMerge()

The minimum number of write buffers that will be merged together before writing to storage.
T setMaxWriteBufferNumberToMaintain(int maxWriteBufferNumberToMaintain)

The total maximum number of write buffers to maintain in memory, including copies of buffers that have already been flushed. Unlike AdvancedMutableColumnFamilyOptionsInterface.maxWriteBufferNumber(), this parameter does not affect flushing. It controls the minimum amount of write history that will be available in memory for conflict checking when Transactions are used.

When using an OptimisticTransactionDB: if this value is too low, some transactions may fail at commit time due to not being able to determine whether there were any write conflicts.

When using a TransactionDB: if Transaction::SetSnapshot is used, TransactionDB will read either in-memory write buffers or SST files to do write-conflict checking. Increasing this value can reduce the number of reads to SST files done for conflict detection.

Setting this value to 0 will cause write buffers to be freed immediately after they are flushed. If this value is set to -1, AdvancedMutableColumnFamilyOptionsInterface.maxWriteBufferNumber() will be used.

Default: if using a TransactionDB/OptimisticTransactionDB, the default value will be set to the value of AdvancedMutableColumnFamilyOptionsInterface.maxWriteBufferNumber() if it is not explicitly set by the user. Otherwise, the default is 0.

Parameters:
    maxWriteBufferNumberToMaintain - the maximum number of write buffers to maintain.

int maxWriteBufferNumberToMaintain()

The total maximum number of write buffers to maintain in memory, including copies of buffers that have already been flushed.
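A sketch of how this setting combines with the surrounding write-buffer options (the buffer counts are illustrative, not tuning recommendations; assumes rocksdbjni on the classpath):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class WriteHistorySketch {
  public static void main(String[] args) {
    RocksDB.loadLibrary();
    try (final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
             .setMaxWriteBufferNumber(4)
             // Retain up to 4 buffers (including already-flushed copies)
             // as in-memory write history for transaction conflict checks.
             .setMaxWriteBufferNumberToMaintain(4)
             // Merge at least 2 write buffers before flushing to storage.
             .setMinWriteBufferNumberToMerge(2)) {
      assert cfOpts.maxWriteBufferNumberToMaintain() == 4;
    }
  }
}
```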
T setInplaceUpdateSupport(boolean inplaceUpdateSupport)

Allows thread-safe in-place updates.

Parameters:
    inplaceUpdateSupport - true if thread-safe in-place updates are allowed.

boolean inplaceUpdateSupport()

Allows thread-safe in-place updates.
T setBloomLocality(int bloomLocality)

Control locality of bloom filter probes to improve cache miss rate.

Parameters:
    bloomLocality - the level of locality of bloom-filter probes.

int bloomLocality()

Control locality of bloom filter probes to improve cache miss rate.

See also: setBloomLocality(int)
T setCompressionPerLevel(java.util.List<CompressionType> compressionLevels)

Different levels can have different compression policies. There are cases where most lower levels would like to use quick compression algorithms while the higher levels (which have more data) use compression algorithms that have better compression but could be slower. This array, if non-empty, should have an entry for each level of the database; these override the value specified in the previous field 'compression'.

NOTICE: If level_compaction_dynamic_level_bytes=true, compression_per_level[0] still determines L0, but the other elements of the array are based on the base level (the level L0 files are merged to), and may not match the level users see in the info log for metadata. If L0 files are merged to level n, then, for i>0, compression_per_level[i] determines the compression type for level n+i-1.

For example, if we have 5 levels and we decide to merge L0 data to L4 (which means L1..L3 will be empty), then the new files going to L4 use compression type compression_per_level[1]. If later L0 is merged to L2, data going to L2 will be compressed according to compression_per_level[1], L3 using compression_per_level[2] and L4 using compression_per_level[3]. The compression type for each level can change as data grows.

Default: empty

Parameters:
    compressionLevels - list of CompressionType instances, one per level.

java.util.List<CompressionType> compressionPerLevel()

Return the currently set CompressionType instances, one per level.

T setNumLevels(int numLevels)

Set the number of levels for this database. If level-styled compaction is used, then this number determines the total number of levels.

Parameters:
    numLevels - the number of levels.

int numLevels()

If level-styled compaction is used, then this number determines the total number of levels.
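A sketch of a per-level compression policy (the level count and codec choices are illustrative; LZ4 and ZSTD are only available if the native library was built with them):

```java
import java.util.Arrays;

import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompressionType;
import org.rocksdb.RocksDB;

public class CompressionPerLevelSketch {
  public static void main(String[] args) {
    RocksDB.loadLibrary();
    try (final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions().setNumLevels(4)) {
      // One entry per level: keep the small, hot levels uncompressed for
      // speed, and compress the larger, colder levels harder.
      cfOpts.setCompressionPerLevel(Arrays.asList(
          CompressionType.NO_COMPRESSION,   // L0
          CompressionType.NO_COMPRESSION,   // L1
          CompressionType.LZ4_COMPRESSION,  // L2
          CompressionType.ZSTD_COMPRESSION  // L3
      ));
      assert cfOpts.compressionPerLevel().size() == 4;
    }
  }
}
```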
@Experimental(value="Turning this feature on or off for an existing DB can cause unexpected LSM tree structure so it's not recommended")
T setLevelCompactionDynamicLevelBytes(boolean enableLevelCompactionDynamicLevelBytes)

If true, RocksDB will pick the target size of each level dynamically. We will pick a base level b >= 1. L0 will be directly merged into level b, instead of always into level 1. Levels 1 to b-1 need to be empty. We try to pick b and its target size so that the base level's target size is in the range (max_bytes_for_level_base / max_bytes_for_level_multiplier, max_bytes_for_level_base], while max_bytes_for_level_multiplier and max_bytes_for_level_multiplier_additional are still satisfied.

With this option on, from an empty DB, we make the last level the base level, which means merging L0 data into the last level, until it exceeds max_bytes_for_level_base. Then we make the second-to-last level the base level, and start merging L0 data into it, with its target size being 1/max_bytes_for_level_multiplier of the last level's extra size. As the data accumulates further, we move the base level to the third-to-last level, and so on.

Example: assume max_bytes_for_level_multiplier=10, num_levels=6, and max_bytes_for_level_base=10MB. The target sizes of levels 1 to 5 start as:

[- - - - 10MB]

with the base level being level 5. Target sizes of levels 1 to 4 are not applicable because they will not be used. Once the size of level 5 grows past 10MB, say to 11MB, we make level 4 the base level and the targets become:

[- - - 1.1MB 11MB]

While data is accumulated, size targets are tuned based on the actual data in level 5. When level 5 has 50MB of data, the targets are:

[- - - 5MB 50MB]

Once level 5's actual size exceeds 100MB, say reaching 101MB, keeping level 4 as the base level would require its target size to be 10.1MB, which doesn't satisfy the target size range. So now we make level 3 the base level and the target sizes become:

[- - 1.01MB 10.1MB 101MB]

In the same way, as level 5 grows further, all levels' targets grow, for example:

[- - 5MB 50MB 500MB]

Once level 5 exceeds 1000MB, reaching 1001MB, we make level 2 the base level and the target sizes become:

[- 1.001MB 10.01MB 100.1MB 1001MB]

and so on.

By doing this, we give max_bytes_for_level_multiplier priority over max_bytes_for_level_base, for a more predictable LSM tree shape. It is useful to limit worst-case space amplification. max_bytes_for_level_multiplier_additional is ignored with this flag on.

Turning this feature on or off for an existing DB can cause an unexpected LSM tree structure, so it is not recommended.

Caution: this option is experimental.

Default: false

Parameters:
    enableLevelCompactionDynamicLevelBytes - boolean value indicating if LevelCompactionDynamicLevelBytes shall be enabled.

@Experimental(value="Caution: this option is experimental")
boolean levelCompactionDynamicLevelBytes()

Return whether LevelCompactionDynamicLevelBytes is enabled. For further information see setLevelCompactionDynamicLevelBytes(boolean).

T setMaxCompactionBytes(long maxCompactionBytes)

Maximum size of each compaction (not guaranteed).

Parameters:
    maxCompactionBytes - the compaction size limit.

long maxCompactionBytes()

Control the maximum size of each compaction (not guaranteed).
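The walkthrough above maps onto the Java API roughly as follows (a sketch intended for a freshly created DB; the sizes mirror the worked example and are not tuning advice):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class DynamicLevelBytesSketch {
  public static void main(String[] args) {
    RocksDB.loadLibrary();
    try (final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
             // Experimental: only enable on a new DB; toggling it on an
             // existing DB can cause an unexpected LSM tree structure.
             .setLevelCompactionDynamicLevelBytes(true)
             .setNumLevels(6)
             .setMaxBytesForLevelBase(10L * 1024 * 1024) // 10MB, as in the example
             .setMaxBytesForLevelMultiplier(10)) {
      assert cfOpts.levelCompactionDynamicLevelBytes();
    }
  }
}
```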
ColumnFamilyOptionsInterface setCompactionStyle(CompactionStyle compactionStyle)

Set compaction style for DB.

Parameters:
    compactionStyle - Compaction style.

CompactionStyle compactionStyle()

Compaction style for DB.
T setCompactionPriority(CompactionPriority compactionPriority)

If compactionStyle() == CompactionStyle.LEVEL, for each level, which files are prioritized to be picked to compact.

Default: CompactionPriority.ByCompensatedSize

Parameters:
    compactionPriority - The compaction priority.

CompactionPriority compactionPriority()

Get the compaction priority if level compaction is used for all levels.
T setCompactionOptionsUniversal(CompactionOptionsUniversal compactionOptionsUniversal)

Set the options needed to support Universal Style compactions.

Parameters:
    compactionOptionsUniversal - The Universal Style compaction options.

CompactionOptionsUniversal compactionOptionsUniversal()

The options needed to support Universal Style compactions.

T setCompactionOptionsFIFO(CompactionOptionsFIFO compactionOptionsFIFO)

Set the options for FIFO compaction style.

Parameters:
    compactionOptionsFIFO - The FIFO compaction options.

CompactionOptionsFIFO compactionOptionsFIFO()

The options for FIFO compaction style.
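A sketch pairing a compaction style with its matching options object (the 1GB FIFO size cap is an illustrative value; assumes rocksdbjni on the classpath):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompactionOptionsFIFO;
import org.rocksdb.CompactionStyle;
import org.rocksdb.RocksDB;

public class FifoCompactionSketch {
  public static void main(String[] args) {
    RocksDB.loadLibrary();
    try (final CompactionOptionsFIFO fifoOpts = new CompactionOptionsFIFO()
             // Illustrative cap: oldest files are dropped once the column
             // family's total SST size exceeds 1GB.
             .setMaxTableFilesSize(1024L * 1024 * 1024);
         final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()) {
      // setCompactionStyle returns ColumnFamilyOptionsInterface, so keep
      // it as a standalone call rather than mid-chain.
      cfOpts.setCompactionStyle(CompactionStyle.FIFO);
      cfOpts.setCompactionOptionsFIFO(fifoOpts);
      assert cfOpts.compactionStyle() == CompactionStyle.FIFO;
    }
  }
}
```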
T setOptimizeFiltersForHits(boolean optimizeFiltersForHits)

This flag specifies that the implementation should optimize the filters mainly for cases where keys are found, rather than also optimizing for keys missed. This would be used in cases where the application knows that there are very few misses, or where the performance in the case of misses is not important.

For now, this flag allows us to not store filters for the last level, i.e. the largest level which contains data of the LSM store. For keys which are hits, the filters in this level are not useful because we will search for the data anyway.

NOTE: the filters in other levels are still useful even for key hits, because they tell us whether to look in that level or go to the higher level.

Default: false

Parameters:
    optimizeFiltersForHits - boolean value indicating if this flag is set.

boolean optimizeFiltersForHits()

Returns the current state of the optimize_filters_for_hits setting, i.e. whether optimize_filters_for_hits was set.

T setForceConsistencyChecks(boolean forceConsistencyChecks)

In debug mode, RocksDB runs consistency checks on the LSM every time the LSM changes (Flush, Compaction, AddFile).

Parameters:
    forceConsistencyChecks - true to force consistency checks.

boolean forceConsistencyChecks()

In debug mode, RocksDB runs consistency checks on the LSM every time the LSM changes (Flush, Compaction, AddFile).
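A sketch combining the two flags above, for a workload where point lookups almost always hit (values are illustrative; assumes rocksdbjni on the classpath):

```java
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.RocksDB;

public class FilterAndChecksSketch {
  public static void main(String[] args) {
    RocksDB.loadLibrary();
    try (final ColumnFamilyOptions cfOpts = new ColumnFamilyOptions()
             // Do not store bloom filters for the last (largest) level;
             // filters there cannot save a read when lookups usually hit.
             .setOptimizeFiltersForHits(true)
             // Debug-mode consistency checks on every LSM change.
             .setForceConsistencyChecks(true)) {
      assert cfOpts.optimizeFiltersForHits();
      assert cfOpts.forceConsistencyChecks();
    }
  }
}
```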