public final class ChronicleSetBuilder<K> extends Object implements ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>

Type Parameters:
K - element type of the sets, created by this builder

ChronicleSetBuilder manages the whole set of ChronicleSet configurations and can be used as a classic builder and/or factory. ChronicleSetBuilder is mutable; see the note in the ChronicleHashBuilder interface documentation.

See Also:
ChronicleSet, ChronicleMapBuilder
| Modifier and Type | Method and Description |
|---|---|
| ChronicleSetBuilder<K> | actualChunkSize(int actualChunkSize): Configures the size in bytes of the allocation unit of hash container instances created by this builder. |
| ChronicleSetBuilder<K> | actualChunksPerSegmentTier(long actualChunksPerSegmentTier): Configures the actual number of chunks that will be reserved for any single segment tier of the hash containers created by this builder. |
| ChronicleSetBuilder<K> | actualSegments(int actualSegments): Configures the actual number of segments in the hash containers created by this builder. |
| ChronicleSetBuilder<K> | aligned64BitMemoryOperationsAtomic(boolean aligned64BitMemoryOperationsAtomic): Specifies whether, on the current combination of platform, OS and JVM, aligned 8-byte reads and writes are atomic. |
| ChronicleSetBuilder<K> | allowSegmentTiering(boolean allowSegmentTiering): In addition to maxBloatFactor(1.0), which does not guarantee that segments won't tier (due to bad hash distribution or natural variance), configuring allowSegmentTiering(false) makes Chronicle Hashes created by this builder throw IllegalStateException immediately when some segment overflows. |
| ChronicleSetBuilder<K> | averageKey(K averageKey): Configures the average number of bytes taken by the serialized form of keys put into hash containers created by this builder, by serializing the given averageKey using the configured key marshallers. |
| ChronicleSetBuilder<K> | averageKeySize(double averageKeySize): Configures the average number of bytes taken by the serialized form of keys put into hash containers created by this builder. |
| ChronicleSetBuilder<K> | checksumEntries(boolean checksumEntries): Configures whether hash containers created by this builder should compute and store entry checksums. |
| ChronicleSetBuilder<K> | clone(): Clones this builder. |
| ChronicleSetBuilder<K> | constantKeySizeBySample(K sampleKey): Configures the constant number of bytes taken by the serialized form of keys put into hash containers created by this builder. |
| ChronicleSet<K> | create(): Creates a new hash container from this builder, storing its data in off-heap memory, not mapped to any file. |
| ChronicleSet<K> | createOrRecoverPersistedTo(File file): Recovers and opens the hash container persisted to the specified file, or creates a new one from this builder if the file doesn't exist yet, and maps its off-heap memory to the file. |
| ChronicleSet<K> | createOrRecoverPersistedTo(File file, boolean sameLibraryVersion): Recovers and opens the hash container persisted to the specified file, or creates a new one from this builder if the file doesn't exist yet, and maps its off-heap memory to the file. |
| ChronicleSet<K> | createOrRecoverPersistedTo(File file, boolean sameLibraryVersion, ChronicleHashCorruption.Listener corruptionListener): Recovers and opens the hash container persisted to the specified file, or creates a new one from this builder if the file doesn't exist yet, and maps its off-heap memory to the file. |
| ChronicleSet<K> | createPersistedTo(File file): Opens a hash container residing in the specified file, or creates a new one from this builder if the file doesn't yet exist, and maps its off-heap memory to the file. |
| ChronicleSetBuilder<K> | entries(long entries): Configures the target number of entries that is going to be inserted into the hash containers created by this builder. |
| ChronicleSetBuilder<K> | entriesPerSegment(long entriesPerSegment): Configures the actual maximum number of entries that can be inserted into any single segment of the hash containers created by this builder. |
| ChronicleSetBuilder<K> | entryOperations(SetEntryOperations<K,?> entryOperations): Injects your SPI code around the basic ChronicleSet operations with entries: removing entries and inserting new entries. |
| boolean | equals(Object o) |
| int | hashCode() |
| <M extends BytesReader<K> & BytesWriter<? super K>> ChronicleSetBuilder<K> | keyMarshaller(M marshaller): Shortcut for keyMarshallers(marshaller, marshaller). |
| <M extends SizedReader<K> & SizedWriter<? super K>> ChronicleSetBuilder<K> | keyMarshaller(M sizedMarshaller): Shortcut for keyMarshallers(sizedMarshaller, sizedMarshaller). |
| ChronicleSetBuilder<K> | keyMarshallers(BytesReader<K> keyReader, BytesWriter<? super K> keyWriter): Configures the marshallers used to serialize/deserialize keys to/from off-heap memory in hash containers created by this builder. |
| ChronicleSetBuilder<K> | keyMarshallers(SizedReader<K> keyReader, SizedWriter<? super K> keyWriter): Configures the marshallers used to serialize/deserialize keys to/from off-heap memory in hash containers created by this builder. |
| ChronicleSetBuilder<K> | keyReaderAndDataAccess(SizedReader<K> keyReader, DataAccess<K> keyDataAccess): Configures the DataAccess and SizedReader used to serialize and deserialize keys to and from off-heap memory in hash containers created by this builder. |
| ChronicleSetBuilder<K> | keySizeMarshaller(SizeMarshaller keySizeMarshaller): Configures the marshaller used to serialize actual key sizes to off-heap memory in hash containers created by this builder. |
| ChronicleSetBuilder<K> | maxBloatFactor(double maxBloatFactor): Configures the maximum number of times the hash containers created by this builder are allowed to grow in size beyond the configured target number of entries. |
| ChronicleSetBuilder<K> | maxChunksPerEntry(int maxChunksPerEntry): Configures how many chunks a single entry inserted into ChronicleHashes created by this builder can take. |
| ChronicleSetBuilder<K> | minSegments(int minSegments): Sets the minimum number of segments in hash containers constructed by this builder. |
| ChronicleSetBuilder<K> | name(String name): Specifies the name which will be given to a ChronicleHash created by this builder. |
| ChronicleSetBuilder<K> | nonTieredSegmentsPercentile(double nonTieredSegmentsPercentile): Configures the probabilistic fraction of segments which shouldn't become tiered, if the Chronicle Hash size is ChronicleHashBuilder.entries(long), assuming the hash code distribution of the inserted keys is good. |
| static <K> ChronicleSetBuilder<K> | of(Class<K> keyClass): Returns a new ChronicleSetBuilder instance which is able to create sets with the specified key class. |
| ChronicleSet<K> | recoverPersistedTo(File file, boolean sameBuilderConfigAndLibraryVersion): Recovers and opens the hash container persisted to the specified file. |
| ChronicleSet<K> | recoverPersistedTo(File file, boolean sameBuilderConfigAndLibraryVersion, ChronicleHashCorruption.Listener corruptionListener): Recovers and opens the hash container persisted to the specified file. |
| ChronicleSetBuilder<K> | setPreShutdownAction(Runnable preShutdownAction): A ChronicleHash created using this builder is closed using a JVM shutdown hook; this method lets you perform an action before it is closed. |
| ChronicleSetBuilder<K> | skipCloseOnExitHook(boolean skipCloseOnExitHook): Skips the default automatic close configuration on the ChronicleHash created by this builder. |
| String | toString() |
public static <K> ChronicleSetBuilder<K> of(Class<K> keyClass)
Returns a new ChronicleSetBuilder instance which is able to create sets with the specified key class.

Type Parameters:
K - key type of the sets, created by the returned builder
Parameters:
keyClass - class object used to infer the key type and discover its properties via reflection

public ChronicleSetBuilder<K> clone()

Description copied from interface: ChronicleHashBuilder
Clones this builder. ChronicleHashBuilders are mutable and changed on each configuration method call. Original and cloned builders are independent.

Specified by:
clone in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Overrides:
clone in class Object
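For illustration, a minimal sketch of the factory/builder usage; the key class, sizes and variable names below are arbitrary choices, not defaults:

```java
// One builder configured once...
ChronicleSetBuilder<String> base = ChronicleSetBuilder.of(String.class)
    .entries(10_000)
    .averageKeySize(16);

// ...and cloned into an independent copy: configuring the clone
// does not affect the original builder.
ChronicleSetBuilder<String> larger = base.clone().entries(1_000_000);

ChronicleSet<String> small = base.create();
ChronicleSet<String> big = larger.create();
```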
public ChronicleSetBuilder<K> name(String name)
Description copied from interface: ChronicleHashBuilder
Specify the name which will be given to a ChronicleHash, created by this builder. This name is used when logging errors and warnings inside the Chronicle Map library itself, so having the concrete ChronicleHash name in logs may help to debug.

name() is a JVM-level configuration: it is not stored in the persistence file (or, to say it another way, it is not part of the Chronicle Map data store specification) and has to be configured explicitly for each created on-heap ChronicleHash instance, even if it is a view of an existing Chronicle Map data store. On the other hand, name() could be different for different views of the same Chronicle Map data store.

Specified by:
name in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
name - the name for a ChronicleHash, created by this builder
See Also:
ChronicleHash.name()
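A short sketch; the name "user-ids" is an arbitrary example and, as described above, only affects log messages, not the persisted data:

```java
ChronicleSet<Long> userIds = ChronicleSetBuilder.of(Long.class)
    .name("user-ids") // shows up in the library's own log output
    .entries(1_000_000)
    .create();
```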
public ChronicleSetBuilder<K> actualSegments(int actualSegments)
Description copied from interface: ChronicleHashBuilder
Configures the actual number of segments in the hash containers, created by this builder, overriding the number of segments that would otherwise be derived from the ChronicleHashBuilder.entries(long) call.

This is a low-level configuration. The configured number is used as-is, without anything like rounding up to the closest power of 2.

Specified by:
actualSegments in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
actualSegments - the actual number of segments in hash containers, created by this builder
See Also:
ChronicleHashBuilder.minSegments(int), ChronicleHashBuilder.entriesPerSegment(long)
public ChronicleSetBuilder<K> minSegments(int minSegments)
Description copied from interface: ChronicleHashBuilder
Sets the minimum number of segments in hash containers, constructed by this builder. This plays a role similar to the concurrencyLevel of ConcurrentHashMap.

Specified by:
minSegments in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
minSegments - the minimum number of segments in containers, constructed by this builder

public ChronicleSetBuilder<K> entriesPerSegment(long entriesPerSegment)
Description copied from interface: ChronicleHashBuilder
Configures the actual maximum number of entries that could be inserted into any single segment of the hash containers, created by this builder. Configuring this together with ChronicleHashBuilder.actualSegments(int) is a lower-level alternative to a single ChronicleHashBuilder.entries(long) configuration.

This is a low-level configuration.

Specified by:
entriesPerSegment in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
entriesPerSegment - the actual maximum number of entries per segment in the hash containers, created by this builder
See Also:
ChronicleHashBuilder.entries(long), ChronicleHashBuilder.actualSegments(int)
public ChronicleSetBuilder<K> actualChunksPerSegmentTier(long actualChunksPerSegmentTier)
Description copied from interface: ChronicleHashBuilder
Configures the actual number of chunks that will be reserved for any single segment tier of the hash containers, created by this builder. This is a lower-level version of ChronicleHashBuilder.entriesPerSegment(long). Makes sense only if ChronicleHashBuilder.actualChunkSize(int), ChronicleHashBuilder.actualSegments(int) and ChronicleHashBuilder.entriesPerSegment(long) are also configured manually.

Specified by:
actualChunksPerSegmentTier in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
actualChunksPerSegmentTier - the actual number of chunks, reserved per segment tier in the hash containers, created by this builder

public ChronicleSetBuilder<K> averageKeySize(double averageKeySize)
Configures the average number of bytes taken by the serialized form of keys put into hash containers, created by this builder. However, in many cases ChronicleHashBuilder.averageKey(Object) might be easier to use and more reliable. If the key size is always the same, call the ChronicleHashBuilder.constantKeySizeBySample(Object) method instead of this one.

The ChronicleHashBuilder implementation heuristically chooses the actual chunk size based on this configuration, which, however, might result in quite high internal fragmentation, i.e. losses because only an integral number of chunks could be allocated for an entry. If you want to avoid this, you should manually configure the actual chunk size in addition to this average key size configuration, which is anyway needed.

If the key is a boxed primitive type, a value interface or a Byteable subclass, i.e. if the key size is known statically, it is automatically accounted for and shouldn't be specified by the user.

Calling this method clears any previous ChronicleHashBuilder.constantKeySizeBySample(Object) and ChronicleHashBuilder.averageKey(Object) configurations.

Example: if keys in your set(s) are English words in String form, and the average English word length is 5.1, configure an average key size of 6:

```java
ChronicleSet<String> uniqueWords = ChronicleSetBuilder.of(String.class)
    .entries(50000)
    .averageKeySize(6)
    .create();
```

(Note that 6 is chosen as the average key size in bytes even though strings in Java are UTF-16 encoded (and each character takes 2 bytes on-heap), because the default off-heap String encoding is UTF-8 in ChronicleSet.)

Specified by:
averageKeySize in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
averageKeySize - the average size in bytes of the key
See Also:
constantKeySizeBySample(Object), actualChunkSize(int)
public ChronicleSetBuilder<K> averageKey(K averageKey)
Description copied from interface: ChronicleHashBuilder
Configures the average number of bytes taken by the serialized form of keys put into hash containers, created by this builder, by serializing the given averageKey using the configured key marshallers.

In some cases, ChronicleHashBuilder.averageKeySize(double) might be easier to use than constructing the "average key". If the key size is always the same, call the ChronicleHashBuilder.constantKeySizeBySample(Object) method instead of this one.

The ChronicleHashBuilder implementation heuristically chooses the actual chunk size based on this configuration, which, however, might result in quite high internal fragmentation, i.e. losses because only an integral number of chunks could be allocated for an entry. If you want to avoid this, you should manually configure the actual chunk size in addition to this average key configuration, which is anyway needed.

If the key is a boxed primitive type or a Byteable subclass, i.e. if the key size is known statically, it is automatically accounted for and shouldn't be specified by the user.

Calling this method clears any previous ChronicleHashBuilder.constantKeySizeBySample(Object) and ChronicleHashBuilder.averageKeySize(double) configurations.

Specified by:
averageKey in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
averageKey - the average (by footprint in serialized form) key, which is going to be put into the hash containers, created by this builder
See Also:
ChronicleHashBuilder.averageKeySize(double), ChronicleHashBuilder.constantKeySizeBySample(Object), ChronicleHashBuilder.actualChunkSize(int)
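A minimal sketch: the sample email address below is an arbitrary "typical" key, whose serialized size is measured by the builder as described above:

```java
ChronicleSet<String> emails = ChronicleSetBuilder.of(String.class)
    .entries(100_000)
    .averageKey("jane.doe@example.com") // serialized with the configured key marshallers
    .create();
```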
public ChronicleSetBuilder<K> constantKeySizeBySample(K sampleKey)
Configures the constant number of bytes taken by the serialized form of keys put into hash containers, created by this builder. After the given sampleKey is provided, all keys should take the same number of bytes in serialized form as this sample object.

If keys are of a boxed primitive type or a Byteable subclass, i.e. if the key size is known statically, it is automatically accounted for and this method shouldn't be called.

If the key size varies, the ChronicleHashBuilder.averageKeySize(double) method should be called instead of this one.

Calling this method clears any previous ChronicleHashBuilder.averageKey(Object) and ChronicleHashBuilder.averageKeySize(double) configurations.

For example, if your keys are Git commit hashes:

```java
Set<byte[]> gitCommitsOfInterest = ChronicleSetBuilder.of(byte[].class)
    .constantKeySizeBySample(new byte[20])
    .create();
```

Specified by:
constantKeySizeBySample in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
sampleKey - the sample key
See Also:
ChronicleHashBuilder.averageKeySize(double)
public ChronicleSetBuilder<K> actualChunkSize(int actualChunkSize)
Description copied from interface: ChronicleHashBuilder
Configures the size in bytes of the allocation unit of hash container instances, created by this builder.

ChronicleMap and ChronicleSet store their data off-heap, so it is required to serialize keys (and values, in the ChronicleMap case), unless they are direct Byteable instances. Serialized key bytes (+ serialized value bytes, in the ChronicleMap case) + some metadata bytes comprise the "entry space", which a ChronicleMap or ChronicleSet should allocate. So the chunk size is the minimum allocation portion in the hash containers, created by this builder. E.g. if the chunk size is 100, the created container could only allocate 100, 200, 300... bytes for an entry. If, say, 150 bytes of entry space are required by an entry, 200 bytes will be allocated: 150 used and 50 wasted. This is called internal fragmentation.

To minimize memory overuse and improve speed, you should pay decent attention to this configuration. Alternatively, you can just trust the heuristics and not configure the chunk size.

Specify the chunk size so that most entries would take from 5 up to several dozen chunks. However, remember that operations with entries that span several chunks are a bit slower than with entries which take a single chunk. In particular, avoid entries taking more than 64 chunks.

Example: values in your ChronicleMap are adjacency lists of some social graph, where nodes are represented as long ids and adjacency lists are serialized in an efficient manner, for example as long[] arrays. The typical number of connections is 100-300, the maximum is 3000. In this case a chunk size of 30 * (8 bytes for each id) = 240 bytes would be a good choice:

```java
Map<Long, long[]> socialGraph = ChronicleMapBuilder
    .of(Long.class, long[].class)
    .entries(1_000_000_000L)
    .averageValueSize(150 * 8) // 150 is average adjacency list size
    .actualChunkSize(30 * 8)   // average 5-6 chunks per entry
    .create();
```

This is a low-level configuration. The configured number of bytes is used as-is, without anything like rounding up to a multiple of 8 or 16, or any other adjustment.

Specified by:
actualChunkSize in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
actualChunkSize - the "chunk size" in bytes
See Also:
ChronicleHashBuilder.entries(long), ChronicleHashBuilder.maxChunksPerEntry(int)
public ChronicleSetBuilder<K> maxChunksPerEntry(int maxChunksPerEntry)
Description copied from interface: ChronicleHashBuilder
Configures how many chunks a single entry, inserted into ChronicleHashes created by this builder, could take. If you try to insert a larger entry, an IllegalStateException is thrown. This is useful as a self-check that you configured the chunk size right and that your keys (and values, in the ChronicleMap case) take the expected number of bytes. For example, if ChronicleHashBuilder.constantKeySizeBySample(Object) is configured, or the key size is statically known to be constant (boxed primitives, data value generated implementations, Byteables, etc.), and the same holds for value objects in the ChronicleMap case, max chunks per entry is configured to 1, to ensure keys and values are actually constantly-sized.

Specified by:
maxChunksPerEntry in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
maxChunksPerEntry - how many chunks a single entry could span at most
See Also:
ChronicleHashBuilder.actualChunkSize(int)
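An illustrative sketch of the self-check pattern described above (SHA-1 hashes are 20 bytes; the names and sizes are arbitrary):

```java
ChronicleSet<byte[]> sha1Hashes = ChronicleSetBuilder.of(byte[].class)
    .entries(1_000_000)
    .constantKeySizeBySample(new byte[20]) // all keys are exactly 20 bytes
    .maxChunksPerEntry(1)                  // fail fast if an entry unexpectedly needs more space
    .create();
```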
public ChronicleSetBuilder<K> entries(long entries)
Description copied from interface: ChronicleHashBuilder
Configures the target number of entries that is going to be inserted into the hash containers, created by this builder. If ChronicleHashBuilder.maxBloatFactor(double) is configured to 1.0 (and this is the default), this number of entries is also the maximum. If you try to insert more entries than the configured maxBloatFactor multiplied by the given number of entries, an IllegalStateException might be thrown.

This configuration should represent the expected maximum number of entries in a stable state; maxBloatFactor is the maximum bloat up coefficient during exceptional bursts.

To be more precise: try to configure the entries so that the created hash container is going to serve about 99% of requests while being less than or equal to this number of entries in size.

You shouldn't put an additional margin over the actual target number of entries. This bad practice was popularized by the HashMap(int) and HashSet(int) constructors, which accept a capacity that should be multiplied by a load factor to obtain the actual maximum expected number of entries. ChronicleMap and ChronicleSet don't have a notion of load factor.

The default target number of entries is 2^20 (~ 1 million).

Specified by:
entries in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
entries - the target size of the maps or sets, created by this builder
See Also:
ChronicleHashBuilder.maxBloatFactor(double)
public ChronicleSetBuilder<K> maxBloatFactor(double maxBloatFactor)
Description copied from interface: ChronicleHashBuilder
Configures the maximum number of times the hash containers, created by this builder, are allowed to grow in size beyond the configured target number of entries. ChronicleHashBuilder.entries(long) should represent the expected maximum number of entries in a stable state; maxBloatFactor is the maximum bloat up coefficient during exceptional bursts.

This configuration should be used for self-checking. Even if you configure an impossibly large maxBloatFactor, the created ChronicleHash will, of course, still be operational, and won't even allocate any extra resources before they are actually needed. But when the ChronicleHash grows beyond the configured ChronicleHashBuilder.entries(long), it could start to serve requests progressively more slowly. If you insert new entries into the ChronicleHash indefinitely, due to a bug in your business logic code or in the ChronicleHash configuration, and you configure the ChronicleHash to grow indefinitely, you will have a terribly slow and fat, but operational, application, instead of failing with an IllegalStateException, which would quickly show you that there is a bug in your application.

The default maximum bloat factor is 1.0, i.e. "no bloat is expected".

It is strongly advised not to configure maxBloatFactor to more than 10.0; almost certainly, you either should configure the ChronicleHashes completely differently, or this data store doesn't fit your use case.

Specified by:
maxBloatFactor in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
maxBloatFactor - the maximum number of times the created hash container is supposed to bloat up beyond the configured ChronicleHashBuilder.entries(long)
See Also:
ChronicleHashBuilder.entries(long)
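A short sketch of combining the two configurations discussed above (the numbers are arbitrary):

```java
ChronicleSet<Long> activeSessions = ChronicleSetBuilder.of(Long.class)
    .entries(1_000_000)   // expected stable-state maximum
    .maxBloatFactor(2.0)  // tolerate bursts up to ~2 million entries before failing
    .create();
```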
public ChronicleSetBuilder<K> allowSegmentTiering(boolean allowSegmentTiering)
Description copied from interface: ChronicleHashBuilder
In addition to maxBloatFactor(1.0), which does not guarantee that segments won't tier (due to bad hash distribution or natural variance), configuring allowSegmentTiering(false) makes Chronicle Hashes, created by this builder, throw IllegalStateException immediately when some segment overflows. Useful exactly for testing hash distribution and the variance of segment filling.

The default is true: segments are allowed to tier.

When configured to false, the ChronicleHashBuilder.maxBloatFactor(double) configuration becomes irrelevant, because effectively no bloat is allowed.

Specified by:
allowSegmentTiering in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
allowSegmentTiering - if true, when a segment overflows, a next tier is allocated to accommodate the new entries

public ChronicleSetBuilder<K> nonTieredSegmentsPercentile(double nonTieredSegmentsPercentile)
Description copied from interface: ChronicleHashBuilder
Configures the probabilistic fraction of segments which shouldn't become tiered, if the Chronicle Hash size is ChronicleHashBuilder.entries(long), assuming the hash code distribution of the keys, inserted into the configured Chronicle Hash, is good.

The last caveat means that the configured percentile affects segment size relying on the Poisson distribution law, assuming inserted entries (keys) fall into all segments randomly. If, e.g., the keys inserted into the Chronicle Hash are purposely selected to collide in a certain range of hash code bits, so that they all fall into the same segment (a DOS attacker might do this), this segment is obviously going to be tiered.

This configuration affects the actual number of segments, if ChronicleHashBuilder.entries(long) and ChronicleHashBuilder.entriesPerSegment(long) or ChronicleHashBuilder.actualChunksPerSegmentTier(long) are configured. It affects the actual number of entries per segment/chunks per segment tier, if ChronicleHashBuilder.entries(long) and ChronicleHashBuilder.actualSegments(int) are configured. If all 4 configurations mentioned in this paragraph are specified, nonTieredSegmentsPercentile doesn't have any effect.

The default value is 0.99999, i.e. if the hash code distribution of the keys is good, only one segment in 100K is tiered on average. If your segment size is small and you want to improve the memory footprint of the Chronicle Hash (probably compromising latency percentiles), you might want to configure a more "relaxed" value, e.g. 0.99.

Specified by:
nonTieredSegmentsPercentile in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
nonTieredSegmentsPercentile - the fraction of segments which shouldn't be tiered

public ChronicleSetBuilder<K> keyReaderAndDataAccess(SizedReader<K> keyReader, @NotNull DataAccess<K> keyDataAccess)
Description copied from interface: ChronicleHashBuilder
Configures the DataAccess and SizedReader used to serialize and deserialize keys to and from off-heap memory in hash containers, created by this builder.

Specified by:
keyReaderAndDataAccess in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
keyReader - the new bytes → key object reader strategy
keyDataAccess - the new strategy of accessing the keys' bytes for writing
See Also:
ChronicleHashBuilder.keyMarshallers(SizedReader, SizedWriter)
public ChronicleSetBuilder<K> keyMarshallers(@NotNull BytesReader<K> keyReader, @NotNull BytesWriter<? super K> keyWriter)
Description copied from interface: ChronicleHashBuilder
Configures the marshallers used to serialize/deserialize keys to/from off-heap memory in hash containers, created by this builder.

Specified by:
keyMarshallers in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
keyReader - the new bytes → key object reader strategy
keyWriter - the new key object → bytes writer strategy
See Also:
ChronicleHashBuilder.keyReaderAndDataAccess(SizedReader, DataAccess)
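As an illustration, a sketch of a stateless reader/writer pair for java.util.BitSet keys, assuming the interface signatures shown on this page (BytesWriter.write(Bytes, T) and BytesReader.read(Bytes, T)). The length-prefix format, the class name and the Serializable marker (so the stateless marshaller can be stored in a persisted container's header) are illustrative choices, not the library's only supported pattern:

```java
// Hypothetical stateless marshaller for java.util.BitSet keys; a sketch, not part of the library.
final class BitSetMarshaller
        implements BytesReader<BitSet>, BytesWriter<BitSet>, Serializable {

    @Override
    public void write(Bytes out, BitSet toWrite) {
        byte[] words = toWrite.toByteArray();
        out.writeInt(words.length); // length prefix, then payload
        out.write(words);
    }

    @Override
    public BitSet read(Bytes in, BitSet using) {
        byte[] words = new byte[in.readInt()];
        in.read(words);
        return BitSet.valueOf(words); // 'using' is ignored: a fresh BitSet is returned
    }
}

ChronicleSet<BitSet> featureFlags = ChronicleSetBuilder.of(BitSet.class)
    .entries(10_000)
    .keyMarshaller(new BitSetMarshaller()) // implements both reader and writer
    .averageKey(BitSet.valueOf(new long[] {0xFFL}))
    .create();
```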
public <M extends BytesReader<K> & BytesWriter<? super K>> ChronicleSetBuilder<K> keyMarshaller(@NotNull M marshaller)
Description copied from interface: ChronicleHashBuilder
Shortcut for keyMarshallers(marshaller, marshaller).

Specified by:
keyMarshaller in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
public ChronicleSetBuilder<K> keyMarshallers(@NotNull SizedReader<K> keyReader, @NotNull SizedWriter<? super K> keyWriter)
Description copied from interface: ChronicleHashBuilder
Configures the marshallers used to serialize/deserialize keys to/from off-heap memory in hash containers, created by this builder.

Specified by:
keyMarshallers in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
keyReader - the new bytes → key object reader strategy
keyWriter - the new key object → bytes writer strategy
See Also:
ChronicleHashBuilder.keyReaderAndDataAccess(SizedReader, DataAccess)
public <M extends SizedReader<K> & SizedWriter<? super K>> ChronicleSetBuilder<K> keyMarshaller(@NotNull M sizedMarshaller)
Description copied from interface: ChronicleHashBuilder
Shortcut for keyMarshallers(sizedMarshaller, sizedMarshaller).

Specified by:
keyMarshaller in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
sizedMarshaller - implementation of both the SizedReader and SizedWriter interfaces

public ChronicleSetBuilder<K> keySizeMarshaller(@NotNull SizeMarshaller keySizeMarshaller)
Description copied from interface: ChronicleHashBuilder
Configures the marshaller used to serialize actual key sizes to off-heap memory in hash containers, created by this builder.

The default key size marshaller is so-called "stop bit encoding" marshalling. If a constant key size is configured, or defaulted, if the key type is always constant and the ChronicleHashBuilder implementation knows about it, this configuration has no effect, because a special SizeMarshaller implementation, which doesn't actually do any marshalling and just returns the known constant size on SizeMarshaller.readSize(Bytes) calls, is used instead of any SizeMarshaller configured using this method.

Specified by:
keySizeMarshaller in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
keySizeMarshaller - the new marshaller, used to serialize actual key sizes to off-heap memory

public ChronicleSetBuilder<K> aligned64BitMemoryOperationsAtomic(boolean aligned64BitMemoryOperationsAtomic)
Description copied from interface: ChronicleHashBuilder
Specifies whether, on the current combination of platform, OS and JVM, aligned 8-byte reads and writes are atomic or not. The default is OS.is64Bit().

Specified by:
aligned64BitMemoryOperationsAtomic in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
aligned64BitMemoryOperationsAtomic - true if aligned 8-byte memory operations are atomic

public ChronicleSetBuilder<K> checksumEntries(boolean checksumEntries)
Description copied from interface: ChronicleHashBuilder
Configures whether hash containers, created by this builder, should compute and store entry checksums. By default, persisted hash containers, created by ChronicleMapBuilder, do compute and store entry checksums, but hash containers created in the process memory via ChronicleHashBuilder.create() don't.

Specified by:
checksumEntries in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
checksumEntries - if entry checksums should be computed and stored
See Also:
ChecksumEntry, ChronicleHashBuilder.recoverPersistedTo(File, boolean)
public ChronicleSetBuilder<K> entryOperations(SetEntryOperations<K,?> entryOperations)
Inject your SPI code around the basic ChronicleSet operations with entries: removing entries and inserting new entries.

This affects the behaviour of ordinary set.add() and set.remove() calls, as well as removes during iterations, updates during remote calls, and internal replication operations.
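A sketch of an auditing hook, assuming SetEntryOperations mirrors the documented MapEntryOperations contract, in which overridden remove(SetEntry)/insert(SetAbsentEntry) implementations delegate to entry.doRemove()/absentEntry.doInsert() for the default behaviour; the println calls are the illustrative part:

```java
ChronicleSet<String> audited = ChronicleSetBuilder.of(String.class)
    .entries(10_000)
    .averageKey("user:123456")
    .entryOperations(new SetEntryOperations<String, Void>() {
        @Override
        public Void remove(SetEntry<String> entry) {
            System.out.println("removing " + entry.key().get()); // audit before the default action
            entry.doRemove();
            return null;
        }

        @Override
        public Void insert(SetAbsentEntry<String> absentEntry) {
            System.out.println("inserting " + absentEntry.absentKey().get());
            absentEntry.doInsert();
            return null;
        }
    })
    .create();
```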
public ChronicleSet<K> create()
Description copied from interface: ChronicleHashBuilder
Creates a new hash container from this builder, storing its data in off-heap memory, not mapped to any file. On ChronicleHash.close() called on the returned container, or after the container object is collected during GC, or on JVM shutdown, the off-heap memory used by the returned container is freed.

Specified by:
create in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
See Also:
ChronicleHashBuilder.createPersistedTo(File)
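A minimal in-memory sketch; since ChronicleHash extends Closeable, try-with-resources frees the off-heap memory deterministically:

```java
try (ChronicleSet<Long> ids = ChronicleSetBuilder.of(Long.class)
        .entries(1_000_000)
        .create()) {
    ids.add(42L);
    System.out.println(ids.contains(42L)); // true
}
```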
public ChronicleSet<K> createPersistedTo(File file) throws IOException
Description copied from interface: ChronicleHashBuilder
Opens a hash container residing in the specified file, or creates a new one from this builder, if the file doesn't yet exist, and maps its off-heap memory to the file.

Multiple containers could give access to the same data simultaneously, either inside a single JVM or across processes. Access is synchronized correctly across all instances, i.e. a hash container mapping the data from the first JVM isn't able to modify the data while it is concurrently accessed from the second JVM by another hash container instance mapping the same data.

On the container's close(), the data isn't removed; it remains on disk and is available to be opened again (given the same file name) or during a different JVM run.

This method is a shortcut for instance().persistedTo(file).create().

Specified by:
createPersistedTo in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
file - the file with an existing hash container, or a desired location of the new off-heap persisted hash container
Throws:
IOException - if any IO error, related to off-heap memory allocation or file mapping, or establishing replication connections, occurs
See Also:
ChronicleHash.file(), ChronicleHash.close(), ChronicleHashBuilder.create(), ChronicleHashBuilder.createOrRecoverPersistedTo(File, boolean), ChronicleHashBuilder.recoverPersistedTo(File, boolean)
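A sketch of persisting a set to disk; the path is arbitrary, and createPersistedTo throws IOException, which the caller must handle:

```java
File file = new File("/tmp/user-ids.dat"); // created if it doesn't exist yet
ChronicleSet<Long> userIds = ChronicleSetBuilder.of(Long.class)
    .entries(1_000_000)
    .createPersistedTo(file);

userIds.add(42L);
userIds.close(); // the data stays on disk and can be reopened later
```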
public ChronicleSet<K> createOrRecoverPersistedTo(File file) throws IOException
Description copied from interface: ChronicleHashBuilder
Recovers and opens the hash container, persisted to the specified file, or creates a new one from this builder, if the file doesn't exist yet, and maps its off-heap memory to the file. Equivalent to the createOrRecoverPersistedTo(file, true) call.

This method cannot be used if the Chronicle Map was created using an older version of the Chronicle Map library, and createOrRecoverPersistedTo() is then called with a newer version of the Chronicle Map library. In this case, createOrRecoverPersistedTo(file, false) should be used.

WARNING: Make sure this instance is the only one that accesses the provided file during recovery, across all JVMs/threads/processes, or else the behavior is unspecified, including the possibility that the Map file gets completely corrupted and/or silently returns stale or otherwise erroneous data. Chronicle Map makes a best effort to ensure file exclusivity during recovery operations. However, these efforts may not be applicable to all platforms and/or situations. Ultimately, the user is responsible for ensuring absolute exclusivity.

Specified by:
createOrRecoverPersistedTo in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
file - the persistence file for the existing or future hash container
Returns:
a ChronicleHash instance, mapped to the given file
Throws:
IOException - if any IO error occurs on reading data from the file, or related to off-heap memory allocation or file mapping, or establishing replication connections. Probably the file is corrupted on the OS level, and should be recovered on that level first, before calling this procedure.
See Also:
ChronicleHashBuilder.createPersistedTo(File), ChronicleHashBuilder.createOrRecoverPersistedTo(File, boolean), ChronicleHashBuilder.recoverPersistedTo(File, boolean), ChronicleHashBuilder.createOrRecoverPersistedTo(File, boolean, ChronicleHashCorruption.Listener)
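A sketch of the typical restart-safe opening pattern; the path is arbitrary, and the exclusivity warning above applies:

```java
File file = new File("/var/data/events.dat");
ChronicleSet<Long> eventIds = ChronicleSetBuilder.of(Long.class)
    .entries(10_000_000)
    .createOrRecoverPersistedTo(file); // shortcut for createOrRecoverPersistedTo(file, true)
```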
public ChronicleSet<K> createOrRecoverPersistedTo(File file, boolean sameLibraryVersion) throws IOException
Description copied from interface: ChronicleHashBuilder
Recovers and opens the hash container, persisted to the specified file, or creates a new one from this builder, if the file doesn't exist yet, and maps its off-heap memory to the file. Equivalent to ChronicleHashBuilder.createPersistedTo(File), if the given file doesn't exist, and recoverPersistedTo(file, sameLibraryVersion), if the file exists.

The difference between this method and ChronicleHashBuilder.createOrRecoverPersistedTo(File, boolean, ChronicleHashCorruption.Listener) is that this method just logs the encountered corruptions, instead of passing them to the specified corruption listener.

WARNING: Make sure this instance is the only one that accesses the provided file during recovery, across all JVMs/threads/processes, or else the behavior is unspecified, including the possibility that the Map file gets completely corrupted and/or silently returns stale or otherwise erroneous data. Chronicle Map makes a best effort to ensure file exclusivity during recovery operations. However, these efforts may not be applicable to all platforms and/or situations. Ultimately, the user is responsible for ensuring absolute exclusivity.

Specified by:
createOrRecoverPersistedTo in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
file - the persistence file for the existing or future hash container
sameLibraryVersion - if this builder is configured with the same configurations as the builder which created the Chronicle Map in the given file initially, and the same version of the Chronicle Map library was used. In this case, the header of the file is overridden (with presumably the same configurations), protecting from ChronicleHashRecoveryFailedException if the header is corrupted.
Returns:
a ChronicleHash instance, mapped to the given file
Throws:
IOException - if any IO error occurs on reading data from the file, or related to off-heap memory allocation or file mapping, or establishing replication connections. Probably the file is corrupted on the OS level, and should be recovered on that level first, before calling this procedure.
See Also:
ChronicleHashBuilder.createPersistedTo(File), ChronicleHashBuilder.recoverPersistedTo(File, boolean), ChronicleHashBuilder.createOrRecoverPersistedTo(File, boolean, ChronicleHashCorruption.Listener)
public ChronicleSet<K> createOrRecoverPersistedTo(File file, boolean sameLibraryVersion, ChronicleHashCorruption.Listener corruptionListener) throws IOException
Description copied from interface: ChronicleHashBuilder
Recovers and opens the hash container, persisted to the specified file, or creates a new one from this builder, if the file doesn't exist yet, and maps its off-heap memory to the file. Equivalent to ChronicleHashBuilder.createPersistedTo(File), if the given file doesn't exist, and recoverPersistedTo(file, sameLibraryVersion, corruptionListener), if the file exists.

If this procedure encounters corruptions, it fixes them (recovers from them) and notifies the provided corruption listener with the details about the corruption. See the documentation for ChronicleHashCorruption for more information.

WARNING: Make sure this instance is the only one that accesses the provided file during recovery, across all JVMs/threads/processes, or else the behavior is unspecified, including the possibility that the Map file gets completely corrupted and/or silently returns stale or otherwise erroneous data. Chronicle Map makes a best effort to ensure file exclusivity during recovery operations. However, these efforts may not be applicable to all platforms and/or situations. Ultimately, the user is responsible for ensuring absolute exclusivity.

Specified by:
createOrRecoverPersistedTo in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
file - the persistence file for the existing or future hash container
sameLibraryVersion - if this builder is configured with the same configurations as the builder which created the Chronicle Map in the given file initially, and the same version of the Chronicle Map library was used. In this case, the header of the file is overridden (with presumably the same configurations), protecting from ChronicleHashRecoveryFailedException if the header is corrupted.
Returns:
a ChronicleHash instance, mapped to the given file
Throws:
IOException - if any IO error occurs on reading data from the file, or related to off-heap memory allocation or file mapping, or establishing replication connections. Probably the file is corrupted on the OS level, and should be recovered on that level first, before calling this procedure.
See Also:
ChronicleHashBuilder.createOrRecoverPersistedTo(File, boolean), ChronicleHashBuilder.recoverPersistedTo(File, boolean)
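A sketch of supplying a listener, assuming ChronicleHashCorruption.Listener is a functional interface and ChronicleHashCorruption exposes a message() accessor, per its documentation referenced above; the error stream is used here only for brevity:

```java
File file = new File("/var/data/events.dat");
ChronicleSet<Long> eventIds = ChronicleSetBuilder.of(Long.class)
    .entries(10_000_000)
    .createOrRecoverPersistedTo(file, true,
        corruption -> System.err.println("recovered from corruption: " + corruption.message()));
```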
public ChronicleSet<K> recoverPersistedTo(File file, boolean sameBuilderConfigAndLibraryVersion) throws IOException
Description copied from interface: ChronicleHashBuilder
Recovers and opens the hash container, persisted to the specified file. This method, unlike the ChronicleHashBuilder.createPersistedTo(File) and ChronicleHashBuilder.createOrRecoverPersistedTo(File, boolean) methods, expects that the given file already exists.

"Recovery" of the hash container means changing the memory of the data structure so that, after the recovery, the hash container is in some correct state: with "clean" locks, coherent entry counters, not containing provably corrupt entries, etc. If checksumEntries(true) is configured for the Chronicle Hash container, the recovery procedure checks for each entry that the checksum is correct; otherwise, it assumes the entry is corrupt and deletes it from the Chronicle Hash. See the Recovery section in the Chronicle Map tutorial for more information.

The difference between this method and ChronicleHashBuilder.recoverPersistedTo(File, boolean, ChronicleHashCorruption.Listener) is that this method just logs the encountered corruptions, instead of passing them to the specified corruption listener.

At the moment this method is called and executed, no other thread or process should be mapping to the given file or trying to access it. Otherwise the outcomes of the recoverPersistedTo() call, as well as the behaviour of the concurrent thread or process accessing the same Chronicle Map, are unspecified: an exception or error could be thrown, or the Chronicle Map persisted to the given file could be corrupted further.

It is strongly recommended to configure this builder with the same configurations as the builder that created the given file for the first time, and to pass true as the sameBuilderConfigAndLibraryVersion argument (another requirement is running the same version of the Chronicle Map library). Otherwise, if the header of the given persisted Chronicle Hash file is corrupted, this method is likely to be unable to recover and to throw ChronicleHashRecoveryFailedException, or, even worse, to corrupt the file further. Fortunately, the header should never be corrupted on an "ordinary" process crash/termination or power loss, only on direct file corruption.

WARNING: Make sure this instance is the only one that accesses the provided file during recovery, across all JVMs/threads/processes, or else the behavior is unspecified, including the possibility that the Map file gets completely corrupted and/or silently returns stale or otherwise erroneous data. Chronicle Map makes a best effort to ensure file exclusivity during recovery operations. However, these efforts may not be applicable to all platforms and/or situations. Ultimately, the user is responsible for ensuring absolute exclusivity.

Specified by:
recoverPersistedTo in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
file - the file a hash container was mapped to
sameBuilderConfigAndLibraryVersion - if this builder is configured with the same configurations as the builder which created the file (the persisted Chronicle Hash instance) for the first time, and with the same version of the Chronicle Map library. In this case, the header of the file is overridden (with presumably the same configurations), protecting from ChronicleHashRecoveryFailedException if the header is corrupted.
Throws:
FileNotFoundException - if the file doesn't exist
IOException - if any IO error occurs on reading data from the file, or related to off-heap memory allocation or file mapping, or establishing replication connections. Probably the file is corrupted on the OS level, and should be recovered on that level first, before calling this procedure.
See Also:
ChronicleHashBuilder.createOrRecoverPersistedTo(File, boolean), ChronicleHashBuilder.createPersistedTo(File), ChronicleHashBuilder.recoverPersistedTo(File, boolean, ChronicleHashCorruption.Listener)
public ChronicleSet<K> recoverPersistedTo(File file, boolean sameBuilderConfigAndLibraryVersion, ChronicleHashCorruption.Listener corruptionListener) throws IOException
Description copied from interface: ChronicleHashBuilder
Recovers and opens the hash container, persisted to the specified file. This method, unlike the ChronicleHashBuilder.createPersistedTo(File) and ChronicleHashBuilder.createOrRecoverPersistedTo(File, boolean, ChronicleHashCorruption.Listener) methods, expects that the given file already exists.

"Recovery" of the hash container means changing the memory of the data structure so that, after the recovery, the hash container is in some correct state: with "clean" locks, coherent entry counters, not containing provably corrupt entries, etc. If checksumEntries(true) is configured for the Chronicle Hash container, the recovery procedure checks for each entry that the checksum is correct; otherwise, it assumes the entry is corrupt and deletes it from the Chronicle Hash. See the Recovery section in the Chronicle Map tutorial for more information.

If this procedure encounters corruptions, it fixes them (recovers from them) and notifies the provided corruption listener with the details about the corruption. See the documentation for ChronicleHashCorruption for more information.

At the moment this method is called and executed, no other thread or process should be mapping to the given file or trying to access it. Otherwise the outcomes of the recoverPersistedTo() call, as well as the behaviour of the concurrent thread or process accessing the same Chronicle Map, are unspecified: an exception or error could be thrown, or the Chronicle Map persisted to the given file could be corrupted further.

WARNING: Make sure this instance is the only one that accesses the provided file during recovery, across all JVMs/threads/processes, or else the behavior is unspecified, including the possibility that the Map file gets completely corrupted and/or silently returns stale or otherwise erroneous data. Chronicle Map makes a best effort to ensure file exclusivity during recovery operations. However, these efforts may not be applicable to all platforms and/or situations. Ultimately, the user is responsible for ensuring absolute exclusivity.

Specified by:
recoverPersistedTo in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
file - the file a hash container was mapped to
sameBuilderConfigAndLibraryVersion - if this builder is configured with the same configurations as the builder which created the file (the persisted Chronicle Hash instance) for the first time, and with the same version of the Chronicle Map library. In this case, the header of the file is overridden (with presumably the same configurations), protecting from ChronicleHashRecoveryFailedException if the header is corrupted.
Throws:
FileNotFoundException - if the file doesn't exist
IOException - if any IO error occurs on reading data from the file, or related to off-heap memory allocation or file mapping, or establishing replication connections. Probably the file is corrupted on the OS level, and should be recovered on that level first, before calling this procedure.
See Also:
ChronicleHashBuilder.recoverPersistedTo(File, boolean), ChronicleHashBuilder.createOrRecoverPersistedTo(File, boolean, ChronicleHashCorruption.Listener), ChronicleHashBuilder.createPersistedTo(File)
public ChronicleSetBuilder<K> setPreShutdownAction(Runnable preShutdownAction)
Description copied from interface: ChronicleHashBuilder
A ChronicleHash created using this builder is closed using a JVM shutdown hook. This method lets you perform an action before the ChronicleHash is closed. The registered action is not executed when the JVM is running and ChronicleHash.close() is explicitly called.

Example usage of this call: to carry out a graceful shutdown and explicitly control when the ChronicleHash is closed, the action can be a wait on a CountDownLatch that is released appropriately.

Specified by:
setPreShutdownAction in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
preShutdownAction - the action to run before closing the ChronicleHash in a JVM shutdown hook
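A sketch of the CountDownLatch pattern described above; the latch wiring is illustrative:

```java
CountDownLatch shutdownGate = new CountDownLatch(1);

ChronicleSet<Long> ids = ChronicleSetBuilder.of(Long.class)
    .entries(1_000_000)
    .setPreShutdownAction(() -> {
        try {
            shutdownGate.await(); // hold the shutdown hook until the app finishes its last writes
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    })
    .create();

// ... later, on graceful shutdown, after the last write:
shutdownGate.countDown();
```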
public ChronicleSetBuilder<K> skipCloseOnExitHook(boolean skipCloseOnExitHook)
Description copied from interface: ChronicleHashBuilder
Skips the default automatic close configuration on the ChronicleHash created by this builder. By setting this to true, the caller agrees to close the built ChronicleHash explicitly. Any pre-shutdown action configured via ChronicleHashBuilder.setPreShutdownAction(Runnable) won't be executed if skipCloseOnExitHook is set to true.

Specified by:
skipCloseOnExitHook in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
skipCloseOnExitHook - if true, the default automatic close configuration is not enabled

Copyright © 2023. All rights reserved.