public final class ChronicleSetBuilder<K> extends Object implements ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>

K - the element type of the sets created by this builder

ChronicleSetBuilder manages the whole set of ChronicleSet configurations and can be used as a classic builder and/or factory. ChronicleSetBuilder is mutable; see the note in the ChronicleHashBuilder interface documentation.

See also: ChronicleSet, ChronicleMapBuilder
Modifier and Type | Method and Description
---|---
ChronicleSetBuilder<K> | actualChunkSize(int actualChunkSize): Configures the size in bytes of the allocation unit of hash container instances created by this builder.
ChronicleSetBuilder<K> | actualChunksPerSegmentTier(long actualChunksPerSegmentTier): Configures the actual number of chunks that will be reserved for any single segment tier of the hash containers created by this builder.
ChronicleSetBuilder<K> | actualSegments(int actualSegments): Configures the actual number of segments in the hash containers created by this builder.
ChronicleSetBuilder<K> | aligned64BitMemoryOperationsAtomic(boolean aligned64BitMemoryOperationsAtomic): Specifies whether, on the current combination of platform, OS and JVM, aligned 8-byte reads and writes are atomic.
ChronicleSetBuilder<K> | allowSegmentTiering(boolean allowSegmentTiering): In addition to maxBloatFactor(1.0), which does not guarantee that segments won't tier (due to bad hash distribution or natural variance), configuring allowSegmentTiering(false) makes Chronicle Hashes created by this builder throw IllegalStateException immediately when some segment overflows.
ChronicleSetBuilder<K> | averageKey(K averageKey): Configures the average number of bytes taken by the serialized form of keys put into hash containers created by this builder, by serializing the given averageKey using the configured key marshallers.
ChronicleSetBuilder<K> | averageKeySize(double averageKeySize): Configures the average number of bytes taken by the serialized form of keys put into hash containers created by this builder.
ChronicleSetBuilder<K> | checksumEntries(boolean checksumEntries): Configures whether hash containers created by this builder should compute and store entry checksums.
ChronicleSetBuilder<K> | cleanupRemovedEntries(boolean cleanupRemovedEntries): Configures whether replicated Chronicle Hashes constructed by this builder should completely erase entries removed some time ago.
ChronicleSetBuilder<K> | clone(): Clones this builder.
ChronicleSetBuilder<K> | constantKeySizeBySample(K sampleKey): Configures the constant number of bytes taken by the serialized form of keys put into hash containers created by this builder.
ChronicleSet<K> | create(): Creates a new hash container, storing its data in off-heap memory, not mapped to any file.
ChronicleSet<K> | createPersistedTo(File file): Opens a hash container residing in the specified file, or creates a new one if the file does not yet exist, and maps its off-heap memory to the file.
ChronicleSetBuilder<K> | entries(long entries): Configures the target number of entries that is going to be inserted into the hash containers created by this builder.
ChronicleSetBuilder<K> | entriesPerSegment(long entriesPerSegment): Configures the actual maximum number of entries that could be inserted into any single segment of the hash containers created by this builder.
ChronicleSetBuilder<K> | entryOperations(SetEntryOperations<K,?> entryOperations)
boolean | equals(Object o)
int | hashCode()
ChronicleHashInstanceBuilder<ChronicleSet<K>> | instance()
<M extends BytesReader<K> & BytesWriter<? super K>> ChronicleSetBuilder<K> | keyMarshaller(M marshaller): Shortcut for keyMarshallers(marshaller, marshaller).
<M extends SizedReader<K> & SizedWriter<? super K>> ChronicleSetBuilder<K> | keyMarshaller(M sizedMarshaller): Shortcut for keyMarshallers(sizedMarshaller, sizedMarshaller).
ChronicleSetBuilder<K> | keyMarshallers(BytesReader<K> keyReader, BytesWriter<? super K> keyWriter): Configures the marshallers used to serialize/deserialize keys to/from off-heap memory in hash containers created by this builder.
ChronicleSetBuilder<K> | keyMarshallers(SizedReader<K> keyReader, SizedWriter<? super K> keyWriter): Configures the marshallers used to serialize/deserialize keys to/from off-heap memory in hash containers created by this builder.
ChronicleSetBuilder<K> | keyReaderAndDataAccess(SizedReader<K> keyReader, DataAccess<K> keyDataAccess): Configures the DataAccess and SizedReader used to serialize and deserialize keys to and from off-heap memory in hash containers created by this builder.
ChronicleSetBuilder<K> | keySizeMarshaller(SizeMarshaller keySizeMarshaller): Configures the marshaller used to serialize actual key sizes to off-heap memory in hash containers created by this builder.
ChronicleSetBuilder<K> | maxBloatFactor(double maxBloatFactor): Configures the maximum number of times the hash containers created by this builder are allowed to grow in size beyond the configured target number of entries.
ChronicleSetBuilder<K> | maxChunksPerEntry(int maxChunksPerEntry): Configures how many chunks a single entry inserted into ChronicleHashes created by this builder could take.
ChronicleSetBuilder<K> | minSegments(int minSegments): Sets the minimum number of segments in hash containers constructed by this builder.
ChronicleSetBuilder<K> | nonTieredSegmentsPercentile(double nonTieredSegmentsPercentile): Configures the probabilistic fraction of segments which shouldn't become tiered, if the Chronicle Hash size is ChronicleHashBuilder.entries(long), assuming the hash code distribution of the keys inserted into the configured Chronicle Hash is good.
static <K> ChronicleSetBuilder<K> | of(Class<K> keyClass)
net.openhft.chronicle.hash.ChronicleHashBuilderPrivateAPI<K> | privateAPI(): Deprecated. Don't use the private API in client code.
ChronicleSetBuilder<K> | remoteOperations(SetRemoteOperations<K,?> remoteOperations)
ChronicleSetBuilder<K> | removedEntryCleanupTimeout(long removedEntryCleanupTimeout, TimeUnit unit): Configures the timeout after which entries marked as removed in the Chronicle Hash constructed by this builder are allowed to be completely removed from the data structure.
ChronicleSetBuilder<K> | replication(byte identifier)
ChronicleSetBuilder<K> | replication(byte identifier, TcpTransportAndNetworkConfig tcpTransportAndNetwork): Shortcut for replication(SimpleReplication.builder().tcpTransportAndNetwork(tcpTransportAndNetwork).createWithId(identifier)).
ChronicleSetBuilder<K> | replication(SingleChronicleHashReplication replication): Configures replication of the hash containers created by this builder.
ChronicleSetBuilder<K> | timeProvider(TimeProvider timeProvider): Configures a time provider, used by hash containers created by this builder for the needs of the replication consensus protocol (conflicting data updates resolution).
String | toString()
public static <K> ChronicleSetBuilder<K> of(Class<K> keyClass)

public ChronicleSetBuilder<K> clone()

Description copied from interface: ChronicleHashBuilder

Clones this builder. ChronicleHashBuilders are mutable and changed on each configuration method call. Original and cloned builders are independent.

Specified by: clone in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Overrides: clone in class Object
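The builder-as-factory pattern above can be sketched as follows. This is a minimal sketch, assuming Chronicle Set is on the classpath; the class and values are illustrative, not from the original document.

```java
import net.openhft.chronicle.set.ChronicleSet;
import net.openhft.chronicle.set.ChronicleSetBuilder;

public class CloneExample {
    public static void main(String[] args) {
        ChronicleSetBuilder<String> base = ChronicleSetBuilder.of(String.class)
                .averageKeySize(16)
                .entries(10_000);

        // The clone is independent: reconfiguring it does not affect 'base'.
        ChronicleSetBuilder<String> large = base.clone().entries(1_000_000);

        try (ChronicleSet<String> small = base.create();
             ChronicleSet<String> big = large.create()) {
            small.add("a");
            big.add("b");
        }
    }
}
```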
public ChronicleSetBuilder<K> actualSegments(int actualSegments)

Description copied from interface: ChronicleHashBuilder

Configures the actual number of segments in the hash containers created by this builder.

This is a low-level configuration. The configured number is used as-is, without anything like rounding up to the closest power of 2.

Specified by: actualSegments in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: actualSegments - the actual number of segments in hash containers created by this builder
See also: ChronicleHashBuilder.minSegments(int), ChronicleHashBuilder.entriesPerSegment(long)
public ChronicleSetBuilder<K> minSegments(int minSegments)

Description copied from interface: ChronicleHashBuilder

Sets the minimum number of segments in hash containers constructed by this builder. See concurrencyLevel in ConcurrentHashMap.

Specified by: minSegments in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: minSegments - the minimum number of segments in containers constructed by this builder

public ChronicleSetBuilder<K> entriesPerSegment(long entriesPerSegment)

Description copied from interface: ChronicleHashBuilder

Configures the actual maximum number of entries that could be inserted into any single segment of the hash containers created by this builder.

This is a low-level configuration.

Specified by: entriesPerSegment in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: entriesPerSegment - the actual maximum number of entries per segment in the hash containers created by this builder
See also: ChronicleHashBuilder.entries(long), ChronicleHashBuilder.actualSegments(int)
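The low-level segment options above can be combined as in the following sketch. It assumes Chronicle Set is on the classpath, and the numbers are purely illustrative (64 segments of at most 20,000 entries comfortably cover the 1,000,000-entry target).

```java
import net.openhft.chronicle.set.ChronicleSet;
import net.openhft.chronicle.set.ChronicleSetBuilder;

public class SegmentConfigExample {
    public static void main(String[] args) {
        try (ChronicleSet<Long> ids = ChronicleSetBuilder.of(Long.class)
                .entries(1_000_000)         // target number of entries
                .actualSegments(64)         // used as-is, no rounding
                .entriesPerSegment(20_000)  // cap per single segment
                .create()) {
            ids.add(42L);
        }
    }
}
```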
public ChronicleSetBuilder<K> actualChunksPerSegmentTier(long actualChunksPerSegmentTier)

Description copied from interface: ChronicleHashBuilder

Configures the actual number of chunks that will be reserved for any single segment tier of the hash containers created by this builder. This is a low-level alternative to the ChronicleHashBuilder.entriesPerSegment(long) configuration. It makes sense only if ChronicleHashBuilder.actualChunkSize(int), ChronicleHashBuilder.actualSegments(int) and ChronicleHashBuilder.entriesPerSegment(long) are also configured manually.

Specified by: actualChunksPerSegmentTier in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: actualChunksPerSegmentTier - the actual number of chunks reserved per segment tier in the hash containers created by this builder

public ChronicleSetBuilder<K> averageKeySize(double averageKeySize)
Description copied from interface: ChronicleHashBuilder

Configures the average number of bytes taken by the serialized form of keys put into hash containers created by this builder. In many cases, ChronicleHashBuilder.averageKey(Object) might be easier to use and more reliable. If the key size is always the same, call the ChronicleHashBuilder.constantKeySizeBySample(Object) method instead of this one.

The ChronicleHashBuilder implementation heuristically chooses the actual chunk size based on this configuration, which, however, might result in quite high internal fragmentation, i.e. losses because only an integral number of chunks could be allocated for an entry. If you want to avoid this, you should manually configure the actual chunk size in addition to this average key size configuration, which is anyway needed.

If the key is a boxed primitive type, a value interface or a Byteable subclass, i.e. if the key size is known statically, it is automatically accounted for and shouldn't be specified by the user.

Calling this method clears any previous ChronicleHashBuilder.constantKeySizeBySample(Object) and ChronicleHashBuilder.averageKey(Object) configurations.

Example: if keys in your set(s) are English words in String form, and the average English word length is 5.1, configure an average key size of 6:

    ChronicleSet<String> uniqueWords = ChronicleSetBuilder.of(String.class)
        .entries(50000)
        .averageKeySize(6)
        .create();

(Note that 6 is chosen as the average key size in bytes even though strings in Java are UTF-16 encoded (and each character takes 2 bytes on-heap), because the default off-heap String encoding in ChronicleSet is UTF-8.)

Specified by: averageKeySize in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: averageKeySize - the average size in bytes of the key
See also: constantKeySizeBySample(Object), actualChunkSize(int)
public ChronicleSetBuilder<K> averageKey(K averageKey)

Description copied from interface: ChronicleHashBuilder

Configures the average number of bytes taken by the serialized form of keys put into hash containers created by this builder, by serializing the given averageKey using the configured key marshallers.

In some cases, ChronicleHashBuilder.averageKeySize(double) might be easier to use than constructing the "average key". If the key size is always the same, call the ChronicleHashBuilder.constantKeySizeBySample(Object) method instead of this one.

The ChronicleHashBuilder implementation heuristically chooses the actual chunk size based on this configuration, which, however, might result in quite high internal fragmentation, i.e. losses because only an integral number of chunks could be allocated for an entry. If you want to avoid this, you should manually configure the actual chunk size in addition to this average key configuration, which is anyway needed.

If the key is a boxed primitive type or a Byteable subclass, i.e. if the key size is known statically, it is automatically accounted for and shouldn't be specified by the user.

Calling this method clears any previous ChronicleHashBuilder.constantKeySizeBySample(Object) and ChronicleHashBuilder.averageKeySize(double) configurations.

Specified by: averageKey in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: averageKey - the average (by footprint in serialized form) key that is going to be put into the hash containers created by this builder
See also: ChronicleHashBuilder.averageKeySize(double), ChronicleHashBuilder.constantKeySizeBySample(Object), ChronicleHashBuilder.actualChunkSize(int)
public ChronicleSetBuilder<K> constantKeySizeBySample(K sampleKey)

Configures the constant number of bytes taken by the serialized form of keys put into hash containers created by this builder. To use this method, all keys should take the same number of bytes in serialized form as the given sample object.

If keys are of a boxed primitive type or a Byteable subclass, i.e. if the key size is known statically, it is automatically accounted for and this method shouldn't be called.

If the key size varies, the method ChronicleHashBuilder.averageKeySize(double) should be called instead of this one.

Calling this method clears any previous ChronicleHashBuilder.averageKey(Object) and ChronicleHashBuilder.averageKeySize(double) configurations.

For example, if your keys are Git commit hashes:

    Set<byte[]> gitCommitsOfInterest = ChronicleSetBuilder.of(byte[].class)
        .constantKeySizeBySample(new byte[20])
        .create();

Specified by: constantKeySizeBySample in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: sampleKey - the sample key
See also: ChronicleHashBuilder.averageKeySize(double)
public ChronicleSetBuilder<K> actualChunkSize(int actualChunkSize)

Description copied from interface: ChronicleHashBuilder

Configures the size in bytes of the allocation unit of hash container instances created by this builder.

ChronicleMap and ChronicleSet store their data off-heap, so it is required to serialize keys (and values, in the ChronicleMap case), unless they are direct Byteable instances. Serialized key bytes (+ serialized value bytes, in the ChronicleMap case) + some metadata bytes comprise the "entry space", which ChronicleMap or ChronicleSet should allocate. So the chunk size is the minimum allocation portion in the hash containers created by this builder. E.g. if the chunk size is 100, the created container could only allocate 100, 200, 300... bytes for an entry. If, say, 150 bytes of entry space are required by an entry, 200 bytes will be allocated: 150 used and 50 wasted. This is called internal fragmentation.

To minimize memory overuse and improve speed, you should pay decent attention to this configuration. Alternatively, you can just trust the heuristics and not configure the chunk size.

Specify the chunk size so that most entries would take from 5 to several dozens of chunks. However, remember that operations with entries that span several chunks are a bit slower than with entries which take a single chunk. Particularly avoid entries taking more than 64 chunks.

Example: if values in your ChronicleMap are adjacency lists of some social graph, where nodes are represented as long ids, and adjacency lists are serialized in an efficient manner, for example as long[] arrays, and the typical number of connections is 100-300 with a maximum of 3000, then a chunk size of 30 * (8 bytes for each id) = 240 bytes would be a good choice:

    Map<Long, long[]> socialGraph = ChronicleMapBuilder
        .of(Long.class, long[].class)
        .entries(1_000_000_000L)
        .averageValueSize(150 * 8) // 150 is average adjacency list size
        .actualChunkSize(30 * 8)   // average 5-6 chunks per entry
        .create();

This is a low-level configuration. The configured number of bytes is used as-is, without anything like rounding up to a multiple of 8 or 16, or any other adjustment.

Specified by: actualChunkSize in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: actualChunkSize - the "chunk size" in bytes
See also: ChronicleHashBuilder.entries(long), ChronicleHashBuilder.maxChunksPerEntry(int)
public ChronicleSetBuilder<K> maxChunksPerEntry(int maxChunksPerEntry)

Description copied from interface: ChronicleHashBuilder

Configures how many chunks a single entry inserted into ChronicleHashes created by this builder could take. If you try to insert a larger entry, IllegalStateException is thrown. This is useful as a self-check that you configured the chunk size right and that your keys (and values, in the ChronicleMap case) take the expected number of bytes. For example, if ChronicleHashBuilder.constantKeySizeBySample(Object) is configured, or the key size is statically known to be constant (boxed primitives, data value generated implementations, Byteables, etc.), and the same holds for value objects in the ChronicleMap case, max chunks per entry is configured to 1, to ensure keys and values are actually constantly sized.

Specified by: maxChunksPerEntry in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: maxChunksPerEntry - how many chunks a single entry could span at most
See also: ChronicleHashBuilder.actualChunkSize(int)
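The self-check described above can be sketched as follows, assuming Chronicle Set is on the classpath. With constant-size keys, maxChunksPerEntry(1) asserts that every entry really fits into a single chunk; the key type and sizes are illustrative.

```java
import net.openhft.chronicle.set.ChronicleSet;
import net.openhft.chronicle.set.ChronicleSetBuilder;

public class MaxChunksExample {
    public static void main(String[] args) {
        try (ChronicleSet<byte[]> digests = ChronicleSetBuilder.of(byte[].class)
                .constantKeySizeBySample(new byte[20]) // e.g. SHA-1 digests
                .entries(100_000)
                .maxChunksPerEntry(1) // fail fast if an entry needs more space
                .create()) {
            digests.add(new byte[20]);
        }
    }
}
```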
public ChronicleSetBuilder<K> entries(long entries)

Description copied from interface: ChronicleHashBuilder

Configures the target number of entries that is going to be inserted into the hash containers created by this builder. If ChronicleHashBuilder.maxBloatFactor(double) is configured to 1.0 (which is the default), this number of entries is also the maximum. If you try to insert more entries than the configured maxBloatFactor multiplied by the given number of entries, IllegalStateException might be thrown.

This configuration should represent the expected maximum number of entries in a stable state; maxBloatFactor is the maximum bloat-up coefficient during exceptional bursts.

To be more precise: try to configure the entries so that the created hash container is going to serve about 99% of requests being less than or equal to this number of entries in size.

You shouldn't put an additional margin over the actual target number of entries. This bad practice was popularized by the HashMap.HashMap(int) and HashSet.HashSet(int) constructors, which accept a capacity that should be multiplied by the load factor to obtain the actual maximum expected number of entries. ChronicleMap and ChronicleSet don't have a notion of load factor.

The default target number of entries is 2^20 (~ 1 million).

Specified by: entries in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: entries - the target size of the maps or sets created by this builder
See also: ChronicleHashBuilder.maxBloatFactor(double)
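The guidance above can be sketched as follows, assuming Chronicle Set is on the classpath. The point is to configure the real expected maximum directly, with no HashMap-style capacity/load-factor margin; the 2,000,000 figure is illustrative.

```java
import net.openhft.chronicle.set.ChronicleSet;
import net.openhft.chronicle.set.ChronicleSetBuilder;

public class EntriesExample {
    public static void main(String[] args) {
        // Expecting ~2 million distinct user ids in steady state:
        try (ChronicleSet<Long> userIds = ChronicleSetBuilder.of(Long.class)
                .entries(2_000_000) // the target itself, not target / loadFactor
                .create()) {
            userIds.add(1L);
        }
    }
}
```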
public ChronicleSetBuilder<K> maxBloatFactor(double maxBloatFactor)

Description copied from interface: ChronicleHashBuilder

Configures the maximum number of times the hash containers created by this builder are allowed to grow in size beyond the configured target number of entries. ChronicleHashBuilder.entries(long) should represent the expected maximum number of entries in a stable state; maxBloatFactor is the maximum bloat-up coefficient during exceptional bursts.

This configuration should be used for self-checking. Even if you configure an impossibly large maxBloatFactor, the created ChronicleHash will, of course, still be operational, and won't even allocate any extra resources before they are actually needed. But when the ChronicleHash grows beyond the configured ChronicleHashBuilder.entries(long), it could start to serve requests progressively slower. If you insert new entries into the ChronicleHash infinitely, due to a bug in your business logic code or in the ChronicleHash configuration, and if you configure the ChronicleHash to grow infinitely, you will have a terribly slow and fat, but operational, application, instead of a failure with IllegalStateException, which would quickly show you that there is a bug in your application.

The default maximum bloat factor is 1.0, i.e. "no bloat is expected".

It is strongly advised not to configure maxBloatFactor to more than 10.0; almost certainly, you either should configure ChronicleHashes completely differently, or this data structure doesn't fit your case.

Specified by: maxBloatFactor in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: maxBloatFactor - the maximum number of times the created hash container is supposed to bloat up beyond the ChronicleHashBuilder.entries(long)
See also: ChronicleHashBuilder.entries(long)
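A modest burst allowance on top of the stable-state target, per the advice above, can be sketched as follows. It assumes Chronicle Set is on the classpath; the numbers are illustrative.

```java
import net.openhft.chronicle.set.ChronicleSet;
import net.openhft.chronicle.set.ChronicleSetBuilder;

public class BloatFactorExample {
    public static void main(String[] args) {
        try (ChronicleSet<String> sessions = ChronicleSetBuilder.of(String.class)
                .averageKeySize(32)
                .entries(500_000)    // expected stable-state maximum
                .maxBloatFactor(2.0) // tolerate up to 2x during bursts
                .create()) {
            sessions.add("session-1");
        }
    }
}
```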
public ChronicleSetBuilder<K> allowSegmentTiering(boolean allowSegmentTiering)

Description copied from interface: ChronicleHashBuilder

In addition to maxBloatFactor(1.0), which does not guarantee that segments won't tier (due to bad hash distribution or natural variance), configuring allowSegmentTiering(false) makes Chronicle Hashes created by this builder throw IllegalStateException immediately when some segment overflows. Useful exactly for testing hash distribution and the variance of segment filling.

Default is true: segments are allowed to tier.

When configured to false, the ChronicleHashBuilder.maxBloatFactor(double) configuration becomes irrelevant, because effectively no bloat is allowed.

Specified by: allowSegmentTiering in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: allowSegmentTiering - if true, when a segment overflows, a next tier is allocated to accommodate new entries

public ChronicleSetBuilder<K> nonTieredSegmentsPercentile(double nonTieredSegmentsPercentile)
Description copied from interface: ChronicleHashBuilder

Configures the probabilistic fraction of segments which shouldn't become tiered, if the Chronicle Hash size is ChronicleHashBuilder.entries(long), assuming the hash code distribution of the keys inserted into the configured Chronicle Hash is good.

The last caveat means that the configured percentile affects segment size relying on the Poisson distribution law, assuming inserted entries (keys) fall into all segments randomly. If, for example, the keys inserted into the Chronicle Hash are purposely selected to collide in a certain range of hash code bits, so that they all fall into the same segment (a DOS attacker might do this), that segment is obviously going to be tiered.

This configuration affects the actual number of segments, if ChronicleHashBuilder.entries(long) and ChronicleHashBuilder.entriesPerSegment(long) or ChronicleHashBuilder.actualChunksPerSegmentTier(long) are configured. It affects the actual number of entries per segment/chunks per segment tier, if ChronicleHashBuilder.entries(long) and ChronicleHashBuilder.actualSegments(int) are configured. If all four configurations mentioned in this paragraph are specified, nonTieredSegmentsPercentile doesn't have any effect.

The default value is 0.99999, i.e. if the hash code distribution of the keys is good, only one segment in 100,000 is tiered on average. If your segment size is small and you want to improve the memory footprint of the Chronicle Hash (probably compromising latency percentiles), you might want to configure a more "relaxed" value, e.g. 0.99.

Specified by: nonTieredSegmentsPercentile in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: nonTieredSegmentsPercentile - the fraction of segments which shouldn't be tiered

public ChronicleSetBuilder<K> timeProvider(TimeProvider timeProvider)
Description copied from interface: ChronicleHashBuilder

Configures a time provider, used by hash containers created by this builder for the needs of the replication consensus protocol (conflicting data updates resolution). The default time provider uses system time (System.currentTimeMillis()) in microsecond precision.

Specified by: timeProvider in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: timeProvider - a new time provider for replication needs
See also: ChronicleHashBuilder.replication(SingleChronicleHashReplication)
public ChronicleSetBuilder<K> removedEntryCleanupTimeout(long removedEntryCleanupTimeout, TimeUnit unit)

Description copied from interface: ChronicleHashBuilder

Configures the timeout after which entries marked as removed in the Chronicle Hash constructed by this builder are allowed to be completely removed from the data structure. When remove() is called on a key, the corresponding entry is not immediately erased from the data structure, to let the distributed system eventually converge on some value for this key (or converge on the fact that this key is removed). Chronicle Hashes watch the entries at runtime, and if one is removed and not updated in any way for this removedEntryCleanupTimeout, Chronicle is allowed to remove the entry completely from the data structure. This timeout should depend on your distributed system topology and typical replication latencies, which should be determined experimentally.

The default timeout is 1 minute.

Specified by: removedEntryCleanupTimeout in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
removedEntryCleanupTimeout - the timeout after which stale removed entries could be erased from the Chronicle Hash data structure completely
unit - the time unit in which the timeout is given
See also: ChronicleHashBuilder.cleanupRemovedEntries(boolean), ReplicableEntry.doRemoveCompletely()
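The cleanup-timeout configuration above can be sketched as follows, assuming Chronicle Set is on the classpath. The 5-minute value is illustrative and would be tuned to your topology's replication latencies; the replication setup itself is elided.

```java
import java.util.concurrent.TimeUnit;
import net.openhft.chronicle.set.ChronicleSetBuilder;

public class CleanupTimeoutExample {
    public static void main(String[] args) {
        ChronicleSetBuilder<String> builder = ChronicleSetBuilder.of(String.class)
                .averageKeySize(24)
                .entries(1_000_000)
                // removed entries may be fully erased after 5 idle minutes
                .removedEntryCleanupTimeout(5, TimeUnit.MINUTES);
        // ... configure replication(...) and create the replicated set here
    }
}
```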
public ChronicleSetBuilder<K> cleanupRemovedEntries(boolean cleanupRemovedEntries)

Description copied from interface: ChronicleHashBuilder

Configures whether replicated Chronicle Hashes constructed by this builder should completely erase entries removed some time ago. See ChronicleHashBuilder.removedEntryCleanupTimeout(long, TimeUnit) for more details on this mechanism.

The default value is true: old removed entries are erased after the configured removedEntryCleanupTimeout.

Specified by: cleanupRemovedEntries in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: cleanupRemovedEntries - whether stale removed entries should be purged from the Chronicle Hash
See also: ChronicleHashBuilder.removedEntryCleanupTimeout(long, TimeUnit), ReplicableEntry.doRemoveCompletely()
public ChronicleSetBuilder<K> keyReaderAndDataAccess(SizedReader<K> keyReader, @NotNull DataAccess<K> keyDataAccess)

Description copied from interface: ChronicleHashBuilder

Configures the DataAccess and SizedReader used to serialize and deserialize keys to and from off-heap memory in hash containers created by this builder.

Specified by: keyReaderAndDataAccess in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
keyReader - the new bytes → key object reader strategy
keyDataAccess - the new strategy of accessing the keys' bytes for writing
See also: ChronicleHashBuilder.keyMarshallers(SizedReader, SizedWriter)
public ChronicleSetBuilder<K> keyMarshallers(@NotNull BytesReader<K> keyReader, @NotNull BytesWriter<? super K> keyWriter)

Description copied from interface: ChronicleHashBuilder

Configures the marshallers used to serialize/deserialize keys to/from off-heap memory in hash containers created by this builder.

Specified by: keyMarshallers in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
keyReader - the new bytes → key object reader strategy
keyWriter - the new key object → bytes writer strategy
See also: ChronicleHashBuilder.keyReaderAndDataAccess(SizedReader, DataAccess)
public <M extends BytesReader<K> & BytesWriter<? super K>> ChronicleSetBuilder<K> keyMarshaller(@NotNull M marshaller)

Shortcut for keyMarshallers(marshaller, marshaller).

Specified by: keyMarshaller in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
public ChronicleSetBuilder<K> keyMarshallers(@NotNull SizedReader<K> keyReader, @NotNull SizedWriter<? super K> keyWriter)

Description copied from interface: ChronicleHashBuilder

Configures the marshallers used to serialize/deserialize keys to/from off-heap memory in hash containers created by this builder.

Specified by: keyMarshallers in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
keyReader - the new bytes → key object reader strategy
keyWriter - the new key object → bytes writer strategy
See also: ChronicleHashBuilder.keyReaderAndDataAccess(SizedReader, DataAccess)
public <M extends SizedReader<K> & SizedWriter<? super K>> ChronicleSetBuilder<K> keyMarshaller(@NotNull M sizedMarshaller)

Shortcut for keyMarshallers(sizedMarshaller, sizedMarshaller).

Specified by: keyMarshaller in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
public ChronicleSetBuilder<K> keySizeMarshaller(@NotNull SizeMarshaller keySizeMarshaller)

Description copied from interface: ChronicleHashBuilder

Configures the marshaller used to serialize actual key sizes to off-heap memory in hash containers created by this builder.

The default key size marshaller is so-called "stop bit encoding" marshalling. If a constant key size is configured, or defaulted if the key type is always constant and the ChronicleHashBuilder implementation knows about it, this configuration has no effect, because a special SizeMarshaller implementation, which doesn't actually do any marshalling and just returns the known constant size on SizeMarshaller.readSize(net.openhft.chronicle.bytes.Bytes) calls, is used instead of any SizeMarshaller configured using this method.

Specified by: keySizeMarshaller in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: keySizeMarshaller - the new marshaller used to serialize actual key sizes to off-heap memory

public ChronicleSetBuilder<K> aligned64BitMemoryOperationsAtomic(boolean aligned64BitMemoryOperationsAtomic)
Description copied from interface: ChronicleHashBuilder

Specifies whether, on the current combination of platform, OS and JVM, aligned 8-byte reads and writes are atomic.

Specified by: aligned64BitMemoryOperationsAtomic in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: aligned64BitMemoryOperationsAtomic - true if aligned 8-byte memory operations are atomic

public ChronicleSetBuilder<K> checksumEntries(boolean checksumEntries)
Description copied from interface: ChronicleHashBuilder

Configures whether hash containers created by this builder should compute and store entry checksums. By default, persisted hash containers created by ChronicleSetBuilder do compute and store entry checksums, but hash containers created in the process memory via ChronicleHashBuilder.create() don't.

Specified by: checksumEntries in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: checksumEntries - whether entry checksums should be computed and stored
See also: ChecksumEntry
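Enabling checksums for an in-memory set, which by default doesn't store them per the description above, can be sketched as follows (assuming Chronicle Set is on the classpath; sizes are illustrative):

```java
import net.openhft.chronicle.set.ChronicleSet;
import net.openhft.chronicle.set.ChronicleSetBuilder;

public class ChecksumExample {
    public static void main(String[] args) {
        try (ChronicleSet<Long> checked = ChronicleSetBuilder.of(Long.class)
                .entries(10_000)
                .checksumEntries(true) // store a checksum with every entry
                .create()) {
            checked.add(7L);
        }
    }
}
```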
public ChronicleSetBuilder<K> replication(SingleChronicleHashReplication replication)

Description copied from interface: ChronicleHashBuilder

Configures replication of the hash containers created by this builder. By default, hash containers created by this builder don't replicate their data.

This method call overrides all previous replication configurations of this builder, made either by this method or by the ChronicleHashBuilder.replication(byte, TcpTransportAndNetworkConfig) shortcut method.

Specified by: replication in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: replication - the replication config
See also: ChronicleHashInstanceBuilder.replicated(SingleChronicleHashReplication), ChronicleHashBuilder.replication(byte, TcpTransportAndNetworkConfig)
public ChronicleSetBuilder<K> replication(byte identifier, TcpTransportAndNetworkConfig tcpTransportAndNetwork)

Shortcut for replication(SimpleReplication.builder().tcpTransportAndNetwork(tcpTransportAndNetwork).createWithId(identifier)).

Specified by: replication in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters:
identifier - the network-wide identifier of the containers created by this builder
tcpTransportAndNetwork - configuration of the TCP connection and network
See also: ChronicleHashBuilder.replication(SingleChronicleHashReplication), ChronicleHashInstanceBuilder.replicated(byte, TcpTransportAndNetworkConfig)
public ChronicleSetBuilder<K> replication(byte identifier)

Specified by: replication in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
public ChronicleSetBuilder<K> entryOperations(SetEntryOperations<K,?> entryOperations)
public ChronicleSetBuilder<K> remoteOperations(SetRemoteOperations<K,?> remoteOperations)
public ChronicleHashInstanceBuilder<ChronicleSet<K>> instance()

Specified by: instance in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
public ChronicleSet<K> create()

Description copied from interface: ChronicleHashBuilder

Creates a new hash container, storing its data in off-heap memory, not mapped to any file. After ChronicleHash.close() is called on the returned container, or after the container object is collected during GC, or on JVM shutdown, the off-heap memory used by the returned container is freed.

This method is a shortcut for instance().create().

Specified by: create in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
See also: ChronicleHashBuilder.createPersistedTo(File), ChronicleHashBuilder.instance()
public ChronicleSet<K> createPersistedTo(File file) throws IOException

Description copied from interface: ChronicleHashBuilder

Opens a hash container residing in the specified file, or creates a new one if the file does not yet exist, and maps its off-heap memory to the file.

Multiple containers could give access to the same data simultaneously, either inside a single JVM or across processes. Access is synchronized correctly across all instances, i.e. a hash container mapping the data from the first JVM isn't able to modify the data concurrently accessed from the second JVM by another hash container instance mapping the same data.

On the container's close(), the data isn't removed; it remains on disk and is available to be opened again (given the same file name) or during a different JVM run.

This method is a shortcut for instance().persistedTo(file).create().

Specified by: createPersistedTo in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Parameters: file - the file with an existing hash container, or a desired location of a new off-heap persisted hash container
Throws: IOException - if any IO error occurs, related to off-heap memory allocation or file mapping, or establishing replication connections
See also: ChronicleHash.file(), ChronicleHash.close(), ChronicleHashBuilder.create(), ChronicleHashInstanceBuilder.persistedTo(File)
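The persistence behavior described above can be sketched as follows, assuming Chronicle Set is on the classpath; the file path is illustrative. The same file can be reopened across JVM runs or shared between processes.

```java
import java.io.File;
import java.io.IOException;
import net.openhft.chronicle.set.ChronicleSet;
import net.openhft.chronicle.set.ChronicleSetBuilder;

public class PersistedExample {
    public static void main(String[] args) throws IOException {
        File file = new File(System.getProperty("java.io.tmpdir"), "words.dat");
        try (ChronicleSet<String> words = ChronicleSetBuilder.of(String.class)
                .averageKeySize(8)
                .entries(50_000)
                .createPersistedTo(file)) { // opens existing data if present
            words.add("hello");
        }
        // After close(), the data remains in 'file' and can be reopened.
    }
}
```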
@Deprecated
public net.openhft.chronicle.hash.ChronicleHashBuilderPrivateAPI<K> privateAPI()

Deprecated. Don't use the private API in client code.

Specified by: privateAPI in interface ChronicleHashBuilder<K,ChronicleSet<K>,ChronicleSetBuilder<K>>
Copyright © 2015. All rights reserved.