Type Parameters:
K - key type of the maps, produced by this builder
V - value type of the maps, produced by this builder

public final class ChronicleMapBuilder<K,V> extends Object implements ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
ChronicleMapBuilder manages ChronicleMap configurations; it can be used as a classic builder and/or as a factory. This means that in addition to the standard builder usage pattern:

    ChronicleMap<Key, Value> map = ChronicleMapBuilder
        .of(Key.class, Value.class)
        .entries(100500)
        // ... other configurations
        .create();

it can also be prepared once and used to create many similar maps:

    ChronicleMapBuilder<Key, Value> builder = ChronicleMapBuilder
        .of(Key.class, Value.class)
        .entries(100500);

    ChronicleMap<Key, Value> map1 = builder.create();
    ChronicleMap<Key, Value> map2 = builder.create();

i.e. created ChronicleMap instances don't depend on the builder.
ChronicleMapBuilder is mutable; see the note in the ChronicleHashBuilder interface documentation.
Later in this documentation, "ChronicleMap" means "ChronicleMaps created by ChronicleMapBuilder", unless specified otherwise, because theoretically someone might provide ChronicleMap implementations with completely different properties.
ChronicleMap ("ChronicleMaps created by ChronicleMapBuilder") currently doesn't support resizing. That is why you must configure, up front, the maximum number of entries you are going to insert into the created map. See the entries(long) method documentation for more information on this.
If your key or value type is not constantly sized and known to ChronicleHashBuilder, i.e. it is not a boxed primitive, a data value generated interface, a Byteable, etc. (see the complete list TODO insert the link to the complete list), you must provide the ChronicleHashBuilder with some information about your keys or values: if they are constantly sized, call constantKeySizeBySample(Object), otherwise call the ChronicleHashBuilder.averageKeySize(double) method, and accordingly for values.
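For instance, the sizing rules above can be sketched as follows (a minimal sketch assuming the net.openhft.chronicle.map artifact is on the classpath; the key and value contents are illustrative):

```java
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public class SizingExample {
    public static void main(String[] args) {
        // CharSequence keys are variably sized, so the builder needs a size hint;
        // Long values are boxed primitives, so their size is known statically.
        try (ChronicleMap<CharSequence, Long> counts = ChronicleMapBuilder
                .of(CharSequence.class, Long.class)
                .averageKey("a-typical-looking-key") // or .averageKeySize(21)
                .entries(10_000)
                .create()) {
            counts.put("a-typical-looking-key", 42L);
            System.out.println(counts.get("a-typical-looking-key"));
        }
    }
}
```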
See Also: ChronicleMap, ChronicleSetBuilder
| Modifier and Type | Method and Description |
|---|---|
| ChronicleMapBuilder<K,V> | actualChunkSize(int actualChunkSize): Configures the size in bytes of the allocation unit of hash container instances created by this builder. |
| ChronicleMapBuilder<K,V> | actualChunksPerSegment(long actualChunksPerSegment): Configures the actual number of chunks that will be reserved for any single segment of the hash containers created by this builder. |
| ChronicleMapBuilder<K,V> | actualSegments(int actualSegments): Configures the actual number of segments in the hash containers created by this builder. |
| ChronicleMapBuilder<K,V> | aligned64BitMemoryOperationsAtomic(boolean aligned64BitMemoryOperationsAtomic): Specifies whether, on the current combination of platform, OS and JVM, aligned 8-byte reads and writes are atomic. |
| ChronicleMapBuilder<K,V> | allowSegmentTiering(boolean allowSegmentTiering): In addition to maxBloatFactor(1.0), which does not guarantee that segments won't tier (due to bad hash distribution or natural variance), configuring allowSegmentTiering(false) makes Chronicle Hashes created by this builder throw IllegalStateException immediately when some segment overflows. |
| ChronicleMapBuilder<K,V> | averageKey(K averageKey): Configures the average number of bytes taken by the serialized form of keys put into hash containers created by this builder, by serializing the given averageKey using the configured keys marshallers. |
| ChronicleMapBuilder<K,V> | averageKeySize(double averageKeySize): Configures the average number of bytes taken by the serialized form of keys put into hash containers created by this builder. |
| ChronicleMapBuilder<K,V> | averageValue(V averageValue): Configures the average number of bytes taken by the serialized form of values put into maps created by this builder, by serializing the given averageValue using the configured value marshallers. |
| ChronicleMapBuilder<K,V> | averageValueSize(double averageValueSize): Configures the average number of bytes taken by the serialized form of values put into maps created by this builder. |
| ChronicleMapBuilder<K,V> | bytesMarshallerFactory(BytesMarshallerFactory bytesMarshallerFactory): Configures a BytesMarshallerFactory to be used with BytesMarshallableSerializer, which is the default ObjectSerializer, to serialize/deserialize data to/from off-heap memory in hash containers created by this builder. |
| ChronicleMapBuilder<K,V> | checksumEntries(boolean checksumEntries): Configures whether hash containers created by this builder should compute and store entry checksums. |
| ChronicleMapBuilder<K,V> | cleanupRemovedEntries(boolean cleanupRemovedEntries): Configures whether replicated Chronicle Hashes constructed by this builder should completely erase entries removed some time ago. |
| ChronicleMapBuilder<K,V> | clone(): Clones this builder. |
| ChronicleMapBuilder<K,V> | constantKeySizeBySample(K sampleKey): Configures the constant number of bytes taken by the serialized form of keys put into hash containers created by this builder. |
| ChronicleMapBuilder<K,V> | constantValueSizeBySample(V sampleValue): Configures the constant number of bytes taken by the serialized form of values put into maps created by this builder. |
| ChronicleMap<K,V> | create(): Creates a new hash container, storing its data in off-heap memory, not mapped to any file. |
| ChronicleMap<K,V> | createPersistedTo(File file): Opens a hash container residing in the specified file, or creates a new one if the file does not yet exist, and maps its off-heap memory to the file. |
| ChronicleMapBuilder<K,V> | defaultValue(V defaultValue): Specifies the value to be put for each key queried in the acquireUsing() method, if the key is absent in the map created by this builder. |
| ChronicleMapBuilder<K,V> | defaultValueProvider(DefaultValueProvider<K,V> defaultValueProvider): Specifies the function used to obtain a value for the key during acquireUsing() calls, if the key is absent in the map created by this builder. |
| ChronicleMapBuilder<K,V> | entries(long entries): Configures the target number of entries that is going to be inserted into the hash containers created by this builder. |
| ChronicleMapBuilder<K,V> | entriesPerSegment(long entriesPerSegment): Configures the actual maximum number of entries that could be inserted into any single segment of the hash containers created by this builder. |
| ChronicleMapBuilder<K,V> | entryAndValueAlignment(Alignment alignment): Configures the alignment strategy of the address in memory of entries and, independently, of the address in memory of values within entries, in ChronicleMaps created by this builder. |
| ChronicleMapBuilder<K,V> | entryOperations(MapEntryOperations<K,V,?> entryOperations): Injects your SPI code around basic ChronicleMap operations with entries: removing entries, replacing the entries' value and inserting new entries. |
| boolean | equals(Object o) |
| int | hashCode() |
| ChronicleMapBuilder<K,V> | immutableKeys(): Specifies that key objects queried with the hash containers created by this builder are inherently immutable. |
| ChronicleHashInstanceBuilder<ChronicleMap<K,V>> | instance() |
| ChronicleMapBuilder<K,V> | keyDeserializationFactory(ObjectFactory<? extends K> keyDeserializationFactory): Configures the factory used to create a new key instance, if the key class is a Byteable, BytesMarshallable or Externalizable subclass, or the key type is eligible for data value generation, or the configured custom key reader implements DeserializationFactoryConfigurableBytesReader, in maps created by this builder. |
| ChronicleMapBuilder<K,V> | keyMarshaller(BytesMarshaller<? super K> keyMarshaller): Configures the BytesMarshaller used to serialize/deserialize keys to/from off-heap memory in hash containers created by this builder. |
| ChronicleMapBuilder<K,V> | keyMarshallers(BytesWriter<? super K> keyWriter, BytesReader<K> keyReader): Configures the marshallers used to serialize/deserialize keys to/from off-heap memory in hash containers created by this builder. |
| ChronicleMapBuilder<K,V> | keySizeMarshaller(SizeMarshaller keySizeMarshaller): Configures the marshaller used to serialize actual key sizes to off-heap memory in hash containers created by this builder. |
| ChronicleMapBuilder<K,V> | mapMethods(MapMethods<K,V,?> mapMethods): Injects your SPI around the logic of all ChronicleMap operations with individual keys: from Map.containsKey(java.lang.Object) to ChronicleMap.acquireUsing(K, V) and ConcurrentMap.merge(K, V, java.util.function.BiFunction<? super V, ? super V, ? extends V>). |
| ChronicleMapBuilder<K,V> | maxBloatFactor(double maxBloatFactor): Configures the maximum number of times the hash containers created by this builder are allowed to grow in size beyond the configured target number of entries. |
| ChronicleMapBuilder<K,V> | maxChunksPerEntry(int maxChunksPerEntry): Configures how many chunks a single entry, inserted into ChronicleHashes created by this builder, could take. |
| ChronicleMapBuilder<K,V> | minSegments(int minSegments): Sets the minimum number of segments in hash containers constructed by this builder. |
| ChronicleMapBuilder<K,V> | nonTieredSegmentsPercentile(double nonTieredSegmentsPercentile): Configures the probabilistic fraction of segments which shouldn't become tiered, if the Chronicle Hash size is ChronicleHashBuilder.entries(long), assuming the hash code distribution of the keys inserted into the configured Chronicle Hash is good. |
| ChronicleMapBuilder<K,V> | objectSerializer(ObjectSerializer objectSerializer): Configures the serializer used to serialize/deserialize data to/from off-heap memory when the specified class doesn't implement a specific serialization interface like Externalizable or BytesMarshallable (for example, if data is loosely typed and just Object is specified as the data class), or for nullable data, and if a custom marshaller is not configured, in hash containers created by this builder. |
| static <K,V> ChronicleMapBuilder<K,V> | of(Class<K> keyClass, Class<V> valueClass): Returns a new ChronicleMapBuilder instance which is able to create maps with the specified key and value classes. |
| net.openhft.chronicle.hash.ChronicleHashBuilderPrivateAPI<K> | privateAPI(): Deprecated. Don't use the private API in client code. |
| ChronicleMapBuilder<K,V> | putReturnsNull(boolean putReturnsNull): Configures whether the maps created by this ChronicleMapBuilder should return null instead of previous mapped values on ChronicleMap.put(key, value) calls. |
| ChronicleMapBuilder<K,V> | remoteOperations(MapRemoteOperations<K,V,?> remoteOperations) |
| ChronicleMapBuilder<K,V> | removedEntryCleanupTimeout(long removedEntryCleanupTimeout, TimeUnit unit): Configures the timeout after which entries marked as removed in the Chronicle Hash constructed by this builder are allowed to be completely removed from the data structure. |
| ChronicleMapBuilder<K,V> | removeReturnsNull(boolean removeReturnsNull): Configures whether the maps created by this ChronicleMapBuilder should return null instead of the last mapped value on ChronicleMap.remove(key) calls. |
| ChronicleMapBuilder<K,V> | replication(byte identifier) |
| ChronicleMapBuilder<K,V> | replication(byte identifier, TcpTransportAndNetworkConfig tcpTransportAndNetwork): Shortcut for replication(SimpleReplication.builder().tcpTransportAndNetwork(tcpTransportAndNetwork).createWithId(identifier)). |
| ChronicleMapBuilder<K,V> | replication(SingleChronicleHashReplication replication): Configures replication of the hash containers created by this builder. |
| ChronicleMapBuilder<K,V> | timeProvider(TimeProvider timeProvider): Configures a time provider, used by hash containers created by this builder, for the needs of the replication consensus protocol (conflicting data update resolution). |
| String | toString() |
| ChronicleMapBuilder<K,V> | valueDeserializationFactory(ObjectFactory<V> valueDeserializationFactory): Configures the factory used to create a new value instance, if the value class is a Byteable, BytesMarshallable or Externalizable subclass, or the value type is eligible for data value generation, or the configured custom value reader is an instance of DeserializationFactoryConfigurableBytesReader, in maps created by this builder. |
| ChronicleMapBuilder<K,V> | valueMarshaller(BytesMarshaller<? super V> valueMarshaller): Configures the BytesMarshaller used to serialize/deserialize values to/from off-heap memory in maps created by this builder. |
| ChronicleMapBuilder<K,V> | valueMarshallers(BytesWriter<V> valueWriter, BytesReader<V> valueReader): Configures the marshallers used to serialize/deserialize values to/from off-heap memory in maps created by this builder. |
| ChronicleMapBuilder<K,V> | valueSizeMarshaller(SizeMarshaller valueSizeMarshaller): Configures the marshaller used to serialize actual value sizes to off-heap memory in maps created by this builder. |
public static <K,V> ChronicleMapBuilder<K,V> of(@NotNull Class<K> keyClass, @NotNull Class<V> valueClass)

Returns a new ChronicleMapBuilder instance which is able to create maps with the specified key and value classes.

Type Parameters:
K - key type of the maps, created by the returned builder
V - value type of the maps, created by the returned builder

Parameters:
keyClass - class object used to infer the key type and discover its properties via reflection
valueClass - class object used to infer the value type and discover its properties via reflection

public ChronicleMapBuilder<K,V> clone()

Clones this builder. ChronicleHashBuilders are mutable and changed on each configuration method call. Original and cloned builders are independent.

Specified by: clone in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
Overrides: clone in class Object
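Since builders are mutable, clone() lets one builder serve as a shared template whose copies diverge safely. A hedged sketch assuming the net.openhft.chronicle.map artifact is available (names illustrative):

```java
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public class CloneExample {
    public static void main(String[] args) {
        ChronicleMapBuilder<CharSequence, Long> template = ChronicleMapBuilder
                .of(CharSequence.class, Long.class)
                .averageKeySize(16);

        // clone() before diverging: each configuration call mutates the builder
        // it is invoked on, so the template itself stays untouched.
        ChronicleMapBuilder<CharSequence, Long> small = template.clone().entries(1_000);
        ChronicleMapBuilder<CharSequence, Long> large = template.clone().entries(1_000_000);

        try (ChronicleMap<CharSequence, Long> a = small.create();
             ChronicleMap<CharSequence, Long> b = large.create()) {
            a.put("k", 1L);
            b.put("k", 2L);
            // created maps are independent of each other and of the builders
        }
    }
}
```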
public net.openhft.chronicle.hash.ChronicleHashBuilderPrivateAPI<K> privateAPI()

Deprecated. Don't use the private API in client code.

Specified by: privateAPI in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
public ChronicleMapBuilder<K,V> averageKeySize(double averageKeySize)

Configures the average number of bytes taken by the serialized form of keys put into hash containers created by this builder. In many cases, ChronicleHashBuilder.averageKey(Object) might be easier to use and more reliable. If the key size is always the same, call the ChronicleHashBuilder.constantKeySizeBySample(Object) method instead of this one.

The ChronicleHashBuilder implementation heuristically chooses the actual chunk size based on this configuration, which, however, might result in quite high internal fragmentation, i.e. losses because only an integral number of chunks could be allocated for an entry. If you want to avoid this, you should manually configure the actual chunk size in addition to this average key size configuration, which is anyway needed.

If the key is a boxed primitive type or a Byteable subclass, i.e. if the key size is known statically, it is automatically accounted for and shouldn't be specified by the user.

Calling this method clears any previous ChronicleHashBuilder.constantKeySizeBySample(Object) and ChronicleHashBuilder.averageKey(Object) configurations.

Example: if keys in your map(s) are English words in String form, and the average English word length is 5.1, configure an average key size of 6:

    ChronicleMap<String, LongValue> wordFrequencies = ChronicleMapBuilder
        .of(String.class, LongValue.class)
        .entries(50000)
        .averageKeySize(6)
        .create();

(Note that 6 is chosen as the average key size in bytes even though strings in Java are UTF-16 encoded (and each character takes 2 bytes on-heap), because the default off-heap String encoding in ChronicleMap is UTF-8.)

Specified by: averageKeySize in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>

Parameters:
averageKeySize - the average size of the key

Throws:
IllegalStateException - if the key size is known statically and shouldn't be configured by the user
IllegalArgumentException - if the given averageKeySize is non-positive

See Also: averageKey(Object), constantKeySizeBySample(Object), averageValueSize(double), actualChunkSize(int)
public ChronicleMapBuilder<K,V> averageKey(K averageKey)

Configures the average number of bytes taken by the serialized form of keys put into hash containers created by this builder, by serializing the given averageKey using the configured keys marshallers. In some cases, ChronicleHashBuilder.averageKeySize(double) might be easier to use than constructing the "average key". If the key size is always the same, call the ChronicleHashBuilder.constantKeySizeBySample(Object) method instead of this one.

The ChronicleHashBuilder implementation heuristically chooses the actual chunk size based on this configuration, which, however, might result in quite high internal fragmentation, i.e. losses because only an integral number of chunks could be allocated for an entry. If you want to avoid this, you should manually configure the actual chunk size in addition to this average key configuration, which is anyway needed.

If the key is a boxed primitive type or a Byteable subclass, i.e. if the key size is known statically, it is automatically accounted for and shouldn't be specified by the user.

Calling this method clears any previous ChronicleHashBuilder.constantKeySizeBySample(Object) and ChronicleHashBuilder.averageKeySize(double) configurations.

Specified by: averageKey in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>

Parameters:
averageKey - the average (by footprint in serialized form) key that is going to be put into the hash containers created by this builder

Throws:
NullPointerException - if the given averageKey is null

See Also: averageKeySize(double), constantKeySizeBySample(Object), averageValue(Object), actualChunkSize(int)
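Instead of estimating a byte count, you can hand the builder a representative key and let the configured marshaller measure it. A hedged sketch (key contents illustrative, net.openhft.chronicle.map assumed on the classpath):

```java
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public class AverageKeyExample {
    public static void main(String[] args) {
        // Postcodes as keys: lengths vary slightly, so pass a typical sample
        // key, serialized by the key marshaller to measure its footprint.
        try (ChronicleMap<CharSequence, Long> populationByPostcode = ChronicleMapBuilder
                .of(CharSequence.class, Long.class)
                .averageKey("SW1A 1AA")
                .entries(100_000)
                .create()) {
            populationByPostcode.put("SW1A 1AA", 25L);
        }
    }
}
```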
public ChronicleMapBuilder<K,V> constantKeySizeBySample(K sampleKey)

Configures the constant number of bytes taken by the serialized form of keys put into hash containers created by this builder. As determined from the given sampleKey, all keys should take the same number of bytes in serialized form as this sample object.

If keys are of a boxed primitive type or a Byteable subclass, i.e. if the key size is known statically, it is automatically accounted for and this method shouldn't be called.

If the key size varies, the ChronicleHashBuilder.averageKeySize(double) method should be called instead of this one.

Calling this method clears any previous ChronicleHashBuilder.averageKey(Object) and ChronicleHashBuilder.averageKeySize(double) configurations.

For example, if your keys are Git commit hashes:

    Map<byte[], String> gitCommitMessagesByHash =
        ChronicleMapBuilder.of(byte[].class, String.class)
            .constantKeySizeBySample(new byte[20])
            .immutableKeys()
            .create();

Specified by: constantKeySizeBySample in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>

Parameters:
sampleKey - the sample key

See Also: averageKeySize(double), averageKey(Object), constantValueSizeBySample(Object)
public ChronicleMapBuilder<K,V> averageValueSize(double averageValueSize)

Configures the average number of bytes taken by the serialized form of values put into maps created by this builder. In many cases, averageValue(Object) might be easier to use and more reliable. If the value size is always the same, call the constantValueSizeBySample(Object) method instead of this one.

The ChronicleHashBuilder implementation heuristically chooses the actual chunk size based on this configuration and the key size, which, however, might result in quite high internal fragmentation, i.e. losses because only an integral number of chunks could be allocated for an entry. If you want to avoid this, you should manually configure the actual chunk size in addition to this average value size configuration, which is anyway needed.

If values are of a boxed primitive type or a Byteable subclass, i.e. if the value size is known statically, it is automatically accounted for and shouldn't be specified by the user.

Calling this method clears any previous constantValueSizeBySample(Object) and averageValue(Object) configurations.

Parameters:
averageValueSize - the number of bytes taken by the serialized form of values

Throws:
IllegalStateException - if the value size is known statically and shouldn't be configured by the user
IllegalArgumentException - if the given averageValueSize is non-positive

See Also: averageValue(Object), constantValueSizeBySample(Object), averageKeySize(double), actualChunkSize(int)
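A minimal sketch of this configuration for variable-length values (the 64-byte average is an assumption for illustration, not a recommendation):

```java
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public class AverageValueSizeExample {
    public static void main(String[] args) {
        // Long keys are statically sized; CharSequence values are not,
        // so the builder is given their average serialized footprint.
        try (ChronicleMap<Long, CharSequence> messages = ChronicleMapBuilder
                .of(Long.class, CharSequence.class)
                .averageValueSize(64) // average serialized value size in bytes
                .entries(50_000)
                .create()) {
            messages.put(1L, "first message");
        }
    }
}
```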
public ChronicleMapBuilder<K,V> averageValue(V averageValue)

Configures the average number of bytes taken by the serialized form of values put into maps created by this builder, by serializing the given averageValue using the configured value marshallers. In some cases, averageValueSize(double) might be easier to use than constructing the "average value". If the value size is always the same, call the constantValueSizeBySample(Object) method instead of this one.

The ChronicleHashBuilder implementation heuristically chooses the actual chunk size based on this configuration and the key size, which, however, might result in quite high internal fragmentation, i.e. losses because only an integral number of chunks could be allocated for an entry. If you want to avoid this, you should manually configure the actual chunk size in addition to this average value configuration, which is anyway needed.

If values are of a boxed primitive type or a Byteable subclass, i.e. if the value size is known statically, it is automatically accounted for and shouldn't be specified by the user.

Calling this method clears any previous constantValueSizeBySample(Object) and averageValueSize(double) configurations.

Parameters:
averageValue - the average (by footprint in serialized form) value that is going to be put into the maps created by this builder

Throws:
NullPointerException - if the given averageValue is null

See Also: averageValueSize(double), constantValueSizeBySample(Object), averageKey(Object), actualChunkSize(int)
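As with averageKey(Object), a representative value can stand in for a byte estimate. A hedged sketch (the sample SQL string is purely illustrative):

```java
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public class AverageValueExample {
    public static void main(String[] args) {
        // The given sample is serialized by the value marshaller
        // to derive the average value footprint.
        try (ChronicleMap<Long, CharSequence> queries = ChronicleMapBuilder
                .of(Long.class, CharSequence.class)
                .averageValue("SELECT * FROM orders WHERE id = ?")
                .entries(10_000)
                .create()) {
            queries.put(1L, "SELECT 1");
        }
    }
}
```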
public ChronicleMapBuilder<K,V> constantValueSizeBySample(V sampleValue)

Configures the constant number of bytes taken by the serialized form of values put into maps created by this builder. As determined from the given sampleValue, all values should take the same number of bytes in serialized form as this sample object.

If values are of a boxed primitive type or a Byteable subclass, i.e. if the value size is known statically, it is automatically accounted for and this method shouldn't be called.

If the value size varies, the averageValue(Object) or averageValueSize(double) method should be called instead of this one.

Calling this method clears any previous averageValue(Object) and averageValueSize(double) configurations.

Parameters:
sampleValue - the sample value

See Also: averageValueSize(double), averageValue(Object), constantKeySizeBySample(Object)
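Mirroring the Git-hash key example above, fixed-size values get the same treatment. A hedged sketch, assuming 16-byte digests (e.g. MD5) as values:

```java
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public class ConstantValueSizeExample {
    public static void main(String[] args) {
        // Every digest serializes to exactly 16 bytes, so a sample suffices.
        try (ChronicleMap<CharSequence, byte[]> md5ByPath = ChronicleMapBuilder
                .of(CharSequence.class, byte[].class)
                .averageKeySize(32) // file paths vary in length
                .constantValueSizeBySample(new byte[16])
                .entries(10_000)
                .create()) {
            md5ByPath.put("/etc/hosts", new byte[16]);
        }
    }
}
```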
public ChronicleMapBuilder<K,V> actualChunkSize(int actualChunkSize)

Configures the size in bytes of the allocation unit of hash container instances created by this builder. ChronicleMap and ChronicleSet store their data off-heap, so it is required to serialize keys (and values, in the ChronicleMap case), unless they are direct Byteable instances. Serialized key bytes (plus serialized value bytes, in the ChronicleMap case) plus some metadata bytes comprise the "entry space" which the ChronicleMap or ChronicleSet should allocate. So the chunk size is the minimum allocation portion in the hash containers created by this builder. E.g. if the chunk size is 100, the created container could only allocate 100, 200, 300... bytes for an entry. If, say, 150 bytes of entry space are required by an entry, 200 bytes will be allocated: 150 used and 50 wasted. This is called internal fragmentation.

To minimize memory overuse and improve speed, you should pay decent attention to this configuration. Alternatively, you can just trust the heuristics and not configure the chunk size.

Specify the chunk size so that most entries would take from 5 to several dozen chunks. However, remember that operations with entries that span several chunks are a bit slower than with entries which take a single chunk. Particularly avoid entries taking more than 64 chunks.

Example: values in your ChronicleMap are adjacency lists of some social graph, where nodes are represented as long ids and adjacency lists are serialized in an efficient manner, for example as long[] arrays. The typical number of connections is 100-300, the maximum is 3000. In this case a chunk size of 30 * (8 bytes for each id) = 240 bytes would be a good choice:

    Map<Long, long[]> socialGraph = ChronicleMapBuilder
        .of(Long.class, long[].class)
        .entries(1_000_000_000L)
        .averageValueSize(150 * 8) // 150 is average adjacency list size
        .actualChunkSize(30 * 8)   // average 5-6 chunks per entry
        .create();

This is a low-level configuration. The configured number of bytes is used as-is, without anything like rounding up to a multiple of 8 or 16, or any other adjustment.

Specified by: actualChunkSize in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>

Parameters:
actualChunkSize - the "chunk size" in bytes

Throws:
IllegalStateException - if the sizes of both keys and values of maps created by this builder are constant, hence the chunk size shouldn't be configured by the user

See Also: entryAndValueAlignment(Alignment), entries(long), maxChunksPerEntry(int)
public ChronicleMapBuilder<K,V> maxChunksPerEntry(int maxChunksPerEntry)

Configures how many chunks a single entry, inserted into ChronicleHashes created by this builder, could take. If you try to insert a larger entry, an IllegalStateException is thrown. This is useful as a self-check that you configured the chunk size correctly and that your keys (and values, in the ChronicleMap case) take the expected number of bytes. For example, if ChronicleHashBuilder.constantKeySizeBySample(Object) is configured, or the key size is statically known to be constant (boxed primitives, data value generated implementations, Byteables, etc.), and the same holds for value objects in the ChronicleMap case, max chunks per entry is configured to 1, to ensure keys and values are actually constantly sized.

Specified by: maxChunksPerEntry in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>

Parameters:
maxChunksPerEntry - how many chunks a single entry could span at most

See Also: ChronicleHashBuilder.actualChunkSize(int)
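The self-check described above can be sketched as follows (sizes are illustrative assumptions; net.openhft.chronicle.map is assumed on the classpath):

```java
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public class MaxChunksPerEntryExample {
    public static void main(String[] args) {
        // Both key and value sizes are constant, so every entry must fit
        // in exactly one chunk; anything larger indicates a sizing bug.
        try (ChronicleMap<byte[], byte[]> fixed = ChronicleMapBuilder
                .of(byte[].class, byte[].class)
                .constantKeySizeBySample(new byte[8])
                .constantValueSizeBySample(new byte[16])
                .maxChunksPerEntry(1) // IllegalStateException if an entry needs more
                .entries(1_000)
                .create()) {
            fixed.put(new byte[8], new byte[16]);
        }
    }
}
```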
public ChronicleMapBuilder<K,V> entryAndValueAlignment(Alignment alignment)

Configures the alignment strategy of the address in memory of entries and, independently, of the address in memory of values within entries, in ChronicleMaps created by this builder.

Useful when values of the map are updated intensively, particularly fields with volatile access, because it doesn't work well if the value crosses cache lines. Also, on some (nowadays rare) architectures any misaligned memory access is more expensive than an aligned one.

If values couldn't reference off-heap memory (i.e. they are not Byteable or "data value generated"), the alignment configuration makes no sense and is forbidden.

Default is Alignment.NO_ALIGNMENT if values couldn't reference off-heap memory, otherwise chosen heuristically (configure explicitly to be sure and to compare performance in your case).

Parameters:
alignment - the new alignment of the maps constructed by this builder

Throws:
IllegalStateException - if values of maps created by this builder couldn't reference off-heap memory

public ChronicleMapBuilder<K,V> entries(long entries)
Configures the target number of entries that is going to be inserted into the hash containers created by this builder. If ChronicleHashBuilder.maxBloatFactor(double) is configured to 1.0 (which is the default), this number of entries is also the maximum. If you try to insert more entries than the configured maxBloatFactor multiplied by the given number of entries, an IllegalStateException might be thrown.

This configuration should represent the expected maximum number of entries in a stable state; maxBloatFactor - the maximum bloat-up coefficient during exceptional bursts.

To be more precise: try to configure the entries so that the created hash container is going to serve about 99% of requests being less than or equal to this number of entries in size.

You shouldn't put an additional margin over the actual target number of entries. This bad practice was popularized by the HashMap.HashMap(int) and HashSet.HashSet(int) constructors, which accept a capacity that should be multiplied by the load factor to obtain the actual maximum expected number of entries. ChronicleMap and ChronicleSet don't have a notion of load factor.

The default target number of entries is 2^20 (~ 1 million).

Specified by: entries in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>

Parameters:
entries - the target size of the maps or sets created by this builder

See Also: ChronicleHashBuilder.maxBloatFactor(double)
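The no-margin rule above can be sketched as follows (the 1M figure is an illustrative steady-state estimate, not a recommendation):

```java
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public class EntriesExample {
    public static void main(String[] args) {
        // Expect at most ~1M entries in a stable state: configure exactly
        // that; no HashMap-style load-factor over-provisioning is needed.
        try (ChronicleMap<Long, Long> balances = ChronicleMapBuilder
                .of(Long.class, Long.class)
                .entries(1_000_000)
                .create()) {
            balances.put(1L, 100L);
        }
    }
}
```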
public ChronicleMapBuilder<K,V> entriesPerSegment(long entriesPerSegment)

Configures the actual maximum number of entries that could be inserted into any single segment of the hash containers created by this builder. Together with ChronicleHashBuilder.actualSegments(int), this replaces a single ChronicleHashBuilder.entries(long) configuration.

This is a low-level configuration.

Specified by: entriesPerSegment in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>

Parameters:
entriesPerSegment - the actual maximum number of entries per segment in the hash containers created by this builder

See Also: ChronicleHashBuilder.entries(long), ChronicleHashBuilder.actualSegments(int)
public ChronicleMapBuilder<K,V> actualChunksPerSegment(long actualChunksPerSegment)

Configures the actual number of chunks that will be reserved for any single segment of the hash containers created by this builder. This is a lower-level version of ChronicleHashBuilder.entriesPerSegment(long). It makes sense only if ChronicleHashBuilder.actualChunkSize(int), ChronicleHashBuilder.actualSegments(int) and ChronicleHashBuilder.entriesPerSegment(long) are also configured manually.

Specified by: actualChunksPerSegment in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>

Parameters:
actualChunksPerSegment - the actual number of chunks reserved per segment in the hash containers created by this builder

public ChronicleMapBuilder<K,V> minSegments(int minSegments)
Sets the minimum number of segments in hash containers constructed by this builder. See the corresponding concurrencyLevel concept in ConcurrentHashMap.

Specified by: minSegments in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>

Parameters:
minSegments - the minimum number of segments in containers constructed by this builder

public ChronicleMapBuilder<K,V> actualSegments(int actualSegments)
Configures the actual number of segments in the hash containers created by this builder, overriding the heuristic based on the ChronicleHashBuilder.entries(long) call.

This is a low-level configuration. The configured number is used as-is, without anything like rounding up to the closest power of 2.

Specified by: actualSegments in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>

Parameters:
actualSegments - the actual number of segments in hash containers created by this builder

See Also: ChronicleHashBuilder.minSegments(int), ChronicleHashBuilder.entriesPerSegment(long)
public ChronicleMapBuilder<K,V> putReturnsNull(boolean putReturnsNull)

Configures whether the maps created by this ChronicleMapBuilder should return null instead of previous mapped values on ChronicleMap.put(key, value) calls.

Map.put() returns the previous value, functionality which is rarely used but fairly cheap for simple in-process, on-heap implementations like HashMap. But an off-heap collection has to create a new object and deserialize the data from off-heap memory. A collection hiding remote queries over the network should send the value back in addition to that. It's expensive for something you probably don't use.

By default, of course, ChronicleMap conforms to the general Map contract and returns the previous mapped value on put() calls.

Parameters:
putReturnsNull - true if you want ChronicleMap.put() to not return the value that was replaced but instead return null

See Also: removeReturnsNull(boolean)
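The cost-saving behaviour described above can be sketched as follows (a minimal sketch; net.openhft.chronicle.map assumed on the classpath):

```java
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public class PutReturnsNullExample {
    public static void main(String[] args) {
        try (ChronicleMap<CharSequence, Long> m = ChronicleMapBuilder
                .of(CharSequence.class, Long.class)
                .averageKeySize(8)
                .entries(100)
                .putReturnsNull(true) // skip deserializing the replaced value
                .create()) {
            m.put("k", 1L);
            Long previous = m.put("k", 2L); // null here, despite an existing mapping
            System.out.println(previous);
            System.out.println(m.get("k")); // the new value, 2, is stored as usual
        }
    }
}
```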
public ChronicleMapBuilder<K,V> removeReturnsNull(boolean removeReturnsNull)

Configures whether the maps created by this ChronicleMapBuilder should return null instead of the last mapped value on ChronicleMap.remove(key) calls.

Map.remove() returns the previous value, functionality which is rarely used but fairly cheap for simple in-process, on-heap implementations like HashMap. But an off-heap collection has to create a new object and deserialize the data from off-heap memory. A collection hiding remote queries over the network should send the value back in addition to that. It's expensive for something you probably don't use.

By default, of course, ChronicleMap conforms to the general Map contract and returns the mapped value on remove() calls.

Parameters:
removeReturnsNull - true if you want ChronicleMap.remove() to not return the value of the removed entry but instead return null

See Also: putReturnsNull(boolean)
public ChronicleMapBuilder<K,V> maxBloatFactor(double maxBloatFactor)
ChronicleHashBuilder
ChronicleHashBuilder.entries(long)
should represent the expected maximum number of entries in a stable
state, maxBloatFactor
- the maximum bloat up coefficient, during exceptional bursts.
This configuration should be used for self-checking. Even if you configure impossibly
large maxBloatFactor
, the created ChronicleHash
, of cause, will be still
operational, and even won't allocate any extra resources before they are actually needed.
But when the ChronicleHash
grows beyond the configured ChronicleHashBuilder.entries(long)
, it
could start to serve requests progressively slower. If you insert new entries into
ChronicleHash
infinitely, due to a bug in your business logic code, or the
ChronicleHash configuration, and if you configure the ChronicleHash to grow infinitely, you
will have a terribly slow and fat, but operational application, instead of a fail with
IllegalStateException
, which will quickly show you, that there is a bug in you
application.
The default maximum bloat factor factor is 1.0
- i. e. "no bloat is expected".
It is strongly advised not to configure maxBloatFactor
to more than 10.0
,
almost certainly, you either should configure ChronicleHash
es completely differently,
or this data structure doesn't fit you case.
maxBloatFactor
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
maxBloatFactor
- the maximum number of times the created hash container is supposed
to bloat up beyond the ChronicleHashBuilder.entries(long)
ChronicleHashBuilder.entries(long)
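A hedged configuration sketch (not from the source; the sizes below are illustrative and the snippet assumes the Chronicle Map jar on the classpath):

```java
// Sketch: allow the map to bloat up to 2x beyond the configured entries()
// during exceptional bursts, but fail fast beyond that.
ChronicleMap<Long, String> map = ChronicleMapBuilder
        .of(Long.class, String.class)
        .entries(1_000_000)        // expected steady-state maximum
        .averageValueSize(64)
        .maxBloatFactor(2.0)       // tolerate up to ~2,000,000 entries
        .create();
```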
public ChronicleMapBuilder<K,V> allowSegmentTiering(boolean allowSegmentTiering)
ChronicleHashBuilder
maxBloatFactor(1.0)
, which does not
guarantee that segments won't tier (due to bad hash distribution or natural variance),
configuring allowSegmentTiering(false)
makes Chronicle Hashes, created by this
builder, throw IllegalStateException
immediately when some segment overflows.
This is useful precisely for testing hash distribution and the variance of segment filling.
Default is true
, segments are allowed to tier.
When configured to false
, ChronicleHashBuilder.maxBloatFactor(double)
configuration becomes
irrelevant, because effectively no bloat is allowed.
allowSegmentTiering
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
allowSegmentTiering
- if true
, when a segment overflows a next tier
is allocated to accommodate new entries
public ChronicleMapBuilder<K,V> nonTieredSegmentsPercentile(double nonTieredSegmentsPercentile)
ChronicleHashBuilder
ChronicleHashBuilder.entries(long)
, assuming hash code distribution of the keys, inserted
into configured Chronicle Hash, is good.
The last caveat means that the configured percentile affects segment size relying on the Poisson distribution law, assuming inserted entries (keys) fall into all segments randomly. If, e. g., the keys inserted into the Chronicle Hash are purposely selected to collide on a certain range of hash code bits, so that they all fall into the same segment (a DoS attacker might do this), that segment is obviously going to be tiered.
This configuration affects the actual number of segments, if ChronicleHashBuilder.entries(long)
and
ChronicleHashBuilder.entriesPerSegment(long)
or ChronicleHashBuilder.actualChunksPerSegment(long)
are configured. It
affects the actual number of entries/chunks per segment, if ChronicleHashBuilder.entries(long)
and
ChronicleHashBuilder.actualSegments(int)
are configured. If all 4 configurations, mentioned in this
paragraph, are specified, nonTieredSegmentsPercentile
is irrelevant.
Default value is 0.99999, i. e. if the hash code distribution of the keys is good, only one segment in 100K is tiered on average. If your segment size is small and you want to reduce the memory footprint of the Chronicle Hash (probably compromising latency percentiles), you might want to configure a more "relaxed" value, e. g. 0.99.
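The Poisson reasoning above can be illustrated with plain Java. The sketch below is not Chronicle's actual sizing code; it just computes, for an assumed average of entries per segment, the smallest segment capacity that keeps the configured fraction of segments non-tiered:

```java
public class PoissonSegmentSizing {
    /** P(X <= k) for X ~ Poisson(lambda), summed term by term. */
    static double poissonCdf(double lambda, int k) {
        double term = Math.exp(-lambda); // P(X = 0)
        double sum = term;
        for (int i = 1; i <= k; i++) {
            term *= lambda / i;          // P(X = i) from P(X = i - 1)
            sum += term;
        }
        return sum;
    }

    /** Smallest capacity k such that P(X <= k) >= percentile, i. e. a
     *  segment of this capacity overflows with probability <= 1 - percentile. */
    static int requiredCapacity(double lambda, double percentile) {
        int k = (int) lambda;
        while (poissonCdf(lambda, k) < percentile)
            k++;
        return k;
    }

    public static void main(String[] args) {
        // With ~32 entries expected per segment on average, sizing for the
        // 0.99999 percentile needs noticeably more headroom than for 0.99,
        // which is the memory-vs-tiering trade-off described above.
        System.out.println(requiredCapacity(32, 0.99));
        System.out.println(requiredCapacity(32, 0.99999));
    }
}
```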
nonTieredSegmentsPercentile
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
nonTieredSegmentsPercentile
- the fraction of segments which shouldn't be tiered
public ChronicleMapBuilder<K,V> timeProvider(TimeProvider timeProvider)
ChronicleHashBuilder
Default time provider uses system time (System.currentTimeMillis()
) in
microsecond precision.
timeProvider
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
timeProvider
- a new time provider for replication needs
ChronicleHashBuilder.replication(SingleChronicleHashReplication)
public ChronicleMapBuilder<K,V> removedEntryCleanupTimeout(long removedEntryCleanupTimeout, TimeUnit unit)
ChronicleHashBuilder
remove()
on the key is called, the corresponding entry
is not immediately erased from the data structure, to let the distributed system eventually
converge on some value for this key (or converge on the fact, that this key is removed).
Chronicle Hash watches the entries at runtime, and if one is removed and not updated
in any way for this removedEntryCleanupTimeout
, Chronicle is allowed to remove this
entry completely from the data structure. This timeout should depend on your distributed
system topology and typical replication latencies, which should be determined experimentally.
Default timeout is 1 minute.
removedEntryCleanupTimeout
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
removedEntryCleanupTimeout
- timeout, after which stale removed entries could be erased
from the Chronicle Hash data structure completely
unit
- the time unit in which the timeout is given
ChronicleHashBuilder.cleanupRemovedEntries(boolean)
,
ReplicableEntry.doRemoveCompletely()
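A hedged configuration sketch (not from the source; the 5-minute timeout and replication identifier are illustrative, and the snippet assumes the Chronicle Map jar on the classpath):

```java
// Sketch: in a replicated setup, keep removed-entry tombstones for
// 5 minutes before Chronicle is allowed to erase them completely,
// giving the distributed system time to converge.
ChronicleMap<String, String> map = ChronicleMapBuilder
        .of(String.class, String.class)
        .entries(500_000)
        .averageKeySize(32).averageValueSize(128)
        .removedEntryCleanupTimeout(5, TimeUnit.MINUTES)
        .replication((byte) 1)     // illustrative network-wide identifier
        .create();
```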
public ChronicleMapBuilder<K,V> cleanupRemovedEntries(boolean cleanupRemovedEntries)
ChronicleHashBuilder
ChronicleHashBuilder.removedEntryCleanupTimeout(
long, TimeUnit)
for more details on this mechanism.
Default value is true
-- old removed entries are erased after the configured removedEntryCleanupTimeout (1 minute by default).
cleanupRemovedEntries
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
cleanupRemovedEntries
- if stale removed entries should be purged from the Chronicle Hash
ChronicleHashBuilder.removedEntryCleanupTimeout(long, TimeUnit)
,
ReplicableEntry.doRemoveCompletely()
public ChronicleMapBuilder<K,V> bytesMarshallerFactory(BytesMarshallerFactory bytesMarshallerFactory)
ChronicleHashBuilder
BytesMarshallerFactory
to be used with BytesMarshallableSerializer
, which is a default ObjectSerializer
,
to serialize/deserialize data to/from off-heap memory in hash containers, created by this
builder.
Default BytesMarshallerFactory
is an instance of VanillaBytesMarshallerFactory
. This is a convenience configuration method; it has no effect
on the resulting hash containers if custom data
marshallers are configured, if the data types extend one of the specific serialization interfaces
recognized by this builder (e. g. Externalizable
or BytesMarshallable
), or if an
ObjectSerializer
is configured.
bytesMarshallerFactory
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
bytesMarshallerFactory
- the marshaller factory to be used with the default ObjectSerializer
, i. e. BytesMarshallableSerializer
ChronicleHashBuilder.objectSerializer(ObjectSerializer)
public ChronicleMapBuilder<K,V> objectSerializer(ObjectSerializer objectSerializer)
Externalizable
or BytesMarshallable
(for example, if data is loosely typed and just
Object
is specified as the data class), or nullable data, and if custom marshaller is
not configured, in hash containers, created by
this builder. Please read ObjectSerializer
docs for more info and available options.
Default serializer is BytesMarshallableSerializer
, configured with the specified
or default BytesMarshallerFactory
.
Example:
Map<Key, Value> map =
ChronicleMapBuilder.of(Key.class, Value.class)
.entries(1_000_000)
.averageKeySize(50).averageValueSize(200)
// this class hasn't been implemented yet, just an example
.objectSerializer(new KryoObjectSerializer())
.create();
This serializer is used to serialize both keys and values, if they both require this:
loosely typed, nullable, and custom key and
value marshallers are not configured.
objectSerializer
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
objectSerializer
- the serializer used to serialize loosely typed or nullable data if
custom marshaller is not configured
ChronicleHashBuilder.bytesMarshallerFactory(BytesMarshallerFactory)
,
ChronicleHashBuilder.keyMarshaller(BytesMarshaller)
public ChronicleMapBuilder<K,V> keyMarshaller(@NotNull BytesMarshaller<? super K> keyMarshaller)
ChronicleHashBuilder
BytesMarshaller
used to serialize/deserialize keys to/from off-heap
memory in hash containers, created by this builder. See the
section about serialization in the ChronicleMap manual for more information.
keyMarshaller
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
keyMarshaller
- the marshaller used to serialize keys
ChronicleHashBuilder.keyMarshallers(BytesWriter, BytesReader)
,
ChronicleHashBuilder.objectSerializer(ObjectSerializer)
public ChronicleMapBuilder<K,V> keyMarshallers(@NotNull BytesWriter<? super K> keyWriter, @NotNull BytesReader<K> keyReader)
ChronicleHashBuilder
Configuring marshalling this way results in a slightly more compact in-memory layout of
the map, compared to a single-interface configuration: ChronicleHashBuilder.keyMarshaller(BytesMarshaller)
.
Passing BytesInterop
(which is a subinterface of BytesWriter
) as the first
argument is supported, and is even more advantageous from a performance perspective.
keyMarshallers
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
keyWriter
- the new key object → Bytes
writer (interop) strategy
keyReader
- the new Bytes
→ key object reader strategy
ChronicleHashBuilder.keyMarshaller(BytesMarshaller)
public ChronicleMapBuilder<K,V> keySizeMarshaller(@NotNull SizeMarshaller keySizeMarshaller)
ChronicleHashBuilder
The default key size marshaller is so-called "stop bit encoding" marshalling. If a constant key size is configured, or defaulted because the key
type is always constantly sized and the ChronicleHashBuilder
implementation knows about it, this
configuration takes no effect, because a special SizeMarshaller
implementation, which
doesn't actually do any marshalling and just returns the known constant size on SizeMarshaller.readSize(Bytes)
calls, is used instead of any SizeMarshaller
configured using this method.
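To make "stop bit encoding" concrete, here is a minimal sketch of the general scheme (7 data bits per byte, with the high bit as a continuation flag) in plain Java. It is not Chronicle's actual SizeMarshaller implementation, just an illustration of why small sizes cost a single byte:

```java
import java.io.ByteArrayOutputStream;

public class StopBit {
    /** Encode a non-negative long, 7 data bits per byte; a set high bit
     *  means "more bytes follow" (the last byte has it clear). */
    static byte[] encode(long value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((value & ~0x7FL) != 0) {
            out.write((int) ((value & 0x7F) | 0x80)); // continuation byte
            value >>>= 7;
        }
        out.write((int) value);                       // final (stop) byte
        return out.toByteArray();
    }

    /** Decode the little-endian 7-bit groups back into a long. */
    static long decode(byte[] bytes) {
        long value = 0;
        int shift = 0;
        for (byte b : bytes) {
            value |= (long) (b & 0x7F) << shift;
            shift += 7;
        }
        return value;
    }

    public static void main(String[] args) {
        // Sizes under 128 cost one byte; larger sizes grow gradually.
        System.out.println(encode(100).length);       // 1
        System.out.println(encode(1_000_000).length); // 3
    }
}
```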
keySizeMarshaller
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
keySizeMarshaller
- the new marshaller, used to serialize actual key sizes to off-heap
memory
public ChronicleMapBuilder<K,V> keyDeserializationFactory(@NotNull ObjectFactory<? extends K> keyDeserializationFactory)
ChronicleHashBuilder
Byteable
, BytesMarshallable
or Externalizable
subclass, or key type is
eligible for data value generation, or configured custom key reader implements DeserializationFactoryConfigurableBytesReader
, in maps, created by this builder.
Default key deserialization factory is NewInstanceObjectFactory
, which creates a
new key instance using Class.newInstance()
default constructor. You could provide an
AllocateInstanceObjectFactory
, which uses Unsafe.allocateInstance(Class)
(you
might want to do this for better performance or if you don't want to initialize fields), or a
factory which calls a key class constructor with some arguments, or a factory which
internally delegates to an instance pool or a ThreadLocal
, to reduce allocations.
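The ThreadLocal-delegating option can be sketched in plain Java. The ObjectFactory interface below merely mirrors the shape described above and is hypothetical, not Chronicle's actual type; the point is how per-thread reuse avoids an allocation per deserialization:

```java
import java.util.function.Supplier;

public class PooledFactoryDemo {
    /** Hypothetical stand-in mirroring the ObjectFactory shape described above. */
    interface ObjectFactory<E> {
        E create();
    }

    /** A factory that hands out one instance per thread instead of
     *  allocating a fresh object on every call. Only safe when the
     *  deserializer fully overwrites the instance's state. */
    static <E> ObjectFactory<E> threadLocalFactory(Supplier<E> allocator) {
        ThreadLocal<E> pool = ThreadLocal.withInitial(allocator);
        return pool::get;
    }

    public static void main(String[] args) {
        ObjectFactory<StringBuilder> factory =
                threadLocalFactory(StringBuilder::new);
        // Within one thread, the same instance is reused.
        System.out.println(factory.create() == factory.create()); // true
    }
}
```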
keyDeserializationFactory
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
keyDeserializationFactory
- the key factory used to produce instances to deserialize
data in
IllegalStateException
- if it is not possible to apply the deserialization factory to the
key deserializers currently configured for this builder
valueDeserializationFactory(ObjectFactory)
public ChronicleMapBuilder<K,V> immutableKeys()
ChronicleHashBuilder
ChronicleMap
or ChronicleSet
are not required
to be immutable, as in ordinary Map
or Set
implementations, because they are
serialized off-heap. However, ChronicleMap
and ChronicleSet
implementations
can benefit from the knowledge that keys are not mutated between queries.
By default, ChronicleHashBuilder
s detect immutability automatically only for a very
few standard JDK types (for example, String
); it is not recommended to rely on
ChronicleHashBuilder
to be smart enough about this.
immutableKeys
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
public ChronicleMapBuilder<K,V> aligned64BitMemoryOperationsAtomic(boolean aligned64BitMemoryOperationsAtomic)
ChronicleHashBuilder
aligned64BitMemoryOperationsAtomic
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
aligned64BitMemoryOperationsAtomic
- true
if aligned 8-byte memory operations
are atomic
public ChronicleMapBuilder<K,V> checksumEntries(boolean checksumEntries)
ChronicleHashBuilder
By default, persisted hash containers, created by
ChronicleMapBuilder
do compute and store entry checksums, but hash containers,
created in the process memory via ChronicleHashBuilder.create()
- don't.
checksumEntries
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
checksumEntries
- if entry checksums should be computed and stored
ChecksumEntry
public ChronicleMapBuilder<K,V> valueMarshaller(@NotNull BytesMarshaller<? super V> valueMarshaller)
BytesMarshaller
used to serialize/deserialize values to/from off-heap
memory in maps, created by this builder. See the
section about serialization in the ChronicleMap manual for more information.
valueMarshaller
- the marshaller used to serialize values
valueMarshallers(BytesWriter, BytesReader)
,
objectSerializer(ObjectSerializer)
,
keyMarshaller(BytesMarshaller)
public ChronicleMapBuilder<K,V> valueMarshallers(@NotNull BytesWriter<V> valueWriter, @NotNull BytesReader<V> valueReader)
valueMarshaller(
BytesMarshaller)
. Passing BytesInterop
instead of plain BytesWriter
is, of course, possible, but currently pointless for values.
valueWriter
- the new value object → Bytes
writer (interop) strategy
valueReader
- the new Bytes
→ value object reader strategy
valueMarshaller(BytesMarshaller)
,
valueSizeMarshaller(SizeMarshaller)
,
keyMarshallers(BytesWriter, BytesReader)
public ChronicleMapBuilder<K,V> valueSizeMarshaller(@NotNull SizeMarshaller valueSizeMarshaller)
The default value size marshaller is so-called "stop bit encoding" marshalling, unless constantValueSizeBySample(Object)
is configured or the builder statically knows the value size is
constant -- in these cases special constant size marshalling is used by default.
valueSizeMarshaller
- the new marshaller, used to serialize actual value sizes to
off-heap memory
keySizeMarshaller(SizeMarshaller)
public ChronicleMapBuilder<K,V> valueDeserializationFactory(@NotNull ObjectFactory<V> valueDeserializationFactory)
Byteable
, BytesMarshallable
or Externalizable
subclass, or value type
is eligible for data value generation, or configured custom value reader is an instance of
DeserializationFactoryConfigurableBytesReader
, in maps, created by this builder.
Default value deserialization factory is NewInstanceObjectFactory
, which creates a
new value instance using Class.newInstance()
default constructor. You could provide
an AllocateInstanceObjectFactory
, which uses Unsafe.allocateInstance(Class)
(you might want to do this for better performance or if you don't want to initialize fields),
or a factory which calls a value class constructor with some arguments, or a factory which
internally delegates to an instance pool or a ThreadLocal
, to reduce allocations.
valueDeserializationFactory
- the value factory used to produce instances to deserialize
data in
IllegalStateException
- if it is not possible to apply the deserialization factory to the value
deserializers currently configured for this builder
keyDeserializationFactory(ObjectFactory)
public ChronicleMapBuilder<K,V> defaultValue(V defaultValue)
acquireUsing()
method, if the key is absent in the map, created by this builder.
This configuration overrides any previous defaultValueProvider(
DefaultValueProvider)
configuration to this builder.
defaultValue
- the default value to be put to the map for absent keys during acquireUsing()
calls
defaultValueProvider(DefaultValueProvider)
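A hedged configuration sketch of this option (not from the source; types and sizes are illustrative, and the snippet assumes the Chronicle Map jar on the classpath):

```java
// Sketch: acquireUsing() inserts "N/A" for keys absent from the map,
// instead of leaving the decision to the caller.
ChronicleMap<String, CharSequence> map = ChronicleMapBuilder
        .of(String.class, CharSequence.class)
        .entries(10_000)
        .averageKeySize(16).averageValueSize(16)
        .defaultValue("N/A")
        .create();

StringBuilder using = new StringBuilder();
// For an absent key, "N/A" is put to the map and read back into `using`.
CharSequence value = map.acquireUsing("missingKey", using);
```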
public ChronicleMapBuilder<K,V> defaultValueProvider(@NotNull DefaultValueProvider<K,V> defaultValueProvider)
acquireUsing()
calls, if the key is absent in the map, created by this builder.
This configuration overrides any previous defaultValue(Object)
configuration
to this builder.
defaultValueProvider
- the strategy to obtain a default value for the absent key
defaultValue(Object)
public ChronicleMapBuilder<K,V> replication(SingleChronicleHashReplication replication)
ChronicleHashBuilder
By default, hash containers created by this builder don't replicate their data.
This method call overrides all previous replication configurations of this builder, made
either by this method or ChronicleHashBuilder.replication(byte, TcpTransportAndNetworkConfig)
shortcut
method.
replication
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
replication
- the replication config
ChronicleHashInstanceBuilder.replicated(SingleChronicleHashReplication)
,
ChronicleHashBuilder.replication(byte, TcpTransportAndNetworkConfig)
public ChronicleMapBuilder<K,V> replication(byte identifier)
replication
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
public ChronicleMapBuilder<K,V> replication(byte identifier, TcpTransportAndNetworkConfig tcpTransportAndNetwork)
ChronicleHashBuilder
replication(SimpleReplication.builder() .tcpTransportAndNetwork(tcpTransportAndNetwork).createWithId(identifier))
.
replication
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
identifier
- the network-wide identifier of the containers, created by this
builder
tcpTransportAndNetwork
- configuration of the TCP connection and network
ChronicleHashBuilder.replication(SingleChronicleHashReplication)
,
ChronicleHashInstanceBuilder.replicated(byte, TcpTransportAndNetworkConfig)
public ChronicleHashInstanceBuilder<ChronicleMap<K,V>> instance()
instance
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
public ChronicleMap<K,V> createPersistedTo(File file) throws IOException
ChronicleHashBuilder
Multiple containers could give access to the same data simultaneously, either inside a single JVM or across processes. Access is synchronized correctly across all instances, i. e. hash container mapping the data from the first JVM isn't able to modify the data, concurrently accessed from the second JVM by another hash container instance, mapping the same data.
On the container's close()
the data isn't removed; it remains on
disk and is available to be opened again (given the same file name), including during a different JVM
run.
This method is a shortcut for instance().persistedTo(file).create()
.
createPersistedTo
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
file
- the file with an existing hash container, or the desired location of a new off-heap
persisted hash container
IOException
- if any IO error, related to off-heap memory allocation or file mapping,
or establishing replication connections, occurs
ChronicleHash.file()
,
ChronicleHash.close()
,
ChronicleHashBuilder.create()
,
ChronicleHashInstanceBuilder.persistedTo(File)
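A hedged usage sketch (not from the source; the file path is illustrative and the snippet assumes the Chronicle Map jar on the classpath):

```java
// Sketch: two processes calling createPersistedTo() with the same file
// map the same off-heap data and see each other's entries.
File file = new File("/tmp/shared-map.dat"); // illustrative path
ChronicleMap<Long, String> map = ChronicleMapBuilder
        .of(Long.class, String.class)
        .entries(1_000_000)
        .averageValueSize(100)
        .createPersistedTo(file);  // creates the file, or opens an existing one

map.put(1L, "hello");
map.close();                       // the data remains on disk after close()
```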
public ChronicleMap<K,V> create()
ChronicleHashBuilder
After ChronicleHash.close()
is called on the returned container, after the container
object is collected during GC, or on JVM shutdown, the off-heap memory used by the returned
container is freed.
This method is a shortcut for instance().create()
.
create
in interface ChronicleHashBuilder<K,ChronicleMap<K,V>,ChronicleMapBuilder<K,V>>
ChronicleHashBuilder.createPersistedTo(File)
,
ChronicleHashBuilder.instance()
public ChronicleMapBuilder<K,V> entryOperations(MapEntryOperations<K,V,?> entryOperations)
ChronicleMap
's operations with entries:
removing entries, replacing the entries' value and inserting the new entry.
This affects the behaviour of ordinary map.put(), map.remove(), etc. calls, as well as removals and value replacements during iterations, remote map calls, and internal replication operations.
public ChronicleMapBuilder<K,V> mapMethods(MapMethods<K,V,?> mapMethods)
ChronicleMap
's operations with individual keys:
from Map.containsKey(java.lang.Object)
to ChronicleMap.acquireUsing(K, V)
and
ConcurrentMap.merge(K, V, java.util.function.BiFunction<? super V, ? super V, ? extends V>)
.
This affects the behaviour of ordinary map calls, as well as remote calls.
public ChronicleMapBuilder<K,V> remoteOperations(MapRemoteOperations<K,V,?> remoteOperations)
Copyright © 2015. All rights reserved.