Class HazelcastCoreClusterProperties

- All Implemented Interfaces:
  Serializable
- Since:
  6.4.0

Constructor Summary

- HazelcastCoreClusterProperties()

Method Summary
- int getAsyncBackupCount()
  Hazelcast supports both synchronous and asynchronous backups.
- int getBackupCount()
  To provide data safety, Hazelcast allows you to specify the number of backup copies you want to have.
- int getCpMemberCount()
  CP Subsystem is a component of a Hazelcast cluster that builds a strongly consistent layer for a set of distributed data structures.
- String getEvictionPolicy()
  Hazelcast supports policy-based eviction for distributed maps.
- String getInstanceName()
  The instance name.
- String getLoggingType()
  Hazelcast has a flexible logging configuration and doesn't depend on any logging framework except JDK logging.
- String getMapMergePolicy()
  Define how data items in Hazelcast maps are merged together from source to destination.
- int getMaxNoHeartbeatSeconds()
  Max timeout of heartbeat in seconds for a node to assume it is dead.
- int getMaxSize()
  The maximum size of the map.
- String getMaxSizePolicy()
  The policy used to determine when the map has reached its maximum size.
- String getPartitionMemberGroupType()
  With PartitionGroupConfig, you can control how primary and backup partitions are mapped to physical members.
- int getTimeout()
  Connection timeout in seconds for the TCP/IP config and members joining the cluster.
- boolean isAsyncFillup()
  Used when replication is turned on with isReplicated().
- boolean isReplicated()
  A Replicated Map is a distributed key-value data structure where the data is replicated to all members in the cluster.
- HazelcastCoreClusterProperties setAsyncBackupCount(int asyncBackupCount)
  Hazelcast supports both synchronous and asynchronous backups.
- HazelcastCoreClusterProperties setAsyncFillup(boolean asyncFillup)
  Used when replication is turned on with isReplicated().
- HazelcastCoreClusterProperties setBackupCount(int backupCount)
  To provide data safety, Hazelcast allows you to specify the number of backup copies you want to have.
- HazelcastCoreClusterProperties setCpMemberCount(int cpMemberCount)
  CP Subsystem is a component of a Hazelcast cluster that builds a strongly consistent layer for a set of distributed data structures.
- HazelcastCoreClusterProperties setEvictionPolicy(String evictionPolicy)
  Hazelcast supports policy-based eviction for distributed maps.
- HazelcastCoreClusterProperties setInstanceName(String instanceName)
  The instance name.
- HazelcastCoreClusterProperties setLoggingType(String loggingType)
  Hazelcast has a flexible logging configuration and doesn't depend on any logging framework except JDK logging.
- HazelcastCoreClusterProperties setMapMergePolicy(String mapMergePolicy)
  Define how data items in Hazelcast maps are merged together from source to destination.
- HazelcastCoreClusterProperties setMaxNoHeartbeatSeconds(int maxNoHeartbeatSeconds)
  Max timeout of heartbeat in seconds for a node to assume it is dead.
- HazelcastCoreClusterProperties setMaxSize(int maxSize)
  Sets the maximum size of the map.
- HazelcastCoreClusterProperties setMaxSizePolicy(String maxSizePolicy)
  The policy used to determine when the map has reached its maximum size.
- HazelcastCoreClusterProperties setPartitionMemberGroupType(String partitionMemberGroupType)
  With PartitionGroupConfig, you can control how primary and backup partitions are mapped to physical members.
- HazelcastCoreClusterProperties setReplicated(boolean replicated)
  A Replicated Map is a distributed key-value data structure where the data is replicated to all members in the cluster.
- HazelcastCoreClusterProperties setTimeout(int timeout)
  Connection timeout in seconds for the TCP/IP config and members joining the cluster.
-
Constructor Details
-
HazelcastCoreClusterProperties
public HazelcastCoreClusterProperties()
-
-
Method Details
-
isAsyncFillup
public boolean isAsyncFillup()

Used when replication is turned on with isReplicated().

If a new member joins the cluster, there are two ways you can handle the initial provisioning that is executed to replicate all existing values to the new member. Each involves how you configure the async fill-up.
- First, you can set the async fill-up to true, which does not block reads while the fill-up operation is underway. That way, you have immediate access on the new member, but it will take time until all the values are eventually accessible. Not-yet-replicated values are returned as non-existing (null).
- Second, you can configure a synchronous initial fill-up (by setting the async fill-up to false), which blocks every read or write access to the map until the fill-up operation is finished. Use this with caution since it might block your application from operating.
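The difference between the two fill-up modes can be sketched with a plain Java map (illustrative only, not the Hazelcast API; the keys and values are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class AsyncFillupSketch {
    public static void main(String[] args) {
        // Data already present in the cluster before the new member joined.
        Map<String, String> clusterData = Map.of("a", "1", "b", "2", "c", "3");

        // The new member starts empty and is filled up in the background.
        Map<String, String> newMember = new HashMap<>();
        newMember.put("a", clusterData.get("a")); // only "a" replicated so far

        // Async fill-up: reads are served immediately, but keys that have not
        // yet been replicated look absent (null) until the fill-up completes.
        System.out.println(newMember.get("a"));
        System.out.println(newMember.get("b"));
    }
}
```

With a synchronous fill-up, the second read would instead block until replication finished rather than returning null.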
-
isReplicated
public boolean isReplicated()

A Replicated Map is a distributed key-value data structure where the data is replicated to all members in the cluster. It provides full replication of entries to all members for high-speed access. A Replicated Map does not partition data (it does not spread data to different cluster members); instead, it replicates the data to all members. Replication leads to higher memory consumption. However, a Replicated Map has faster read and write access since the data is available on all members. Writes could take place on local/remote members in order to provide write-order, eventually being replicated to all other members.

If you have a large cluster or very high occurrences of updates, the Replicated Map may not scale linearly as expected since it has to replicate update operations to all members in the cluster. Since the replication of updates is performed in an asynchronous manner, Hazelcast recommends you enable back pressure in case your system has high occurrences of updates.
Note that Replicated Map does not guarantee eventual consistency because there are some edge cases that fail to provide consistency.
Replicated Map uses the internal partition system of Hazelcast in order to serialize updates happening on the same key at the same time. This happens by sending updates of the same key to the same Hazelcast member in the cluster.
Due to the asynchronous nature of replication, a Hazelcast member could die before successfully replicating a "write" operation to other members after sending the "write completed" response to its caller during the write process. In this scenario, Hazelcast’s internal partition system promotes one of the replicas of the partition as the primary one. The new primary partition does not have the latest "write" since the dead member could not successfully replicate the update.
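The memory trade-off described above can be illustrated with some assumed numbers (back-of-the-envelope arithmetic, not Hazelcast internals): a Replicated Map stores every entry on every member, while a partitioned map stores each entry on one primary plus its configured backups.

```java
public class ReplicationCostSketch {
    public static void main(String[] args) {
        int members = 5;          // assumed cluster size
        int entries = 1_000;      // assumed number of map entries
        int backupCount = 1;      // assumed backup count for a partitioned map

        // Replicated Map: every member holds every entry.
        int replicatedCopies = entries * members;

        // Partitioned map: primary copy + backupCount backups per entry.
        int partitionedCopies = entries * (1 + backupCount);

        System.out.println(replicatedCopies);
        System.out.println(partitionedCopies);
    }
}
```

The gap widens linearly with cluster size, which is why large clusters with heavy update traffic may prefer a partitioned structure.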
-
getPartitionMemberGroupType
With PartitionGroupConfig, you can control how primary and backup partitions are mapped to physical members. Hazelcast will always place partitions on different partition groups so as to provide redundancy. Accepted values are: PER_MEMBER, HOST_AWARE, CUSTOM, ZONE_AWARE, SPI. In all cases a partition will never be created on the same group. If there are more partitions defined than there are partition groups, then only those partitions, up to the number of partition groups, will be created. For example, if you define 2 backups, then with the primary, that makes 3. If you have only two partition groups, only two will be created.
- PER_MEMBER Partition Groups: This is the default partition scheme and is used if no other scheme is defined. Each member is in a group of its own.
- HOST_AWARE Partition Groups: In this scheme, a group corresponds to a host, based on its IP address. Partitions will not be written to any other members on the same host. This scheme provides good redundancy when multiple instances are being run on the same host.
- CUSTOM Partition Groups: In this scheme, IP addresses, or IP address ranges, are allocated to groups. Partitions are not written to the same group. This is very useful for ensuring partitions are written to different racks or even availability zones.
- ZONE_AWARE Partition Groups: In this scheme, groups are allocated according to the metadata provided by the Discovery SPI. Partitions are not written to the same group. This is very useful for ensuring partitions are written to availability zones or different racks without providing the IP addresses in the config ahead of time.
- SPI Partition Groups: In this scheme, groups are allocated according to the implementation provided by the Discovery SPI.
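The HOST_AWARE idea can be sketched by grouping member addresses by host (a toy illustration, not the Hazelcast implementation; the addresses are made up):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class HostAwareSketch {
    public static void main(String[] args) {
        // Three members, two of which run on the same physical host.
        List<String> members = List.of(
            "10.0.0.1:5701", "10.0.0.1:5702",
            "10.0.0.2:5701");

        // HOST_AWARE: one partition group per host (the part before the colon),
        // so a backup is never placed in the same group as its primary.
        Map<String, List<String>> groups = members.stream()
            .collect(Collectors.groupingBy(m -> m.split(":")[0]));

        System.out.println(groups.size());
    }
}
```

Three members collapse into two groups here, which is why only two copies of a partition could be placed even if more backups were configured.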
-
getLoggingType
Hazelcast has a flexible logging configuration and doesn't depend on any logging framework except JDK logging. It has built-in adaptors for a number of logging frameworks and also supports custom loggers by providing logging interfaces. To use the built-in adaptors, set this setting to one of the predefined types below.
- jdk: JDK logging
- log4j: Log4j
- slf4j: Slf4j
- none: Disable logging
-
getMaxNoHeartbeatSeconds
public int getMaxNoHeartbeatSeconds()

Max timeout of heartbeat in seconds for a node to assume it is dead.
-
getInstanceName
The instance name.
-
getMapMergePolicy
Define how data items in Hazelcast maps are merged together from source to destination. By default, merges map entries from source to destination if they don't exist in the destination map. Accepted values are:
- PUT_IF_ABSENT: Merges data structure entries from source to destination if they don't exist in the destination data structure.
- HIGHER_HITS: Merges data structure entries from source to destination data structure if the source entry has more hits than the destination one.
- DISCARD: Merges only entries from the destination data structure and discards all entries from the source data structure.
- PASS_THROUGH: Merges data structure entries from source to destination directly unless the merging entry is null.
- EXPIRATION_TIME: Merges data structure entries from source to destination data structure if the source entry will expire later than the destination entry. This policy can only be used if the clocks of the nodes are in sync.
- LATEST_UPDATE: Merges data structure entries from source to destination data structure if the source entry was updated more frequently than the destination entry. This policy can only be used if the clocks of the nodes are in sync.
- LATEST_ACCESS: Merges data structure entries from source to destination data structure if the source entry has been accessed more recently than the destination entry. This policy can only be used if the clocks of the nodes are in sync.
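The default PUT_IF_ABSENT rule maps naturally onto java.util.Map.putIfAbsent, which can serve as a rough sketch of the merge behavior (illustrative only; the key names are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class MergePolicySketch {
    public static void main(String[] args) {
        // Destination already has a value for "k1"; source has "k1" and "k2".
        Map<String, String> destination = new HashMap<>(Map.of("k1", "dest"));
        Map<String, String> source = Map.of("k1", "src", "k2", "src");

        // PUT_IF_ABSENT: destination wins on conflict, new keys are merged in.
        source.forEach(destination::putIfAbsent);

        System.out.println(destination.get("k1"));
        System.out.println(destination.get("k2"));
    }
}
```

A PASS_THROUGH policy would instead overwrite "k1" with the source value, and DISCARD would drop "k2" entirely.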
-
getMaxSize
public int getMaxSize()

The maximum size of the map.
-
getMaxSizePolicy
The policy used to determine when the map has reached its maximum size. Accepted values are:
- FREE_HEAP_PERCENTAGE: Policy based on minimum free JVM heap memory percentage per JVM.
- FREE_HEAP_SIZE: Policy based on minimum free JVM heap memory in megabytes per JVM.
- FREE_NATIVE_MEMORY_PERCENTAGE: Policy based on minimum free native memory percentage per Hazelcast instance.
- FREE_NATIVE_MEMORY_SIZE: Policy based on minimum free native memory in megabytes per Hazelcast instance.
- PER_NODE: Policy based on maximum number of entries stored per data structure (map, cache, etc.) on each Hazelcast instance.
- PER_PARTITION: Policy based on maximum number of entries stored per data structure (map, cache, etc.) on each partition.
- USED_HEAP_PERCENTAGE: Policy based on maximum used JVM heap memory percentage per data structure (map, cache, etc.) on each Hazelcast instance.
- USED_HEAP_SIZE: Policy based on maximum used JVM heap memory in megabytes per data structure (map, cache, etc.) on each Hazelcast instance.
- USED_NATIVE_MEMORY_PERCENTAGE: Policy based on maximum used native memory percentage per data structure (map, cache, etc.) on each Hazelcast instance.
- USED_NATIVE_MEMORY_SIZE: Policy based on maximum used native memory in megabytes per data structure (map, cache, etc.) on each Hazelcast instance.
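A USED_HEAP_PERCENTAGE-style check can be sketched as simple arithmetic (the heap and map sizes below are assumed numbers; this is not how Hazelcast measures heap usage internally):

```java
public class MaxSizePolicySketch {
    public static void main(String[] args) {
        int maxSizePercent = 25;                 // assumed configured max size
        long heapBytes = 1_024L * 1_024 * 1_024; // assumed 1 GiB JVM heap
        long mapBytes  = 300L * 1_024 * 1_024;   // assumed ~300 MiB used by the map

        // Eviction is due once the map's share of the heap exceeds the limit.
        double usedPercent = 100.0 * mapBytes / heapBytes;
        System.out.println(usedPercent > maxSizePercent);
    }
}
```

Under PER_NODE the comparison would be an entry count per instance instead of a heap percentage, but the shape of the check is the same.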
-
getEvictionPolicy
Hazelcast supports policy-based eviction for distributed maps. Currently supported policies are LRU (Least Recently Used), LFU (Least Frequently Used), and NONE.
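The LRU policy can be sketched with java.util.LinkedHashMap in access-order mode (a toy cache, not Hazelcast's eviction implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSketch {
    public static void main(String[] args) {
        int maxSize = 2; // assumed map capacity
        // accessOrder = true reorders entries on get(), oldest-accessed first.
        Map<String, String> lru = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > maxSize; // evict once capacity is exceeded
            }
        };

        lru.put("a", "1");
        lru.put("b", "2");
        lru.get("a");       // touch "a" so "b" becomes least recently used
        lru.put("c", "3");  // exceeds capacity: evicts "b"

        System.out.println(lru.keySet());
    }
}
```

An LFU variant would track access counts per entry and evict the least frequently used one instead.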
getBackupCount
public int getBackupCount()

To provide data safety, Hazelcast allows you to specify the number of backup copies you want to have. That way, data on a cluster member will be copied onto other member(s). To create synchronous backups, select the number of backup copies. When this count is 1, a map entry will have its backup on one other member in the cluster. If you set it to 2, then a map entry will have its backup on two other members. You can set it to 0 if you do not want your entries to be backed up, e.g., if performance is more important than backing up. The maximum value for the backup count is 6. Sync backup operations have a blocking cost which may lead to latency issues.
-
getAsyncBackupCount
public int getAsyncBackupCount()

Hazelcast supports both synchronous and asynchronous backups. By default, backup operations are synchronous. In this case, backup operations block operations until backups are successfully copied to backup members (or deleted from backup members in case of remove) and acknowledgements are received. Therefore, backups are updated before a put operation is completed, provided that the cluster is stable. Asynchronous backups, on the other hand, do not block operations. They are fire-and-forget and do not require acknowledgements; the backup operations are performed at some point in time.
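The copy counts described in the two backup sections add up in a one-line calculation (the values below are assumed, not defaults):

```java
public class BackupCountSketch {
    public static void main(String[] args) {
        int backupCount = 2;       // assumed synchronous backups (max is 6)
        int asyncBackupCount = 1;  // assumed asynchronous backups

        // Each entry lives on its primary owner plus every backup member.
        int copiesPerEntry = 1 + backupCount + asyncBackupCount;
        System.out.println(copiesPerEntry);
    }
}
```

Only the synchronous copies are guaranteed before a put completes; the asynchronous copy catches up at some later point.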
getTimeout
public int getTimeout()

Connection timeout in seconds for the TCP/IP config and members joining the cluster.
-
getCpMemberCount
public int getCpMemberCount()

CP Subsystem is a component of a Hazelcast cluster that builds a strongly consistent layer for a set of distributed data structures. Its data structures are CP with respect to the CAP principle, i.e., they always maintain linearizability and prefer consistency over availability during network partitions. Besides network partitions, CP Subsystem withstands server and client failures. Not all members of a Hazelcast cluster necessarily take part in CP Subsystem; the number of Hazelcast members that take part in CP Subsystem is specified here. CP Subsystem must have at least 3 CP members.
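The "at least 3 CP members" requirement follows from majority-based consensus. A sketch of the arithmetic (assumed Raft-style majority math, not Hazelcast internals):

```java
public class CpQuorumSketch {
    public static void main(String[] args) {
        int cpMemberCount = 3; // the minimum allowed CP member count

        // A consensus group needs a strict majority of members to make
        // progress, so it tolerates floor((N - 1) / 2) member failures.
        int majority = cpMemberCount / 2 + 1;
        int tolerableFailures = (cpMemberCount - 1) / 2;

        System.out.println(majority);
        System.out.println(tolerableFailures);
    }
}
```

With 3 CP members the subsystem survives one failure; with fewer than 3 a single failure could already break the majority, which is why smaller counts are not allowed.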
setAsyncFillup
Used when replication is turned on with isReplicated().

If a new member joins the cluster, there are two ways you can handle the initial provisioning that is executed to replicate all existing values to the new member. Each involves how you configure the async fill-up.
- First, you can set the async fill-up to true, which does not block reads while the fill-up operation is underway. That way, you have immediate access on the new member, but it will take time until all the values are eventually accessible. Not-yet-replicated values are returned as non-existing (null).
- Second, you can configure a synchronous initial fill-up (by setting the async fill-up to false), which blocks every read or write access to the map until the fill-up operation is finished. Use this with caution since it might block your application from operating.
- Returns:
this
.
-
setReplicated
A Replicated Map is a distributed key-value data structure where the data is replicated to all members in the cluster. It provides full replication of entries to all members for high-speed access. A Replicated Map does not partition data (it does not spread data to different cluster members); instead, it replicates the data to all members. Replication leads to higher memory consumption. However, a Replicated Map has faster read and write access since the data is available on all members. Writes could take place on local/remote members in order to provide write-order, eventually being replicated to all other members.

If you have a large cluster or very high occurrences of updates, the Replicated Map may not scale linearly as expected since it has to replicate update operations to all members in the cluster. Since the replication of updates is performed in an asynchronous manner, Hazelcast recommends you enable back pressure in case your system has high occurrences of updates.
Note that Replicated Map does not guarantee eventual consistency because there are some edge cases that fail to provide consistency.
Replicated Map uses the internal partition system of Hazelcast in order to serialize updates happening on the same key at the same time. This happens by sending updates of the same key to the same Hazelcast member in the cluster.
Due to the asynchronous nature of replication, a Hazelcast member could die before successfully replicating a "write" operation to other members after sending the "write completed" response to its caller during the write process. In this scenario, Hazelcast’s internal partition system promotes one of the replicas of the partition as the primary one. The new primary partition does not have the latest "write" since the dead member could not successfully replicate the update.
- Returns:
this
.
-
setPartitionMemberGroupType
With PartitionGroupConfig, you can control how primary and backup partitions are mapped to physical members. Hazelcast will always place partitions on different partition groups so as to provide redundancy. Accepted values are: PER_MEMBER, HOST_AWARE, CUSTOM, ZONE_AWARE, SPI. In all cases a partition will never be created on the same group. If there are more partitions defined than there are partition groups, then only those partitions, up to the number of partition groups, will be created. For example, if you define 2 backups, then with the primary, that makes 3. If you have only two partition groups, only two will be created.
- PER_MEMBER Partition Groups: This is the default partition scheme and is used if no other scheme is defined. Each member is in a group of its own.
- HOST_AWARE Partition Groups: In this scheme, a group corresponds to a host, based on its IP address. Partitions will not be written to any other members on the same host. This scheme provides good redundancy when multiple instances are being run on the same host.
- CUSTOM Partition Groups: In this scheme, IP addresses, or IP address ranges, are allocated to groups. Partitions are not written to the same group. This is very useful for ensuring partitions are written to different racks or even availability zones.
- ZONE_AWARE Partition Groups: In this scheme, groups are allocated according to the metadata provided by the Discovery SPI. Partitions are not written to the same group. This is very useful for ensuring partitions are written to availability zones or different racks without providing the IP addresses in the config ahead of time.
- SPI Partition Groups: In this scheme, groups are allocated according to the implementation provided by the Discovery SPI.
- Returns:
this
.
-
setLoggingType
Hazelcast has a flexible logging configuration and doesn't depend on any logging framework except JDK logging. It has built-in adaptors for a number of logging frameworks and also supports custom loggers by providing logging interfaces. To use the built-in adaptors, set this setting to one of the predefined types below.
- jdk: JDK logging
- log4j: Log4j
- slf4j: Slf4j
- none: Disable logging
- Returns:
this
.
-
setMaxNoHeartbeatSeconds
Max timeout of heartbeat in seconds for a node to assume it is dead.
- Returns:
this
.
-
setInstanceName
The instance name.
- Returns:
this
.
-
setMapMergePolicy
Define how data items in Hazelcast maps are merged together from source to destination. By default, merges map entries from source to destination if they don't exist in the destination map. Accepted values are:
- PUT_IF_ABSENT: Merges data structure entries from source to destination if they don't exist in the destination data structure.
- HIGHER_HITS: Merges data structure entries from source to destination data structure if the source entry has more hits than the destination one.
- DISCARD: Merges only entries from the destination data structure and discards all entries from the source data structure.
- PASS_THROUGH: Merges data structure entries from source to destination directly unless the merging entry is null.
- EXPIRATION_TIME: Merges data structure entries from source to destination data structure if the source entry will expire later than the destination entry. This policy can only be used if the clocks of the nodes are in sync.
- LATEST_UPDATE: Merges data structure entries from source to destination data structure if the source entry was updated more frequently than the destination entry. This policy can only be used if the clocks of the nodes are in sync.
- LATEST_ACCESS: Merges data structure entries from source to destination data structure if the source entry has been accessed more recently than the destination entry. This policy can only be used if the clocks of the nodes are in sync.
- Returns:
this
.
-
setMaxSize
Sets the maximum size of the map.
- Returns:
this
.
-
setMaxSizePolicy
The policy used to determine when the map has reached its maximum size. Accepted values are:
- FREE_HEAP_PERCENTAGE: Policy based on minimum free JVM heap memory percentage per JVM.
- FREE_HEAP_SIZE: Policy based on minimum free JVM heap memory in megabytes per JVM.
- FREE_NATIVE_MEMORY_PERCENTAGE: Policy based on minimum free native memory percentage per Hazelcast instance.
- FREE_NATIVE_MEMORY_SIZE: Policy based on minimum free native memory in megabytes per Hazelcast instance.
- PER_NODE: Policy based on maximum number of entries stored per data structure (map, cache, etc.) on each Hazelcast instance.
- PER_PARTITION: Policy based on maximum number of entries stored per data structure (map, cache, etc.) on each partition.
- USED_HEAP_PERCENTAGE: Policy based on maximum used JVM heap memory percentage per data structure (map, cache, etc.) on each Hazelcast instance.
- USED_HEAP_SIZE: Policy based on maximum used JVM heap memory in megabytes per data structure (map, cache, etc.) on each Hazelcast instance.
- USED_NATIVE_MEMORY_PERCENTAGE: Policy based on maximum used native memory percentage per data structure (map, cache, etc.) on each Hazelcast instance.
- USED_NATIVE_MEMORY_SIZE: Policy based on maximum used native memory in megabytes per data structure (map, cache, etc.) on each Hazelcast instance.
- Returns:
this
.
-
setEvictionPolicy
Hazelcast supports policy-based eviction for distributed maps. Currently supported policies are LRU (Least Recently Used), LFU (Least Frequently Used), and NONE.
- Returns:
this
.
-
setBackupCount
To provide data safety, Hazelcast allows you to specify the number of backup copies you want to have. That way, data on a cluster member will be copied onto other member(s). To create synchronous backups, select the number of backup copies. When this count is 1, a map entry will have its backup on one other member in the cluster. If you set it to 2, then a map entry will have its backup on two other members. You can set it to 0 if you do not want your entries to be backed up, e.g., if performance is more important than backing up. The maximum value for the backup count is 6. Sync backup operations have a blocking cost which may lead to latency issues.
- Returns:
this
.
-
setAsyncBackupCount
Hazelcast supports both synchronous and asynchronous backups. By default, backup operations are synchronous. In this case, backup operations block operations until backups are successfully copied to backup members (or deleted from backup members in case of remove) and acknowledgements are received. Therefore, backups are updated before a put operation is completed, provided that the cluster is stable. Asynchronous backups, on the other hand, do not block operations. They are fire-and-forget and do not require acknowledgements; the backup operations are performed at some point in time.
- Returns:
this
.
-
setTimeout
Connection timeout in seconds for the TCP/IP config and members joining the cluster.
- Returns:
this
.
-
setCpMemberCount
CP Subsystem is a component of a Hazelcast cluster that builds a strongly consistent layer for a set of distributed data structures. Its data structures are CP with respect to the CAP principle, i.e., they always maintain linearizability and prefer consistency over availability during network partitions. Besides network partitions, CP Subsystem withstands server and client failures. Not all members of a Hazelcast cluster necessarily take part in CP Subsystem; the number of Hazelcast members that take part in CP Subsystem is specified here. CP Subsystem must have at least 3 CP members.
- Returns:
this
.
-