@DefaultServiceFactory(value=LockServiceFactory.class) public interface LockService extends Service
LockService provides support for mutually exclusive distributed locks. Each lock can be obtained by only a single node within the cluster; all other nodes that try to obtain the same lock are kept waiting until the lock is released by the owner node, or until the owner node goes down and leaves the cluster.
LockService can be registered and configured in HekateBootstrap with the help of LockServiceFactory, as shown in the example below:
// Prepare lock service factory.
LockServiceFactory factory = new LockServiceFactory()
    // Register some lock regions.
    .withRegion(new LockRegionConfig()
        .withName("region1")
    )
    .withRegion(new LockRegionConfig()
        .withName("region2")
    );

// Start node.
Hekate hekate = new HekateBootstrap()
    .withService(factory)
    .join();

// Access the service.
LockService locks = hekate.locks();
The same configuration can be expressed via the Spring XML Schema support:
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:h="http://www.hekate.io/spring/hekate-core"
       xmlns="http://www.springframework.org/schema/beans"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.hekate.io/spring/hekate-core
           http://www.hekate.io/spring/hekate-core.xsd">
    <h:node id="hekate">
        <!-- Lock service. -->
        <h:locks>
            <!-- Register some lock regions. -->
            <h:region name="region1"/>
            <h:region name="region2"/>
        </h:locks>

        <!-- ...other services... -->
    </h:node>
</beans>
...or with plain Spring beans:
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns="http://www.springframework.org/schema/beans"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean id="hekate" class="io.hekate.spring.bean.HekateSpringBootstrap">
        <property name="services">
            <list>
                <!-- Lock service. -->
                <bean class="io.hekate.lock.LockServiceFactory">
                    <property name="regions">
                        <list>
                            <!-- Register some lock regions. -->
                            <bean class="io.hekate.lock.LockRegionConfig">
                                <property name="name" value="region1"/>
                            </bean>
                            <bean class="io.hekate.lock.LockRegionConfig">
                                <property name="name" value="region2"/>
                            </bean>
                        </list>
                    </property>
                </bean>

                <!-- ...other services... -->
            </list>
        </property>
    </bean>
</beans>
Each lock within the lock service is identified by its name and a LockRegion. A lock name is an arbitrary string that can be constructed dynamically and doesn't require any pre-registration within the lock service. Each LockRegion acts as an independent namespace for locks and must be pre-registered within the LockServiceFactory. Each region can manage an unlimited number of locks.
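For example, a lock name can be derived from runtime data (the user ID below is purely illustrative):
// Hypothetical example: build a lock name dynamically from a user ID.
// Only the region ("region1") must be pre-registered.
long userId = 42;
DistributedLock userLock = hekate.locks().region("region1").get("user.lock." + userId);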
Lock regions provide support for distributing workload among the cluster nodes: only those nodes that have a particular region defined in their configuration will manage locks for that region. Breaking locks into regions makes it possible to configure cluster nodes so that one subset of nodes manages region 'A' while another subset manages region 'B'. Those subsets can intersect, and there are no restrictions on how many regions a particular node manages, as in the sketch below.
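For instance, two nodes can be configured with overlapping region sets (a minimal sketch reusing the factory API from the example above):
// Node 1 manages only region 'A'.
LockServiceFactory node1Locks = new LockServiceFactory()
    .withRegion(new LockRegionConfig().withName("A"));

// Node 2 manages both 'A' and 'B' (region sets of different nodes may intersect).
LockServiceFactory node2Locks = new LockServiceFactory()
    .withRegion(new LockRegionConfig().withName("A"))
    .withRegion(new LockRegionConfig().withName("B"));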
In order to obtain a lock, one must get the LockRegion instance from the LockService and then use that instance to create a named DistributedLock, as in the example below:
// Get lock with name 'example.lock'.
DistributedLock lock = hekate.locks().region("region1").get("example.lock");

// Acquire the lock.
lock.lock();
try {
    // Do some work ...
    thereCanBeOnlyOne();
} finally {
    // Make sure that the lock is always released after the work is done.
    lock.unlock();
}
DistributedLocks are reentrant: it is possible to call lock() while the same lock (same region and name) is already held by the current thread. In that case the lock() method returns immediately, without accessing any remote nodes. Note, however, that every lock() call must be matched by an unlock() call, or the lock will never be released, as illustrated below.
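A sketch of reentrant locking, reusing the lock from the example above:
DistributedLock lock = hekate.locks().region("region1").get("example.lock");

lock.lock();
try {
    // Reentrant call: returns immediately, no remote nodes are accessed.
    lock.lock();
    try {
        // ... critical section ...
    } finally {
        // First unlock: the lock is still held by this thread.
        lock.unlock();
    }
} finally {
    // Second unlock: the lock is actually released.
    lock.unlock();
}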
LockService uses RendezvousHashMapper to evenly distribute the lock management workload among the nodes. For each lock/unlock operation it selects the node that is responsible for managing the lock with the given name and forwards the operation to that node. The manager node controls the lifecycle and ordering of locks, and makes sure that a lock is released if its owner node prematurely leaves the cluster before properly releasing it.
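The sketch below illustrates the rendezvous (highest-random-weight) hashing idea that such a mapper is built on; it is not Hekate's actual RendezvousHashMapper implementation, and the node IDs and scoring function are assumptions made for the illustration:
import java.util.List;
import java.util.Objects;

class RendezvousSketch {
    // Every node computes a score per (lock name, node) pair; the
    // highest-scoring node becomes the manager of that lock.
    static String selectManager(String lockName, List<String> nodeIds) {
        String manager = null;
        long bestScore = Long.MIN_VALUE;
        for (String node : nodeIds) {
            long score = Objects.hash(lockName, node);
            if (score > bestScore) {
                bestScore = score;
                manager = node;
            }
        }
        return manager;
    }

    public static void main(String[] args) {
        // All cluster members compute the same mapping independently,
        // so no extra coordination is needed to agree on the manager.
        List<String> nodes = List.of("node-1", "node-2", "node-3");
        System.out.println(selectManager("example.lock", nodes));
    }
}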
During normal operation each lock/unlock operation is performed within a single network round trip from the lock requester to the lock manager. When the cluster topology changes, all lock/unlock operations are suspended until cluster rebalancing is finished and live locks are migrated among the cluster members. The time it takes to perform rebalancing depends on the number of nodes within the lock region and on the number of acquired locks that require migration. Note that LockService doesn't keep track of released locks and removes them from memory, so only locks that are in the LOCKED state at the time of rebalancing are migrated.
Lock rebalancing is controlled by the cluster coordinator, which is dynamically selected among the cluster nodes. The coordinator uses a two-phase lock migration protocol. During the first phase it collects information about all locks that require migration by sending a 'prepare' message over the nodes ring. This message circulates over the ring so that each node can inspect its local state and decide which of its owned locks require migration (i.e. which locks were re-mapped to another manager node). When the message returns to the coordinator, it contains information about all locks in the region that require migration. During the second phase, the coordinator sends an 'apply' message over the ring. This message carries the information about the migrating locks and is used by the cluster nodes to update their lock management state (i.e. take control over newly assigned locks and unload locks that were re-mapped to another manager node).
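The following is a simplified model of those two phases; the message and node types are illustrative assumptions, not Hekate's internal API:
import java.util.ArrayList;
import java.util.List;

class MigrationSketch {
    // A lock that was re-mapped from its current manager node to a new one.
    record MigratingLock(String name, String owner, String newManager) {}

    interface RingNode {
        // Phase 1 ('prepare'): each node appends those of its owned locks
        // that were re-mapped to a different manager by the topology change.
        void prepare(List<MigratingLock> collected);

        // Phase 2 ('apply'): each node takes control of its newly assigned
        // locks and unloads locks that are now managed elsewhere.
        void apply(List<MigratingLock> migrating);
    }

    static void coordinate(List<RingNode> ring) {
        List<MigratingLock> collected = new ArrayList<>();

        // The 'prepare' message circulates over the ring, accumulating
        // the full set of locks that require migration.
        for (RingNode node : ring) {
            node.prepare(collected);
        }

        // The 'apply' message circulates with the complete migration plan.
        for (RingNode node : ring) {
            node.apply(collected);
        }
    }
}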
See Also: LockServiceFactory
| Modifier and Type | Method and Description |
|---|---|
| List<LockRegion> | allRegions(): Returns all lock regions that are registered within this service (see LockServiceFactory.setRegions(List)). |
| boolean | hasRegion(String region): Returns true if this service has a lock region with the specified name. |
| LockRegion | region(String region): Returns the lock region with the specified name. |
List<LockRegion> allRegions()
Returns all lock regions that are registered within this service (see LockServiceFactory.setRegions(List)).

LockRegion region(String region)
Returns the lock region with the specified name. The region must be registered within the LockServiceFactory, otherwise an error will be thrown.
Parameters: region - Region name (see LockRegionConfig.setName(String)).
See Also: LockServiceFactory.withRegion(LockRegionConfig)

boolean hasRegion(String region)
Returns true if this service has a lock region with the specified name.
Parameters: region - Region name (see LockRegionConfig.setName(String)).
Returns: true if the region exists.
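As a usage sketch (region and lock names reuse the examples above), hasRegion(String) can guard against looking up a region that wasn't registered:
// region(String) throws an error for unregistered regions, so check
// first when the region name comes from external input.
if (hekate.locks().hasRegion("region1")) {
    DistributedLock lock = hekate.locks().region("region1").get("example.lock");
    // ... use the lock ...
}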