Interface | Description |
---|---|
ClusterBackupEventsListener | Listener which can be registered via ClusterBackup.Context for tracking backup progress. |
Class | Description |
---|---|
ClusterBackup | Backup component which can run remote from a cluster, polling for snapshots and replicating the log. |
ClusterBackup.Configuration | Configuration options for ClusterBackup with defaults and constants for system property lookup. |
ClusterBackup.Context | Context for overriding the default configuration for ClusterBackup. |
ClusterBackupAgent | Agent which backs up a remote cluster by replicating the log and polling for snapshots. |
ClusterBackupMediaDriver | |
ClusterControl | Toggle control ClusterControl.ToggleStates for a cluster node, such as ClusterControl.ToggleState.SUSPEND or ClusterControl.ToggleState.RESUME. |
ClusteredArchive | Clustered archive which is an aggregate of a ConsensusModule and an Archive. |
ClusteredMediaDriver | |
ClusterMember | Represents a member of the cluster that participates in consensus for storing state, from the perspective of any single member. |
ClusterMembership | Detail of the cluster membership from the perspective of a given member. |
ClusterTool | Tool for controlling and investigating the state of a cluster node. |
ConsensusModule | Component which resides on each node and is responsible for coordinating consensus within a cluster in concert with the lifecycle of clustered services. |
ConsensusModule.Configuration | Configuration options for the cluster. |
ConsensusModule.Context | Programmable overrides for configuring the ConsensusModule in a cluster. |
MillisecondClusterClock | A ClusterClock implemented by calling System.currentTimeMillis(). |
NanosecondClusterClock | A ClusterClock implemented by calling HighResolutionClock.epochNanos(). |
RecordingLog | A log of recordings which make up the history of a Raft log across leadership terms. |
RecordingLog.Entry | Representation of an entry in the RecordingLog. |
RecordingLog.Log | Representation of a log entry in the RecordingLog. |
RecordingLog.RecoveryPlan | The snapshots and steps to recover the state of a cluster. |
RecordingLog.Snapshot | Representation of a snapshot entry in the RecordingLog. |
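As a hedged illustration of the ClusterControl entry above, the sketch below connects to a running cluster node's media driver, locates the control toggle counter published by the consensus module, and requests SUSPEND. The aeron directory path is hypothetical, and the findControlToggle and toggle signatures shown follow one Aeron version; they may differ in others.

```java
import io.aeron.Aeron;
import io.aeron.cluster.ClusterControl;
import org.agrona.concurrent.status.AtomicCounter;

public class SuspendNode
{
    public static void main(final String[] args)
    {
        // Directory of the media driver used by the cluster node (hypothetical path).
        final String aeronDir = "/dev/shm/aeron-cluster-driver";

        try (Aeron aeron = Aeron.connect(new Aeron.Context().aeronDirectoryName(aeronDir)))
        {
            // The consensus module publishes a control toggle counter which accepts ToggleStates.
            final AtomicCounter controlToggle = ClusterControl.findControlToggle(aeron.countersReader());
            if (null != controlToggle && ClusterControl.ToggleState.SUSPEND.toggle(controlToggle))
            {
                System.out.println("SUSPEND requested");
            }
        }
    }
}
```

Applying ClusterControl.ToggleState.RESUME to the same counter returns a suspended node to normal operation.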
Enum | Description |
---|---|
ClusterBackup.State | State of the cluster backup state machine. |
ClusterControl.ToggleState | Toggle states for controlling the cluster node once it has entered the active state after initialising. |
ConsensusModule.State | Possible states for the ConsensusModule. |
ElectionState | Election states for a ConsensusModule, represented by an ElectionState.code() stored in a Counter of type ConsensusModule.Configuration.ELECTION_STATE_TYPE_ID. |
The purpose of Aeron Cluster is to aggregate and sequence streams from cluster clients into a single log. This log is replicated and archived on a number of nodes to achieve fault tolerance. Cluster services process this log and respond to cluster clients.
Aeron Cluster works on the concept of a strong leader using an adaptation of the Raft protocol. The leader sequences the log and is responsible for replicating it to other cluster members known as followers.
Aeron Cluster is composed of a number of components. Central is the Consensus Module which sequences the log and coordinates consensus on the recording of the sequenced log to persistent storage, and the services consuming the log across cluster members. Recording of the log to persistent storage is performed by the Aeron Archive module. Services consume the log once a majority of the cluster members have safely recorded the log to persistent storage.
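To make these component relationships concrete, here is a minimal sketch of launching the media driver, Archive, and ConsensusModule together as one aggregate via ClusteredMediaDriver; a clustered service would be hosted separately in a ClusteredServiceContainer. The context settings shown are illustrative choices for a sketch, not recommended production configuration.

```java
import io.aeron.archive.Archive;
import io.aeron.cluster.ClusteredMediaDriver;
import io.aeron.cluster.ConsensusModule;
import io.aeron.driver.MediaDriver;
import org.agrona.concurrent.ShutdownSignalBarrier;

public class ClusterNode
{
    public static void main(final String[] args)
    {
        // Aggregate launch of driver, archive, and consensus module for a single node.
        try (ClusteredMediaDriver ignore = ClusteredMediaDriver.launch(
            new MediaDriver.Context().dirDeleteOnStart(true),
            new Archive.Context().deleteArchiveOnStart(true),
            new ConsensusModule.Context().errorHandler(Throwable::printStackTrace)))
        {
            // Run until the process receives a shutdown signal.
            new ShutdownSignalBarrier().await();
        }
    }
}
```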
To enable fast recovery, the services and consensus module can take a snapshot of their state as of a given log position, thus enabling recovery by loading the most recent snapshot and replaying the log from that point forward. Snapshots are recorded as streams in the Archive for local and remote replay, so a distributed file system is not required.
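Snapshots can be requested on a running node; one way is via ClusterTool, which drives the consensus module's control toggle. A hedged sketch follows: the cluster directory path is hypothetical, and the set of supported commands varies across Aeron versions.

```java
import io.aeron.cluster.ClusterTool;

public class TakeSnapshot
{
    public static void main(final String[] args)
    {
        // Hypothetical cluster directory of the consensus module.
        final String clusterDir = "/dev/shm/aeron-cluster";

        // Ask the consensus module to snapshot service and module state.
        ClusterTool.main(new String[]{ clusterDir, "snapshot" });
    }
}
```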
Consensus is determined by a majority of the cluster members, based on the membership size. Clusters should have 3 or 5 members. However, 2 node clusters are supported, in which both members must agree on the log; in the event of a failure, the remaining member must be manually reconfigured as a single node cluster. A small worked example of the majority arithmetic follows.
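The majority threshold is simple integer arithmetic; the helper below is my own illustration, not an Aeron API.

```java
public class QuorumMath
{
    // Quorum is a strict majority of the membership: 2 of 3, 3 of 5, and 2 of 2.
    static int quorumThreshold(final int memberCount)
    {
        return (memberCount >> 1) + 1;
    }

    public static void main(final String[] args)
    {
        System.out.println(quorumThreshold(3)); // 2
        System.out.println(quorumThreshold(5)); // 3
        System.out.println(quorumThreshold(2)); // 2
    }
}
```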
Copyright © 2014-2021 Real Logic Limited. All Rights Reserved.