org.apache.hadoop.hdfs.server.namenode
Class JournalSet

java.lang.Object
  extended by org.apache.hadoop.hdfs.server.namenode.JournalSet
All Implemented Interfaces:
Closeable, org.apache.hadoop.hdfs.server.common.Storage.FormatConfirmable, org.apache.hadoop.hdfs.server.namenode.JournalManager

public class JournalSet
extends Object
implements org.apache.hadoop.hdfs.server.namenode.JournalManager

Manages a collection of Journals. None of the methods are synchronized; it is assumed that the FSEditLog methods that use this class provide proper synchronization.
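
For illustration, a minimal sketch of this contract, assuming an already-constructed JournalSet (in the real code FSEditLog both creates the set and provides the locking); the facade class below is hypothetical and exists only to show where the caller's synchronization belongs.

import java.io.IOException;
import org.apache.hadoop.hdfs.server.namenode.JournalSet;

class SynchronizedJournalFacade {
  private final JournalSet journals;   // assumed to be created by NameNode internals

  SynchronizedJournalFacade(JournalSet journals) {
    this.journals = journals;
  }

  // JournalSet does no locking of its own, so the caller serializes access.
  synchronized void finalizeSegment(long firstTxId, long lastTxId) throws IOException {
    journals.finalizeLogSegment(firstTxId, lastTxId);
  }

  synchronized void purgeOlderThan(long minTxIdToKeep) throws IOException {
    journals.purgeLogsOlderThan(minTxIdToKeep);
  }
}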


Nested Class Summary
 
Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.server.namenode.JournalManager
JournalManager.CorruptionException
 
Field Summary
static Comparator<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> EDIT_LOG_INPUT_STREAM_COMPARATOR
           
 
Method Summary
 boolean canRollBack(org.apache.hadoop.hdfs.server.common.StorageInfo storage, org.apache.hadoop.hdfs.server.common.StorageInfo prevStorage, int targetLayoutVersion)
          Return true if this JM can roll back to the previous storage state, false otherwise.
static void chainAndMakeRedundantStreams(Collection<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> outStreams, PriorityQueue<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> allStreams, long fromTxId)
           
 void close()
          Close the journal manager, freeing any resources it may hold.
 void discardSegments(long startTxId)
          Discard the segments whose first txid is >= the given txid.
 void doFinalize()
          Finalize the upgrade.
 void doPreUpgrade()
          Perform any steps that must succeed across all JournalManagers involved in an upgrade before proceeding onto the actual upgrade stage.
 void doRollback()
          Perform the rollback to the previous FS state.
 void doUpgrade(org.apache.hadoop.hdfs.server.common.Storage storage)
          Perform the actual upgrade of the JM.
 void finalizeLogSegment(long firstTxId, long lastTxId)
          Mark the log segment that spans from firstTxId to lastTxId as finalized and complete.
 void format(org.apache.hadoop.hdfs.server.protocol.NamespaceInfo nsInfo)
          Format the underlying storage, removing any previously stored data.
 RemoteEditLogManifest getEditLogManifest(long fromTxId)
          Return a manifest of what finalized edit logs are available.
 long getJournalCTime()
           
 boolean hasSomeData()
           
 boolean isEmpty()
          Returns true if there are no journals, all redundant journals are disabled, or any required journals are disabled.
 boolean isOpen()
           
 void purgeLogsOlderThan(long minTxIdToKeep)
          Remove all edit logs with transaction IDs lower than the given transaction ID.
 void recoverUnfinalizedSegments()
          Recover segments which have not been finalized.
 void selectInputStreams(Collection<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> streams, long fromTxId, boolean inProgressOk)
          Gathers input streams from all of the underlying JournalManager objects and adds them to the given collection.
 void setOutputBufferCapacity(int size)
          Set the amount of memory that this stream should use to buffer edits.
 org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream startLogSegment(long txId, int layoutVersion)
          Begin writing to a new segment of the log stream, which starts at the given transaction ID.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 
Methods inherited from interface org.apache.hadoop.hdfs.server.common.Storage.FormatConfirmable
toString
 

Field Detail

EDIT_LOG_INPUT_STREAM_COMPARATOR

public static final Comparator<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> EDIT_LOG_INPUT_STREAM_COMPARATOR
Method Detail

format

public void format(org.apache.hadoop.hdfs.server.protocol.NamespaceInfo nsInfo)
            throws IOException
Description copied from interface: org.apache.hadoop.hdfs.server.namenode.JournalManager
Format the underlying storage, removing any previously stored data.

Specified by:
format in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws:
IOException

hasSomeData

public boolean hasSomeData()
                    throws IOException
Specified by:
hasSomeData in interface org.apache.hadoop.hdfs.server.common.Storage.FormatConfirmable
Returns:
true if the storage seems to have some valid data in it, and the user should be required to confirm the format. Otherwise, false.
Throws:
IOException - if the storage cannot be accessed at all.
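
As a hedged sketch of the Storage.FormatConfirmable flow these two methods support (the helper class and its userConfirmed flag are illustrative, and the NamespaceInfo is assumed to be supplied by the NameNode's formatting code):

import java.io.IOException;
import org.apache.hadoop.hdfs.server.namenode.JournalSet;
import org.apache.hadoop.hdfs.server.protocol.NamespaceInfo;

class FormatHelper {
  // Refuse to wipe journals that already hold data unless the operator has confirmed.
  static void formatIfConfirmed(JournalSet journals, NamespaceInfo nsInfo,
                                boolean userConfirmed) throws IOException {
    if (journals.hasSomeData() && !userConfirmed) {
      throw new IOException("Journals contain data; refusing to format without confirmation");
    }
    journals.format(nsInfo);   // removes any previously stored data
  }
}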

startLogSegment

public org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream startLogSegment(long txId,
                                                                                  int layoutVersion)
                                                                           throws IOException
Description copied from interface: org.apache.hadoop.hdfs.server.namenode.JournalManager
Begin writing to a new segment of the log stream, which starts at the given transaction ID.

Specified by:
startLogSegment in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws:
IOException

finalizeLogSegment

public void finalizeLogSegment(long firstTxId,
                               long lastTxId)
                        throws IOException
Description copied from interface: org.apache.hadoop.hdfs.server.namenode.JournalManager
Mark the log segment that spans from firstTxId to lastTxId as finalized and complete.

Specified by:
finalizeLogSegment in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws:
IOException
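
A minimal sketch of one segment lifecycle, assuming a JournalSet owned by the NameNode; the helper class is illustrative, and the package-level calls that actually write ops to the stream are elided.

import java.io.IOException;
import org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream;
import org.apache.hadoop.hdfs.server.namenode.JournalSet;

class SegmentCycle {
  static void writeOneSegment(JournalSet journals, long firstTxId, long lastTxId,
                              int layoutVersion) throws IOException {
    EditLogOutputStream out = journals.startLogSegment(firstTxId, layoutVersion);
    try {
      // ... FSEditLog writes transactions firstTxId..lastTxId to 'out' here ...
    } finally {
      out.close();
    }
    journals.finalizeLogSegment(firstTxId, lastTxId);   // mark the range complete
  }
}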

close

public void close()
           throws IOException
Description copied from interface: org.apache.hadoop.hdfs.server.namenode.JournalManager
Close the journal manager, freeing any resources it may hold.

Specified by:
close in interface Closeable
Specified by:
close in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws:
IOException

isOpen

public boolean isOpen()

selectInputStreams

public void selectInputStreams(Collection<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> streams,
                               long fromTxId,
                               boolean inProgressOk)
                        throws IOException
Gathers input streams from all of the underlying JournalManager objects, then adds them to the given collection one by one.

Parameters:
streams - The collection to add the streams to. It may or may not be sorted; this is up to the caller.
fromTxId - The transaction ID to start looking for streams at
inProgressOk - Should we consider unfinalized streams?
Throws:
IOException - if the underlying storage has an error or is otherwise inaccessible
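
A hedged sketch of a selection pass, using a hypothetical helper: gather finalized streams starting at fromTxId into a fresh collection and make sure every stream is closed afterwards.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hdfs.server.namenode.EditLogInputStream;
import org.apache.hadoop.hdfs.server.namenode.JournalSet;

class StreamSelection {
  static void selectAndClose(JournalSet journals, long fromTxId) throws IOException {
    List<EditLogInputStream> streams = new ArrayList<EditLogInputStream>();
    try {
      journals.selectInputStreams(streams, fromTxId, false /* finalized segments only */);
      // ... the edit-log loader would replay ops from each stream here ...
    } finally {
      for (EditLogInputStream s : streams) {
        s.close();   // EditLogInputStream implements Closeable
      }
    }
  }
}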

chainAndMakeRedundantStreams

public static void chainAndMakeRedundantStreams(Collection<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> outStreams,
                                                PriorityQueue<org.apache.hadoop.hdfs.server.namenode.EditLogInputStream> allStreams,
                                                long fromTxId)
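
A hedged sketch of how this static helper is typically fed: streams already gathered from the individual journals go into a PriorityQueue ordered by EDIT_LOG_INPUT_STREAM_COMPARATOR, and the helper chains them into the output collection. The wrapper class and the initial queue capacity are illustrative only.

import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;
import org.apache.hadoop.hdfs.server.namenode.EditLogInputStream;
import org.apache.hadoop.hdfs.server.namenode.JournalSet;

class StreamChaining {
  static List<EditLogInputStream> chain(List<EditLogInputStream> gathered, long fromTxId) {
    PriorityQueue<EditLogInputStream> allStreams =
        new PriorityQueue<EditLogInputStream>(64, JournalSet.EDIT_LOG_INPUT_STREAM_COMPARATOR);
    allStreams.addAll(gathered);
    List<EditLogInputStream> out = new ArrayList<EditLogInputStream>();
    JournalSet.chainAndMakeRedundantStreams(out, allStreams, fromTxId);
    return out;
  }
}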

isEmpty

public boolean isEmpty()
Returns true if there are no journals, all redundant journals are disabled, or any required journals are disabled.

Returns:
True if there are no journals, all redundant journals are disabled, or any required journals are disabled.

setOutputBufferCapacity

public void setOutputBufferCapacity(int size)
Description copied from interface: org.apache.hadoop.hdfs.server.namenode.JournalManager
Set the amount of memory that this stream should use to buffer edits.

Specified by:
setOutputBufferCapacity in interface org.apache.hadoop.hdfs.server.namenode.JournalManager

purgeLogsOlderThan

public void purgeLogsOlderThan(long minTxIdToKeep)
                        throws IOException
Remove all edit logs with transaction IDs lower than the given transaction ID.

Parameters:
minTxIdToKeep - the lowest transaction ID that should be retained
Throws:
IOException - in the event of error

recoverUnfinalizedSegments

public void recoverUnfinalizedSegments()
                                throws IOException
Description copied from interface: org.apache.hadoop.hdfs.server.namenode.JournalManager
Recover segments which have not been finalized.

Specified by:
recoverUnfinalizedSegments in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws:
IOException
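
A minimal sketch of the ordering this implies for writers, with a hypothetical helper: recover any unfinalized segments before opening a new one.

import java.io.IOException;
import org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream;
import org.apache.hadoop.hdfs.server.namenode.JournalSet;

class WriteRecovery {
  static EditLogOutputStream recoverThenOpen(JournalSet journals, long nextTxId,
                                             int layoutVersion) throws IOException {
    journals.recoverUnfinalizedSegments();           // finalize or discard leftovers first
    return journals.startLogSegment(nextTxId, layoutVersion);
  }
}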

getEditLogManifest

public RemoteEditLogManifest getEditLogManifest(long fromTxId)
Return a manifest of what finalized edit logs are available. All available edit logs are returned starting from the transaction id passed. If 'fromTxId' falls in the middle of a log, that log is returned as well.

Parameters:
fromTxId - Starting transaction id to read the logs.
Returns:
RemoteEditLogManifest object.
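
A hedged sketch that walks the returned manifest; the helper class is illustrative, while RemoteEditLogManifest.getLogs() and RemoteEditLog's txid accessors come from org.apache.hadoop.hdfs.server.protocol.

import org.apache.hadoop.hdfs.server.namenode.JournalSet;
import org.apache.hadoop.hdfs.server.protocol.RemoteEditLog;
import org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest;

class ManifestDump {
  static void printAvailableSegments(JournalSet journals, long fromTxId) {
    RemoteEditLogManifest manifest = journals.getEditLogManifest(fromTxId);
    for (RemoteEditLog log : manifest.getLogs()) {
      System.out.println("finalized segment: " + log.getStartTxId()
          + " - " + log.getEndTxId());
    }
  }
}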

discardSegments

public void discardSegments(long startTxId)
                     throws IOException
Description copied from interface: org.apache.hadoop.hdfs.server.namenode.JournalManager
Discard the segments whose first txid is >= the given txid.

Specified by:
discardSegments in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Parameters:
startTxId - The given txid should be right at the segment boundary, i.e., it should be the first txid of some segment, if a segment corresponding to that txid exists.
Throws:
IOException

doPreUpgrade

public void doPreUpgrade()
                  throws IOException
Description copied from interface: org.apache.hadoop.hdfs.server.namenode.JournalManager
Perform any steps that must succeed across all JournalManagers involved in an upgrade before proceeding onto the actual upgrade stage. If a call to any JM's doPreUpgrade method fails, then doUpgrade will not be called for any JM.

Specified by:
doPreUpgrade in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws:
IOException

doUpgrade

public void doUpgrade(org.apache.hadoop.hdfs.server.common.Storage storage)
               throws IOException
Description copied from interface: org.apache.hadoop.hdfs.server.namenode.JournalManager
Perform the actual upgrade of the JM. After this is completed, the NN can begin to use the new upgraded metadata. This metadata may later be either finalized or rolled back to the previous state.

Specified by:
doUpgrade in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Parameters:
storage - info about the new upgraded versions.
Throws:
IOException

doFinalize

public void doFinalize()
                throws IOException
Description copied from interface: org.apache.hadoop.hdfs.server.namenode.JournalManager
Finalize the upgrade. JMs should purge any state that they had been keeping around during the upgrade process. After this is completed, rollback is no longer allowed.

Specified by:
doFinalize in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws:
IOException
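
A minimal sketch of the ordering implied by doPreUpgrade, doUpgrade, and doFinalize; the driver class is hypothetical and the Storage argument is assumed to be prepared by the NameNode's upgrade code.

import java.io.IOException;
import org.apache.hadoop.hdfs.server.common.Storage;
import org.apache.hadoop.hdfs.server.namenode.JournalSet;

class UpgradeSequence {
  static void upgrade(JournalSet journals, Storage newStorage,
                      boolean finalizeNow) throws IOException {
    journals.doPreUpgrade();          // must succeed before any JM is upgraded
    journals.doUpgrade(newStorage);   // switch to the upgraded metadata
    if (finalizeNow) {
      journals.doFinalize();          // after this, rollback is no longer allowed
    }
  }
}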

canRollBack

public boolean canRollBack(org.apache.hadoop.hdfs.server.common.StorageInfo storage,
                           org.apache.hadoop.hdfs.server.common.StorageInfo prevStorage,
                           int targetLayoutVersion)
                    throws IOException
Description copied from interface: org.apache.hadoop.hdfs.server.namenode.JournalManager
Return true if this JM can roll back to the previous storage state, false otherwise. The NN will refuse to run the rollback operation unless at least one JM or fsimage storage directory can roll back.

Specified by:
canRollBack in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Parameters:
storage - the storage info for the current state
prevStorage - the storage info for the previous (unupgraded) state
targetLayoutVersion - the layout version we intend to roll back to
Returns:
true if this JM can roll back, false otherwise.
Throws:
IOException

doRollback

public void doRollback()
                throws IOException
Description copied from interface: org.apache.hadoop.hdfs.server.namenode.JournalManager
Perform the rollback to the previous FS state. JMs which do not need to roll back their state should just return without error.

Specified by:
doRollback in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Throws:
IOException
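
A hedged sketch of the rollback check and rollback call; the helper is illustrative, and the StorageInfo values and target layout version are assumed to come from the rollback command handling.

import java.io.IOException;
import org.apache.hadoop.hdfs.server.common.StorageInfo;
import org.apache.hadoop.hdfs.server.namenode.JournalSet;

class RollbackSequence {
  static void rollbackIfPossible(JournalSet journals, StorageInfo current,
                                 StorageInfo previous, int targetLayoutVersion)
      throws IOException {
    if (!journals.canRollBack(current, previous, targetLayoutVersion)) {
      throw new IOException("No journal can roll back to layout version " + targetLayoutVersion);
    }
    journals.doRollback();   // JMs with nothing to roll back simply return
  }
}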

getJournalCTime

public long getJournalCTime()
                     throws IOException
Specified by:
getJournalCTime in interface org.apache.hadoop.hdfs.server.namenode.JournalManager
Returns:
the CTime of the journal manager.
Throws:
IOException


Copyright © 2014 Apache Software Foundation. All Rights Reserved.