org.apache.hadoop.hdfs.client
Class HdfsAdmin

java.lang.Object
  extended by org.apache.hadoop.hdfs.client.HdfsAdmin

@InterfaceAudience.Public
@InterfaceStability.Evolving
public class HdfsAdmin
extends Object

The public API for performing administrative functions on HDFS. Those writing applications against HDFS should prefer this interface to directly accessing functionality in DistributedFileSystem or DFSClient. Note that this is distinct from the similarly-named DFSAdmin, which provides the functionality behind the 'hdfs dfsadmin ...' CLI commands.


Constructor Summary
HdfsAdmin(URI uri, org.apache.hadoop.conf.Configuration conf)
          Create a new HdfsAdmin client.
 
Method Summary
 long addCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags)
          Add a new CacheDirectiveInfo.
 void addCachePool(CachePoolInfo info)
          Add a cache pool.
 void allowSnapshot(org.apache.hadoop.fs.Path path)
          Allow snapshot on a directory.
 void clearQuota(org.apache.hadoop.fs.Path src)
          Clear the namespace quota (count of files, directories and sym links) for a directory.
 void clearSpaceQuota(org.apache.hadoop.fs.Path src)
          Clear the disk space quota (size of files) for a directory.
 void createEncryptionZone(org.apache.hadoop.fs.Path path, String keyName)
          Create an encryption zone rooted at an empty existing directory, using the specified encryption key.
 void disallowSnapshot(org.apache.hadoop.fs.Path path)
          Disallow snapshot on a directory.
 EncryptionZone getEncryptionZoneForPath(org.apache.hadoop.fs.Path path)
          Get the encryption zone for a given file or directory.
 DFSInotifyEventInputStream getInotifyEventStream()
          Exposes a stream of namesystem events.
 DFSInotifyEventInputStream getInotifyEventStream(long lastReadTxid)
          A version of getInotifyEventStream() meant for advanced users who are aware of HDFS edits up to lastReadTxid (e.g. because they have access to an FSImage inclusive of lastReadTxid) and only want to read events after this point.
 org.apache.hadoop.fs.RemoteIterator<CacheDirectiveEntry> listCacheDirectives(CacheDirectiveInfo filter)
          List cache directives.
 org.apache.hadoop.fs.RemoteIterator<CachePoolEntry> listCachePools()
          List all cache pools.
 org.apache.hadoop.fs.RemoteIterator<EncryptionZone> listEncryptionZones()
          Returns a RemoteIterator which can be used to list the encryption zones in HDFS.
 void modifyCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags)
          Modify a CacheDirective.
 void modifyCachePool(CachePoolInfo info)
          Modify an existing cache pool.
 void removeCacheDirective(long id)
          Remove a CacheDirective.
 void removeCachePool(String poolName)
          Remove a cache pool.
 void setQuota(org.apache.hadoop.fs.Path src, long quota)
          Set the namespace quota (count of files, directories, and sym links) for a directory.
 void setSpaceQuota(org.apache.hadoop.fs.Path src, long spaceQuota)
          Set the disk space quota (size of files) for a directory.
 void setStoragePolicy(org.apache.hadoop.fs.Path src, String policyName)
          Set the storage policy for the given source path.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

HdfsAdmin

public HdfsAdmin(URI uri,
                 org.apache.hadoop.conf.Configuration conf)
          throws IOException
Create a new HdfsAdmin client.

Parameters:
uri - the unique URI of the HDFS file system to administer
conf - configuration
Throws:
IOException - in the event the file system could not be created
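
For illustration, a minimal construction sketch (it assumes the cluster's core-site.xml/hdfs-site.xml are on the classpath; any literal URI such as hdfs://namenode:8020 is a placeholder):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.client.HdfsAdmin;

    public class HdfsAdminExample {
      public static void main(String[] args) throws Exception {
        // Loads core-site.xml / hdfs-site.xml found on the classpath.
        Configuration conf = new Configuration();
        // Administer the cluster named by fs.defaultFS; a hard-coded URI such as
        // URI.create("hdfs://namenode:8020") (placeholder host) also works.
        URI uri = FileSystem.getDefaultUri(conf);
        HdfsAdmin admin = new HdfsAdmin(uri, conf);
        // The sketches under the method descriptions below reuse 'admin' and 'conf'.
      }
    }
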
Method Detail

setQuota

public void setQuota(org.apache.hadoop.fs.Path src,
                     long quota)
              throws IOException
Set the namespace quota (count of files, directories, and sym links) for a directory.

Parameters:
src - the path to set the quota for
quota - the value to set for the quota
Throws:
IOException - in the event of error
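
A short sketch, reusing the admin instance from the constructor example; the directory is a placeholder and Path is org.apache.hadoop.fs.Path:

    // Limit /projects/alpha (placeholder path) to at most 100,000 namespace
    // objects, i.e. files + directories + symlinks. The quota is a count, not bytes.
    admin.setQuota(new Path("/projects/alpha"), 100000L);
    // clearQuota(new Path("/projects/alpha")) would remove the limit again.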

clearQuota

public void clearQuota(org.apache.hadoop.fs.Path src)
                throws IOException
Clear the namespace quota (count of files, directories and sym links) for a directory.

Parameters:
src - the path to clear the quota of
Throws:
IOException - in the event of error

setSpaceQuota

public void setSpaceQuota(org.apache.hadoop.fs.Path src,
                          long spaceQuota)
                   throws IOException
Set the disk space quota (size of files) for a directory. Note that directories and sym links do not occupy disk space.

Parameters:
src - the path to set the space quota of
spaceQuota - the value to set for the space quota
Throws:
IOException - in the event of error
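
Sketch, reusing admin and the placeholder directory from above. Space quotas are charged against raw disk usage, so block replicas count toward the limit:

    // Cap /projects/alpha at 1 TiB of raw disk space; with replication 3,
    // roughly a third of that is available for logical file data.
    admin.setSpaceQuota(new Path("/projects/alpha"), 1024L * 1024 * 1024 * 1024);
    // clearSpaceQuota(new Path("/projects/alpha")) removes the limit.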

clearSpaceQuota

public void clearSpaceQuota(org.apache.hadoop.fs.Path src)
                     throws IOException
Clear the disk space quota (size of files) for a directory. Note that directories and sym links do not occupy disk space.

Parameters:
src - the path to clear the space quota of
Throws:
IOException - in the event of error

allowSnapshot

public void allowSnapshot(org.apache.hadoop.fs.Path path)
                   throws IOException
Allow snapshot on a directory.

Parameters:
path - The path of the directory where snapshots will be taken.
Throws:
IOException
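
Sketch, reusing admin and conf from the constructor example. Note that taking the snapshot itself goes through the ordinary FileSystem API rather than HdfsAdmin:

    Path dir = new Path("/projects/alpha");      // placeholder directory
    admin.allowSnapshot(dir);                    // make the directory snapshottable

    // Snapshots are then taken with the regular file system client:
    FileSystem fs = FileSystem.get(conf);
    fs.createSnapshot(dir, "before-cleanup");    // hypothetical snapshot name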

disallowSnapshot

public void disallowSnapshot(org.apache.hadoop.fs.Path path)
                      throws IOException
Disallow snapshot on a directory.

Parameters:
path - The path of the snapshottable directory.
Throws:
IOException

addCacheDirective

public long addCacheDirective(CacheDirectiveInfo info,
                              EnumSet<CacheFlag> flags)
                       throws IOException
Add a new CacheDirectiveInfo.

Parameters:
info - Information about a directive to add.
flags - CacheFlags to use for this operation.
Returns:
the ID of the directive that was created.
Throws:
IOException - if the directive could not be added
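
Sketch, assuming admin from above, an already-created cache pool named "pool1" (hypothetical), and the CacheDirectiveInfo.Builder from org.apache.hadoop.hdfs.protocol:

    CacheDirectiveInfo directive = new CacheDirectiveInfo.Builder()
        .setPath(new Path("/projects/alpha/hot-table"))   // placeholder path to cache
        .setPool("pool1")                                 // pool must already exist
        .setReplication((short) 2)                        // number of cached replicas
        .build();
    // EnumSet.noneOf(CacheFlag.class) means no special flags;
    // CacheFlag.FORCE would bypass cache pool resource limits.
    long directiveId = admin.addCacheDirective(directive, EnumSet.noneOf(CacheFlag.class));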

modifyCacheDirective

public void modifyCacheDirective(CacheDirectiveInfo info,
                                 EnumSet<CacheFlag> flags)
                          throws IOException
Modify a CacheDirective.

Parameters:
info - Information about the directive to modify. You must set the ID to indicate which CacheDirective you want to modify.
flags - CacheFlags to use for this operation.
Throws:
IOException - if the directive could not be modified
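
Continuing the sketch above, using the ID returned by addCacheDirective:

    // Only the ID is mandatory; any other fields set on the builder are updated.
    CacheDirectiveInfo update = new CacheDirectiveInfo.Builder()
        .setId(directiveId)              // which directive to modify
        .setReplication((short) 3)       // raise the cached replication
        .build();
    admin.modifyCacheDirective(update, EnumSet.noneOf(CacheFlag.class));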

removeCacheDirective

public void removeCacheDirective(long id)
                          throws IOException
Remove a CacheDirective.

Parameters:
id - identifier of the CacheDirectiveInfo to remove
Throws:
IOException - if the directive could not be removed

listCacheDirectives

public org.apache.hadoop.fs.RemoteIterator<CacheDirectiveEntry> listCacheDirectives(CacheDirectiveInfo filter)
                                                                             throws IOException
List cache directives. Incrementally fetches results from the server.

Parameters:
filter - Filter parameters to use when listing the directives, null to list all directives visible to us.
Returns:
A RemoteIterator which returns CacheDirectiveEntry objects.
Throws:
IOException
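
Iteration sketch; passing null would list all directives visible to the caller, while the filter below (restricted to the hypothetical pool "pool1") narrows the results:

    CacheDirectiveInfo filter = new CacheDirectiveInfo.Builder()
        .setPool("pool1")
        .build();
    RemoteIterator<CacheDirectiveEntry> it = admin.listCacheDirectives(filter);
    while (it.hasNext()) {
      CacheDirectiveEntry entry = it.next();      // results are fetched in batches
      System.out.println(entry.getInfo().getPath());
    }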

addCachePool

public void addCachePool(CachePoolInfo info)
                  throws IOException
Add a cache pool.

Parameters:
info - The request to add a cache pool.
Throws:
IOException - If the request could not be completed.
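
Sketch, assuming admin from above; the pool, owner, and group names are hypothetical, and FsPermission is org.apache.hadoop.fs.permission.FsPermission:

    CachePoolInfo pool = new CachePoolInfo("pool1")       // hypothetical pool name
        .setOwnerName("etl")                              // hypothetical owner
        .setGroupName("analytics")                        // hypothetical group
        .setMode(new FsPermission((short) 0755))
        .setLimit(10L * 1024 * 1024 * 1024);              // 10 GiB aggregate cache limit
    admin.addCachePool(pool);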

modifyCachePool

public void modifyCachePool(CachePoolInfo info)
                     throws IOException
Modify an existing cache pool.

Parameters:
info - The request to modify a cache pool.
Throws:
IOException - If the request could not be completed.

removeCachePool

public void removeCachePool(String poolName)
                     throws IOException
Remove a cache pool.

Parameters:
poolName - Name of the cache pool to remove.
Throws:
IOException - if the cache pool did not exist, or could not be removed.

listCachePools

public org.apache.hadoop.fs.RemoteIterator<CachePoolEntry> listCachePools()
                                                                   throws IOException
List all cache pools.

Returns:
A remote iterator from which you can get CachePoolEntry objects. Requests will be made as needed.
Throws:
IOException - If there was an error listing cache pools.
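
Iteration sketch, reusing admin:

    RemoteIterator<CachePoolEntry> pools = admin.listCachePools();
    while (pools.hasNext()) {
      CachePoolEntry entry = pools.next();        // fetched on demand
      System.out.println(entry.getInfo().getPoolName());
    }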

createEncryptionZone

public void createEncryptionZone(org.apache.hadoop.fs.Path path,
                                 String keyName)
                          throws IOException,
                                 org.apache.hadoop.security.AccessControlException,
                                 FileNotFoundException
Create an encryption zone rooted at an empty existing directory, using the specified encryption key. An encryption zone has an associated encryption key used when reading and writing files within the zone.

Parameters:
path - The path of the root of the encryption zone. Must refer to an empty, existing directory.
keyName - Name of key available at the KeyProvider.
Throws:
IOException - if there was a general IO exception
org.apache.hadoop.security.AccessControlException - if the caller does not have access to path
FileNotFoundException - if the path does not exist
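
Sketch, assuming a key named "zone-key-1" (hypothetical) has already been created in the cluster's KeyProvider, for example with the 'hadoop key create zone-key-1' command, and that the target directory exists and is empty:

    Path zoneRoot = new Path("/secure/data");   // placeholder; must exist and be empty
    admin.createEncryptionZone(zoneRoot, "zone-key-1");
    // Files written under /secure/data are transparently encrypted with
    // per-file keys that are themselves encrypted with "zone-key-1".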

getEncryptionZoneForPath

public EncryptionZone getEncryptionZoneForPath(org.apache.hadoop.fs.Path path)
                                        throws IOException,
                                               org.apache.hadoop.security.AccessControlException,
                                               FileNotFoundException
Get the encryption zone for a given file or directory.

Parameters:
path - The path to get the encryption zone for.
Returns:
The EncryptionZone containing the path, or null if the path is not in an encryption zone.
Throws:
IOException - if there was a general IO exception
org.apache.hadoop.security.AccessControlException - if the caller does not have access to path
FileNotFoundException - if the path does not exist
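
Sketch; since the return value is null for paths outside any encryption zone, check before dereferencing (getPath()/getKeyName() are the EncryptionZone accessors assumed here):

    EncryptionZone zone = admin.getEncryptionZoneForPath(new Path("/secure/data/part-0"));
    if (zone != null) {
      System.out.println("zone root: " + zone.getPath() + ", key: " + zone.getKeyName());
    } else {
      System.out.println("not inside an encryption zone");
    }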

listEncryptionZones

public org.apache.hadoop.fs.RemoteIterator<EncryptionZone> listEncryptionZones()
                                                                        throws IOException
Returns a RemoteIterator which can be used to list the encryption zones in HDFS. For large numbers of encryption zones, the iterator will fetch the list of zones in a number of small batches.

Since the list is fetched in batches, it does not represent a consistent snapshot of the entire list of encryption zones.

This method can only be called by HDFS superusers.

Throws:
IOException
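
Iteration sketch (superuser only, as noted above), using the same assumed EncryptionZone accessors:

    RemoteIterator<EncryptionZone> zones = admin.listEncryptionZones();
    while (zones.hasNext()) {
      EncryptionZone z = zones.next();            // fetched in small batches
      System.out.println(z.getPath() + " -> " + z.getKeyName());
    }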

getInotifyEventStream

public DFSInotifyEventInputStream getInotifyEventStream()
                                                 throws IOException
Exposes a stream of namesystem events. Only events occurring after the stream is created are available. See DFSInotifyEventInputStream for information on stream usage. See Event for information on the available events.

Inotify users may want to tune the following HDFS parameters to ensure that enough extra HDFS edits are saved to support inotify clients that fall behind the current state of the namespace while reading events. The default parameter values should generally be reasonable. If edits are deleted before their corresponding events can be read, clients will see a MissingEventsException on DFSInotifyEventInputStream method calls.

It should generally be sufficient to tune these parameters:
  dfs.namenode.num.extra.edits.retained
  dfs.namenode.max.extra.edits.segments.retained

Parameters that affect the number of created segments and the number of edits that are considered necessary (i.e. do not count towards the dfs.namenode.num.extra.edits.retained quota):
  dfs.namenode.checkpoint.period
  dfs.namenode.checkpoint.txns
  dfs.namenode.num.checkpoints.retained
  dfs.ha.log-roll.period

It is recommended that local journaling be configured (dfs.namenode.edits.dir) for inotify (in addition to a shared journal) so that edit transfers from the shared journal can be avoided.

Throws:
IOException - If there was an error obtaining the stream.
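
A polling sketch, reusing admin. DFSInotifyEventInputStream and Event live under org.apache.hadoop.hdfs and org.apache.hadoop.hdfs.inotify; the element type returned by the stream has changed across Hadoop releases (later versions group events into batches), so treat this as a sketch against the Event-based API of this release:

    DFSInotifyEventInputStream events = admin.getInotifyEventStream();
    while (true) {
      // take() blocks until an event is available; the poll() variants do not.
      // A MissingEventsException here means the client fell too far behind and
      // the corresponding edits were already purged (see tuning notes above).
      Event event = events.take();
      System.out.println("saw event of type " + event.getEventType());
    }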

getInotifyEventStream

public DFSInotifyEventInputStream getInotifyEventStream(long lastReadTxid)
                                                 throws IOException
A version of getInotifyEventStream() meant for advanced users who are aware of HDFS edits up to lastReadTxid (e.g. because they have access to an FSImage inclusive of lastReadTxid) and only want to read events after this point.

Throws:
IOException

setStoragePolicy

public void setStoragePolicy(org.apache.hadoop.fs.Path src,
                             String policyName)
                      throws IOException
Set the storage policy for the given source path.

Parameters:
src - The source path referring to either a directory or a file.
policyName - The name of the storage policy.
Throws:
IOException
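
Sketch, reusing admin; "COLD" is one of the built-in policy names (others include "HOT" and "WARM"). The policy influences where new blocks are placed; blocks already written are only migrated by the separate mover tool:

    // Keep new data under /projects/alpha/archive (placeholder path) on archival storage.
    admin.setStoragePolicy(new Path("/projects/alpha/archive"), "COLD");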


Copyright © 2014 Apache Software Foundation. All Rights Reserved.