Package org.apache.hadoop.hdfs

A distributed implementation of FileSystem. See: Description
Interface Summary

| Interface | Description |
|---|---|
| BlockReader | A BlockReader is responsible for reading a single block from a single datanode. |
Class Summary

| Class | Description |
|---|---|
| ByteRangeInputStream | To support HTTP byte streams, a new connection to an HTTP server needs to be created each time. |
| ByteRangeInputStream.URLOpener | This class wraps a URL and provides a method to open a connection. |
| CorruptFileBlockIterator | Provides an iterator interface for listCorruptFileBlocks. |
| DFSClient.Conf | DFSClient configuration. |
| DFSClient.DFSDataInputStream | Deprecated. Use HdfsDataInputStream instead (see the sketch after this table). |
| DFSInputStream.ReadStatistics | |
| DFSUtil.ConfiguredNNAddress | Represents one of the NameNodes configured in the cluster. |
| HAUtil | |
| HsftpFileSystem.DummyHostnameVerifier | Dummy hostname verifier used to bypass hostname checking. |
| HsftpFileSystem.DummyTrustManager | Dummy trust manager used to trust all server certificates. |
| NameNodeProxies | Creates proxy objects to communicate with a remote NameNode. |
| NameNodeProxies.ProxyAndInfo<PROXYTYPE> | Wrapper for a client proxy as well as its associated service ID. |
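The deprecation note above points from DFSClient.DFSDataInputStream to HdfsDataInputStream, which also exposes the per-stream DFSInputStream.ReadStatistics listed in the table. The following is a minimal sketch, not a definitive recipe: it assumes a reachable cluster at the placeholder URI hdfs://namenode:8020, a hypothetical file /tmp/example.txt, and that the stream returned by FileSystem.open is an HdfsDataInputStream when the underlying file system is HDFS.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSInputStream.ReadStatistics;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;

public class ReadStatsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode URI and path; substitute values for your cluster.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        try (FSDataInputStream in = fs.open(new Path("/tmp/example.txt"))) {
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) {
                // Drain the stream so the statistics reflect a full read.
            }
            // When backed by HDFS, the stream is an HdfsDataInputStream,
            // which exposes read statistics for this stream.
            if (in instanceof HdfsDataInputStream) {
                ReadStatistics stats = ((HdfsDataInputStream) in).getReadStatistics();
                System.out.println("total bytes read: " + stats.getTotalBytesRead());
                System.out.println("local bytes read: " + stats.getTotalLocalBytesRead());
            }
        }
    }
}
```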
Description

A distributed implementation of FileSystem. This is loosely modelled after Google's GFS.
The most important difference is that unlike GFS, Hadoop DFS files have strictly one writer at any one time. Bytes are always appended to the end of the writer's stream. There is no notion of "record appends" or "mutations" that are then checked or reordered. Writers simply emit a byte stream. That byte stream is guaranteed to be stored in the order written.
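The single-writer, append-only contract described above is visible through the ordinary FileSystem API. Below is a minimal sketch under the same assumptions as the earlier example (placeholder URI hdfs://namenode:8020 and hypothetical path /tmp/example.txt): one writer creates the file and emits bytes, and a subsequent reader sees those bytes in exactly the order they were written.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode URI; substitute the address of your cluster.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        Path file = new Path("/tmp/example.txt");

        // HDFS allows exactly one writer per file at a time; bytes are
        // appended to the end of the writer's stream in the order written.
        try (FSDataOutputStream out = fs.create(file, true /* overwrite */)) {
            out.writeBytes("first line\n");
            out.writeBytes("second line\n");
        }

        // Reading back: the stored byte stream reflects the write order.
        try (FSDataInputStream in = fs.open(file)) {
            byte[] buf = new byte[128];
            int n = in.read(buf);
            System.out.print(new String(buf, 0, n, "UTF-8"));
        }
    }
}
```

Because HDFS enforces the single-writer rule through leases, attempting to open a second writer on the same file while the first output stream is still open fails rather than interleaving or reordering bytes.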