- CACHE_ARCHIVES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_ARCHIVES - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CACHE_ARCHIVES_SIZES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_ARCHIVES_SIZES - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CACHE_ARCHIVES_TIMESTAMPS - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_ARCHIVES_TIMESTAMPS - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CACHE_ARCHIVES_VISIBILITIES - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CACHE_FILE_TIMESTAMPS - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CACHE_FILE_VISIBILITIES - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CACHE_FILES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_FILES - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CACHE_FILES_SIZES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_FILES_SIZES - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CACHE_FILES_TIMESTAMPS - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_LOCALARCHIVES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_LOCALARCHIVES - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CACHE_LOCALFILES - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_LOCALFILES - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CACHE_SYMLINK - Static variable in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- CACHE_SYMLINK - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
Deprecated.
Symlinks are always on and cannot be disabled.
- cancelDelegationToken(Token<DelegationTokenIdentifier>) - Method in class org.apache.hadoop.mapred.JobClient
-
Deprecated.
Use Token.cancel(org.apache.hadoop.conf.Configuration)
instead
- cancelDelegationToken(Token<DelegationTokenIdentifier>) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Deprecated.
Use Token.cancel(org.apache.hadoop.conf.Configuration)
instead
- cancelDelegationToken(Token<DelegationTokenIdentifier>) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Cancel a delegation token.
- canCommit(TaskAttemptID) - Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
-
Polling to know whether the task can go ahead with its commit
- captureOutAndError(List<String>, List<String>, File, File, long, boolean) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Wrap a command in a shell to capture stdout and stderr to files.
- Chain - Class in org.apache.hadoop.mapreduce.lib.chain
-
- Chain(boolean) - Constructor for class org.apache.hadoop.mapreduce.lib.chain.Chain
-
Creates a Chain instance configured for a Mapper or a Reducer.
- CHAIN_MAPPER - Static variable in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
- CHAIN_MAPPER_CLASS - Static variable in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
- CHAIN_MAPPER_CONFIG - Static variable in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
- CHAIN_MAPPER_SIZE - Static variable in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
- CHAIN_REDUCER - Static variable in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
- CHAIN_REDUCER_CLASS - Static variable in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
- CHAIN_REDUCER_CONFIG - Static variable in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
- ChainMapper - Class in org.apache.hadoop.mapred.lib
-
The ChainMapper class allows the use of multiple Mapper classes within a single
Map task.
- ChainMapper() - Constructor for class org.apache.hadoop.mapred.lib.ChainMapper
-
Constructor.
- ChainMapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Class in org.apache.hadoop.mapreduce.lib.chain
-
The ChainMapper class allows the use of multiple Mapper classes within a single
Map task.
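A minimal sketch of wiring two map stages into one map task with ChainMapper.addMapper. TokenizeMapper and UppercaseMapper are hypothetical Mapper implementations, and the key/value classes and job name are illustrative; each stage's output types must match the next stage's input types.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;

    public class ChainedMapSetup {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "chained-map");
        // First stage consumes the job input (offset, line).
        ChainMapper.addMapper(job, TokenizeMapper.class,
            LongWritable.class, Text.class, Text.class, Text.class,
            new Configuration(false));
        // Second stage consumes the first stage's output; types must line up.
        ChainMapper.addMapper(job, UppercaseMapper.class,
            Text.class, Text.class, Text.class, Text.class,
            new Configuration(false));
      }
    }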
- ChainMapper() - Constructor for class org.apache.hadoop.mapreduce.lib.chain.ChainMapper
-
- ChainReducer - Class in org.apache.hadoop.mapred.lib
-
The ChainReducer class allows chaining multiple Mapper classes after a
Reducer within the Reducer task.
- ChainReducer() - Constructor for class org.apache.hadoop.mapred.lib.ChainReducer
-
Constructor.
- ChainReducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Class in org.apache.hadoop.mapreduce.lib.chain
-
The ChainReducer class allows chaining multiple Mapper classes after a
Reducer within the Reducer task.
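A minimal sketch of the pattern these entries describe: one Reducer set with ChainReducer.setReducer, followed by a post-reduce Mapper added with ChainReducer.addMapper in the same reduce task. SumReducer and FormatMapper are hypothetical classes.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;

    public class ChainedReduceSetup {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "chained-reduce");
        // The single Reducer of the chain.
        ChainReducer.setReducer(job, SumReducer.class,
            Text.class, IntWritable.class, Text.class, IntWritable.class,
            new Configuration(false));
        // A Mapper that post-processes the Reducer's output in the same task.
        ChainReducer.addMapper(job, FormatMapper.class,
            Text.class, IntWritable.class, Text.class, Text.class,
            new Configuration(false));
      }
    }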
- ChainReducer() - Constructor for class org.apache.hadoop.mapreduce.lib.chain.ChainReducer
-
- checkAccess(UserGroupInformation, JobACL, String, AccessControlList) - Method in class org.apache.hadoop.mapred.JobACLsManager
-
If authorization is enabled, checks whether the user (in the callerUGI)
is authorized to perform the operation specified by 'jobOperation' on
the job, by checking whether the user is the job owner or is part of the
job ACL for that specific job operation.
- checkCounters(int) - Method in class org.apache.hadoop.mapreduce.counters.Limits
-
- checkGroups(int) - Method in class org.apache.hadoop.mapreduce.counters.Limits
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.FileOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
-
Check for validity of the output-specification for the job.
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.FilterOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.LazyOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
-
- checkOutputSpecs(FileSystem, JobConf) - Method in interface org.apache.hadoop.mapred.OutputFormat
-
Check for validity of the output-specification for the job.
- checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
-
Check for validity of the output-specification for the job.
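A hedged sketch of overriding checkOutputSpecs in a custom output format: the superclass call keeps FileOutputFormat's standard validation (output directory configured and not already existing), and the extra check plus its property name are purely illustrative.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class ValidatedTextOutputFormat extends TextOutputFormat<Text, IntWritable> {
      @Override
      public void checkOutputSpecs(JobContext context) throws IOException {
        super.checkOutputSpecs(context); // standard output-directory validation
        // Illustrative extra check; the property name is hypothetical.
        if (context.getConfiguration().get("example.required.option") == null) {
          throw new IOException("example.required.option must be set before job submission");
        }
      }
    }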
- Checkpointable - Annotation Type in org.apache.hadoop.mapreduce.task.annotation
-
Contract representing to the framework that the task can be safely preempted
and restarted between invocations of the user-defined function.
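A minimal sketch, assuming the annotation is applied at the class level: it only declares the contract described above, while the reducer body is an ordinary sum reducer.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.task.annotation.Checkpointable;

    // Declares that the framework may preempt this task between reduce() calls
    // and restart it later without corrupting its output.
    @Checkpointable
    public class PreemptableSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
      @Override
      protected void reduce(Text key, Iterable<IntWritable> values, Context context)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
          sum += v.get();
        }
        context.write(key, new IntWritable(sum));
      }
    }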
- checkReducerAlreadySet(boolean, Configuration, String, boolean) - Static method in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
- checkURIs(URI[], URI[]) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
Deprecated.
This method checks whether there is a conflict in the fragment names
of the URIs.
- CLASSIC_FRAMEWORK_NAME - Static variable in interface org.apache.hadoop.mapreduce.MRConfig
-
- CLASSPATH_ARCHIVES - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CLASSPATH_FILES - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- cleanup(Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Mapper
-
Called once at the end of the task.
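A short sketch of the pattern cleanup() enables: accumulate state across map() calls and emit it once at the end of the task. RecordCountMapper and its output key are hypothetical.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class RecordCountMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      private long records = 0;

      @Override
      protected void map(LongWritable key, Text value, Context context) {
        records++; // accumulate in memory instead of emitting per record
      }

      @Override
      protected void cleanup(Context context) throws IOException, InterruptedException {
        // Called once after the last map() call; emit the aggregated value here.
        context.write(new Text("records"), new LongWritable(records));
      }
    }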
- cleanup(Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Reducer
-
Called once at the end of the task.
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
Deprecated.
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Deprecated.
- cleanupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
- cleanUpPartialOutputForTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.PartialFileOutputCommitter
-
- cleanUpPartialOutputForTask(TaskAttemptContext) - Method in interface org.apache.hadoop.mapreduce.lib.output.PartialOutputCommitter
-
Remove all previously committed outputs from prior executions of this task.
- cleanupProgress() - Method in class org.apache.hadoop.mapred.JobStatus
-
- cleanupProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the progress of the job's cleanup-tasks, as a float between 0.0
and 1.0.
- cleanupProgress() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the progress of the job's cleanup-tasks, as a float between 0.0
and 1.0.
- cleanUpTokenReferral(Configuration) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
Remove jobtoken referrals which don't make sense in the context
of the task execution.
- clear() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
-
- clear() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
-
- clear() - Method in class org.apache.hadoop.mapreduce.lib.join.ArrayListBackedIterator
-
- clear() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader.JoinCollector
-
Clear all state information.
- clear() - Method in class org.apache.hadoop.mapreduce.lib.join.JoinRecordReader.JoinDelegationIterator
-
- clear() - Method in class org.apache.hadoop.mapreduce.lib.join.MultiFilterRecordReader.MultiFilterDelegationIterator
-
- clear() - Method in interface org.apache.hadoop.mapreduce.lib.join.ResetableIterator
-
Close datasources, but do not release internal resources.
- clear() - Method in class org.apache.hadoop.mapreduce.lib.join.ResetableIterator.EMPTY
-
- clear() - Method in class org.apache.hadoop.mapreduce.lib.join.StreamBackedIterator
-
- clearAcls() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Clears the value of the 'acls' field
- clearApplicationAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Clears the value of the 'applicationAttemptId' field
- clearAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'attemptId' field
- clearAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'attemptId' field
- clearAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Clears the value of the 'attemptId' field
- clearAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Clears the value of the 'attemptId' field
- clearAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'attemptId' field
- clearAvataar() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Clears the value of the 'avataar' field
- clearClockSplits() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'clockSplits' field
- clearClockSplits() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'clockSplits' field
- clearClockSplits() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'clockSplits' field
- clearContainerId() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Clears the value of the 'containerId' field
- clearContainerId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Clears the value of the 'containerId' field
- clearCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'counters' field
- clearCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'counters' field
- clearCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Clears the value of the 'counters' field
- clearCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'counters' field
- clearCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Clears the value of the 'counters' field
- clearCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Clears the value of the 'counters' field
- clearCounts() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup.Builder
-
Clears the value of the 'counts' field
- clearCpuUsages() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'cpuUsages' field
- clearCpuUsages() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'cpuUsages' field
- clearCpuUsages() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'cpuUsages' field
- clearDiagnostics() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Clears the value of the 'diagnostics' field
- clearDisplayName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter.Builder
-
Clears the value of the 'displayName' field
- clearDisplayName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup.Builder
-
Clears the value of the 'displayName' field
- clearError() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'error' field
- clearError() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Clears the value of the 'error' field
- clearEvent() - Method in class org.apache.hadoop.mapreduce.jobhistory.Event.Builder
-
Clears the value of the 'event' field
- clearFailedDueToAttempt() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Clears the value of the 'failedDueToAttempt' field
- clearFailedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Clears the value of the 'failedMaps' field
- clearFailedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Clears the value of the 'failedReduces' field
- clearFinishedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Clears the value of the 'finishedMaps' field
- clearFinishedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Clears the value of the 'finishedMaps' field
- clearFinishedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Clears the value of the 'finishedReduces' field
- clearFinishedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Clears the value of the 'finishedReduces' field
- clearFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Clears the value of the 'finishTime' field
- clearFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Clears the value of the 'finishTime' field
- clearFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'finishTime' field
- clearFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'finishTime' field
- clearFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Clears the value of the 'finishTime' field
- clearFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'finishTime' field
- clearFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Clears the value of the 'finishTime' field
- clearFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Clears the value of the 'finishTime' field
- clearFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated.Builder
-
Clears the value of the 'finishTime' field
- clearGroups() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters.Builder
-
Clears the value of the 'groups' field
- clearHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'hostname' field
- clearHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'hostname' field
- clearHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Clears the value of the 'hostname' field
- clearHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'hostname' field
- clearHttpPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Clears the value of the 'httpPort' field
- clearJobConfPath() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Clears the value of the 'jobConfPath' field
- clearJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Clears the value of the 'jobid' field
- clearJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange.Builder
-
Clears the value of the 'jobid' field
- clearJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Clears the value of the 'jobid' field
- clearJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange.Builder
-
Clears the value of the 'jobid' field
- clearJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange.Builder
-
Clears the value of the 'jobid' field
- clearJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged.Builder
-
Clears the value of the 'jobid' field
- clearJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Clears the value of the 'jobid' field
- clearJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Clears the value of the 'jobid' field
- clearJobName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Clears the value of the 'jobName' field
- clearJobQueueName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange.Builder
-
Clears the value of the 'jobQueueName' field
- clearJobQueueName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Clears the value of the 'jobQueueName' field
- clearJobStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Clears the value of the 'jobStatus' field
- clearJobStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged.Builder
-
Clears the value of the 'jobStatus' field
- clearJobStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Clears the value of the 'jobStatus' field
- clearLaunchTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange.Builder
-
Clears the value of the 'launchTime' field
- clearLaunchTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Clears the value of the 'launchTime' field
- clearLocality() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Clears the value of the 'locality' field
- clearMapCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Clears the value of the 'mapCounters' field
- clearMapFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'mapFinishTime' field
- clearMark() - Method in class org.apache.hadoop.mapred.BackupStore
-
- clearMark() - Method in class org.apache.hadoop.mapreduce.MarkableIterator
-
- clearMark() - Method in class org.apache.hadoop.mapreduce.task.ReduceContextImpl.ValueIterator
-
- clearName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter.Builder
-
Clears the value of the 'name' field
- clearName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup.Builder
-
Clears the value of the 'name' field
- clearName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters.Builder
-
Clears the value of the 'name' field
- clearNodeManagerHost() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Clears the value of the 'nodeManagerHost' field
- clearNodeManagerHttpPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Clears the value of the 'nodeManagerHttpPort' field
- clearNodeManagerPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Clears the value of the 'nodeManagerPort' field
- clearPhysMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'physMemKbytes' field
- clearPhysMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'physMemKbytes' field
- clearPhysMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'physMemKbytes' field
- clearPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'port' field
- clearPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'port' field
- clearPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'port' field
- clearPriority() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange.Builder
-
Clears the value of the 'priority' field
- clearRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'rackname' field
- clearRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'rackname' field
- clearRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Clears the value of the 'rackname' field
- clearRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'rackname' field
- clearReduceCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Clears the value of the 'reduceCounters' field
- clearShuffleFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'shuffleFinishTime' field
- clearShufflePort() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Clears the value of the 'shufflePort' field
- clearSortFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'sortFinishTime' field
- clearSplitLocations() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted.Builder
-
Clears the value of the 'splitLocations' field
- clearStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Clears the value of the 'startTime' field
- clearStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Clears the value of the 'startTime' field
- clearStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted.Builder
-
Clears the value of the 'startTime' field
- clearState() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'state' field
- clearState() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'state' field
- clearState() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Clears the value of the 'state' field
- clearStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'status' field
- clearStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Clears the value of the 'status' field
- clearStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Clears the value of the 'status' field
- clearSubmitTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange.Builder
-
Clears the value of the 'submitTime' field
- clearSubmitTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Clears the value of the 'submitTime' field
- clearSuccessfulAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Clears the value of the 'successfulAttemptId' field
- clearTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'taskid' field
- clearTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'taskid' field
- clearTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Clears the value of the 'taskid' field
- clearTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Clears the value of the 'taskid' field
- clearTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'taskid' field
- clearTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Clears the value of the 'taskid' field
- clearTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Clears the value of the 'taskid' field
- clearTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted.Builder
-
Clears the value of the 'taskid' field
- clearTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated.Builder
-
Clears the value of the 'taskid' field
- clearTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'taskStatus' field
- clearTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'taskStatus' field
- clearTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Clears the value of the 'taskStatus' field
- clearTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'taskType' field
- clearTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'taskType' field
- clearTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Clears the value of the 'taskType' field
- clearTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Clears the value of the 'taskType' field
- clearTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'taskType' field
- clearTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Clears the value of the 'taskType' field
- clearTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Clears the value of the 'taskType' field
- clearTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted.Builder
-
Clears the value of the 'taskType' field
- clearTotalCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Clears the value of the 'totalCounters' field
- clearTotalMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Clears the value of the 'totalMaps' field
- clearTotalReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Clears the value of the 'totalReduces' field
- clearTrackerName() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Clears the value of the 'trackerName' field
- clearType() - Method in class org.apache.hadoop.mapreduce.jobhistory.Event.Builder
-
Clears the value of the 'type' field
- clearUberized() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Clears the value of the 'uberized' field
- clearUserName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Clears the value of the 'userName' field
- clearValue() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter.Builder
-
Clears the value of the 'value' field
- clearVMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Clears the value of the 'vMemKbytes' field
- clearVMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Clears the value of the 'vMemKbytes' field
- clearVMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Clears the value of the 'vMemKbytes' field
- clearWorkflowAdjacencies() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Clears the value of the 'workflowAdjacencies' field
- clearWorkflowId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Clears the value of the 'workflowId' field
- clearWorkflowName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Clears the value of the 'workflowName' field
- clearWorkflowNodeName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Clears the value of the 'workflowNodeName' field
- clearWorkflowTags() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Clears the value of the 'workflowTags' field
- CLI - Class in org.apache.hadoop.mapreduce.tools
-
Interprets the MapReduce CLI options.
- CLI() - Constructor for class org.apache.hadoop.mapreduce.tools.CLI
-
- CLI(Configuration) - Constructor for class org.apache.hadoop.mapreduce.tools.CLI
-
- ClientDistributedCacheManager - Class in org.apache.hadoop.mapreduce.filecache
-
Manages internal configuration of the cache by the client for job submission.
- ClientDistributedCacheManager() - Constructor for class org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager
-
- ClientProtocol - Interface in org.apache.hadoop.mapreduce.protocol
-
Protocol that a JobClient and the central JobTracker use to communicate.
- ClientProtocolProvider - Class in org.apache.hadoop.mapreduce.protocol
-
- ClientProtocolProvider() - Constructor for class org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider
-
- clockSplits - Variable in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Deprecated.
- clockSplits - Variable in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Deprecated.
- clockSplits - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Deprecated.
- clone() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- clone() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- cloneContext(JobContext, Configuration) - Static method in class org.apache.hadoop.mapreduce.ContextFactory
-
- cloneMapContext(MapContext<K1, V1, K2, V2>, Configuration, RecordReader<K1, V1>, RecordWriter<K2, V2>) - Static method in class org.apache.hadoop.mapreduce.ContextFactory
-
Copy a custom WrappedMapper.Context, optionally replacing
the input and output.
- close() - Method in class org.apache.hadoop.mapred.FixedLengthRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.IFile.Reader
-
- close() - Method in class org.apache.hadoop.mapred.IFile.Writer
-
- close() - Method in class org.apache.hadoop.mapred.IFileInputStream
-
Close the input stream.
- close() - Method in class org.apache.hadoop.mapred.IFileOutputStream
-
- close() - Method in class org.apache.hadoop.mapred.JobClient
-
Close the JobClient.
- close() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Close all child RRs.
- close() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
-
- close() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
-
- close() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Forward close request to proxied RR.
- close() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
-
Do nothing.
- close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
-
- close() - Method in class org.apache.hadoop.mapred.lib.ChainMapper
-
Closes the ChainMapper and all the Mappers in the chain.
- close() - Method in class org.apache.hadoop.mapred.lib.ChainReducer
-
Closes the ChainReducer, the Reducer and all the Mappers in the chain.
- close() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- close(Reporter) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat.DBRecordWriter
-
Close this RecordWriter to future operations.
- close() - Method in class org.apache.hadoop.mapred.lib.DelegatingMapper
-
- close() - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
-
- close(Reporter) - Method in class org.apache.hadoop.mapred.lib.FilterOutputFormat.FilterRecordWriter
-
- close() - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Closes all the opened named outputs.
- close() - Method in class org.apache.hadoop.mapred.LineRecordReader
-
- close() - Method in interface org.apache.hadoop.mapred.MapOutputCollector
-
- close() - Method in class org.apache.hadoop.mapred.MapReduceBase
-
Default implementation that does nothing.
- close() - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer
-
- close() - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer.MRResultIterator
-
- close() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
-
Closes the iterator so that the underlying streams can be closed.
- close() - Method in interface org.apache.hadoop.mapred.RecordReader
-
- close(Reporter) - Method in interface org.apache.hadoop.mapred.RecordWriter
-
Close this RecordWriter to future operations.
- close() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- close() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- close() - Method in interface org.apache.hadoop.mapred.ShuffleConsumerPlugin
-
- close() - Method in class org.apache.hadoop.mapred.TaskLog.Reader
-
- close() - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
- close(Reporter) - Method in class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
-
- close() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Close the Cluster.
- close() - Method in class org.apache.hadoop.mapreduce.jobhistory.EventReader
-
Close the event reader.
- close(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.DBRecordWriter
-
Close this RecordWriter to future operations.
- close() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Close the record reader.
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.FixedLengthRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.ArrayListBackedIterator
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Close all child RRs.
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader.JoinCollector
-
Close all child iterators.
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.JoinRecordReader.JoinDelegationIterator
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.MultiFilterRecordReader.MultiFilterDelegationIterator
-
- close() - Method in interface org.apache.hadoop.mapreduce.lib.join.ResetableIterator
-
Close datasources and release resources.
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.ResetableIterator.EMPTY
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.StreamBackedIterator
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Forward close request to proxied RR.
- close(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat.FilterRecordWriter
-
- close() - Method in class org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
-
Closes all the opened outputs.
- close(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.LineRecordWriter
-
- close(ClientProtocol) - Method in class org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider
-
- close() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
Close the record reader.
- close(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.RecordWriter
-
Close this RecordWriter to future operations.
- close() - Method in class org.apache.hadoop.mapreduce.task.reduce.InMemoryReader
-
- close() - Method in class org.apache.hadoop.mapreduce.task.reduce.InMemoryWriter
-
- close() - Method in interface org.apache.hadoop.mapreduce.task.reduce.MergeManager
-
Called at the end of shuffle.
- close() - Method in class org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl
-
- close() - Method in class org.apache.hadoop.mapreduce.task.reduce.Shuffle
-
- close() - Method in interface org.apache.hadoop.mapreduce.task.reduce.ShuffleScheduler
-
- close() - Method in class org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl
-
- closeConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- closeInMemoryFile(InMemoryMapOutput<K, V>) - Method in class org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl
-
- closeInMemoryMergedFile(InMemoryMapOutput<K, V>) - Method in class org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl
-
- closeOnDiskFile(MergeManagerImpl.CompressAwarePath) - Method in class org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl
-
- Cluster - Class in org.apache.hadoop.mapreduce
-
Provides a way to access information about the map/reduce cluster.
- Cluster(Configuration) - Constructor for class org.apache.hadoop.mapreduce.Cluster
-
- Cluster(InetSocketAddress, Configuration) - Constructor for class org.apache.hadoop.mapreduce.Cluster
-
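A brief sketch of using the Cluster entries above: connect with the Cluster(Configuration) constructor, read the current metrics, then close the connection. The getter name on ClusterMetrics is an assumption rather than something quoted from this index.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Cluster;
    import org.apache.hadoop.mapreduce.ClusterMetrics;

    public class ClusterInfo {
      public static void main(String[] args) throws Exception {
        Cluster cluster = new Cluster(new Configuration());
        try {
          ClusterMetrics metrics = cluster.getClusterStatus();
          System.out.println("Task trackers: " + metrics.getTaskTrackerCount());
        } finally {
          cluster.close(); // release the connection to the cluster
        }
      }
    }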
- cluster - Variable in class org.apache.hadoop.mapreduce.tools.CLI
-
- Cluster.JobTrackerStatus - Enum in org.apache.hadoop.mapreduce
-
- ClusterMetrics - Class in org.apache.hadoop.mapreduce
-
Status information on the current state of the Map-Reduce cluster.
- ClusterMetrics() - Constructor for class org.apache.hadoop.mapreduce.ClusterMetrics
-
- ClusterMetrics(int, int, int, int, int, int, int, int, int, int, int, int) - Constructor for class org.apache.hadoop.mapreduce.ClusterMetrics
-
- ClusterMetrics(int, int, int, int, int, int, int, int, int, int, int, int, int) - Constructor for class org.apache.hadoop.mapreduce.ClusterMetrics
-
- ClusterStatus - Class in org.apache.hadoop.mapred
-
Status information on the current state of the Map-Reduce cluster.
- ClusterStatus.BlackListInfo - Class in org.apache.hadoop.mapred
-
Class which encapsulates information about a blacklisted tasktracker.
- cmp - Variable in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
- cmpcl - Variable in class org.apache.hadoop.mapred.join.Parser.Node
-
- cmpcl - Variable in class org.apache.hadoop.mapreduce.lib.join.Parser.Node
-
- collect(K, V, int) - Method in interface org.apache.hadoop.mapred.MapOutputCollector
-
- collect(K, V, int) - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer
-
Serialize the key, value to intermediate storage.
- collect(K, V) - Method in interface org.apache.hadoop.mapred.OutputCollector
-
Adds a key/value pair to the output.
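A short old-API sketch: each collect() call adds one key/value pair to the map output. LineLengthMapper and its output key are illustrative.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class LineLengthMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {
      public void map(LongWritable key, Text value,
                      OutputCollector<Text, IntWritable> output, Reporter reporter)
          throws IOException {
        // One collect() call per emitted key/value pair.
        output.collect(new Text("length"), new IntWritable(value.getLength()));
      }
    }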
- collect(K, V) - Method in class org.apache.hadoop.mapred.Task.CombineOutputCollector
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.InnerJoinRecordReader
-
Return true iff the tuple is full (all data sources contain this key).
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.OuterJoinRecordReader
-
Emit everything from the collector.
- combine(RawKeyValueIterator, OutputCollector<K, V>) - Method in class org.apache.hadoop.mapred.Task.CombinerRunner
-
Run the combiner over a set of inputs.
- combine(RawKeyValueIterator, OutputCollector<K, V>) - Method in class org.apache.hadoop.mapred.Task.NewCombinerRunner
-
- combine(RawKeyValueIterator, OutputCollector<K, V>) - Method in class org.apache.hadoop.mapred.Task.OldCombinerRunner
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.InnerJoinRecordReader
-
Return true iff the tuple is full (all data sources contain this key).
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.MultiFilterRecordReader
-
- combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapreduce.lib.join.OuterJoinRecordReader
-
Emit everything from the collector.
- COMBINE_CLASS_ATTR - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COMBINE_RECORDS_BEFORE_PROGRESS - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CombineFileInputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
-
- CombineFileInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
Default constructor.
- CombineFileInputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
- CombineFileInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Default constructor.
- CombineFileRecordReader<K,V> - Class in org.apache.hadoop.mapred.lib
-
A generic RecordReader that can hand out different recordReaders
for each chunk in a CombineFileSplit.
- CombineFileRecordReader(JobConf, CombineFileSplit, Reporter, Class<RecordReader<K, V>>) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
A generic RecordReader that can hand out different recordReaders
for each chunk in the CombineFileSplit.
- CombineFileRecordReader<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
A generic RecordReader that can hand out different recordReaders
for each chunk in a CombineFileSplit.
- CombineFileRecordReader(CombineFileSplit, TaskAttemptContext, Class<? extends RecordReader<K, V>>) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
A generic RecordReader that can hand out different recordReaders
for each chunk in the CombineFileSplit.
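A sketch of the usual pattern behind these entries: a CombineFileInputFormat subclass returns a CombineFileRecordReader, which instantiates the per-chunk reader class once per chunk of the CombineFileSplit. ChunkLineReader is hypothetical; per the wrapper contract it would need a (CombineFileSplit, TaskAttemptContext, Integer) constructor.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
    import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;

    public class CombinedLinesInputFormat extends CombineFileInputFormat<LongWritable, Text> {
      @Override
      public RecordReader<LongWritable, Text> createRecordReader(
          InputSplit split, TaskAttemptContext context) throws IOException {
        // CombineFileRecordReader hands each chunk to a new ChunkLineReader instance.
        return new CombineFileRecordReader<LongWritable, Text>(
            (CombineFileSplit) split, context, ChunkLineReader.class);
      }
    }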
- CombineFileRecordReaderWrapper<K,V> - Class in org.apache.hadoop.mapred.lib
-
A wrapper class for a record reader that handles a single file split.
- CombineFileRecordReaderWrapper(FileInputFormat<K, V>, CombineFileSplit, Configuration, Reporter, Integer) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- CombineFileRecordReaderWrapper<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
A wrapper class for a record reader that handles a single file split.
- CombineFileRecordReaderWrapper(FileInputFormat<K, V>, CombineFileSplit, TaskAttemptContext, Integer) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper
-
- CombineFileSplit - Class in org.apache.hadoop.mapred.lib
-
- CombineFileSplit() - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- CombineFileSplit(JobConf, Path[], long[], long[], String[]) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- CombineFileSplit(JobConf, Path[], long[]) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- CombineFileSplit(CombineFileSplit) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
-
Copy constructor
- CombineFileSplit - Class in org.apache.hadoop.mapreduce.lib.input
-
A sub-collection of input files.
- CombineFileSplit() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Default constructor.
- CombineFileSplit(Path[], long[], long[], String[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
- CombineFileSplit(Path[], long[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
- CombineFileSplit(CombineFileSplit) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Copy constructor
- COMBINER_GROUP_COMPARATOR_CLASS - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CombineSequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
-
Input format that is a CombineFileInputFormat-equivalent for
SequenceFileInputFormat.
- CombineSequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.CombineSequenceFileInputFormat
-
- CombineSequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
Input format that is a CombineFileInputFormat-equivalent for
SequenceFileInputFormat.
- CombineSequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineSequenceFileInputFormat
-
- CombineTextInputFormat - Class in org.apache.hadoop.mapred.lib
-
Input format that is a CombineFileInputFormat-equivalent for
TextInputFormat.
- CombineTextInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.CombineTextInputFormat
-
- CombineTextInputFormat - Class in org.apache.hadoop.mapreduce.lib.input
-
Input format that is a CombineFileInputFormat-equivalent for
TextInputFormat.
- CombineTextInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat
-
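A minimal job-setup sketch for the CombineTextInputFormat entry above; the input path and the 128 MB cap are illustrative, and it is assumed here that the max-split-size setting from FileInputFormat is honored when combining small files.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class CombineTextJobSetup {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "combine-small-files");
        // Pack many small text files into fewer, larger splits.
        job.setInputFormatClass(CombineTextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024); // illustrative cap
      }
    }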
- commit() - Method in class org.apache.hadoop.mapreduce.task.reduce.MapOutput
-
- commitJob(JobContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- commitJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
For committing the job's output after successful job completion.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
The job has completed, so move all committed task output to the final output directory.
- commitJob(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
For committing the job's output after successful job completion.
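A hedged sketch of extending the commit step: reuse FileOutputCommitter's commitJob, which performs the move to the final output directory described above, then add an application-specific step. The class name and the follow-up action are hypothetical.

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;

    public class MarkerFileOutputCommitter extends FileOutputCommitter {
      public MarkerFileOutputCommitter(Path outputPath, TaskAttemptContext context)
          throws IOException {
        super(outputPath, context);
      }

      @Override
      public void commitJob(JobContext context) throws IOException {
        super.commitJob(context); // promote all committed task output to the final directory
        // Application-specific post-commit work (e.g. writing a marker) would go here.
      }
    }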
- commitPending(TaskAttemptID, TaskStatus) - Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
-
Report that the task is complete, but its commit is pending.
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
To promote the task's temporary output to the final output location.
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Move the files from the work directory to the job output directory.
- commitTask(TaskAttemptContext, Path) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
- commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
To promote the task's temporary output to the final output location.
- committer - Variable in class org.apache.hadoop.mapred.Task
-
- COMPARATOR_OPTIONS - Static variable in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- compare(int, int) - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer
-
Compare logical range, st i, j MOD offset capacity.
- compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- compare(MapOutput<K, V>, MapOutput<K, V>) - Method in class org.apache.hadoop.mapreduce.task.reduce.MapOutput.MapOutputComparator
-
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Implement Comparable contract (compare key of join or head of heap
with that of another).
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Implement Comparable contract (compare key at head of proxied RR
with that of another).
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.ID
-
Compare IDs by associated numbers.
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.JobID
-
Compare JobIds first by jtIdentifiers, then by job numbers.
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Implement Comparable contract (compare key of join or head of heap
with that of another).
- compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Implement Comparable contract (compare key at head of proxied RR
with that of another).
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Compare TaskIds first by tipIds, then by task numbers.
- compareTo(ID) - Method in class org.apache.hadoop.mapreduce.TaskID
-
Compare TaskInProgressIds first by jobIds, then by tip numbers.
- COMPLETED_MAPS_FOR_REDUCE_SLOWSTART - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COMPLETION_POLL_INTERVAL_KEY - Static variable in class org.apache.hadoop.mapreduce.Job
-
Key in mapred-*.xml that sets completionPollIntervalMillis.
- ComposableInputFormat<K extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable> - Interface in org.apache.hadoop.mapred.join
-
Refinement of InputFormat requiring implementors to provide
ComposableRecordReader instead of RecordReader.
- ComposableInputFormat<K extends org.apache.hadoop.io.WritableComparable<?>,V extends org.apache.hadoop.io.Writable> - Class in org.apache.hadoop.mapreduce.lib.join
-
Refinement of InputFormat requiring implementors to provide
ComposableRecordReader instead of RecordReader.
- ComposableInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.join.ComposableInputFormat
-
- ComposableRecordReader<K extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable> - Interface in org.apache.hadoop.mapred.join
-
Additional operations required of a RecordReader to participate in a join.
- ComposableRecordReader<K extends org.apache.hadoop.io.WritableComparable<?>,V extends org.apache.hadoop.io.Writable> - Class in org.apache.hadoop.mapreduce.lib.join
-
Additional operations required of a RecordReader to participate in a join.
- ComposableRecordReader() - Constructor for class org.apache.hadoop.mapreduce.lib.join.ComposableRecordReader
-
- compose(Class<? extends InputFormat>, String) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, String...) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, Path...) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(Class<? extends InputFormat>, String) - Static method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, String...) - Static method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- compose(String, Class<? extends InputFormat>, Path...) - Static method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Convenience method for constructing composite formats.
- CompositeInputFormat<K extends org.apache.hadoop.io.WritableComparable> - Class in org.apache.hadoop.mapred.join
-
An InputFormat capable of performing joins over a set of data sources sorted
and partitioned the same way.
- CompositeInputFormat() - Constructor for class org.apache.hadoop.mapred.join.CompositeInputFormat
-
- CompositeInputFormat<K extends org.apache.hadoop.io.WritableComparable> - Class in org.apache.hadoop.mapreduce.lib.join
-
An InputFormat capable of performing joins over a set of data sources sorted
and partitioned the same way.
- CompositeInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
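A brief sketch of the compose/CompositeInputFormat entries above, joining two sorted, identically partitioned SequenceFile inputs. The property name "mapreduce.join.expr" is assumed for the new API, and the input paths come from the command line.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat;

    public class InnerJoinSetup {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // compose() builds the join expression string for an inner join.
        conf.set("mapreduce.join.expr",
            CompositeInputFormat.compose("inner", SequenceFileInputFormat.class,
                new Path(args[0]), new Path(args[1])));
        Job job = Job.getInstance(conf, "sorted-inner-join");
        job.setInputFormatClass(CompositeInputFormat.class);
      }
    }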
- CompositeInputSplit - Class in org.apache.hadoop.mapred.join
-
This InputSplit contains a set of child InputSplits.
- CompositeInputSplit() - Constructor for class org.apache.hadoop.mapred.join.CompositeInputSplit
-
- CompositeInputSplit(int) - Constructor for class org.apache.hadoop.mapred.join.CompositeInputSplit
-
- CompositeInputSplit - Class in org.apache.hadoop.mapreduce.lib.join
-
This InputSplit contains a set of child InputSplits.
- CompositeInputSplit() - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
- CompositeInputSplit(int) - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
- CompositeRecordReader<K extends org.apache.hadoop.io.WritableComparable,V extends org.apache.hadoop.io.Writable,X extends org.apache.hadoop.io.Writable> - Class in org.apache.hadoop.mapred.join
-
A RecordReader that can effect joins of RecordReaders sharing a common key
type and partitioning.
- CompositeRecordReader(int, int, Class<? extends WritableComparator>) - Constructor for class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Create a RecordReader with capacity children to position
id in the parent reader.
- CompositeRecordReader<K extends org.apache.hadoop.io.WritableComparable<?>,V extends org.apache.hadoop.io.Writable,X extends org.apache.hadoop.io.Writable> - Class in org.apache.hadoop.mapreduce.lib.join
-
A RecordReader that can effect joins of RecordReaders sharing a common key
type and partitioning.
- CompositeRecordReader(int, int, Class<? extends WritableComparator>) - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Create a RecordReader with capacity children to position
id in the parent reader.
- CompositeRecordReader.JoinCollector - Class in org.apache.hadoop.mapreduce.lib.join
-
Collector for join values.
- CompositeRecordReader.JoinCollector(int) - Constructor for class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader.JoinCollector
-
Construct a collector capable of handling the specified number of
children.
- COMPRESS - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- COMPRESS_CODEC - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- COMPRESS_TYPE - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- CompressedSplitLineReader - Class in org.apache.hadoop.mapreduce.lib.input
-
Line reader for compressed splits. Reading records from a compressed split is tricky,
as the LineRecordReader uses the reported compressed input stream position directly
to determine when a split has ended.
- CompressedSplitLineReader(SplitCompressionInputStream, Configuration, byte[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.CompressedSplitLineReader
-
- computeHash(byte[], SecretKey) - Static method in class org.apache.hadoop.mapreduce.security.token.JobTokenSecretManager
-
Compute the HMAC hash of the message using the key
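This is not the framework's internal code; the following is a generic javax.crypto sketch of the HMAC operation the description refers to, with HmacSHA1 assumed purely for illustration.

    import javax.crypto.Mac;
    import javax.crypto.SecretKey;

    // Compute an HMAC of a message with a secret key (generic sketch).
    static byte[] hmac(byte[] msg, SecretKey key) throws Exception {
      Mac mac = Mac.getInstance("HmacSHA1"); // algorithm chosen for illustration
      mac.init(key);
      return mac.doFinal(msg);
    }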
- computeSplitSize(long, long, long) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- computeSplitSize(long, long, long) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- conditions - Variable in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- conf - Variable in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- conf - Variable in class org.apache.hadoop.mapred.Task
-
- conf - Variable in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- conf - Variable in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
- conf - Variable in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.FixedLengthInputFormat
-
- configure(JobConf) - Method in interface org.apache.hadoop.mapred.JobConfigurable
-
Initializes a new instance from a JobConf.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Do nothing.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
Get the input file name.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
-
The combiner does not need to be configured.
- configure(JobConf) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
-
Configure the object
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.BinaryPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.ChainMapper
-
Configures the ChainMapper and all the Mappers in the chain.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.ChainReducer
-
Configures the ChainReducer, the Reducer and all the Mappers in the chain.
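A minimal sketch of configuring a chained pipeline with the old (mapred) API; AMap, BMap and XReduce stand in for user-written Mapper/Reducer classes and are assumptions.

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.ChainMapper;
    import org.apache.hadoop.mapred.lib.ChainReducer;

    // Inside a job-submission method (conf is an existing Configuration):
    JobConf job = new JobConf(conf);
    // Two mappers run back to back, then a single reducer.
    ChainMapper.addMapper(job, AMap.class, LongWritable.class, Text.class,
        Text.class, Text.class, true, new JobConf(false));
    ChainMapper.addMapper(job, BMap.class, Text.class, Text.class,
        LongWritable.class, Text.class, true, new JobConf(false));
    ChainReducer.setReducer(job, XReduce.class, LongWritable.class, Text.class,
        Text.class, Text.class, true, new JobConf(false));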
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
Initializes a new instance from a JobConf.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.DelegatingMapper
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedComparator
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.RegexMapper
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.MapReduceBase
-
Default implementation that does nothing.
- configure(JobConf) - Method in class org.apache.hadoop.mapred.MapRunner
-
- configure(JobConf) - Method in class org.apache.hadoop.mapred.TextInputFormat
-
- configure(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Do nothing.
- configure(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
Get the input file name.
- configure(Configuration) - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorDescriptor
-
Configure the object
- configureDB(JobConf, String, String, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Sets the DB access related fields in the JobConf.
- configureDB(JobConf, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBConfiguration
-
Sets the DB access related fields in the JobConf.
- configureDB(Configuration, String, String, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Sets the DB access related fields in the Configuration.
- configureDB(Configuration, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Sets the DB access related fields in the Configuration.
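A minimal sketch of the Configuration-based overloads above; the driver class, connection URL, and credentials are illustrative placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;

    Configuration conf = new Configuration();
    // Five-argument form: driver class, connection URL, user name, password.
    DBConfiguration.configureDB(conf,
        "com.mysql.jdbc.Driver",
        "jdbc:mysql://dbhost:3306/analytics",
        "dbuser", "dbpassword");
    // Two-argument form for databases that require no credentials:
    // DBConfiguration.configureDB(conf, "org.hsqldb.jdbcDriver",
    //     "jdbc:hsqldb:hsql://localhost/mydb");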
- ConfigUtil - Class in org.apache.hadoop.mapreduce.util
-
Placeholder for deprecated keys in the framework.
- ConfigUtil() - Constructor for class org.apache.hadoop.mapreduce.util.ConfigUtil
-
- connection - Variable in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- constructJobACLs(Configuration) - Method in class org.apache.hadoop.mapred.JobACLsManager
-
Construct the jobACLs from the configuration so that they can be kept in memory.
- constructQuery(String, String[]) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
Constructs the query used as the prepared statement to insert data.
- containerId - Variable in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Deprecated.
- containerId - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Deprecated.
- contentEquals(Counters.Counter) - Method in class org.apache.hadoop.mapred.Counters.Counter
-
Deprecated.
- context - Variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- ContextFactory - Class in org.apache.hadoop.mapreduce
-
A factory to allow applications to deal with inconsistencies in the MapReduce
Context Objects API between hadoop-0.20 and later versions.
- ContextFactory() - Constructor for class org.apache.hadoop.mapreduce.ContextFactory
-
- ControlledJob - Class in org.apache.hadoop.mapreduce.lib.jobcontrol
-
This class encapsulates a MapReduce job and its dependency.
- ControlledJob(Job, List<ControlledJob>) - Constructor for class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Construct a job.
- ControlledJob(Configuration) - Constructor for class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Construct a job.
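A minimal sketch of expressing the dependency that ControlledJob encapsulates; jobA and jobB are assumed to be fully configured Job instances, and exception handling is omitted.

    import org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob;
    import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;

    ControlledJob first  = new ControlledJob(jobA.getConfiguration());
    ControlledJob second = new ControlledJob(jobB.getConfiguration());
    second.addDependingJob(first);        // second starts only after first succeeds

    JobControl control = new JobControl("pipeline");
    control.addJob(first);
    control.addJob(second);
    new Thread(control).start();          // JobControl is a Runnable
    while (!control.allFinished()) {
      Thread.sleep(5000);
    }
    control.stop();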
- ControlledJob.State - Enum in org.apache.hadoop.mapreduce.lib.jobcontrol
-
- convertTrackerNameToHostName(String) - Static method in class org.apache.hadoop.mapreduce.util.HostUtil
-
- copyFailed(TaskAttemptID, MapHost, boolean, boolean) - Method in class org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl
-
- copySucceeded(TaskAttemptID, MapHost, long, long, MapOutput<K, V>) - Method in class org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl
-
- count - Variable in class org.apache.hadoop.mapred.PeriodicStatsAccumulator
-
- countCounters() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Returns the total number of counters, by summing the number of counters
in each group.
- Counter - Interface in org.apache.hadoop.mapreduce
-
A named counter that tracks the progress of a map/reduce job.
- COUNTER_GROUP - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
-
Special counters which are written by the application and are
used by the framework for detecting bad records.
- COUNTER_GROUP_NAME_MAX_DEFAULT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTER_GROUP_NAME_MAX_KEY - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTER_GROUPS_MAX_DEFAULT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTER_GROUPS_MAX_KEY - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTER_MAP_PROCESSED_RECORDS - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
-
Number of processed map records.
- COUNTER_NAME_MAX_DEFAULT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTER_NAME_MAX_KEY - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTER_REDUCE_PROCESSED_GROUPS - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
-
Number of processed reduce groups.
- COUNTER_UPDATE_INTERVAL - Static variable in interface org.apache.hadoop.mapred.MRConstants
-
- CounterGroup - Interface in org.apache.hadoop.mapreduce
-
A group of Counters that logically belong together.
- CounterGroupBase<T extends Counter> - Interface in org.apache.hadoop.mapreduce.counters
-
The common counter group interface.
- CounterGroupFactory<C extends Counter,G extends CounterGroupBase<C>> - Class in org.apache.hadoop.mapreduce.counters
-
An abstract class to provide common implementation of the
group factory in both mapred and mapreduce packages.
- CounterGroupFactory() - Constructor for class org.apache.hadoop.mapreduce.counters.CounterGroupFactory
-
- CounterGroupFactory.FrameworkGroupFactory<F> - Interface in org.apache.hadoop.mapreduce.counters
-
- Counters - Class in org.apache.hadoop.mapred
-
A set of named counters.
- Counters() - Constructor for class org.apache.hadoop.mapred.Counters
-
- Counters(Counters) - Constructor for class org.apache.hadoop.mapred.Counters
-
- Counters - Class in org.apache.hadoop.mapreduce
-
Counters holds per job/task counters, defined either by the Map-Reduce framework or applications.
- Counters() - Constructor for class org.apache.hadoop.mapreduce.Counters
-
Default constructor
- Counters(AbstractCounters<C, G>) - Constructor for class org.apache.hadoop.mapreduce.Counters
-
Construct the Counters object from another counters object.
- counters - Variable in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Deprecated.
- counters - Variable in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Deprecated.
- counters - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Deprecated.
- counters - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Deprecated.
- counters - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Deprecated.
- counters - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Deprecated.
- Counters.Counter - Class in org.apache.hadoop.mapred
-
A counter record, comprising its name and value.
- Counters.Counter() - Constructor for class org.apache.hadoop.mapred.Counters.Counter
-
- Counters.CountersExceededException - Exception in org.apache.hadoop.mapred
-
Counter exception thrown when the number of counters exceeds the limit.
- Counters.CountersExceededException(String) - Constructor for exception org.apache.hadoop.mapred.Counters.CountersExceededException
-
- Counters.CountersExceededException(Counters.CountersExceededException) - Constructor for exception org.apache.hadoop.mapred.Counters.CountersExceededException
-
- Counters.Group - Class in org.apache.hadoop.mapred
-
A group of counters, comprising counters from a particular counter Enum class.
- Counters.Group() - Constructor for class org.apache.hadoop.mapred.Counters.Group
-
- COUNTERS_MAX_DEFAULT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- COUNTERS_MAX_KEY - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- CountersStrings - Class in org.apache.hadoop.mapreduce.util
-
String conversion utilities for counters.
- CountersStrings() - Constructor for class org.apache.hadoop.mapreduce.util.CountersStrings
-
- counts - Variable in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup
-
Deprecated.
- cpuUsages - Variable in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Deprecated.
- cpuUsages - Variable in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Deprecated.
- cpuUsages - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Deprecated.
- create(JobConf, TaskAttemptID, Counters.Counter, Task.TaskReporter, OutputCommitter) - Static method in class org.apache.hadoop.mapred.Task.CombinerRunner
-
- create(Configuration) - Method in class org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider
-
- create(InetSocketAddress, Configuration) - Method in class org.apache.hadoop.mapreduce.protocol.ClientProtocolProvider
-
- CREATE_DIR - Static variable in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- createAllSymlink(Configuration, File, File) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
Internal to MapReduce framework. Use DistributedCacheManager
instead.
- createDBRecordReader(DBInputFormat.DBInputSplit, Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- createDBRecordReader(DBInputFormat.DBInputSplit, Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- createDBRecordReader(DBInputFormat.DBInputSplit, Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.OracleDataDrivenDBInputFormat
-
- createFileSplit(Path, long, long) - Static method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
NLineInputFormat uses LineRecordReader, which always reads
(and consumes) at least one character out of its upper split
boundary.
- createFileSplit(Path, long, long) - Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
NLineInputFormat uses LineRecordReader, which always reads
(and consumes) at least one character out of its upper split
boundary.
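A minimal sketch of using NLineInputFormat with the new API; the line count and job name are illustrative.

    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

    // conf is an existing Configuration.
    Job job = Job.getInstance(conf, "n-line-example");
    job.setInputFormatClass(NLineInputFormat.class);
    NLineInputFormat.setNumLinesPerSplit(job, 100);  // 100 input lines per map task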
- createIdentifier() - Method in class org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenSecretManager
-
- createIdentifier() - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenSecretManager
-
Create an empty job token identifier
- createInMemoryMerger() - Method in class org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl
-
- createInstance(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Create an instance of the given class
- createInstance(String) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Create an instance of the given class
- createInternalValue() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Create a value to be used internally for joins.
- createKey() - Method in class org.apache.hadoop.mapred.FixedLengthRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Create a new key value common to all child RRs.
- createKey() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request new key from proxied RR.
- createKey() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- createKey() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
-
Create an object of the appropriate type to be used as a key.
- createKey() - Method in class org.apache.hadoop.mapred.LineRecordReader
-
- createKey() - Method in interface org.apache.hadoop.mapred.RecordReader
-
Create an object of the appropriate type to be used as a key.
- createKey() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- createKey() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Create a new key common to all child RRs.
- createKey() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Request new key from proxied RR.
- createLogSyncer() - Static method in class org.apache.hadoop.mapred.TaskLog
-
- createMergeManager(ShuffleConsumerPlugin.Context) - Method in class org.apache.hadoop.mapreduce.task.reduce.Shuffle
-
- createPassword(JobTokenIdentifier) - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenSecretManager
-
Create a new password/secret for the given job token identifier.
- createPool(JobConf, List<PathFilter>) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- createPool(JobConf, PathFilter...) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- createPool(List<PathFilter>) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Create a new pool and add the filters to it.
- createPool(PathFilter...) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Create a new pool and add the filters to it.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.InputFormat
-
Create a record reader for a given split.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Create a record reader for a given split.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
This is not implemented yet.
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineSequenceFileInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter
-
Create a record reader for the given split
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.TextInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.join.ComposableInputFormat
-
- createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Construct a CompositeRecordReader for the children of this InputFormat
as defined in the init expression.
- createReduceContext(Reducer<INKEY, INVALUE, OUTKEY, OUTVALUE>, Configuration, TaskAttemptID, RawKeyValueIterator, Counter, Counter, RecordWriter<OUTKEY, OUTVALUE>, OutputCommitter, StatusReporter, RawComparator<INKEY>, Class<INKEY>, Class<INVALUE>) - Static method in class org.apache.hadoop.mapred.Task
-
- createSecretKey(byte[]) - Static method in class org.apache.hadoop.mapreduce.security.token.JobTokenSecretManager
-
Convert the byte[] to a secret key
- createSplitFiles(Path, Configuration, FileSystem, List<InputSplit>) - Static method in class org.apache.hadoop.mapreduce.split.JobSplitWriter
-
- createSplitFiles(Path, Configuration, FileSystem, T[]) - Static method in class org.apache.hadoop.mapreduce.split.JobSplitWriter
-
- createSplitFiles(Path, Configuration, FileSystem, InputSplit[]) - Static method in class org.apache.hadoop.mapreduce.split.JobSplitWriter
-
- createSymlink(Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
Deprecated.
This is a NO-OP.
- createSymlink() - Method in class org.apache.hadoop.mapreduce.Job
-
Deprecated.
- createTupleWritable() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Create a value to be used internally for joins.
- createValue() - Method in class org.apache.hadoop.mapred.FixedLengthRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request new value from proxied RR.
- createValue() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- createValue() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.LineRecordReader
-
- createValue() - Method in interface org.apache.hadoop.mapred.RecordReader
-
Create an object of the appropriate type to be used as a value.
- createValue() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Deprecated.
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.join.JoinRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.join.OverrideRecordReader
-
- createValue() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
- createValueAggregatorJob(String[], Class<?>) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
Create an Aggregate based map/reduce job.
- createValueAggregatorJob(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
Create an Aggregate based map/reduce job.
- createValueAggregatorJob(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJob(String[], Class<? extends ValueAggregatorDescriptor>[], Class<?>) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJob(Configuration, String[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
Create an Aggregate based map/reduce job.
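A minimal sketch of a driver built around the new-API helper above; args would normally carry the input/output directories and descriptor settings from the command line.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob;

    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      Job job = ValueAggregatorJob.createValueAggregatorJob(conf, args);
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }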
- createValueAggregatorJob(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
- createValueAggregatorJobs(String[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
- credentials - Variable in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- curReader - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- curReader - Variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- currentKeyLength - Variable in class org.apache.hadoop.mapred.IFile.Reader
-
- currentValueLength - Variable in class org.apache.hadoop.mapred.IFile.Reader
-
- gcUpdater - Variable in class org.apache.hadoop.mapred.Task
-
- generateActualKey(K, V) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the actual key from the given key/value.
- generateActualValue(K, V) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the actual value from the given key and value.
- generateEntry(String, String, Text) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- generateEntry(String, String, Text) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
- generateFileNameForKeyValue(K, V, String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the output file name based on the given key and the leaf file name.
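A minimal sketch of overriding this hook in a MultipleTextOutputFormat subclass; the class name and the key-per-directory naming scheme are assumptions.

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

    // Route each record to an output file named after its key.
    public static class KeyBasedOutput extends MultipleTextOutputFormat<Text, Text> {
      @Override
      protected String generateFileNameForKeyValue(Text key, Text value, String leafName) {
        return key.toString() + "/" + leafName;  // e.g. "2024-01-01/part-00000"
      }
    }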
- generateHash(byte[], SecretKey) - Static method in class org.apache.hadoop.mapreduce.security.SecureShuffleUtils
-
Base64 encoded hash of msg
- generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
Generate a list of aggregation-id/value pairs for the given
key/value pairs by delegating the invocation to the real object.
- generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
Generate 1 or 2 aggregation-id/value pairs for the given key/value pair.
- generateKeyValPairs(Object, Object) - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorDescriptor
-
Generate a list of aggregation-id/value pairs for
the given key/value pair.
- generateLeafFileName(String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the leaf name for the output file name.
- generateValueAggregator(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- generateValueAggregator(String, long) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
- GenericCounter - Class in org.apache.hadoop.mapreduce.counters
-
A generic counter implementation
- GenericCounter() - Constructor for class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- GenericCounter(String, String) - Constructor for class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- GenericCounter(String, String, long) - Constructor for class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- get(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Get ith child InputSplit.
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.Event
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
- get(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated
-
- get(int) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
Get ith child InputSplit.
- get(int) - Method in class org.apache.hadoop.mapreduce.lib.join.TupleWritable
-
Get ith Writable from Tuple.
- getAclName() - Method in enum org.apache.hadoop.mapred.QueueACL
-
- getAclName() - Method in enum org.apache.hadoop.mapreduce.JobACL
-
Get the name of the ACL.
- getAcls() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Gets the value of the 'acls' field
- getAcls() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Gets the value of the 'acls' field.
- getActiveTaskTrackers() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get all active trackers in the cluster.
- getActiveTrackerNames() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the names of active task trackers in the cluster.
- getActiveTrackers() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Get all active trackers in cluster.
- getAggregatorDescriptors(Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJobBase
-
- getAllCompletedTaskAttempts() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getAllJobs() - Method in class org.apache.hadoop.mapred.JobClient
-
Get the jobs that are submitted.
- getAllJobs() - Method in class org.apache.hadoop.mapreduce.Cluster
-
- getAllJobs() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Get all the jobs submitted.
- getAllJobStatuses() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get job status for all jobs in the cluster.
- getAllTaskAttempts() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskInfo
-
- getAllTasks() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getAllTaskTypes() - Static method in class org.apache.hadoop.mapreduce.TaskID
-
- getAMInfos() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getAndClearKnownMaps() - Method in class org.apache.hadoop.mapreduce.task.reduce.MapHost
-
- getAppAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStartedEvent
-
- getAppAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.AMInfo
-
- getApplicationAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Gets the value of the 'applicationAttemptId' field
- getApplicationAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Gets the value of the 'applicationAttemptId' field.
- getApplicationId() - Method in class org.apache.hadoop.mapreduce.v2.LogParams
-
- getArchiveClassPaths(Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
- getArchiveClassPaths() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the archive entries in classpath as an array of Path
- getArchiveClassPaths() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getArchiveClassPaths() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getArchiveClassPaths() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the archive entries in classpath as an array of Path
- getArchiveTimestamps(Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
- getArchiveTimestamps() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the timestamps of the archives.
- getArchiveTimestamps() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getArchiveTimestamps() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getArchiveTimestamps() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the timestamps of the archives.
- getArchiveVisibilities(Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
Deprecated.
Get the booleans indicating whether the archives are public or not.
- getAssignedJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'attemptId' field
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'attemptId' field.
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
Get the attempt id
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'attemptId' field
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'attemptId' field.
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
Get the attempt id
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Gets the value of the 'attemptId' field
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Gets the value of the 'attemptId' field.
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinishedEvent
-
Get the task attempt id
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Gets the value of the 'attemptId' field
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Gets the value of the 'attemptId' field.
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'attemptId' field
- getAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'attemptId' field.
- getAttemptsToStartSkipping(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the number of Task attempts AFTER which skip mode
will be kicked off.
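A minimal sketch, assuming the matching setter setAttemptsToStartSkipping; the threshold value is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapred.SkipBadRecords;

    Configuration conf = new Configuration();
    SkipBadRecords.setAttemptsToStartSkipping(conf, 2);
    int threshold = SkipBadRecords.getAttemptsToStartSkipping(conf);  // now 2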
- getAutoIncrMapperProcCount(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
- getAutoIncrReducerProcCount(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
- getAvataar() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Gets the value of the 'avataar' field
- getAvataar() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Gets the value of the 'avataar' field.
- getAvataar() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
Get the avataar
- getAvgMapTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.AnalyzedJob
-
Get the average map time
- getAvgReduceTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.AnalyzedJob
-
Get the average reduce time
- getAvgShuffleTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.AnalyzedJob
-
Get the average shuffle time
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleSequenceFileOutputFormat
-
- getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleTextOutputFormat
-
- getBaseUrl() - Method in class org.apache.hadoop.mapreduce.task.reduce.MapHost
-
- getBlackListedTaskTrackerCount() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of blacklisted trackers in the cluster.
- getBlackListedTaskTrackers() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get blacklisted trackers.
- getBlacklistedTrackerNames() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the names of blacklisted task trackers in the cluster.
- getBlacklistedTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of blacklisted task trackers in the cluster.
- getBlacklistedTrackers() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Get all blacklisted trackers in cluster.
- getBlackListedTrackersInfo() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Gets the list of blacklisted trackers along with reasons for blacklisting.
- getBlackListReport() - Method in class org.apache.hadoop.mapred.ClusterStatus.BlackListInfo
-
Gets a descriptive report about why the tasktracker was blacklisted.
- getBlacklistReport() - Method in class org.apache.hadoop.mapreduce.TaskTrackerInfo
-
Gets a descriptive report about why the tasktracker was blacklisted.
- getBlockIndex(BlockLocation[], long) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- getBlockIndex(BlockLocation[], long) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- getBoundingValsQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- getBundle(String) - Static method in class org.apache.hadoop.mapreduce.util.ResourceBundles
-
Get a resource bundle
- getCacheArchives(Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
- getCacheArchives() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get cache archives set in the Configuration
- getCacheArchives() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCacheArchives() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCacheArchives() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get cache archives set in the Configuration
- getCacheFiles(Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
- getCacheFiles() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get cache files set in the Configuration
- getCacheFiles() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCacheFiles() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCacheFiles() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get cache files set in the Configuration
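A minimal sketch of how cache files registered at submission time become visible to tasks; the HDFS path is an illustrative assumption.

    import java.net.URI;

    // At job-submission time (job is an org.apache.hadoop.mapreduce.Job):
    job.addCacheFile(new URI("/apps/lookup.txt"));

    // Inside Mapper.setup(Context context): list what was cached.
    URI[] cached = context.getCacheFiles();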
- getChainElementConf(Configuration, String) - Static method in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
Creates a Configuration for the Map or Reduce in the chain.
- getChecksum() - Method in class org.apache.hadoop.mapred.IFileInputStream
-
- getChildQueues(String) - Method in class org.apache.hadoop.mapred.JobClient
-
Returns an array of queue information objects about immediate children
of queue queueName.
- getChildQueues(String) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Returns immediate children of queueName.
- getChildQueues(String) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Returns immediate children of queueName.
- getChildren() - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.Event
-
- getClassSchema() - Static method in enum org.apache.hadoop.mapreduce.jobhistory.EventType
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
- getClassSchema() - Static method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated
-
- getCleanupFinished() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of cleanup tasks that finished
- getCleanupProgress() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getCleanupStarted() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of cleanup tasks started
- getCleanupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the cleanup tasks of a job.
- getClockSplits() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'clockSplits' field
- getClockSplits() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'clockSplits' field.
- getClockSplits() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
- getClockSplits() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'clockSplits' field
- getClockSplits() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'clockSplits' field.
- getClockSplits() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
- getClockSplits() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'clockSplits' field
- getClockSplits() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'clockSplits' field.
- getClockSplits() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
- getCluster() - Method in class org.apache.hadoop.mapreduce.Job
-
- getClusterHandle() - Method in class org.apache.hadoop.mapred.JobClient
-
Get a handle to the Cluster
- getClusterMetrics() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Get the current status of the cluster
- getClusterStatus() - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the Map-Reduce cluster.
- getClusterStatus(boolean) - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the Map-Reduce cluster.
- getClusterStatus() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get current cluster status.
- getCodec() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getCollector(String, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Gets the output collector for a named output.
- getCollector(String, String, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Gets the output collector for a multi named output.
- getCombineCollector() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getCombinerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user-defined combiner class used to combine map-outputs
before being sent to the reducers.
- getCombinerClass() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getCombinerClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the combiner class for the job.
- getCombinerClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCombinerClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCombinerClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the combiner class for the job.
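A minimal sketch of registering a combiner with the new API; TokenizerMapper and IntSumReducer are illustrative user classes, and reusing the reducer as the combiner is a common pattern rather than a requirement.

    import org.apache.hadoop.mapreduce.Job;

    Job job = Job.getInstance(conf, "wordcount");  // conf: existing Configuration
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);     // reducer reused as combiner
    job.setReducerClass(IntSumReducer.class);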
- getCombinerKeyGroupingComparator() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user defined WritableComparable comparator for grouping keys of inputs to the combiner.
- getCombinerKeyGroupingComparator() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the user defined RawComparator comparator for grouping keys of inputs to the combiner.
- getCombinerKeyGroupingComparator() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCombinerKeyGroupingComparator() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCombinerKeyGroupingComparator() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the user defined RawComparator comparator for grouping keys of inputs to the combiner.
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMax
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMin
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueSum
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMax
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMin
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UniqValueCount
-
- getCombinerOutput() - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregator
-
- getCombinerOutput() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueHistogram
-
- getCommittedTaskPath(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a committed task is stored until
the entire job is committed.
- getCommittedTaskPath(TaskAttemptContext, Path) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
- getCommittedTaskPath(int, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a committed task is stored until the
entire job is committed for a specific application attempt.
- getCommittedTaskPath(int, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.PartialFileOutputCommitter
-
- getComparator() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Return comparator defining the ordering for RecordReaders in this
composite.
- getComparator() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Return comparator defining the ordering for RecordReaders in this
composite.
- getCompletionPollInterval(Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
The interval at which waitForCompletion() should check.
- getCompressedLength() - Method in class org.apache.hadoop.mapred.IFile.Writer
-
- getCompressMapOutput() - Method in class org.apache.hadoop.mapred.JobConf
-
Should the outputs of the maps be compressed?
- getCompressOutput(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Is the job output compressed?
- getCompressOutput(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Is the job output compressed?
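A minimal sketch of enabling and checking output compression with the new-API FileOutputFormat; the codec choice is illustrative.

    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // job is an org.apache.hadoop.mapreduce.Job.
    FileOutputFormat.setCompressOutput(job, true);
    FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
    boolean compressed = FileOutputFormat.getCompressOutput(job);  // now true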
- getConditions() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getConf() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- getConf() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
-
- getConf() - Method in class org.apache.hadoop.mapred.MapOutputFile
-
- getConf() - Method in class org.apache.hadoop.mapred.Task
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.FilterBase
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getConf() - Method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
- getConfiguration() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the underlying job configuration
- getConfiguration() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Return the configuration for the job.
- getConfiguration() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getConfiguration() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getConfiguration() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Return the configuration for the job.
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
Returns a connection object to the DB.
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.DBRecordWriter
-
- getConnection() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getContainerId() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Gets the value of the 'containerId' field
- getContainerId() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Gets the value of the 'containerId' field.
- getContainerId() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStartedEvent
-
- getContainerId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.AMInfo
-
- getContainerId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getContainerId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Gets the value of the 'containerId' field
- getContainerId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Gets the value of the 'containerId' field.
- getContainerId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
Get the ContainerId
- getContainerId() - Method in class org.apache.hadoop.mapreduce.v2.LogParams
-
- getCopyPhase() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getCounter() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapred.Counters
-
Returns current value of the specified counter, or 0 if the counter
does not exist.
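A minimal sketch of a user-defined counter with the new API; the enum name is an illustrative assumption.

    // Illustrative user-defined counter enum.
    enum MyCounters { BAD_RECORDS }

    // Inside map() or reduce() (context is the task context):
    context.getCounter(MyCounters.BAD_RECORDS).increment(1);

    // In the driver, after the job completes:
    long bad = job.getCounters().findCounter(MyCounters.BAD_RECORDS).getValue();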
- getCounter(String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- getCounter(int, String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- getCounter(Counters, String, String) - Method in class org.apache.hadoop.mapred.JobClient
-
- getCounter(Enum<?>) - Method in interface org.apache.hadoop.mapred.Reporter
-
- getCounter(String, String) - Method in interface org.apache.hadoop.mapred.Reporter
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCounter(Enum) - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapreduce.StatusReporter
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapreduce.StatusReporter
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.DummyReporter
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.DummyReporter
-
- getCounter(Enum<?>) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
- getCounter(String, String) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
- getCounter(Enum<?>) - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Get the Counter for the given counterName.
- getCounter(String, String) - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Get the Counter for the given groupName and counterName.
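A brief, hedged sketch of how these accessors are typically used from a task: the Context handed to a Mapper implements TaskAttemptContext, so both the Enum and the group/name overloads of getCounter() are available, and the returned Counter can be incremented. The group and counter names ("Records", "Parsed") and the ParsingMapper class are illustrative, not part of the API above.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Minimal sketch: incrementing counters from within a Mapper.
    public class ParsingMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
      enum ParseCounter { PARSED }  // illustrative enum for the Enum<?> overload

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        context.getCounter(ParseCounter.PARSED).increment(1);   // Enum form
        context.getCounter("Records", "Parsed").increment(1);   // group/name form
        context.write(new Text(value), new IntWritable(1));
      }
    }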
- getCounter(Counters, String, String) - Method in class org.apache.hadoop.mapreduce.tools.CLI
-
- getCounterForName(String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
Get the counter for the given name and create it if it doesn't exist.
- getCounterGroupName(String, String) - Static method in class org.apache.hadoop.mapreduce.util.ResourceBundles
-
Get the counter group display name
- getCounterName(String, String, String) - Static method in class org.apache.hadoop.mapreduce.util.ResourceBundles
-
Get the counter display name
- getCounterNameMax() - Static method in class org.apache.hadoop.mapreduce.counters.Limits
-
- getCounters() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Gets the counters for this job.
- getCounters() - Method in class org.apache.hadoop.mapred.TaskReport
-
- getCounters() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get task's counters.
- getCounters() - Method in class org.apache.hadoop.mapreduce.Job
-
Gets the counters for this job.
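A hedged driver-side counterpart: after the job finishes, Job.getCounters() returns the aggregated Counters, which can be queried by group and counter name. The job setup is elided and the "Records"/"Parsed" names mirror the illustrative mapper sketch above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Counters;
    import org.apache.hadoop.mapreduce.Job;

    // Minimal sketch, assuming the job is fully configured where indicated.
    public class CounterReport {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "counter-report");
        // ... set mapper/reducer classes and input/output paths ...
        job.waitForCompletion(true);
        Counters counters = job.getCounters();
        long parsed = counters.findCounter("Records", "Parsed").getValue();
        System.out.println("Parsed records: " + parsed);
      }
    }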
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskInfo
-
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'counters' field
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'counters' field.
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'counters' field
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'counters' field.
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Gets the value of the 'counters' field
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Gets the value of the 'counters' field.
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'counters' field
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'counters' field.
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Gets the value of the 'counters' field
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Gets the value of the 'counters' field.
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent
-
Get task counters
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Gets the value of the 'counters' field
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Gets the value of the 'counters' field.
- getCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinishedEvent
-
Get task counters
- getCountersEnabled(JobConf) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns whether the counters for the named outputs are enabled.
- getCountersEnabled(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
-
Returns whether the counters for the named outputs are enabled.
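As a hedged example for the new-API variant above, per-named-output counters are switched on at job-setup time and the flag can be read back from the JobContext; the class and job names are illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

    // Minimal sketch: enabling and then querying per-named-output counters.
    public class NamedOutputCounters {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "named-outputs");
        MultipleOutputs.setCountersEnabled(job, true);
        boolean countersOn = MultipleOutputs.getCountersEnabled(job); // true after the call above
        System.out.println("Named-output counters enabled: " + countersOn);
      }
    }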
- getCountersMax() - Static method in class org.apache.hadoop.mapreduce.counters.Limits
-
- getCountQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Returns the query for getting the total number of rows; subclasses can
override this for custom behaviour.
- getCounts() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup.Builder
-
Gets the value of the 'counts' field
- getCounts() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup
-
Gets the value of the 'counts' field.
- getCpuUsages() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'cpuUsages' field
- getCpuUsages() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'cpuUsages' field.
- getCpuUsages() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
- getCpuUsages() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'cpuUsages' field
- getCpuUsages() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'cpuUsages' field.
- getCpuUsages() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
- getCpuUsages() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'cpuUsages' field
- getCpuUsages() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'cpuUsages' field.
- getCpuUsages() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
- getCredentials() - Method in class org.apache.hadoop.mapred.JobConf
-
Get credentials for the job.
- getCredentials() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get credentials for the job.
- getCredentials() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCredentials() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCredentials() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
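A hedged sketch of the getCredentials() accessors above: the driver obtains the job's Credentials and adds secrets or tokens to them before submission. The alias and secret bytes are illustrative assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.security.Credentials;

    // Minimal sketch: attaching a secret to the job's credentials.
    public class CredentialsDemo {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "credentials-demo");
        Credentials creds = job.getCredentials();             // same object tasks see via JobContext
        creds.addSecretKey(new Text("my.secret"), "s3cr3t".getBytes("UTF-8"));
      }
    }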
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Get the current key
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.FixedLengthRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Get current key
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
Get the current key
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.task.MapContextImpl
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.task.ReduceContextImpl
-
- getCurrentKey() - Method in class org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl
-
Get the current key.
- getCurrentKey() - Method in interface org.apache.hadoop.mapreduce.TaskInputOutputContext
-
Get the current key.
- getCurrentStatus() - Method in class org.apache.hadoop.mapreduce.TaskReport
-
The current status
- getCurrentValue(V) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Get the current value.
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.FixedLengthRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Get the current value.
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Get current value
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
Get the current value.
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.task.MapContextImpl
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.task.ReduceContextImpl
-
- getCurrentValue() - Method in class org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl
-
Get the current value.
- getCurrentValue() - Method in interface org.apache.hadoop.mapreduce.TaskInputOutputContext
-
Get the current value.
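A hedged sketch of the pattern these accessors support: the framework (or a test harness) advances a RecordReader with nextKeyValue() and reads the current pair through getCurrentKey()/getCurrentValue(). The helper class, method name, and key/value types below are illustrative; any RecordReader follows the same contract.

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;

    // Minimal sketch, assuming "reader", "split" and "context" are supplied by the caller.
    public class RecordReaderDrain {
      public static long drain(RecordReader<LongWritable, Text> reader,
                               InputSplit split,
                               TaskAttemptContext context) throws Exception {
        reader.initialize(split, context);
        long records = 0;
        while (reader.nextKeyValue()) {                // advance to the next record
          LongWritable key = reader.getCurrentKey();   // current key after the advance
          Text value = reader.getCurrentValue();       // current value after the advance
          records++;
        }
        reader.close();
        return records;
      }
    }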
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStartedEvent
-
- getDatum() - Method in interface org.apache.hadoop.mapreduce.jobhistory.HistoryEvent
-
Return the Avro datum wrapped by this.
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChangeEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInitedEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChangeEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChangeEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChangedEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.NormalizedResourceEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinishedEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinishedEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStartedEvent
-
- getDatum() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdatedEvent
-
- getDBConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getDBConf() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getDBProductName() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
- getDecommissionedTaskTrackerCount() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of decommissioned trackers in the cluster.
- getDefaultMaps() - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the max available Maps in the cluster.
- getDefaultReduces() - Method in class org.apache.hadoop.mapred.JobClient
-
Get status information about the max available Reduces in the cluster.
- getDefaultWorkFile(TaskAttemptContext, String) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the default path and filename for the output format.
- getDelegate() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Obtain an iterator over the child RRs apropos of the value type
ultimately emitted from this join.
- getDelegate() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader
-
Return an iterator wrapping the JoinCollector.
- getDelegate() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
Return an iterator returning a single value from the tuple.
- getDelegate() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Obtain an iterator over the child RRs apropos of the value type
ultimately emitted from this join.
- getDelegate() - Method in class org.apache.hadoop.mapreduce.lib.join.JoinRecordReader
-
Return an iterator wrapping the JoinCollector.
- getDelegate() - Method in class org.apache.hadoop.mapreduce.lib.join.MultiFilterRecordReader
-
Return an iterator returning a single value from the tuple.
- getDelegationToken(Text) - Method in class org.apache.hadoop.mapred.JobClient
-
Get a delegation token for the user from the JobTracker.
- getDelegationToken(Text) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get a delegation token for the user from the JobTracker.
- getDelegationToken(Text) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Get a new delegation token.
- getDelegationToken(Credentials, String) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
Deprecated.
Use Credentials.getToken(org.apache.hadoop.io.Text)
instead; this method is included for compatibility with Hadoop-1
- getDelegationTokens(Configuration, Credentials) - Static method in class org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager
-
For each archive or cache file, get the corresponding delegation token
- getDependentJobs() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getDependingJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getDescription() - Method in class org.apache.hadoop.mapreduce.task.reduce.MapOutput
-
- getDiagnosticInfo() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getDiagnostics() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Gets the value of the 'diagnostics' field
- getDiagnostics() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
Gets the value of the 'diagnostics' field.
- getDiagnostics() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent
-
Retrieves diagnostics information preserved in the history file
- getDiagnostics() - Method in class org.apache.hadoop.mapreduce.TaskReport
-
A list of error messages.
- getDisplayName() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- getDisplayName() - Method in class org.apache.hadoop.mapred.Counters.Group
-
- getDisplayName() - Method in interface org.apache.hadoop.mapreduce.Counter
-
Get the display name of the counter.
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounterGroup
-
- getDisplayName() - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Get the display name of the group.
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup
-
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup
-
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter.Builder
-
Gets the value of the 'displayName' field
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter
-
Gets the value of the 'displayName' field.
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup.Builder
-
Gets the value of the 'displayName' field
- getDisplayName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup
-
Gets the value of the 'displayName' field.
- getEnd() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit
-
- getEntry(MapFile.Reader[], Partitioner<K, V>, K, V) - Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
-
Get an entry from output generated by this class.
- getEntry(MapFile.Reader[], Partitioner<K, V>, K, V) - Static method in class org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat
-
Get an entry from output generated by this class.
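A hedged sketch of how the new-API getEntry() is used together with MapFileOutputFormat.getReaders() to do a random lookup against a job's map-file output; the output directory, lookup key, and value types are illustrative assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;
    import org.apache.hadoop.mapreduce.Partitioner;
    import org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

    // Minimal sketch: look up one key in output previously written by MapFileOutputFormat.
    public class MapFileLookup {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        MapFile.Reader[] readers =
            MapFileOutputFormat.getReaders(new Path("/output/dir"), conf); // one reader per partition
        Partitioner<Text, IntWritable> partitioner = new HashPartitioner<Text, IntWritable>();
        IntWritable value = new IntWritable();
        Writable found =
            MapFileOutputFormat.getEntry(readers, partitioner, new Text("someKey"), value);
        System.out.println(found == null ? "not found" : "value = " + value);
      }
    }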
- getError() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getError() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskInfo
-
- getError() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'error' field
- getError() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'error' field.
- getError() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
Get the error string
- getError() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Gets the value of the 'error' field
- getError() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Gets the value of the 'error' field.
- getError() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent
-
Get the error string
- getErrorInfo() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getEvent() - Method in class org.apache.hadoop.mapreduce.jobhistory.Event.Builder
-
Gets the value of the 'event' field
- getEvent() - Method in class org.apache.hadoop.mapreduce.jobhistory.Event
-
Gets the value of the 'event' field.
- getEventId() - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
Returns event Id.
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStartedEvent
-
Get the event type
- getEventType() - Method in interface org.apache.hadoop.mapreduce.jobhistory.HistoryEvent
-
Return this event's type.
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent
-
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChangeEvent
-
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInitedEvent
-
Get the event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChangeEvent
-
Get the event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChangeEvent
-
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChangedEvent
-
Get the event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent
-
Get the event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
Get the event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.NormalizedResourceEvent
-
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
Get the event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinishedEvent
-
Get the event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
Get the event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
Get the event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent
-
Get the event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinishedEvent
-
Get event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStartedEvent
-
Get the event type
- getEventType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdatedEvent
-
Get the event type
- getExecutable(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Get the URI of the application's executable.
- getFailedAttemptID() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent
-
Get the attempt id due to which the task failed
- getFailedDueToAttempt() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Gets the value of the 'failedDueToAttempt' field
- getFailedDueToAttempt() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Gets the value of the 'failedDueToAttempt' field.
- getFailedDueToAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskInfo
-
- getFailedJobList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getFailedJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getFailedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Gets the value of the 'failedMaps' field
- getFailedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Gets the value of the 'failedMaps' field.
- getFailedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent
-
Get the number of failed maps for the job
- getFailedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getFailedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Gets the value of the 'failedReduces' field
- getFailedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Gets the value of the 'failedReduces' field.
- getFailedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent
-
Get the number of failed reducers for the job
- getFailedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getFailedShuffleCounter() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getFailureInfo() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get failure info for the job.
- getFailureInfo() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Gets any available info on the reason of failure of the job.
- getFetchFailedMaps() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get the list of maps from which output-fetches failed.
- getFieldNames() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getFileBlockLocations(FileSystem, FileStatus) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
- getFileClassPaths(Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
- getFileClassPaths() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the file entries in classpath as an array of Path
- getFileClassPaths() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getFileClassPaths() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getFileClassPaths() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the file entries in classpath as an array of Path
- getFileStatus(Configuration, URI) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- getFileStatuses() - Method in class org.apache.hadoop.mapred.LocatedFileStatusFetcher
-
Start executing and return FileStatuses based on the parameters specified
- getFileSystem() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get the file system where job-specific files are stored
- getFileSystemCounter() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- getFileSystemCounterNames(String) - Static method in class org.apache.hadoop.mapred.Task
-
Counters to measure the usage of the different file systems.
- getFilesystemName() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
A MapReduce system always operates on a single filesystem.
- getFileTimestamps(Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
- getFileTimestamps() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the timestamps of the files.
- getFileTimestamps() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getFileTimestamps() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getFileTimestamps() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the timestamps of the files.
- getFileVisibilities(Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
Deprecated.
Get the booleans on whether the files are public or not.
- getFilter() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.FilteredJob
-
Get the current filter
- getFilteredMap() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.FilteredJob
-
Get the map of the filtered tasks
- getFinishedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Gets the value of the 'finishedMaps' field
- getFinishedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Gets the value of the 'finishedMaps' field.
- getFinishedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent
-
Get the number of finished maps for the job
- getFinishedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getFinishedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Gets the value of the 'finishedMaps' field
- getFinishedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
Gets the value of the 'finishedMaps' field.
- getFinishedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent
-
Get the number of finished maps
- getFinishedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Gets the value of the 'finishedReduces' field
- getFinishedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Gets the value of the 'finishedReduces' field.
- getFinishedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent
-
Get the number of finished reducers for the job
- getFinishedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getFinishedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Gets the value of the 'finishedReduces' field
- getFinishedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
Gets the value of the 'finishedReduces' field.
- getFinishedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent
-
Get the number of finished reduces
- getFinishTime() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get task finish time.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.Job
-
Get finish time of the job.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Gets the value of the 'finishTime' field
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Gets the value of the 'finishTime' field.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent
-
Get the job finish time
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskInfo
-
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Gets the value of the 'finishTime' field
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
Gets the value of the 'finishTime' field.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent
-
Get the job finish time
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'finishTime' field
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'finishTime' field.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
Get the attempt finish time
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'finishTime' field
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'finishTime' field.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
Get the finish time of the attempt
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Gets the value of the 'finishTime' field
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Gets the value of the 'finishTime' field.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinishedEvent
-
Get the attempt finish time
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'finishTime' field
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'finishTime' field.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
Get the finish time
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Gets the value of the 'finishTime' field
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Gets the value of the 'finishTime' field.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent
-
Get the finish time of the attempt
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Gets the value of the 'finishTime' field
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Gets the value of the 'finishTime' field.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinishedEvent
-
Get the task finish time
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated.Builder
-
Gets the value of the 'finishTime' field
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated
-
Gets the value of the 'finishTime' field.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdatedEvent
-
Get the task finish time
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Get the finish time of the job.
- getFinishTime() - Method in class org.apache.hadoop.mapreduce.TaskReport
-
Get finish time of task.
- getForcedJobStateOnShutDown() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStartedEvent
-
- getFormatMinSplitSize() - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get the lower bound on split size imposed by the format.
- getFormatMinSplitSize() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
-
- getFrameworkGroupId(String) - Static method in class org.apache.hadoop.mapreduce.counters.CounterGroupFactory
-
Get the id of a framework group
- getFs() - Method in class org.apache.hadoop.mapred.JobClient
-
Get a filesystem handle.
- getFsStatistics(Path, Configuration) - Static method in class org.apache.hadoop.mapred.Task
-
Gets a handle to the Statistics instance based on the scheme associated
with path.
- getGrayListedTaskTrackerCount() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of graylisted trackers in the cluster.
- getGraylistedTrackerNames() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Deprecated.
- getGraylistedTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Deprecated.
- getGroup(String) - Method in class org.apache.hadoop.mapred.Counters
-
- getGroup(String) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Returns the named counter group, or an empty group if there is none
with the specified name.
- getGroupingComparator() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the user defined RawComparator comparator for grouping keys of inputs to the reduce.
- getGroupingComparator() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getGroupingComparator() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getGroupingComparator() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the user defined RawComparator comparator for grouping keys of inputs to the reduce.
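A hedged illustration of where this accessor's value comes from: the driver registers a RawComparator with Job.setGroupingComparatorClass(), and getGroupingComparator() later hands that comparator to the framework when reduce inputs are grouped. The comparator and class names below are illustrative, not classes from the index above.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.RawComparator;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.WritableComparator;
    import org.apache.hadoop.mapreduce.Job;

    // Minimal sketch: registering a grouping comparator and reading it back.
    public class GroupingSetup {
      // Illustrative comparator; the default WritableComparator compare groups by the full key.
      public static class FullKeyGroupingComparator extends WritableComparator {
        public FullKeyGroupingComparator() { super(Text.class, true); }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "grouping-demo");
        job.setGroupingComparatorClass(FullKeyGroupingComparator.class);
        RawComparator<?> grouping = job.getGroupingComparator(); // instance of the class set above
        System.out.println(grouping.getClass().getName());
      }
    }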
- getGroupName() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- getGroupNameMax() - Static method in class org.apache.hadoop.mapreduce.counters.Limits
-
- getGroupNames() - Method in class org.apache.hadoop.mapred.Counters
-
- getGroupNames() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Returns the names of all counter classes.
- getGroups() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters.Builder
-
Gets the value of the 'groups' field
- getGroups() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters
-
Gets the value of the 'groups' field.
- getGroupsMax() - Static method in class org.apache.hadoop.mapreduce.counters.Limits
-
- getHistoryFile() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getHistoryUrl() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the url where history file is archived.
- getHistoryUrl() - Method in class org.apache.hadoop.mapreduce.Job
-
- getHost() - Method in class org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl
-
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'hostname' field
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'hostname' field.
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
Get the host name
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'hostname' field
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'hostname' field.
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
Get the name of the host where the attempt ran
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Gets the value of the 'hostname' field
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Gets the value of the 'hostname' field.
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinishedEvent
-
Get the host where the attempt executed
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'hostname' field
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'hostname' field.
- getHostname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
Get the name of the host where the attempt executed
- getHostName() - Method in class org.apache.hadoop.mapreduce.task.reduce.MapHost
-
- getHttpPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getHttpPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Gets the value of the 'httpPort' field
- getHttpPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Gets the value of the 'httpPort' field.
- getHttpPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
Get the HTTP port
- getID() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the job identifier.
- getId() - Method in class org.apache.hadoop.mapreduce.ID
-
Returns the int which represents the identifier.
- getIncludeAllCounters() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getIndex(int) - Method in class org.apache.hadoop.mapred.SpillRecord
-
Get spill offsets for given partition.
- getIndex(Configuration, String) - Static method in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
- getInputBoundingQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputClass() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputConditions() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputCountQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputDataLength() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- getInputDataLength() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getInputDirRecursive(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- getInputFieldNames() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputFile(int) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Return a local reduce input file created earlier
- getInputFile(int) - Method in class org.apache.hadoop.mapred.MROutputFiles
-
Return a local reduce input file created earlier
- getInputFileBasedOutputFileName(JobConf, String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Generate the outfile name based on a given name and the input file name.
- getInputFileForWrite(TaskID, long) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Create a local reduce input file name.
- getInputFileForWrite(TaskID, long) - Method in class org.apache.hadoop.mapred.MROutputFiles
-
Create a local reduce input file name.
- getInputFormat() - Method in class org.apache.hadoop.mapred.JobConf
-
- getInputFormatClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getInputFormatClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getInputFormatClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getInputFormatClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- getInputOrderBy() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputPathFilter(JobConf) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Get a PathFilter instance of the filter set for the input paths.
- getInputPathFilter(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get a PathFilter instance of the filter set for the input paths.
- getInputPaths(JobConf) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Get the list of input Paths for the map-reduce job.
- getInputPaths(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get the list of input Paths for the map-reduce job.
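A hedged sketch pairing the new-API getter above with its setter counterpart; the directory names are illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    // Minimal sketch: adding input paths and reading back the configured list.
    public class InputPathsDemo {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "input-paths");
        FileInputFormat.addInputPath(job, new Path("/data/in1"));
        FileInputFormat.addInputPath(job, new Path("/data/in2"));
        for (Path p : FileInputFormat.getInputPaths(job)) {   // the two paths added above
          System.out.println(p);
        }
      }
    }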
- getInputQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInputSplit() - Method in interface org.apache.hadoop.mapred.Reporter
-
- getInputSplit() - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- getInputSplit() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
Get the input split for this map.
- getInputSplit() - Method in interface org.apache.hadoop.mapreduce.MapContext
-
Get the input split for this map.
- getInputSplit() - Method in class org.apache.hadoop.mapreduce.task.MapContextImpl
-
Get the input split for this map.
- getInputTableName() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getInstance() - Static method in class org.apache.hadoop.mapreduce.Job
-
- getInstance(Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
Creates a new Job with no particular Cluster and a given Configuration.
- getInstance(Configuration, String) - Static method in class org.apache.hadoop.mapreduce.Job
-
Creates a new Job with no particular Cluster and a given jobName.
- getInstance(JobStatus, Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
- getInstance(Cluster) - Static method in class org.apache.hadoop.mapreduce.Job
-
- getInstance(Cluster, Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
- getInstance(Cluster, JobStatus, Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
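A hedged example of the most commonly used of these factory overloads; the class and job names are illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    // Minimal sketch: creating a Job from a Configuration, with and without a job name.
    public class JobFactoryDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job byConf = Job.getInstance(conf);               // Configuration-only overload
        Job named = Job.getInstance(conf, "word count");  // Configuration + jobName overload
        System.out.println(named.getJobName());           // prints "word count"
      }
    }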
- getIsCleanup() - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
Get whether task is cleanup attempt or not.
- getIsJavaMapper(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Check whether the job is using a Java Mapper.
- getIsJavaRecordReader(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Check whether the job is using a Java RecordReader
- getIsJavaRecordWriter(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Will the reduce use a Java RecordWriter?
- getIsJavaReducer(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Check whether the job is using a Java Reducer.
- getIsMap() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getJar() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user jar for the map-reduce job.
- getJar() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the pathname of the job's jar.
- getJar() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getJar() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getJar() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the pathname of the job's jar.
- getJarUnpackPattern() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the pattern for jar contents to unpack on the tasktracker
- getJob(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
- getJob(String) - Method in class org.apache.hadoop.mapred.JobClient
-
- getJob() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
-
- getJob(JobID) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get job corresponding to jobid.
- getJob() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobACLs() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getJobAcls() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the acls configured for the job
- getJobACLs() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Get the job acls.
- getJobAttemptPath(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a given job attempt will be placed.
- getJobAttemptPath(JobContext, Path) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a given job attempt will be placed.
- getJobAttemptPath(int) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a given job attempt will be placed.
- getJobClient() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getJobConf() - Method in interface org.apache.hadoop.mapred.JobContext
-
Get the job Configuration
- getJobConf() - Method in class org.apache.hadoop.mapred.JobContextImpl
-
Get the job Configuration
- getJobConf() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getJobConf() - Method in class org.apache.hadoop.mapred.MapOutputCollector.Context
-
- getJobConf() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getJobConf() - Method in interface org.apache.hadoop.mapred.TaskAttemptContext
-
- getJobConf() - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
- getJobConfPath() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getJobConfPath() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Gets the value of the 'jobConfPath' field
- getJobConfPath() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Gets the value of the 'jobConfPath' field.
- getJobConfPath() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the Path for the Job Configuration file
- getJobConfPath(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
Get the job conf path.
- getJobCounters(JobID) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Grab the current job counters
- getJobDir(JobID) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Get the user log directory for the job jobid.
- getJobDistCacheArchives(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
Get the job distributed cache archives path.
- getJobDistCacheFiles(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
Get the job distributed cache files path.
- getJobDistCacheLibjars(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
Get the job distributed cache libjars path.
- getJobEndNotificationURI() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the URI to be invoked in order to send a notification after the job
has completed (success/failure).
- getJobFile() - Method in class org.apache.hadoop.mapred.JobProfile
-
Get the configuration file for the job.
- getJobFile() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the path of the submitted job configuration.
- getJobFile() - Method in class org.apache.hadoop.mapred.Task
-
- getJobFile() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the path of the submitted job configuration.
- getJobFile() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Get the configuration file for the job.
- getJobHistoryDir() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Gets the directory location of the completed job history files.
- getJobHistoryUrl(JobID) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get the job history file path for a given job id.
- getJobID() - Method in class org.apache.hadoop.mapred.JobProfile
-
Get the job id.
- getJobId() - Method in class org.apache.hadoop.mapred.JobProfile
-
Deprecated.
use getJobID() instead
- getJobId() - Method in class org.apache.hadoop.mapred.JobStatus
-
Deprecated.
use getJobID instead
- getJobID() - Method in class org.apache.hadoop.mapred.JobStatus
-
- getJobID() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Deprecated.
This method is deprecated and will be removed. Applications should
rather use RunningJob.getID().
- getJobID() - Method in class org.apache.hadoop.mapred.Task
-
Get the job ID for this task.
- getJobID() - Method in class org.apache.hadoop.mapred.TaskAttemptID
-
- getJobID() - Method in class org.apache.hadoop.mapred.TaskID
-
- getJobID() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the unique ID for the job.
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Gets the value of the 'jobid' field
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Gets the value of the 'jobid' field.
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent
-
Get the Job ID
- getJobId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange.Builder
-
Gets the value of the 'jobid' field
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange
-
Gets the value of the 'jobid' field.
- getJobId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChangeEvent
-
Get the Job ID
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Gets the value of the 'jobid' field
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
Gets the value of the 'jobid' field.
- getJobId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInitedEvent
-
Get the job ID
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange.Builder
-
Gets the value of the 'jobid' field
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange
-
Gets the value of the 'jobid' field.
- getJobId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChangeEvent
-
Get the Job ID
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange.Builder
-
Gets the value of the 'jobid' field
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange
-
Gets the value of the 'jobid' field.
- getJobId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChangeEvent
-
Get the Job ID
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged.Builder
-
Gets the value of the 'jobid' field
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged
-
Gets the value of the 'jobid' field.
- getJobId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChangedEvent
-
Get the Job Id
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Gets the value of the 'jobid' field
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Gets the value of the 'jobid' field.
- getJobId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the Job Id
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Gets the value of the 'jobid' field
- getJobid() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
Gets the value of the 'jobid' field.
- getJobId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent
-
Get the Job ID
- getJobID() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getJobID() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobID() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getJobID() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getJobId() - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier
-
Get the jobid
- getJobID() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the unique ID for the job.
- getJobID() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Returns the JobID object that this task attempt belongs to
- getJobID() - Method in class org.apache.hadoop.mapreduce.TaskID
-
Returns the JobID object that this tip belongs to
- getJobIDsPattern(String, Integer) - Static method in class org.apache.hadoop.mapred.JobID
-
Deprecated.
- getJobJar(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
Get the job jar path.
- getJobLocalDir() - Method in class org.apache.hadoop.mapred.JobConf
-
Get job-specific shared directory for use as scratch space
- getJobName() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user-specified job name.
- getJobName() - Method in class org.apache.hadoop.mapred.JobProfile
-
Get the user-specified job name.
- getJobName() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the name of the job.
- getJobName() - Method in class org.apache.hadoop.mapreduce.Job
-
The user-specified job name.
- getJobName() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the user-specified job name.
- getJobname() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getJobName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Gets the value of the 'jobName' field
- getJobName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Gets the value of the 'jobName' field.
- getJobName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the Job name
- getJobName() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Get the user-specified job name.
- getJobName() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobName() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getJobName() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getJobName() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the user-specified job name.
- getJobPriority() - Method in class org.apache.hadoop.mapred.JobConf
-
- getJobPriority() - Method in class org.apache.hadoop.mapred.JobStatus
-
Return the priority of the job
- getJobQueueName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getJobQueueName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange.Builder
-
Gets the value of the 'jobQueueName' field
- getJobQueueName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange
-
Gets the value of the 'jobQueueName' field.
- getJobQueueName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChangeEvent
-
Get the new Job queue name
- getJobQueueName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Gets the value of the 'jobQueueName' field
- getJobQueueName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Gets the value of the 'jobQueueName' field.
- getJobQueueName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the Job queue name
- getJobRunState(int) - Static method in class org.apache.hadoop.mapred.JobStatus
-
Helper method to get human-readable state of the job.
- getJobSetupCleanupNeeded() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get whether job-setup and job-cleanup is needed for the job
- getJobSetupCleanupNeeded() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getJobSetupCleanupNeeded() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getJobSetupCleanupNeeded() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get whether job-setup and job-cleanup is needed for the job
- getJobsFromQueue(String) - Method in class org.apache.hadoop.mapred.JobClient
-
Gets all the jobs which were added to a particular Job Queue
- getJobSplitFile(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
- getJobSplitMetaFile(Path) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
- getJobState() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Returns the current state of the Job.
- getJobState() - Method in class org.apache.hadoop.mapreduce.Job
-
Returns the current state of the Job.
- getJobState() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getJobStatus() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Returns a snapshot of the current status,
JobStatus
, of the Job.
- getJobStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getJobStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Gets the value of the 'jobStatus' field
- getJobStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
Gets the value of the 'jobStatus' field.
- getJobStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged.Builder
-
Gets the value of the 'jobStatus' field
- getJobStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged
-
Gets the value of the 'jobStatus' field.
- getJobStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Gets the value of the 'jobStatus' field
- getJobStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
Gets the value of the 'jobStatus' field.
- getJobStatus(JobID) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Grab a handle to a job that is already known to the JobTracker.
- getJobStatuses() - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Get the jobs submitted to queue
- getJobSubmitter(FileSystem, ClientProtocol) - Method in class org.apache.hadoop.mapreduce.Job
-
Only for mocking via unit tests.
- getJobToken(Credentials) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
- getJobTokenSecret() - Method in class org.apache.hadoop.mapred.Task
-
Get the job token secret
- getJobTrackerState() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Deprecated.
- getJobTrackerStatus() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the JobTracker's status.
- getJobTrackerStatus() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get the JobTracker's status.
- getJobTrackerStatus() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Get the JobTracker's status.
- getJtIdentifier() - Method in class org.apache.hadoop.mapreduce.JobID
-
- getKeepCommandFile(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Does the user want to keep the command file for debugging? If this is
true, pipes will write a copy of the command data to a file in the
task directory named "downlink.data", which may be used to run the C++
program under the debugger.
- getKeepFailedTaskFiles() - Method in class org.apache.hadoop.mapred.JobConf
-
Should the temporary files for failed tasks be kept?
- getKeepTaskFilesPattern() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the regular expression that is matched against the task names
to see if we need to keep the files.
- getKey() - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer.MRResultIterator
-
- getKey() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
-
Gets the current raw key.
- getKey() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- getKey() - Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- getKeyClass() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- getKeyClass() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getKeyClass() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getKeyClassName() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
Retrieve the name of the key class for this SequenceFile.
- getKeyClassName() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
Retrieve the name of the key class for this SequenceFile.
- getKeyFieldComparatorOption() - Method in class org.apache.hadoop.mapred.JobConf
-
- getKeyFieldComparatorOption(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- getKeyFieldPartitionerOption() - Method in class org.apache.hadoop.mapred.JobConf
-
- getKeyFieldPartitionerOption(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getKind() - Method in class org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenIdentifier
-
- getKind() - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier
-
- getKind() - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier.Renewer
-
- getLatestAMInfo() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getLaunchTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getLaunchTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange.Builder
-
Gets the value of the 'launchTime' field
- getLaunchTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange
-
Gets the value of the 'launchTime' field.
- getLaunchTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChangeEvent
-
Get the Job launch time
- getLaunchTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Gets the value of the 'launchTime' field
- getLaunchTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
Gets the value of the 'launchTime' field.
- getLaunchTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInitedEvent
-
Get the launch time
- getLeafQueueNames() - Method in class org.apache.hadoop.mapred.QueueManager
-
Return the set of leaf level queues configured in the system to
which jobs are submitted.
- getLength() - Method in class org.apache.hadoop.mapred.FileSplit
-
The number of bytes in the file to process.
- getLength() - Method in class org.apache.hadoop.mapred.IFile.Reader
-
- getLength() - Method in interface org.apache.hadoop.mapred.InputSplit
-
Get the total number of bytes in the data of the InputSplit
.
- getLength() - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Return the aggregate length of all child InputSplits currently added.
- getLength(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Get the length of ith child InputSplit.
- getLength() - Method in class org.apache.hadoop.mapred.Merger.Segment
-
- getLength() - Method in class org.apache.hadoop.mapreduce.InputSplit
-
Get the size of the split, so that the input splits can be sorted by size.
- getLength() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat.DataDrivenDBInputSplit
-
- getLength() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit
-
- getLength() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
- getLength(int) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the length of the ith Path
- getLength() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
The number of bytes in the file to process.
- getLength() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
Return the aggregate length of all child InputSplits currently added.
- getLength(int) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
Get the length of ith child InputSplit.
- getLength() - Method in class org.apache.hadoop.mapreduce.task.reduce.InMemoryReader
-
- getLengths() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns an array containing the lengths of the files in the split
- getLocalCacheArchives(Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
- getLocalCacheArchives() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Deprecated.
the array returned only includes the items that were
downloaded. There is no way to map this to what is returned by
JobContext.getCacheArchives()
.
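As a hedged sketch of the migration the note above implies (new-style org.apache.hadoop.mapreduce API; the mapper class name and key/value types are placeholders), a task can read the registered cache-archive URIs from its context instead of the deprecated localized-path accessor:

    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Placeholder mapper that only inspects the distributed-cache archives.
    public class CacheAwareMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      @Override
      protected void setup(Context context) throws IOException, InterruptedException {
        // URIs registered for the job (e.g. via Job.addCacheArchive), not just the
        // subset that happened to be localized on this node.
        URI[] archives = context.getCacheArchives();
        if (archives != null) {
          for (URI archive : archives) {
            System.out.println("cache archive: " + archive);
          }
        }
      }
    }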
- getLocalCacheArchives() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getLocalCacheArchives() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getLocalCacheArchives() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Return the path array of the localized caches
- getLocalCacheFiles(Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
- getLocalCacheFiles() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Deprecated.
the array returned only includes the items that were
downloaded. There is no way to map this to what is returned by
JobContext.getCacheFiles()
.
- getLocalCacheFiles() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getLocalCacheFiles() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getLocalCacheFiles() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Return the path array of the localized files
- getLocalDirAllocator() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getLocalDirs() - Method in class org.apache.hadoop.mapred.JobConf
-
- getLocalFS() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getLocality() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Gets the value of the 'locality' field
- getLocality() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Gets the value of the 'locality' field.
- getLocality() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
Get the locality
- getLocalMapFiles() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getLocalPath(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Constructs a local file name.
- getLocation(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
getLocations from ith InputSplit.
- getLocation() - Method in class org.apache.hadoop.mapred.SplitLocationInfo
-
- getLocation(int) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
getLocations from ith InputSplit.
- getLocationInfo() - Method in class org.apache.hadoop.mapred.FileSplit
-
- getLocationInfo() - Method in interface org.apache.hadoop.mapred.InputSplitWithLocationInfo
-
Gets info about which nodes the input split is stored on and how it is
stored at each location.
- getLocationInfo() - Method in class org.apache.hadoop.mapreduce.InputSplit
-
Gets info about which nodes the input split is stored on and how it is
stored at each location.
- getLocationInfo() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
- getLocations() - Method in class org.apache.hadoop.mapred.FileSplit
-
- getLocations() - Method in interface org.apache.hadoop.mapred.InputSplit
-
Get the list of hostnames where the input split is located.
- getLocations() - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
-
Collect a set of hosts from all child InputSplits.
- getLocations() - Method in class org.apache.hadoop.mapred.MultiFileSplit
-
- getLocations() - Method in class org.apache.hadoop.mapreduce.InputSplit
-
Get the list of nodes by name where the data for the split would be local.
- getLocations() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit
-
Get the list of nodes by name where the data for the split would be local.
- getLocations() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns all the Paths where this input-split resides
- getLocations() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
- getLocations() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputSplit
-
Collect a set of hosts from all child InputSplits.
- getLocations() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- getLocations() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getLogFileParams(JobID, TaskAttemptID) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Gets the location of the log file for a job if no taskAttemptId is
specified, otherwise gets the log location for the taskAttemptId.
- getLogParams(JobID, TaskAttemptID) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get log parameters for the specified jobID or taskAttemptID
- getLowerClause() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat.DataDrivenDBInputSplit
-
- getMapCompletionEvents(JobID, int, int, TaskAttemptID) - Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
-
Called by a reduce task to get the map output locations for finished maps.
- getMapContext(MapContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT>) - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper
-
Get a wrapped Mapper.Context
for custom implementations.
- getMapCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Gets the value of the 'mapCounters' field
- getMapCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Gets the value of the 'mapCounters' field.
- getMapCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent
-
Get the Map counters for the job
- getMapCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getMapDebugScript() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the map task's debug script.
- getMapFinished() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of maps that finished
- getMapFinishTime() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get map phase finish time for the task.
- getMapFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getMapFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'mapFinishTime' field
- getMapFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'mapFinishTime' field.
- getMapFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
Get the map phase finish time
- getMapId() - Method in class org.apache.hadoop.mapreduce.task.reduce.MapOutput
-
- getMapOutputCompressorClass(Class<? extends CompressionCodec>) - Method in class org.apache.hadoop.mapred.JobConf
-
Get the CompressionCodec
for compressing the map outputs.
- getMapOutputFile() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getMapOutputFile() - Method in class org.apache.hadoop.mapred.Task
-
- getMapOutputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the key class for the map output data.
- getMapOutputKeyClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the key class for the map output data.
- getMapOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getMapOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getMapOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the key class for the map output data.
- getMapOutputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the value class for the map output data.
- getMapOutputValueClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the value class for the map output data.
- getMapOutputValueClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getMapOutputValueClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getMapOutputValueClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the value class for the map output data.
- getMapper() - Method in class org.apache.hadoop.mapred.MapRunner
-
- getMapperClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the
Mapper
class for the job.
- getMapperClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the
Mapper
class for the job.
- getMapperClass(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
-
Get the application's mapper class.
- getMapperClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getMapperClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getMapperClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the
Mapper
class for the job.
- getMapperMaxSkipRecords(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the number of acceptable skip records surrounding the bad record PER
bad record in mapper.
- getMapProgress() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getMapredJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getMapredJobId() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getMapRunnerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
- getMapsForHost(MapHost) - Method in class org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl
-
- getMapSlotCapacity() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the total number of map slots in the cluster.
- getMapSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
-
Should speculative execution be used for this job for map tasks?
Defaults to true
.
- getMapStarted() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of maps that were started
- getMapTask() - Method in class org.apache.hadoop.mapred.MapOutputCollector.Context
-
- getMapTaskCompletionEvents() - Method in class org.apache.hadoop.mapred.MapTaskCompletionEventsUpdate
-
- getMapTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the map tasks of a job.
- getMapTaskReports(String) - Method in class org.apache.hadoop.mapred.JobClient
-
- getMapTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of currently running map tasks in the cluster.
- getMapTasks() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.AnalyzedJob
-
Get the map tasks list
- getMasterAddress(Configuration) - Static method in class org.apache.hadoop.mapred.Master
-
- getMasterPrincipal(Configuration) - Static method in class org.apache.hadoop.mapred.Master
-
- getMasterUserName(Configuration) - Static method in class org.apache.hadoop.mapred.Master
-
- getMaxMapAttempts() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the configured number of maximum attempts that will be made to run a
map task, as specified by the mapreduce.map.maxattempts
property.
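A minimal sketch of how the attempt limit documented above is typically configured and read back; the property name is the one cited in the entry, and the value 4 is purely illustrative:

    import org.apache.hadoop.mapred.JobConf;

    public class MaxAttemptsExample {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Allow each map task up to 4 attempts before the job gives up on it.
        conf.setInt("mapreduce.map.maxattempts", 4);
        System.out.println("max map attempts: " + conf.getMaxMapAttempts());
      }
    }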
- getMaxMapAttempts() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the configured number of maximum attempts that will be made to run a
map task, as specified by the mapred.map.max.attempts
property.
- getMaxMapAttempts() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getMaxMapAttempts() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getMaxMapAttempts() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the configured number of maximum attempts that will be made to run a
map task, as specified by the mapred.map.max.attempts
property.
- getMaxMapTaskFailuresPercent() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the maximum percentage of map tasks that can fail without
the job being aborted.
- getMaxMapTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the maximum capacity for running map tasks in the cluster.
- getMaxMemory() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Deprecated.
- getMaxPhysicalMemoryForTask() - Method in class org.apache.hadoop.mapred.JobConf
-
Deprecated.
this variable is deprecated and no longer in use.
- getMaxReduceAttempts() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the configured number of maximum attempts that will be made to run a
reduce task, as specified by the mapreduce.reduce.maxattempts
property.
- getMaxReduceAttempts() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the configured number of maximum attempts that will be made to run a
reduce task, as specified by the mapred.reduce.max.attempts
property.
- getMaxReduceAttempts() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getMaxReduceAttempts() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getMaxReduceAttempts() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the configured number of maximum attempts that will be made to run a
reduce task, as specified by the mapred.reduce.max.attempts
property.
- getMaxReduceTaskFailuresPercent() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the maximum percentage of reduce tasks that can fail without
the job being aborted.
- getMaxReduceTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the maximum capacity for running reduce tasks in the cluster.
- getMaxSplitSize(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get the maximum split size.
- getMaxStringSize() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getMaxTaskFailuresPerTracker() - Method in class org.apache.hadoop.mapred.JobConf
-
Expert: Get the maximum number of failures of a given job per tasktracker.
- getMaxVirtualMemoryForTask() - Method in class org.apache.hadoop.mapred.JobConf
-
- getMemory() - Method in class org.apache.hadoop.mapreduce.jobhistory.NormalizedResourceEvent
-
Get the normalized memory.
- getMemoryForMapTask() - Method in class org.apache.hadoop.mapred.JobConf
-
Get memory required to run a map task of the job, in MB.
- getMemoryForReduceTask() - Method in class org.apache.hadoop.mapred.JobConf
-
Get memory required to run a reduce task of the job, in MB.
- getMergedMapOutputsCounter() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getMergePhase() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getMessage() - Method in exception org.apache.hadoop.mapred.InvalidInputException
-
Get a summary message of the problems found.
- getMessage() - Method in exception org.apache.hadoop.mapreduce.lib.input.InvalidInputException
-
Get a summary message of the problems found.
- getMessage() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
- getMinSplitSize(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Get the minimum split size
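A short sketch of the setter side read by the two split-size getters above (new-API FileInputFormat; the byte values are arbitrary examples):

    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class SplitSizeExample {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        // Bound the sizes of the input splits the job may generate.
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);   // 64 MB
        FileInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024);  // 256 MB
      }
    }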
- getMRv2LogDir() - Static method in class org.apache.hadoop.mapred.TaskLog
-
- getName() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- getName() - Method in class org.apache.hadoop.mapred.Counters.Group
-
- getName() - Method in interface org.apache.hadoop.mapreduce.Counter
-
- getName() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounterGroup
-
- getName() - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Get the internal name of the group
- getName() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- getName() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup
-
- getName() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- getName() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup
-
- getName() - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- getName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter.Builder
-
Gets the value of the 'name' field
- getName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter
-
Gets the value of the 'name' field.
- getName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup.Builder
-
Gets the value of the 'name' field
- getName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup
-
Gets the value of the 'name' field.
- getName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters.Builder
-
Gets the value of the 'name' field
- getName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters
-
Gets the value of the 'name' field.
- getNamedOutputFormatClass(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns the named output OutputFormat.
- getNamedOutputKeyClass(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns the key class for a named output.
- getNamedOutputs() - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns an iterator over the defined named outputs.
- getNamedOutputsList(JobConf) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns list of channel names.
- getNamedOutputValueClass(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Returns the value class for a named output.
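The named-output getters above read configuration that is normally written with MultipleOutputs.addNamedOutput; a hedged sketch using the old org.apache.hadoop.mapred API, where the output name "summary" and the key/value classes are illustrative:

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextOutputFormat;
    import org.apache.hadoop.mapred.lib.MultipleOutputs;

    public class NamedOutputExample {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        // Register a named output; the getters above can then recover its
        // OutputFormat, key class and value class from this JobConf.
        MultipleOutputs.addNamedOutput(conf, "summary", TextOutputFormat.class,
            Text.class, LongWritable.class);
        System.out.println(MultipleOutputs.getNamedOutputKeyClass(conf, "summary"));
      }
    }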
- getNeededMem() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getNewJobID() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Allocate a name for the job.
- getNextEvent() - Method in class org.apache.hadoop.mapreduce.jobhistory.EventReader
-
Get the next event from the stream
- getNextRecordRange() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get the next record range which is going to be processed by Task.
- getNode() - Method in class org.apache.hadoop.mapred.join.Parser.NodeToken
-
- getNode() - Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getNode() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.NodeToken
-
- getNode() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.Token
-
- getNodeId() - Method in class org.apache.hadoop.mapreduce.v2.LogParams
-
- getNodeManagerHost() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Gets the value of the 'nodeManagerHost' field
- getNodeManagerHost() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Gets the value of the 'nodeManagerHost' field.
- getNodeManagerHost() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStartedEvent
-
- getNodeManagerHost() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.AMInfo
-
- getNodeManagerHttpPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Gets the value of the 'nodeManagerHttpPort' field
- getNodeManagerHttpPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Gets the value of the 'nodeManagerHttpPort' field.
- getNodeManagerHttpPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStartedEvent
-
- getNodeManagerHttpPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.AMInfo
-
- getNodeManagerPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Gets the value of the 'nodeManagerPort' field
- getNodeManagerPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Gets the value of the 'nodeManagerPort' field.
- getNodeManagerPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStartedEvent
-
- getNodeManagerPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.AMInfo
-
- getNum() - Method in class org.apache.hadoop.mapred.join.Parser.NumToken
-
- getNum() - Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getNum() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.NumToken
-
- getNum() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.Token
-
- getNumberOfThreads(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
-
The number of threads in the thread pool that will run the map function.
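A sketch of the usual pairing for the thread-count getter above; ExampleMapper is a placeholder for an application mapper, and the pool size of 8 is illustrative:

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

    public class MultithreadedExample {
      // Placeholder mapper; a real job would process records here.
      public static class ExampleMapper
          extends Mapper<LongWritable, Text, Text, LongWritable> { }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        job.setMapperClass(MultithreadedMapper.class);
        // The mapper each worker thread runs, and the size of the thread pool.
        MultithreadedMapper.setMapperClass(job, ExampleMapper.class);
        MultithreadedMapper.setNumberOfThreads(job, 8);
      }
    }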
- getNumExcludedNodes() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of excluded hosts in the cluster.
- getNumFailedCleanups() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of failed cleanup tasks
- getNumFailedMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of failed maps
- getNumFailedReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of failed reduces
- getNumFailedSetups() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of failed set up tasks
- getNumFinishedCleanups() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of cleanup tasks that finished
- getNumFinishedSetups() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of finished set up tasks
- getNumKilledCleanups() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of killed cleanup tasks
- getNumKilledMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of killed maps
- getNumKilledReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of killed reduces
- getNumKilledSetups() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of killed set up tasks
- getNumKnownMapOutputs() - Method in class org.apache.hadoop.mapreduce.task.reduce.MapHost
-
- getNumLinesPerSplit(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
Get the number of lines per split
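A brief sketch of configuring the per-split line count that this getter reports (new-API NLineInputFormat; the value 1000 is illustrative):

    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

    public class NLineExample {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        job.setInputFormatClass(NLineInputFormat.class);
        // Each map task will process 1000 input lines.
        NLineInputFormat.setNumLinesPerSplit(job, 1000);
      }
    }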
- getNumMaps() - Method in class org.apache.hadoop.mapred.ReduceTask
-
- getNumMapTasks() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the configured number of map tasks for this job.
- getNumPaths() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the number of Paths in the split
- getNumReduceTasks() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the configured number of reduce tasks for this job.
- getNumReduceTasks() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the configured number of reduce tasks for this job.
- getNumReduceTasks() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getNumReduceTasks() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getNumReduceTasks() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the configured number of reduce tasks for this job.
- getNumReservedSlots() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getNumSlots() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getNumSlotsRequired() - Method in class org.apache.hadoop.mapred.Task
-
- getNumTasksToExecutePerJvm() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the number of tasks that a spawned JVM should execute
- getNumUsedSlots() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getOccupiedMapSlots() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get number of occupied map slots in the cluster.
- getOccupiedReduceSlots() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of occupied reduce slots in the cluster.
- getOffset(int) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the start offset of the ith Path
- getOperations() - Method in class org.apache.hadoop.mapreduce.QueueAclsInfo
-
Get the operations allowed on the queue.
- getOutputCommitter() - Method in class org.apache.hadoop.mapred.JobConf
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
- getOutputCommitter() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
-
- getOutputCommitter() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
-
Get the output committer for this output format.
- getOutputCommitter() - Method in class org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl
-
- getOutputCommitter() - Method in interface org.apache.hadoop.mapreduce.TaskInputOutputContext
-
- getOutputCompressionType(JobConf) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
Get the SequenceFile.CompressionType
for the output SequenceFile
.
- getOutputCompressionType(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
Get the SequenceFile.CompressionType
for the output SequenceFile
.
- getOutputCompressorClass(JobConf, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Get the CompressionCodec
for compressing the job outputs.
- getOutputCompressorClass(JobContext, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the CompressionCodec
for compressing the job outputs.
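As a sketch of the setter side that this getter reads back (new-API FileOutputFormat; GzipCodec is just one possible codec choice):

    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class OutputCompressionExample {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        // Enable job-output compression and pick a codec; getOutputCompressorClass
        // would now resolve to GzipCodec for this job.
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
      }
    }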
- getOutputFieldCount() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getOutputFieldNames() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getOutputFile() - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Return the path to local map output file created earlier
- getOutputFile() - Method in class org.apache.hadoop.mapred.MROutputFiles
-
Return the path to local map output file created earlier
- getOutputFileForWrite(long) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Create a local map output file name.
- getOutputFileForWrite(long) - Method in class org.apache.hadoop.mapred.MROutputFiles
-
Create a local map output file name.
- getOutputFileForWriteInVolume(Path) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Create a local map output file name on the same volume.
- getOutputFileForWriteInVolume(Path) - Method in class org.apache.hadoop.mapred.MROutputFiles
-
Create a local map output file name on the same volume.
- getOutputFormat() - Method in class org.apache.hadoop.mapred.JobConf
-
- getOutputFormatClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getOutputFormatClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getOutputFormatClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getOutputFormatClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- getOutputIndexFile() - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Return the path to a local map output index file created earlier
- getOutputIndexFile() - Method in class org.apache.hadoop.mapred.MROutputFiles
-
Return the path to a local map output index file created earlier
- getOutputIndexFileForWrite(long) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Create a local map output index file name.
- getOutputIndexFileForWrite(long) - Method in class org.apache.hadoop.mapred.MROutputFiles
-
Create a local map output index file name.
- getOutputIndexFileForWriteInVolume(Path) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Create a local map output index file name on the same volume.
- getOutputIndexFileForWriteInVolume(Path) - Method in class org.apache.hadoop.mapred.MROutputFiles
-
Create a local map output index file name on the same volume.
- getOutputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the key class for the job output data.
- getOutputKeyClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the key class for the job output data.
- getOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the key class for the job output data.
- getOutputKeyComparator() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the RawComparator
comparator used to compare keys.
- getOutputName(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the base output name for the output file.
- getOutputPath(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Get the Path
to the output directory for the map-reduce job.
- getOutputPath(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the Path
to the output directory for the map-reduce job.
- getOutputSize() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Returns the number of bytes of output from this map.
- getOutputStream(int) - Method in class org.apache.hadoop.mapred.BackupStore
-
For writing the first key and value bytes directly from the
value iterators, pass the current underlying output stream
- getOutputStream() - Method in class org.apache.hadoop.mapred.IFile.Writer
-
- getOutputTableName() - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- getOutputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the value class for job outputs.
- getOutputValueClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the value class for job outputs.
- getOutputValueClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getOutputValueClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getOutputValueClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the value class for job outputs.
- getOutputValueGroupingComparator() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the user defined WritableComparable
comparator for
grouping keys of inputs to the reduce.
- getOwner() - Method in class org.apache.hadoop.mapreduce.v2.LogParams
-
- getParseException() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser
-
Get the parse exception, if any.
- getPartition(K2, V2, int) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
-
- getPartition(K2, V2, int) - Method in interface org.apache.hadoop.mapred.Partitioner
-
Get the partition number for a given key (hence record) given the total
number of partitions, i.e. the number of reduce tasks for the job.
- getPartition() - Method in class org.apache.hadoop.mapred.Task
-
Get the index of this task within the job.
- getPartition(BinaryComparable, V, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
Use (the specified slice of the array returned by)
BinaryComparable.getBytes()
to partition.
- getPartition(K, V, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner
-
- getPartition(K2, V2, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getPartition(int, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- getPartition(K, V, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
- getPartition(KEY, VALUE, int) - Method in class org.apache.hadoop.mapreduce.Partitioner
-
Get the partition number for a given key (hence record) given the total
number of partitions, i.e. the number of reduce tasks for the job.
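An illustrative sketch (not part of the library) of implementing the contract described above; it mirrors the hash-based routing of HashPartitioner and returns an index in [0, numPartitions):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Hypothetical partitioner: routes each record by the hash of its key.
    public class ExamplePartitioner extends Partitioner<Text, IntWritable> {
      @Override
      public int getPartition(Text key, IntWritable value, int numPartitions) {
        // Mask the sign bit so the result is always in [0, numPartitions).
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
      }
    }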
- getPartitionerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
- getPartitionerClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getPartitionerClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getPartitionerClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getPartitionerClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- getPartitionFile(JobConf) - Static method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
-
- getPartitionFile(Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
Get the path to the SequenceFile storing the sorted partition keyset.
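A hedged sketch of registering the partition keyset path that this getter returns; the path is purely illustrative, and in practice the file is usually produced by sampling the input (for example with InputSampler.writePartitionFile):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

    public class PartitionFileExample {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        job.setPartitionerClass(TotalOrderPartitioner.class);
        // Illustrative path to the SequenceFile holding the sorted partition keyset.
        TotalOrderPartitioner.setPartitionFile(job.getConfiguration(),
            new Path("/tmp/example/_partitions"));
      }
    }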
- getPath() - Method in class org.apache.hadoop.mapred.FileSplit
-
The file containing this split's data.
- getPath(int) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns the ith Path
- getPath() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
The file containing this split's data.
- getPathForCustomFile(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Helper function to generate a Path
for a file that is unique for
the task within the job output directory.
- getPathForWorkFile(TaskInputOutputContext<?, ?, ?, ?>, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Helper function to generate a Path
for a file that is unique for
the task within the job output directory.
- getPaths() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns all the Paths in the split
- getPhase() - Method in class org.apache.hadoop.mapred.Task
-
Return current phase of the task.
- getPhase() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get current phase of this task.
- getPhysMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'physMemKbytes' field
- getPhysMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'physMemKbytes' field.
- getPhysMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
- getPhysMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'physMemKbytes' field
- getPhysMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'physMemKbytes' field.
- getPhysMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
- getPhysMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'physMemKbytes' field
- getPhysMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'physMemKbytes' field.
- getPhysMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
- getPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'port' field
- getPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'port' field.
- getPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
Get the tracker rpc port
- getPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'port' field
- getPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'port' field.
- getPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
Get the tracker rpc port
- getPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'port' field
- getPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'port' field.
- getPort() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
Get the rpc port for the host where the attempt executed
- getPos() - Method in class org.apache.hadoop.mapred.FixedLengthRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Unsupported (returns zero in all cases).
- getPos() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request position from proxied RR.
- getPos() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
Return the amount of data processed.
- getPos() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- getPos() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapred.LineRecordReader
-
- getPos() - Method in interface org.apache.hadoop.mapred.RecordReader
-
Returns the current position in the input.
- getPos() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getPos() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Deprecated.
- getPos() - Method in class org.apache.hadoop.mapreduce.lib.input.FixedLengthRecordReader
-
- getPosition() - Method in class org.apache.hadoop.mapred.IFile.Reader
-
- getPosition() - Method in class org.apache.hadoop.mapred.IFileInputStream
-
- getPosition() - Method in class org.apache.hadoop.mapred.Merger.Segment
-
- getPosition() - Method in class org.apache.hadoop.mapreduce.task.reduce.InMemoryReader
-
- getPrefix(boolean) - Static method in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
Returns the prefix to use for the configuration of the chain depending if
it is for a Mapper or a Reducer.
- getPriority() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the priority of the job.
- getPriority() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getPriority() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange.Builder
-
Gets the value of the 'priority' field
- getPriority() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange
-
Gets the value of the 'priority' field.
- getPriority() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChangeEvent
-
Get the job priority
- getPriority() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Return the priority of the job
- getProblems() - Method in exception org.apache.hadoop.mapred.InvalidInputException
-
Get the complete list of the problems reported.
- getProblems() - Method in exception org.apache.hadoop.mapreduce.lib.input.InvalidInputException
-
Get the complete list of the problems reported.
- getProfileEnabled() - Method in class org.apache.hadoop.mapred.JobConf
-
Get whether the task profiling is enabled.
- getProfileEnabled() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get whether the task profiling is enabled.
- getProfileEnabled() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getProfileEnabled() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getProfileEnabled() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get whether the task profiling is enabled.
- getProfileParams() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the profiler configuration arguments.
- getProfileParams() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the profiler configuration arguments.
- getProfileParams() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getProfileParams() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getProfileParams() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the profiler configuration arguments.
- getProfileTaskRange(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Get the range of maps or reduces to profile.
- getProfileTaskRange(boolean) - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the range of maps or reduces to profile.
- getProfileTaskRange(boolean) - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getProfileTaskRange(boolean) - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getProfileTaskRange(boolean) - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the range of maps or reduces to profile.
- getProgress() - Method in class org.apache.hadoop.mapred.FixedLengthRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Report progress as the minimum of all child RR progress.
- getProgress() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Request progress from proxied RR.
- getProgress() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
Return progress based on the amount of data processed so far.
- getProgress() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReaderWrapper
-
- getProgress() - Method in class org.apache.hadoop.mapred.LineRecordReader
-
Get the progress within the split
- getProgress() - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer.MRResultIterator
-
- getProgress() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
-
Gets the Progress object; this has a float (0.0 - 1.0)
indicating the bytes processed by the iterator so far
- getProgress() - Method in interface org.apache.hadoop.mapred.RecordReader
-
- getProgress() - Method in interface org.apache.hadoop.mapred.Reporter
-
Get the progress of the task.
- getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
Return the progress within the input split
- getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
Return the progress within the input split
- getProgress() - Method in class org.apache.hadoop.mapred.Task
-
- getProgress() - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- getProgress() - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
- getProgress() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
The current progress of the record reader through its data.
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
Return progress based on the amount of data processed so far.
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReaderWrapper
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.FixedLengthRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
-
Get the progress within the split
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
Return the progress within the input split
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
Return the progress within the input split
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Report progress as the minimum of all child RR progress.
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Request progress from proxied RR.
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.RecordReader
-
The current progress of the record reader through its data.
- getProgress() - Method in class org.apache.hadoop.mapreduce.StatusReporter
-
Get the current progress.
- getProgress() - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.DummyReporter
-
- getProgress() - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
- getProgress() - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
The current progress of the task attempt.
- getProgress() - Method in class org.apache.hadoop.mapreduce.TaskReport
-
The amount completed, between zero and one.
- getProgressible() - Method in interface org.apache.hadoop.mapred.JobContext
-
Get the progress mechanism for reporting progress.
- getProgressible() - Method in class org.apache.hadoop.mapred.JobContextImpl
-
Get the progress mechanism for reporting progress.
- getProgressible() - Method in interface org.apache.hadoop.mapred.TaskAttemptContext
-
- getProgressible() - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
- getProgressPollInterval(Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
The interval at which monitorAndPrintJob() prints status
- getProperties() - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Get properties.
- getQueue(String) - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get queue information for the specified name.
- getQueue() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Get queue name
- getQueue(String) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Gets scheduling information associated with the particular Job queue
- getQueueAclsForCurrentUser() - Method in class org.apache.hadoop.mapred.JobClient
-
Gets the Queue ACLs for the current user
- getQueueAclsForCurrentUser() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Gets the Queue ACLs for the current user
- getQueueAclsForCurrentUser() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Gets the Queue ACLs for the current user
- getQueueAdmins(String) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Get the administrators of the given job-queue.
- getQueueChildren() - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Get immediate children.
- getQueueInfo(String) - Method in class org.apache.hadoop.mapred.JobClient
-
Gets the queue information associated with a particular Job Queue
- getQueueName() - Method in class org.apache.hadoop.mapred.JobConf
-
Return the name of the queue to which this job is submitted.
- getQueueName() - Method in class org.apache.hadoop.mapred.JobProfile
-
Get the name of the queue to which the job is submitted.
- getQueueName() - Method in class org.apache.hadoop.mapreduce.QueueAclsInfo
-
Get queue name.
- getQueueName() - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Get the queue name from JobQueueInfo
- getQueues() - Method in class org.apache.hadoop.mapred.JobClient
-
Return an array of queue information objects about all the Job Queues
configured.
- getQueues() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get all the queues in cluster.
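A minimal usage sketch for the queue listing call above (assuming a default Configuration on the classpath; the class name and output are illustrative, not part of the API):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Cluster;
    import org.apache.hadoop.mapreduce.QueueInfo;

    public class ListQueues {
      public static void main(String[] args) throws IOException, InterruptedException {
        Cluster cluster = new Cluster(new Configuration());
        // Print every queue the cluster reports, with its current state.
        for (QueueInfo queue : cluster.getQueues()) {
          System.out.println(queue.getQueueName() + " : " + queue.getState());
        }
      }
    }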
- getQueues() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Gets the set of queues associated with the JobTracker.
- getQueueState() - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
Deprecated.
- getRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'rackname' field
- getRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'rackname' field.
- getRackName() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
Get the rack name
- getRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'rackname' field
- getRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'rackname' field.
- getRackName() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
Get the rack name of the node where the attempt ran
- getRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Gets the value of the 'rackname' field
- getRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Gets the value of the 'rackname' field.
- getRackName() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinishedEvent
-
Get the rackname where the attempt executed
- getRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'rackname' field
- getRackname() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'rackname' field.
- getRackName() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
Get the rack name of the node where the attempt ran
- getRawDataLength() - Method in class org.apache.hadoop.mapred.Merger.Segment
-
- getRawLength() - Method in class org.apache.hadoop.mapred.IFile.Writer
-
- getReaders(FileSystem, Path, Configuration) - Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
-
Open the output generated by this format.
- getReaders(Configuration, Path) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
Open the output generated by this format.
- getReaders(Path, Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat
-
Open the output generated by this format.
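A hedged sketch of reading back MapFile output with the new-API getReaders; the output path, key/value types, and partitioner are assumptions about the job that wrote the data:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.partition.HashPartitioner;

    public class MapFileLookup {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // One reader per MapFile in the job's output directory (assumed to be /out).
        MapFile.Reader[] readers = MapFileOutputFormat.getReaders(new Path("/out"), conf);
        Text key = new Text("someKey");
        IntWritable value = new IntWritable();
        // getEntry uses the same partitioner as the job to pick the reader holding the key.
        MapFileOutputFormat.getEntry(readers, new HashPartitioner<Text, IntWritable>(), key, value);
        System.out.println(key + " -> " + value);
      }
    }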
- getReadyJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getReadyJobsList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getReasonForBlacklist() - Method in class org.apache.hadoop.mapreduce.TaskTrackerInfo
-
Gets the reason for which the tasktracker was blacklisted.
- getReasonForBlackListing() - Method in class org.apache.hadoop.mapred.ClusterStatus.BlackListInfo
-
Gets the reason for which the tasktracker was blacklisted.
- getRecordLength(Configuration) - Static method in class org.apache.hadoop.mapred.FixedLengthInputFormat
-
Get record length value
- getRecordLength(Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat
-
Get record length value
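A minimal sketch pairing the getter above with its setter on the new-API FixedLengthInputFormat (the 40-byte record length is an arbitrary example):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat;

    public class FixedLengthSetup {
      public static void main(String[] args) throws IOException {
        Job job = Job.getInstance(new Configuration());
        // Every record in the input is exactly 40 bytes long.
        FixedLengthInputFormat.setRecordLength(job.getConfiguration(), 40);
        job.setInputFormatClass(FixedLengthInputFormat.class);
        int length = FixedLengthInputFormat.getRecordLength(job.getConfiguration()); // 40
        System.out.println("record length = " + length);
      }
    }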
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.FixedLengthInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in interface org.apache.hadoop.mapred.InputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in interface org.apache.hadoop.mapred.join.ComposableInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Construct a CompositeRecordReader for the children of this InputFormat
as defined in the init expression.
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
This is not implemented yet.
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.CombineSequenceFileInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.CombineTextInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.DelegatingInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.MultiFileInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter
-
Create a record reader for the given split
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileInputFormat
-
- getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.TextInputFormat
-
- getRecordReaderQueue() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Return sorted list of RecordReaders for this composite.
- getRecordReaderQueue() - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Return sorted list of RecordReaders for this composite.
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.FileOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.FilterOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.LazyOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
Create a composite record writer that can write key/value data to different
output files
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.MapFileOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in interface org.apache.hadoop.mapred.OutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
- getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.TextOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FilterOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.MapFileOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
-
- getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
-
- getReduceCombineInputCounter() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getReduceCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Gets the value of the 'reduceCounters' field
- getReduceCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Gets the value of the 'reduceCounters' field.
- getReduceCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent
-
Get the reduce counters for the job
- getReduceCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getReduceDebugScript() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the reduce task's debug script.
- getReduceFinished() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of reducers that finished
- getReduceId() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getReduceProgress() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getReducerClass() - Method in class org.apache.hadoop.mapred.JobConf
-
- getReducerClass() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
- getReducerClass() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getReducerClass() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getReducerClass() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
- getReducerContext(ReduceContext<KEYIN, VALUEIN, KEYOUT, VALUEOUT>) - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer
-
A wrapped Reducer.Context
for custom implementations.
- getReducerMaxSkipGroups(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the number of acceptable skip groups surrounding the bad group PER
bad group in reducer.
- getReduceShuffleBytes() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getReduceSlotCapacity() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the total number of reduce slots in the cluster.
- getReduceSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
-
Should speculative execution be used for this job for reduce tasks?
Defaults to true.
- getReduceStarted() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of Reducers that were started
- getReduceTask() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getReduceTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the reduce tasks of a job.
- getReduceTaskReports(String) - Method in class org.apache.hadoop.mapred.JobClient
-
- getReduceTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of currently running reduce tasks in the cluster.
- getReduceTasks() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.AnalyzedJob
-
Get the reduce tasks list
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMax
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMin
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueSum
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMax
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMin
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UniqValueCount
-
- getReport() - Method in interface org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregator
-
- getReport() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueHistogram
-
- getReportDetails() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueHistogram
-
- getReporter() - Method in class org.apache.hadoop.mapred.MapOutputCollector.Context
-
- getReporter() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getReportItems() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueHistogram
-
- getRepresentingCharacter(TaskType) - Static method in class org.apache.hadoop.mapreduce.TaskID
-
Gets the character representing the TaskType.
- getReservedMapSlots() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get number of reserved map slots in the cluster.
- getReservedMem() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getReservedReduceSlots() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of reserved reduce slots in the cluster.
- getRootQueues() - Method in class org.apache.hadoop.mapred.JobClient
-
Returns an array of queue information objects about the configured root-level queues.
- getRootQueues() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Gets the root level queues.
- getRootQueues() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Gets the root level queues.
- getRunningJobList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getRunningJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getRunningMaps() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of running map tasks in the cluster.
- getRunningReduces() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of running reduce tasks in the cluster.
- getRunningTaskAttemptIds() - Method in class org.apache.hadoop.mapreduce.TaskReport
-
Get the running task attempt IDs for this task
- getRunningTaskAttempts() - Method in class org.apache.hadoop.mapred.TaskReport
-
Get the running task attempt IDs for this task
- getRunState() - Method in class org.apache.hadoop.mapred.JobStatus
-
- getRunState() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getSample(InputFormat<K, V>, JobConf) - Method in class org.apache.hadoop.mapred.lib.InputSampler.IntervalSampler
-
For each split sampled, emit when the ratio of the number of records
retained to the total record count is less than the specified
frequency.
- getSample(InputFormat<K, V>, JobConf) - Method in class org.apache.hadoop.mapred.lib.InputSampler.RandomSampler
-
Randomize the split order, then take the specified number of keys from
each split sampled, where each key is selected with the specified
probability and possibly replaced by a subsequently selected key when
the quota of keys from that split is satisfied.
- getSample(InputFormat<K, V>, JobConf) - Method in interface org.apache.hadoop.mapred.lib.InputSampler.Sampler
-
For a given job, collect and return a subset of the keys from the
input data.
- getSample(InputFormat<K, V>, JobConf) - Method in class org.apache.hadoop.mapred.lib.InputSampler.SplitSampler
-
From each split sampled, take the first numSamples / numSplits records.
- getSample(InputFormat<K, V>, Job) - Method in class org.apache.hadoop.mapreduce.lib.partition.InputSampler.IntervalSampler
-
For each split sampled, emit when the ratio of the number of records
retained to the total record count is less than the specified
frequency.
- getSample(InputFormat<K, V>, Job) - Method in class org.apache.hadoop.mapreduce.lib.partition.InputSampler.RandomSampler
-
Randomize the split order, then take the specified number of keys from
each split sampled, where each key is selected with the specified
probability and possibly replaced by a subsequently selected key when
the quota of keys from that split is satisfied.
- getSample(InputFormat<K, V>, Job) - Method in interface org.apache.hadoop.mapreduce.lib.partition.InputSampler.Sampler
-
For a given job, collect and return a subset of the keys from the
input data.
- getSample(InputFormat<K, V>, Job) - Method in class org.apache.hadoop.mapreduce.lib.partition.InputSampler.SplitSampler
-
From each split sampled, take the first numSamples / numSplits records.
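The samplers above are typically fed to InputSampler.writePartitionFile to seed a TotalOrderPartitioner. A hedged sketch, assuming sorted SequenceFile input; the paths, key/value types, and sampling parameters are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
    import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

    public class SampledPartitioning {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration());
        job.setInputFormatClass(SequenceFileInputFormat.class);
        FileInputFormat.addInputPath(job, new Path("/data/in"));
        job.setMapOutputKeyClass(Text.class);
        job.setPartitionerClass(TotalOrderPartitioner.class);
        TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), new Path("/tmp/partitions"));
        // Sample roughly 10% of keys, capped at 10000 samples drawn from at most 10 splits.
        InputSampler.Sampler<Text, Text> sampler =
            new InputSampler.RandomSampler<Text, Text>(0.1, 10000, 10);
        InputSampler.writePartitionFile(job, sampler);
      }
    }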
- getSchedulerInfo(String) - Method in class org.apache.hadoop.mapred.QueueManager
-
Return the scheduler information configured for this queue.
- getSchedulingInfo() - Method in class org.apache.hadoop.mapreduce.Job
-
Get scheduling info of the job.
- getSchedulingInfo() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Gets the scheduling information associated with a particular job.
- getSchedulingInfo() - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Gets the scheduling information associated with a particular job queue.
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.Event
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
- getSchema() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated
-
- getScheme() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- getSecretKey(Credentials, Text) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
Auxiliary method to get the user's secret keys.
- getSelectQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBRecordReader
-
Returns the query for selecting the records;
subclasses can override this for custom behaviour.
- getSelectQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
Returns the query for selecting the records;
subclasses can override this for custom behaviour.
- getSelectQuery() - Method in class org.apache.hadoop.mapreduce.lib.db.OracleDBRecordReader
-
Returns the query for selecting the records from an Oracle DB.
- getSequenceFileOutputKeyClass(JobConf) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
Get the key class for the SequenceFile
- getSequenceFileOutputKeyClass(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
Get the key class for the SequenceFile
- getSequenceFileOutputValueClass(JobConf) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
Get the value class for the SequenceFile
- getSequenceFileOutputValueClass(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
Get the value class for the SequenceFile
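A hedged sketch of how the two getters above pair with their setters on the new-API SequenceFileAsBinaryOutputFormat (the key and value classes are arbitrary examples):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat;

    public class BinarySeqFileSetup {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration());
        job.setOutputFormatClass(SequenceFileAsBinaryOutputFormat.class);
        // Declare the logical key/value classes recorded in the SequenceFile header.
        SequenceFileAsBinaryOutputFormat.setSequenceFileOutputKeyClass(job, Text.class);
        SequenceFileAsBinaryOutputFormat.setSequenceFileOutputValueClass(job, IntWritable.class);
        System.out.println(SequenceFileAsBinaryOutputFormat.getSequenceFileOutputKeyClass(job));
      }
    }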
- getSequenceWriter(TaskAttemptContext, Class<?>, Class<?>) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
- getSessionId() - Method in class org.apache.hadoop.mapred.JobConf
-
Deprecated.
- getSetupFinished() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of setup tasks that finished
- getSetupProgress() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getSetupStarted() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get number of setup tasks that started
- getSetupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
-
Get the information of the current state of the setup tasks of a job.
- getShuffledMapsCounter() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getShuffleFinishTime() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get shuffle finish time for the task.
- getShuffleFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getShuffleFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'shuffleFinishTime' field
- getShuffleFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'shuffleFinishTime' field.
- getShuffleFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
Get the finish time of the shuffle phase
- getShufflePort() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getShufflePort() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Gets the value of the 'shufflePort' field
- getShufflePort() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Gets the value of the 'shufflePort' field.
- getShufflePort() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
Get the shuffle port
- getShuffleSecret() - Method in class org.apache.hadoop.mapred.Task
-
Get the secret key used to authenticate the shuffle
- getShuffleSecretKey(Credentials) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
- getSize() - Method in class org.apache.hadoop.mapred.IFileInputStream
-
- getSize() - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- getSize() - Method in class org.apache.hadoop.mapreduce.task.reduce.MapOutput
-
- getSkipOutputPath(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Get the directory to which skipped records are written.
- getSkipRanges() - Method in class org.apache.hadoop.mapred.Task
-
Get skipRanges.
- getSortComparator() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the RawComparator comparator used to compare keys.
- getSortComparator() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getSortComparator() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getSortComparator() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the RawComparator comparator used to compare keys.
- getSortFinishTime() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get sort finish time for the task.
- getSortFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getSortFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'sortFinishTime' field
- getSortFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'sortFinishTime' field.
- getSortFinishTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
Get the finish time of the sort phase
- getSortPhase() - Method in class org.apache.hadoop.mapred.MapTask
-
- getSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
-
Should speculative execution be used for this job?
Defaults to true.
- getSpilledRecordsCounter() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getSpillFile(int) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Return a local map spill file created earlier.
- getSpillFile(int) - Method in class org.apache.hadoop.mapred.MROutputFiles
-
Return a local map spill file created earlier.
- getSpillFileForWrite(int, long) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Create a local map spill file name.
- getSpillFileForWrite(int, long) - Method in class org.apache.hadoop.mapred.MROutputFiles
-
Create a local map spill file name.
- getSpillIndexFile(int) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Return a local map spill index file created earlier
- getSpillIndexFile(int) - Method in class org.apache.hadoop.mapred.MROutputFiles
-
Return a local map spill index file created earlier
- getSpillIndexFileForWrite(int, long) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
Create a local map spill index file name.
- getSpillIndexFileForWrite(int, long) - Method in class org.apache.hadoop.mapred.MROutputFiles
-
Create a local map spill index file name.
- getSplit() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getSplitHosts(BlockLocation[], long, long, NetworkTopology) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
This function identifies and returns the hosts that contribute
most for a given split.
- getSplitIndex() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getSplitLocation() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitIndex
-
- getSplitLocation() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getSplitLocations() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskInfo
-
- getSplitLocations() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted.Builder
-
Gets the value of the 'splitLocations' field
- getSplitLocations() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
Gets the value of the 'splitLocations' field.
- getSplitLocations() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStartedEvent
-
Get the split locations, applicable for map tasks
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- getSplits(JobConf, int) - Method in interface org.apache.hadoop.mapred.InputFormat
-
Logically split the set of input files for the job.
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Build a CompositeInputSplit from the child InputFormats by assigning the
ith split from each child to the ith composite split.
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
-
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
Logically split the set of input files for the job.
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.DelegatingInputFormat
-
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
-
Logically splits the set of input files for the job, splits N lines
of the input as one split.
- getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.MultiFileInputFormat
-
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.InputFormat
-
Logically split the set of input files for the job.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
Logically split the set of input files for the job.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Logically split the set of input files for the job.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat
-
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Generate the list of files and make them into FileSplits.
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
Logically splits the set of input files for the job, splits N lines
of the input as one split.
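A minimal sketch of configuring this per-N-lines splitting on the new-API NLineInputFormat (100 lines per split is an arbitrary choice):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

    public class NLineSetup {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration());
        job.setInputFormatClass(NLineInputFormat.class);
        // Each map task receives at most 100 lines of input.
        NLineInputFormat.setNumLinesPerSplit(job, 100);
        System.out.println(NLineInputFormat.getNumLinesPerSplit(job));
      }
    }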
- getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Build a CompositeInputSplit from the child InputFormats by assigning the
ith split from each child to the ith composite split.
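A hedged sketch of setting the join expression those composite splits are built from; the paths, the "inner" join op, and the mapreduce.join.expr configuration key are assumptions about a typical new-API join setup:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
    import org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat;

    public class JoinSetup {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration());
        job.setInputFormatClass(CompositeInputFormat.class);
        // Inner-join two sorted, identically partitioned SequenceFile datasets.
        job.getConfiguration().set("mapreduce.join.expr",
            CompositeInputFormat.compose("inner", SequenceFileInputFormat.class,
                new Path("/data/a"), new Path("/data/b")));
      }
    }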
- getSplitsForFile(FileStatus, Configuration, int) - Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
- getSplitter(int) - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
- getSplitter(int) - Method in class org.apache.hadoop.mapreduce.lib.db.OracleDataDrivenDBInputFormat
-
- getStagingAreaDir() - Method in class org.apache.hadoop.mapred.JobClient
-
Fetch the staging area directory for the application
- getStagingAreaDir() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Grab the jobtracker's view of the staging directory path where
job-specific files will be placed.
- getStagingAreaDir() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Get a hint from the JobTracker
where job-specific files are to be placed.
- getStagingDir(Cluster, Configuration) - Static method in class org.apache.hadoop.mapreduce.JobSubmissionFiles
-
Initializes the staging directory and returns the path.
- getStart() - Method in class org.apache.hadoop.mapred.FileSplit
-
The position of the first byte in the file to process.
- getStart() - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat.DBInputSplit
-
- getStart() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
-
The position of the first byte in the file to process.
- getStartOffset() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- getStartOffset() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitIndex
-
- getStartOffset() - Method in class org.apache.hadoop.mapreduce.split.JobSplit.TaskSplitMetaInfo
-
- getStartOffsets() - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileSplit
-
Returns an array containing the start offsets of the files in the split
- getStartTime() - Method in class org.apache.hadoop.mapred.TaskStatus
-
Get start time of the task.
- getStartTime() - Method in class org.apache.hadoop.mapreduce.Job
-
Get start time of the job.
- getStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Gets the value of the 'startTime' field
- getStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Gets the value of the 'startTime' field.
- getStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStartedEvent
-
- getStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.AMInfo
-
- getStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskInfo
-
- getStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Gets the value of the 'startTime' field
- getStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Gets the value of the 'startTime' field.
- getStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
Get the start time
- getStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted.Builder
-
Gets the value of the 'startTime' field
- getStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
Gets the value of the 'startTime' field.
- getStartTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStartedEvent
-
Get the start time of the task
- getStartTime() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getStartTime() - Method in class org.apache.hadoop.mapreduce.TaskReport
-
Get start time of task.
- getState() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- getState() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getState() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getState() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'state' field
- getState() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'state' field.
- getState() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
Get the state string
- getState() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'state' field
- getState() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'state' field.
- getState() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
Get the state string
- getState() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Gets the value of the 'state' field
- getState() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Gets the value of the 'state' field.
- getState() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinishedEvent
-
Get the state string
- getState() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getState() - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Return the queue state
- getState(String) - Static method in enum org.apache.hadoop.mapreduce.QueueState
-
- getState() - Method in class org.apache.hadoop.mapreduce.task.reduce.MapHost
-
- getState() - Method in class org.apache.hadoop.mapreduce.TaskReport
-
The most recent state, reported by the Reporter.
- getStatement() - Method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat.DBRecordWriter
-
- getStatement() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getStateName() - Method in enum org.apache.hadoop.mapreduce.QueueState
-
- getStateString() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getStatus() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getStatus() - Method in class org.apache.hadoop.mapreduce.Job
-
- getStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInitedEvent
-
Get the status
- getStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChangedEvent
-
Get the event status
- getStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent
-
Get the status
- getStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'status' field
- getStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'status' field.
- getStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Gets the value of the 'status' field
- getStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Gets the value of the 'status' field.
- getStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Gets the value of the 'status' field
- getStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Gets the value of the 'status' field.
- getStatus() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getStatus() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getStatus() - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
Get the last set status message.
- getStatus() - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Get the last set status message.
- getStatus() - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
Returns the task completion status, e.g. Status.SUCCEEDED or Status.FAILED.
- getStr() - Method in class org.apache.hadoop.mapred.join.Parser.StrToken
-
- getStr() - Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getStr() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.StrToken
-
- getStr() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.Token
-
- getSubmitTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getSubmitTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange.Builder
-
Gets the value of the 'submitTime' field
- getSubmitTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange
-
Gets the value of the 'submitTime' field.
- getSubmitTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChangeEvent
-
Get the Job submit time
- getSubmitTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Gets the value of the 'submitTime' field
- getSubmitTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Gets the value of the 'submitTime' field.
- getSubmitTime() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the submit time
- getSuccessfulAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskInfo
-
- getSuccessfulAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Gets the value of the 'successfulAttemptId' field
- getSuccessfulAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Gets the value of the 'successfulAttemptId' field.
- getSuccessfulJobList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getSuccessfulJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getSuccessfulTaskAttempt() - Method in class org.apache.hadoop.mapred.TaskReport
-
Get the attempt ID that took this task to completion
- getSuccessfulTaskAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinishedEvent
-
Get successful task attempt id
- getSuccessfulTaskAttemptId() - Method in class org.apache.hadoop.mapreduce.TaskReport
-
Get the attempt ID that took this task to completion
- getSum() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.DoubleValueSum
-
- getSum() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueSum
-
- getSymlink(Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
Deprecated.
symlinks are always created.
- getSymlink() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Deprecated.
- getSymlink() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getSymlink() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getSymlink() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
This method checks whether symlinks are to be created for the
localized cache files in the current working directory.
- getSystemDir() - Method in class org.apache.hadoop.mapred.JobClient
-
Grab the jobtracker system directory path where job-specific files are to be placed.
- getSystemDir() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Grab the jobtracker system directory path where
job-specific files will be placed.
- getSystemDir() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Grab the jobtracker system directory path
where job-specific files are to be placed.
- getTableName() - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- getTask() - Method in class org.apache.hadoop.mapred.JvmTask
-
- getTask(JvmContext) - Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
-
Called when a child task process starts, to get its task.
- getTaskAttemptID() - Method in interface org.apache.hadoop.mapred.TaskAttemptContext
-
- getTaskAttemptID() - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
Get the taskAttemptID.
- getTaskAttemptId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Returns task id.
- getTaskAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
Get the attempt id
- getTaskAttemptId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
Get the attempt id
- getTaskAttemptID() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getTaskAttemptID() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getTaskAttemptID() - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
Get the unique name for this task attempt.
- getTaskAttemptID() - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Get the unique name for this task attempt.
- getTaskAttemptId() - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
Returns task id.
- getTaskAttemptIDsPattern(String, Integer, Boolean, Integer, Integer) - Static method in class org.apache.hadoop.mapred.TaskAttemptID
-
Deprecated.
- getTaskAttemptIDsPattern(String, Integer, TaskType, Integer, Integer) - Static method in class org.apache.hadoop.mapred.TaskAttemptID
-
Deprecated.
- getTaskAttemptPath(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- getTaskAttemptPath(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a task attempt is stored until
that task is committed.
- getTaskAttemptPath(TaskAttemptContext, Path) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Compute the path where the output of a task attempt is stored until
that task is committed.
- getTaskCleanupNeeded() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get whether task-cleanup is needed for the job
- getTaskCleanupNeeded() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getTaskCleanupNeeded() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getTaskCleanupNeeded() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get whether task-cleanup is needed for the job
- getTaskCompletionEvents(int) - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get events indicating completion (success/failure) of component tasks.
- getTaskCompletionEvents(int, int) - Method in class org.apache.hadoop.mapreduce.Job
-
Get events indicating completion (success/failure) of component tasks.
- getTaskCompletionEvents(int) - Method in class org.apache.hadoop.mapreduce.Job
-
Get events indicating completion (success/failure) of component tasks.
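A hedged polling sketch using the single-argument overload above; it assumes a Job handle (here named job) for a job that has already been submitted:

    import java.io.IOException;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.TaskCompletionEvent;

    public class CompletionEventPoller {
      // Print completion events as they arrive until the job finishes.
      public static void poll(Job job) throws IOException, InterruptedException {
        int from = 0;
        while (!job.isComplete()) {
          TaskCompletionEvent[] events = job.getTaskCompletionEvents(from);
          for (TaskCompletionEvent event : events) {
            System.out.println(event.getTaskAttemptId() + " -> " + event.getStatus());
          }
          from += events.length;
          Thread.sleep(1000);
        }
      }
    }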
- getTaskCompletionEvents(JobID, int, int) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Get task completion events for the jobid, starting from fromEventId.
- getTaskCounters() - Method in class org.apache.hadoop.mapreduce.TaskReport
-
A table of counters.
- getTaskDiagnostics(TaskAttemptID) - Method in interface org.apache.hadoop.mapred.RunningJob
-
Gets the diagnostic messages for a given task attempt.
- getTaskDiagnostics(TaskAttemptID) - Method in class org.apache.hadoop.mapreduce.Job
-
Gets the diagnostic messages for a given task attempt.
- getTaskDiagnostics(TaskAttemptID) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Get the diagnostics for a given task in a given job
- getTaskID() - Method in class org.apache.hadoop.mapred.Task
-
- getTaskID() - Method in class org.apache.hadoop.mapred.TaskAttemptID
-
- getTaskId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- getTaskId() - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
Getter/Setter methods for log4j.
- getTaskId() - Method in class org.apache.hadoop.mapred.TaskReport
-
The string of the task id.
- getTaskID() - Method in class org.apache.hadoop.mapred.TaskReport
-
The id of the task.
- getTaskID() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getTaskId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskInfo
-
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'taskid' field
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'taskid' field.
- getTaskId() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
Get the task ID
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'taskid' field
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'taskid' field.
- getTaskId() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
Get the Task ID
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Gets the value of the 'taskid' field
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Gets the value of the 'taskid' field.
- getTaskId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinishedEvent
-
Get the task ID
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Gets the value of the 'taskid' field
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Gets the value of the 'taskid' field.
- getTaskId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
Get the task id
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'taskid' field
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'taskid' field.
- getTaskId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
Get the task id
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Gets the value of the 'taskid' field
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Gets the value of the 'taskid' field.
- getTaskId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent
-
Get the task id
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Gets the value of the 'taskid' field
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Gets the value of the 'taskid' field.
- getTaskId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinishedEvent
-
Get task id
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted.Builder
-
Gets the value of the 'taskid' field
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
Gets the value of the 'taskid' field.
- getTaskId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStartedEvent
-
Get the task id
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated.Builder
-
Gets the value of the 'taskid' field
- getTaskid() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated
-
Gets the value of the 'taskid' field.
- getTaskId() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdatedEvent
-
Get the task ID
- getTaskID() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Returns the TaskID object that this task attempt belongs to.
- getTaskId() - Method in class org.apache.hadoop.mapreduce.TaskReport
-
The string of the task ID.
- getTaskID() - Method in class org.apache.hadoop.mapreduce.TaskReport
-
The ID of the task.
- getTaskIDsPattern(String, Integer, Boolean, Integer) - Static method in class org.apache.hadoop.mapred.TaskID
-
- getTaskIDsPattern(String, Integer, TaskType, Integer) - Static method in class org.apache.hadoop.mapred.TaskID
-
Deprecated.
- getTaskLogFile(TaskAttemptID, boolean, TaskLog.LogName) - Static method in class org.apache.hadoop.mapred.TaskLog
-
- getTaskLogLength(JobConf) - Static method in class org.apache.hadoop.mapred.TaskLog
-
Get the desired maximum length of task's logs.
- getTaskLogsUrl(String, JobHistoryParser.TaskAttemptInfo) - Static method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer
-
Return the TaskLogsUrl of a particular TaskAttempt
- getTaskLogURL(TaskAttemptID, String) - Static method in class org.apache.hadoop.mapreduce.tools.CLI
-
- getTaskLogUrl(String, String, String, String) - Static method in class org.apache.hadoop.mapreduce.util.HostUtil
-
Construct the taskLogUrl
- getTaskLogUrl(String, String, String) - Static method in class org.apache.hadoop.mapreduce.util.HostUtil
-
- getTaskOutputFilter(JobConf) - Static method in class org.apache.hadoop.mapred.JobClient
-
Get the task output filter out of the JobConf.
- getTaskOutputFilter() - Method in class org.apache.hadoop.mapred.JobClient
-
Deprecated.
- getTaskOutputFilter(Configuration) - Static method in class org.apache.hadoop.mapreduce.Job
-
Get the task output filter.
- getTaskOutputPath(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Helper function to create the task's temporary output directory and
return the path to the task's output file.
- getTaskReports(TaskType) - Method in class org.apache.hadoop.mapreduce.Job
-
Get the information of the current state of the tasks of a job.
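A minimal sketch of dumping per-task state with this call; it assumes a Job handle (named job) for an already-submitted job:

    import java.io.IOException;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.TaskReport;
    import org.apache.hadoop.mapreduce.TaskType;

    public class TaskReportDump {
      // Print the ID, progress and most recent state of every map task of the job.
      public static void dumpMapTasks(Job job) throws IOException, InterruptedException {
        for (TaskReport report : job.getTaskReports(TaskType.MAP)) {
          System.out.println(report.getTaskID() + " " + report.getProgress() + " " + report.getState());
        }
      }
    }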
- getTaskReports(JobID, TaskType) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Grab a bunch of info on the tasks that make up the job
- getTaskRunTime() - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
Returns the time (in milliseconds) the task took to complete.
- getTaskStatus() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Returns the task completion status, e.g. Status.SUCCEEDED or Status.FAILED.
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskInfo
-
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'taskStatus' field
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'taskStatus' field.
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
Get the task status
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'taskStatus' field
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'taskStatus' field.
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
Get the task status
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Gets the value of the 'taskStatus' field
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Gets the value of the 'taskStatus' field.
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinishedEvent
-
Get the task status
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
Get the task status
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent
-
Get the task status
- getTaskStatus() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinishedEvent
-
Get task status
- getTaskTracker() - Method in class org.apache.hadoop.mapred.TaskStatus
-
- getTaskTrackerCount() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the number of active trackers in the cluster.
- getTaskTrackerExpiryInterval() - Method in class org.apache.hadoop.mapreduce.Cluster
-
Get the tasktracker expiry interval for the cluster
- getTaskTrackerExpiryInterval() - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
- getTaskTrackerHttp() - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
HTTP location of the tasktracker where this task ran.
- getTaskTrackerName() - Method in class org.apache.hadoop.mapreduce.TaskTrackerInfo
-
Gets the tasktracker's name.
- getTaskTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the number of task trackers in the cluster.
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskInfo
-
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'taskType' field
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'taskType' field.
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
Get the task type
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.NormalizedResourceEvent
-
The task type for the event.
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'taskType' field
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'taskType' field.
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
Get the task type
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Gets the value of the 'taskType' field
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Gets the value of the 'taskType' field.
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinishedEvent
-
Get the task type
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Gets the value of the 'taskType' field
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Gets the value of the 'taskType' field.
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
Get the task type
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'taskType' field
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'taskType' field.
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
Get the task type
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Gets the value of the 'taskType' field
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Gets the value of the 'taskType' field.
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent
-
Get the task type
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Gets the value of the 'taskType' field
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Gets the value of the 'taskType' field.
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinishedEvent
-
Get task type
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted.Builder
-
Gets the value of the 'taskType' field
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
Gets the value of the 'taskType' field.
- getTaskType() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStartedEvent
-
Get the task type
- getTaskType() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
-
Returns the TaskType of the TaskAttemptID
- getTaskType() - Method in class org.apache.hadoop.mapreduce.TaskID
-
Get the type of the task
- getTaskType(char) - Static method in class org.apache.hadoop.mapreduce.TaskID
-
Gets the TaskType corresponding to the character.
- getThreadState() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getTimestamp(Configuration, URI) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- getTotalCleanups() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get the number of cleanup tasks.
- getTotalCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Gets the value of the 'totalCounters' field
- getTotalCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Gets the value of the 'totalCounters' field.
- getTotalCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent
-
Get the counters for the job
- getTotalCounters() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getTotalJobSubmissions() - Method in class org.apache.hadoop.mapreduce.ClusterMetrics
-
Get the total number of job submissions in the cluster.
- getTotalLogFileSize() - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
- getTotalMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get total maps
- getTotalMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getTotalMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Gets the value of the 'totalMaps' field
- getTotalMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
Gets the value of the 'totalMaps' field.
- getTotalMaps() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInitedEvent
-
Get the total number of maps
- getTotalReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get total reduces
- getTotalReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getTotalReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Gets the value of the 'totalReduces' field
- getTotalReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
Gets the value of the 'totalReduces' field.
- getTotalReduces() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInitedEvent
-
Get the total number of reduces
- getTotalSetups() - Method in class org.apache.hadoop.mapreduce.jobhistory.HistoryViewer.SummarizedJob
-
Get the number of setup tasks.
- getTrackerName() - Method in class org.apache.hadoop.mapred.ClusterStatus.BlackListInfo
-
Gets the blacklisted tasktracker's name.
- getTrackerName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.TaskAttemptInfo
-
- getTrackerName() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Gets the value of the 'trackerName' field
- getTrackerName() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Gets the value of the 'trackerName' field.
- getTrackerName() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
Get the tracker name
- getTrackingURL() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the URL where some job progress information will be displayed.
- getTrackingURL() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the URL where some job progress information will be displayed.
- getTrackingUrl() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Get the link to the web-ui for details of the job.
- getTTExpiryInterval() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Get the tasktracker expiry interval for the cluster
- getType() - Method in class org.apache.hadoop.mapred.join.Parser.Token
-
- getType() - Method in class org.apache.hadoop.mapreduce.jobhistory.Event.Builder
-
Gets the value of the 'type' field
- getType() - Method in class org.apache.hadoop.mapreduce.jobhistory.Event
-
Gets the value of the 'type' field.
- getType() - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.Token
-
- getUberized() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getUberized() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Gets the value of the 'uberized' field
- getUberized() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
Gets the value of the 'uberized' field.
- getUberized() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInitedEvent
-
Get whether the job's map and reduce stages were combined
- getUmbilical() - Method in class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- getUnderlyingCounter() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- getUnderlyingCounter() - Method in interface org.apache.hadoop.mapreduce.Counter
-
- getUnderlyingCounter() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- getUnderlyingCounter() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- getUnderlyingCounter() - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- getUnderlyingGroup() - Method in class org.apache.hadoop.mapred.Counters.Group
-
- getUnderlyingGroup() - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
- getUniqueFile(TaskAttemptContext, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Generate a unique filename, based on the task id, name, and extension
- getUniqueItems() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UniqValueCount
-
- getUniqueName(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Helper function to generate a name that is unique for the task.
- getUpperClause() - Method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat.DataDrivenDBInputSplit
-
- getURL() - Method in class org.apache.hadoop.mapred.JobProfile
-
Get the link to the web-ui for details of the job.
- getUsedMem() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getUsedMemory() - Method in class org.apache.hadoop.mapred.ClusterStatus
-
Deprecated.
- getUseNewMapper() - Method in class org.apache.hadoop.mapred.JobConf
-
Should the framework use the new context-object code for running
the mapper?
- getUseNewReducer() - Method in class org.apache.hadoop.mapred.JobConf
-
Should the framework use the new context-object code for running
the reducer?
- getUser() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the reported username for this job.
- getUser() - Method in class org.apache.hadoop.mapred.JobProfile
-
Get the user id.
- getUser() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the reported username for this job.
- getUser() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getUser() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getUser() - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier
- getUser() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the reported username for this job.
- getUsername() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser.JobInfo
-
- getUserName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Gets the value of the 'userName' field
- getUserName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Gets the value of the 'userName' field.
- getUserName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the user name
- getUsername() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- getVal() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMax
-
- getVal() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.LongValueMin
-
- getVal() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMax
-
- getVal() - Method in class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMin
-
- getValue() - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- getValue() - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer.MRResultIterator
-
- getValue() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
-
Gets the current raw value.
- getValue() - Method in interface org.apache.hadoop.mapreduce.Counter
-
What is the current value of this counter?
- getValue() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- getValue() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- getValue() - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- getValue() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter.Builder
-
Gets the value of the 'value' field
- getValue() - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter
-
Gets the value of the 'value' field.
- getValue() - Method in enum org.apache.hadoop.mapreduce.JobStatus.State
-
- getValue() - Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- getValue(String, String, String, T) - Static method in class org.apache.hadoop.mapreduce.util.ResourceBundles
-
Get a resource given a bundle name and key.
- getValueAggregatorDescriptor(String, Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJobBase
-
- getValueClass() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- getValueClassName() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
Retrieve the name of the value class for this SequenceFile.
- getValueClassName() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
Retrieve the name of the value class for this SequenceFile.
- getValues() - Method in class org.apache.hadoop.mapred.PeriodicStatsAccumulator
-
- getValues() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getValues() - Method in interface org.apache.hadoop.mapreduce.ReduceContext
-
Iterate through the values for the current key, reusing the same value
object, which is stored in the context.
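Because the framework reuses the same value object across the iteration, values that must outlive the loop have to be copied. A minimal reducer sketch (the Text/IntWritable types are illustrative):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableUtils;
import org.apache.hadoop.mapreduce.Reducer;

// Sketch: iterate the values for a key. The framework hands back the same
// IntWritable instance on every step, so anything retained past the loop
// is cloned first.
public class CopyingReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    List<IntWritable> retained = new ArrayList<>(); // kept only to show the cloning step
    int sum = 0;
    for (IntWritable value : values) {
      sum += value.get();
      // Clone before retaining; 'value' points at the same reused object.
      retained.add(WritableUtils.clone(value, context.getConfiguration()));
    }
    context.write(key, new IntWritable(sum));
  }
}
```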
- getValues() - Method in class org.apache.hadoop.mapreduce.task.ReduceContextImpl
-
Iterate through the values for the current key, reusing the same value
object, which is stored in the context.
- getVMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Gets the value of the 'vMemKbytes' field
- getVMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Gets the value of the 'vMemKbytes' field.
- getVMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
- getVMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Gets the value of the 'vMemKbytes' field
- getVMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Gets the value of the 'vMemKbytes' field.
- getVMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
- getVMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Gets the value of the 'vMemKbytes' field
- getVMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Gets the value of the 'vMemKbytes' field.
- getVMemKbytes() - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
- getWaitingJobList() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
- getWaitingJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
-
- getWorkflowAdjacencies() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Gets the value of the 'workflowAdjacencies' field
- getWorkflowAdjacencies() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Gets the value of the 'workflowAdjacencies' field.
- getWorkflowAdjacencies() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the adjacencies of the workflow
- getWorkflowId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Gets the value of the 'workflowId' field
- getWorkflowId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Gets the value of the 'workflowId' field.
- getWorkflowId() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the id of the workflow
- getWorkflowName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Gets the value of the 'workflowName' field
- getWorkflowName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Gets the value of the 'workflowName' field.
- getWorkflowName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the name of the workflow
- getWorkflowNodeName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Gets the value of the 'workflowNodeName' field
- getWorkflowNodeName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Gets the value of the 'workflowNodeName' field.
- getWorkflowNodeName() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the node name of the workflow
- getWorkflowTags() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Gets the value of the 'workflowTags' field
- getWorkflowTags() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Gets the value of the 'workflowTags' field.
- getWorkflowTags() - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
Get the workflow tags
- getWorkingDirectory() - Method in class org.apache.hadoop.mapred.JobConf
-
Get the current working directory for the default file system.
- getWorkingDirectory() - Method in interface org.apache.hadoop.mapreduce.JobContext
-
Get the current working directory for the default file system.
- getWorkingDirectory() - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- getWorkingDirectory() - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- getWorkingDirectory() - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Get the current working directory for the default file system.
- getWorkOutputPath(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Get the Path to the task's temporary output directory for the map-reduce job (see "Tasks' Side-Effect Files").
- getWorkOutputPath(TaskInputOutputContext<?, ?, ?, ?>) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Get the Path to the task's temporary output directory for the map-reduce job (see "Tasks' Side-Effect Files").
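A hedged sketch of writing a side-effect file into that directory from a task, so the output committer only promotes it if the attempt succeeds (the helper class and file name are illustrative, not part of the API):

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.TaskInputOutputContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Sketch: create a side-effect file inside the task's temporary output directory.
public final class SideEffectFiles {
  private SideEffectFiles() {}

  public static void writeMarker(TaskInputOutputContext<?, ?, ?, ?> context)
      throws IOException, InterruptedException {
    Path workDir = FileOutputFormat.getWorkOutputPath(context);
    Path marker = new Path(workDir, "side-effect.txt"); // illustrative file name
    FileSystem fs = marker.getFileSystem(context.getConfiguration());
    try (FSDataOutputStream out = fs.create(marker, false)) {
      out.writeUTF("produced by " + context.getTaskAttemptID());
    }
  }
}
```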
- getWorkPath(TaskAttemptContext, Path) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- getWorkPath() - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Get the directory that the task should write results into.
- getWriteAllCounters() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Get the "writeAllCounters" option
- GROUP - Static variable in class org.apache.hadoop.mapreduce.lib.map.RegexMapper
-
- GROUP_COMPARATOR_CLASS - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- groups - Variable in class org.apache.hadoop.mapreduce.jobhistory.JhCounters
-
Deprecated.
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.Event
-
- SCHEMA$ - Static variable in enum org.apache.hadoop.mapreduce.jobhistory.EventType
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.JhCounter
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.JhCounters
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
- SCHEMA$ - Static variable in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated
-
- SecureShuffleUtils - Class in org.apache.hadoop.mapreduce.security
-
Utilities for generating keys and hashes, and verifying them, for the shuffle.
- SecureShuffleUtils() - Constructor for class org.apache.hadoop.mapreduce.security.SecureShuffleUtils
-
- seek(long) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- selectToken(Text, Collection<Token<? extends TokenIdentifier>>) - Method in class org.apache.hadoop.mapreduce.security.token.JobTokenSelector
-
- SEPARATOR - Static variable in class org.apache.hadoop.mapreduce.ID
-
- SEPERATOR - Static variable in class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
-
- SequenceFileAsBinaryInputFormat - Class in org.apache.hadoop.mapred
-
InputFormat reading keys, values from SequenceFiles in binary (raw)
format.
- SequenceFileAsBinaryInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat
-
- SequenceFileAsBinaryInputFormat - Class in org.apache.hadoop.mapreduce.lib.input
-
InputFormat reading keys, values from SequenceFiles in binary (raw)
format.
- SequenceFileAsBinaryInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat
-
- SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader - Class in org.apache.hadoop.mapred
-
Read records from a SequenceFile as binary (raw) bytes.
- SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader - Class in org.apache.hadoop.mapreduce.lib.input
-
Read records from a SequenceFile as binary (raw) bytes.
- SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- SequenceFileAsBinaryOutputFormat - Class in org.apache.hadoop.mapred
-
An OutputFormat that writes keys/values to SequenceFiles in binary (raw) format.
- SequenceFileAsBinaryOutputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- SequenceFileAsBinaryOutputFormat - Class in org.apache.hadoop.mapreduce.lib.output
-
An OutputFormat that writes keys/values to SequenceFiles in binary (raw) format.
- SequenceFileAsBinaryOutputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
- SequenceFileAsBinaryOutputFormat.WritableValueBytes - Class in org.apache.hadoop.mapred
-
Inner class used for appendRaw
- SequenceFileAsBinaryOutputFormat.WritableValueBytes() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- SequenceFileAsBinaryOutputFormat.WritableValueBytes(BytesWritable) - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- SequenceFileAsBinaryOutputFormat.WritableValueBytes - Class in org.apache.hadoop.mapreduce.lib.output
-
Inner class used for appendRaw
- SequenceFileAsBinaryOutputFormat.WritableValueBytes() - Constructor for class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- SequenceFileAsBinaryOutputFormat.WritableValueBytes(BytesWritable) - Constructor for class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- SequenceFileAsTextInputFormat - Class in org.apache.hadoop.mapred
-
This class is similar to SequenceFileInputFormat, except it generates
SequenceFileAsTextRecordReader, which converts the input keys and values
to their String forms by calling the toString() method.
- SequenceFileAsTextInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
-
- SequenceFileAsTextInputFormat - Class in org.apache.hadoop.mapreduce.lib.input
-
This class is similar to SequenceFileInputFormat, except it generates
SequenceFileAsTextRecordReader, which converts the input keys and values
to their String forms by calling the toString() method.
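A minimal driver sketch using this input format, assuming Text keys and values and placeholder paths:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

// Sketch: every key and value in the input SequenceFile reaches the (default,
// identity) mapper as Text, converted via toString(). Paths are placeholders.
public class SeqAsTextDriver {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "seq-as-text");
    job.setJarByClass(SeqAsTextDriver.class);
    job.setInputFormatClass(SequenceFileAsTextInputFormat.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```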
- SequenceFileAsTextInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextInputFormat
-
- SequenceFileAsTextRecordReader - Class in org.apache.hadoop.mapred
-
This class converts the input keys and values to their String forms by calling
the toString() method.
- SequenceFileAsTextRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- SequenceFileAsTextRecordReader - Class in org.apache.hadoop.mapreduce.lib.input
-
This class converts the input keys and values to their String forms by
calling the toString() method.
- SequenceFileAsTextRecordReader() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileAsTextRecordReader
-
- SequenceFileInputFilter<K,V> - Class in org.apache.hadoop.mapred
-
A class that allows a map/reduce job to work on a sample of sequence files.
- SequenceFileInputFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter
-
- SequenceFileInputFilter<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
A class that allows a map/reduce job to work on a sample of sequence files.
- SequenceFileInputFilter() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter
-
- SequenceFileInputFilter.Filter - Interface in org.apache.hadoop.mapred
-
filter interface
- SequenceFileInputFilter.Filter - Interface in org.apache.hadoop.mapreduce.lib.input
-
filter interface
- SequenceFileInputFilter.FilterBase - Class in org.apache.hadoop.mapred
-
base class for Filters
- SequenceFileInputFilter.FilterBase() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.FilterBase
-
- SequenceFileInputFilter.FilterBase - Class in org.apache.hadoop.mapreduce.lib.input
-
base class for Filters
- SequenceFileInputFilter.FilterBase() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.FilterBase
-
- SequenceFileInputFilter.MD5Filter - Class in org.apache.hadoop.mapred
-
This class returns a set of records by examining the MD5 digest of its
key against a filtering frequency f.
- SequenceFileInputFilter.MD5Filter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
-
- SequenceFileInputFilter.MD5Filter - Class in org.apache.hadoop.mapreduce.lib.input
-
This class returns a set of records by examining the MD5 digest of its
key against a filtering frequency f.
- SequenceFileInputFilter.MD5Filter() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.MD5Filter
-
- SequenceFileInputFilter.PercentFilter - Class in org.apache.hadoop.mapred
-
This class returns a percentage of records.
The percentage is determined by a filtering frequency f, using
the criterion record# % f == 0.
- SequenceFileInputFilter.PercentFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
-
- SequenceFileInputFilter.PercentFilter - Class in org.apache.hadoop.mapreduce.lib.input
-
This class returns a percentage of records.
The percentage is determined by a filtering frequency f, using
the criterion record# % f == 0.
- SequenceFileInputFilter.PercentFilter() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.PercentFilter
-
- SequenceFileInputFilter.RegexFilter - Class in org.apache.hadoop.mapred
-
Filters records by matching the key against a regex.
- SequenceFileInputFilter.RegexFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
-
- SequenceFileInputFilter.RegexFilter - Class in org.apache.hadoop.mapreduce.lib.input
-
Filters records by matching the key against a regex.
- SequenceFileInputFilter.RegexFilter() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.RegexFilter
-
- SequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapred
-
- SequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFormat
-
- SequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
- SequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
-
- SequenceFileOutputFormat<K,V> - Class in org.apache.hadoop.mapred
-
- SequenceFileOutputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
- SequenceFileOutputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.output
-
- SequenceFileOutputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
- SequenceFileRecordReader<K,V> - Class in org.apache.hadoop.mapred
-
- SequenceFileRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- SequenceFileRecordReader<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
-
- SequenceFileRecordReader() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
-
- SESSION_TIMEZONE_KEY - Static variable in class org.apache.hadoop.mapreduce.lib.db.OracleDBRecordReader
-
Configuration key to set to a timezone string.
- setAcls(Map<CharSequence, CharSequence>) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Sets the value of the 'acls' field
- setAcls(Map<CharSequence, CharSequence>) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Sets the value of the 'acls' field.
- setAggregatorDescriptors(JobConf, Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- setAggregatorDescriptors(Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJob
-
- setApplicationAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Sets the value of the 'applicationAttemptId' field
- setApplicationAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Sets the value of the 'applicationAttemptId' field.
- setApplicationId(String) - Method in class org.apache.hadoop.mapreduce.v2.LogParams
-
- setArchiveTimestamps(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- setAssignedJobID(JobID) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Deprecated.
setAssignedJobID should not be called.
JOBID is set by the framework.
- setAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'attemptId' field
- setAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'attemptId' field.
- setAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'attemptId' field
- setAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'attemptId' field.
- setAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Sets the value of the 'attemptId' field
- setAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Sets the value of the 'attemptId' field.
- setAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Sets the value of the 'attemptId' field
- setAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Sets the value of the 'attemptId' field.
- setAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'attemptId' field
- setAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'attemptId' field.
- setAttemptsToStartSkipping(Configuration, int) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Set the number of Task attempts AFTER which skip mode
will be kicked off.
- setAutoIncrMapperProcCount(Configuration, boolean) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
- setAutoIncrReducerProcCount(Configuration, boolean) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
- setAvataar(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Sets the value of the 'avataar' field
- setAvataar(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Sets the value of the 'avataar' field.
- setBoundingQuery(Configuration, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
Set the user-defined bounding query to use with a user-defined query.
- setCacheArchives(URI[], Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
- setCacheArchives(URI[]) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the given set of archives
- setCacheFiles(URI[], Configuration) - Static method in class org.apache.hadoop.mapreduce.filecache.DistributedCache
-
- setCacheFiles(URI[]) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the given set of files
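A hedged sketch of registering cache files and archives through the Job API (all HDFS paths are placeholders):

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Sketch: publish files and archives to the distributed cache via the Job API.
public class CacheSetup {
  public static Job configure(Configuration conf) throws Exception {
    Job job = Job.getInstance(conf, "cache-example");
    job.setCacheFiles(new URI[] { new URI("hdfs:///share/lookup.txt") });
    job.setCacheArchives(new URI[] { new URI("hdfs:///share/dictionaries.zip") });
    // Individual entries can also be appended one at a time.
    job.addCacheFile(new URI("hdfs:///share/stopwords.txt"));
    return job;
  }
}
```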
- setCancelDelegationTokenUponJobCompletion(boolean) - Method in class org.apache.hadoop.mapreduce.Job
-
Sets the flag that will allow the JobTracker to cancel the HDFS delegation
tokens upon job completion.
- setChildren(List<JobQueueInfo>) - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
- setCleanupProgress(float) - Method in class org.apache.hadoop.mapred.JobStatus
-
Sets the cleanup progress of this job
- setCleanupProgress(float) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Sets the cleanup progress of this job
- setClockSplits(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'clockSplits' field
- setClockSplits(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'clockSplits' field.
- setClockSplits(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'clockSplits' field
- setClockSplits(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'clockSplits' field.
- setClockSplits(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'clockSplits' field
- setClockSplits(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'clockSplits' field.
- setCombinerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the user-defined combiner class used to combine map-outputs
before being sent to the reducers.
- setCombinerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the combiner class for the job.
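For illustration, a sketch that reuses IntSumReducer as both combiner and reducer, which is safe because integer summing is commutative and associative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

// Sketch: the combiner pre-aggregates map outputs before the shuffle.
public class CombinerSetup {
  public static Job configure(Configuration conf) throws Exception {
    Job job = Job.getInstance(conf, "wordcount-with-combiner");
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(IntWritable.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    return job;
  }
}
```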
- setCombinerKeyGroupingComparator(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the user-defined RawComparator for grouping keys in the input to the combiner.
- setCombinerKeyGroupingComparatorClass(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setCompressMapOutput(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Should the map outputs be compressed before transfer?
- setCompressOutput(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Set whether the output of the job is compressed.
- setCompressOutput(Job, boolean) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Set whether the output of the job is compressed.
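A small sketch enabling compressed job output with a gzip codec (the codec choice is an assumption; any installed CompressionCodec works):

```java
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Sketch: turn on job output compression and pick a codec.
public final class OutputCompression {
  private OutputCompression() {}

  public static void enable(Job job) {
    FileOutputFormat.setCompressOutput(job, true);
    FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
  }
}
```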
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.MapOutputFile
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.MROutputFiles
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
-
configure the filter according to configuration
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
-
configure the filter by checking the configuration
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
-
configure the Filter by checking the configuration
- setConf(Configuration) - Method in class org.apache.hadoop.mapred.Task
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.MD5Filter
-
configure the filter according to configuration
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.PercentFilter
-
configure the filter by checking the configuration
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.RegexFilter
-
configure the Filter by checking the configuration
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- setConf(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
Read in the partition file and build indexing data structures.
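The partition file read here is normally produced before job submission with InputSampler; a hedged driver fragment, assuming Text map-output keys and a placeholder partition path:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

// Sketch: sample the input to build the partition file that
// TotalOrderPartitioner.setConf() later reads in each task.
public final class TotalOrderSetup {
  private TotalOrderSetup() {}

  public static void configure(Job job) throws Exception {
    job.setPartitionerClass(TotalOrderPartitioner.class);
    Path partitionFile = new Path("/tmp/partitions.lst"); // illustrative path
    TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), partitionFile);
    // Sample up to 10,000 keys from at most 10 splits with probability 0.1.
    InputSampler.Sampler<Text, NullWritable> sampler =
        new InputSampler.RandomSampler<>(0.1, 10000, 10);
    InputSampler.writePartitionFile(job, sampler);
  }
}
```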
- setContainerId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Sets the value of the 'containerId' field
- setContainerId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Sets the value of the 'containerId' field.
- setContainerId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Sets the value of the 'containerId' field
- setContainerId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Sets the value of the 'containerId' field.
- setContainerId(String) - Method in class org.apache.hadoop.mapreduce.v2.LogParams
-
- setCounters(Counters) - Method in class org.apache.hadoop.mapred.TaskStatus
-
Set the task's counters.
- setCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'counters' field
- setCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'counters' field.
- setCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'counters' field
- setCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'counters' field.
- setCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Sets the value of the 'counters' field
- setCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Sets the value of the 'counters' field.
- setCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'counters' field
- setCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'counters' field.
- setCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Sets the value of the 'counters' field
- setCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Sets the value of the 'counters' field.
- setCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Sets the value of the 'counters' field
- setCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Sets the value of the 'counters' field.
- setCountersEnabled(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
-
Enables or disables counters for the named outputs.
- setCountersEnabled(Job, boolean) - Static method in class org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
-
Enables or disables counters for the named outputs.
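A brief sketch that declares a named output and enables its counters (the output name "errors" and the key/value types are illustrative):

```java
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

// Sketch: register a named output and count the records written to it.
public final class NamedOutputs {
  private NamedOutputs() {}

  public static void configure(Job job) {
    MultipleOutputs.addNamedOutput(job, "errors",
        TextOutputFormat.class, Text.class, LongWritable.class);
    MultipleOutputs.setCountersEnabled(job, true);
  }
}
```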
- setCounts(List<JhCounter>) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup.Builder
-
Sets the value of the 'counts' field
- setCounts(List<JhCounter>) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup
-
Sets the value of the 'counts' field.
- setCpuUsages(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'cpuUsages' field
- setCpuUsages(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'cpuUsages' field.
- setCpuUsages(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'cpuUsages' field
- setCpuUsages(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'cpuUsages' field.
- setCpuUsages(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'cpuUsages' field
- setCpuUsages(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'cpuUsages' field.
- setCredentials(Credentials) - Method in class org.apache.hadoop.mapred.JobConf
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStartedEvent
-
- setDatum(Object) - Method in interface org.apache.hadoop.mapreduce.jobhistory.HistoryEvent
-
Set the Avro datum wrapped by this.
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChangeEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInitedEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChangeEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChangeEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChangedEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmittedEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletionEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinishedEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.NormalizedResourceEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinishedEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinishedEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStartedEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletionEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailedEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinishedEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStartedEvent
-
- setDatum(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdatedEvent
-
- setDiagnosticInfo(String) - Method in class org.apache.hadoop.mapred.TaskStatus
-
- setDiagnostics(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Sets the value of the 'diagnostics' field
- setDiagnostics(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
Sets the value of the 'diagnostics' field.
- setDisplayName(String) - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- setDisplayName(String) - Method in class org.apache.hadoop.mapred.Counters.Group
-
- setDisplayName(String) - Method in interface org.apache.hadoop.mapreduce.Counter
-
Deprecated.
(and no-op by default)
- setDisplayName(String) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounter
-
Deprecated.
- setDisplayName(String) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounterGroup
-
- setDisplayName(String) - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
Set the display name of the group
- setDisplayName(String) - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup
-
- setDisplayName(String) - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup
-
- setDisplayName(String) - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
Deprecated.
- setDisplayName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter.Builder
-
Sets the value of the 'displayName' field
- setDisplayName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter
-
Sets the value of the 'displayName' field.
- setDisplayName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup.Builder
-
Sets the value of the 'displayName' field
- setDisplayName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup
-
Sets the value of the 'displayName' field.
- setError(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'error' field
- setError(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'error' field.
- setError(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Sets the value of the 'error' field
- setError(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Sets the value of the 'error' field.
- setEvent(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.Event.Builder
-
Sets the value of the 'event' field
- setEvent(Object) - Method in class org.apache.hadoop.mapreduce.jobhistory.Event
-
Sets the value of the 'event' field.
- setEventId(int) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
set event Id.
- setEventId(int) - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
set event Id.
- setExecutable(JobConf, String) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Set the URI for the application's executable.
- setFailedDueToAttempt(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Sets the value of the 'failedDueToAttempt' field
- setFailedDueToAttempt(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Sets the value of the 'failedDueToAttempt' field.
- setFailedMaps(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Sets the value of the 'failedMaps' field
- setFailedMaps(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Sets the value of the 'failedMaps' field.
- setFailedReduces(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Sets the value of the 'failedReduces' field
- setFailedReduces(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Sets the value of the 'failedReduces' field.
- setFailureInfo(String) - Method in class org.apache.hadoop.mapred.JobStatus
-
- setFailureInfo(String) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Set diagnostic information.
- setFileTimestamps(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- setFilterClass(Configuration, Class) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter
-
set the filter class
- setFilterClass(Job, Class<?>) - Static method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter
-
set the filter class
- setFinishedMaps(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Sets the value of the 'finishedMaps' field
- setFinishedMaps(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Sets the value of the 'finishedMaps' field.
- setFinishedMaps(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Sets the value of the 'finishedMaps' field
- setFinishedMaps(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
Sets the value of the 'finishedMaps' field.
- setFinishedReduces(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Sets the value of the 'finishedReduces' field
- setFinishedReduces(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Sets the value of the 'finishedReduces' field.
- setFinishedReduces(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Sets the value of the 'finishedReduces' field
- setFinishedReduces(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
Sets the value of the 'finishedReduces' field.
- setFinishTime(long) - Method in class org.apache.hadoop.mapred.JobStatus
-
Set the finish time of the job
- setFinishTime(long) - Method in class org.apache.hadoop.mapred.TaskReport
-
set finish time of task.
- setFinishTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Sets the value of the 'finishTime' field
- setFinishTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Sets the value of the 'finishTime' field.
- setFinishTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Sets the value of the 'finishTime' field
- setFinishTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
Sets the value of the 'finishTime' field.
- setFinishTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'finishTime' field
- setFinishTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'finishTime' field.
- setFinishTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'finishTime' field
- setFinishTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'finishTime' field.
- setFinishTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Sets the value of the 'finishTime' field
- setFinishTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Sets the value of the 'finishTime' field.
- setFinishTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'finishTime' field
- setFinishTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'finishTime' field.
- setFinishTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Sets the value of the 'finishTime' field
- setFinishTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Sets the value of the 'finishTime' field.
- setFinishTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Sets the value of the 'finishTime' field
- setFinishTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Sets the value of the 'finishTime' field.
- setFinishTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated.Builder
-
Sets the value of the 'finishTime' field
- setFinishTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated
-
Sets the value of the 'finishTime' field.
- setFinishTime(long) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Set the finish time of the job
- setFinishTime(long) - Method in class org.apache.hadoop.mapreduce.TaskReport
-
set finish time of task.
- setFormat(JobConf) - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
-
Interpret a given string as a composite expression.
- setFormat(Configuration) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeInputFormat
-
Interpret a given string as a composite expression.
- setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
-
set the filtering frequency in configuration
- setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
-
Set the frequency and store it in the configuration.
- setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.MD5Filter
-
set the filtering frequency in configuration
- setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.PercentFilter
-
Set the frequency and store it in the configuration.
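Combined with setFilterClass above, a hedged sketch that keeps roughly one record in a hundred from a SequenceFile input:

```java
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter;

// Sketch: sample a SequenceFile input using the percent filter.
public final class SampledInput {
  private SampledInput() {}

  public static void configure(Job job) {
    job.setInputFormatClass(SequenceFileInputFilter.class);
    SequenceFileInputFilter.setFilterClass(job,
        SequenceFileInputFilter.PercentFilter.class);
    // record# % 100 == 0, i.e. roughly one percent of the records pass.
    SequenceFileInputFilter.PercentFilter.setFrequency(job.getConfiguration(), 100);
  }
}
```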
- setGroupingComparatorClass(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setGroups(List<JhCounterGroup>) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters.Builder
-
Sets the value of the 'groups' field
- setGroups(List<JhCounterGroup>) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters
-
Sets the value of the 'groups' field.
- setHistoryFile(String) - Method in class org.apache.hadoop.mapred.JobStatus
-
Set the job history file url for a completed job
- setHistoryFile(String) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Set the job history file url for a completed job
- setHostname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'hostname' field
- setHostname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'hostname' field.
- setHostname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'hostname' field
- setHostname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'hostname' field.
- setHostname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Sets the value of the 'hostname' field
- setHostname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Sets the value of the 'hostname' field.
- setHostname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'hostname' field
- setHostname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'hostname' field.
- setHttpPort(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Sets the value of the 'httpPort' field
- setHttpPort(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Sets the value of the 'httpPort' field.
- setID(int) - Method in class org.apache.hadoop.mapred.join.Parser.Node
-
- setID(int) - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.Node
-
- setIncludeAllCounters(boolean) - Method in class org.apache.hadoop.mapred.TaskStatus
-
- setInput(JobConf, Class<? extends DBWritable>, String, String, String, String...) - Static method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
Initializes the map-part of the job with the appropriate input settings.
- setInput(JobConf, Class<? extends DBWritable>, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
-
Initializes the map-part of the job with the appropriate input settings.
- setInput(Job, Class<? extends DBWritable>, String, String, String, String...) - Static method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
Note that the "orderBy" column is called the "splitBy" in this version.
- setInput(Job, Class<? extends DBWritable>, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
setInput() takes a custom query and a separate "bounding query" to use
instead of the custom "count query" used by DBInputFormat.
- setInput(Job, Class<? extends DBWritable>, String, String, String, String...) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Initializes the map-part of the job with the appropriate input settings.
- setInput(Job, Class<? extends DBWritable>, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBInputFormat
-
Initializes the map-part of the job with the appropriate input settings.
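For illustration only, a minimal sketch (not taken from the Hadoop docs) of how these setInput overloads are typically wired up. The driver class, JDBC driver, URL, credentials, table and column names, and the EmployeeRecord DBWritable class are all placeholder assumptions.

    // Sketch: configuring DBInputFormat for a map-reduce job (new API).
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
    import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;

    public class DbInputDriver {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder JDBC settings.
        DBConfiguration.configureDB(conf, "com.mysql.jdbc.Driver",
            "jdbc:mysql://dbhost/hr", "dbuser", "dbpass");
        Job job = Job.getInstance(conf, "db-input-example");
        job.setInputFormatClass(DBInputFormat.class);
        // Read the 'employees' table ordered by 'id', projecting two columns.
        // EmployeeRecord is a hypothetical user class implementing DBWritable.
        DBInputFormat.setInput(job, EmployeeRecord.class,
            "employees", null /* conditions */, "id" /* orderBy */,
            "id", "name");
      }
    }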
- setInputBoundingQuery(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputClass(Class<? extends DBWritable>) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputConditions(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputCountQuery(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputDataLength(long) - Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- setInputDataLocations(String[]) - Method in class org.apache.hadoop.mapreduce.split.JobSplit.SplitMetaInfo
-
- setInputDirRecursive(Job, boolean) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- setInputFieldNames(String...) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputFormat(Class<? extends InputFormat>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the InputFormat implementation for the map-reduce job.
- setInputFormatClass(Class<? extends InputFormat>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setInputOrderBy(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputPathFilter(JobConf, Class<? extends PathFilter>) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Set a PathFilter to be applied to the input paths for the map-reduce job.
- setInputPathFilter(Job, Class<? extends PathFilter>) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Set a PathFilter to be applied to the input paths for the map-reduce job.
- setInputPaths(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Sets the given comma separated paths as the list of inputs
for the map-reduce job.
- setInputPaths(JobConf, Path...) - Static method in class org.apache.hadoop.mapred.FileInputFormat
-
Set the array of Paths as the list of inputs for the map-reduce job.
- setInputPaths(Job, String) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Sets the given comma separated paths as the list of inputs
for the map-reduce job.
- setInputPaths(Job, Path...) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Set the array of Paths as the list of inputs for the map-reduce job.
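A minimal sketch of the two setInputPaths flavours on the new API; paths and the driver class are placeholders. Both calls replace any previously configured inputs (addInputPath appends instead).

    // Sketch: pointing a job at its input directories.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class InputPathsExample {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "input-paths-example");
        // Comma-separated string form.
        FileInputFormat.setInputPaths(job, "/data/2023,/data/2024");
        // Equivalent Path... form.
        FileInputFormat.setInputPaths(job,
            new Path("/data/2023"), new Path("/data/2024"));
      }
    }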
- setInputQuery(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setInputSplit(InputSplit) - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- setInputTableName(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setIsCleanup(boolean) - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
Set whether the task is a cleanup attempt or not.
- setIsJavaMapper(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Set whether the Mapper is written in Java.
- setIsJavaRecordReader(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Set whether the job is using a Java RecordReader.
- setIsJavaRecordWriter(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Set whether the job will use a Java RecordWriter.
- setIsJavaReducer(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Set whether the Reducer is written in Java.
- setJar(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the user jar for the map-reduce job.
- setJar(String) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the job jar
- setJarByClass(Class) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the job's jar file by finding an example class location.
- setJarByClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the Jar by finding where a given class came from.
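For context, a minimal sketch of a typical driver that uses setJarByClass together with the other Job setters listed in this index. WordCountMapper and WordCountReducer are hypothetical user classes; the input/output paths come from the command line.

    // Sketch: a conventional Job driver (new API).
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);   // locate the job jar from this class
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }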
- setJob(Job) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Set the mapreduce job
- setJobACLs(Map<JobACL, AccessControlList>) - Method in class org.apache.hadoop.mapred.JobStatus
-
- setJobACLs(Map<JobACL, AccessControlList>) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Set the job acls.
- setJobConf(JobConf) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Set the mapred job conf for this job.
- setJobConfPath(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Sets the value of the 'jobConfPath' field
- setJobConfPath(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Sets the value of the 'jobConfPath' field.
- setJobEndNotificationURI(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the uri to be invoked in-order to send a notification after the job
has completed (success/failure).
- setJobFile(String) - Method in class org.apache.hadoop.mapred.Task
-
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Sets the value of the 'jobid' field
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Sets the value of the 'jobid' field.
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange.Builder
-
Sets the value of the 'jobid' field
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange
-
Sets the value of the 'jobid' field.
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Sets the value of the 'jobid' field
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
Sets the value of the 'jobid' field.
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange.Builder
-
Sets the value of the 'jobid' field
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange
-
Sets the value of the 'jobid' field.
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange.Builder
-
Sets the value of the 'jobid' field
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange
-
Sets the value of the 'jobid' field.
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged.Builder
-
Sets the value of the 'jobid' field
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged
-
Sets the value of the 'jobid' field.
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Sets the value of the 'jobid' field
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Sets the value of the 'jobid' field.
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Sets the value of the 'jobid' field
- setJobid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
Sets the value of the 'jobid' field.
- setJobID(String) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Set the job ID for this job.
- setJobID(JobID) - Method in class org.apache.hadoop.mapreduce.task.JobContextImpl
-
Set the JobID.
- setJobName(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the user-specified job name.
- setJobName(String) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the user-specified job name.
- setJobName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Sets the value of the 'jobName' field
- setJobName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Sets the value of the 'jobName' field.
- setJobName(String) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Set the job name for this job.
- setJobPriority(JobPriority) - Method in class org.apache.hadoop.mapred.JobConf
-
- setJobPriority(JobPriority) - Method in class org.apache.hadoop.mapred.JobStatus
-
Set the priority of the job, defaulting to NORMAL.
- setJobPriority(String) - Method in interface org.apache.hadoop.mapred.RunningJob
-
Set the priority of a running job.
- setJobPriority(JobID, String) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Set the priority of the specified job
- setJobQueueName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange.Builder
-
Sets the value of the 'jobQueueName' field
- setJobQueueName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobQueueChange
-
Sets the value of the 'jobQueueName' field.
- setJobQueueName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Sets the value of the 'jobQueueName' field
- setJobQueueName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Sets the value of the 'jobQueueName' field.
- setJobSetupCleanupNeeded(boolean) - Method in class org.apache.hadoop.mapreduce.Job
-
Specify whether job-setup and job-cleanup is needed for the job
- setJobState(ControlledJob.State) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Set the state for this job.
- setJobStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Sets the value of the 'jobStatus' field
- setJobStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
Sets the value of the 'jobStatus' field.
- setJobStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged.Builder
-
Sets the value of the 'jobStatus' field
- setJobStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobStatusChanged
-
Sets the value of the 'jobStatus' field.
- setJobStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion.Builder
-
Sets the value of the 'jobStatus' field
- setJobStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobUnsuccessfulCompletion
-
Sets the value of the 'jobStatus' field.
- setJobStatuses(JobStatus[]) - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
- setJobStatuses(JobStatus[]) - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
- setJobToken(Token<? extends TokenIdentifier>, Credentials) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
store job token
- setJobTokenSecret(SecretKey) - Method in class org.apache.hadoop.mapred.Task
-
Set the job token secret
- setKeepCommandFile(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
Set whether to keep the command file for debugging
- setKeepFailedTaskFiles(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Set whether the framework should keep the intermediate files for
failed tasks.
- setKeepTaskFilesPattern(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set a regular expression for task names that should be kept.
- setKeyComparator(Class<? extends WritableComparator>) - Method in class org.apache.hadoop.mapred.join.Parser.Node
-
- setKeyComparator(Class<? extends WritableComparator>) - Method in class org.apache.hadoop.mapreduce.lib.join.Parser.Node
-
- setKeyFieldComparatorOptions(String) - Method in class org.apache.hadoop.mapred.JobConf
-
- setKeyFieldComparatorOptions(Job, String) - Static method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedComparator
-
- setKeyFieldPartitionerOptions(String) - Method in class org.apache.hadoop.mapred.JobConf
-
- setKeyFieldPartitionerOptions(Job, String) - Method in class org.apache.hadoop.mapreduce.lib.partition.KeyFieldBasedPartitioner
-
- setKeyValue(Text, Text, byte[], int, int) - Static method in class org.apache.hadoop.mapreduce.lib.input.KeyValueLineRecordReader
-
- setLaunchTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange.Builder
-
Sets the value of the 'launchTime' field
- setLaunchTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange
-
Sets the value of the 'launchTime' field.
- setLaunchTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Sets the value of the 'launchTime' field
- setLaunchTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
Sets the value of the 'launchTime' field.
- setLeftOffset(Configuration, int) - Static method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
Set the subarray to be used for partitioning to bytes[offset:] in Python syntax.
- setLocalArchives(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- setLocalFiles(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
-
Deprecated.
- setLocality(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Sets the value of the 'locality' field
- setLocality(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Sets the value of the 'locality' field.
- setLocalMapFiles(Map<TaskAttemptID, MapOutputFile>) - Method in class org.apache.hadoop.mapred.ReduceTask
-
Register the set of mapper outputs created by a LocalJobRunner-based
job with this ReduceTask so it knows where to fetch from.
- setMapCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Sets the value of the 'mapCounters' field
- setMapCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Sets the value of the 'mapCounters' field.
- setMapDebugScript(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the debug script to run when the map tasks fail.
- setMapFinishTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'mapFinishTime' field
- setMapFinishTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'mapFinishTime' field.
- setMapOutputCompressorClass(Class<? extends CompressionCodec>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the given class as the CompressionCodec
for the map outputs.
- setMapOutputKeyClass(Class<?>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the key class for the map output data.
- setMapOutputKeyClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the key class for the map output data.
- setMapOutputValueClass(Class<?>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the value class for the map output data.
- setMapOutputValueClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the value class for the map output data.
- setMapperClass(Class<? extends Mapper>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the Mapper class for the job.
- setMapperClass(Class<? extends Mapper>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setMapperClass(Job, Class<? extends Mapper<K1, V1, K2, V2>>) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
-
Set the application's mapper class.
- setMapperConf(boolean, Configuration, Class<?>, Class<?>, Class<?>, Class<?>, Configuration, int, String) - Static method in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
- setMapperMaxSkipRecords(Configuration, long) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Set the number of acceptable skip records surrounding the bad record PER
bad record in mapper.
- setMapProgress(float) - Method in class org.apache.hadoop.mapred.JobStatus
-
Sets the map progress of this job
- setMapProgress(float) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Sets the map progress of this job
- setMapredJobID(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Deprecated.
- setMapRunnerClass(Class<? extends MapRunnable>) - Method in class org.apache.hadoop.mapred.JobConf
-
- setMapSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Turn speculative execution on or off for this job for map tasks.
- setMapSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapreduce.Job
-
Turn speculative execution on or off for this job for map tasks.
- setMaxInputSplitSize(Job, long) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Set the maximum split size
- setMaxItems(long) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.UniqValueCount
-
Set the limit on the number of unique values
- setMaxMapAttempts(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Expert: Set the number of maximum attempts that will be made to run a
map task.
- setMaxMapAttempts(int) - Method in class org.apache.hadoop.mapreduce.Job
-
Expert: Set the number of maximum attempts that will be made to run a
map task.
- setMaxMapTaskFailuresPercent(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Expert: Set the maximum percentage of map tasks that can fail without the
job being aborted.
- setMaxPhysicalMemoryForTask(long) - Method in class org.apache.hadoop.mapred.JobConf
-
Deprecated.
- setMaxReduceAttempts(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Expert: Set the number of maximum attempts that will be made to run a
reduce task.
- setMaxReduceAttempts(int) - Method in class org.apache.hadoop.mapreduce.Job
-
Expert: Set the number of maximum attempts that will be made to run a
reduce task.
- setMaxReduceTaskFailuresPercent(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the maximum percentage of reduce tasks that can fail without the job
being aborted.
- setMaxSplitSize(long) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Specify the maximum size (in bytes) of each split.
- setMaxTaskFailuresPerTracker(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the maximum number of failures of a given job per tasktracker.
- setMaxVirtualMemoryForTask(long) - Method in class org.apache.hadoop.mapred.JobConf
-
- setMemoryForMapTask(long) - Method in class org.apache.hadoop.mapred.JobConf
-
- setMemoryForReduceTask(long) - Method in class org.apache.hadoop.mapred.JobConf
-
- setMessage(String) - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Set the message for this job.
- setMinInputSplitSize(Job, long) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
Set the minimum input split size
- setMinSplitSize(long) - Method in class org.apache.hadoop.mapred.FileInputFormat
-
- setMinSplitSizeNode(long) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Specify the minimum size (in bytes) of each split per node.
- setMinSplitSizeRack(long) - Method in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
Specify the minimum size (in bytes) of each split per rack.
- setName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter.Builder
-
Sets the value of the 'name' field
- setName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter
-
Sets the value of the 'name' field.
- setName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup.Builder
-
Sets the value of the 'name' field
- setName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounterGroup
-
Sets the value of the 'name' field.
- setName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters.Builder
-
Sets the value of the 'name' field
- setName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounters
-
Sets the value of the 'name' field.
- setNeededMem(int) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- setNextRecordRange(SortedRanges.Range) - Method in class org.apache.hadoop.mapred.TaskStatus
-
Set the next record range which is going to be processed by Task.
- setNodeId(String) - Method in class org.apache.hadoop.mapreduce.v2.LogParams
-
- setNodeManagerHost(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Sets the value of the 'nodeManagerHost' field
- setNodeManagerHost(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Sets the value of the 'nodeManagerHost' field.
- setNodeManagerHttpPort(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Sets the value of the 'nodeManagerHttpPort' field
- setNodeManagerHttpPort(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Sets the value of the 'nodeManagerHttpPort' field.
- setNodeManagerPort(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Sets the value of the 'nodeManagerPort' field
- setNodeManagerPort(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Sets the value of the 'nodeManagerPort' field.
- setNumberOfThreads(Job, int) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
-
Set the number of threads in the pool for running maps.
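A minimal sketch combining this method with MultithreadedMapper.setMapperClass (listed above). MyMapper is a hypothetical, thread-safe Mapper implementation; the thread count is illustrative.

    // Sketch: delegating map work to a thread pool per map task.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper;

    public class MultithreadedDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "mt-map");
        job.setMapperClass(MultithreadedMapper.class);
        MultithreadedMapper.setMapperClass(job, MyMapper.class); // real mapper (hypothetical)
        MultithreadedMapper.setNumberOfThreads(job, 8);          // threads per map task
      }
    }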
- setNumLinesPerSplit(Job, int) - Static method in class org.apache.hadoop.mapreduce.lib.input.NLineInputFormat
-
Set the number of lines per split
- setNumMapTasks(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the number of map tasks for this job.
- setNumReduceTasks(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the requisite number of reduce tasks for this job.
- setNumReduceTasks(int) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the number of reduce tasks for the job.
- setNumReservedSlots(int) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- setNumTasksToExecutePerJvm(int) - Method in class org.apache.hadoop.mapred.JobConf
-
Sets the number of tasks that a spawned task JVM should run
before it exits
- setNumUsedSlots(int) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- setOffsets(Configuration, int, int) - Static method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
Set the subarray to be used for partitioning to bytes[left:(right+1)] in Python syntax.
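A minimal sketch of the three BinaryPartitioner offset setters (setLeftOffset and setRightOffset appear elsewhere in this index); the concrete offsets are illustrative. Following the descriptions above, setOffsets(conf, 2, 4) selects bytes[2:5], i.e. key bytes 2 through 4.

    // Sketch: partitioning on a sub-range of a BinaryComparable key.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner;

    public class BinaryPartitionerConfig {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        BinaryPartitioner.setOffsets(conf, 2, 4);  // bytes[2:5]
        BinaryPartitioner.setLeftOffset(conf, 2);  // bytes[2:]
        BinaryPartitioner.setRightOffset(conf, 4); // bytes[:5]
      }
    }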
- setOutput(JobConf, String, String...) - Static method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
-
Initializes the reduce-part of the job with the appropriate output settings
- setOutput(JobConf, String, int) - Static method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
-
Initializes the reduce-part of the job with the appropriate output settings
- setOutput(Job, String, String...) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
Initializes the reduce-part of the job with
the appropriate output settings
- setOutput(Job, String, int) - Static method in class org.apache.hadoop.mapreduce.lib.db.DBOutputFormat
-
Initializes the reduce-part of the job
with the appropriate output settings
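A minimal sketch (placeholder driver, URL, table and column names) of the new-API setOutput in a driver; the job's output key is assumed to be a user class implementing DBWritable.

    // Sketch: writing job output into a database table with DBOutputFormat.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
    import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;

    public class DbOutputDriver {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        DBConfiguration.configureDB(conf, "org.postgresql.Driver",
            "jdbc:postgresql://dbhost/analytics", "dbuser", "dbpass");
        Job job = Job.getInstance(conf, "db-output-example");
        job.setOutputFormatClass(DBOutputFormat.class);
        // Write into 'word_counts' with columns 'word' and 'count'.
        DBOutputFormat.setOutput(job, "word_counts", "word", "count");
      }
    }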
- setOutputCommitter(Class<? extends OutputCommitter>) - Method in class org.apache.hadoop.mapred.JobConf
-
- setOutputCompressionType(JobConf, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
Set the SequenceFile.CompressionType for the output SequenceFile.
- setOutputCompressionType(Job, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
-
Set the SequenceFile.CompressionType for the output SequenceFile.
- setOutputCompressorClass(JobConf, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Set the CompressionCodec
to be used to compress job outputs.
- setOutputCompressorClass(Job, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Set the CompressionCodec
to be used to compress job outputs.
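A minimal sketch combining these compression setters for block-compressed SequenceFile output; the codec choice (gzip) and driver class are illustrative assumptions.

    // Sketch: block-compressed SequenceFile job output.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

    public class CompressedOutputDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "compressed-output");
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
        SequenceFileOutputFormat.setOutputCompressionType(job,
            SequenceFile.CompressionType.BLOCK);
      }
    }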
- setOutputFieldCount(int) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setOutputFieldNames(String...) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setOutputFormat(Class<? extends OutputFormat>) - Method in class org.apache.hadoop.mapred.JobConf
-
- setOutputFormatClass(JobConf, Class<? extends OutputFormat>) - Static method in class org.apache.hadoop.mapred.lib.LazyOutputFormat
-
Set the underlying output format for LazyOutputFormat.
- setOutputFormatClass(Class<? extends OutputFormat>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setOutputFormatClass(Job, Class<? extends OutputFormat>) - Static method in class org.apache.hadoop.mapreduce.lib.output.LazyOutputFormat
-
Set the underlying output format for LazyOutputFormat.
- setOutputKeyClass(Class<?>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the key class for the job output data.
- setOutputKeyClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the key class for the job output data.
- setOutputKeyComparatorClass(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the RawComparator
comparator used to compare keys.
- setOutputName(JobContext, String) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Set the base output name for output file to be created.
- setOutputPath(JobConf, Path) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Set the Path
of the output directory for the map-reduce job.
- setOutputPath(Job, Path) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
-
Set the Path
of the output directory for the map-reduce job.
- setOutputTableName(String) - Method in class org.apache.hadoop.mapreduce.lib.db.DBConfiguration
-
- setOutputValueClass(Class<?>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the value class for job outputs.
- setOutputValueClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the value class for job outputs.
- setOutputValueGroupingComparator(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the user defined RawComparator
comparator for
grouping keys in the input to the reduce.
- setOwner(String) - Method in class org.apache.hadoop.mapreduce.v2.LogParams
-
- setPartitionerClass(Class<? extends Partitioner>) - Method in class org.apache.hadoop.mapred.JobConf
-
- setPartitionerClass(Class<? extends Partitioner>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setPartitionFile(JobConf, Path) - Static method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
-
- setPartitionFile(Configuration, Path) - Static method in class org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner
-
Set the path to the SequenceFile storing the sorted partition keyset.
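A minimal sketch of wiring a job to a pre-built partition file; the path is a placeholder and the file is typically produced beforehand (for example with InputSampler.writePartitionFile).

    // Sketch: totally ordered output via TotalOrderPartitioner.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

    public class TotalOrderDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "total-order");
        job.setPartitionerClass(TotalOrderPartitioner.class);
        // Placeholder path to the sorted partition keyset SequenceFile.
        TotalOrderPartitioner.setPartitionFile(job.getConfiguration(),
            new Path("/tmp/partitions.seq"));
      }
    }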
- setPattern(Configuration, String) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
-
- setPattern(Configuration, String) - Static method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFilter.RegexFilter
-
Define the filtering regex and store it in conf.
- setPhase(TaskStatus.Phase) - Method in class org.apache.hadoop.mapred.Task
-
Set current phase of the task.
- setPhase(TaskStatus.Phase) - Method in class org.apache.hadoop.mapred.TaskStatus
-
Set current phase of this task.
- setPhysMemKbytes(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'physMemKbytes' field
- setPhysMemKbytes(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'physMemKbytes' field.
- setPhysMemKbytes(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'physMemKbytes' field
- setPhysMemKbytes(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'physMemKbytes' field.
- setPhysMemKbytes(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'physMemKbytes' field
- setPhysMemKbytes(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'physMemKbytes' field.
- setPort(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'port' field
- setPort(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'port' field.
- setPort(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'port' field
- setPort(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'port' field.
- setPort(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'port' field
- setPort(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'port' field.
- setPriority(JobPriority) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the priority of a running job.
- setPriority(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange.Builder
-
Sets the value of the 'priority' field
- setPriority(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobPriorityChange
-
Sets the value of the 'priority' field.
- setPriority(JobPriority) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Set the priority of the job, defaulting to NORMAL.
- setProfileEnabled(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Set whether the system should collect profiler information for some of
the tasks in this job. The information is stored in the user log
directory.
- setProfileEnabled(boolean) - Method in class org.apache.hadoop.mapreduce.Job
-
Set whether the system should collect profiler information for some of
the tasks in this job. The information is stored in the user log
directory.
- setProfileParams(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the profiler configuration arguments.
- setProfileParams(String) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the profiler configuration arguments.
- setProfileTaskRange(boolean, String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the ranges of maps or reduces to profile.
- setProfileTaskRange(boolean, String) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the ranges of maps or reduces to profile.
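A minimal sketch of the three profiling setters on Job; the hprof argument string is a commonly used configuration (with %s substituted by the framework with the profile output file) and the task range is illustrative.

    // Sketch: profile the first three map tasks of a job.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class ProfilingDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "profiled-job");
        job.setProfileEnabled(true);
        job.setProfileParams(
            "-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s");
        job.setProfileTaskRange(true, "0-2"); // true = map tasks; "0-2" = first three
      }
    }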
- setProgress(float) - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- setProgress(float) - Method in class org.apache.hadoop.mapred.TaskStatus
-
- setProperties(Properties) - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
- setProperties(Properties) - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
- setQueue(String) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Set queue name
- setQueueChildren(List<QueueInfo>) - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
- setQueueName(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the name of the queue to which this job should be submitted.
- setQueueName(String) - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
Set the queue name of the JobQueueInfo
- setQueueName(String) - Method in class org.apache.hadoop.mapreduce.QueueAclsInfo
-
- setQueueName(String) - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Set the queue name of the JobQueueInfo
- setQueueState(String) - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
Set the state of the queue
- setRackname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'rackname' field
- setRackname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'rackname' field.
- setRackname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'rackname' field
- setRackname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'rackname' field.
- setRackname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Sets the value of the 'rackname' field
- setRackname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Sets the value of the 'rackname' field.
- setRackname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'rackname' field
- setRackname(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'rackname' field.
- setRecordLength(Configuration, int) - Static method in class org.apache.hadoop.mapred.FixedLengthInputFormat
-
Set the length of each record
- setRecordLength(Configuration, int) - Static method in class org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat
-
Set the length of each record
- setReduceCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Sets the value of the 'reduceCounters' field
- setReduceCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Sets the value of the 'reduceCounters' field.
- setReduceDebugScript(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the debug script to run when the reduce tasks fail.
- setReduceProgress(float) - Method in class org.apache.hadoop.mapred.JobStatus
-
Sets the reduce progress of this Job
- setReduceProgress(float) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Sets the reduce progress of this Job
- setReducer(JobConf, Class<? extends Reducer<K1, V1, K2, V2>>, Class<? extends K1>, Class<? extends V1>, Class<? extends K2>, Class<? extends V2>, boolean, JobConf) - Static method in class org.apache.hadoop.mapred.lib.ChainReducer
-
Sets the Reducer class to the chain job's JobConf.
- setReducer(Job, Class<? extends Reducer>, Class<?>, Class<?>, Class<?>, Class<?>, Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
Sets the Reducer class to the chain job.
- setReducer(Job, Class<? extends Reducer>, Class<?>, Class<?>, Class<?>, Class<?>, Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.chain.ChainReducer
-
Sets the Reducer class to the chain job.
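A minimal sketch of a chained [MAP+ / REDUCE MAP*] job using the new-API ChainMapper/ChainReducer; AMap, BMap and WordReduce are hypothetical Mapper/Reducer implementations and the key/value classes are illustrative.

    // Sketch: AMap -> WordReduce -> BMap, all in a single map-reduce job.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.chain.ChainMapper;
    import org.apache.hadoop.mapreduce.lib.chain.ChainReducer;

    public class ChainDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "chain-example");
        ChainMapper.addMapper(job, AMap.class, LongWritable.class, Text.class,
            Text.class, Text.class, new Configuration(false));
        ChainReducer.setReducer(job, WordReduce.class, Text.class, Text.class,
            Text.class, Text.class, new Configuration(false));
        // A mapper added after the reducer runs on the reducer's output.
        ChainReducer.addMapper(job, BMap.class, Text.class, Text.class,
            Text.class, Text.class, new Configuration(false));
      }
    }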
- setReducerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapred.JobConf
-
- setReducerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapreduce.Job
-
- setReducerConf(Configuration, Class<?>, Class<?>, Class<?>, Class<?>, Configuration, String) - Static method in class org.apache.hadoop.mapreduce.lib.chain.Chain
-
- setReducerMaxSkipGroups(Configuration, long) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Set the number of acceptable skip groups surrounding the bad group PER
bad group in reducer.
- setReduceSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Turn speculative execution on or off for this job for reduce tasks.
- setReduceSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapreduce.Job
-
Turn speculative execution on or off for this job for reduce tasks.
- setReservedMem(int) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- setRetired() - Method in class org.apache.hadoop.mapred.JobStatus
-
Set the job retire flag to true.
- setRetired() - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Set the job retire flag to true.
- setRightOffset(Configuration, int) - Static method in class org.apache.hadoop.mapreduce.lib.partition.BinaryPartitioner
-
Set the subarray to be used for partitioning to bytes[:(offset+1)] in Python syntax.
- setRunningTaskAttemptIds(Collection<TaskAttemptID>) - Method in class org.apache.hadoop.mapreduce.TaskReport
-
set running attempt(s) of the task.
- setRunningTaskAttempts(Collection<TaskAttemptID>) - Method in class org.apache.hadoop.mapred.TaskReport
-
set running attempt(s) of the task.
- setRunState(int) - Method in class org.apache.hadoop.mapred.JobStatus
-
Change the current run state of the job.
- setRunState(TaskStatus.State) - Method in class org.apache.hadoop.mapred.TaskStatus
-
- setSchedulerInfo(String, Object) - Method in class org.apache.hadoop.mapred.QueueManager
-
Set a generic Object that represents scheduling information relevant
to a queue.
- setSchedulingInfo(String) - Method in class org.apache.hadoop.mapred.JobQueueInfo
-
Set the scheduling information associated with a particular job queue.
- setSchedulingInfo(String) - Method in class org.apache.hadoop.mapred.JobStatus
-
Used to set the scheduling information associated with a particular Job.
- setSchedulingInfo(String) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Used to set the scheduling information associated with a particular Job.
- setSchedulingInfo(String) - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Set the scheduling information associated with a particular job queue.
- setSequenceFileOutputKeyClass(JobConf, Class<?>) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
Set the key class for the SequenceFile
- setSequenceFileOutputKeyClass(Job, Class<?>) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
Set the key class for the SequenceFile
- setSequenceFileOutputValueClass(JobConf, Class<?>) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
Set the value class for the SequenceFile
- setSequenceFileOutputValueClass(Job, Class<?>) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileAsBinaryOutputFormat
-
Set the value class for the SequenceFile
- setSessionId(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Deprecated.
- setSessionTimeZone(Configuration, Connection) - Static method in class org.apache.hadoop.mapreduce.lib.db.OracleDBRecordReader
-
Set session time zone
- setSetupProgress(float) - Method in class org.apache.hadoop.mapred.JobStatus
-
Sets the setup progress of this job
- setSetupProgress(float) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Sets the setup progress of this job
- setShuffleFinishTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'shuffleFinishTime' field
- setShuffleFinishTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'shuffleFinishTime' field.
- setShufflePort(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Sets the value of the 'shufflePort' field
- setShufflePort(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Sets the value of the 'shufflePort' field.
- setShuffleSecret(SecretKey) - Method in class org.apache.hadoop.mapred.Task
-
Set the secret key used to authenticate the shuffle
- setShuffleSecretKey(byte[], Credentials) - Static method in class org.apache.hadoop.mapreduce.security.TokenCache
-
- setSkipOutputPath(JobConf, Path) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
-
Set the directory to which skipped records are written.
- setSkipping(boolean) - Method in class org.apache.hadoop.mapred.Task
-
Sets whether to run Task in skipping mode.
- setSkipRanges(SortedRanges) - Method in class org.apache.hadoop.mapred.Task
-
Set skipRanges.
- setSortComparatorClass(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapreduce.Job
-
Define the comparator that controls how the keys are sorted before they are passed to the Reducer.
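A minimal sketch of the usual secondary-sort wiring that combines this method with setPartitionerClass and setGroupingComparatorClass (both listed elsewhere in this index); the partitioner and comparator classes are hypothetical user classes.

    // Sketch: secondary sort configuration on a Job.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class SecondarySortDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "secondary-sort");
        job.setPartitionerClass(NaturalKeyPartitioner.class);               // route by natural key
        job.setSortComparatorClass(CompositeKeyComparator.class);           // full composite-key order
        job.setGroupingComparatorClass(NaturalKeyGroupingComparator.class); // group per reduce call
      }
    }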
- setSortFinishTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'sortFinishTime' field
- setSortFinishTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'sortFinishTime' field.
- setSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Turn speculative execution on or off for this job.
- setSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapreduce.Job
-
Turn speculative execution on or off for this job.
- setSplitLocations(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted.Builder
-
Sets the value of the 'splitLocations' field
- setSplitLocations(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
Sets the value of the 'splitLocations' field.
- setStartTime(long) - Method in class org.apache.hadoop.mapred.JobStatus
-
Set the start time of the job
- setStartTime(long) - Method in class org.apache.hadoop.mapred.TaskReport
-
set start time of the task.
- setStartTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted.Builder
-
Sets the value of the 'startTime' field
- setStartTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Sets the value of the 'startTime' field.
- setStartTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Sets the value of the 'startTime' field
- setStartTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Sets the value of the 'startTime' field.
- setStartTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted.Builder
-
Sets the value of the 'startTime' field
- setStartTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
Sets the value of the 'startTime' field.
- setStartTime(long) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Set the start time of the job
- setStartTime(long) - Method in class org.apache.hadoop.mapreduce.TaskReport
-
set start time of the task.
- setState(int) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
Deprecated.
- setState(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'state' field
- setState(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'state' field.
- setState(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'state' field
- setState(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'state' field.
- setState(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Sets the value of the 'state' field
- setState(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Sets the value of the 'state' field.
- setState(JobStatus.State) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Change the current run state of the job.
- setState(QueueState) - Method in class org.apache.hadoop.mapreduce.QueueInfo
-
Set the state of the queue
- setStatement(PreparedStatement) - Method in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- setStateString(String) - Method in class org.apache.hadoop.mapred.TaskStatus
-
- setStatus(String) - Method in interface org.apache.hadoop.mapred.Reporter
-
Set the status description for the task.
- setStatus(String) - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- setStatus(String) - Method in class org.apache.hadoop.mapred.TaskAttemptContextImpl
-
Set the current status of the task to the given string.
- setStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'status' field
- setStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'status' field.
- setStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Sets the value of the 'status' field
- setStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Sets the value of the 'status' field.
- setStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Sets the value of the 'status' field
- setStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Sets the value of the 'status' field.
- setStatus(String) - Method in class org.apache.hadoop.mapreduce.lib.map.WrappedMapper.Context
-
- setStatus(String) - Method in class org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer.Context
-
- setStatus(String) - Method in class org.apache.hadoop.mapreduce.StatusReporter
-
- setStatus(String) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl.DummyReporter
-
- setStatus(String) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
Set the current status of the task to the given string.
- setStatus(String) - Method in interface org.apache.hadoop.mapreduce.TaskAttemptContext
-
Set the current status of the task to the given string.
- setStatusString(String) - Method in class org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
-
- setSubmitTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange.Builder
-
Sets the value of the 'submitTime' field
- setSubmitTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange
-
Sets the value of the 'submitTime' field.
- setSubmitTime(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Sets the value of the 'submitTime' field
- setSubmitTime(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Sets the value of the 'submitTime' field.
- setSuccessfulAttempt(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskReport
-
set successful attempt ID of the task.
- setSuccessfulAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Sets the value of the 'successfulAttemptId' field
- setSuccessfulAttemptId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Sets the value of the 'successfulAttemptId' field.
- setSuccessfulAttemptId(TaskAttemptID) - Method in class org.apache.hadoop.mapreduce.TaskReport
-
set successful attempt ID of the task.
- setTaskAttemptId(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Sets task id.
- setTaskAttemptId(TaskAttemptID) - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
Sets task id.
- setTaskId(String) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- setTaskID(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- setTaskId(String) - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'taskid' field
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'taskid' field.
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'taskid' field
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'taskid' field.
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Sets the value of the 'taskid' field
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Sets the value of the 'taskid' field.
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Sets the value of the 'taskid' field
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Sets the value of the 'taskid' field.
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'taskid' field
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'taskid' field.
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Sets the value of the 'taskid' field
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Sets the value of the 'taskid' field.
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Sets the value of the 'taskid' field
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Sets the value of the 'taskid' field.
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted.Builder
-
Sets the value of the 'taskid' field
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
Sets the value of the 'taskid' field.
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated.Builder
-
Sets the value of the 'taskid' field
- setTaskid(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskUpdated
-
Sets the value of the 'taskid' field.
- setTaskOutputFilter(JobClient.TaskStatusFilter) - Method in class org.apache.hadoop.mapred.JobClient
-
Deprecated.
- setTaskOutputFilter(JobConf, JobClient.TaskStatusFilter) - Static method in class org.apache.hadoop.mapred.JobClient
-
Modify the JobConf to set the task output filter.
- setTaskOutputFilter(Configuration, Job.TaskStatusFilter) - Static method in class org.apache.hadoop.mapreduce.Job
-
Modify the Configuration to set the task output filter.
- setTaskRunTime(int) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Set the task completion time
- setTaskRunTime(int) - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
Set the task completion time
- setTaskStatus(TaskCompletionEvent.Status) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Set task status.
- setTaskStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'taskStatus' field
- setTaskStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'taskStatus' field.
- setTaskStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'taskStatus' field
- setTaskStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'taskStatus' field.
- setTaskStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Sets the value of the 'taskStatus' field
- setTaskStatus(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Sets the value of the 'taskStatus' field.
- setTaskStatus(TaskCompletionEvent.Status) - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
Set task status.
- setTaskTracker(String) - Method in class org.apache.hadoop.mapred.TaskStatus
-
- setTaskTrackerHttp(String) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
Set task tracker http location.
- setTaskTrackerHttp(String) - Method in class org.apache.hadoop.mapreduce.TaskCompletionEvent
-
Set task tracker http location.
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'taskType' field
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'taskType' field.
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'taskType' field
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'taskType' field.
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished.Builder
-
Sets the value of the 'taskType' field
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Sets the value of the 'taskType' field.
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Sets the value of the 'taskType' field
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Sets the value of the 'taskType' field.
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'taskType' field
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'taskType' field.
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed.Builder
-
Sets the value of the 'taskType' field
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Sets the value of the 'taskType' field.
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished.Builder
-
Sets the value of the 'taskType' field
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Sets the value of the 'taskType' field.
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted.Builder
-
Sets the value of the 'taskType' field
- setTaskType(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
Sets the value of the 'taskType' field.
- setTotalCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished.Builder
-
Sets the value of the 'totalCounters' field
- setTotalCounters(JhCounters) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobFinished
-
Sets the value of the 'totalCounters' field.
- setTotalLogFileSize(long) - Method in class org.apache.hadoop.mapred.TaskLogAppender
-
- setTotalMaps(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Sets the value of the 'totalMaps' field
- setTotalMaps(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
Sets the value of the 'totalMaps' field.
- setTotalReduces(int) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Sets the value of the 'totalReduces' field
- setTotalReduces(Integer) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
Sets the value of the 'totalReduces' field.
- setTrackerName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted.Builder
-
Sets the value of the 'trackerName' field
- setTrackerName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Sets the value of the 'trackerName' field.
- setTrackingUrl(String) - Method in class org.apache.hadoop.mapred.JobStatus
-
Set the link to the web-ui for details of the job.
- setTrackingUrl(String) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Set the link to the web-ui for details of the job.
- setType(EventType) - Method in class org.apache.hadoop.mapreduce.jobhistory.Event.Builder
-
Sets the value of the 'type' field
- setType(EventType) - Method in class org.apache.hadoop.mapreduce.jobhistory.Event
-
Sets the value of the 'type' field.
- setUber(boolean) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
Set uber-mode flag
- setUberized(boolean) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited.Builder
-
Sets the value of the 'uberized' field
- setUberized(Boolean) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobInited
-
Sets the value of the 'uberized' field.
- setup(Configuration) - Static method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorJobBase
-
- setup(Mapper<K1, V1, Text, Text>.Context) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorMapper
-
- setup(Reducer<Text, Text, Text, Text>.Context) - Method in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorReducer
-
- setup(Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.lib.chain.ChainMapper
-
- setup(Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.lib.chain.ChainReducer
-
- setup(Mapper<K, V, Text, Text>.Context) - Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionMapper
-
- setup(Reducer<Text, Text, Text, Text>.Context) - Method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionReducer
-
- setup(Mapper<K1, V1, K2, V2>.Context) - Method in class org.apache.hadoop.mapreduce.lib.input.DelegatingMapper
-
- setup(Mapper<K, Text, Text, LongWritable>.Context) - Method in class org.apache.hadoop.mapreduce.lib.map.RegexMapper
-
- setup(Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Mapper
-
Called once at the beginning of the task.
- setup(Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Reducer
-
Called once at the start of the task.
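Mapper.setup and Reducer.setup are the usual hooks for per-task initialization such as reading
job configuration or opening side resources. A minimal sketch of overriding setup in a mapper
(the ThresholdMapper class and the example.threshold property are hypothetical):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class ThresholdMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
      private int threshold;

      @Override
      protected void setup(Context context) throws IOException, InterruptedException {
        // Called once at the beginning of the task, before the first map() call.
        threshold = context.getConfiguration().getInt("example.threshold", 0);
      }

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        // Emit only lines at least 'threshold' bytes long.
        if (value.getLength() >= threshold) {
          context.write(new Text("long-enough"), new IntWritable(value.getLength()));
        }
      }
    }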
- SETUP_CLEANUP_NEEDED - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- setupJob(JobContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- setupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
For the framework to set up the job output during initialization.
- setupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- setupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
Create the temporary directory that is the root of all of the task
work directories.
- setupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
For the framework to set up the job output during initialization.
- setupProgress() - Method in class org.apache.hadoop.mapred.JobStatus
-
- setupProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
-
Get the progress of the job's setup-tasks, as a float between 0.0
and 1.0.
- setupProgress() - Method in class org.apache.hadoop.mapreduce.Job
-
Get the progress of the job's setup-tasks, as a float between 0.0
and 1.0.
- setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
-
- setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
Sets up output for the task.
- setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
-
This method implements the new interface by calling the old method.
- setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
No task setup required.
- setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
-
Sets up output for the task.
- setUsedMem(int) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- setUseNewMapper(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Set whether the framework should use the new API for the mapper.
- setUseNewReducer(boolean) - Method in class org.apache.hadoop.mapred.JobConf
-
Set whether the framework should use the new API for the reducer.
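Both flags live on the old JobConf API and select which of the two mapper/reducer APIs the
framework runs. A minimal sketch (the NewApiFlags driver class is hypothetical):

    import org.apache.hadoop.mapred.JobConf;

    public class NewApiFlags {
      public static void main(String[] args) {
        JobConf conf = new JobConf();
        conf.setUseNewMapper(true);   // run the map phase with the new (org.apache.hadoop.mapreduce) API
        conf.setUseNewReducer(true);  // run the reduce phase with the new API
        System.out.println("new mapper API enabled: " + conf.getUseNewMapper());
      }
    }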
- setUser(String) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the reported username for this job.
- setUser(String) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the reported username for this job.
- setUsername(String) - Method in class org.apache.hadoop.mapred.JobStatus
-
- setUserName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Sets the value of the 'userName' field
- setUserName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Sets the value of the 'userName' field.
- setUsername(String) - Method in class org.apache.hadoop.mapreduce.JobStatus
-
- setValue(long) - Method in class org.apache.hadoop.mapred.Counters.Counter
-
- setValue(long) - Method in interface org.apache.hadoop.mapreduce.Counter
-
Set this counter to the given value.
- setValue(long) - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.FSCounter
-
- setValue(long) - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup.FrameworkCounter
-
- setValue(long) - Method in class org.apache.hadoop.mapreduce.counters.GenericCounter
-
- setValue(long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter.Builder
-
Sets the value of the 'value' field
- setValue(Long) - Method in class org.apache.hadoop.mapreduce.jobhistory.JhCounter
-
Sets the value of the 'value' field.
- setVMemKbytes(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished.Builder
-
Sets the value of the 'vMemKbytes' field
- setVMemKbytes(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Sets the value of the 'vMemKbytes' field.
- setVMemKbytes(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished.Builder
-
Sets the value of the 'vMemKbytes' field
- setVMemKbytes(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Sets the value of the 'vMemKbytes' field.
- setVMemKbytes(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion.Builder
-
Sets the value of the 'vMemKbytes' field
- setVMemKbytes(List<Integer>) - Method in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Sets the value of the 'vMemKbytes' field.
- setWorkflowAdjacencies(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Sets the value of the 'workflowAdjacencies' field
- setWorkflowAdjacencies(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Sets the value of the 'workflowAdjacencies' field.
- setWorkflowId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Sets the value of the 'workflowId' field
- setWorkflowId(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Sets the value of the 'workflowId' field.
- setWorkflowName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Sets the value of the 'workflowName' field
- setWorkflowName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Sets the value of the 'workflowName' field.
- setWorkflowNodeName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Sets the value of the 'workflowNodeName' field
- setWorkflowNodeName(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Sets the value of the 'workflowNodeName' field.
- setWorkflowTags(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted.Builder
-
Sets the value of the 'workflowTags' field
- setWorkflowTags(CharSequence) - Method in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Sets the value of the 'workflowTags' field.
- setWorkingDirectory(Path) - Method in class org.apache.hadoop.mapred.JobConf
-
Set the current working directory for the default file system.
- setWorkingDirectory(Path) - Method in class org.apache.hadoop.mapreduce.Job
-
Set the current working directory for the default file system.
- setWorkOutputPath(JobConf, Path) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
-
Set the Path
of the task's temporary output directory
for the map-reduce job.
- setWriteAllCounters(boolean) - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounters
-
Set the "writeAllCounters" option to true or false
- setWriter(IFile.Writer<K, V>) - Method in class org.apache.hadoop.mapred.Task.CombineOutputCollector
-
- setWriteSkipRecs(boolean) - Method in class org.apache.hadoop.mapred.Task
-
Set whether to write skip records.
- shiftBufferedKey() - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer.BlockingBuffer
-
Set position from last mark to end of writable buffer, then rewrite
the data between last mark and kvindex.
- shouldDie() - Method in class org.apache.hadoop.mapred.JvmTask
-
- shouldReset() - Method in class org.apache.hadoop.mapred.MapTaskCompletionEventsUpdate
-
- shuffle(MapHost, InputStream, long, long, ShuffleClientMetrics, Reporter) - Method in class org.apache.hadoop.mapreduce.task.reduce.MapOutput
-
- Shuffle<K,V> - Class in org.apache.hadoop.mapreduce.task.reduce
-
- Shuffle() - Constructor for class org.apache.hadoop.mapreduce.task.reduce.Shuffle
-
- Shuffle.ShuffleError - Exception in org.apache.hadoop.mapreduce.task.reduce
-
- SHUFFLE_CONNECT_TIMEOUT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SHUFFLE_CONSUMER_PLUGIN - Static variable in interface org.apache.hadoop.mapreduce.MRConfig
-
- SHUFFLE_EXCEPTION_MSG_REGEX - Static variable in interface org.apache.hadoop.mapreduce.server.jobtracker.JTConfig
-
- SHUFFLE_EXCEPTION_STACK_REGEX - Static variable in interface org.apache.hadoop.mapreduce.server.jobtracker.JTConfig
-
- SHUFFLE_FETCH_FAILURES - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SHUFFLE_INPUT_BUFFER_PERCENT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SHUFFLE_MEMORY_LIMIT_PERCENT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SHUFFLE_MERGE_PERCENT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SHUFFLE_NOTIFY_READERROR - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SHUFFLE_PARALLEL_COPIES - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SHUFFLE_READ_TIMEOUT - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SHUFFLE_SSL_ENABLED_DEFAULT - Static variable in interface org.apache.hadoop.mapreduce.MRConfig
-
- SHUFFLE_SSL_ENABLED_KEY - Static variable in interface org.apache.hadoop.mapreduce.MRConfig
-
- ShuffleClientMetrics - Class in org.apache.hadoop.mapreduce.task.reduce
-
- ShuffleConsumerPlugin<K,V> - Interface in org.apache.hadoop.mapred
-
ShuffleConsumerPlugin for serving Reducers.
- ShuffleConsumerPlugin.Context<K,V> - Class in org.apache.hadoop.mapred
-
- ShuffleConsumerPlugin.Context(TaskAttemptID, JobConf, FileSystem, TaskUmbilicalProtocol, LocalDirAllocator, Reporter, CompressionCodec, Class<? extends Reducer>, Task.CombineOutputCollector<K, V>, Counters.Counter, Counters.Counter, Counters.Counter, Counters.Counter, Counters.Counter, Counters.Counter, TaskStatus, Progress, Progress, Task, MapOutputFile, Map<TaskAttemptID, MapOutputFile>) - Constructor for class org.apache.hadoop.mapred.ShuffleConsumerPlugin.Context
-
- shuffleError(TaskAttemptID, String) - Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
-
Report that a reduce-task couldn't shuffle map-outputs.
- shuffleFinishTime - Variable in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Deprecated.
- ShuffleHeader - Class in org.apache.hadoop.mapreduce.task.reduce
-
Shuffle header information that is sent by the TaskTracker and
deciphered by the Fetcher thread of the reduce task.
- ShuffleHeader() - Constructor for class org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader
-
- ShuffleHeader(String, long, long, int) - Constructor for class org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader
-
- shufflePort - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Deprecated.
- ShuffleScheduler<K,V> - Interface in org.apache.hadoop.mapreduce.task.reduce
-
- ShuffleSchedulerImpl<K,V> - Class in org.apache.hadoop.mapreduce.task.reduce
-
- ShuffleSchedulerImpl(JobConf, TaskStatus, TaskAttemptID, ExceptionReporter, Progress, Counters.Counter, Counters.Counter, Counters.Counter) - Constructor for class org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl
-
- shuffleSecret - Variable in class org.apache.hadoop.mapred.Task
-
- sigQuitProcess(String) - Static method in class org.apache.hadoop.mapreduce.util.ProcessTree
-
Sends SIGQUIT to the process; Java programs will dump their stack to
stdout.
- sigQuitProcessGroup(String) - Static method in class org.apache.hadoop.mapreduce.util.ProcessTree
-
Sends SIGQUIT to all processes belonging to the same process group,
ordering all processes in the group to send their stack dump to
stdout.
- size() - Method in class org.apache.hadoop.mapred.Counters.Group
-
- size() - Method in class org.apache.hadoop.mapred.Counters
-
- size() - Method in class org.apache.hadoop.mapred.SpillRecord
-
Return the number of IndexRecord entries in this spill.
- size() - Method in class org.apache.hadoop.mapreduce.counters.AbstractCounterGroup
-
- size() - Method in interface org.apache.hadoop.mapreduce.counters.CounterGroupBase
-
- size() - Method in class org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup
-
- size() - Method in class org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup
-
- size() - Method in class org.apache.hadoop.mapreduce.lib.join.TupleWritable
-
The number of children in this Tuple.
- skip(long) - Method in class org.apache.hadoop.mapred.IFileInputStream
-
- skip(K) - Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
-
Skip key-value pairs with keys less than or equal to the key provided.
- skip(K) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
Pass skip key to child RRs.
- skip(K) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
Skip key-value pairs with keys less than or equal to the key provided.
- skip(K) - Method in class org.apache.hadoop.mapreduce.lib.join.CompositeRecordReader
-
Pass skip key to child RRs.
- skip(K) - Method in class org.apache.hadoop.mapreduce.lib.join.WrappedRecordReader
-
Skip key-value pairs with keys less than or equal to the key provided.
- SKIP_OUTDIR - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SKIP_RECORDS - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SKIP_START_ATTEMPTS - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SkipBadRecords - Class in org.apache.hadoop.mapred
-
Utility class for the skip-bad-records functionality.
- SkipBadRecords() - Constructor for class org.apache.hadoop.mapred.SkipBadRecords
-
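A minimal sketch of enabling skip mode through this utility class; the thresholds below are
arbitrary examples, not recommended values:

    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.SkipBadRecords;

    public class SkipModeConfig {
      public static void configure(JobConf conf) {
        // Begin skipping after two failed attempts of the same task.
        SkipBadRecords.setAttemptsToStartSkipping(conf, 2);
        // Accept narrowing the bad range down to at most one skipped record per mapper.
        SkipBadRecords.setMapperMaxSkipRecords(conf, 1);
      }
    }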
- sortFinishTime - Variable in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Deprecated.
- specToString(String, String, int, List<Integer>, List<Integer>) - Static method in class org.apache.hadoop.mapreduce.lib.fieldsel.FieldSelectionHelper
-
- SPECULATIVE_SLOWNODE_THRESHOLD - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SPECULATIVE_SLOWTASK_THRESHOLD - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SPECULATIVECAP - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- spilledRecordsCounter - Variable in class org.apache.hadoop.mapred.Task
-
- SpillRecord - Class in org.apache.hadoop.mapred
-
- SpillRecord(int) - Constructor for class org.apache.hadoop.mapred.SpillRecord
-
- SpillRecord(Path, JobConf) - Constructor for class org.apache.hadoop.mapred.SpillRecord
-
- SpillRecord(Path, JobConf, String) - Constructor for class org.apache.hadoop.mapred.SpillRecord
-
- SpillRecord(Path, JobConf, Checksum, String) - Constructor for class org.apache.hadoop.mapred.SpillRecord
-
- split - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
-
- split(Configuration, ResultSet, String) - Method in class org.apache.hadoop.mapreduce.lib.db.BigDecimalSplitter
-
- split(Configuration, ResultSet, String) - Method in class org.apache.hadoop.mapreduce.lib.db.BooleanSplitter
-
- split(Configuration, ResultSet, String) - Method in class org.apache.hadoop.mapreduce.lib.db.DateSplitter
-
- split(Configuration, ResultSet, String) - Method in interface org.apache.hadoop.mapreduce.lib.db.DBSplitter
-
Given a ResultSet containing one record (and already advanced to that record)
with two columns (a low value, and a high value, both of the same type), determine
a set of splits that span the given values.
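A minimal sketch of a splitter implementing this contract; it returns a single split covering
the whole [low, high] range. The SingleRangeSplitter class is hypothetical and assumes the
split-by column is numeric:

    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.lib.db.DBSplitter;
    import org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat;

    public class SingleRangeSplitter implements DBSplitter {
      @Override
      public List<InputSplit> split(Configuration conf, ResultSet results, String colName)
          throws SQLException {
        // The ResultSet is already positioned on its single row; column 1 holds the
        // low value and column 2 the high value of colName.
        long low = results.getLong(1);
        long high = results.getLong(2);
        List<InputSplit> splits = new ArrayList<InputSplit>();
        splits.add(new DataDrivenDBInputFormat.DataDrivenDBInputSplit(
            colName + " >= " + low, colName + " <= " + high));
        return splits;
      }
    }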
- split(Configuration, ResultSet, String) - Method in class org.apache.hadoop.mapreduce.lib.db.FloatSplitter
-
- split(Configuration, ResultSet, String) - Method in class org.apache.hadoop.mapreduce.lib.db.IntegerSplitter
-
- split(Configuration, ResultSet, String) - Method in class org.apache.hadoop.mapreduce.lib.db.TextSplitter
-
This method needs to determine the splits between two user-provided strings.
- split - Variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader
-
- SPLIT_FILE - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SPLIT_MAXSIZE - Static variable in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- SPLIT_METAINFO_MAXSIZE - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- SPLIT_MINSIZE - Static variable in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
-
- SPLIT_MINSIZE_PERNODE - Static variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
- SPLIT_MINSIZE_PERRACK - Static variable in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat
-
- SplitLineReader - Class in org.apache.hadoop.mapreduce.lib.input
-
- SplitLineReader(InputStream, byte[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.SplitLineReader
-
- SplitLineReader(InputStream, Configuration, byte[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.SplitLineReader
-
- SplitLocationInfo - Class in org.apache.hadoop.mapred
-
- SplitLocationInfo(String, boolean) - Constructor for class org.apache.hadoop.mapred.SplitLocationInfo
-
- splitLocations - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
Deprecated.
- SplitMetaInfoReader - Class in org.apache.hadoop.mapreduce.split
-
A utility that reads the split meta info and creates
split meta info objects.
- SplitMetaInfoReader() - Constructor for class org.apache.hadoop.mapreduce.split.SplitMetaInfoReader
-
- startCommunicationThread() - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- startOffset - Variable in class org.apache.hadoop.mapred.IndexRecord
-
- startTime - Variable in class org.apache.hadoop.mapreduce.jobhistory.AMStarted
-
Deprecated.
- startTime - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptStarted
-
Deprecated.
- startTime - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskStarted
-
Deprecated.
- state - Variable in class org.apache.hadoop.mapreduce.jobhistory.MapAttemptFinished
-
Deprecated.
- state - Variable in class org.apache.hadoop.mapreduce.jobhistory.ReduceAttemptFinished
-
Deprecated.
- state - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptFinished
-
Deprecated.
- statement - Variable in class org.apache.hadoop.mapreduce.lib.db.DBRecordReader
-
- STATIC_RESOLUTIONS - Static variable in interface org.apache.hadoop.mapreduce.MRConfig
-
- status - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskAttemptUnsuccessfulCompletion
-
Deprecated.
- status - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskFailed
-
Deprecated.
- status - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Deprecated.
- StatusReporter - Class in org.apache.hadoop.mapreduce
-
- StatusReporter() - Constructor for class org.apache.hadoop.mapreduce.StatusReporter
-
- statusUpdate(TaskUmbilicalProtocol) - Method in class org.apache.hadoop.mapred.Task
-
Send a status update to the task tracker
- statusUpdate(TaskAttemptID, TaskStatus) - Method in interface org.apache.hadoop.mapred.TaskUmbilicalProtocol
-
Report child's progress to parent.
- STDERR_LOGFILE_ENV - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- STDOUT_LOGFILE_ENV - Static variable in interface org.apache.hadoop.mapreduce.MRJobConfig
-
- stop() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
Set the thread state to STOPPING so that the
thread will stop when it wakes up.
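JobControl implements Runnable, so a common pattern is to run it on its own thread, poll until
every controlled job has finished, and then call stop() so the control thread exits. A minimal
sketch (the RunJobControl helper class is hypothetical):

    import org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl;

    public class RunJobControl {
      public static void runAll(JobControl control) throws InterruptedException {
        Thread driver = new Thread(control);
        driver.start();                   // JobControl.run() drives the controlled jobs
        while (!control.allFinished()) {
          Thread.sleep(5000);             // poll until every controlled job is done
        }
        control.stop();                   // move the control thread to STOPPING so it exits
      }
    }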
- stopCommunicationThread() - Method in class org.apache.hadoop.mapred.Task.TaskReporter
-
- StreamBackedIterator<X extends org.apache.hadoop.io.Writable> - Class in org.apache.hadoop.mapred.join
-
This class provides an implementation of ResetableIterator.
- StreamBackedIterator() - Constructor for class org.apache.hadoop.mapred.join.StreamBackedIterator
-
- StreamBackedIterator<X extends org.apache.hadoop.io.Writable> - Class in org.apache.hadoop.mapreduce.lib.join
-
This class provides an implementation of ResetableIterator.
- StreamBackedIterator() - Constructor for class org.apache.hadoop.mapreduce.lib.join.StreamBackedIterator
-
- STRING_VALUE_MAX - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- STRING_VALUE_MAX - Static variable in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
- STRING_VALUE_MIN - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- STRING_VALUE_MIN - Static variable in class org.apache.hadoop.mapreduce.lib.aggregate.ValueAggregatorBaseDescriptor
-
- StringValueMax - Class in org.apache.hadoop.mapred.lib.aggregate
-
This class implements a value aggregator that maintains the biggest of
a sequence of strings.
- StringValueMax() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
-
- StringValueMax - Class in org.apache.hadoop.mapreduce.lib.aggregate
-
This class implements a value aggregator that maintains the biggest of
a sequence of strings.
- StringValueMax() - Constructor for class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMax
-
The default constructor.
- StringValueMin - Class in org.apache.hadoop.mapred.lib.aggregate
-
This class implements a value aggregator that maintains the smallest of
a sequence of strings.
- StringValueMin() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
-
- StringValueMin - Class in org.apache.hadoop.mapreduce.lib.aggregate
-
This class implements a value aggregator that maintains the smallest of
a sequence of strings.
- StringValueMin() - Constructor for class org.apache.hadoop.mapreduce.lib.aggregate.StringValueMin
-
The default constructor.
- submit() - Method in class org.apache.hadoop.mapreduce.Job
-
Submit the job to the cluster and return immediately.
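Unlike waitForCompletion, submit() hands the job to the cluster and returns at once, so a caller
that wants to block must poll. A minimal sketch (the AsyncSubmit driver and job name are
hypothetical; mapper, reducer, and input/output settings are elided):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class AsyncSubmit {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "example-job");
        // ... set mapper, reducer, input and output paths here ...
        job.submit();                   // returns immediately after submission
        while (!job.isComplete()) {     // poll rather than blocking in waitForCompletion
          Thread.sleep(5000);
        }
        System.exit(job.isSuccessful() ? 0 : 1);
      }
    }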
- submit() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob
-
Submit this job to mapred.
- SUBMIT_REPLICATION - Static variable in class org.apache.hadoop.mapreduce.Job
-
- submitJob(String) - Method in class org.apache.hadoop.mapred.JobClient
-
Submit a job to the MR system.
- submitJob(JobConf) - Method in class org.apache.hadoop.mapred.JobClient
-
Submit a job to the MR system.
- submitJob(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
-
- submitJob(JobID, String, Credentials) - Method in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
Submit a Job for execution.
- submitJobInternal(JobConf) - Method in class org.apache.hadoop.mapred.JobClient
-
- Submitter - Class in org.apache.hadoop.mapred.pipes
-
The main entry point and job submitter.
- Submitter() - Constructor for class org.apache.hadoop.mapred.pipes.Submitter
-
- Submitter(Configuration) - Constructor for class org.apache.hadoop.mapred.pipes.Submitter
-
- submitTime - Variable in class org.apache.hadoop.mapreduce.jobhistory.JobInfoChange
-
Deprecated.
- submitTime - Variable in class org.apache.hadoop.mapreduce.jobhistory.JobSubmitted
-
Deprecated.
- SUBSTITUTE_TOKEN - Static variable in class org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat
-
If users provide their own query, this token is expected to
appear in the WHERE clause; it will be substituted with a pair of conditions
on the input to allow input splits to parallelise the import.
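A minimal sketch of supplying a custom query through DataDrivenDBInputFormat.setInput; the users
table, its columns, and the UserRecord class are hypothetical:

    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.db.DBWritable;
    import org.apache.hadoop.mapreduce.lib.db.DataDrivenDBInputFormat;

    public class CustomQueryExample {
      // Hypothetical record type for rows of the assumed "users" table.
      public static class UserRecord implements DBWritable {
        long id;
        String name;
        public void readFields(ResultSet rs) throws SQLException {
          id = rs.getLong("id");
          name = rs.getString("name");
        }
        public void write(PreparedStatement ps) throws SQLException {
          ps.setLong(1, id);
          ps.setString(2, name);
        }
      }

      public static void configure(Job job) {
        // The user-supplied query must contain the substitute token in its WHERE clause;
        // the framework replaces the token with per-split bounding conditions.
        String inputQuery = "SELECT id, name FROM users WHERE "
            + DataDrivenDBInputFormat.SUBSTITUTE_TOKEN;
        String boundingQuery = "SELECT MIN(id), MAX(id) FROM users";
        DataDrivenDBInputFormat.setInput(job, UserRecord.class, inputQuery, boundingQuery);
      }
    }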
- SUCCEEDED - Static variable in class org.apache.hadoop.mapred.JobStatus
-
- SUCCEEDED_FILE_NAME - Static variable in class org.apache.hadoop.mapred.FileOutputCommitter
-
- SUCCEEDED_FILE_NAME - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
- SUCCESS - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
-
- SUCCESS - Static variable in interface org.apache.hadoop.mapred.MRConstants
-
- successFetch() - Method in class org.apache.hadoop.mapreduce.task.reduce.ShuffleClientMetrics
-
- SUCCESSFUL_JOB_OUTPUT_DIR_MARKER - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
-
- successfulAttemptId - Variable in class org.apache.hadoop.mapreduce.jobhistory.TaskFinished
-
Deprecated.
- sum(Counters, Counters) - Static method in class org.apache.hadoop.mapred.Counters
-
Convenience method for computing the sum of two sets of counters.
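A minimal sketch of aggregating the counters of two finished jobs with this helper (the
CombineCounters class is hypothetical):

    import java.io.IOException;
    import org.apache.hadoop.mapred.Counters;
    import org.apache.hadoop.mapred.RunningJob;

    public class CombineCounters {
      public static Counters combined(RunningJob first, RunningJob second) throws IOException {
        // Returns a new Counters instance holding the per-counter sums of both jobs.
        return Counters.sum(first.getCounters(), second.getCounters());
      }
    }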
- suspend() - Method in class org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl
-
Suspend the running thread.
- swap(int, int) - Method in class org.apache.hadoop.mapred.MapTask.MapOutputBuffer
-
Swap metadata for items i, j
- syncLogs(String, TaskAttemptID, boolean) - Static method in class org.apache.hadoop.mapred.TaskLog
-
- syncLogs() - Static method in class org.apache.hadoop.mapred.TaskLog
-
- syncLogsShutdown(ScheduledExecutorService) - Static method in class org.apache.hadoop.mapred.TaskLog
-