
org.apache.flink.streaming.api.scala

StreamExecutionEnvironment

Related Docs: object StreamExecutionEnvironment | package scala

Permalink

class StreamExecutionEnvironment extends AnyRef

Annotations
@Public()
Linear Supertypes
AnyRef, Any

Instance Constructors

  1. new StreamExecutionEnvironment(javaEnv: environment.StreamExecutionEnvironment)

    Permalink

Value Members

  1. final def !=(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Permalink
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  4. def addDefaultKryoSerializer(type: Class[_], serializerClass: Class[_ <: Serializer[_]]): Unit

    Permalink

    Adds a new Kryo default serializer to the Runtime.

    Adds a new Kryo default serializer to the Runtime.

    type

    The class of the types serialized with the given serializer.

    serializerClass

    The class of the serializer to use.

  5. def addDefaultKryoSerializer[T <: Serializer[_] with Serializable](type: Class[_], serializer: T): Unit

    Permalink

    Adds a new Kryo default serializer to the Runtime.

    Adds a new Kryo default serializer to the Runtime.

    Note that the serializer instance must be serializable (as defined by java.io.Serializable), because it may be distributed to the worker nodes by java serialization.

    type

    The class of the types serialized with the given serializer.

    serializer

    The serializer to use.
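
    A minimal sketch of how either overload might be used. The LegacyEvent type and its Kryo serializer below are hypothetical, not part of Flink; only the addDefaultKryoSerializer calls come from this API.

      import com.esotericsoftware.kryo.{Kryo, Serializer}
      import com.esotericsoftware.kryo.io.{Input, Output}
      import org.apache.flink.streaming.api.scala._

      // Hypothetical domain type that should fall back to a custom Kryo serializer.
      class LegacyEvent(val id: Long, val payload: String)

      // Hypothetical serializer; it must also be java.io.Serializable so it can be
      // shipped to the worker nodes.
      class LegacyEventSerializer extends Serializer[LegacyEvent] with Serializable {
        override def write(kryo: Kryo, output: Output, e: LegacyEvent): Unit = {
          output.writeLong(e.id)
          output.writeString(e.payload)
        }
        override def read(kryo: Kryo, input: Input, t: Class[LegacyEvent]): LegacyEvent =
          new LegacyEvent(input.readLong(), input.readString())
      }

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      env.addDefaultKryoSerializer(classOf[LegacyEvent], classOf[LegacyEventSerializer]) // by serializer class
      env.addDefaultKryoSerializer(classOf[LegacyEvent], new LegacyEventSerializer)      // by serializer instance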

  6. def addJobListener(jobListener: JobListener): Unit

    Permalink
  7. def addSource[T](function: (SourceContext[T]) ⇒ Unit)(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Create a DataStream using a user defined source function for arbitrary source functionality.

  8. def addSource[T](function: SourceFunction[T])(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Create a DataStream using a user defined source function for arbitrary source functionality.

    Create a DataStream using a user defined source function for arbitrary source functionality. By default sources have a parallelism of 1. To enable parallel execution, the user defined source should implement ParallelSourceFunction or extend RichParallelSourceFunction. In these cases the resulting source will have the parallelism of the environment. To change this afterwards call DataStreamSource.setParallelism(int)
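
    An illustrative sketch of a custom (non-parallel) source; the CountingSource class is an assumption made for this example, not part of Flink:

      import org.apache.flink.streaming.api.functions.source.SourceFunction
      import org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext
      import org.apache.flink.streaming.api.scala._

      // Illustrative bounded source that emits the numbers 0 until limit.
      class CountingSource(limit: Long) extends SourceFunction[Long] {
        @volatile private var running = true

        override def run(ctx: SourceContext[Long]): Unit = {
          var i = 0L
          while (running && i < limit) {
            ctx.collect(i)
            i += 1
          }
        }

        override def cancel(): Unit = { running = false }
      }

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      val numbers: DataStream[Long] = env.addSource(new CountingSource(1000))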

  9. def addSourceV2[T](function: SourceFunctionV2[T])(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Create a DataStream using a user defined source function v2 for arbitrary source functionality.

    Create a DataStream using a user defined source function v2 for arbitrary source functionality. By default sources have a parallelism of 1. To enable parallel execution, the user defined source should implement ParallelSourceFunction or extend RichParallelSourceFunction. In these cases the resulting source will have the parallelism of the environment. To change this afterwards call DataStreamSource.setParallelism(int)

  10. final def asInstanceOf[T0]: T0

    Permalink
    Definition Classes
    Any
  11. def cancel(jobId: String): Unit

    Permalink
  12. def cancelWithSavepoint(jobId: String, path: String): String

    Permalink
  13. def clearTransformations: StreamExecutionEnvironment

    Permalink
  14. def clone(): AnyRef

    Permalink
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  15. def createInput[T](inputFormat: InputFormat[T, _])(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Generic method to create an input data stream with a specific input format.

    Generic method to create an input data stream with a specific input format. Since all data streams need specific information about their types, this method needs to determine the type of the data produced by the input format. It will attempt to determine the data type by reflection, unless the input format implements the ResultTypeQueryable interface.

    Annotations
    @PublicEvolving()
  16. def createInputV2[T](inputFormat: InputFormat[T, _])(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Generic method to create an input data stream with a specific input format.

    Generic method to create an input data stream with a specific input format. Since all data streams need specific information about their types, this method needs to determine the type of the data produced by the input format. It will attempt to determine the data type by reflection, unless the input format implements the ResultTypeQueryable interface.

    Annotations
    @PublicEvolving()
  17. def disableCheckpointing(): StreamExecutionEnvironment

    Permalink
  18. def disableOperatorChaining(): StreamExecutionEnvironment

    Permalink

    Disables operator chaining for streaming operators.

    Disables operator chaining for streaming operators. Operator chaining allows non-shuffle operations to be co-located in the same thread fully avoiding serialization and de-serialization.

    Annotations
    @PublicEvolving()
  19. def disableSlotSharing(): StreamExecutionEnvironment

    Permalink
  20. def enableCheckpointing(interval: Long): StreamExecutionEnvironment

    Permalink

    Enables checkpointing for the streaming job.

    Enables checkpointing for the streaming job. The distributed state of the streaming dataflow will be periodically snapshotted. In case of a failure, the streaming dataflow will be restarted from the latest completed checkpoint.

    The job draws checkpoints periodically, in the given interval. The program will use CheckpointingMode.EXACTLY_ONCE mode. The state will be stored in the configured state backend.

    NOTE: Checkpointing iterative streaming dataflows is not properly supported at the moment. For that reason, iterative jobs will not be started if used with enabled checkpointing. To override this mechanism, use the enableCheckpointing(interval: Long, mode: CheckpointingMode, force: Boolean) method.

    interval

    Time interval between state checkpoints in milliseconds.
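
    For illustration, a sketch that checkpoints every 10 seconds (the interval is arbitrary); this also covers the overload below that takes an explicit CheckpointingMode:

      import org.apache.flink.streaming.api.CheckpointingMode
      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      env.enableCheckpointing(10000L)                                  // EXACTLY_ONCE by default
      env.enableCheckpointing(10000L, CheckpointingMode.AT_LEAST_ONCE) // or choose the mode explicitly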

  21. def enableCheckpointing(interval: Long, mode: CheckpointingMode): StreamExecutionEnvironment

    Permalink

    Enables checkpointing for the streaming job.

    Enables checkpointing for the streaming job. The distributed state of the streaming dataflow will be periodically snapshotted. In case of a failure, the streaming dataflow will be restarted from the latest completed checkpoint.

    The job draws checkpoints periodically, in the given interval. The system uses the given CheckpointingMode for the checkpointing ("exactly once" vs "at least once"). The state will be stored in the configured state backend.

    NOTE: Checkpointing iterative streaming dataflows is not properly supported at the moment. For that reason, iterative jobs will not be started if used with enabled checkpointing. To override this mechanism, use the enableCheckpointing(interval: Long, mode: CheckpointingMode, force: Boolean) method.

    interval

    Time interval between state checkpoints in milliseconds.

    mode

    The checkpointing mode, selecting between "exactly once" and "at least once" guarantees.

  22. def enableSlotSharing(): StreamExecutionEnvironment

    Permalink
  23. final def eq(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  24. def equals(arg0: Any): Boolean

    Permalink
    Definition Classes
    AnyRef → Any
  25. def execute(jobName: String, savePointSetting: SavepointRestoreSettings): JobExecutionResult

    Permalink
  26. def execute(jobName: String): JobExecutionResult

    Permalink

    Triggers the program execution.

    Triggers the program execution. The environment will execute all parts of the program that have resulted in a "sink" operation. Sink operations are for example printing results or forwarding them to a message queue.

    The program execution will be logged and displayed with the provided name.
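
    A minimal end-to-end sketch; the job name and the pipeline are arbitrary:

      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      env.fromElements(1, 2, 3)
        .map(_ * 2)
        .print()                                // print() is a sink, so execute() has work to do

      val result = env.execute("doubling-job")  // blocks until the job finishes
      println(s"job took ${result.getNetRuntime} ms")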

  27. def execute(): JobExecutionResult

    Permalink

    Triggers the program execution.

    Triggers the program execution. The environment will execute all parts of the program that have resulted in a "sink" operation. Sink operations are for example printing results or forwarding them to a message queue.

    The program execution will be logged and displayed with a generated default name.

  28. def finalize(): Unit

    Permalink
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  29. def fromCollection[T](data: Iterator[T])(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Creates a DataStream from the given Iterator.

    Creates a DataStream from the given Iterator.

    Note that this operation will result in a non-parallel data source, i.e. a data source with a parallelism of one.

  30. def fromCollection[T](data: Seq[T])(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Creates a DataStream from the given non-empty Seq.

    Creates a DataStream from the given non-empty Seq. The elements need to be serializable because the framework may move the elements into the cluster if needed.

    Note that this operation will result in a non-parallel data source, i.e. a data source with a parallelism of one.
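
    For example (a sketch; the parallelism of this source is always 1):

      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      val words: DataStream[String] = env.fromCollection(Seq("to", "be", "or", "not", "to", "be"))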

  31. def fromCollectionV2[T](data: Seq[T])(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Creates a DataStream from the given non-empty Seq.

    Creates a DataStream from the given non-empty Seq. The elements need to be serializable because the framework may move the elements into the cluster if needed.

    Note that this operation will result in a non-parallel data source v2, i.e. a data source with a parallelism of one.

  32. def fromElements[T](data: T*)(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Creates a DataStream that contains the given elements.

    Creates a DataStream that contains the given elements. The elements must all be of the same type.

    Note that this operation will result in a non-parallel data source, i.e. a data source with a parallelism of one.

  33. def fromElementsV2[T](data: T*)(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Creates a DataStream that contains the given elements.

    Creates a DataStream that contains the given elements. The elements must all be of the same type.

    Note that this operation will result in a non-parallel data source v2, i.e. a data source with a parallelism of one.

  34. def fromParallelCollection[T](data: SplittableIterator[T])(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Creates a DataStream from the given SplittableIterator.

  35. def generateSequence(from: Long, to: Long): DataStream[Long]

    Permalink

    Creates a new DataStream that contains a sequence of numbers.

    Creates a new DataStream that contains a sequence of numbers. This source is a parallel source. If you manually set the parallelism to 1 the emitted elements are in order.
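
    For example:

      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      val numbers: DataStream[Long] = env.generateSequence(1L, 1000L)

      // The source is parallel; limit it to one instance if ordered output matters.
      val ordered = env.generateSequence(1L, 1000L).setParallelism(1)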

  36. def getBufferTimeout: Long

    Permalink

    Gets the default buffer timeout set for this environment

  37. def getCachedFiles: List[Tuple2[String, DistributedCacheEntry]]

    Permalink

    Gets cache files.

  38. def getCheckpointConfig: CheckpointConfig

    Permalink

    Gets the checkpoint config, which defines values like checkpoint interval, delay between checkpoints, etc.

  39. def getCheckpointingMode: CheckpointingMode

    Permalink
  40. final def getClass(): Class[_]

    Permalink
    Definition Classes
    AnyRef → Any
  41. def getConfig: ExecutionConfig

    Permalink

    Gets the config object.

  42. def getCustomConfiguration: Configuration

    Permalink

    Returns the custom configuration for the environment.

  43. def getDefaultResources: ResourceSpec

    Permalink
  44. def getExecutionPlan: String

    Permalink

    Creates the plan with which the system will execute the program, and returns it as a String using a JSON representation of the execution data flow graph.

    Creates the plan with which the system will execute the program, and returns it as a String using a JSON representation of the execution data flow graph. Note that this needs to be called before the plan is executed.
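
    For example, a sketch that dumps the JSON plan of a small pipeline before executing it:

      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      env.fromElements(1, 2, 3).map(_ + 1).print()

      println(env.getExecutionPlan)   // JSON representation of the dataflow graph
      env.execute("plan-demo")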

  45. def getJavaEnv: environment.StreamExecutionEnvironment

    Permalink

    returns

    the wrapped Java environment

  46. def getJobListeners: List[JobListener]

    Permalink
  47. def getMaxParallelism: Int

    Permalink

    Returns the maximum degree of parallelism defined for the program.

    Returns the maximum degree of parallelism defined for the program.

    The maximum degree of parallelism specifies the upper limit for dynamic scaling. It also defines the number of key groups used for partitioned state.

  48. def getParallelism: Int

    Permalink

    Returns the default parallelism for this execution environment.

    Returns the default parallelism for this execution environment. Note that this value can be overridden by individual operations using DataStream#setParallelism(int)

  49. def getRestartStrategy: RestartStrategyConfiguration

    Permalink

    Returns the specified restart strategy configuration.

    Returns the specified restart strategy configuration.

    returns

    The restart strategy configuration to be used

    Annotations
    @PublicEvolving()
  50. def getStateBackend: StateBackend

    Permalink

    Returns the state backend that defines how to store and checkpoint state.

    Returns the state backend that defines how to store and checkpoint state.

    Annotations
    @PublicEvolving()
  51. def getStreamGraph: StreamGraph

    Permalink

    Getter of the org.apache.flink.streaming.api.graph.StreamGraph of the streaming job.

    Getter of the org.apache.flink.streaming.api.graph.StreamGraph of the streaming job.

    returns

    The StreamGraph representing the transformations

    Annotations
    @Internal()
  52. def getStreamTimeCharacteristic: TimeCharacteristic

    Permalink

    Gets the time characteristic.

    Gets the time characteristic.

    returns

    The time characteristic.

    Annotations
    @PublicEvolving()
    See also

    #setStreamTimeCharacteristic

  53. def getWrappedStreamExecutionEnvironment: environment.StreamExecutionEnvironment

    Permalink

    Getter of the wrapped org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

    Getter of the wrapped org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

    returns

    The encased ExecutionEnvironment

    Annotations
    @Internal()
  54. def hashCode(): Int

    Permalink
    Definition Classes
    AnyRef → Any
  55. final def isInstanceOf[T0]: Boolean

    Permalink
    Definition Classes
    Any
  56. def isMultiHeadChainMode: Boolean

    Permalink
  57. def isSlotSharingEnabled: Boolean

    Permalink
  58. final def ne(arg0: AnyRef): Boolean

    Permalink
    Definition Classes
    AnyRef
  59. final def notify(): Unit

    Permalink
    Definition Classes
    AnyRef
  60. final def notifyAll(): Unit

    Permalink
    Definition Classes
    AnyRef
  61. def readFile[T](inputFormat: FileInputFormat[T], filePath: String, watchType: FileProcessingMode, interval: Long)(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Reads the contents of the user-specified path based on the given FileInputFormat.

    Reads the contents of the user-specified path based on the given FileInputFormat. Depending on the provided FileProcessingMode, the source may periodically monitor the path for new data every interval milliseconds (FileProcessingMode.PROCESS_CONTINUOUSLY), or process the data currently in the path once and exit (FileProcessingMode.PROCESS_ONCE). In addition, if the path contains files not to be processed, the user can specify a custom FilePathFilter. As a default implementation you can use FilePathFilter.createDefaultFilter().

    NOTES ON CHECKPOINTING: If the watchType is set to FileProcessingMode#PROCESS_ONCE, the source monitors the path once, creates the FileInputSplits to be processed, forwards them to the downstream readers to read the actual data, and exits without waiting for the readers to finish reading. This implies that no more checkpoint barriers are forwarded after the source exits, and thus no checkpoints are taken after that point.

    inputFormat

    The input format used to create the data stream

    filePath

    The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path")

    watchType

    The mode in which the source should operate, i.e. monitor path and react to new data, or process once and exit

    interval

    In the case of periodic path monitoring, this specifies the interval (in millis) between consecutive path scans

    returns

    The data stream that represents the data read from the given file

    Annotations
    @PublicEvolving()
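
    A hedged sketch that tails a directory for new text data; the directory URI and the 60-second scan interval are placeholders:

      import org.apache.flink.api.java.io.TextInputFormat
      import org.apache.flink.core.fs.Path
      import org.apache.flink.streaming.api.functions.source.FileProcessingMode
      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      val dir = "hdfs://namenode:8020/logs"                 // placeholder URI
      val format = new TextInputFormat(new Path(dir))

      // Re-scan the directory every 60 seconds and emit data from newly appearing files.
      val lines: DataStream[String] =
        env.readFile(format, dir, FileProcessingMode.PROCESS_CONTINUOUSLY, 60000L)
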
  62. def readFile[T](inputFormat: FileInputFormat[T], filePath: String)(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Reads the given file with the given input format.

    Reads the given file with the given input format. The file path should be passed as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path").

  63. def readFileStream(StreamPath: String, intervalMillis: Long = 100, watchType: WatchType = ...): DataStream[String]

    Permalink

    Creates a DataStream that contains the contents of files created while the system watches the given path.

    Creates a DataStream that contains the contents of files created while the system watches the given path. The files will be read with the system's default character set. The user can specify the monitoring interval in milliseconds and the way file modifications are handled. By default it checks for new files only, every 100 milliseconds.

    Annotations
    @Deprecated
  64. def readTextFile(filePath: String, charsetName: String): DataStream[String]

    Permalink

    Creates a data stream that represents the Strings produced by reading the given file line wise.

    Creates a data stream that represents the Strings produced by reading the given file line wise. The character set with the given name will be used to read the files.

  65. def readTextFile(filePath: String): DataStream[String]

    Permalink

    Creates a DataStream that represents the Strings produced by reading the given file line wise.

    Creates a DataStream that represents the Strings produced by reading the given file line wise. The file will be read with the system's default character set.
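
    For example (the file path is a placeholder); this also covers the charset overload above:

      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      val lines: DataStream[String]  = env.readTextFile("file:///tmp/input.txt")               // default charset
      val latin1: DataStream[String] = env.readTextFile("file:///tmp/input.txt", "ISO-8859-1") // explicit charset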

  66. def registerCachedFile(filePath: String, name: String, executable: Boolean): Unit

    Permalink

    Registers a file at the distributed cache under the given name.

    Registers a file at the distributed cache under the given name. The file will be accessible from any user-defined function in the (distributed) runtime under a local path. Files may be local files (as long as all relevant workers have access to it), or files in a distributed file system. The runtime will copy the files temporarily to a local cache, if needed.

    The org.apache.flink.api.common.functions.RuntimeContext can be obtained inside UDFs via org.apache.flink.api.common.functions.RichFunction#getRuntimeContext() and provides access to the org.apache.flink.api.common.cache.DistributedCache via org.apache.flink.api.common.functions.RuntimeContext#getDistributedCache().

    filePath

    The path of the file, as a URI (e.g. "file:///some/path" or "hdfs://host:port/and/path")

    name

    The name under which the file is registered.

    executable

    flag indicating whether the file should be executable
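
    A sketch of the register/retrieve round trip. The file URI and the "lookup" name are placeholders, and the RichMapFunction shown here only demonstrates where the locally cached copy becomes available:

      import org.apache.flink.api.common.functions.RichMapFunction
      import org.apache.flink.configuration.Configuration
      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      env.registerCachedFile("hdfs://namenode:8020/data/lookup.csv", "lookup")

      env.fromElements("a", "b", "c")
        .map(new RichMapFunction[String, String] {
          private var cached: java.io.File = _

          override def open(parameters: Configuration): Unit = {
            // The runtime copied the file to a local path; fetch it by its registered name.
            cached = getRuntimeContext.getDistributedCache.getFile("lookup")
          }

          override def map(value: String): String = value // use `cached` here
        })
        .print()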

  67. def registerCachedFile(filePath: String, name: String): Unit

    Permalink

    Registers a file at the distributed cache under the given name.

    Registers a file at the distributed cache under the given name. The file will be accessible from any user-defined function in the (distributed) runtime under a local path. Files may be local files (as long as all relevant workers have access to it), or files in a distributed file system. The runtime will copy the files temporarily to a local cache, if needed.

    The org.apache.flink.api.common.functions.RuntimeContext can be obtained inside UDFs via org.apache.flink.api.common.functions.RichFunction#getRuntimeContext() and provides access to the org.apache.flink.api.common.cache.DistributedCache via org.apache.flink.api.common.functions.RuntimeContext#getDistributedCache().

    filePath

    The path of the file, as a URI (e.g. "file:///some/path" or "hdfs://host:port/and/path")

    name

    The name under which the file is registered.

  68. def registerType(typeClass: Class[_]): Unit

    Permalink

    Registers the given type with the serialization stack.

    Registers the given type with the serialization stack. If the type is eventually serialized as a POJO, then the type is registered with the POJO serializer. If the type ends up being serialized with Kryo, then it will be registered at Kryo to make sure that only tags are written.

  69. def registerTypeWithKryoSerializer(clazz: Class[_], serializer: Class[_ <: Serializer[_]]): Unit

    Permalink

    Registers the given type with the serializer at the KryoSerializer.

  70. def registerTypeWithKryoSerializer[T <: Serializer[_] with Serializable](clazz: Class[_], serializer: T): Unit

    Permalink

    Registers the given type with the serializer at the KryoSerializer.

    Registers the given type with the serializer at the KryoSerializer.

    Note that the serializer instance must be serializable (as defined by java.io.Serializable), because it may be distributed to the worker nodes by java serialization.

  71. def setBufferTimeout(timeoutMillis: Long): StreamExecutionEnvironment

    Permalink

    Sets the maximum time frequency (milliseconds) for the flushing of the output buffers.

    Sets the maximum time frequency (milliseconds) for the flushing of the output buffers. By default the output buffers flush frequently to provide low latency and to aid smooth developer experience. Setting the parameter can result in three logical modes:

    • A positive integer triggers flushing periodically by that integer
    • 0 triggers flushing after every record thus minimizing latency
    • -1 triggers flushing only when the output buffer is full thus maximizing throughput
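
    For illustration (the 100 ms value is arbitrary):

      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      env.setBufferTimeout(100L)  // flush output buffers at least every 100 ms
      env.setBufferTimeout(0L)    // flush after every record: lowest latency
      env.setBufferTimeout(-1L)   // flush only when a buffer is full: highest throughput
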
  72. def setDefaultResources(resources: ResourceSpec): StreamExecutionEnvironment

    Permalink
  73. def setJobType(jobType: JobType): StreamExecutionEnvironment

    Permalink
  74. def setMaxParallelism(maxParallelism: Int): Unit

    Permalink

    Sets the maximum degree of parallelism defined for the program.

    Sets the maximum degree of parallelism defined for the program. The maximum degree of parallelism specifies the upper limit for dynamic scaling. It also defines the number of key groups used for partitioned state.

  75. def setMultiHeadChainMode(multiHeadChainMode: Boolean): StreamExecutionEnvironment

    Permalink
  76. def setParallelism(parallelism: Int): Unit

    Permalink

    Sets the parallelism for operations executed through this environment.

    Sets the parallelism for operations executed through this environment. Setting a parallelism of x here will cause all operators (such as join, map, reduce) to run with x parallel instances. This value can be overridden by specific operations using DataStream#setParallelism(int).
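
    For illustration (the values are arbitrary); the sketch also shows setMaxParallelism from above and a per-operator override:

      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      env.setParallelism(4)       // default parallelism for all operators in this environment
      env.setMaxParallelism(128)  // upper bound for rescaling; also the number of key groups

      // A single operator may still override the environment default:
      env.fromElements(1, 2, 3).map(_ + 1).setParallelism(2).print()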

  77. def setRestartStrategy(restartStrategyConfiguration: RestartStrategyConfiguration): Unit

    Permalink

    Sets the restart strategy configuration.

    Sets the restart strategy configuration. The configuration specifies which restart strategy will be used for the execution graph in case of a restart.

    restartStrategyConfiguration

    Restart strategy configuration to be set

    Annotations
    @PublicEvolving()
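
    For example, a fixed-delay strategy that retries three times with ten seconds between attempts (the values are arbitrary):

      import org.apache.flink.api.common.restartstrategy.RestartStrategies
      import org.apache.flink.api.common.time.Time
      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, Time.seconds(10)))
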
  78. def setStateBackend(backend: StateBackend): StreamExecutionEnvironment

    Permalink

    Sets the state backend that describes how to store and checkpoint operator state.

    Sets the state backend that describes how to store and checkpoint operator state. It defines both which data structures hold state during execution (for example hash tables, RocksDB, or other data stores) as well as where checkpointed data will be persisted.

    State managed by the state backend includes both keyed state that is accessible on keyed streams, as well as state maintained directly by the user code that implements CheckpointedFunction.

    The org.apache.flink.runtime.state.memory.MemoryStateBackend, for example, maintains the state in heap memory, as objects. It is lightweight without extra dependencies, but can checkpoint only small states (some counters).

    In contrast, the org.apache.flink.runtime.state.filesystem.FsStateBackend stores checkpoints of the state (also maintained as heap objects) in files. When using a replicated file system (like HDFS, S3, MapR FS, Tachyon, etc.) this guarantees that state is not lost upon failures of individual nodes and that the streaming program can be executed in a highly available and strongly consistent fashion.

    Annotations
    @PublicEvolving()
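
    A hedged sketch using the file-system backend; the checkpoint URI is a placeholder:

      import org.apache.flink.runtime.state.filesystem.FsStateBackend
      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      // Working state stays on the heap; checkpoints are persisted to a durable file system.
      env.setStateBackend(new FsStateBackend("hdfs://namenode:8020/flink/checkpoints"))
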
  79. def setStreamTimeCharacteristic(characteristic: TimeCharacteristic): Unit

    Permalink

    Sets the time characteristic for all streams created from this environment, e.g., processing time, event time, or ingestion time.

    Sets the time characteristic for all streams created from this environment, e.g., processing time, event time, or ingestion time.

    If you set the characteristic to IngestionTime or EventTime, this will set a default watermark update interval of 200 ms. If this is not applicable for your application, you should change it using org.apache.flink.api.common.ExecutionConfig#setAutoWatermarkInterval(long).

    characteristic

    The time characteristic.

    Annotations
    @PublicEvolving()
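
    For example (the 500 ms watermark interval is an arbitrary illustration):

      import org.apache.flink.streaming.api.TimeCharacteristic
      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

      // EventTime/IngestionTime imply a default watermark interval of 200 ms; adjust it if needed.
      env.getConfig.setAutoWatermarkInterval(500L)
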
  80. def socketTextStream(hostname: String, port: Int, delimiter: Char = '\n', maxRetry: Long = 0): DataStream[String]

    Permalink

    Creates a new DataStream that contains the strings received infinitely from the socket.

    Creates a new DataStream that contains the strings received infinitely from the socket. Received strings are decoded using the system's default character set. The maximum retry interval is specified in seconds; in case of a temporary service outage, reconnection is initiated every second.

    Annotations
    @PublicEvolving()
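
    For example (host and port are placeholders):

      import org.apache.flink.streaming.api.scala._

      val env = StreamExecutionEnvironment.getExecutionEnvironment
      val lines: DataStream[String] = env.socketTextStream("localhost", 9999)
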
  81. def stopJob(jobId: JobID): Unit

    Permalink

    Stops a submitted job with the given JobID.

  82. def submit(): JobSubmissionResult

    Permalink
  83. def submit(jobName: String): JobSubmissionResult

    Permalink
  84. final def synchronized[T0](arg0: ⇒ T0): T0

    Permalink
    Definition Classes
    AnyRef
  85. def toString(): String

    Permalink
    Definition Classes
    AnyRef → Any
  86. def triggerSavepoint(jobId: String, path: String): String

    Permalink
  87. def triggerSavepoint(jobId: String): String

    Permalink
  88. final def wait(): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  89. final def wait(arg0: Long, arg1: Int): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  90. final def wait(arg0: Long): Unit

    Permalink
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def enableCheckpointing(): StreamExecutionEnvironment

    Permalink

    Method for enabling fault-tolerance.

    Method for enabling fault-tolerance. Activates monitoring and backup of streaming operator states. The time interval between state checkpoints is specified in milliseconds.

    Setting this option assumes that the job is used in production; unless stated otherwise by calling the setRestartStrategy method, the job will, in case of failure, be resubmitted to the cluster indefinitely.

    Annotations
    @deprecated @PublicEvolving()
    Deprecated
  2. def enableCheckpointing(interval: Long, mode: CheckpointingMode, force: Boolean): StreamExecutionEnvironment

    Permalink

    Enables checkpointing for the streaming job.

    Enables checkpointing for the streaming job. The distributed state of the streaming dataflow will be periodically snapshotted. In case of a failure, the streaming dataflow will be restarted from the latest completed checkpoint.

    The job draws checkpoints periodically, in the given interval. The state will be stored in the configured state backend.

    NOTE: Checkpointing iterative streaming dataflows is not properly supported at the moment. If the "force" parameter is set to true, the system will execute the job nonetheless.

    interval

    Time interval between state checkpoints in millis.

    mode

    The checkpointing mode, selecting between "exactly once" and "at least once" guarantees.

    force

    If true checkpointing will be enabled for iterative jobs as well.

    Annotations
    @deprecated @PublicEvolving()
    Deprecated
  3. def getNumberOfExecutionRetries: Int

    Permalink

    Gets the number of times the system will try to re-execute failed tasks.

    Gets the number of times the system will try to re-execute failed tasks. A value of "-1" indicates that the system default value (as defined in the configuration) should be used.

    Annotations
    @PublicEvolving()
    Deprecated

    This method will be replaced by getRestartStrategy. The FixedDelayRestartStrategyConfiguration contains the number of execution retries.

  4. def readFile[T](inputFormat: FileInputFormat[T], filePath: String, watchType: FileProcessingMode, interval: Long, filter: FilePathFilter)(implicit arg0: TypeInformation[T]): DataStream[T]

    Permalink

    Reads the contents of the user-specified path based on the given FileInputFormat.

    Reads the contents of the user-specified path based on the given FileInputFormat. Depending on the provided FileProcessingMode, the source may periodically monitor the path for new data (FileProcessingMode.PROCESS_CONTINUOUSLY), or process the data currently in the path once and exit (FileProcessingMode.PROCESS_ONCE).

    inputFormat

    The input format used to create the data stream

    filePath

    The path of the file, as a URI (e.g., "file:///some/local/file" or "hdfs://host:port/file/path")

    watchType

    The mode in which the source should operate, i.e. monitor path and react to new data, or process once and exit

    interval

    In the case of periodic path monitoring, this specifies the interval (in millis) between consecutive path scans

    filter

    The files to be excluded from the processing

    returns

    The data stream that represents the data read from the given file

    Annotations
    @PublicEvolving() @Deprecated
    Deprecated

    Use FileInputFormat#setFilesFilter(FilePathFilter) to set a filter and readFile(FileInputFormat, String, FileProcessingMode, long) to read the path.

  5. def setNumberOfExecutionRetries(numRetries: Int): Unit

    Permalink

    Sets the number of times that failed tasks are re-executed.

    Sets the number of times that failed tasks are re-executed. A value of zero effectively disables fault tolerance. A value of "-1" indicates that the system default value (as defined in the configuration) should be used.

    Annotations
    @PublicEvolving()
    Deprecated

    This method will be replaced by setRestartStrategy(). The FixedDelayRestartStrategyConfiguration contains the number of execution retries.

  6. def setStateBackend(backend: AbstractStateBackend): StreamExecutionEnvironment

    Permalink

    Annotations
    @Deprecated @PublicEvolving()
    Deprecated

    Use StreamExecutionEnvironment.setStateBackend(StateBackend) instead.

Inherited from AnyRef

Inherited from Any
