org.apache.spark.sql.execution.streaming

StreamExecution

class StreamExecution extends StreamingQuery with ProgressReporter with Logging

Manages the execution of a streaming Spark SQL query that is occurring in a separate thread. Unlike a standard query, a streaming query executes repeatedly each time new data arrives at any Source present in the query plan. Whenever new data arrives, a QueryExecution is created and the results are committed transactionally to the given Sink.
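
A StreamExecution is created internally when a streaming query is started through org.apache.spark.sql.streaming.DataStreamWriter.start(); the StreamingQuery handle returned to the caller is backed by a StreamExecution running on its own thread. A minimal sketch of starting such a query is shown below (the source, sink, checkpoint path, and trigger interval are illustrative, not prescribed by this class):

  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.streaming.ProcessingTime

  val spark = SparkSession.builder().appName("stream-execution-sketch").getOrCreate()

  // A streaming DataFrame read from a socket source (host/port are illustrative).
  val lines = spark.readStream
    .format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()

  // start() creates a StreamExecution on a separate thread and returns the
  // StreamingQuery handle whose behaviour is documented on this page.
  val query = lines.writeStream
    .format("console")
    .outputMode("append")
    .queryName("stream-execution-sketch")
    .option("checkpointLocation", "/tmp/stream-execution-sketch")
    .trigger(ProcessingTime("10 seconds"))
    .start()

  query.awaitTermination()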

Linear Supertypes
ProgressReporter, Logging, StreamingQuery, AnyRef, Any

Instance Constructors

  1. new StreamExecution(sparkSession: SparkSession, name: String, checkpointRoot: String, analyzedPlan: LogicalPlan, sink: Sink, trigger: Trigger, triggerClock: Clock, outputMode: OutputMode, deleteCheckpointOnStop: Boolean)

    deleteCheckpointOnStop

    whether to delete the checkpoint if the query is stopped without errors

Type Members

  1. case class ExecutionStats(inputRows: Map[Source, Long], stateOperators: Seq[StateOperatorProgress], eventTimeStats: Map[String, String]) extends Product with Serializable

    Definition Classes
    ProgressReporter

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. var availableOffsets: StreamProgress

    Tracks the offsets that are available to be processed, but have not yet been committed to the sink. Only the scheduler thread should modify this field, and only in atomic steps. Other threads should make a shallow copy if they are going to access this field more than once, since the field's value may change at any time.

    Definition Classes
    StreamExecution → ProgressReporter
  8. def awaitInitialization(timeoutMs: Long): Unit

    Await until all fields of the query have been initialized.

  9. def awaitTermination(timeoutMs: Long): Boolean

    Waits for the termination of this query, either by query.stop() or by an exception. If the query has terminated with an exception, then the exception will be thrown. Otherwise, it returns whether or not the query has terminated within the timeoutMs milliseconds. (See the usage sketch after this member list.)

    If the query has terminated, then all subsequent calls to this method will either return true immediately (if the query was terminated by stop()), or throw the exception immediately (if the query has terminated with an exception).

    Definition Classes
    StreamExecution → StreamingQuery
    Since

    2.0.0

    Exceptions thrown
    StreamingQueryException

    if the query has terminated with an exception

  10. def awaitTermination(): Unit

    Waits for the termination of this query, either by query.stop() or by an exception. If the query has terminated with an exception, then the exception will be thrown.

    If the query has terminated, then all subsequent calls to this method will either return immediately (if the query was terminated by stop()), or throw the exception immediately (if the query has terminated with an exception).

    Definition Classes
    StreamExecution → StreamingQuery
    Since

    2.0.0

    Exceptions thrown
    StreamingQueryException

    if the query has terminated with an exception.

  11. val batchCommitLog: BatchCommitLog

    A log that records the batch ids that have completed. This is used to check whether a batch was fully processed and its output committed to the sink, so that it does not need to be processed again. It is used (for instance) during restart, to help identify which batch to run next.

  12. val checkpointRoot: String

  13. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  14. var committedOffsets: StreamProgress

    Tracks how much data we have processed and committed to the sink or state store from each input source. Only the scheduler thread should modify this field, and only in atomic steps. Other threads should make a shallow copy if they are going to access this field more than once, since the field's value may change at any time.

    Definition Classes
    StreamExecution → ProgressReporter
  15. var currentBatchId: Long

    The current batchId or -1 if execution has not yet been initialized.

    Attributes
    protected
    Definition Classes
    StreamExecution → ProgressReporter
  16. var currentStatus: StreamingQueryStatus

    Attributes
    protected
    Definition Classes
    ProgressReporter
  17. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  18. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  19. def exception: Option[StreamingQueryException]

    Returns the StreamingQueryException if the query was terminated by an exception.

    Definition Classes
    StreamExecution → StreamingQuery
  20. def explain(): Unit

    Prints the physical plan to the console for debugging purposes.

    Definition Classes
    StreamExecution → StreamingQuery
    Since

    2.0.0

  21. def explain(extended: Boolean): Unit

    Prints the physical plan to the console for debugging purposes.

    extended

    whether to do extended explain or not

    Definition Classes
    StreamExecution → StreamingQuery
    Since

    2.0.0

  22. def explainInternal(extended: Boolean): String

    Exposed for tests.

  23. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  24. def finishTrigger(hasNewData: Boolean): Unit

    Finalizes the query progress and adds it to the list of recent status updates.

    Attributes
    protected
    Definition Classes
    ProgressReporter
  25. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  26. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  27. val id: UUID

    Returns the unique id of this query that persists across restarts from checkpoint data. That is, this id is generated when a query is started for the first time, and will be the same every time it is restarted from checkpoint data. Also see runId.

    Definition Classes
    StreamExecution → ProgressReporter → StreamingQuery
    Since

    2.1.0

  28. def initializeLogIfNecessary(isInterpreter: Boolean): Unit

    Attributes
    protected
    Definition Classes
    Logging
  29. def isActive: Boolean

    Whether the query is currently active or not.

    Definition Classes
    StreamExecution → StreamingQuery
  30. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  31. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  32. var lastExecution: IncrementalExecution

    Definition Classes
    StreamExecution → ProgressReporter
  33. def lastProgress: StreamingQueryProgress

    Returns the most recent query progress update or null if there were no progress updates.

    Definition Classes
    ProgressReporter
  34. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  35. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  36. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  37. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  38. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  39. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  40. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  41. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  42. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  43. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  44. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  45. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  46. lazy val logicalPlan: LogicalPlan

    Definition Classes
    StreamExecution → ProgressReporter
  47. val microBatchThread: StreamExecutionThread

    The thread that runs the micro-batches of this stream. Note that this thread must be an org.apache.spark.util.UninterruptibleThread to work around KAFKA-1894: interrupting a running KafkaConsumer may cause an endless loop.

  48. val name: String

    Returns the user-specified name of the query, or null if not specified. This name can be specified in the org.apache.spark.sql.streaming.DataStreamWriter as dataframe.writeStream.queryName("query").start(). This name, if set, must be unique across all active queries.

    Definition Classes
    StreamExecution → ProgressReporter → StreamingQuery
    Since

    2.0.0

  49. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  50. var newData: Map[Source, DataFrame]

    Holds the most recent input data for each source.

    Attributes
    protected
    Definition Classes
    StreamExecution → ProgressReporter
  51. final def notify(): Unit

    Definition Classes
    AnyRef
  52. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  53. val offsetLog: OffsetSeqLog

    A write-ahead log that records the offsets that are present in each batch. In order to ensure that a given batch will always consist of the same data, we write to this log *before* any processing is done. Thus, the Nth record in this log indicates data that is currently being processed and the (N-1)th entry indicates which offsets have been durably committed to the sink.

  54. var offsetSeqMetadata: OffsetSeqMetadata

    Metadata associated with the offset seq of a batch in the query.

    Attributes
    protected
    Definition Classes
    StreamExecution → ProgressReporter
  55. val outputMode: OutputMode

  56. def postEvent(event: Event): Unit

    Attributes
    protected
    Definition Classes
    StreamExecution → ProgressReporter
  57. def processAllAvailable(): Unit

    Blocks until all available data in the source has been processed and committed to the sink. This method is intended for testing. Note that in the case of continually arriving data, this method may block forever. Additionally, this method is only guaranteed to block until data that was synchronously appended to an org.apache.spark.sql.execution.streaming.Source prior to invocation has been processed (i.e. getOffset must immediately reflect the addition). (See the usage sketch after this member list.)

    Definition Classes
    StreamExecution → StreamingQuery
    Since

    2.0.0

  58. def recentProgress: Array[StreamingQueryProgress]

    Returns an array containing the most recent query progress updates.

    Definition Classes
    ProgressReporter
  59. def reportTimeTaken[T](triggerDetailKey: String)(body: ⇒ T): T

    Records the duration of running body for the next query progress update.

    Attributes
    protected
    Definition Classes
    ProgressReporter
  60. val runId: UUID

    Returns the unique id of this run of the query. That is, every start/restart of a query will generate a unique runId. Therefore, every time a query is restarted from checkpoint, it will have the same id but different runIds.

    Definition Classes
    StreamExecution → ProgressReporter → StreamingQuery
  61. val sink: Sink

    Definition Classes
    StreamExecution → ProgressReporter
  62. var sources: Seq[Source]

    All stream sources present in the query plan. This will be set when generating the logical plan.

    Attributes
    protected
    Definition Classes
    StreamExecution → ProgressReporter
  63. val sparkSession: SparkSession

    Returns the SparkSession associated with this query.

    Definition Classes
    StreamExecution → ProgressReporter → StreamingQuery
    Since

    2.0.0

  64. def start(): Unit

    Starts the execution. This returns only after the thread has started and QueryStartedEvent has been posted to all the listeners.

  65. def startTrigger(): Unit

    Begins recording statistics about query progress for a given trigger.

    Attributes
    protected
    Definition Classes
    ProgressReporter
  66. def status: StreamingQueryStatus

    Returns the current status of the query.

    Definition Classes
    ProgressReporter
  67. def stop(): Unit

    Signals to the thread executing micro-batches that it should stop running after the next batch. This method blocks until the thread stops running.

    Definition Classes
    StreamExecution → StreamingQuery
  68. val streamMetadata: StreamMetadata

    Metadata associated with the whole query.

    Attributes
    protected
  69. lazy val streamMetrics: MetricsReporter

    Used to report metrics to Codahale (Dropwizard Metrics). This uses id for easier tracking across restarts.

  70. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  71. def toString(): String

    Definition Classes
    StreamExecution → AnyRef → Any
  72. val trigger: Trigger

  73. val triggerClock: Clock

    Definition Classes
    StreamExecution → ProgressReporter
  74. def updateStatusMessage(message: String): Unit

    Updates the message returned in status.

    Attributes
    protected
    Definition Classes
    ProgressReporter
  75. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  76. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  77. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
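
Usage sketch referenced from awaitTermination and processAllAvailable above: given a StreamingQuery handle named query obtained from DataStreamWriter.start() (and therefore backed by a StreamExecution), the termination-related members can be exercised as follows; the handle name and the timeout value are illustrative.

  // Test helper: block until all data currently available in the sources has
  // been processed and committed to the sink (may block forever if new data
  // keeps arriving).
  query.processAllAvailable()

  // Ask the micro-batch thread to stop after the next batch; blocks until the
  // thread has stopped running.
  query.stop()

  // Block for at most 30 seconds; returns true if the query terminated within
  // the timeout, false otherwise, and throws the StreamingQueryException if
  // the query failed.
  val terminated: Boolean = query.awaitTermination(30000L)

  // If the query terminated with an error, the wrapped cause is exposed here.
  query.exception.foreach(e => println(e.getMessage))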

Inherited from ProgressReporter

Inherited from Logging

Inherited from StreamingQuery

Inherited from AnyRef

Inherited from Any
