Recreate a SnappyStreamingContext from a checkpoint file using an existing SparkContext.
Path to the directory that was specified as the checkpoint directory
Existing SparkContext
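As a sketch, assuming SnappyStreamingContext mirrors StreamingContext's `(checkpointPath, sparkContext)` constructor, recovery with an existing SparkContext might look like this (the checkpoint path and conf values are illustrative):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.SnappyStreamingContext

// An existing SparkContext (the conf values are illustrative).
val sc = SparkContext.getOrCreate(
  new SparkConf().setAppName("recovery-example").setMaster("local[*]"))

// Recreate the streaming context from the checkpoint directory,
// reusing `sc` instead of constructing a new SparkContext.
val ssc = new SnappyStreamingContext("/path/to/checkpoint", sc)
```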
Recreate a SnappyStreamingContext from a checkpoint file.
Path to the directory that was specified as the checkpoint directory
Recreate a SnappyStreamingContext from a checkpoint file.
Path to the directory that was specified as the checkpoint directory
Optional configuration object, needed when reading from HDFS-compatible filesystems
Create a SnappyStreamingContext by providing the configuration necessary for a new SparkContext.
an org.apache.spark.SparkConf object specifying Spark parameters
the time interval at which streaming data will be divided into batches
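A sketch of creating a fresh context from a SparkConf with a 2-second batch interval (all configuration values here are illustrative):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, SnappyStreamingContext}

val conf = new SparkConf()
  .setAppName("SnappyStreamingExample")
  .setMaster("local[*]")  // illustrative; use your cluster URL in production

// Incoming data is divided into 2-second batches.
val ssc = new SnappyStreamingContext(conf, Seconds(2))
```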
Create a SnappyStreamingContext using an existing SparkContext.
existing SparkContext
the time interval at which streaming data will be divided into batches
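When a SparkContext is already running, the streaming context can be built directly on top of it; a minimal sketch (the batch interval is illustrative):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.streaming.{Seconds, SnappyStreamingContext}

// Assumes `sc` is an existing SparkContext obtained elsewhere in the application.
def fromExisting(sc: SparkContext): SnappyStreamingContext =
  new SnappyStreamingContext(sc, Seconds(2))  // 2-second batches
```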
Creates a SchemaDStream from a DStream of Product (e.g. case classes).
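A sketch of applying a schema to a stream of case-class instances (the `Tweet` class and the source stream are illustrative assumptions):

```scala
import org.apache.spark.streaming.SnappyStreamingContext
import org.apache.spark.streaming.dstream.DStream

// Illustrative case class; the schema is inferred from its fields.
case class Tweet(id: Long, text: String)

// Assumes `ssc` is a SnappyStreamingContext and `tweets` is a
// DStream[Tweet] obtained from some source (socket, Kafka, etc.).
def toSchemaStream(ssc: SnappyStreamingContext, tweets: DStream[Tweet]) =
  ssc.createSchemaDStream(tweets)
```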
Registers and executes the given SQL query and returns a SchemaDStream to consume the results.
the query to register
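A sketch of registering a continuous query; the stream table name and window clause are illustrative, and `foreachDataFrame` on the returned SchemaDStream is an assumed consumption API:

```scala
// Assumes `ssc` is a SnappyStreamingContext and a stream table named
// `tweetStream` has been registered elsewhere.
// Register a CQ over the stream with a 10-second tumbling window.
val result = ssc.registerCQ(
  "SELECT text FROM tweetStream WINDOW (DURATION 10 SECONDS, SLIDE 10 SECONDS)")

// Consume each batch of results as a DataFrame (assumed API).
result.foreachDataFrame(df => df.show())
```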
Start the execution of the streams. Also registers population of AQP tables from stream tables if present.
Throws IllegalStateException if the StreamingContext is already stopped.
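The usual lifecycle is to define all streams and queries first, then start the context once; a sketch:

```scala
// Define sources, schemas, and CQs before this point.
ssc.start()             // throws IllegalStateException if already stopped
ssc.awaitTermination()  // block until the streams are stopped
```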
Main entry point for SnappyData extensions to Spark Streaming. A SnappyStreamingContext extends Spark's org.apache.spark.streaming.StreamingContext to provide the ability to run SQL-like queries on an org.apache.spark.streaming.dstream.DStream. You can apply a schema and register continuous SQL queries (CQs) over data streams. A single shared SnappyStreamingContext makes it possible to reuse executors across client connections or applications.
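Putting the pieces together, an end-to-end sketch under stated assumptions (the socket source, `Line` case class, table registration call, and CQ text are all illustrative):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, SnappyStreamingContext}

object StreamingSqlExample {
  case class Line(text: String)

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("StreamingSqlExample")
      .setMaster("local[*]")  // illustrative
    val ssc = new SnappyStreamingContext(conf, Seconds(2))

    // Apply a schema to a raw socket stream (host/port illustrative).
    val lines = ssc.socketTextStream("localhost", 9999).map(Line(_))
    val schemaStream = ssc.createSchemaDStream(lines)
    schemaStream.registerAsTable("lines")  // assumed registration API

    // Continuous query over the registered stream table.
    val counts = ssc.registerCQ(
      "SELECT count(*) FROM lines WINDOW (DURATION 2 SECONDS, SLIDE 2 SECONDS)")
    counts.foreachDataFrame(_.show())

    ssc.start()
    ssc.awaitTermination()
  }
}
```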