com.memsql.spark.etl.api

SimpleByteArrayTransformer

abstract class SimpleByteArrayTransformer extends ByteArrayTransformer

Convenience wrapper around ByteArrayTransformer for initializing and transforming extracted org.apache.spark.rdd.RDDs.
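
For example, a minimal subclass only needs to implement transform. The sketch below assumes each byte-array record is a UTF-8 encoded string; the class name and output column name are illustrative, and the PhaseLogger import path may differ between library versions.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, SQLContext}
import com.memsql.spark.etl.api.{SimpleByteArrayTransformer, UserTransformConfig}
import com.memsql.spark.etl.utils.PhaseLogger

// Hypothetical subclass: treats each byte-array record as a UTF-8 string
// and emits a single-column DataFrame for the Loader.
class UTF8LineTransformer extends SimpleByteArrayTransformer {
  override def transform(sqlContext: SQLContext,
                         rdd: RDD[Array[Byte]],
                         config: UserTransformConfig,
                         logger: PhaseLogger): DataFrame = {
    import sqlContext.implicits._
    rdd.map(bytes => new String(bytes, "UTF-8")).toDF("line")
  }
}
```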

Linear Supertypes
ByteArrayTransformer, Transformer[Array[Byte]], Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new SimpleByteArrayTransformer()

Abstract Value Members

  1. abstract def transform(sqlContext: SQLContext, rdd: RDD[Array[Byte]], config: UserTransformConfig, logger: PhaseLogger): DataFrame

    Convenience method for transforming org.apache.spark.rdd.RDDs into org.apache.spark.sql.DataFrames. This is called once per batch on the org.apache.spark.rdd.RDD generated by the Extractor, and the result is passed to the Loader.

    sqlContext

    The SQLContext that is used to run this pipeline. NOTE: If the pipeline is running in MemSQL Streamliner, this is an instance of com.memsql.spark.context.MemSQLContext, which has additional metadata about the MemSQL cluster.

    rdd

    The org.apache.spark.rdd.RDD for this batch generated by the Extractor.

    config

    The user defined configuration passed from MemSQL Ops.

    logger

    A logger instance that is integrated with MemSQL Ops.

    returns

    An org.apache.spark.sql.DataFrame with the transformed data to be loaded.
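
    In practice the transform body decodes each record and attaches an explicit schema before handing rows to Spark SQL. A sketch of that decoding step, assuming CSV-like "id,value" records (the field names and split format are illustrative):

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, Row, SQLContext}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Illustrative helper: interpret each byte-array record as a UTF-8
// "id,value" pair and build a DataFrame with an explicit schema.
def toDataFrame(sqlContext: SQLContext, rdd: RDD[Array[Byte]]): DataFrame = {
  val rows = rdd.map { bytes =>
    val fields = new String(bytes, "UTF-8").split(",", 2)
    Row(fields(0), fields(1))
  }
  val schema = StructType(Seq(
    StructField("id", StringType, nullable = false),
    StructField("value", StringType, nullable = true)))
  sqlContext.createDataFrame(rows, schema)
}
```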

Concrete Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. final val byteUtils: ByteUtils.type

    Definition Classes
    ByteArrayTransformer
  8. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  9. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  10. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  11. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  12. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  13. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  14. def initialize(sqlContext: SQLContext, config: UserTransformConfig, logger: PhaseLogger): Unit

    Initialization code for your Transformer. This is called after instantiation of your Transformer and before Transformer.transform. The default implementation does nothing.

    sqlContext

    The SQLContext that is used to run this pipeline. NOTE: If the pipeline is running in MemSQL Streamliner, this is an instance of com.memsql.spark.context.MemSQLContext, which has additional metadata about the MemSQL cluster.

    config

    The user defined configuration passed from MemSQL Ops.

    logger

    A logger instance that is integrated with MemSQL Ops.
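
    For example, a subclass might override initialize to read user configuration once, before any batches run. This sketch assumes UserTransformConfig exposes getConfigString returning an Option and that PhaseLogger has an info method; the config key and column name are hypothetical.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, SQLContext}
import com.memsql.spark.etl.api.{SimpleByteArrayTransformer, UserTransformConfig}
import com.memsql.spark.etl.utils.PhaseLogger

// Hypothetical subclass that reads configuration once in initialize
// and reuses it for every batch in transform.
class ConfiguredTransformer extends SimpleByteArrayTransformer {
  private var columnName: String = "line"

  override def initialize(sqlContext: SQLContext,
                          config: UserTransformConfig,
                          logger: PhaseLogger): Unit = {
    // "column_name" is an illustrative key; fall back to a default if absent.
    columnName = config.getConfigString("column_name").getOrElse("line")
    logger.info(s"emitting column $columnName")
  }

  override def transform(sqlContext: SQLContext,
                         rdd: RDD[Array[Byte]],
                         config: UserTransformConfig,
                         logger: PhaseLogger): DataFrame = {
    import sqlContext.implicits._
    rdd.map(bytes => new String(bytes, "UTF-8")).toDF(columnName)
  }
}
```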

  15. final def initialize(sqlContext: SQLContext, config: PhaseConfig, logger: PhaseLogger): Unit

    Initialization code for this Transformer.

    sqlContext

    The SQLContext that is used to run this pipeline. NOTE: If the pipeline is running in MemSQL Streamliner, this is an instance of com.memsql.spark.context.MemSQLContext, which has additional metadata about the MemSQL cluster.

    config

    The Transformer configuration passed from MemSQL Ops.

    logger

    A logger instance that is integrated with MemSQL Ops.

    Definition Classes
    SimpleByteArrayTransformer → Transformer
  16. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  17. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  18. final def notify(): Unit

    Definition Classes
    AnyRef
  19. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  20. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  21. def toString(): String

    Definition Classes
    AnyRef → Any
  22. final def transform(sqlContext: SQLContext, rdd: RDD[Array[Byte]], config: PhaseConfig, logger: PhaseLogger): DataFrame

    Transforms the incoming org.apache.spark.rdd.RDD into a org.apache.spark.sql.DataFrame.

    sqlContext

    The SQLContext that is used to run this pipeline. NOTE: If the pipeline is running in MemSQL Streamliner, this is an instance of com.memsql.spark.context.MemSQLContext, which has additional metadata about the MemSQL cluster.

    rdd

    The org.apache.spark.rdd.RDD generated by the Extractor for this batch.

    config

    The Transformer configuration passed from MemSQL Ops.

    logger

    A logger instance that is integrated with MemSQL Ops.

    returns

    An org.apache.spark.sql.DataFrame with the transformed data to be loaded.

    Definition Classes
    SimpleByteArrayTransformer → Transformer
  23. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  24. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  25. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Inherited from ByteArrayTransformer

Inherited from Transformer[Array[Byte]]

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any
