com.datastax.spark.connector.rdd.partitioner

CassandraPartitionedRDD


class CassandraPartitionedRDD[T] extends RDD[T]

An RDD created by repartitionByCassandraReplica, with preferred locations mapping to the Cassandra replicas each partition was created for.

Linear Supertypes
RDD[T], Logging, Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new CassandraPartitionedRDD(prev: RDD[T], keyspace: String, table: String)(implicit ct: ClassTag[T])

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. def +(other: String): String

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to any2stringadd[CassandraPartitionedRDD[T]] performed by method any2stringadd in scala.Predef.
    Definition Classes
    any2stringadd
  4. def ++(other: RDD[T]): RDD[T]

    Definition Classes
    RDD
  5. def ->[B](y: B): (CassandraPartitionedRDD[T], B)

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to ArrowAssoc[CassandraPartitionedRDD[T]] performed by method ArrowAssoc in scala.Predef. This conversion will take place only if T is (Nothing, Nothing) (T =:= (Nothing, Nothing)).
    Definition Classes
    ArrowAssoc
    Annotations
    @inline()
  6. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  7. def aggregate[U](zeroValue: U)(seqOp: (U, T) ⇒ U, combOp: (U, U) ⇒ U)(implicit arg0: ClassTag[U]): U

    Definition Classes
    RDD
  8. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  9. def cache(): CassandraPartitionedRDD.this.type

    Definition Classes
    RDD
  10. def cartesian[U](other: RDD[U])(implicit arg0: ClassTag[U]): RDD[(T, U)]

    Definition Classes
    RDD
  11. def checkpoint(): Unit

    Definition Classes
    RDD
  12. def clearDependencies(): Unit

    Attributes
    protected
    Definition Classes
    RDD
  13. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  14. def coalesce(numPartitions: Int, shuffle: Boolean)(implicit ord: Ordering[T]): RDD[T]

    Definition Classes
    RDD
  15. def collect[U](f: PartialFunction[T, U])(implicit arg0: ClassTag[U]): RDD[U]

    Definition Classes
    RDD
  16. def collect(): Array[T]

    Definition Classes
    RDD
  17. def compute(split: Partition, context: TaskContext): Iterator[T]

    Definition Classes
    CassandraPartitionedRDD → RDD
  18. def context: SparkContext

    Definition Classes
    RDD
  19. def count(): Long

    Definition Classes
    RDD
  20. def countApprox(timeout: Long, confidence: Double): PartialResult[BoundedDouble]

    Definition Classes
    RDD
    Annotations
    @Experimental()
  21. def countApproxDistinct(relativeSD: Double): Long

    Definition Classes
    RDD
  22. def countApproxDistinct(p: Int, sp: Int): Long

    Definition Classes
    RDD
    Annotations
    @Experimental()
  23. def countByValue()(implicit ord: Ordering[T]): Map[T, Long]

    Definition Classes
    RDD
  24. def countByValueApprox(timeout: Long, confidence: Double)(implicit ord: Ordering[T]): PartialResult[Map[T, BoundedDouble]]

    Definition Classes
    RDD
    Annotations
    @Experimental()
  25. final def dependencies: Seq[Dependency[_]]

    Definition Classes
    RDD
  26. def distinct(): RDD[T]

    Definition Classes
    RDD
  27. def distinct(numPartitions: Int)(implicit ord: Ordering[T]): RDD[T]

    Definition Classes
    RDD
  28. def ensuring(cond: (CassandraPartitionedRDD[T]) ⇒ Boolean, msg: ⇒ Any): CassandraPartitionedRDD[T]

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to Ensuring[CassandraPartitionedRDD[T]] performed by method Ensuring in scala.Predef.
    Definition Classes
    Ensuring
  29. def ensuring(cond: (CassandraPartitionedRDD[T]) ⇒ Boolean): CassandraPartitionedRDD[T]

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to Ensuring[CassandraPartitionedRDD[T]] performed by method Ensuring in scala.Predef.
    Definition Classes
    Ensuring
  30. def ensuring(cond: Boolean, msg: ⇒ Any): CassandraPartitionedRDD[T]

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to Ensuring[CassandraPartitionedRDD[T]] performed by method Ensuring in scala.Predef.
    Definition Classes
    Ensuring
  31. def ensuring(cond: Boolean): CassandraPartitionedRDD[T]

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to Ensuring[CassandraPartitionedRDD[T]] performed by method Ensuring in scala.Predef.
    Definition Classes
    Ensuring
  32. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  33. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  34. def filter(f: (T) ⇒ Boolean): RDD[T]

    Definition Classes
    RDD
  35. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  36. def first(): T

    Definition Classes
    RDD
  37. def firstParent[U](implicit arg0: ClassTag[U]): RDD[U]

    Attributes
    protected[org.apache.spark]
    Definition Classes
    RDD
  38. def flatMap[U](f: (T) ⇒ TraversableOnce[U])(implicit arg0: ClassTag[U]): RDD[U]

    Definition Classes
    RDD
  39. def fold(zeroValue: T)(op: (T, T) ⇒ T): T

    Definition Classes
    RDD
  40. def foreach(f: (T) ⇒ Unit): Unit

    Definition Classes
    RDD
  41. def foreachPartition(f: (Iterator[T]) ⇒ Unit): Unit

    Definition Classes
    RDD
  42. def formatted(fmtstr: String): String

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to StringFormat[CassandraPartitionedRDD[T]] performed by method StringFormat in scala.Predef.
    Definition Classes
    StringFormat
    Annotations
    @inline()
  43. def getCheckpointFile: Option[String]

    Definition Classes
    RDD
  44. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  45. def getDependencies: Seq[Dependency[_]]

    Attributes
    protected
    Definition Classes
    RDD
  46. def getPartitions: Array[Partition]

    This RDD was partitioned with the ReplicaPartitioner, so that partitioner can be used to obtain preferred-location data.

    Definition Classes
    CassandraPartitionedRDD → RDD
  47. def getPreferredLocations(split: Partition): Seq[String]

    This method currently uses Thrift to determine the local endpoint and RPC endpoints of the node each partition belongs to. This gives a better chance of matching the bound interface of the Spark executor. In addition, the host name and host address obtained for each of these endpoints are added to cover all of the logical choices.

    Definition Classes
    CassandraPartitionedRDD → RDD
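The host name/host address expansion described above can be sketched with the standard library. The helper name `expandEndpoint` is hypothetical and not part of the connector API; it only illustrates emitting both forms of an endpoint so either can match the executor's bound interface.

```scala
import java.net.InetAddress

// Hypothetical helper mirroring the expansion described above: for each
// endpoint, emit both the host name and the host address so that either
// form can match the Spark executor's bound interface.
def expandEndpoint(endpoint: InetAddress): Seq[String] =
  Seq(endpoint.getHostName, endpoint.getHostAddress).distinct

// The loopback address resolves without network access; the host-name
// form it yields is platform-dependent (often "localhost").
val locations = expandEndpoint(InetAddress.getByName("127.0.0.1"))
```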
  48. def getStorageLevel: StorageLevel

    Definition Classes
    RDD
  49. def glom(): RDD[Array[T]]

    Definition Classes
    RDD
  50. def groupBy[K](f: (T) ⇒ K, p: Partitioner)(implicit kt: ClassTag[K], ord: Ordering[K]): RDD[(K, Iterable[T])]

    Definition Classes
    RDD
  51. def groupBy[K](f: (T) ⇒ K, numPartitions: Int)(implicit kt: ClassTag[K]): RDD[(K, Iterable[T])]

    Definition Classes
    RDD
  52. def groupBy[K](f: (T) ⇒ K)(implicit kt: ClassTag[K]): RDD[(K, Iterable[T])]

    Definition Classes
    RDD
  53. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  54. val id: Int

    Definition Classes
    RDD
  55. def intersection(other: RDD[T], numPartitions: Int): RDD[T]

    Definition Classes
    RDD
  56. def intersection(other: RDD[T], partitioner: Partitioner)(implicit ord: Ordering[T]): RDD[T]

    Definition Classes
    RDD
  57. def intersection(other: RDD[T]): RDD[T]

    Definition Classes
    RDD
  58. def isCheckpointed: Boolean

    Definition Classes
    RDD
  59. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  60. def isTraceEnabled(): Boolean

    Attributes
    protected
    Definition Classes
    Logging
  61. final def iterator(split: Partition, context: TaskContext): Iterator[T]

    Definition Classes
    RDD
  62. def joinWithCassandraTable[R](keyspaceName: String, tableName: String, selectedColumns: ColumnSelector = AllColumns, joinColumns: ColumnSelector = PartitionKeyColumns)(implicit connector: CassandraConnector = ..., newType: ClassTag[R], rrf: RowReaderFactory[R], ev: ValidRDDType[R], currentType: ClassTag[T], rwf: RowWriterFactory[T]): CassandraJoinRDD[T, R]

    Uses the data from this RDD to join with a Cassandra table without retrieving the entire table. Any RDD that can be used with saveToCassandra can also be used with joinWithCassandraTable, as can any RDD that specifies only the partition key of a Cassandra table. This method executes single-partition requests against the Cassandra table and accepts the same functional modifiers as a normal com.datastax.spark.connector.rdd.CassandraTableScanRDD.

    By default this method joins only on the partition key, but any combination of columns acceptable to Cassandra can be used in the join. Specify columns with the joinColumns parameter or the on() method.

    Example With Prior Repartitioning:

    val source = sc.parallelize(keys).map(x => new KVRow(x))
    val repart = source.repartitionByCassandraReplica(keyspace, tableName, 10)
    val someCass = repart.joinWithCassandraTable(keyspace, tableName)

    Example Joining on Clustering Columns:

    val source = sc.parallelize(keys).map(x => (x, x * 100))
    val someCass = source.joinWithCassandraTable(keyspace, wideTable).on(SomeColumns("key", "group"))
    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to RDDFunctions[T] performed by method toRDDFunctions in com.datastax.spark.connector.
    Definition Classes
    RDDFunctions
  63. def keyBy[K](f: (T) ⇒ K): RDD[(K, T)]

    Definition Classes
    RDD
  64. def keyByCassandraReplica(keyspaceName: String, tableName: String)(implicit connector: CassandraConnector = ..., currentType: ClassTag[T], rwf: RowWriterFactory[T]): RDD[(Set[InetAddress], T)]

    Keys every row in the RDD with the IP addresses of all of the Cassandra nodes that contain a replica of the data specified by that row. The calling RDD must have rows that can be converted into the partition key of the given Cassandra table.

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to RDDFunctions[T] performed by method toRDDFunctions in com.datastax.spark.connector.
    Definition Classes
    RDDFunctions
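The shape of the result, RDD[(Set[InetAddress], T)], can be sketched with plain Scala collections. The replica map below is a hypothetical stand-in for the real lookup against the cluster's token metadata; addresses and keys are illustrative only.

```scala
import java.net.InetAddress

// Hypothetical replica map: partition-key value -> replica nodes.
// In the real method this lookup is done against cluster metadata.
val replicasForKey: Map[Int, Set[InetAddress]] = Map(
  1 -> Set(InetAddress.getByName("10.0.0.1"), InetAddress.getByName("10.0.0.2")),
  2 -> Set(InetAddress.getByName("10.0.0.2"), InetAddress.getByName("10.0.0.3"))
)

val rows = Seq(1, 2)

// Each row becomes (replica set, row), mirroring RDD[(Set[InetAddress], T)].
val keyed: Seq[(Set[InetAddress], Int)] = rows.map(r => (replicasForKey(r), r))
```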
  65. def log: Logger

    Attributes
    protected
    Definition Classes
    Logging
  66. def logDebug(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  67. def logDebug(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  68. def logError(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  69. def logError(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  70. def logInfo(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  71. def logInfo(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  72. def logName: String

    Attributes
    protected
    Definition Classes
    Logging
  73. def logTrace(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  74. def logTrace(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  75. def logWarning(msg: ⇒ String, throwable: Throwable): Unit

    Attributes
    protected
    Definition Classes
    Logging
  76. def logWarning(msg: ⇒ String): Unit

    Attributes
    protected
    Definition Classes
    Logging
  77. def map[U](f: (T) ⇒ U)(implicit arg0: ClassTag[U]): RDD[U]

    Definition Classes
    RDD
  78. def mapPartitions[U](f: (Iterator[T]) ⇒ Iterator[U], preservesPartitioning: Boolean)(implicit arg0: ClassTag[U]): RDD[U]

    Definition Classes
    RDD
  79. def mapPartitionsWithIndex[U](f: (Int, Iterator[T]) ⇒ Iterator[U], preservesPartitioning: Boolean)(implicit arg0: ClassTag[U]): RDD[U]

    Definition Classes
    RDD
  80. def max()(implicit ord: Ordering[T]): T

    Definition Classes
    RDD
  81. def min()(implicit ord: Ordering[T]): T

    Definition Classes
    RDD
  82. var name: String

    Definition Classes
    RDD
  83. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  84. final def notify(): Unit

    Definition Classes
    AnyRef
  85. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  86. def parent[U](j: Int)(implicit arg0: ClassTag[U]): RDD[U]

    Attributes
    protected[org.apache.spark]
    Definition Classes
    RDD
  87. val partitioner: Option[Partitioner]

    Definition Classes
    CassandraPartitionedRDD → RDD
  88. final def partitions: Array[Partition]

    Definition Classes
    RDD
  89. def persist(): CassandraPartitionedRDD.this.type

    Definition Classes
    RDD
  90. def persist(newLevel: StorageLevel): CassandraPartitionedRDD.this.type

    Definition Classes
    RDD
  91. def pipe(command: Seq[String], env: Map[String, String], printPipeContext: ((String) ⇒ Unit) ⇒ Unit, printRDDElement: (T, (String) ⇒ Unit) ⇒ Unit, separateWorkingDir: Boolean): RDD[String]

    Definition Classes
    RDD
  92. def pipe(command: String, env: Map[String, String]): RDD[String]

    Definition Classes
    RDD
  93. def pipe(command: String): RDD[String]

    Definition Classes
    RDD
  94. final def preferredLocations(split: Partition): Seq[String]

    Definition Classes
    RDD
  95. def randomSplit(weights: Array[Double], seed: Long): Array[RDD[T]]

    Definition Classes
    RDD
  96. def reduce(f: (T, T) ⇒ T): T

    Definition Classes
    RDD
  97. def repartition(numPartitions: Int)(implicit ord: Ordering[T]): RDD[T]

    Definition Classes
    RDD
  98. def repartitionByCassandraReplica(keyspaceName: String, tableName: String, partitionsPerHost: Int = 10)(implicit connector: CassandraConnector = ..., currentType: ClassTag[T], rwf: RowWriterFactory[T]): CassandraPartitionedRDD[T]

    Repartitions the data (via a shuffle) based upon the replication of the given keyspaceName and tableName. Calling this method before joinWithCassandraTable ensures that requests will be coordinator-local. partitionsPerHost controls the number of Spark partitions created by this repartitioning. The calling RDD must have rows that can be converted into the partition key of the given Cassandra table.

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to RDDFunctions[T] performed by method toRDDFunctions in com.datastax.spark.connector.
    Definition Classes
    RDDFunctions
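A back-of-the-envelope sketch of the resulting partition count, assuming (as the parameter name suggests) that partitionsPerHost Spark partitions are created for each replica host; the host count below is purely illustrative, not something the API exposes here.

```scala
// Assumption: total Spark partitions scale as hosts * partitionsPerHost.
val replicaHosts = 3          // hypothetical number of replica hosts
val partitionsPerHost = 10    // the method's default value
val totalSparkPartitions = replicaHosts * partitionsPerHost
```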
  99. def sample(withReplacement: Boolean, fraction: Double, seed: Long): RDD[T]

    Definition Classes
    RDD
  100. def saveAsCassandraTable(keyspaceName: String, tableName: String, columns: ColumnSelector = AllColumns, writeConf: WriteConf = ...)(implicit connector: CassandraConnector = ..., rwf: RowWriterFactory[T], columnMapper: ColumnMapper[T]): Unit

    Saves the data from RDD to a new table with definition taken from the ColumnMapper for this class.

    keyspaceName

    keyspace where to create a new table

    tableName

    name of the table to create; the table must not exist

    columns

    Selects the columns to save data to. Uses only the unique column names, and you must select at least all primary key columns. All other fields are discarded. Non-selected property/column names are left unchanged. This parameter does not affect table creation.

    writeConf

    additional configuration object allowing to set consistency level, batch size, etc.

    connector

    optional, implicit connector to Cassandra

    rwf

    factory for obtaining the row writer to be used to extract column values from items of the RDD

    columnMapper

    a column mapper determining the definition of the table

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to RDDFunctions[T] performed by method toRDDFunctions in com.datastax.spark.connector.
    Definition Classes
    RDDFunctions
  101. def saveAsCassandraTableEx(table: TableDef, columns: ColumnSelector = AllColumns, writeConf: WriteConf = ...)(implicit connector: CassandraConnector = ..., rwf: RowWriterFactory[T]): Unit

    Saves the data from RDD to a new table defined by the given TableDef.

    First it creates a new table with all columns from the TableDef and then it saves RDD content in the same way as saveToCassandra. The table must not exist prior to this call.

    table

    table definition used to create a new table

    columns

    Selects the columns to save data to. Uses only the unique column names, and you must select at least all primary key columns. All other fields are discarded. Non-selected property/column names are left unchanged. This parameter does not affect table creation.

    writeConf

    additional configuration object allowing to set consistency level, batch size, etc.

    connector

    optional, implicit connector to Cassandra

    rwf

    factory for obtaining the row writer to be used to extract column values from items of the RDD

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to RDDFunctions[T] performed by method toRDDFunctions in com.datastax.spark.connector.
    Definition Classes
    RDDFunctions
  102. def saveAsObjectFile(path: String): Unit

    Definition Classes
    RDD
  103. def saveAsTextFile(path: String, codec: Class[_ <: CompressionCodec]): Unit

    Definition Classes
    RDD
  104. def saveAsTextFile(path: String): Unit

    Definition Classes
    RDD
  105. def saveToCassandra(keyspaceName: String, tableName: String, columns: ColumnSelector = AllColumns, writeConf: WriteConf = ...)(implicit connector: CassandraConnector = ..., rwf: RowWriterFactory[T]): Unit

    Saves the data from RDD to a Cassandra table. Uses the specified column names.

    keyspaceName

    the name of the Keyspace to use

    tableName

    the name of the Table to use

    writeConf

    additional configuration object allowing to set consistency level, batch size, etc.

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to RDDFunctions[T] performed by method toRDDFunctions in com.datastax.spark.connector.
    Definition Classes
    RDDFunctions → WritableToCassandra
    See also

    com.datastax.spark.connector.writer.WritableToCassandra

  106. def setName(_name: String): CassandraPartitionedRDD.this.type

    Definition Classes
    RDD
  107. def sortBy[K](f: (T) ⇒ K, ascending: Boolean, numPartitions: Int)(implicit ord: Ordering[K], ctag: ClassTag[K]): RDD[T]

    Definition Classes
    RDD
  108. def spanBy[U](f: (T) ⇒ U): RDD[(U, Iterable[T])]

    Applies a function to each item and groups consecutive items having the same value together. Unlike groupBy, items from the same group must already be adjacent in the original collection. Works locally on each partition, so items from different partitions will never be placed in the same group.

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to RDDFunctions[T] performed by method toRDDFunctions in com.datastax.spark.connector.
    Definition Classes
    RDDFunctions
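The consecutive-grouping rule above can be sketched on a local collection; this local `spanBy` is an illustrative stand-in, not the connector's implementation, and spanBy itself applies this rule per Spark partition.

```scala
// Group *consecutive* items mapping to the same value, without reordering.
def spanBy[T, U](items: Seq[T])(f: T => U): Seq[(U, Seq[T])] =
  items.foldLeft(Vector.empty[(U, Seq[T])]) { (acc, item) =>
    val k = f(item)
    acc.lastOption match {
      // Extend the current group when the key matches the previous one.
      case Some((`k`, group)) => acc.init :+ (k -> (group :+ item))
      // Otherwise start a new group, even if the key appeared earlier.
      case _                  => acc :+ (k -> Seq(item))
    }
  }

// Unlike groupBy, the trailing 1 starts a new group.
val grouped = spanBy(Seq(1, 1, 2, 2, 1))(identity)
```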
  109. def spanByKey: RDD[(K, Seq[V])]

    Groups items with the same key, assuming items with the same key are adjacent in the collection. It does not perform a shuffle, so it is much faster than the more general Spark RDD groupByKey. For this method to be useful with Cassandra tables, the key must represent a prefix of the primary key, containing at least the partition key of the Cassandra table.

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to PairRDDFunctions[K, V] performed by method toPairRDDFunctions in com.datastax.spark.connector. This conversion will take place only if T is (K, V) (T =:= (K, V)).
    Definition Classes
    PairRDDFunctions
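The shuffle-free merging of adjacent keys can likewise be sketched on a local collection of pairs; this local `spanByKey` is an illustrative stand-in for the semantics only.

```scala
// Merge consecutive pairs sharing a key into (key, values), no shuffle.
def spanByKey[K, V](pairs: Seq[(K, V)]): Seq[(K, Seq[V])] =
  pairs.foldLeft(Vector.empty[(K, Seq[V])]) { case (acc, (k, v)) =>
    acc.lastOption match {
      case Some((`k`, vs)) => acc.init :+ (k -> (vs :+ v))
      case _               => acc :+ (k -> Seq(v))
    }
  }

// "a" reappearing after "b" forms a separate group: only adjacency counts.
val merged = spanByKey(Seq("a" -> 1, "a" -> 2, "b" -> 3, "a" -> 4))
```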
  110. def sparkContext: SparkContext

    Definition Classes
    RDD
  111. def subtract(other: RDD[T], p: Partitioner)(implicit ord: Ordering[T]): RDD[T]

    Definition Classes
    RDD
  112. def subtract(other: RDD[T], numPartitions: Int): RDD[T]

    Definition Classes
    RDD
  113. def subtract(other: RDD[T]): RDD[T]

    Definition Classes
    RDD
  114. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  115. def take(num: Int): Array[T]

    Definition Classes
    RDD
  116. def takeOrdered(num: Int)(implicit ord: Ordering[T]): Array[T]

    Definition Classes
    RDD
  117. def takeSample(withReplacement: Boolean, num: Int, seed: Long): Array[T]

    Definition Classes
    RDD
  118. def toDebugString: String

    Definition Classes
    RDD
  119. def toJavaRDD(): JavaRDD[T]

    Definition Classes
    RDD
  120. def toLocalIterator: Iterator[T]

    Definition Classes
    RDD
  121. def toString(): String

    Definition Classes
    RDD → AnyRef → Any
  122. def top(num: Int)(implicit ord: Ordering[T]): Array[T]

    Definition Classes
    RDD
  123. def union(other: RDD[T]): RDD[T]

    Definition Classes
    RDD
  124. def unpersist(blocking: Boolean): CassandraPartitionedRDD.this.type

    Definition Classes
    RDD
  125. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  126. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  127. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  128. def zip[U](other: RDD[U])(implicit arg0: ClassTag[U]): RDD[(T, U)]

    Definition Classes
    RDD
  129. def zipPartitions[B, C, D, V](rdd2: RDD[B], rdd3: RDD[C], rdd4: RDD[D])(f: (Iterator[T], Iterator[B], Iterator[C], Iterator[D]) ⇒ Iterator[V])(implicit arg0: ClassTag[B], arg1: ClassTag[C], arg2: ClassTag[D], arg3: ClassTag[V]): RDD[V]

    Definition Classes
    RDD
  130. def zipPartitions[B, C, D, V](rdd2: RDD[B], rdd3: RDD[C], rdd4: RDD[D], preservesPartitioning: Boolean)(f: (Iterator[T], Iterator[B], Iterator[C], Iterator[D]) ⇒ Iterator[V])(implicit arg0: ClassTag[B], arg1: ClassTag[C], arg2: ClassTag[D], arg3: ClassTag[V]): RDD[V]

    Definition Classes
    RDD
  131. def zipPartitions[B, C, V](rdd2: RDD[B], rdd3: RDD[C])(f: (Iterator[T], Iterator[B], Iterator[C]) ⇒ Iterator[V])(implicit arg0: ClassTag[B], arg1: ClassTag[C], arg2: ClassTag[V]): RDD[V]

    Definition Classes
    RDD
  132. def zipPartitions[B, C, V](rdd2: RDD[B], rdd3: RDD[C], preservesPartitioning: Boolean)(f: (Iterator[T], Iterator[B], Iterator[C]) ⇒ Iterator[V])(implicit arg0: ClassTag[B], arg1: ClassTag[C], arg2: ClassTag[V]): RDD[V]

    Definition Classes
    RDD
  133. def zipPartitions[B, V](rdd2: RDD[B])(f: (Iterator[T], Iterator[B]) ⇒ Iterator[V])(implicit arg0: ClassTag[B], arg1: ClassTag[V]): RDD[V]

    Definition Classes
    RDD
  134. def zipPartitions[B, V](rdd2: RDD[B], preservesPartitioning: Boolean)(f: (Iterator[T], Iterator[B]) ⇒ Iterator[V])(implicit arg0: ClassTag[B], arg1: ClassTag[V]): RDD[V]

    Definition Classes
    RDD
  135. def zipWithIndex(): RDD[(T, Long)]

    Definition Classes
    RDD
  136. def zipWithUniqueId(): RDD[(T, Long)]

    Definition Classes
    RDD
  137. def →[B](y: B): (CassandraPartitionedRDD[T], B)

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to ArrowAssoc[CassandraPartitionedRDD[T]] performed by method ArrowAssoc in scala.Predef. This conversion will take place only if T is (Nothing, Nothing) (T =:= (Nothing, Nothing)).
    Definition Classes
    ArrowAssoc

Shadowed Implicit Value Members

  1. val sparkContext: SparkContext

    Implicit information
    This member is added by an implicit conversion from CassandraPartitionedRDD[T] to RDDFunctions[T] performed by method toRDDFunctions in com.datastax.spark.connector.
    Shadowing
    This implicitly inherited member is shadowed by one or more members in this class.
    To access this member you can use a type ascription:
    (cassandraPartitionedRDD: RDDFunctions[T]).sparkContext
    Definition Classes
    RDDFunctions → WritableToCassandra

Deprecated Value Members

  1. def filterWith[A](constructA: (Int) ⇒ A)(p: (T, A) ⇒ Boolean): RDD[T]

    Definition Classes
    RDD
    Annotations
    @deprecated
    Deprecated

    (Since version 1.0.0) use mapPartitionsWithIndex and filter

  2. def flatMapWith[A, U](constructA: (Int) ⇒ A, preservesPartitioning: Boolean)(f: (T, A) ⇒ Seq[U])(implicit arg0: ClassTag[U]): RDD[U]

    Definition Classes
    RDD
    Annotations
    @deprecated
    Deprecated

    (Since version 1.0.0) use mapPartitionsWithIndex and flatMap

  3. def foreachWith[A](constructA: (Int) ⇒ A)(f: (T, A) ⇒ Unit): Unit

    Definition Classes
    RDD
    Annotations
    @deprecated
    Deprecated

    (Since version 1.0.0) use mapPartitionsWithIndex and foreach

  4. def mapPartitionsWithContext[U](f: (TaskContext, Iterator[T]) ⇒ Iterator[U], preservesPartitioning: Boolean)(implicit arg0: ClassTag[U]): RDD[U]

    Definition Classes
    RDD
    Annotations
    @DeveloperApi() @deprecated
    Deprecated

    (Since version 1.2.0) use TaskContext.get

  5. def mapPartitionsWithSplit[U](f: (Int, Iterator[T]) ⇒ Iterator[U], preservesPartitioning: Boolean)(implicit arg0: ClassTag[U]): RDD[U]

    Definition Classes
    RDD
    Annotations
    @deprecated
    Deprecated

    (Since version 0.7.0) use mapPartitionsWithIndex

  6. def mapWith[A, U](constructA: (Int) ⇒ A, preservesPartitioning: Boolean)(f: (T, A) ⇒ U)(implicit arg0: ClassTag[U]): RDD[U]

    Definition Classes
    RDD
    Annotations
    @deprecated
    Deprecated

    (Since version 1.0.0) use mapPartitionsWithIndex

  7. def toArray(): Array[T]

    Definition Classes
    RDD
    Annotations
    @deprecated
    Deprecated

    (Since version 1.0.0) use collect

Inherited from RDD[T]

Inherited from Logging

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any

Inherited by implicit conversion toPairRDDFunctions from CassandraPartitionedRDD[T] to PairRDDFunctions[K, V]

Inherited by implicit conversion toRDDFunctions from CassandraPartitionedRDD[T] to RDDFunctions[T]

Inherited by implicit conversion any2stringadd from CassandraPartitionedRDD[T] to any2stringadd[CassandraPartitionedRDD[T]]

Inherited by implicit conversion StringFormat from CassandraPartitionedRDD[T] to StringFormat[CassandraPartitionedRDD[T]]

Inherited by implicit conversion Ensuring from CassandraPartitionedRDD[T] to Ensuring[CassandraPartitionedRDD[T]]

Inherited by implicit conversion ArrowAssoc from CassandraPartitionedRDD[T] to ArrowAssoc[CassandraPartitionedRDD[T]]
