sealed trait ReduceKey extends Comparable[ReduceKey]
Abstract Value Members
- abstract def byteArray: Array[Byte]
- abstract def compareTo(that: ReduceKey): Int
- abstract def length: Int
Concrete Value Members
- final def !=(arg0: AnyRef): Boolean
- final def !=(arg0: Any): Boolean
- final def ##(): Int
- final def ==(arg0: AnyRef): Boolean
- final def ==(arg0: Any): Boolean
- final def asInstanceOf[T0]: T0
- def clone(): AnyRef
- final def eq(arg0: AnyRef): Boolean
- def equals(arg0: Any): Boolean
- def finalize(): Unit
- final def getClass(): Class[_]
- def hashCode(): Int
- final def isInstanceOf[T0]: Boolean
- final def ne(arg0: AnyRef): Boolean
- final def notify(): Unit
- final def notifyAll(): Unit
- final def synchronized[T0](arg0: ⇒ T0): T0
- def toString(): String
- final def wait(): Unit
- final def wait(arg0: Long, arg1: Int): Unit
- final def wait(arg0: Long): Unit
A data structure used for shuffling data that supports comparison. We wrap a normal key byte array in a ReduceKey so that the byte array can be used in binary comparisons, and so that data can be partitioned on the partitionCode field by a ReduceKeyPartitioner.
Note that this data structure must be serializable because, in the case of a total order sort, Spark's range partitioner uses a collect operation to find the partition ranges. The collect serializes the ReduceKey objects and sends them back to the master.
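A minimal sketch of how such a wrapper might look, assuming a hypothetical concrete subclass `BytesReduceKey` and a `partitionCode` derived from the key's hash; the actual implementation may differ. It shows the two properties the doc comment relies on: byte-wise (unsigned, lexicographic) comparison over the wrapped array, and serializability so the keys survive a collect.

```scala
// Sketch only: the real trait is abstract; names below marked
// "hypothetical" are illustrative assumptions, not the actual API.
sealed trait ReduceKey extends Comparable[ReduceKey] with Serializable {
  def byteArray: Array[Byte]
  def length: Int

  // Lexicographic comparison over the first `length` bytes,
  // treating each byte as unsigned (0..255), ties broken by length.
  override def compareTo(that: ReduceKey): Int = {
    val n = math.min(this.length, that.length)
    var i = 0
    while (i < n) {
      val a = this.byteArray(i) & 0xff  // mask to compare as unsigned
      val b = that.byteArray(i) & 0xff
      if (a != b) return a - b
      i += 1
    }
    this.length - that.length
  }
}

// Hypothetical concrete wrapper around a raw key byte array.
case class BytesReduceKey(byteArray: Array[Byte]) extends ReduceKey {
  def length: Int = byteArray.length
  // Hypothetical partition code a ReduceKeyPartitioner could hash on.
  def partitionCode: Int = java.util.Arrays.hashCode(byteArray)
}
```

Comparing on raw bytes avoids deserializing keys during the shuffle sort, which is why the wrapper exposes the array and length directly rather than a decoded key object.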