combineByKey
fun <K, V, C> JavaDStream<Tuple2<K, V>>.combineByKey(
    createCombiner: (V) -> C,
    mergeValue: (C, V) -> C,
    mergeCombiner: (C, C) -> C,
    numPartitions: Int = dstream().ssc().sc().defaultParallelism(),
    mapSideCombine: Boolean = true
): JavaDStream<Tuple2<K, C>>
fun <K, V, C> JavaDStream<Tuple2<K, V>>.combineByKey(
    createCombiner: (V) -> C,
    mergeValue: (C, V) -> C,
    mergeCombiner: (C, C) -> C,
    partitioner: Partitioner,
    mapSideCombine: Boolean = true
): JavaDStream<Tuple2<K, C>>
Combines the elements for each key in the DStream's RDDs using custom aggregation functions. This is similar to combineByKey for RDDs; see combineByKey in org.apache.spark.rdd.PairRDDFunctions in the Spark core documentation for more information.
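The three combiner functions follow the same contract as Spark's RDD combineByKey: createCombiner turns the first value seen for a key into an accumulator of type C, mergeValue folds further values into that accumulator (map side, within a partition), and mergeCombiner merges accumulators produced by different partitions. As a rough sketch of these semantics, the following plain-Kotlin function simulates the two phases over an in-memory list of pairs (no Spark dependency; the helper name, the round-robin "partitioning", and the sample data are hypothetical and only illustrate the contract, not the distributed implementation):

```kotlin
// Sketch of combineByKey semantics over plain Kotlin collections.
fun <K, V, C> combineByKeyLocal(
    pairs: List<Pair<K, V>>,
    createCombiner: (V) -> C,
    mergeValue: (C, V) -> C,
    mergeCombiner: (C, C) -> C,
    numPartitions: Int = 2,
): Map<K, C> {
    // Phase 1: map-side combine within each simulated partition
    // (round-robin assignment stands in for Spark's partitioner).
    val perPartition: List<Map<K, C>> = pairs
        .withIndex()
        .groupBy({ it.index % numPartitions }, { it.value })
        .values
        .map { partition ->
            val acc = mutableMapOf<K, C>()
            for ((k, v) in partition) {
                val c = acc[k]
                acc[k] = if (c == null) createCombiner(v) else mergeValue(c, v)
            }
            acc
        }
    // Phase 2: merge the per-partition combiners (the shuffle/reduce side).
    val result = mutableMapOf<K, C>()
    for (m in perPartition) {
        for ((k, c) in m) {
            val prev = result[k]
            result[k] = if (prev == null) c else mergeCombiner(prev, c)
        }
    }
    return result
}

fun main() {
    // Classic use case: per-key (sum, count), from which averages follow.
    val data = listOf("a" to 1, "b" to 2, "a" to 3, "b" to 4, "a" to 5)
    val sumCount = combineByKeyLocal(
        data,
        createCombiner = { v -> v to 1 },
        mergeValue = { (s, n), v -> (s + v) to (n + 1) },
        mergeCombiner = { (s1, n1), (s2, n2) -> (s1 + s2) to (n1 + n2) },
    )
    println(sumCount)
}
```

Setting mapSideCombine to false in the real API skips the first phase, shipping raw values across the shuffle instead of partial combiners, which can be preferable when per-key accumulators are large relative to the values.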