gradientCast

inline fun <ID : Any, Value, Distance : Comparable<Distance>> Aggregate<ID>.gradientCast(
    source: Boolean,
    local: Value,
    bottom: Distance,
    top: Distance,
    metric: Field<ID, Distance>,
    maxPaths: Int = Int.MAX_VALUE,
    isRiemannianManifold: Boolean = true,
    noinline accumulateData: (fromSource: Distance, toNeighbor: Distance, neighborData: Value) -> Value = { _, _, data -> data },
    crossinline accumulateDistance: Reducer<Distance>,
): Value

Propagates local values along multiple spanning trees rooted at every device on which source holds, with each device retaining the value coming from its closest source.

If there are no sources, the result defaults to the local value. The metric field provides the distance to each neighbor as a Field of Distances; distances must lie within the [bottom, top] range, and accumulateDistance is used to combine them. accumulateData can transform the data received from neighbors on the fly, and defaults to the identity function.

This function features fast repair and is not subject to the rising value problem; see Fast self-healing gradients.

On the other hand, it requires larger messages and more processing than the classic bellmanFordGradientCast.
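To illustrate the semantics, here is a hypothetical, self-contained Kotlin sketch (it does not use the library API above): a synchronous, round-based simulation on a line of five devices where only device 0 is a source. Each round, every non-source device takes the minimum accumulated distance over its neighbors (a uniform hop metric plays the role of metric) and adopts the value travelling along that shortest path, so every device ends up with the value of its closest source; with no sources, each device would keep its local value.

```kotlin
// Hypothetical standalone sketch, NOT the library API: simulates
// gradient-cast semantics with a uniform hop metric and an identity
// accumulateData, returning (distances, values) after convergence.
fun simulateGradientCast(
    neighbors: List<List<Int>>, // adjacency list
    source: List<Boolean>,
    local: List<String>,
    metric: Double = 1.0,       // uniform per-edge distance
    rounds: Int = 10,
): Pair<List<Double>, List<String>> {
    var dist = source.map { if (it) 0.0 else Double.POSITIVE_INFINITY }
    var value = local.toMutableList()
    repeat(rounds) {
        val newDist = dist.toMutableList()
        val newValue = value.toMutableList()
        for (d in neighbors.indices) {
            if (source[d]) continue // sources keep distance 0 and their own value
            for (nb in neighbors[d]) {
                val candidate = dist[nb] + metric // accumulateDistance = plus
                if (candidate < newDist[d]) {
                    newDist[d] = candidate
                    newValue[d] = value[nb] // identity accumulateData
                }
            }
        }
        dist = newDist
        value = newValue
    }
    return dist to value
}

fun main() {
    // Line topology 0 - 1 - 2 - 3 - 4, with device 0 as the only source.
    val adj = listOf(listOf(1), listOf(0, 2), listOf(1, 3), listOf(2, 4), listOf(3))
    val (dist, value) = simulateGradientCast(
        adj,
        source = listOf(true, false, false, false, false),
        local = listOf("from-0", "b", "c", "d", "e"),
    )
    println(dist)  // [0.0, 1.0, 2.0, 3.0, 4.0]
    println(value) // every device holds "from-0"
}
```

In the real function the same per-round logic runs concurrently on each device against its neighbors' last known state, rather than over a global adjacency list.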


inline fun <ID : Any, Type> Aggregate<ID>.gradientCast(
    source: Boolean,
    local: Type,
    metric: Field<ID, Double>,
    maxPaths: Int = Int.MAX_VALUE,
    isRiemannianManifold: Boolean = true,
    noinline accumulateData: (fromSource: Double, toNeighbor: Double, data: Type) -> Type = { _, _, data -> data },
    crossinline accumulateDistance: Reducer<Double> = Double::plus,
): Type

Propagates local values along multiple spanning trees rooted at every device on which source holds, with each device retaining the value coming from its closest source.

If there are no sources, the result defaults to the local value. The metric field provides the distance to each neighbor as a Field of Doubles; accumulateDistance combines distances, defaulting to a plain sum (Double::plus). accumulateData can transform the data received from neighbors on the fly, and defaults to the identity function.

This function features fast repair and is not subject to the rising value problem; see Fast self-healing gradients.

On the other hand, it requires larger messages and more processing than the classic bellmanFordGradientCast.
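The accumulateData hook of this overload can be illustrated with another hypothetical, self-contained sketch (again, not the library API): here the carried data is an Int hop counter that is incremented each time the value crosses an edge, while accumulateDistance keeps its default plain sum, so each device ends up with its hop count from the nearest source.

```kotlin
// Hypothetical standalone sketch, NOT the library API: a generic
// round-based simulation exposing the accumulateData / accumulateDistance
// hooks of the Double-metric overload.
fun <V> simulate(
    neighbors: List<List<Int>>,
    source: List<Boolean>,
    local: List<V>,
    metric: Double = 1.0, // uniform per-edge distance
    accumulateDistance: (Double, Double) -> Double = Double::plus,
    accumulateData: (fromSource: Double, toNeighbor: Double, data: V) -> V = { _, _, d -> d },
    rounds: Int = 10,
): List<V> {
    var dist = source.map { if (it) 0.0 else Double.POSITIVE_INFINITY }
    var value = local.toMutableList()
    repeat(rounds) {
        val newDist = dist.toMutableList()
        val newValue = value.toMutableList()
        for (d in neighbors.indices) {
            if (source[d]) continue
            for (nb in neighbors[d]) {
                val candidate = accumulateDistance(dist[nb], metric)
                if (candidate < newDist[d]) {
                    newDist[d] = candidate
                    // transform the neighbor's data as it crosses the edge
                    newValue[d] = accumulateData(dist[nb], metric, value[nb])
                }
            }
        }
        dist = newDist
        value = newValue
    }
    return value
}

fun main() {
    // Ring of four devices, device 0 is the source carrying hop count 0;
    // accumulateData adds one hop per traversed edge.
    val adj = listOf(listOf(1, 3), listOf(0, 2), listOf(1, 3), listOf(0, 2))
    val hops = simulate(
        adj,
        source = listOf(true, false, false, false),
        local = listOf(0, 0, 0, 0),
        accumulateData = { _, _, data -> data + 1 },
    )
    println(hops) // [0, 1, 2, 1]
}
```

Passing a non-trivial accumulateData like this is how per-hop transformations (hop counting, attenuation, path annotation) are expressed without a separate propagation pass.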