Obtain a MonoidAggregator that uses an efficient append operation for faster aggregation
Data input type
Aggregating Monoid type
Presentation (output) type
Function that appends an input element to the Monoid value. Defines the MonoidAggregator.append method for this aggregator. Analogous to the 'seqop' function in Scala's sequence 'aggregate' method
The presentation function
The Monoid type class
The function 'appnd' is expected to obey the law:
appnd(t, f) == m.plus(t, appnd(m.zero, f))
Obtain a MonoidAggregator that uses an efficient append operation for faster aggregation.
Obtain a MonoidAggregator that uses an efficient append operation for faster aggregation. Equivalent to
appendMonoid(appnd, identity[T]_)(m)
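To make the law above concrete, here is a minimal self-contained sketch (illustrative names only, not Algebird's actual API): a Long-sum monoid with an append that folds a String's length directly into the running total.

```scala
// Illustration of the appendMonoid law; `zero`, `plus` and `appnd` are
// stand-ins for Monoid[Long].zero, Monoid[Long].plus and the append function.
val zero: Long = 0L                                // Monoid identity
def plus(a: Long, b: Long): Long = a + b           // Monoid plus
def appnd(t: Long, f: String): Long = t + f.length // fold one input into T

// The law: appnd(t, f) == m.plus(t, appnd(m.zero, f))
assert(appnd(7L, "hello") == plus(7L, appnd(zero, "hello")))
```

The point of the law is that appending an element directly must agree with first lifting that element into the monoid (via append onto zero) and then combining with plus.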
Obtain an Aggregator that uses an efficient append operation for faster aggregation
Data input type
Aggregating Semigroup type
Presentation (output) type
The preparation function. Expected to construct an instance of type T from a single data element.
Function that appends an input element to the Semigroup value. Defines the Aggregator.append method for this aggregator. Analogous to the 'seqop' function in Scala's sequence 'aggregate' method
The presentation function
The Semigroup type class
The functions 'appnd' and 'prep' are expected to obey the law:
appnd(t, f) == sg.plus(t, prep(f))
Obtain an Aggregator that uses an efficient append operation for faster aggregation.
Obtain an Aggregator that uses an efficient append operation for faster aggregation. Equivalent to
appendSemigroup(prep, appnd, identity[T]_)(sg)
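The semigroup law can be illustrated the same way with a minimal self-contained sketch (illustrative names only, not Algebird's code): 'prep' lifts one input element into the aggregating type, and 'appnd' is the fused prepare-then-plus step.

```scala
// Illustration of the appendSemigroup law with Long sums of String lengths.
def plus(a: Long, b: Long): Long = a + b            // Semigroup plus
def prep(f: String): Long = f.length.toLong         // build a T from one input
def appnd(t: Long, f: String): Long = t + f.length  // fused prep-then-plus

// The law: appnd(t, f) == sg.plus(t, prep(f))
assert(appnd(7L, "hello") == plus(7L, prep("hello")))
```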
Returns the lower bound of a given percentile, where the percentile is in (0, 1]. The items that are iterated over cannot be negative.
Returns the intersection of a bounded percentile, where the percentile is in (0, 1]. The items that are iterated over cannot be negative.
Using a constant amount of memory, give an approximate unique count (~ 1% error).
Using a constant amount of memory, give an approximate unique count (~ 1% error). This uses an exact set for up to 100 items, then HyperLogLog (HLL) with a 1.2% standard error, which uses at most 8192 bytes for each HLL. For more control, see HyperLogLogAggregator.
This is a trivial aggregator that always returns a single value
How many items satisfy a predicate
Do any items satisfy some predicate
Do all items satisfy a predicate
Using Aggregator.prepare and Aggregator.present, you can build on this aggregator
Take the first (leftmost in reduce order) item found
Immutable version of sortedReverseTake, for frameworks that check immutability of reduce functions.
Immutable version of sortedTake, for frameworks that check immutability of reduce functions.
Take the last (rightmost in reduce order) item found
Get the maximum item
Get the minimum item
An aggregator that sums Numeric values into Doubles.
This is really no more than converting to Double and then summing. The conversion to Double means we don't have the overflow semantics of integer types on the JVM (e.g. Int.MaxValue + 1 == Int.MinValue).
Note that if you instead wanted to aggregate Numeric values of a type T into the same type T (e.g. if you want MonoidAggregator[T, T, T] for some Numeric type T), you can directly use Aggregator.fromMonoid[T] after importing the numericRing implicit:
import com.twitter.algebird.Ring.numericRing
def numericAggregator[T: Numeric]: MonoidAggregator[T, T, T] = Aggregator.fromMonoid[T]
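To see the overflow point concretely, here is a small self-contained check of standard JVM integer semantics versus summing as Double:

```scala
// Int addition wraps around on the JVM, while the same sum carried as a
// Double does not (assuming the values stay within Double's exact range).
val wrapped: Int = Int.MaxValue + 1              // wraps to Int.MinValue
val summed: Double = Int.MaxValue.toDouble + 1.0 // 2147483648.0, no wrap

assert(wrapped == Int.MinValue)
assert(summed == 2147483648.0)
```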
Randomly selects input items where each item has an independent probability 'prob' of being selected.
Randomly selects input items where each item has an independent probability 'prob' of being selected. This assumes that all sampled records can fit in memory, so use this only when the expected number of sampled values is small.
Selects exactly 'count' of the input records randomly (or all of the records if there are fewer than 'count' total records).
Selects exactly 'count' of the input records randomly (or all of the records if there are fewer than 'count' total records). This assumes that all 'count' of the records can fit in memory, so use this only for small values of 'count'.
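The fixed-size random selection described above can be sketched with classic reservoir sampling (Algorithm R). This is only an illustration of the idea, not Algebird's implementation:

```scala
import scala.util.Random
import scala.collection.mutable.ArrayBuffer

// Keep a uniform random sample of exactly `k` items (or all items if there
// are fewer than k), seeing each element once.
def reservoirSample[A](xs: Iterable[A], k: Int, rng: Random): Vector[A] = {
  val buf = ArrayBuffer.empty[A]
  var seen = 0
  xs.foreach { a =>
    seen += 1
    if (buf.size < k) buf += a
    else {
      val j = rng.nextInt(seen) // replace a kept slot with probability k/seen
      if (j < k) buf(j) = a
    }
  }
  buf.toVector
}
```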
This returns the number of items we find
Same as sortedReverseTake, but using a function that returns a value that has an Ordering.
This function is like writing list.sortBy(fn).reverse.take(count).
Same as sortedTake, but using a function that returns a value that has an Ordering.
This function is like writing list.sortBy(fn).take(count).
Take the largest 'count' items using a heap
Take the smallest 'count' items using a heap
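The heap-based "smallest count" idea can be sketched as follows (illustrative, not Algebird's code). It behaves like list.sorted.take(count) without sorting the whole input: a bounded max-heap keeps the smallest items seen so far, evicting the current largest whenever a smaller item arrives.

```scala
import scala.collection.mutable

// Take the `count` smallest items of xs, in ascending order, using a heap.
def sortedTake[A](xs: List[A], count: Int)(implicit ord: Ordering[A]): List[A] = {
  val heap = mutable.PriorityQueue.empty[A](ord) // max-heap: head is largest kept
  xs.foreach { a =>
    if (heap.size < count) heap.enqueue(a)
    else if (ord.lt(a, heap.head)) { heap.dequeue(); heap.enqueue(a) }
  }
  heap.dequeueAll.toList.reverse // dequeueAll yields largest-first; reverse it
}
```

Because the heap never grows past 'count' elements, this runs in O(n log count) time and O(count) space, which is the point of the heap-based take over a full sort.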
Put everything in a List.
Put everything in a List. Note, this could fill the memory if the List is very large.
Put everything in a Set.
Put everything in a Set. Note, this could fill the memory if the Set is very large.
This builds an in-memory Set, and then finally gets the size of that set.
This builds an in-memory Set, and then finally gets the size of that set. This may not be scalable if the number of unique items is very large. Consider approximateUniqueCount or the HyperLogLog Aggregator for a scalable approximate version of this.
Aggregators compose well.
To create a parallel aggregator that operates on a single input in parallel, use: GeneratedTupleAggregator.from2((agg1, agg2))
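The composition idea, running two aggregations over a single pass of the input, can be sketched by pairing the fold states. GeneratedTupleAggregator generates this tupling for Algebird aggregators; the function below is only a hand-rolled illustration of the same principle.

```scala
// Run two left-folds over the input simultaneously by carrying a pair of
// states; `aggregateBoth` is an illustrative name, not Algebird's API.
def aggregateBoth[A, B, C](xs: List[A])(zb: B)(fb: (B, A) => B)(zc: C)(fc: (C, A) => C): (B, C) =
  xs.foldLeft((zb, zc)) { case ((b, c), a) => (fb(b, a), fc(c, a)) }

// e.g. count and sum of the same input in a single pass:
val (count, sum) = aggregateBoth(List(1, 2, 3))(0L)((n, _) => n + 1)(0L)((s, x) => s + x)
assert(count == 3L && sum == 6L)
```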