Class GroupByQueryEngineV2
- java.lang.Object
  - org.apache.druid.query.groupby.epinephelinae.GroupByQueryEngineV2
public class GroupByQueryEngineV2 extends Object
Class that knows how to process a groupBy query on a single StorageAdapter. It returns a Sequence of ResultRow objects that are not guaranteed to be in any particular order, and may not even be fully grouped. It is expected that a downstream GroupByMergingQueryRunnerV2 will finish grouping these results. This code runs on data servers, such as Historicals. Used by GroupingEngine.process(GroupByQuery, StorageAdapter, GroupByQueryMetrics).
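The contract above, partially grouped rows merged downstream, can be illustrated with a plain-Java sketch. The `Row` record and `mergeDownstream` method below are hypothetical stand-ins for illustration only, not Druid's actual classes; the merge step plays the role of GroupByMergingQueryRunnerV2:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch (not Druid's actual classes): data servers emit
// partially grouped rows in no particular order; a downstream merger
// finishes grouping them.
class PartialGroupingSketch
{
  // A "result row" reduced to a (key, count) pair for illustration.
  record Row(String key, long count) {}

  // Finish grouping rows that may repeat keys and arrive in any order.
  static Map<String, Long> mergeDownstream(List<Row> partialRows)
  {
    Map<String, Long> merged = new LinkedHashMap<>();
    for (Row row : partialRows) {
      merged.merge(row.key(), row.count(), Long::sum);
    }
    return merged;
  }
}
```

For example, two segments may each emit a partial row for the same key ("page_a", 3) and ("page_a", 2); only after the downstream merge does the result hold a single fully grouped row ("page_a", 5).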
Nested Class Summary

- static class GroupByQueryEngineV2.GroupByEngineIterator<KeyType>
- static class GroupByQueryEngineV2.HashAggregateIterator
Field Summary

- static org.apache.druid.query.groupby.epinephelinae.GroupByQueryEngineV2.GroupByStrategyFactory STRATEGY_FACTORY
Method Summary

- static boolean canPushDownLimit(ColumnSelectorFactory columnSelectorFactory, String columnName)
  Checks whether a column will operate correctly with LimitedBufferHashGrouper for query limit pushdown.
- static void convertRowTypesToOutputTypes(List<DimensionSpec> dimensionSpecs, ResultRow resultRow, int resultRowDimensionStart)
- static GroupByColumnSelectorPlus[] createGroupBySelectorPlus(ColumnSelectorPlus<GroupByColumnSelectorStrategy>[] baseSelectorPlus, int dimensionStart)
- static int getCardinalityForArrayAggregation(GroupByQueryConfig querySpecificConfig, GroupByQuery query, StorageAdapter storageAdapter, ByteBuffer buffer)
  Returns the cardinality of the array needed to do array-based aggregation, or -1 if array-based aggregation is impossible.
- static boolean hasNoExplodingDimensions(ColumnInspector inspector, List<DimensionSpec> dimensions)
  Checks whether all "dimensions" are either single-valued, or whether the input column or output dimension spec has specified a type for which TypeSignature.isArray() is true.
- static Sequence<ResultRow> process(GroupByQuery query, StorageAdapter storageAdapter, NonBlockingPool<ByteBuffer> intermediateResultsBufferPool, GroupByQueryConfig querySpecificConfig, DruidProcessingConfig processingConfig, GroupByQueryMetrics groupByQueryMetrics)
Method Detail
-
createGroupBySelectorPlus
public static GroupByColumnSelectorPlus[] createGroupBySelectorPlus(ColumnSelectorPlus<GroupByColumnSelectorStrategy>[] baseSelectorPlus, int dimensionStart)
-
process
public static Sequence<ResultRow> process(GroupByQuery query, @Nullable StorageAdapter storageAdapter, NonBlockingPool<ByteBuffer> intermediateResultsBufferPool, GroupByQueryConfig querySpecificConfig, DruidProcessingConfig processingConfig, @Nullable GroupByQueryMetrics groupByQueryMetrics)
-
getCardinalityForArrayAggregation
public static int getCardinalityForArrayAggregation(GroupByQueryConfig querySpecificConfig, GroupByQuery query, StorageAdapter storageAdapter, ByteBuffer buffer)
Returns the cardinality of the array needed to do array-based aggregation, or -1 if array-based aggregation is impossible.
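The idea behind array-based aggregation can be sketched in plain Java: when the single grouping dimension has a known, bounded cardinality, each group maps to a fixed array slot by dictionary id, so no hashing is needed. All names below are illustrative, not Druid's API; returning null plays the role of the -1 fallback to a hash-based grouper:

```java
// Hypothetical sketch (names are illustrative, not Druid's API) of
// array-based aggregation: group counts live in a flat array indexed
// by the dimension's dictionary id.
class ArrayAggregationSketch
{
  // Returns per-dictionary-id counts, or null when the cardinality is
  // unknown or exceeds the budget, the analogue of returning -1 and
  // falling back to a hash-based grouper.
  static long[] countByDictId(int cardinality, int[] dictIds, int maxCardinality)
  {
    if (cardinality < 0 || cardinality > maxCardinality) {
      return null; // array-based aggregation impossible
    }
    long[] counts = new long[cardinality];
    for (int id : dictIds) {
      counts[id]++; // O(1) slot lookup instead of a hash probe
    }
    return counts;
  }
}
```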
-
hasNoExplodingDimensions
public static boolean hasNoExplodingDimensions(ColumnInspector inspector, List<DimensionSpec> dimensions)
Checks whether all "dimensions" are either single-valued, or whether the input column or output dimension spec has specified a type for which TypeSignature.isArray() is true. Both cases indicate that we don't want to explode the underlying multi-value column. Since selectors on non-existent columns show up as full of nulls, they are effectively single-valued; however, column capabilities can also be null (for example, during broker merge with an 'inline' datasource subquery), so this method returns true only when capabilities are fully known.
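To illustrate what "exploding" means here, consider a hypothetical plain-Java sketch (not Druid code): a row whose multi-value dimension holds ["a", "b"] contributes to two groups, one per value, whereas an array-typed dimension would contribute to a single group keyed by the whole array:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical illustration (not Druid code) of exploding a
// multi-value dimension: each value in a row becomes its own group.
class ExplodeSketch
{
  static Map<String, Long> explodeAndCount(List<List<String>> rows)
  {
    Map<String, Long> counts = new TreeMap<>();
    for (List<String> multiValueRow : rows) {
      for (String value : multiValueRow) {
        counts.merge(value, 1L, Long::sum); // one group per value
      }
    }
    return counts;
  }
}
```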
-
convertRowTypesToOutputTypes
public static void convertRowTypesToOutputTypes(List<DimensionSpec> dimensionSpecs, ResultRow resultRow, int resultRowDimensionStart)
-
canPushDownLimit
public static boolean canPushDownLimit(ColumnSelectorFactory columnSelectorFactory, String columnName)
Checks whether a column will operate correctly with LimitedBufferHashGrouper for query limit pushdown.
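The intuition behind limit pushdown can be sketched in plain Java; the class and method below are hypothetical, not Druid's actual grouper. When results are ordered by the grouping key itself, a grouper may keep only its first `limit` groups and evict the rest: a key ranked beyond the limit among the rows seen so far can never re-enter the top `limit`, because the set of seen keys only grows:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch (not Druid's actual classes) of limit pushdown
// for key-ordered results: group while retaining at most `limit`
// groups, evicting the group that falls beyond the limit.
class LimitPushDownSketch
{
  static Map<String, Long> groupWithLimit(List<String> keys, int limit)
  {
    TreeMap<String, Long> groups = new TreeMap<>();
    for (String key : keys) {
      groups.merge(key, 1L, Long::sum);
      if (groups.size() > limit) {
        groups.pollLastEntry(); // evict the group beyond the limit
      }
    }
    return groups;
  }
}
```

This truncation is exact only when the ordering matches the grouping key, which is why a per-column compatibility check is needed before pushing a limit down.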