Package org.apache.druid.data.input
Class InputRowSchema
- java.lang.Object
  - org.apache.druid.data.input.InputRowSchema
Constructor Summary
Constructors:
- InputRowSchema(TimestampSpec timestampSpec, DimensionsSpec dimensionsSpec, ColumnsFilter columnsFilter)
- InputRowSchema(TimestampSpec timestampSpec, DimensionsSpec dimensionsSpec, ColumnsFilter columnsFilter, Set<String> metricNames)
Method Summary
Modifier and Type       Method and Description
ColumnsFilter           getColumnsFilter()
                        A ColumnsFilter that can filter down the list of columns that must be read after flattening.
DimensionsSpec          getDimensionsSpec()
@NotNull Set<String>    getMetricNames()
TimestampSpec           getTimestampSpec()
-
Constructor Detail
-
InputRowSchema
public InputRowSchema(TimestampSpec timestampSpec, DimensionsSpec dimensionsSpec, ColumnsFilter columnsFilter)
-
InputRowSchema
public InputRowSchema(TimestampSpec timestampSpec, DimensionsSpec dimensionsSpec, ColumnsFilter columnsFilter, Set<String> metricNames)
-
-
Method Detail
-
getTimestampSpec
public TimestampSpec getTimestampSpec()
-
getDimensionsSpec
public DimensionsSpec getDimensionsSpec()
-
getColumnsFilter
public ColumnsFilter getColumnsFilter()
A ColumnsFilter that can filter down the list of columns that must be read after flattening.

Logically, Druid applies ingestion spec components in a fixed order: first flattenSpec (if any), then timestampSpec, then transformSpec, and finally dimensionsSpec and metricsSpec. If a flattenSpec is provided, this method returns a filter meant to be applied after flattening, so it reflects which columns must pass from the flattenSpec to everything beyond it.
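The idea above can be illustrated with a small self-contained sketch. Note that `InclusionFilter` and the column names below are hypothetical stand-ins, not Druid's actual `ColumnsFilter` implementation: the point is only that the filter applied at the flattened stage must admit every column consumed downstream (the timestamp column, the dimensions, and any transform or metric inputs), while unused flattened columns can be dropped.

```java
import java.util.Set;

// Hypothetical stand-in for an inclusion-based ColumnsFilter:
// admits only the columns named in the allow set.
final class InclusionFilter {
    private final Set<String> allowed;

    InclusionFilter(Set<String> allowed) {
        this.allowed = allowed;
    }

    boolean apply(String column) {
        return allowed.contains(column);
    }
}

public class ColumnsFilterSketch {
    public static void main(String[] args) {
        // Suppose flattening produced the columns "timestamp", "page",
        // "user", and "debugInfo". Downstream, timestampSpec reads
        // "timestamp" and dimensionsSpec reads "page" and "user", so the
        // post-flattening filter must include exactly those three.
        InclusionFilter filter =
            new InclusionFilter(Set.of("timestamp", "page", "user"));

        System.out.println(filter.apply("page"));      // prints true: needed by dimensionsSpec
        System.out.println(filter.apply("debugInfo")); // prints false: nothing downstream reads it
    }
}
```

Under this reading, the filter returned by getColumnsFilter() is simply the projection boundary between the flattenSpec and the rest of the ingestion spec.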