DataSource name in Druid.
Specifies how columns are related; see org.sparklinedata.druid.metadata.FunctionalDependency. Specified as a list of functional-dependency objects.
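A hedged sketch of what such a list might look like when passed as an option value. The field names (`col1`, `col2`, `type`) and the column names are illustrative assumptions, not confirmed fields of org.sparklinedata.druid.metadata.FunctionalDependency; consult that class for the actual schema.

```json
[
  {"col1": "l_orderkey", "col2": "o_orderkey", "type": "1-1"},
  {"col1": "l_suppkey",  "col2": "s_name",     "type": "n-1"}
]
```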
When loading Druid DataSource metadata, should the query interval cover the entire dataSource interval, or is the latest segment enough? The default is to load from all segments; since the metadata query sets ("analysisTypes" -> []), it is cheap.
If the estimated result cardinality of a query exceeds this value, the query is not converted to a Druid query.
If the result size estimate exceeds this number, an attempt is made to run 'n' Druid queries, each of which spans a sub-interval of the total time interval. 'n' is computed as result.size / thisParam + 1.
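A minimal sketch of this computation, assuming the intent is integer division so that each sub-interval query returns at most roughly `thisParam` rows (the function and parameter names below are illustrative, not from the connector's API):

```python
def num_subqueries(result_size_estimate: int, threshold: int) -> int:
    """Number of sub-interval Druid queries to issue so each returns
    roughly at most `threshold` rows (assumed interpretation)."""
    return result_size_estimate // threshold + 1

# e.g. an estimated 1000 rows against a threshold of 300 -> 4 sub-queries,
# each covering a quarter of the total time interval
print(num_subqueries(1000, 300))
```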
Controls whether query results from Druid are streamed into the Spark operator pipeline. The default is true.
A list of DruidRelationColumnInfo objects that provide details about the linkage from source columns to Druid columns.
Maps column names to Druid field names. Specified as a JSON string.
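For example, the mapping might be supplied as a JSON object keyed by source column name (the column names below are purely illustrative):

```json
{"l_quantity": "sum_l_quantity", "ps_availqty": "sum_ps_availqty"}
```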
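Taken together, options like these are typically supplied in the OPTIONS clause of a Spark SQL CREATE TABLE statement that uses the Druid datasource. The sketch below is a hypothetical illustration: the table, host, and column names are invented, and the exact option keys should be checked against the connector's documentation.

```sql
CREATE TABLE orderLineItemPartSupplier
USING org.sparklinedata.druid
OPTIONS (
  druidDatasource "tpch",        -- DataSource name in Druid
  druidHost "localhost",
  timeDimensionColumn "l_shipdate",
  columnMapping '{"l_quantity": "sum_l_quantity"}',
  functionalDependencies '[]'
)
```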