org.sparklinedata.druid

DateTimeGroupingElem

case class DateTimeGroupingElem(outputName: String, druidColumn: DruidRelationColumn, formatToApply: String, tzForFormat: Option[String], pushedExpression: Expression, inputFormat: Option[String] = scala.None) extends Product with Serializable

outputName

the name of the column in the Druid resultset

druidColumn

the Druid Dimension this is mapped to

formatToApply

Spark DateTime expressions are handled in Druid as a TimeFormatExtractionFunctionSpec; this specifies the format to apply to the Druid Dimension.

tzForFormat

the optional time zone to use with formatToApply; see formatToApply above.
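As a hedged illustration of how formatToApply and tzForFormat might translate into Druid's timeFormat extraction function, the sketch below builds the corresponding JSON fragment. The helper name and sample values are invented for illustration; the field names follow Druid's query API.

```scala
// Hypothetical sketch: map formatToApply / tzForFormat onto the JSON of a
// Druid "timeFormat" extraction function (TimeFormatExtractionFunctionSpec).
// The helper name and sample values below are invented for illustration.
def timeFormatSpec(formatToApply: String, tzForFormat: Option[String]): String = {
  val tz = tzForFormat.map(t => s", \"timeZone\" : \"$t\"").getOrElse("")
  s"{ \"type\" : \"timeFormat\", \"format\" : \"$formatToApply\"$tz }"
}

println(timeFormatSpec("yyyy-MM-dd", Some("UTC")))
// { "type" : "timeFormat", "format" : "yyyy-MM-dd", "timeZone" : "UTC" }
```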

pushedExpression

this controls the expression evaluation that happens on values returned from Druid. For example, an expression like

to_date(cast(dateCol as DateType))

is evaluated as

to_date(cast(DruidValue, DateType))

on the Druid resultset. This is required because Dates are Ints and Timestamps are Longs in Spark, whereas the value coming out of Druid is an ISO DateTime string.
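A minimal sketch (not the connector's actual code) of the conversion such a pushed expression performs: turning Druid's ISO DateTime string into the Int (DateType) and Long (TimestampType) representations Spark uses internally. The sample value is invented.

```scala
import java.time.{Instant, OffsetDateTime}
import java.time.temporal.ChronoUnit

// Sketch: Druid returns an ISO DateTime string; Spark stores DateType as
// days since the epoch (Int) and TimestampType as microseconds (Long).
val druidValue = "2020-01-15T00:00:00.000Z"   // sample value, for illustration

val odt = OffsetDateTime.parse(druidValue)

// DateType representation: days since 1970-01-01
val sparkDate: Int = odt.toLocalDate.toEpochDay.toInt

// TimestampType representation: microseconds since the epoch
val sparkTimestamp: Long = ChronoUnit.MICROS.between(Instant.EPOCH, odt.toInstant)

println(sparkDate)        // 18276
println(sparkTimestamp)   // 1579046400000000
```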

inputFormat

the format used to parse the input value.
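Where the underlying Druid value is not in ISO form, inputFormat supplies the parse pattern. A minimal sketch, with an invented pattern and value:

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter

// Sketch: use inputFormat, when present, to parse the incoming value.
// The pattern "yyyyMMdd" and the raw value are invented for illustration.
val inputFormat: Option[String] = Some("yyyyMMdd")
val raw = "20200115"

val parsed: LocalDate = inputFormat match {
  case Some(fmt) => LocalDate.parse(raw, DateTimeFormatter.ofPattern(fmt))
  case None      => LocalDate.parse(raw)   // default: ISO-8601 date
}

println(parsed)   // 2020-01-15
```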

Linear Supertypes
Serializable, Serializable, Product, Equals, AnyRef, Any

Instance Constructors

  1. new DateTimeGroupingElem(outputName: String, druidColumn: DruidRelationColumn, formatToApply: String, tzForFormat: Option[String], pushedExpression: Expression, inputFormat: Option[String] = scala.None)


Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. val druidColumn: DruidRelationColumn

    the Druid Dimension this is mapped to

  9. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. val formatToApply: String

Spark DateTime expressions are handled in Druid as a TimeFormatExtractionFunctionSpec; this specifies the format to apply to the Druid Dimension.

  12. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  13. val inputFormat: Option[String]

the format used to parse the input value.

  14. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  15. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  16. final def notify(): Unit

    Definition Classes
    AnyRef
  17. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  18. val outputName: String

    the name of the column in the Druid resultset

  19. val pushedExpression: Expression

this controls the expression evaluation that happens on values returned from Druid. For example, an expression like

to_date(cast(dateCol as DateType))

is evaluated as

to_date(cast(DruidValue, DateType))

on the Druid resultset. This is required because Dates are Ints and Timestamps are Longs in Spark, whereas the value coming out of Druid is an ISO DateTime string.

  20. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  21. val tzForFormat: Option[String]

the optional time zone to use with formatToApply; see formatToApply above.

  22. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  23. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  24. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Inherited from Serializable

Inherited from Serializable

Inherited from Product

Inherited from Equals

Inherited from AnyRef

Inherited from Any
