Package com.lucidworks.spark

package spark

Type Members

  1. case class CloudStreamPartition(index: Int, zkhost: String, collection: String, params: SolrParams) extends Partition with Product with Serializable

  2. class CollectionEmptyException extends IllegalStateException

  3. case class ExportHandlerPartition(index: Int, solrShard: SolrShard, query: SolrQuery, preferredReplica: SolrReplica, numWorkers: Int, workerId: Int) extends SolrRDDPartition with Product with Serializable

  4. trait LazyLogging extends AnyRef

  5. final class Logger extends Serializable

  6. case class SelectSolrRDDPartition(index: Int, cursorMark: String, solrShard: SolrShard, query: SolrQuery, preferredReplica: SolrReplica) extends SolrRDDPartition with Product with Serializable

  7. class ShardIndexPartitioner extends Partitioner with Serializable

  8. class SolrConf extends Serializable with LazyLogging

  9. case class SolrLimitPartition(index: Int = 0, zkhost: String, collection: String, maxRows: Int, query: SolrQuery) extends Partition with Product with Serializable

  10. trait SolrRDDPartition extends Partition

  11. class SolrRelation extends BaseRelation with Serializable with TableScan with PrunedFilteredScan with InsertableRelation with LazyLogging

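    SolrRelation is what backs the `solr` Spark SQL data source: TableScan and PrunedFilteredScan drive reads with column pruning and filter pushdown, and InsertableRelation drives writes. A minimal read sketch (the ZooKeeper host and collection name below are placeholders, and this assumes a running SolrCloud cluster with the spark-solr data source on the classpath):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("solr-read-sketch").getOrCreate()

// Options are parsed by SolrConf; "zkhost" and "collection" identify the cluster and index.
val df = spark.read
  .format("solr")
  .option("zkhost", "localhost:9983")      // placeholder ZooKeeper connect string
  .option("collection", "my_collection")   // placeholder collection name
  .load()

// Selecting columns and filtering here exercises the PrunedFilteredScan path,
// so pruning and filters are pushed down to Solr rather than applied in Spark.
df.select("id", "title_s").filter("title_s IS NOT NULL").show()
```
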
  12. case class SolrReplica(replicaNumber: Int, replicaName: String, replicaUrl: String, replicaHostName: String, locations: Array[InetAddress]) extends Product with Serializable

  13. case class SolrShard(shardName: String, replicas: List[SolrReplica]) extends Product with Serializable

  14. class SolrStreamWriter extends Sink with LazyLogging

    Writes a Spark stream to Solr.
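
    As a sketch (the zkhost, collection, and paths are placeholders; this assumes the same option names as the batch read path and a running SolrCloud cluster), wiring a structured stream into this sink looks like:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("solr-stream-sketch").getOrCreate()

// Any streaming source works; a JSON file source is used here as a stand-in.
val stream = spark.readStream
  .format("json")
  .schema(spark.read.json("/path/to/sample.json").schema)
  .load("/path/to/input")

// SolrStreamWriter is the Sink behind the "solr" streaming format.
val query = stream.writeStream
  .format("solr")
  .option("zkhost", "localhost:9983")            // placeholder ZooKeeper connect string
  .option("collection", "my_collection")         // placeholder collection name
  .option("checkpointLocation", "/path/to/ckpt") // required by structured streaming
  .start()

query.awaitTermination()
```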

  15. class SparkApp extends Serializable

  16. class SparkSolrAccumulator extends AccumulatorV2[Long, Long]

  17. case class StreamField(name: String, dataType: DataType, alias: Option[String], hasReplace: Boolean = false) extends Product with Serializable

  18. case class StreamFields(collection: String, fields: ListBuffer[StreamField], metrics: ListBuffer[StreamField]) extends Product with Serializable

  19. trait StrictLogging extends AnyRef

  20. class TimePartitioningQuery extends LazyLogging


Value Members

  1. object JsonFacetUtil extends LazyLogging

  2. object Logger extends Serializable

    Companion for Logger, providing a factory for Loggers.

  3. object SolrPartitioner

  4. object SolrRelation extends LazyLogging with Serializable

  5. object SparkSolrAccumulatorContext

    Spark makes it impossible to look up an accumulator by name, so this object holds a global singleton through which external clients of this library can access the accumulators that spark-solr creates for reading and writing. Get rid of this once Spark ties accumulators to the context (SPARK-13051).

    A global singleton is not ideal, but there is no obvious alternative.
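
    The pattern described above, a process-wide registry mapping a caller-chosen name to an accumulator, can be illustrated in plain Scala. The names below (`AccumulatorRegistry`, `register`, `lookup`) are hypothetical illustrations, not this library's actual API:

```scala
import scala.collection.concurrent.TrieMap

// Hypothetical stand-in for a global accumulator registry: it maps a
// caller-chosen name to the accumulator's numeric id, so code that only
// knows the name can recover the id later. TrieMap gives thread-safe
// access without explicit locking.
object AccumulatorRegistry {
  private val byName = TrieMap.empty[String, Long]

  // Record the id of an accumulator under a well-known name.
  def register(name: String, id: Long): Unit = byName.update(name, id)

  // Recover the id later from the name alone; None if never registered.
  def lookup(name: String): Option[Long] = byName.get(name)
}

// Usage: the writing side registers under a well-known name...
AccumulatorRegistry.register("spark-solr.docs-read", 42L)

// ...and an external client looks it up without holding a direct reference.
val maybeId = AccumulatorRegistry.lookup("spark-solr.docs-read")
```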

  6. object TimePartitioningQuery

  7. package analysis

  8. package example

  9. package filter

  10. package fusion

  11. package ml

  12. package port

  13. package query

  14. package rdd

  15. package util

