Package com.lucidworks.spark.example.ml

Type Members

  1. class MLPipeline extends RDDProcessor

  2. class MLPipelineScala extends RDDProcessor

    An example of building a spark.ml classification model to predict the newsgroup of articles from the 20 newsgroups data (see http://qwone.com/~jason/20Newsgroups/) hosted in a Solr collection.
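
    A minimal sketch of what such a pipeline can look like (illustrative only, not MLPipelineScala's actual code: the zkhost and collection values match the newsgroups2solr invocation below, the feature dimension is arbitrary, and the Solr field names follow the NewsgroupsIndexer conventions documented below):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext
    import org.apache.spark.ml.Pipeline
    import org.apache.spark.ml.classification.NaiveBayes
    import org.apache.spark.ml.feature.{HashingTF, StringIndexer, Tokenizer}

    object MLPipelineSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("ml-pipeline-sketch"))
        val sqlContext = new SQLContext(sc)

        // Load the indexed articles through the spark-solr data source.
        val docs = sqlContext.read.format("solr")
          .options(Map("zkhost" -> "localhost:9983", "collection" -> "ml20news"))
          .load()

        // Label = newsgroup name; features = hashed term frequencies of the article body.
        val labelIndexer = new StringIndexer().setInputCol("newsgroup_s").setOutputCol("label")
        val tokenizer = new Tokenizer().setInputCol("content_txt_en").setOutputCol("words")
        val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features").setNumFeatures(10000)
        val nb = new NaiveBayes() // multiclass-capable; TF features are non-negative

        val pipeline = new Pipeline().setStages(Array(labelIndexer, tokenizer, hashingTF, nb))
        val Array(train, test) = docs.randomSplit(Array(0.8, 0.2), seed = 42)
        val model = pipeline.fit(train)
        model.transform(test).select("newsgroup_s", "prediction").show(10)

        sc.stop()
      }
    }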

    Prerequisites

    You must first run mvn -DskipTests package in the spark-solr project, download a Spark 1.6.1 binary distribution, and point the $SPARK_HOME environment variable at the unpacked distribution directory.

    Follow the instructions in the NewsgroupsIndexer example's scaladoc to populate a Solr collection with articles from the above-linked 20 newsgroup data.

    Example invocation

    $SPARK_HOME/bin/spark-submit --master 'local[2]' --class com.lucidworks.spark.SparkApp \
    target/spark-solr-2.0.0-SNAPSHOT-shaded.jar ml-pipeline-scala

    To see a description of all available options, run the following:

    $SPARK_HOME/bin/spark-submit --class com.lucidworks.spark.SparkApp \
    target/spark-solr-2.0.0-SNAPSHOT-shaded.jar ml-pipeline-scala --help
  3. class NewsgroupsIndexer extends RDDProcessor with LazyLogging


    Example application to index each article in the 20 newsgroups data as a Solr document. The 20 newsgroups data is downloadable from http://qwone.com/~jason/20Newsgroups/.

    Articles in any of the three archives available there can be indexed; first download an archive from the above page and unpack it.

    The path you supply as an argument to the --path command-line option (see the "Example invocation" section below) should be the directory containing the newsgroup folders. All files found recursively under this path will be indexed to Solr.
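
    Conceptually, the loading step amounts to something like the following sketch (illustrative only, not the class's actual implementation; the path is a placeholder):

    import org.apache.spark.{SparkConf, SparkContext}

    object LoadArticlesSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("load-articles-sketch"))
        // The glob assumes the 20 newsgroups layout: <path>/<newsgroup>/<articlenum>.
        val articles = sc.wholeTextFiles("/path/to/20news-18828/*") // (filePath, contents) pairs
        println(s"Loaded ${articles.count()} articles")
        sc.stop()
      }
    }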

    Prerequisites

    Start Solr in cloud mode and create a target collection. After downloading the binary Solr distribution (see https://lucene.apache.org/solr/), unpack it, change to the unpacked root directory (e.g. solr-5.4.1/), and run:

    bin/solr start -c && bin/solr create -c testing -shards 2

    Document fields

    Each header present in the newsgroup articles will be indexed to a Solr dynamic field whose name is prefixed with the header name, e.g. Subject: text will be indexed into a field named Subject_txt_en.

    Note that the set of headers in each of the three available archives is different; details are on the download page above.

    The body of each article will be indexed into the content_txt_en field.

    The newsgroup_s field will contain the name of the article's parent directory.

    The id field value will be in the format newsgroup_articlenum, e.g. "comp.graphics_38659", where newsgroup is the name of the article's parent directory, and articlenum is the article filename.

    The filepath_s field will contain the full path of the article source file.

    If you downloaded the 20news-19997.tar.gz archive, the only one with the Date: header, dates will be indexed into two fields: the Date_s field will contain the original Date: text, and the Date_tdt field will contain the date reformatted in ISO-8601 format.
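
    Taken together, the field mapping described above might look like the following sketch (illustrative only, not the indexer's actual code; the Date: handling performed by DateConverter is elided here):

    import org.apache.solr.common.SolrInputDocument

    object ArticleToDocSketch {
      // Build a SolrInputDocument from an article's file path and raw text,
      // following the field conventions described above.
      def toDoc(filePath: String, raw: String): SolrInputDocument = {
        val doc = new SolrInputDocument()
        val parts = filePath.split("/")
        val (newsgroup, articleNum) = (parts(parts.length - 2), parts.last)
        doc.setField("id", s"${newsgroup}_$articleNum") // e.g. "comp.graphics_38659"
        doc.setField("newsgroup_s", newsgroup)          // parent directory name
        doc.setField("filepath_s", filePath)            // full source path

        // Headers and body are separated by the first blank line.
        val (headerLines, bodyLines) = raw.split("\n").span(_.trim.nonEmpty)
        headerLines.foreach { line =>
          line.split(":", 2) match {
            case Array(name, value) => doc.setField(name.trim + "_txt_en", value.trim)
            case _                  => // header continuation lines are ignored in this sketch
          }
        }
        doc.setField("content_txt_en", bodyLines.mkString("\n").trim)
        doc
      }
    }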

    Example invocation

    You must first run mvn -DskipTests package in the spark-solr project, download a Spark 1.6.1 binary distribution, and point the $SPARK_HOME environment variable at the unpacked distribution directory.

    $SPARK_HOME/bin/spark-submit --master 'local[2]' --class com.lucidworks.spark.SparkApp \
    target/spark-solr-2.0.0-SNAPSHOT-shaded.jar newsgroups2solr -zkHost localhost:9983     \
    -collection ml20news -path /relative/or/absolute/path/to/20news-18828

    To see a description of all available options, run the following:

    $SPARK_HOME/bin/spark-submit --class com.lucidworks.spark.SparkApp \
    target/spark-solr-2.0.0-SNAPSHOT-shaded.jar newsgroups2solr --help
  4. class UseML extends RDDProcessor


Value Members

  1. object DateConverter extends Serializable with LazyLogging


    Converts 3-letter time zone IDs to IDs that Joda-Time understands, parses dates using a set of date formats known to be present in the 20 newsgroups data, and converts them to ISO-8601 format.
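
    A minimal sketch of that kind of conversion using Joda-Time (the zone aliases and date patterns here are illustrative, not DateConverter's actual tables):

    import org.joda.time.DateTimeZone
    import org.joda.time.format.{DateTimeFormat, ISODateTimeFormat}

    object DateConvertSketch {
      private val zoneAliases = Map("EST" -> "America/New_York", "PST" -> "America/Los_Angeles")
      private val patterns = Seq("EEE, d MMM yy HH:mm:ss", "d MMM yy HH:mm:ss")
      private val ZonedDate = """(.*)\s+([A-Z]{3})$""".r

      // Parse e.g. "Mon, 29 Mar 93 04:20:12 EST" and re-emit it as ISO-8601.
      def toIso8601(raw: String): Option[String] = {
        // Split off a trailing 3-letter zone abbreviation and map it to an ID
        // Joda-Time understands (Joda-Time cannot parse zone names directly).
        val (datePart, zone) = raw.trim match {
          case ZonedDate(d, abbr) if zoneAliases.contains(abbr) =>
            (d, DateTimeZone.forID(zoneAliases(abbr)))
          case other => (other, DateTimeZone.UTC)
        }
        patterns.view.flatMap { p =>
          try {
            val fmt = DateTimeFormat.forPattern(p).withLocale(java.util.Locale.ENGLISH).withZone(zone)
            Some(ISODateTimeFormat.dateTime.print(fmt.parseDateTime(datePart)))
          } catch { case _: IllegalArgumentException => None }
        }.headOption
      }
    }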

  2. object MLPipelineScala extends Serializable

  3. object NewsgroupsIndexer extends Serializable

