class NewsgroupsIndexer extends RDDProcessor with LazyLogging
Example application to index each article in the 20 newsgroups data as a Solr document.
The 20 newsgroups data is downloadable from http://qwone.com/~jason/20Newsgroups/.
Articles in any of the three archives available there can be indexed,
after first downloading one from the above page and unpacking it.
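For example, to fetch and unpack the 20news-19997.tar.gz archive (the other two archives
work the same way; the download URL below assumes the archives are served from the same
path as the page above):

  curl -O http://qwone.com/~jason/20Newsgroups/20news-19997.tar.gz
  tar xzf 20news-19997.tar.gz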
The path you supply as an argument to the --path cmdline option (see the
"Example invocation" section below) should be the directory containing the newsgroup
folders. All files found recursively under this path will be indexed to Solr.
Prerequisites
Start Solr in cloud mode, and create a target collection, e.g. (after downloading the
binary Solr distribution - see https://lucene.apache.org/solr/ - then unpacking and
changing to the unpacked root directory, e.g. solr-5.4.1/):

  bin/solr -c && bin/solr create -c testing -shards 2

Document fields
Each header present in the newsgroup articles will be indexed into a Solr dynamic field
whose name is prefixed with the header name, e.g. Subject: text will be indexed into a
field named Subject_txt_en.
Note that the set of headers in each of the three available archives is different; details
are on the download page above.
The body of each article will be indexed into the content_txt_en field.
The newsgroup_s field will contain the name of the article's parent directory.
The id field value will be in the format newsgroup_articlenum, e.g. "comp.graphics_38659",
where newsgroup is the name of the article's parent directory, and articlenum is the
article filename.
The filepath_s field will contain the full path of the article source file.
If you downloaded the 20news-19997.tar.gz archive, the only one with the Date: header,
dates will be indexed into two fields: the Date_s field will contain the original Date:
text, and the Date_tdt field will contain the date reformatted in ISO-8601 format.
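To make the field mapping concrete, here is a hedged sketch (in Scala, using SolrJ's
SolrInputDocument) of the document that indexing an article comp.graphics/38659 would
roughly produce; all field values below are invented for illustration:

  import org.apache.solr.common.SolrInputDocument

  // Illustrative only: field values are examples, not real archive contents.
  val doc = new SolrInputDocument()
  doc.setField("id", "comp.graphics_38659")                // newsgroup_articlenum
  doc.setField("newsgroup_s", "comp.graphics")             // parent directory name
  doc.setField("filepath_s", "/data/20news-19997/comp.graphics/38659")
  doc.setField("Subject_txt_en", "Re: Looking for a 3D graphics library")
  doc.setField("Date_s", "Mon, 29 Mar 1993 11:57:19 GMT")  // original Date: text
  doc.setField("Date_tdt", "1993-03-29T11:57:19Z")         // ISO-8601 reformatting
  doc.setField("content_txt_en", "<article body>")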
Example invocation
You must first run mvn -DskipTests package in the spark-solr project, and you must download
a Spark 1.6.1 binary distribution and point the environment variable $SPARK_HOME
to the unpacked distribution directory.

To see a description of all available options, run the following:
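The invocation below is a sketch, not a verified command line: the example-app name
(newsgroups2solr), the main class, the shaded-jar filename, and the use of --help to
print the options are assumptions that may differ in your build of spark-solr.

  $SPARK_HOME/bin/spark-submit --master 'local[2]' \
    --class com.lucidworks.spark.SparkApp \
    target/spark-solr-*-shaded.jar newsgroups2solr --help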