Wednesday, April 15, 2015

Luke gets support for Elasticsearch indices

That's it, really: the long-awaited proper support for Elasticsearch indices.





Luke already supported Apache Solr indices. Why not Elasticsearch? The reason is that ES uses its own SPI implementations for the postings format. If you tried to open an Elasticsearch index with Luke before, you'd get something like:

A SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'es090' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath. The current classpath supports the following names: [Lucene40, Lucene41]


The biggest issue with supporting a custom SPI is that you'd need to hack the Luke jar binary and add the ES SPI classes yourself. I bet that is not what you want to spend your time on.
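Under the hood, Lucene resolves postings formats through Java's SPI mechanism. Here is a minimal sketch of the lookup that produces the error above; the format name "es090" comes from the error message, while the class name and surrounding code are purely illustrative:

import org.apache.lucene.codecs.PostingsFormat;

public class SpiLookupCheck {
    public static void main(String[] args) {
        // Lucene delegates the lookup to its SPI loader; if no jar on the
        // classpath registers a PostingsFormat named "es090", this call
        // throws the IllegalArgumentException quoted above.
        PostingsFormat postingsFormat = PostingsFormat.forName("es090");
        System.out.println("Resolved postings format: " + postingsFormat.getName());
    }
}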

With the excellent pull request by apakulov (https://github.com/DmitryKey/luke/pull/23), Luke now uses the Maven Shade plugin, which does all the magic: it updates the in-binary META-INF/services file with the following entries:

org.elasticsearch.index.codec.postingsformat.Elasticsearch090PostingsFormat
org.elasticsearch.search.suggest.completion.Completion090PostingsFormat
org.elasticsearch.index.codec.postingsformat.BloomFilterPostingsFormat


Currently this is available on the luke master branch: https://github.com/DmitryKey/luke and as a pre-release: https://github.com/DmitryKey/luke/releases/tag/luke-4.10.4-field-reconstruction

Saturday, March 21, 2015

Flexible run-time logging configuration in Apache Solr 4.10.x

In a multi-shard setup it is useful to be able to change the log level at runtime without going to each and every shard's admin page.

For example, we can set the logging to the WARN level during massive posting sessions and back to INFO when serving user queries.

In solr 4.10.2 these one-liners do the trick:

# set logging level to WARN,
# saves disk space and speeds up massive posting 
curl -s http://localhost:8983/solr/admin/info/logging \
                       --data-binary "set=root:WARN&wt=json" 
 
# set logging level to INFO,
# suitable for serving the user queries 
curl -s http://localhost:8983/solr/admin/info/logging \
                       --data-binary "set=root:INFO&wt=json"

Back from Solr you get a JSON document with the current status of each configured logger.
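If you prefer to do this from code rather than curl, the same call can be made with plain JDK classes. A minimal Java sketch, assuming the default localhost:8983 endpoint used above:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SetSolrLogLevel {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8983/solr/admin/info/logging");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        // same parameters as the curl one-liner: set the root logger to WARN
        byte[] body = "set=root:WARN&wt=json".getBytes(StandardCharsets.UTF_8);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body);
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}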

Monday, March 16, 2015

Luke keeps getting updates and now on Apache Pivot

Originally written for fun and profit by Andrzej Bialecki, the Lucene toolbox Luke continues to be actively developed. Its releases are published at: https://github.com/DmitryKey/luke/releases


Most recently, Tomoko Uchida has contributed to the effort of porting Luke to Apache Pivot, a GUI framework compatible with the Apache License 2.0. A new branch has been created to host this work:

https://github.com/DmitryKey/luke/tree/pivot-luke

Currently supported Lucene: 4.10.4.

It is far from complete, but you can already:

  • open your Lucene index and check its metadata

  • page through the documents and analyze fields


  • search the index

We would appreciate it if you could test the Pivot-based Luke and give your feedback.

Monday, November 17, 2014

Lightweight Java Profiler and Interactive svg Flame Graphs

A colleague of mine has just returned from AWS re:Invent and brought back all the excitement about new AWS technologies. So I went on to watch the released videos of the talks. One of the first technical ones I watched was Performance Tuning Amazon EC2 Instances by Brendan Gregg of Netflix. From Brendan's talk I learnt about the Lightweight Java Profiler (LJP) and visualizing stack traces with flame graphs.

I'm quite 'obsessed' with monitoring and performance tuning based on it.
Monitoring your applications is definitely the way to:

1. Get numbers on performance inside your company, spread them and let people tell stories about them.
2. Tune the system where you see the bottleneck and measure again.

In this post I would like to share a shell script that produces a colourful and interactive flame graph out of a stack trace of your Java application. This may be useful in a variety of ways, from an impressive graph for your slides to informed tuning of your code / system.

Components to build / install

This was run on Ubuntu 12.04 LTS.
Check out the Lightweight Java Profiler project source code and build it:

svn checkout \
    http://lightweight-java-profiler.googlecode.com/svn/trunk/ \
    lightweight-java-profiler-read-only 
 
cd lightweight-java-profiler-read-only/
make BITS=64 all

(omit the BITS parameter if you want to build for a 32-bit platform).

As a result of a successful compilation you will have a liblagent.so binary that will be used to configure your Java process.


Next, clone the FlameGraph github repository:

git clone https://github.com/brendangregg/FlameGraph.git

You don't need to build anything; it is a collection of shell / Perl scripts that will do the magic.

Configuring the LJP agent on your java process

The next step is to configure the LJP agent to report stats from your Java process. I have picked a Solr instance running under Jetty. Here is how I have configured it in my Solr startup script:

java \
    -agentpath:/.../lightweight-java-profiler-read-only/build-64/liblagent.so \
    -Dsolr.solr.home=cores -jar start.jar

Executing the script should start the Solr instance normally and will log the stack traces to traces.txt.

Generating a Flame graph

In order to produce a flame graph out of the LJP stack trace you need to do the following:

1. Convert the LJP stack trace into the collapsed form that FlameGraph understands.

2. Run the flamegraph.pl tool on the collapsed stack trace to produce the svg file.


I have written a shell script that will do this for you.

#!/bin/sh

# change this variable to point to your FlameGraph directory
FLAME_GRAPH_HOME=/home/dmitry/tools/FlameGraph

LJP_TRACES_FILE=${1}
FILENAME=$(basename "$LJP_TRACES_FILE")
TRACES_DIR=$(dirname "$LJP_TRACES_FILE")

# e.g. traces.txt -> traces_collapsed.txt and traces.svg, next to the input file
LJP_TRACES_FILE_COLLAPSED=$TRACES_DIR/${FILENAME%.*}_collapsed.${FILENAME##*.}
FLAME_GRAPH=$TRACES_DIR/${FILENAME%.*}.svg

# collapse the LJP stack trace
$FLAME_GRAPH_HOME/stackcollapse-ljp.awk "$LJP_TRACES_FILE" > \
    "$LJP_TRACES_FILE_COLLAPSED"

# create a flame graph
$FLAME_GRAPH_HOME/flamegraph.pl "$LJP_TRACES_FILE_COLLAPSED" > \
    "$FLAME_GRAPH"


And here is the flame graph of my Solr instance under the indexing load.



You can interpret this diagram bottom-up: the lowest level is the entry point class that starts the application. Then we see that, CPU-wise, two methods take most of the time: org.eclipse.jetty.start.Main.main and java.lang.Thread.run.

This svg diagram is in fact interactive: load it in a browser and click the rectangles of the methods you would like to explore further. I have clicked the org.apache.solr.update.processor.UpdateRequestProcessor.processAdd rectangle and drilled down into it:


It is this easy to set up a CPU performance check for your Java program. Remember to monitor before tuning your code, and wear a helmet.

Friday, November 14, 2014

Ruby pearls and gems for your daily routine coding tasks

This is a list of Ruby pearls and gems that help me in my daily routine coding tasks.




1. Retain only unique elements in an array:

a = [1, 1, 2, 3, 4, 4, 5]

a = a.uniq # => [1, 2, 3, 4, 5]

2. Command line options parsing:

require 'optparse'

class Optparser
  def self.parse(args)
    options = {}
    options[:source_dir] = []

    OptionParser.new do |opts|
      opts.banner = "Usage: example.rb [options]"

      opts.on("-v", "--[no-]verbose", "Run verbosely") do |v|
        options[:verbose] = v
      end

      opts.on("-o", "--output OUTPUTDIR", "Output directory") do |o|
        options[:output_dir] = o
      end

      opts.on("-s", "--source SOURCEDIR", "Source directory") do |s|
        options[:source_dir] << s
      end
    end.parse!(args)

    options
  end
end

options = Optparser.parse(ARGV)
# pp options

When executed with -h, this script will automatically show the options and exit.

3. Delete a key-value pair in a hash map where the key matches a certain condition:

hashMap.delete_if {|key, value| key == "someString" }

Certainly, you can use regular-expression-based matching for the condition, or a custom function on the 'key' value.


4. Interacting with MySQL: I use the mysql2 gem. Check out its documentation, it is pretty self-explanatory.

5. Working with Apache Solr: the rsolr and rsolr-ext gems are invaluable here:

require 'rsolr'
require 'rsolr-ext'
solrServer = RSolr::Ext.connect :url => $solrServerUrl, :read_timeout => $read_timeout, :open_timeout => $open_timeout

doc = {"field1"=>"value1", "field2"=>"value2"}

solrServer.add doc

solrServer.commit(:commit_attributes => {:waitSearcher=>false, :softCommit=>false, :expungeDeletes=>true})
solrServer.optimize(:optimize_attributes => {:maxSegments=>1}) # single segment as output

Tuesday, September 23, 2014

Indexing documents in Apache Solr using custom update chain and solrj api

This post focuses on how to target a custom update chain using the solrj API when indexing your documents in Apache Solr. The reason this post exists is that I have spent more than an hour figuring this out, which warrants a blog post (hopefully for others' benefit as well).

Setup


Suppose that you have a default update chain that is executed in everyday situations, i.e. for the majority of input documents:

<updateRequestProcessorChain name="everydaychain" default="true">
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>

In some specific cases you would like to execute a slightly modified update chain, in this case with a factory that drops duplicate values from document fields. For that purpose you have configured a custom update chain:

<updateRequestProcessorChain name="customchain">
  <processor class="solr.UniqFieldsUpdateProcessorFactory">
    <lst name="fields">
      <str>field1</str>
    </lst>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>

Your update request handler looks like this:

<requestHandler name="/update" class="solr.UpdateRequestHandler">
  <lst name="defaults">
    <str name="update.chain">everydaychain</str>
  </lst>
</requestHandler>

Every time you hit /update from your solrj-backed code, you'll index documents using the "everydaychain".

Task


Using solrj, index documents against the custom update chain.

Solution


First, before diving into the solution, here is the code you would use for the normal indexing process from Java, i.e. with the default everyday chain:

HttpSolrServer httpSolrServer = null;
try {
     httpSolrServer = new HttpSolrServer("http://localhost:8983/solr/core0");
     SolrInputDocument sid = new SolrInputDocument();
     sid.addField("field1", "value1");
     httpSolrServer.add(sid);

     httpSolrServer.commit(); // hard commit; could be a soft one too
} catch (Exception e) {
     e.printStackTrace();
} finally {
     if (httpSolrServer != null) {
         httpSolrServer.shutdown(); // release the connection in all cases
     }
}

So far so good. Next, indexing with the custom update chain. This part is non-obvious from the point of view of the solrj API design: given an instance of SolrInputDocument, how would one target a custom update chain? You may notice how the update chain is referenced in the update request handler of your solrconfig.xml: via the update.chain parameter. Luckily, this is an HTTP parameter that can be supplied on the /update endpoint. Digging around the HTTP client inside the httpSolrServer object, however, led nowhere.

It turns out you can use the UpdateRequest class instead. It has a handy setParam() method that lets you set a value for the update.chain parameter:

HttpSolrServer httpSolrServer = null;
try {
    httpSolrServer = new HttpSolrServer(updateURL);

    SolrInputDocument sid = new SolrInputDocument();
    // dummy field
    sid.addField("field1", "value1");

    UpdateRequest updateRequest = new UpdateRequest();
    updateRequest.setCommitWithin(2000);
    updateRequest.setParam("update.chain", "customchain");
    updateRequest.add(sid);

    UpdateResponse updateResponse = updateRequest.process(httpSolrServer);
    if (updateResponse.getStatus() == 0) { // solrj reports 0 in the response header on success
        log.info("Successfully added a document");
    } else {
        log.info("Adding document failed, status code=" + updateResponse.getStatus());
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if (httpSolrServer != null) {
        httpSolrServer.shutdown();
        log.info("Released connection to the Solr server");
    }
}

Executing the second code snippet will trigger the LogUpdateProcessor to output the following line in the Solr logs:

org.apache.solr.update.processor.LogUpdateProcessor  –
   [core0] webapp=/solr path=/update params={wt=javabin&
      version=2&update.chain=customchain}

That's it for today. Happy indexing!

Wednesday, September 17, 2014

Exporting Lucene index to xml with Luke

Luke is the open source Lucene toolbox originally written by Andrzej Bialecki and currently maintained by yours truly. The tool allows you to introspect your Solr / Lucene index, check it for health, fix problems, verify field tokens, and even experiment with scoring or read the index from HDFS.

In this post I would like to illustrate one particular Luke feature that allows you to dump the index into XML for external processing.

Task

Extract indexed tokens from a field to a file for further analysis outside luke.

 

Indexing data

In order to extract tokens you need to index your field with term vectors enabled. Usually this also means that you need to configure positions and offsets.

If you are indexing using Apache Solr, you would configure the following on your field:

<field name="Contents" type="text" indexed="true" stored="true" omitNorms="false" termVectors="true" termPositions="true" termOffsets="true"/>

With this line you make sure your field stores its contents, not just indexes them; it will also store the term vectors, i.e. each term with its positions and offsets in the token stream.

 

Extracting index terms

One way to view the indexed tokens with Luke is to search / list documents, select the field with term vectors enabled and click the TV button (or right-click and choose "Field's Term Vector").




If you would like to extract this data into an external file, there is currently a way to accomplish this via the menu Tools -> Export index to XML:



In this case I have selected docid 94724 (note that this is Lucene's internal doc id, not the Solr application-level document id!), which is visible when viewing a particular document in Luke. This dumps the document into an xml file, including the fields from the schema and each field's contents. In particular, it will dump the term vectors (if present) of a field, in my case:

<field flags="Idfp--SV-Nnum--------" name="Contents">
<val>CENTURY TEXT.</val>
<tv>
<t freq="1" offsets="0-7" positions="0" text="centuri" />
<t freq="1" offsets="0-7" positions="0" text="centuryä" />
<t freq="1" offsets="8-12" positions="1" text="text" />
<t freq="1" offsets="8-12" positions="1" text="textä" />
</tv>
</field>
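
For comparison, the same term vector data can be read directly through the Lucene 4.x API. A minimal sketch; the index path is an assumption, while the field name and the internal doc id are the ones used above:

import java.io.File;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.BytesRef;

public class DumpTermVectors {
    public static void main(String[] args) throws Exception {
        // adjust the path to point to your index directory
        try (IndexReader reader = DirectoryReader.open(
                FSDirectory.open(new File("/path/to/index")))) {
            // 94724 is the internal Lucene doc id, as selected in Luke
            Terms termVector = reader.getTermVector(94724, "Contents");
            if (termVector == null) {
                System.out.println("No term vectors stored for this field");
                return;
            }
            TermsEnum termsEnum = termVector.iterator(null); // Lucene 4.x signature
            BytesRef term;
            while ((term = termsEnum.next()) != null) {
                System.out.println(term.utf8ToString() + " freq=" + termsEnum.totalTermFreq());
            }
        }
    }
}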