CSV output for Solr

29 07 2010

Solr has been able to slurp in CSV for quite some time, and now I’ve finally got around to adding the ability to output query results in CSV also. The output format matches what the CSV loader can slurp.

Adding a simple wt=csv to a query request will cause the docs to be written in a CSV format that can be loaded into something like Excel.

http://localhost:8983/solr/select?q=ipod&fl=id,cat,name,popularity,price,score&wt=csv

id,cat,name,popularity,price,score
IW-02,"electronics,connector",iPod & iPod Mini USB 2.0 Cable,1,11.5,0.98867977
F8V7067-APL-KIT,"electronics,connector",Belkin Mobile Power Cord for iPod w/ Dock,1,19.95,0.6523595
MA147LL/A,"electronics,music",Apple 60 GB iPod with Video Playback Black,10,399.0,0.2446348

CSV formats tend to vary, so there are a number of parameters that allow you to customize the output. For example, setting csv.escape=\ and csv.separator=%09 (a URL-encoded tab character) will use a tab separator and backslash escaping to match the default CSV format that MySQL uses.

http://localhost:8983/solr/select?q=ipod&fl=score,id&wt=csv&csv.escape=\&csv.separator=%09

score	id
0.98867977	IW-02
0.6523595	F8V7067-APL-KIT
0.2446348	MA147LL/A

The CSVResponseWriter is documented on the Solr Wiki, but you will need a recent nightly build (Solr 3.1-dev or Solr 4.0-dev) to try it out.





Ranges over Functions in Solr 1.4

6 07 2009

Solr 1.4 contains a new feature that allows range queries or range filters over arbitrary functions. It’s implemented as a standard Solr QParser plugin, and is thus easily available anywhere the standard Solr Query Syntax is accepted, by specifying the frange query type. Here’s an example of a filter specifying the lower and upper bounds for a function:

fq={!frange l=0 u=2.2}log(sum(user_ranking,editor_ranking))
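
For instance, that filter can be attached to an ordinary request like the one below. The q parameter is just a placeholder query for illustration, and the braces and spaces in the local params would normally be URL-encoded; they are shown literally here for readability.

http://localhost:8983/solr/select?q=ipod&fq={!frange l=0 u=2.2}log(sum(user_ranking,editor_ranking))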

The other interesting use for frange is to trade off memory for speed when doing range queries on any type of single-valued field. For example, one can use frange on a string field, provided that each document has at most one value for that field and that numeric functions are avoided.

Here is a filter that only allows authors between martin and rowling, specified using a standard range query:
fq=author_last_name:[martin TO rowling]

And the same filter using a function range query (frange):
fq={!frange l=martin u=rowling}author_last_name

This can lead to significant performance improvements for range queries with many terms between the endpoints, at the cost of the memory needed to hold the un-inverted form of the field (i.e. a FieldCache entry – the same as would be used for sorting). If the field in question is already being used for sorting or other function queries, there won’t be any additional memory overhead.

The following chart shows the results of a test of frange queries vs standard range queries on a string field with 200,000 unique values. For example, frange was 14 times faster when executing a range query / range filter that covered 20% of the terms in the field. For narrower ranges that matched less than 5% of the values, the traditional range query performed better.

Percent of terms covered    Fastest implementation    Speedup (times faster)
100%                        frange                    43.32
20%                         frange                    14.25
10%                         frange                    8.07
5%                          frange                    1.337
1%                          normal range query        3.59

Of course, Solr 1.4 also contains the new TrieRange functionality that will generally have the best time/space profile for range queries over numeric fields.





Filtered query performance increases for Solr 1.4

27 05 2009

One of the many performance improvements in the upcoming Solr 1.4 release involves improved filtering performance. Solr 1.4 filters are faster (anywhere from 30% to 80% faster to calculate intersections, depending on configuration), smaller (taking 40% less memory), and more efficiently applied to the query during a search.

In previous Solr releases, filters were applied after the main query and thus had little impact on overall query performance. Filters are now checked in parallel with the query, resulting in greater speedups when the filters match fewer documents.

Example: Adding a filter that matched 10% of a large index resulted in a 300% performance increase for a dismax query consisting of three words on a single field with proximity boost.
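
As a rough illustration only (the field names and the filter here are hypothetical, not the actual benchmark setup), such a request might look like:

http://localhost:8983/solr/select?defType=dismax&q=apple+ipod+video&qf=name&pf=name&fq=inStock:true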

Related issues:

https://issues.apache.org/jira/browse/SOLR-1169

https://issues.apache.org/jira/browse/SOLR-1179





Solr scalability improvements

1 12 2008

With the number of CPU cores per machine constantly increasing, there has been major work done in Lucene/Solr to increase scalability under multi-threaded load.

Read-only IndexReaders

One bottleneck was synchronization around the checking of deleted docs in a Lucene IndexReader.  Since another thread could delete a document at any time, the IndexReader.isDeleted() call was synchronized.  It’s a very quick call, simply checking if a bit is set in a BitVector, but the problem was that it can be called millions of times in the process of satisfying a single query. The Read-only IndexReader feature allowed for the removal of this synchronization by prohibiting deletion.
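
For reference, here is a minimal sketch (Lucene 2.4-era API; the index path is just a placeholder) of opening a reader in read-only mode:

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class ReadOnlyReaderSketch {
  public static void main(String[] args) throws IOException {
    // Open the index in read-only mode; deletes are disallowed through this
    // reader, so the isDeleted() check no longer needs synchronization.
    Directory dir = FSDirectory.getDirectory("/path/to/index");  // placeholder path
    IndexReader reader = IndexReader.open(dir, true);            // true = read-only
    System.out.println("maxDoc=" + reader.maxDoc());
    reader.close();
  }
}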

Use of NIO to read index files

The standard method for Lucene to read index files is via Java’s RandomAccessFile. Reading a part of the file involves two calls: a seek() to position the file pointer followed by a read() to get the data. For multiple threads to share the same RandomAccessFile instance, this obviously involves synchronization to avoid one thread changing the file pointer before another thread gets to read at the file position it set. If the data to be read isn’t in the operating system cache, it’s even worse news… the synchronization causes all other reads to block while the data is retrieved from disk, even if some of those reads could have been quickly satisfied.

The preferred solution would be to have a method on RandomAccessFile that accepted an offset to read from.  This could easily be implemented by the JVM via a pread() system call.  But since Sun has not provided this functionality, we need to use something else.  NIO’s FileChannel does have the type of method we are looking for:  FileChannel.read(ByteBuffer dst, long position)
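
Here is a minimal sketch of the difference between the two styles of reading. This is not Lucene’s actual code; the method names and parameters are just for illustration.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class PositionalReadSketch {
  // Classic approach: seek() and read() share a file pointer, so the
  // pair must be synchronized across threads.
  static void readClassic(RandomAccessFile raf, long pos, byte[] buf) throws IOException {
    synchronized (raf) {
      raf.seek(pos);        // moves the shared file pointer
      raf.readFully(buf);   // must complete before another thread seeks
    }
  }

  // NIO approach: FileChannel.read(ByteBuffer, long) takes the offset as an
  // argument, so threads can read different positions without locking.
  static void readPositional(FileChannel channel, long pos, ByteBuffer buf) throws IOException {
    channel.read(buf, pos);
  }
}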

Solr now uses the non-synchronizing NIO method of reading index files (via Lucene’s NIOFSDirectory) by default if you are on a non-Windows platform.  Windows systems default to the older method since it turns out to be faster than the new method – the reason being a long standing “bug” in Java that still synchronizes internally even when using FileChannel.read().

Non blocking caches

Solr’s standard LRU cache implementation uses a synchronized LinkedHashMap. A single cache could be checked hundreds or thousands of times during the course of a single request that involves faceting. A non-blocking ConcurrentLRUCache was developed as an alternative implementation, and is now the default for Solr’s filter cache. One user indicated that this has doubled their query throughput under ideal circumstances.
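
The new implementation is exposed as the solr.FastLRUCache class, so a solrconfig.xml entry along these lines selects it explicitly for the filter cache (the sizes shown here are only illustrative):

<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>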

Where to find this scalability goodness?

Solr 1.3 has read-only IndexReaders, but for the other scalability improvements, including the improved faceting, you’ll have to grab a nightly Solr build.





Solr Faceted Search Performance Improvements

25 11 2008

See the facet performance benchmarks on my new blog for the latest numbers.





lookup3ycs : a standard high performance string hash

14 06 2008

I was surprised to discover that there isn’t a good cross-platform hash function defined for strings. MD5, SHA, FNV, etc., all define hash functions over bytes, meaning that they are under-specified for strings.

So I set out to create a standard 32 bit string hash that would be well defined for implementation in all languages, have very high performance, and have very good hash properties such as distribution. After evaluating all the options, I settled on using Bob Jenkins’ lookup3 as a base. It’s a well-studied and very fast hash function, and the hashword variant can work with 32 bits at a time (perfect for hashing unicode code points). It’s also even faster on the latest JVMs, which can translate pairs of shifts into native rotate instructions.

The only problem with using lookup3 hashword is that it includes the length in the initial value. This would suck some performance out, since directly hashing a UTF-8 or UTF-16 string (as in Java) would require a pre-scan to get the actual number of unicode code points. The solution was to simply remove the length factor, which is equivalent to biasing initval by -(numCodePoints*4). This slightly modified lookup3 I define as lookup3ycs.
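
To make the no-pre-scan point concrete, here is a sketch of feeding a Java String’s code points to a word-at-a-time hash. The mixInWord method is a hypothetical placeholder, not lookup3’s real mixing; it is only there so the sketch compiles.

public class CodePointHashSketch {
  static int h;  // running hash state (placeholder only)

  // Placeholder mixing step -- NOT lookup3's mix; just here so the sketch compiles.
  static void mixInWord(int word) { h = h * 31 + word; }

  // Feed a String's unicode code points to a word-at-a-time hash without
  // pre-scanning the string to count the code points up front.
  static int hashCodePoints(String s) {
    for (int i = 0; i < s.length(); ) {
      int cp = s.codePointAt(i);      // one code point (may span two UTF-16 chars)
      mixInWord(cp);                  // consume 32 bits at a time
      i += Character.charCount(cp);   // advance 1 or 2 chars
    }
    return h;
  }
}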

So the definition of the cross-platform string hash lookup3ycs is:

The hash value of a character sequence (a string) is defined to be the hash of its unicode code points, according to lookup3 hashword, with the initval biased by -(length*4).

So by definition

lookup3ycs(k,offset,length,initval) == lookup3(k,offset,length,initval-(length*4))

AND

lookup3ycs(k,offset,length,initval+(length*4)) == lookup3(k,offset,length,initval)

An obvious advantage of this relationship is that you can use lookup3 if you don’t have an implementation of lookup3ycs.

Here’s my optimized version for Java

Update: I’ve also included a 64 bit version called lookup3ycs64





Distributed Search for Solr

27 02 2008

A new chapter in Solr scalability has been opened with the addition of distributed search!

http://wiki.apache.org/solr/DistributedSearch

Distributed Search splits an index into multiple shards, and queries across all the shards, combining the results and presenting a single merged response that looks like it came from a single server.

Solr’s current implementation uses SolrJ (the Solr Java client) to talk to other Solr servers via HTTP, in two main phases. The first phase collects matching document ids and scores, as well as doing any requested faceting. The second phase retrieves the stored fields for selected documents, does highlighting, and may include additional faceting requests to nail down exact facet counts.
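
Trying it out is as simple as adding a shards parameter listing the URL of each shard (the host names below are just examples):

http://localhost:8983/solr/select?shards=shard1.example.com:8983/solr,shard2.example.com:8983/solr&q=ipod&fl=id,name,score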