Advanced Filter Caching in Solr

10 02 2012

Note: my blog has moved here





Solr’s Realtime Get

7 09 2011

Note: my blog has moved here





Solr relevancy function queries

10 03 2011

Note: my blog has moved here





Solr Result Grouping / Field Collapsing Improvements

17 12 2010

I previously introduced Solr’s Result Grouping, also called Field Collapsing, that limits the number of documents shown for each “group”, normally defined as the unique values in a field or function query.

Since then, there have been a number of bug fixes, performance improvements, and feature enhancements. You’ll need a recent nightly build of Solr 4.0-dev, or the newly released LucidWorks Enterprise v1.6, our commercial version of Solr.

Feature Enhancements

One improvement is the ability to group by query via the group.query parameter. This functionality is very similar to facet.query, except that it retrieves the top documents that match the query, not just the count. This has many potential uses, including always getting the top documents for specific groups, or defining custom groups such as price ranges.
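
For example, a request along these lines (the price field and the ranges here are just illustrative) would return the top documents for each custom price range:
…&q=ipod&group=true&group.query=price:[0 TO 100]&group.query=price:[100 TO *]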

Another useful capability is the addition of the group.main parameter. Setting this to true causes the results of the first grouping command to be used as the main result list in a flattened response format that legacy clients will be able to handle.

For example, the grouped response format normally returns highly structured results under “grouped”.
…&q=solr+memory&group=true&group.field=manu_exact


 "grouped":{
  "manu_exact":{
   "matches":6,
   "groups":[{
     "groupValue":"Apache Software Foundation",
     "doclist":{"numFound":1,"start":0,"docs":[
       {
        "id":"SOLR1000",
        "name":"Solr, the Enterprise Search Server",
        "manu":"Apache Software Foundation"}]
     }},
    {
     "groupValue":"Corsair Microsystems Inc.",
     "doclist":{"numFound":2,"start":0,"docs":[
       {
        "id":"VS1GB400C3",
        "name":"CORSAIR ValueSelect 1GB 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) System Memory - Retail",
        "manu":"Corsair Microsystems Inc."}]
     }},
[...]

If we add group.main=true to the request, then we get back a much more familiar looking response (i.e. it looks like a normal non-grouped response):
…&q=solr+memory&group=true&group.field=manu_exact&group.main=true


 "response":{"numFound":6,"start":0,"docs":[
   {
    "id":"SOLR1000",
    "name":"Solr, the Enterprise Search Server",
    "manu":"Apache Software Foundation"},
   {
    "id":"VS1GB400C3",
    "name":"CORSAIR ValueSelect 1GB 184-Pin DDR SDRAM Unbuffered DDR 400 (PC 3200) System Memory - Retail",
    "manu":"Corsair Microsystems Inc."},

One can also use the group.format=simple parameter to select this simplified flattened response within the normal “grouped” section of the response.
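
For instance, reusing the query from the examples above:
…&q=solr+memory&group=true&group.field=manu_exact&group.format=simple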

Other recent enhancements include support for debug explain output, highlighting, and faceting, as well as the ability to handle missing values in the grouping field by treating all documents without a value as being in the “null” group.

Performance Enhancements

There have been a number of performance enhancements, including improved short-circuiting logic that cuts off low-ranking documents earlier in the process. This important optimization resulted in a speedup of about 9x for collapsing on certain fields!

Collapsing on string fields was further optimized with specialized code that worked on ord values instead of the string values. This doubled the performance yet again!

Please see the Solr Wiki for further documentation on all of result grouping’s capabilities and parameters.





Ranges over Functions in Solr 1.4

6 07 2009

Solr 1.4 contains a new feature that allows range queries or range filters over arbitrary functions.  It’s implemented as a standard Solr QParser plugin, and thus easily available for use any place that accepts the standard Solr Query Syntax by specifying the frange query type.  Here’s an example of a filter specifying the lower and upper bounds for a function:

fq={!frange l=0 u=2.2}log(sum(user_ranking,editor_ranking))

Another interesting use for frange is to trade memory for speed when doing range queries on any type of single-valued field. For example, one can use frange on a string field provided that there is only one value per document, and that numeric functions are avoided.

For example, here is a filter that only allows authors between martin and rowling, specified using a standard range query:
fq=author_last_name:[martin TO rowling]

And the same filter using a function range query (frange):
fq={!frange l=martin u=rowling}author_last_name

This can lead to significant performance improvements for range queries with many terms between the endpoints, at the cost of memory to hold the un-inverted form of the field in memory (i.e. a FieldCache entry – same as would be used for sorting). If the field in question is already being used for sorting or other function queries, there won’t be any additional memory overhead.

The following table shows the results of a test of frange queries vs standard range queries on a string field with 200,000 unique values. For example, frange was 14 times faster when executing a range query / range filter that covered 20% of the terms in the field. For narrower ranges that matched less than 5% of the values, the traditional range query performed better.

Percent of terms covered | Fastest implementation | Speedup (how many times faster)
-------------------------|------------------------|--------------------------------
100%                     | frange                 | 43.32
20%                      | frange                 | 14.25
10%                      | frange                 | 8.07
5%                       | frange                 | 1.337
1%                       | normal range query     | 3.59

Of course, Solr 1.4 also contains the new TrieRange functionality that will generally have the best time/space profile for range queries over numeric fields.





Filtered query performance increases for Solr 1.4

27 05 2009

One of the many performance improvements in the upcoming Solr 1.4 release involves improved filtering performance. Solr 1.4 filters are faster to intersect (anywhere from 30% to 80% faster, depending on configuration), take less memory (40% smaller), and are applied to the query more efficiently during a search.

In previous Solr releases, filters were applied after the main query and thus had little impact on overall query performance. Filters are now checked in parallel with the query, resulting in greater speedups when the filters match fewer documents.

Example: Adding a filter that matched 10% of a large index resulted in a 300% performance increase for a dismax query consisting of three words on a single field with proximity boost.
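
A request of that shape might look like the following sketch (the field names and the filter here are just illustrative, not the exact fields used in the benchmark):
…&defType=dismax&q=apache+solr+search&qf=name&pf=name&fq=inStock:true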

Related issues:

https://issues.apache.org/jira/browse/SOLR-1169

https://issues.apache.org/jira/browse/SOLR-1179





Solr scalability improvements

1 12 2008

With CPU cores constantly increasing, there has been some major work done in Lucene/Solr to increase the scalability under multi-threaded load.

Read-only IndexReaders

One bottleneck was synchronization around the checking of deleted docs in a Lucene IndexReader.  Since another thread could delete a document at any time, the IndexReader.isDeleted() call was synchronized.  It’s a very quick call, simply checking if a bit is set in a BitVector, but the problem was that it can be called millions of times in the process of satisfying a single query. The Read-only IndexReader feature allowed for the removal of this synchronization by prohibiting deletion.
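
As a minimal sketch using the Lucene 2.4-era IndexReader.open(Directory, boolean) API (the index path is just a placeholder), opening a read-only reader looks like this:

import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class ReadOnlyReaderExample {
  public static void main(String[] args) throws IOException {
    Directory dir = FSDirectory.getDirectory("/path/to/index"); // placeholder path
    // Passing readOnly=true returns a reader that disallows deletions,
    // so isDeleted() no longer needs to synchronize on every call.
    IndexReader reader = IndexReader.open(dir, true);
    System.out.println("maxDoc=" + reader.maxDoc());
    reader.close();
    dir.close();
  }
}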

Use of NIO to read index files

The standard method for Lucene to read index files is via Java’s RandomAccessFile. Reading a part of the file involves two calls: a seek() to position the file pointer, followed by a read() to get the data. For multiple threads to share the same RandomAccessFile instance, this obviously involves synchronization to avoid one thread changing the file pointer before another thread gets to read at the file position it set. If the data to be read isn’t in the operating system cache, it’s even worse news: the synchronization causes all other reads to block while the data is retrieved from disk, even if some of those reads could have been quickly satisfied.

The preferred solution would be to have a method on RandomAccessFile that accepted an offset to read from.  This could easily be implemented by the JVM via a pread() system call.  But since Sun has not provided this functionality, we need to use something else.  NIO’s FileChannel does have the type of method we are looking for:  FileChannel.read(ByteBuffer dst, long position)
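
The following sketch (with a placeholder file path) contrasts the seek-then-read pattern with a positional FileChannel read, which needs no shared file pointer:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class PositionalReadExample {
  public static void main(String[] args) throws IOException {
    RandomAccessFile raf = new RandomAccessFile("/path/to/index/_0.cfs", "r"); // placeholder path
    FileChannel channel = raf.getChannel();

    // RandomAccessFile style: two calls sharing one file pointer,
    // so concurrent readers must synchronize around seek() + read().
    byte[] bytes = new byte[1024];
    raf.seek(4096L);
    raf.read(bytes);

    // FileChannel style: the offset is an argument, so threads can read
    // at different positions without coordinating a shared file pointer.
    ByteBuffer buf = ByteBuffer.allocate(1024);
    int n = channel.read(buf, 4096L);
    System.out.println("read " + n + " bytes at offset 4096");

    channel.close();
    raf.close();
  }
}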

Solr now uses the non-synchronizing NIO method of reading index files (via Lucene’s NIOFSDirectory) by default if you are on a non-Windows platform. Windows systems default to the older method, since it turns out to be faster there than the NIO method; the reason is a long-standing “bug” in Java that still synchronizes internally even when using FileChannel.read().

Non blocking caches

Solr’s standard LRU cache implementation uses a synchronized LinkedHashMap. A single cache could be checked hundreds or thousands of times during the course of a single request that involves faceting. A non-blocking ConcurrentLRUCache was developed as an alternative implementation, and is now the default for Solr’s filter cache. One user indicated that this has doubled their query throughput under ideal circumstances.
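
The non-blocking implementation is exposed as solr.FastLRUCache and can also be selected explicitly in solrconfig.xml; a sketch (the sizes shown are just illustrative):

<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>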

Where to find this scalability goodness?

Solr 1.3 has read-only IndexReaders, but for the other scalability improvements, including the improved faceting, you’ll have to grab a nightly Solr build.





Solr Faceted Search Performance Improvements

25 11 2008

Having performance issues with Solr’s faceted search and certain types of fields?  Help has arrived in the form of a new Solr faceting algorithm!  This new faceting implementation dramatically improves the performance of faceted search, making it suitable for a much wider range of applications.

The existing multivalued field faceting algorithm (where each document may have multiple values) steps over each term in the index for that field.  For each term, the set of documents that match that term is retrieved from the filterCache, and an intersection count is calculated with the set of documents that match the query.  This works well for fields with a limited number of terms (less than 1000), but not so great for fields with many terms.

The new method works by un-inverting the indexed field to be faceted, allowing quick lookup of the terms in the field for any given document. It’s actually a hybrid approach: to save memory and increase speed, terms that appear in many documents (over 5%) are not un-inverted; instead, the traditional set intersection logic is used to get their counts.
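
A faceting request itself is unchanged; the new algorithm is picked up automatically for multi-valued fields. As a sketch (the cat field is from Solr’s example schema):
…&q=*:*&facet=true&facet.field=cat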

Results: up to 5000% increase in queries per second and up to 700% improvement in memory utilization.

More gory details and detailed benchmark results can be found at http://issues.apache.org/jira/browse/SOLR-475

Try it now with a Solr nightly/test development build dated 11/25/2008 or later.





Distributed Search for Solr

27 02 2008

A new chapter in Solr scalability has been opened with the addition of distributed search!

http://wiki.apache.org/solr/DistributedSearch

Distributed Search splits an index into multiple shards, and queries across all the shards, combining the results and presenting a single merged response that looks like it came from a single server.

Solr’s current implementation uses SolrJ (the solr java client) to talk to other Solr servers via HTTP, in two main phases. The first phase collects matching document ids and scores, as well as doing any requested faceting. The second phase retrieves the stored fields for selected documents, does highlighting, and may include additional faceting requests to nail down exact facet counts.
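
A sketch of a distributed request (hosts and ports are illustrative) uses the shards parameter to list the servers to query across:
http://localhost:8983/solr/select?shards=localhost:8983/solr,localhost:7574/solr&q=ipod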







