more progress

git-svn-id: https://svn.apache.org/repos/asf/lucene/dev/branches/lucene4258@1498804 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/lucene/CHANGES.txt b/lucene/CHANGES.txt
index a40ba38..769d122 100644
--- a/lucene/CHANGES.txt
+++ b/lucene/CHANGES.txt
@@ -76,41 +76,37 @@
 * LUCENE-3907: EdgeNGramTokenFilter does not support backward grams and does
   not update offsets anymore. (Adrien Grand)
 
+* LUCENE-4981: PositionFilter is now deprecated as it can corrupt token stream
+  graphs. Since its main use case was to make query parsers generate boolean
+  queries instead of phrase queries, it is now advised to use
+  QueryParser.setAutoGeneratePhraseQueries(false) (for simple cases) or to
+  override QueryParser.newFieldQuery. (Adrien Grand, Steve Rowe)
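A minimal sketch of the migration the LUCENE-4981 entry above recommends, assuming the classic QueryParser from the queryparser module and an invented field name; PositionFilter is no longer needed just to get boolean queries instead of phrase queries:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class BooleanInsteadOfPhrase {
  public static void main(String[] args) throws Exception {
    QueryParser parser = new QueryParser(Version.LUCENE_44, "body",
        new StandardAnalyzer(Version.LUCENE_44));
    // Instead of wrapping the analyzer chain in PositionFilter, tell the parser
    // not to turn multi-token terms into phrase queries.
    parser.setAutoGeneratePhraseQueries(false);
    Query q = parser.parse("wi-fi");  // "wi" and "fi" now become a boolean query
    System.out.println(q);
  }
}
```

For more involved cases (per-field behavior, custom scoring), overriding QueryParser.newFieldQuery remains the suggested hook.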
+
+* LUCENE-5018: CompoundWordTokenFilterBase and its children
+  DictionaryCompoundWordTokenFilter and HyphenationCompoundWordTokenFilter don't
+  update offsets anymore. (Adrien Grand)
+
+* LUCENE-5015: SamplingAccumulator no longer corrects the counts of the sampled 
+  categories. You should set TakmiSampleFixer on SamplingParams if required (but 
+  note that this makes search slower). (Rob Audenaerde, Gilad Barkai, Shai Erera)
+
+* LUCENE-4933: Replace ExactSimScorer/SloppySimScorer with just SimScorer. Previously
+  there were 2 implementations as a performance hack to support tableization of
+  sqrt(), but this caching is removed, as sqrt is implemented in hardware with modern 
+  JVMs and it's faster not to cache.  (Robert Muir)
+
 Bug Fixes
 
 * LUCENE-4997: Internal test framework's tests are sensitive to previous 
   test failures and tests.failfast. (Dawid Weiss, Shai Erera)
 
-* LUCENE-4935: CustomScoreQuery wrongly applied its query boost twice 
-  (boost^2).  (Robert Muir)
-
-* LUCENE-4948: Fixed ArrayIndexOutOfBoundsException in PostingsHighlighter
-  if you had a 64-bit JVM without compressed OOPS: IBM J9, or Oracle with
-  large heap/explicitly disabled.  (Mike McCandless, Uwe Schindler, Robert Muir)
-
 * LUCENE-4955: NGramTokenizer now supports inputs larger than 1024 chars.
   (Adrien Grand)
 
-* LUCENE-4953: Fixed ParallelCompositeReader to inform ReaderClosedListeners of
-  its synthetic subreaders. FieldCaches keyed on the atomic childs will be purged
-  earlier and FC insanity prevented.  In addition, ParallelCompositeReader's
-  toString() was changed to better reflect the reader structure.
-  (Mike McCandless, Uwe Schindler)
-
 * LUCENE-4959: Fix incorrect return value in
   SimpleNaiveBayesClassifier.assignClass. (Alexey Kutin via Adrien Grand)
 
-* LUCENE-4968: Fixed ToParentBlockJoinQuery/Collector: correctly handle parent
-  hits that had no child matches, don't throw IllegalArgumentEx when
-  the child query has no hits, more aggressively catch cases where childQuery
-  incorrectly matches parent documents (Mike McCandless)
-
-* LUCENE-4970: Fix boost value of rewritten NGramPhraseQuery.
-  (Shingo Sasaki via Adrien Grand)
-
-* LUCENE-4974: CommitIndexTask was broken if no params were set. (Shai Erera)
-
-* LUCENE-4972: DirectoryTaxonomyWriter created empty commits even if no changes 
+* LUCENE-4972: DirectoryTaxonomyWriter created empty commits even if no changes
   were made. (Shai Erera, Michael McCandless)
   
 * LUCENE-949: AnalyzingQueryParser can't work with leading wildcards.
@@ -120,28 +116,34 @@
   non-RangeFacetRequest when using DrillSideways.  (Mike McCandless,
   Shai Erera)
 
-* LUCENE-4986: Fixed case where a newly opened near-real-time reader
-  fails to reflect a delete from IndexWriter.tryDeleteDocument (Reg,
-  Mike McCandless)
-  
-* LUCENE-4994: Fix PatternKeywordMarkerFilter to have public constructor.
-  (Uwe Schindler)
-  
-* LUCENE-4993: Fix BeiderMorseFilter to preserve custom attributes when
-  inserting tokens with position increment 0.  (Uwe Schindler)
-
 * LUCENE-4996: Ensure DocInverterPerField always includes field name
   in exception messages.  (Markus Jelsma via Robert Muir)
 
-* LUCENE-4991: Fix handling of synonyms in classic QueryParser.getFieldQuery for 
-  terms not separated by whitespace. PositionIncrementAttribute was ignored, so with 
-  default AND synonyms wrongly became mandatory clauses, and with OR, the 
-  coordination factor was wrong.  (李威, Robert Muir)
-  
-Optimizations
+* LUCENE-4992: Fix constructor of CustomScoreQuery to take FunctionQuery
+  for scoringQueries. Instead use QueryValueSource to safely wrap arbitrary 
+  queries and use them with CustomScoreQuery.  (John Wang, Robert Muir)
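As a hedged illustration of the LUCENE-4992 entry above (not taken from this patch), an arbitrary query can be wrapped in QueryValueSource and exposed as a FunctionQuery before being handed to CustomScoreQuery; the field and term names below are invented:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.CustomScoreQuery;
import org.apache.lucene.queries.function.FunctionQuery;
import org.apache.lucene.queries.function.valuesource.QueryValueSource;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class CustomScoreSketch {
  public static void main(String[] args) {
    Query main = new TermQuery(new Term("body", "lucene"));
    // An arbitrary query is no longer accepted directly as a scoring query;
    // wrap it in a QueryValueSource and expose it as a FunctionQuery instead.
    Query arbitrary = new TermQuery(new Term("title", "apache"));
    FunctionQuery scoring = new FunctionQuery(new QueryValueSource(arbitrary, 0f));
    CustomScoreQuery q = new CustomScoreQuery(main, scoring);
    System.out.println(q);
  }
}
```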
 
-* LUCENE-4938: Don't use an unnecessarily large priority queue in IndexSearcher
-  methods that take top-N.  (Uwe Schindler, Mike McCandless, Robert Muir)
+* LUCENE-5016: SamplingAccumulator returned an inconsistent label if asked to
+  aggregate a non-existing category. Also fixed a bug in RangeAccumulator if
+  some readers did not have the requested numeric DV field.
+  (Rob Audenaerde, Shai Erera)
+
+* LUCENE-5028: Remove pointless and confusing doShare option in FST's
+  PositiveIntOutputs (Han Jiang via Mike McCandless)
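A small illustrative sketch, not part of this patch, of the post-LUCENE-5028 call: PositiveIntOutputs.getSingleton() no longer takes a doShare flag, as the updated call sites further down in this diff show. The keys and output values here are made up:

```java
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.IntsRef;
import org.apache.lucene.util.fst.Builder;
import org.apache.lucene.util.fst.FST;
import org.apache.lucene.util.fst.PositiveIntOutputs;
import org.apache.lucene.util.fst.Util;

public class FstOutputsSketch {
  public static void main(String[] args) throws Exception {
    PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();  // no doShare flag
    Builder<Long> builder = new Builder<Long>(FST.INPUT_TYPE.BYTE1, outputs);
    IntsRef scratch = new IntsRef();
    // inputs must be added in sorted order
    builder.add(Util.toIntsRef(new BytesRef("cat"), scratch), 5L);
    builder.add(Util.toIntsRef(new BytesRef("dog"), scratch), 7L);
    FST<Long> fst = builder.finish();
    System.out.println(Util.get(fst, new BytesRef("dog")));  // 7
  }
}
```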
+
+* LUCENE-5032: Fix IndexOutOfBoundsExc in PostingsHighlighter when
+  multi-valued fields exceed maxLength (Tomás Fernández Löbbe
+  via Mike McCandless)
+
+* LUCENE-4933: SweetSpotSimilarity didn't apply its tf function to some
+  queries (SloppyPhraseQuery, SpanQueries).  (Robert Muir)
+
+* LUCENE-5033: SlowFuzzyQuery was accepting too many terms (documents) when the
+  provided minSimilarity is an int > 1 (Tim Allison via Mike McCandless)
+
+* LUCENE-5045: DrillSideways.search did not work on an empty index. (Shai Erera)
+
+Optimizations
 
 * LUCENE-4936: Improve numeric doc values compression in case all values share
   a common divisor. In particular, this improves the compression ratio of dates
@@ -155,6 +157,12 @@
   single snapshots_N file, and no longer requires closing (Mike
   McCandless, Shai Erera)
 
+* LUCENE-5035: Compress addresses in FieldCacheImpl.SortedDocValuesImpl more
+  efficiently. (Adrien Grand, Robert Muir)
+
+* LUCENE-4941: Sort "from" terms only once when using JoinUtil.
+  (Martijn van Groningen)
+
 New Features
 
 * LUCENE-4766: Added a PatternCaptureGroupTokenFilter that uses Java regexes to 
@@ -184,17 +192,88 @@
 * LUCENE-4975: Added a new Replicator module which can replicate index 
   revisions between server and client. (Shai Erera, Mike McCandless)
 
+* LUCENE-5022: Added FacetResult.mergeHierarchies to merge multiple
+  FacetResult of the same dimension into a single one with the reconstructed
+  hierarchy. (Shai Erera)
+
+* LUCENE-5026: Added PagedGrowableWriter, a new internal packed-ints structure
+  that grows the number of bits per value on demand, can store more than 2B
+  values and supports random write and read access. (Adrien Grand)
+
+* LUCENE-5025: FST's Builder can now handle more than 2.1 billion
+  "tail nodes" while building a minimal FST.  (Aaron Binns, Adrien
+  Grand, Mike McCandless)
+  
 Build
 
 * LUCENE-4987: Upgrade randomized testing to version 2.0.10: 
   Test framework may fail internally due to overly aggresive J9 optimizations. 
   (Dawid Weiss, Shai Erera)
 
+* LUCENE-5043: The eclipse target now uses the containing directory for the
+  project name.  This also enforces UTF-8 encoding when files are copied with
+  filtering.
+
+Tests
+
+* LUCENE-4901: TestIndexWriterOnJRECrash should work on any 
+  JRE vendor via Runtime.halt().
+  (Mike McCandless, Robert Muir, Uwe Schindler, Rodrigo Trujillo, Dawid Weiss)
 
 ======================= Lucene 4.3.1 =======================
 
 Bug Fixes
 
+* SOLR-4813: Fix SynonymFilterFactory to allow init parameters for
+  tokenizer factory used when parsing synonyms file.  (Shingo Sasaki, hossman)
+
+* LUCENE-4935: CustomScoreQuery wrongly applied its query boost twice
+  (boost^2).  (Robert Muir)
+
+* LUCENE-4948: Fixed ArrayIndexOutOfBoundsException in PostingsHighlighter
+  if you had a 64-bit JVM without compressed OOPS: IBM J9, or Oracle with
+  large heap/explicitly disabled.  (Mike McCandless, Uwe Schindler, Robert Muir)
+
+* LUCENE-4953: Fixed ParallelCompositeReader to inform ReaderClosedListeners of
+  its synthetic subreaders. FieldCaches keyed on the atomic children will be purged
+  earlier and FC insanity prevented.  In addition, ParallelCompositeReader's
+  toString() was changed to better reflect the reader structure.
+  (Mike McCandless, Uwe Schindler)
+
+* LUCENE-4968: Fixed ToParentBlockJoinQuery/Collector: correctly handle parent
+  hits that had no child matches, don't throw IllegalArgumentEx when
+  the child query has no hits, more aggressively catch cases where childQuery
+  incorrectly matches parent documents (Mike McCandless)
+
+* LUCENE-4970: Fix boost value of rewritten NGramPhraseQuery.
+  (Shingo Sasaki via Adrien Grand)
+
+* LUCENE-4974: CommitIndexTask was broken if no params were set. (Shai Erera)
+
+* LUCENE-4986: Fixed case where a newly opened near-real-time reader
+  fails to reflect a delete from IndexWriter.tryDeleteDocument (Reg,
+  Mike McCandless)
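A hedged sketch, not from this patch, of the scenario LUCENE-4986 fixes, with an invented single-document index: a delete issued through IndexWriter.tryDeleteDocument should now be visible after reopening the near-real-time reader:

```java
import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class TryDeleteNrtSketch {
  public static void main(String[] args) throws Exception {
    Directory dir = new RAMDirectory();
    IndexWriter writer = new IndexWriter(dir,
        new IndexWriterConfig(Version.LUCENE_44, new WhitespaceAnalyzer(Version.LUCENE_44)));
    Document doc = new Document();
    doc.add(new StringField("id", "1", Field.Store.NO));
    writer.addDocument(doc);

    DirectoryReader reader = DirectoryReader.open(writer, true);  // NRT reader
    writer.tryDeleteDocument(reader, 0);                          // delete by docID
    DirectoryReader reopened = DirectoryReader.openIfChanged(reader, writer, true);
    DirectoryReader current = reopened != null ? reopened : reader;
    System.out.println("numDocs=" + current.numDocs());           // expected: 0 with the fix
    reader.close();
    if (reopened != null) reopened.close();
    writer.close();
    dir.close();
  }
}
```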
+
+* LUCENE-4994: Fix PatternKeywordMarkerFilter to have public constructor.
+  (Uwe Schindler)
+
+* LUCENE-4993: Fix BeiderMorseFilter to preserve custom attributes when
+  inserting tokens with position increment 0.  (Uwe Schindler)
+
+* LUCENE-4991: Fix handling of synonyms in classic QueryParser.getFieldQuery for
+  terms not separated by whitespace. PositionIncrementAttribute was ignored, so with
+  default AND synonyms wrongly became mandatory clauses, and with OR, the
+  coordination factor was wrong.  (李威, Robert Muir)
+
+* LUCENE-5002: IndexWriter#deleteAll() caused a deadlock in DWPT / DWSC if a
+  DWPT was flushing concurrently while deleteAll() aborted all DWPTs. The IW
+  should never wait on DWPT via the flush control while holding on to the IW
+  Lock. (Simon Willnauer)
+
+Optimizations
+
+* LUCENE-4938: Don't use an unnecessarily large priority queue in IndexSearcher
+  methods that take top-N.  (Uwe Schindler, Mike McCandless, Robert Muir)
 
 
 ======================= Lucene 4.3.0 =======================
diff --git a/lucene/NOTICE.txt b/lucene/NOTICE.txt
index 0cf70cb..9f483b9 100644
--- a/lucene/NOTICE.txt
+++ b/lucene/NOTICE.txt
@@ -104,8 +104,8 @@
 Morfologic includes data from BSD-licensed dictionary of Polish (SGJP)
 (http://sgjp.pl/morfeusz/)
 
-Servlet-api.jar is under the CDDL license, the original source
-code for this can be found at http://www.eclipse.org/jetty/downloads.php
+Servlet-api.jar and javax.servlet-*.jar are under the CDDL license; the original
+source code for these can be found at http://www.eclipse.org/jetty/downloads.php
 
 ===========================================================================
 Kuromoji Japanese Morphological Analyzer - Apache Lucene Integration
diff --git a/lucene/analysis/common/src/java/org/apache/lucene/analysis/compound/CompoundWordTokenFilterBase.java b/lucene/analysis/common/src/java/org/apache/lucene/analysis/compound/CompoundWordTokenFilterBase.java
index e73f554..d85b64a 100644
--- a/lucene/analysis/common/src/java/org/apache/lucene/analysis/compound/CompoundWordTokenFilterBase.java
+++ b/lucene/analysis/common/src/java/org/apache/lucene/analysis/compound/CompoundWordTokenFilterBase.java
@@ -19,7 +19,6 @@
 
 import java.io.IOException;
 import java.util.LinkedList;
-import java.util.Set;
 
 import org.apache.lucene.analysis.TokenFilter;
 import org.apache.lucene.analysis.TokenStream;
@@ -41,6 +40,7 @@
  * <li>As of 3.1, CompoundWordTokenFilterBase correctly handles Unicode 4.0
  * supplementary characters in strings and char arrays provided as compound word
  * dictionaries.
+ * <li>As of 4.4, {@link CompoundWordTokenFilterBase} doesn't update offsets.
  * </ul>
  */
 public abstract class CompoundWordTokenFilterBase extends TokenFilter {
@@ -58,7 +58,8 @@
    * The default for maximal length of subwords that get propagated to the output of this filter
    */
   public static final int DEFAULT_MAX_SUBWORD_SIZE = 15;
-  
+
+  protected final Version matchVersion;
   protected final CharArraySet dictionary;
   protected final LinkedList<CompoundToken> tokens;
   protected final int minWordSize;
@@ -82,7 +83,7 @@
 
   protected CompoundWordTokenFilterBase(Version matchVersion, TokenStream input, CharArraySet dictionary, int minWordSize, int minSubwordSize, int maxSubwordSize, boolean onlyLongestMatch) {
     super(input);
-    
+    this.matchVersion = matchVersion;
     this.tokens=new LinkedList<CompoundToken>();
     if (minWordSize < 0) {
       throw new IllegalArgumentException("minWordSize cannot be negative");
@@ -156,7 +157,8 @@
       int startOff = CompoundWordTokenFilterBase.this.offsetAtt.startOffset();
       int endOff = CompoundWordTokenFilterBase.this.offsetAtt.endOffset();
       
-      if (endOff - startOff != CompoundWordTokenFilterBase.this.termAtt.length()) {
+      if (matchVersion.onOrAfter(Version.LUCENE_44) ||
+          endOff - startOff != CompoundWordTokenFilterBase.this.termAtt.length()) {
         // if length by start + end offsets doesn't match the term text then assume
         // this is a synonym and don't adjust the offsets.
         this.startOffset = startOff;
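A hedged sketch, not part of this patch, of the behavior the LUCENE_44 check above enables: with a 4.4 match version every decompounded subword reports the offsets of the original token, matching the updated expectations in TestCompoundWordTokenFilter later in this diff. The dictionary and input are invented:

```java
import java.io.StringReader;
import java.util.Arrays;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.analysis.util.CharArraySet;
import org.apache.lucene.util.Version;

public class CompoundOffsetsSketch {
  public static void main(String[] args) throws Exception {
    CharArraySet dict = new CharArraySet(Version.LUCENE_44, Arrays.asList("ab", "cd", "ef"), true);
    TokenStream ts = new WhitespaceTokenizer(Version.LUCENE_44, new StringReader("abcdef"));
    ts = new DictionaryCompoundWordTokenFilter(Version.LUCENE_44, ts, dict);
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    OffsetAttribute offsets = ts.addAttribute(OffsetAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      // expected: abcdef, ab, cd, ef -- all reported with offsets 0-6
      System.out.println(term + " " + offsets.startOffset() + "-" + offsets.endOffset());
    }
    ts.end();
    ts.close();
  }
}
```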
diff --git a/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java b/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
index 10aaf16..7ef82ad 100644
--- a/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
+++ b/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
@@ -24,6 +24,7 @@
 import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
 import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
 import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
+import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute;
 import org.apache.lucene.util.Version;
 
 /**
@@ -43,11 +44,12 @@
   private int tokStart;
   private int tokEnd; // only used if the length changed before this filter
   private int savePosIncr;
-  private boolean isFirstToken = true;
+  private int savePosLen;
   
   private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
   private final OffsetAttribute offsetAtt = addAttribute(OffsetAttribute.class);
   private final PositionIncrementAttribute posIncrAtt = addAttribute(PositionIncrementAttribute.class);
+  private final PositionLengthAttribute posLenAtt = addAttribute(PositionLengthAttribute.class);
 
   /**
    * Creates EdgeNGramTokenFilter that can generate n-grams in the sizes of the given range
@@ -88,7 +90,8 @@
           curGramSize = minGram;
           tokStart = offsetAtt.startOffset();
           tokEnd = offsetAtt.endOffset();
-          savePosIncr = posIncrAtt.getPositionIncrement();
+          savePosIncr += posIncrAtt.getPositionIncrement();
+          savePosLen = posLenAtt.getPositionLength();
         }
       }
       if (curGramSize <= maxGram) {         // if we have hit the end of our n-gram size range, quit
@@ -98,16 +101,14 @@
           offsetAtt.setOffset(tokStart, tokEnd);
           // first ngram gets increment, others don't
           if (curGramSize == minGram) {
-            //  Leave the first token position increment at the cleared-attribute value of 1
-            if ( ! isFirstToken) {
-              posIncrAtt.setPositionIncrement(savePosIncr);
-            }
+            posIncrAtt.setPositionIncrement(savePosIncr);
+            savePosIncr = 0;
           } else {
             posIncrAtt.setPositionIncrement(0);
           }
+          posLenAtt.setPositionLength(savePosLen);
           termAtt.copyBuffer(curTermBuffer, 0, curGramSize);
           curGramSize++;
-          isFirstToken = false;
           return true;
         }
       }
@@ -119,6 +120,6 @@
   public void reset() throws IOException {
     super.reset();
     curTermBuffer = null;
-    isFirstToken = true;
+    savePosIncr = 0;
   }
 }
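A small sketch, not from this patch, of what the accumulated savePosIncr above changes: when an input token is skipped because it is shorter than minGram, its position increment is now carried over to the first gram emitted for the next token. The input text and gram sizes are invented:

```java
import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.util.Version;

public class EdgeNGramPositionsSketch {
  public static void main(String[] args) throws Exception {
    TokenStream ts = new WhitespaceTokenizer(Version.LUCENE_44, new StringReader("a abc"));
    ts = new EdgeNGramTokenFilter(Version.LUCENE_44, ts, 2, 3);
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute posIncr = ts.addAttribute(PositionIncrementAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      // "a" is shorter than minGram and emits no grams, but its increment is not lost:
      // expected output: ab (posIncr=2), abc (posIncr=0)
      System.out.println(term + " (posIncr=" + posIncr.getPositionIncrement() + ")");
    }
    ts.end();
    ts.close();
  }
}
```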
diff --git a/lucene/analysis/common/src/java/org/apache/lucene/analysis/synonym/SynonymFilterFactory.java b/lucene/analysis/common/src/java/org/apache/lucene/analysis/synonym/SynonymFilterFactory.java
index a4bbe58..c06ba32 100644
--- a/lucene/analysis/common/src/java/org/apache/lucene/analysis/synonym/SynonymFilterFactory.java
+++ b/lucene/analysis/common/src/java/org/apache/lucene/analysis/synonym/SynonymFilterFactory.java
@@ -26,6 +26,7 @@
 import java.nio.charset.CodingErrorAction;
 import java.text.ParseException;
 import java.util.HashMap;
+import java.util.Iterator;
 import java.util.List;
 import java.util.Map;
 
@@ -48,9 +49,18 @@
  *     &lt;tokenizer class="solr.WhitespaceTokenizerFactory"/&gt;
  *     &lt;filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" 
  *             format="solr" ignoreCase="false" expand="true" 
- *             tokenizerFactory="solr.WhitespaceTokenizerFactory"/&gt;
+ *             tokenizerFactory="solr.WhitespaceTokenizerFactory"
+ *             [optional tokenizer factory parameters]/&gt;
  *   &lt;/analyzer&gt;
  * &lt;/fieldType&gt;</pre>
+ * 
+ * <p>
+ * An optional param name prefix of "tokenizerFactory." may be used for any 
+ * init params that the SynonymFilterFactory needs to pass to the specified 
+ * TokenizerFactory.  If the TokenizerFactory expects an init parameter with
+ * the same name as an init param used by the SynonymFilterFactory, the prefix 
+ * is mandatory.
+ * </p>
  */
 public class SynonymFilterFactory extends TokenFilterFactory implements ResourceLoaderAware {
   private final boolean ignoreCase;
@@ -58,19 +68,27 @@
   private final String synonyms;
   private final String format;
   private final boolean expand;
+  private final Map<String, String> tokArgs = new HashMap<String, String>();
 
   private SynonymMap map;
   
   public SynonymFilterFactory(Map<String,String> args) {
     super(args);
     ignoreCase = getBoolean(args, "ignoreCase", false);
-    tokenizerFactory = get(args, "tokenizerFactory");
-    if (tokenizerFactory != null) {
-      assureMatchVersion();
-    }
     synonyms = require(args, "synonyms");
     format = get(args, "format");
     expand = getBoolean(args, "expand", true);
+
+    tokenizerFactory = get(args, "tokenizerFactory");
+    if (tokenizerFactory != null) {
+      assureMatchVersion();
+      tokArgs.put("luceneMatchVersion", getLuceneMatchVersion().toString());
+      for (Iterator<String> itr = args.keySet().iterator(); itr.hasNext();) {
+        String key = itr.next();
+        tokArgs.put(key.replaceAll("^tokenizerFactory\\.",""), args.get(key));
+        itr.remove();
+      }
+    }
     if (!args.isEmpty()) {
       throw new IllegalArgumentException("Unknown parameters: " + args);
     }
@@ -159,11 +177,9 @@
   
   // (there are no tests for this functionality)
   private TokenizerFactory loadTokenizerFactory(ResourceLoader loader, String cname) throws IOException {
-    Map<String,String> args = new HashMap<String,String>();
-    args.put("luceneMatchVersion", getLuceneMatchVersion().toString());
     Class<? extends TokenizerFactory> clazz = loader.findClass(cname, TokenizerFactory.class);
     try {
-      TokenizerFactory tokFactory = clazz.getConstructor(Map.class).newInstance(args);
+      TokenizerFactory tokFactory = clazz.getConstructor(Map.class).newInstance(tokArgs);
       if (tokFactory instanceof ResourceLoaderAware) {
         ((ResourceLoaderAware) tokFactory).inform(loader);
       }
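A hedged sketch, not part of this patch, of how the new "tokenizerFactory." prefix handling above could be exercised programmatically: remaining args are swept into tokArgs with the prefix stripped and passed to the nested TokenizerFactory. The file name and values are placeholders, and inform() would still be required to actually load the synonyms:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.analysis.pattern.PatternTokenizerFactory;
import org.apache.lucene.analysis.synonym.SynonymFilterFactory;

public class SynonymFactoryArgsSketch {
  public static void main(String[] args) {
    Map<String, String> params = new HashMap<String, String>();
    params.put("luceneMatchVersion", "4.4");
    params.put("synonyms", "synonyms.txt");  // placeholder file name
    params.put("tokenizerFactory", PatternTokenizerFactory.class.getName());
    // prefixed params lose the "tokenizerFactory." prefix and reach the nested factory
    params.put("tokenizerFactory.pattern", "(.*)");
    params.put("tokenizerFactory.group", "0");
    SynonymFilterFactory factory = new SynonymFilterFactory(params);
    System.out.println(factory);
    // in real use, factory.inform(resourceLoader) would follow to load the synonyms file
  }
}
```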
diff --git a/lucene/analysis/common/src/resources/META-INF/services/org.apache.lucene.analysis.util.TokenFilterFactory b/lucene/analysis/common/src/resources/META-INF/services/org.apache.lucene.analysis.util.TokenFilterFactory
index 21d6db8..3497e30 100644
--- a/lucene/analysis/common/src/resources/META-INF/services/org.apache.lucene.analysis.util.TokenFilterFactory
+++ b/lucene/analysis/common/src/resources/META-INF/services/org.apache.lucene.analysis.util.TokenFilterFactory
@@ -76,7 +76,6 @@
 org.apache.lucene.analysis.payloads.NumericPayloadTokenFilterFactory
 org.apache.lucene.analysis.payloads.TokenOffsetPayloadTokenFilterFactory
 org.apache.lucene.analysis.payloads.TypeAsPayloadTokenFilterFactory
-org.apache.lucene.analysis.position.PositionFilterFactory
 org.apache.lucene.analysis.pt.PortugueseLightStemFilterFactory
 org.apache.lucene.analysis.pt.PortugueseMinimalStemFilterFactory
 org.apache.lucene.analysis.pt.PortugueseStemFilterFactory
diff --git a/lucene/analysis/common/src/test/org/apache/lucene/analysis/compound/TestCompoundWordTokenFilter.java b/lucene/analysis/common/src/test/org/apache/lucene/analysis/compound/TestCompoundWordTokenFilter.java
index d3ee9a9..5d6b23b 100644
--- a/lucene/analysis/common/src/test/org/apache/lucene/analysis/compound/TestCompoundWordTokenFilter.java
+++ b/lucene/analysis/common/src/test/org/apache/lucene/analysis/compound/TestCompoundWordTokenFilter.java
@@ -151,12 +151,12 @@
         "fiol", "fodral", "Basfiolsfodralmakaregesäll", "Bas", "fiol",
         "fodral", "makare", "gesäll", "Skomakare", "Sko", "makare",
         "Vindrutetorkare", "Vind", "rute", "torkare", "Vindrutetorkarblad",
-        "Vind", "rute", "blad", "abba" }, new int[] { 0, 0, 3, 8, 8, 11, 17,
-        17, 20, 24, 24, 28, 33, 33, 39, 44, 44, 49, 54, 54, 58, 62, 69, 69, 72,
-        77, 84, 84, 87, 92, 98, 104, 111, 111, 114, 121, 121, 125, 129, 137,
-        137, 141, 151, 156 }, new int[] { 7, 3, 7, 16, 11, 16, 23, 20, 23, 32,
-        28, 32, 43, 39, 43, 53, 49, 53, 68, 58, 62, 68, 83, 72, 76, 83, 110,
-        87, 91, 98, 104, 110, 120, 114, 120, 136, 125, 129, 136, 155, 141, 145,
+        "Vind", "rute", "blad", "abba" }, new int[] { 0, 0, 0, 8, 8, 8, 17,
+        17, 17, 24, 24, 24, 33, 33, 33, 44, 44, 44, 54, 54, 54, 54, 69, 69, 69,
+        69, 84, 84, 84, 84, 84, 84, 111, 111, 111, 121, 121, 121, 121, 137,
+        137, 137, 137, 156 }, new int[] { 7, 7, 7, 16, 16, 16, 23, 23, 23, 32,
+        32, 32, 43, 43, 43, 53, 53, 53, 68, 68, 68, 68, 83, 83, 83, 83, 110,
+        110, 110, 110, 110, 110, 120, 120, 120, 136, 136, 136, 136, 155, 155, 155,
         155, 160 }, new int[] { 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1,
         0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1,
         0, 0, 0, 1 });
@@ -174,8 +174,8 @@
         CompoundWordTokenFilterBase.DEFAULT_MAX_SUBWORD_SIZE, true);
 
     assertTokenStreamContents(tf, new String[] { "Basfiolsfodralmakaregesäll", "Bas",
-        "fiolsfodral", "fodral", "makare", "gesäll" }, new int[] { 0, 0, 3, 8,
-        14, 20 }, new int[] { 26, 3, 14, 14, 20, 26 }, new int[] { 1, 0, 0, 0,
+        "fiolsfodral", "fodral", "makare", "gesäll" }, new int[] { 0, 0, 0, 0,
+        0, 0 }, new int[] { 26, 26, 26, 26, 26, 26 }, new int[] { 1, 0, 0, 0,
         0, 0 });
   }
 
@@ -194,8 +194,8 @@
 
     assertTokenStreamContents(tf,
       new String[] { "abcdef", "ab", "cd", "ef" },
-      new int[] { 0, 0, 2, 4},
-      new int[] { 6, 2, 4, 6},
+      new int[] { 0, 0, 0, 0},
+      new int[] { 6, 6, 6, 6},
       new int[] { 1, 0, 0, 0}
       );
   }
@@ -216,8 +216,8 @@
   // since "d" is shorter than the minimum subword size, it should not be added to the token stream
     assertTokenStreamContents(tf,
       new String[] { "abcdefg", "abc", "efg" },
-      new int[] { 0, 0, 4},
-      new int[] { 7, 3, 7},
+      new int[] { 0, 0, 0},
+      new int[] { 7, 7, 7},
       new int[] { 1, 0, 0}
       );
   }
diff --git a/lucene/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java b/lucene/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java
index 4baefbc..ccba681 100644
--- a/lucene/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java
+++ b/lucene/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java
@@ -59,29 +59,21 @@
 import org.apache.lucene.analysis.cjk.CJKBigramFilter;
 import org.apache.lucene.analysis.commongrams.CommonGramsFilter;
 import org.apache.lucene.analysis.commongrams.CommonGramsQueryFilter;
-import org.apache.lucene.analysis.compound.DictionaryCompoundWordTokenFilter;
 import org.apache.lucene.analysis.compound.HyphenationCompoundWordTokenFilter;
 import org.apache.lucene.analysis.compound.TestCompoundWordTokenFilter;
 import org.apache.lucene.analysis.compound.hyphenation.HyphenationTree;
 import org.apache.lucene.analysis.hunspell.HunspellDictionary;
 import org.apache.lucene.analysis.hunspell.HunspellDictionaryTest;
 import org.apache.lucene.analysis.miscellaneous.HyphenatedWordsFilter;
-import org.apache.lucene.analysis.miscellaneous.KeepWordFilter;
-import org.apache.lucene.analysis.miscellaneous.LengthFilter;
 import org.apache.lucene.analysis.miscellaneous.LimitTokenCountFilter;
 import org.apache.lucene.analysis.miscellaneous.LimitTokenPositionFilter;
 import org.apache.lucene.analysis.miscellaneous.StemmerOverrideFilter;
 import org.apache.lucene.analysis.miscellaneous.StemmerOverrideFilter.StemmerOverrideMap;
-import org.apache.lucene.analysis.miscellaneous.TrimFilter;
 import org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter;
-import org.apache.lucene.analysis.ngram.EdgeNGramTokenFilter;
-import org.apache.lucene.analysis.ngram.EdgeNGramTokenizer;
-import org.apache.lucene.analysis.ngram.Lucene43NGramTokenizer;
 import org.apache.lucene.analysis.path.PathHierarchyTokenizer;
 import org.apache.lucene.analysis.path.ReversePathHierarchyTokenizer;
 import org.apache.lucene.analysis.payloads.IdentityEncoder;
 import org.apache.lucene.analysis.payloads.PayloadEncoder;
-import org.apache.lucene.analysis.position.PositionFilter;
 import org.apache.lucene.analysis.snowball.TestSnowball;
 import org.apache.lucene.analysis.standard.StandardTokenizer;
 import org.apache.lucene.analysis.synonym.SynonymMap;
@@ -172,10 +164,6 @@
       for (Class<?> c : Arrays.<Class<?>>asList(
           ReversePathHierarchyTokenizer.class,
           PathHierarchyTokenizer.class,
-          HyphenationCompoundWordTokenFilter.class,
-          DictionaryCompoundWordTokenFilter.class,
-          // TODO: corrumpts graphs (offset consistency check):
-          PositionFilter.class,
           // TODO: it seems to mess up offsets!?
           WikipediaTokenizer.class,
           // TODO: doesn't handle graph inputs
diff --git a/lucene/analysis/common/src/test/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilterTest.java b/lucene/analysis/common/src/test/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilterTest.java
index 6139323..6b3d8c5 100644
--- a/lucene/analysis/common/src/test/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilterTest.java
+++ b/lucene/analysis/common/src/test/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilterTest.java
@@ -17,19 +17,24 @@
  * limitations under the License.
  */
 
-import org.apache.lucene.analysis.Analyzer;
-import org.apache.lucene.analysis.MockTokenizer;
-import org.apache.lucene.analysis.TokenStream;
-import org.apache.lucene.analysis.BaseTokenStreamTestCase;
-import org.apache.lucene.analysis.Tokenizer;
-import org.apache.lucene.analysis.core.KeywordTokenizer;
-import org.apache.lucene.analysis.core.WhitespaceTokenizer;
-import org.apache.lucene.analysis.position.PositionFilter;
-
+import java.io.IOException;
 import java.io.Reader;
 import java.io.StringReader;
 import java.util.Random;
 
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.analysis.BaseTokenStreamTestCase;
+import org.apache.lucene.analysis.MockTokenizer;
+import org.apache.lucene.analysis.TokenFilter;
+import org.apache.lucene.analysis.TokenStream;
+import org.apache.lucene.analysis.Tokenizer;
+import org.apache.lucene.analysis.core.KeywordTokenizer;
+import org.apache.lucene.analysis.core.LetterTokenizer;
+import org.apache.lucene.analysis.core.WhitespaceTokenizer;
+import org.apache.lucene.analysis.shingle.ShingleFilter;
+import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
+import org.apache.lucene.util.Version;
+
 /**
  * Tests {@link EdgeNGramTokenFilter} for correctness.
  */
@@ -101,9 +106,39 @@
                               false);
   }
 
+  private static class PositionFilter extends TokenFilter {
+    
+    private final PositionIncrementAttribute posIncrAtt = addAttribute(PositionIncrementAttribute.class);
+    private boolean started;
+    
+    PositionFilter(final TokenStream input) {
+      super(input);
+    }
+    
+    @Override
+    public final boolean incrementToken() throws IOException {
+      if (input.incrementToken()) {
+        if (started) {
+          posIncrAtt.setPositionIncrement(0);
+        } else {
+          started = true;
+        }
+        return true;
+      } else {
+        return false;
+      }
+    }
+    
+    @Override
+    public void reset() throws IOException {
+      super.reset();
+      started = false;
+    }
+  }
+
   public void testFirstTokenPositionIncrement() throws Exception {
     TokenStream ts = new MockTokenizer(new StringReader("a abc"), MockTokenizer.WHITESPACE, false);
-    ts = new PositionFilter(ts, 0); // All but first token will get 0 position increment
+    ts = new PositionFilter(ts); // All but first token will get 0 position increment
     EdgeNGramTokenFilter filter = new EdgeNGramTokenFilter(TEST_VERSION_CURRENT, ts, 2, 3);
     // The first token "a" will not be output, since it's smaller than the mingram size of 2.
     // The second token on input to EdgeNGramTokenFilter will have position increment of 0,
@@ -155,4 +190,19 @@
     };
     checkAnalysisConsistency(random, a, random.nextBoolean(), "");
   }
+
+  public void testGraphs() throws IOException {
+    TokenStream tk = new LetterTokenizer(Version.LUCENE_44, new StringReader("abc d efgh ij klmno p q"));
+    tk = new ShingleFilter(tk);
+    tk = new EdgeNGramTokenFilter(Version.LUCENE_44, tk, 7, 10);
+    tk.reset();
+    assertTokenStreamContents(tk,
+        new String[] { "efgh ij", "ij klmn", "ij klmno", "klmno p" },
+        new int[]    { 6,11,11,14 },
+        new int[]    { 13,19,19,21 },
+        new int[]    { 3,1,0,1 },
+        new int[]    { 2,2,2,2 },
+        23
+    );
+  }
 }
diff --git a/lucene/analysis/common/src/test/org/apache/lucene/analysis/synonym/TestSynonymFilterFactory.java b/lucene/analysis/common/src/test/org/apache/lucene/analysis/synonym/TestSynonymFilterFactory.java
index 0ec93bb..6cf3bc2 100644
--- a/lucene/analysis/common/src/test/org/apache/lucene/analysis/synonym/TestSynonymFilterFactory.java
+++ b/lucene/analysis/common/src/test/org/apache/lucene/analysis/synonym/TestSynonymFilterFactory.java
@@ -19,11 +19,15 @@
 
 import java.io.Reader;
 import java.io.StringReader;
+import java.util.HashMap;
+import java.util.Map;
 
 import org.apache.lucene.analysis.MockTokenizer;
 import org.apache.lucene.analysis.TokenStream;
-import org.apache.lucene.analysis.synonym.SynonymFilter;
+import org.apache.lucene.analysis.pattern.PatternTokenizerFactory;
+import org.apache.lucene.analysis.util.TokenFilterFactory;
 import org.apache.lucene.analysis.util.BaseTokenStreamFactoryTestCase;
+import org.apache.lucene.analysis.util.ClasspathResourceLoader;
 import org.apache.lucene.analysis.util.StringMockResourceLoader;
 
 public class TestSynonymFilterFactory extends BaseTokenStreamFactoryTestCase {
@@ -59,4 +63,53 @@
       assertTrue(expected.getMessage().contains("Unknown parameters"));
     }
   }
+
+  static final String TOK_SYN_ARG_VAL = "argument";
+  static final String TOK_FOO_ARG_VAL = "foofoofoo";
+
+  /** Test that we can parse TokenizerFactory's arguments */
+  public void testTokenizerFactoryArguments() throws Exception {
+    final String clazz = PatternTokenizerFactory.class.getName();
+    TokenFilterFactory factory = null;
+
+    // simple arg form
+    factory = tokenFilterFactory("Synonym", 
+        "synonyms", "synonyms.txt", 
+        "tokenizerFactory", clazz,
+        "pattern", "(.*)",
+        "group", "0");
+    assertNotNull(factory);
+    // prefix
+    factory = tokenFilterFactory("Synonym", 
+        "synonyms", "synonyms.txt", 
+        "tokenizerFactory", clazz,
+        "tokenizerFactory.pattern", "(.*)",
+        "tokenizerFactory.group", "0");
+    assertNotNull(factory);
+
+    // sanity check that sub-PatternTokenizerFactory fails w/o pattern
+    try {
+      factory = tokenFilterFactory("Synonym", 
+          "synonyms", "synonyms.txt", 
+          "tokenizerFactory", clazz);
+      fail("tokenizerFactory should have complained about missing pattern arg");
+    } catch (Exception expected) {
+      // :NOOP:
+    }
+
+    // sanity check that sub-PatternTokenizerFactory fails on an unexpected param
+    try {
+      factory = tokenFilterFactory("Synonym", 
+          "synonyms", "synonyms.txt", 
+          "tokenizerFactory", clazz,
+          "tokenizerFactory.pattern", "(.*)",
+          "tokenizerFactory.bogusbogusbogus", "bogus",
+          "tokenizerFactory.group", "0");
+      fail("tokenizerFactory should have complained about missing pattern arg");
+    } catch (Exception expected) {
+      // :NOOP:
+    }
+  }
 }
+
+
diff --git a/lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/dict/TokenInfoDictionary.java b/lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/dict/TokenInfoDictionary.java
index fd7a676..6edcf34 100644
--- a/lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/dict/TokenInfoDictionary.java
+++ b/lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/dict/TokenInfoDictionary.java
@@ -44,7 +44,7 @@
     try {
       is = getResource(FST_FILENAME_SUFFIX);
       is = new BufferedInputStream(is);
-      fst = new FST<Long>(new InputStreamDataInput(is), PositiveIntOutputs.getSingleton(true));
+      fst = new FST<Long>(new InputStreamDataInput(is), PositiveIntOutputs.getSingleton());
     } catch (IOException ioe) {
       priorE = ioe;
     } finally {
diff --git a/lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/dict/UserDictionary.java b/lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/dict/UserDictionary.java
index 3ff5e64..10df235 100644
--- a/lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/dict/UserDictionary.java
+++ b/lucene/analysis/kuromoji/src/java/org/apache/lucene/analysis/ja/dict/UserDictionary.java
@@ -88,7 +88,7 @@
     List<String> data = new ArrayList<String>(featureEntries.size());
     List<int[]> segmentations = new ArrayList<int[]>(featureEntries.size());
     
-    PositiveIntOutputs fstOutput = PositiveIntOutputs.getSingleton(true);
+    PositiveIntOutputs fstOutput = PositiveIntOutputs.getSingleton();
     Builder<Long> fstBuilder = new Builder<Long>(FST.INPUT_TYPE.BYTE2, fstOutput);
     IntsRef scratch = new IntsRef();
     long ord = 0;
diff --git a/lucene/analysis/kuromoji/src/tools/java/org/apache/lucene/analysis/ja/util/TokenInfoDictionaryBuilder.java b/lucene/analysis/kuromoji/src/tools/java/org/apache/lucene/analysis/ja/util/TokenInfoDictionaryBuilder.java
index bec0c87..253bc87 100644
--- a/lucene/analysis/kuromoji/src/tools/java/org/apache/lucene/analysis/ja/util/TokenInfoDictionaryBuilder.java
+++ b/lucene/analysis/kuromoji/src/tools/java/org/apache/lucene/analysis/ja/util/TokenInfoDictionaryBuilder.java
@@ -131,7 +131,7 @@
     
     System.out.println("  encode...");
 
-    PositiveIntOutputs fstOutput = PositiveIntOutputs.getSingleton(true);
+    PositiveIntOutputs fstOutput = PositiveIntOutputs.getSingleton();
     Builder<Long> fstBuilder = new Builder<Long>(FST.INPUT_TYPE.BYTE2, 0, 0, true, true, Integer.MAX_VALUE, fstOutput, null, true, PackedInts.DEFAULT, true, 15);
     IntsRef scratch = new IntsRef();
     long ord = -1; // first ord will be 0
diff --git a/lucene/build.xml b/lucene/build.xml
index 0c16b7f..fc73ced 100644
--- a/lucene/build.xml
+++ b/lucene/build.xml
@@ -41,7 +41,6 @@
               excludes="build/**,site/**,tools/**,**/lib/*servlet-api*.jar"
   />
 
-
   <!-- ================================================================== -->
   <!-- Prepares the build directory                                       -->
   <!-- ================================================================== -->
@@ -164,6 +163,7 @@
       <additional-filters>
         <replaceregex pattern="jetty([^/]+)$" replace="jetty" flags="gi" />
         <replaceregex pattern="slf4j-([^/]+)$" replace="slf4j" flags="gi" />
+        <replaceregex pattern="javax\.servlet([^/]+)$" replace="javax.servlet" flags="gi" />
         <replaceregex pattern="(bcmail|bcprov)-([^/]+)$" replace="\1" flags="gi" />
       </additional-filters>
     </license-check-macro>
@@ -302,7 +302,7 @@
     <check-missing-javadocs dir="build/docs/core/org/apache/lucene/codecs" level="method"/>
   </target>
   
-  <target name="-ecj-javadoc-lint" depends="compile,compile-test,-ecj-resolve">
+  <target name="-ecj-javadoc-lint" depends="compile,compile-test,-ecj-javadoc-lint-unsupported,-ecj-resolve" if="ecj-javadoc-lint.supported">
     <subant target="-ecj-javadoc-lint" failonerror="true" inheritall="false">
       <propertyset refid="uptodate.and.compiled.properties"/>
       <fileset dir="core" includes="build.xml"/>
diff --git a/lucene/codecs/src/java/org/apache/lucene/codecs/blockterms/VariableGapTermsIndexReader.java b/lucene/codecs/src/java/org/apache/lucene/codecs/blockterms/VariableGapTermsIndexReader.java
index 532c9e6..6975f26 100644
--- a/lucene/codecs/src/java/org/apache/lucene/codecs/blockterms/VariableGapTermsIndexReader.java
+++ b/lucene/codecs/src/java/org/apache/lucene/codecs/blockterms/VariableGapTermsIndexReader.java
@@ -44,7 +44,7 @@
  * @lucene.experimental */
 public class VariableGapTermsIndexReader extends TermsIndexReaderBase {
 
-  private final PositiveIntOutputs fstOutputs = PositiveIntOutputs.getSingleton(true);
+  private final PositiveIntOutputs fstOutputs = PositiveIntOutputs.getSingleton();
   private int indexDivisor;
 
   // Closed if indexLoaded is true:
@@ -199,7 +199,7 @@
         if (indexDivisor > 1) {
           // subsample
           final IntsRef scratchIntsRef = new IntsRef();
-          final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+          final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
           final Builder<Long> builder = new Builder<Long>(FST.INPUT_TYPE.BYTE1, outputs);
           final BytesRefFSTEnum<Long> fstEnum = new BytesRefFSTEnum<Long>(fst);
           BytesRefFSTEnum.InputOutput<Long> result;
diff --git a/lucene/codecs/src/java/org/apache/lucene/codecs/blockterms/VariableGapTermsIndexWriter.java b/lucene/codecs/src/java/org/apache/lucene/codecs/blockterms/VariableGapTermsIndexWriter.java
index 65a0b7e..6d3f6ba 100644
--- a/lucene/codecs/src/java/org/apache/lucene/codecs/blockterms/VariableGapTermsIndexWriter.java
+++ b/lucene/codecs/src/java/org/apache/lucene/codecs/blockterms/VariableGapTermsIndexWriter.java
@@ -235,7 +235,7 @@
 
     public FSTFieldWriter(FieldInfo fieldInfo, long termsFilePointer) throws IOException {
       this.fieldInfo = fieldInfo;
-      fstOutputs = PositiveIntOutputs.getSingleton(true);
+      fstOutputs = PositiveIntOutputs.getSingleton();
       fstBuilder = new Builder<Long>(FST.INPUT_TYPE.BYTE1, fstOutputs);
       indexStart = out.getFilePointer();
       ////System.out.println("VGW: field=" + fieldInfo.name);
diff --git a/lucene/codecs/src/java/org/apache/lucene/codecs/simpletext/SimpleTextFieldsReader.java b/lucene/codecs/src/java/org/apache/lucene/codecs/simpletext/SimpleTextFieldsReader.java
index ff940e8..d576d3c 100644
--- a/lucene/codecs/src/java/org/apache/lucene/codecs/simpletext/SimpleTextFieldsReader.java
+++ b/lucene/codecs/src/java/org/apache/lucene/codecs/simpletext/SimpleTextFieldsReader.java
@@ -513,7 +513,7 @@
     }
 
     private void loadTerms() throws IOException {
-      PositiveIntOutputs posIntOutputs = PositiveIntOutputs.getSingleton(false);
+      PositiveIntOutputs posIntOutputs = PositiveIntOutputs.getSingleton();
       final Builder<PairOutputs.Pair<Long,PairOutputs.Pair<Long,Long>>> b;
       final PairOutputs<Long,Long> outputsInner = new PairOutputs<Long,Long>(posIntOutputs, posIntOutputs);
       final PairOutputs<Long,PairOutputs.Pair<Long,Long>> outputs = new PairOutputs<Long,PairOutputs.Pair<Long,Long>>(posIntOutputs,
diff --git a/lucene/common-build.xml b/lucene/common-build.xml
index 582981b..e08ee51 100644
--- a/lucene/common-build.xml
+++ b/lucene/common-build.xml
@@ -1414,7 +1414,7 @@
     </sequential>
   </target>
   
-  <target name="-validate-maven-dependencies.init">
+  <target name="-validate-maven-dependencies.init" depends="filter-pom-templates">
     <!-- find the correct pom.xml path and assigns it to property pom.xml -->
     <property name="top.level.dir" location="${common.dir}/.."/>
     <pathconvert property="maven.pom.xml">
@@ -1441,6 +1441,11 @@
   
   <target name="-validate-maven-dependencies" depends="-validate-maven-dependencies.init">
     <m2-validate-dependencies pom.xml="${maven.pom.xml}" licenseDirectory="${license.dir}">
+      <additional-filters>
+        <replaceregex pattern="jetty([^/]+)$" replace="jetty" flags="gi" />
+        <replaceregex pattern="slf4j-([^/]+)$" replace="slf4j" flags="gi" />
+        <replaceregex pattern="javax\.servlet([^/]+)$" replace="javax.servlet" flags="gi" />
+      </additional-filters>
       <excludes>
         <rsel:name name="**/lucene-*-${maven.version.glob}.jar" handledirsep="true"/>
       </excludes>
@@ -1449,7 +1454,7 @@
 
   <target name="filter-pom-templates" unless="filtered.pom.templates.uptodate">
     <mkdir dir="${filtered.pom.templates.dir}"/>
-    <copy todir="${common.dir}/build/poms" overwrite="true">
+    <copy todir="${common.dir}/build/poms" overwrite="true" encoding="UTF-8">
       <fileset dir="${common.dir}/../dev-tools/maven"/>
       <filterset begintoken="@" endtoken="@">
         <filter token="version" value="${version}"/>
@@ -1625,21 +1630,43 @@
     </sequential>
   </macrodef>
 
-  <target name="-ecj-javadoc-lint" depends="-ecj-javadoc-lint-src,-ecj-javadoc-lint-tests"/>
+  <!-- ECJ Javadoc linting: -->
+  
+  <condition property="ecj-javadoc-lint.supported">
+    <not><equals arg1="${build.java.runtime}" arg2="1.8"/></not>
+  </condition>
 
-  <target name="-ecj-javadoc-lint-src" depends="-ecj-resolve">
+  <condition property="ecj-javadoc-lint-tests.supported">
+    <and>
+      <isset property="ecj-javadoc-lint.supported"/>
+      <isset property="module.has.tests"/>
+    </and>
+  </condition>
+
+  <target name="-ecj-javadoc-lint-unsupported" unless="ecj-javadoc-lint.supported">
+    <fail message="Linting documentation with ECJ is not supported on this Java version (${build.java.runtime}).">
+      <condition>
+        <not><isset property="is.jenkins.build"/></not>
+      </condition>
+    </fail>
+    <echo level="warning" message="WARN: Linting documentation with ECJ is not supported on this Java version (${build.java.runtime}). NOTHING DONE!"/>
+  </target>
+
+  <target name="-ecj-javadoc-lint" depends="-ecj-javadoc-lint-unsupported,-ecj-javadoc-lint-src,-ecj-javadoc-lint-tests"/>
+
+  <target name="-ecj-javadoc-lint-src" depends="-ecj-resolve" if="ecj-javadoc-lint.supported">
     <ecj-macro srcdir="${src.dir}" configuration="${common.dir}/tools/javadoc/ecj.javadocs.prefs">
       <classpath refid="classpath"/>
     </ecj-macro>
   </target>
 
-  <target name="-ecj-javadoc-lint-tests" depends="-ecj-resolve" if="module.has.tests">
+  <target name="-ecj-javadoc-lint-tests" depends="-ecj-resolve" if="ecj-javadoc-lint-tests.supported">
     <ecj-macro srcdir="${tests.src.dir}" configuration="${common.dir}/tools/javadoc/ecj.javadocs.prefs">
       <classpath refid="test.classpath"/>
     </ecj-macro>
   </target>
   
-  <target name="-ecj-resolve" unless="ecj.loaded" depends="ivy-availability-check,ivy-configure">
+  <target name="-ecj-resolve" unless="ecj.loaded" depends="ivy-availability-check,ivy-configure" if="ecj-javadoc-lint.supported">
     <ivy:cachepath organisation="org.eclipse.jdt.core.compiler" module="ecj" revision="3.7.2"
      inline="true" conf="master" type="jar" pathid="ecj.classpath" />
     <componentdef classname="org.eclipse.jdt.core.JDTCompilerAdapter"
@@ -2003,7 +2030,7 @@
     <element name="nested" optional="false" implicit="true"/>
     <sequential>
       <copy todir="@{todir}" flatten="@{flatten}" overwrite="@{overwrite}" verbose="true"
-        preservelastmodified="false" encoding="UTF-8" outputencoding="UTF-8" taskname="pegdown"
+        preservelastmodified="false" encoding="UTF-8" taskname="pegdown"
       >
         <filterchain>
           <tokenfilter>
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/StoredFieldsReader.java b/lucene/core/src/java/org/apache/lucene/codecs/StoredFieldsReader.java
index 11961d1..06203ec 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/StoredFieldsReader.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/StoredFieldsReader.java
@@ -36,9 +36,12 @@
   protected StoredFieldsReader() {
   }
   
-  /** Visit the stored fields for document <code>n</code>, ignoring certain
-   * fields. */
-  public abstract void visitDocument(int n, StoredFieldVisitor visitor, Set<String> ignoreFields) throws IOException;
+  /**
+   * Visit the stored fields for document <code>n</code>, ignoring certain
+   * fields.
+   */
+  public abstract void visitDocument(int n, StoredFieldVisitor visitor,
+      Set<String> ignoreFields) throws IOException;
 
   @Override
   public abstract StoredFieldsReader clone();
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsIndexReader.java b/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsIndexReader.java
index 749b2e9..00cc4f4 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsIndexReader.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/compressing/CompressingStoredFieldsIndexReader.java
@@ -165,9 +165,6 @@
     if (docID < 0 || docID >= maxDoc) {
       throw new IllegalArgumentException("docID out of range [0-" + maxDoc + "]: " + docID);
     }
-    if (docBases.length == 0) {
-      return -1;
-    }
     final int block = block(docID);
     final int relativeChunk = relativeChunk(block, docID - docBases[block]);
     return startPointers[block] + relativeStartPointer(block, relativeChunk);
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsWriter.java b/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsWriter.java
index 0386369..938d4c0 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/lucene40/Lucene40TermVectorsWriter.java
@@ -242,8 +242,8 @@
       if (payloads) {
         tvf.writeBytes(payloadData.bytes, payloadData.offset, payloadData.length);
       }
-      for (int i = 0; i < bufferedIndex; i++) {
-        if (offsets) {
+      if (offsets) {
+        for (int i = 0; i < bufferedIndex; i++) {
           tvf.writeVInt(offsetStartBuffer[i] - lastOffset);
           tvf.writeVInt(offsetEndBuffer[i] - offsetStartBuffer[i]);
           lastOffset = offsetEndBuffer[i];
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene40/package.html b/lucene/core/src/java/org/apache/lucene/codecs/lucene40/package.html
index d20037b..5187359 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/lucene40/package.html
+++ b/lucene/core/src/java/org/apache/lucene/codecs/lucene40/package.html
@@ -372,13 +372,7 @@
 <a name="Limitations" id="Limitations"></a>
 <h2>Limitations</h2>
 <div>
-<p>When referring to term numbers, Lucene's current implementation uses a Java
-<code>int</code> to hold the term index, which means the
-maximum number of unique terms in any single index segment is ~2.1 billion
-times the term index interval (default 128) = ~274 billion. This is technically
-not a limitation of the index file format, just of Lucene's current
-implementation.</p>
-<p>Similarly, Lucene uses a Java <code>int</code> to refer to
+<p>Lucene uses a Java <code>int</code> to refer to
 document numbers, and the index file format uses an <code>Int32</code>
 on-disk to store document numbers. This is a limitation
 of both the index file format and the current implementation. Eventually these
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene41/Lucene41PostingsFormat.java b/lucene/core/src/java/org/apache/lucene/codecs/lucene41/Lucene41PostingsFormat.java
index 4838f7a..df7fe6f 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/lucene41/Lucene41PostingsFormat.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/lucene41/Lucene41PostingsFormat.java
@@ -161,7 +161,7 @@
  *    <li>SkipFPDelta determines the position of this term's SkipData within the .doc
  *        file. In particular, it is the length of the TermFreq data.
  *        SkipDelta is only stored if DocFreq is not smaller than SkipMinimum
- *        (i.e. 8 in Lucene41PostingsFormat).</li>
+ *        (i.e. 128 in Lucene41PostingsFormat).</li>
  *    <li>SingletonDocID is an optimization when a term only appears in one document. In this case, instead
  *        of writing a file pointer to the .doc file (DocFPDelta), and then a VIntBlock at that location, the 
  *        single document ID is written to the term dictionary.</li>
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene41/package.html b/lucene/core/src/java/org/apache/lucene/codecs/lucene41/package.html
index 3df0293..d429cb0 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/lucene41/package.html
+++ b/lucene/core/src/java/org/apache/lucene/codecs/lucene41/package.html
@@ -381,13 +381,7 @@
 <a name="Limitations" id="Limitations"></a>
 <h2>Limitations</h2>
 <div>
-<p>When referring to term numbers, Lucene's current implementation uses a Java
-<code>int</code> to hold the term index, which means the
-maximum number of unique terms in any single index segment is ~2.1 billion
-times the term index interval (default 128) = ~274 billion. This is technically
-not a limitation of the index file format, just of Lucene's current
-implementation.</p>
-<p>Similarly, Lucene uses a Java <code>int</code> to refer to
+<p>Lucene uses a Java <code>int</code> to refer to
 document numbers, and the index file format uses an <code>Int32</code>
 on-disk to store document numbers. This is a limitation
 of both the index file format and the current implementation. Eventually these
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene42/Lucene42DocValuesConsumer.java b/lucene/core/src/java/org/apache/lucene/codecs/lucene42/Lucene42DocValuesConsumer.java
index aced6ce..a1f6dc4 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/lucene42/Lucene42DocValuesConsumer.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/lucene42/Lucene42DocValuesConsumer.java
@@ -245,7 +245,7 @@
     meta.writeVInt(field.number);
     meta.writeByte(FST);
     meta.writeLong(data.getFilePointer());
-    PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+    PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
     Builder<Long> builder = new Builder<Long>(INPUT_TYPE.BYTE1, outputs);
     IntsRef scratch = new IntsRef();
     long ord = 0;
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene42/Lucene42DocValuesProducer.java b/lucene/core/src/java/org/apache/lucene/codecs/lucene42/Lucene42DocValuesProducer.java
index 30aad0f..c5ac3f1 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/lucene42/Lucene42DocValuesProducer.java
+++ b/lucene/core/src/java/org/apache/lucene/codecs/lucene42/Lucene42DocValuesProducer.java
@@ -278,7 +278,7 @@
       instance = fstInstances.get(field.number);
       if (instance == null) {
         data.seek(entry.offset);
-        instance = new FST<Long>(data, PositiveIntOutputs.getSingleton(true));
+        instance = new FST<Long>(data, PositiveIntOutputs.getSingleton());
         fstInstances.put(field.number, instance);
       }
     }
@@ -352,7 +352,7 @@
       instance = fstInstances.get(field.number);
       if (instance == null) {
         data.seek(entry.offset);
-        instance = new FST<Long>(data, PositiveIntOutputs.getSingleton(true));
+        instance = new FST<Long>(data, PositiveIntOutputs.getSingleton());
         fstInstances.put(field.number, instance);
       }
     }
diff --git a/lucene/core/src/java/org/apache/lucene/codecs/lucene42/package.html b/lucene/core/src/java/org/apache/lucene/codecs/lucene42/package.html
index 9ed17df..571b766 100644
--- a/lucene/core/src/java/org/apache/lucene/codecs/lucene42/package.html
+++ b/lucene/core/src/java/org/apache/lucene/codecs/lucene42/package.html
@@ -384,13 +384,7 @@
 <a name="Limitations" id="Limitations"></a>
 <h2>Limitations</h2>
 <div>
-<p>When referring to term numbers, Lucene's current implementation uses a Java
-<code>int</code> to hold the term index, which means the
-maximum number of unique terms in any single index segment is ~2.1 billion
-times the term index interval (default 128) = ~274 billion. This is technically
-not a limitation of the index file format, just of Lucene's current
-implementation.</p>
-<p>Similarly, Lucene uses a Java <code>int</code> to refer to
+<p>Lucene uses a Java <code>int</code> to refer to
 document numbers, and the index file format uses an <code>Int32</code>
 on-disk to store document numbers. This is a limitation
 of both the index file format and the current implementation. Eventually these
diff --git a/lucene/core/src/java/org/apache/lucene/index/BufferedDeletesStream.java b/lucene/core/src/java/org/apache/lucene/index/BufferedDeletesStream.java
index 12b48f4..bb616cf 100644
--- a/lucene/core/src/java/org/apache/lucene/index/BufferedDeletesStream.java
+++ b/lucene/core/src/java/org/apache/lucene/index/BufferedDeletesStream.java
@@ -21,9 +21,7 @@
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.Comparator;
-import java.util.HashSet;
 import java.util.List;
-import java.util.Set;
 import java.util.SortedSet;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
@@ -86,12 +84,16 @@
     assert packet.anyDeletes() || packet.anyUpdates();
     assert checkDeleteStats();
     assert packet.delGen() < nextGen;
-    assert deletes.isEmpty() || deletes.get(deletes.size()-1).delGen() < packet.delGen() : "Delete packets must be in order";
+    assert deletes.isEmpty()
+        || deletes.get(deletes.size() - 1).delGen() < packet.delGen() : "Delete packets must be in order";
     deletes.add(packet);
     numTerms.addAndGet(packet.numTermDeletes);
     bytesUsed.addAndGet(packet.bytesUsed);
     if (infoStream.isEnabled("BD")) {
-      infoStream.message("BD", "push deletes " + packet + " delGen=" + packet.delGen() + " packetCount=" + deletes.size() + " totBytesUsed=" + bytesUsed.get());
+      infoStream.message("BD",
+          "push deletes " + packet + " delGen=" + packet.delGen()
+              + " packetCount=" + deletes.size() + " totBytesUsed="
+              + bytesUsed.get());
     }
     assert checkDeleteStats();
     return packet.delGen();
@@ -169,25 +171,51 @@
     List<SegmentInfoPerCommit> infos2 = new ArrayList<SegmentInfoPerCommit>();
     infos2.addAll(infos);
     Collections.sort(infos2, sortSegInfoByDelGen);
-
+    
+    boolean anyNewDeletes = false;
+    List<SegmentInfoPerCommit> allDeleted = new ArrayList<SegmentInfoPerCommit>();
+    // go through packets forward and apply deletes and updates
+    anyNewDeletes |= handleUpdates(readerPool, infos2);
+    // go through packets backwards and apply deletes
+    anyNewDeletes |= handleDeletes(readerPool, infos2, allDeleted);
+    
+    // mark all advanced segment infos
+    for (SegmentInfoPerCommit info : infos2) {
+      info.setBufferedDeletesGen(gen);
+    }
+    
+    assert checkDeleteStats();
+    if (infoStream.isEnabled("BD")) {
+      infoStream.message("BD",
+          "applyDeletes took " + (System.currentTimeMillis() - t0) + " msec");
+    }
+    // assert infos != segmentInfos || !any() : "infos=" + infos +
+    // " segmentInfos=" + segmentInfos + " any=" + any;
+    
+    if (allDeleted.size() == 0) {
+      allDeleted = null;
+    }
+    
+    return new ApplyDeletesResult(anyNewDeletes, gen, allDeleted);
+  }
+  
+  private boolean handleDeletes(IndexWriter.ReaderPool readerPool,
+      List<SegmentInfoPerCommit> infos2, List<SegmentInfoPerCommit> allDeleted) throws IOException {
     CoalescedDeletes coalescedDeletes = null;
     boolean anyNewDeletes = false;
-
-    int infosIDX = infos2.size()-1;
-    int delIDX = deletes.size()-1;
-
-    List<SegmentInfoPerCommit> allDeleted = null;
-    Set<SegmentInfoPerCommit> advanced = null;
-
+    
+    int infosIDX = infos2.size() - 1;
+    int delIDX = deletes.size() - 1;
+    
     while (infosIDX >= 0) {
       //System.out.println("BD: cycle delIDX=" + delIDX + " infoIDX=" + infosIDX);
 
       final FrozenBufferedDeletes packet = delIDX >= 0 ? deletes.get(delIDX) : null;
       final SegmentInfoPerCommit info = infos2.get(infosIDX);
       final long segGen = info.getBufferedDeletesGen();
-
+      
       if (packet != null && packet.anyDeletes() && segGen < packet.delGen()) {
-        //System.out.println("  coalesce");
+        // System.out.println("  coalesce");
         if (coalescedDeletes == null) {
           coalescedDeletes = new CoalescedDeletes();
         }
@@ -203,10 +231,12 @@
         }
 
         delIDX--;
-      } else if (packet != null && packet.anyDeletes() && segGen == packet.delGen()) {
-        assert packet.isSegmentPrivate : "Packet and Segments deletegen can only match on a segment private del packet gen=" + segGen;
-        //System.out.println("  eq");
-
+      } else if (packet != null && packet.anyDeletes()
+          && segGen == packet.delGen()) {
+        assert packet.isSegmentPrivate : "Packet and Segments deletegen can only match on a segment private del packet gen="
+            + segGen;
+        // System.out.println("  eq");
+        
         // Lock order: IW -> BD -> RP
         assert readerPool.infoIsLive(info);
         final ReadersAndLiveDocs rld = readerPool.get(info, true);
@@ -230,17 +260,20 @@
           rld.release(reader);
           readerPool.release(rld);
         }
-        anyNewDeletes |= delCount > 0;
-
+        if (delCount > 0) {
+          anyNewDeletes = true;
+        }
+        
         if (segAllDeletes) {
-          if (allDeleted == null) {
-            allDeleted = new ArrayList<SegmentInfoPerCommit>();
-          }
           allDeleted.add(info);
         }
 
         if (infoStream.isEnabled("BD")) {
-          infoStream.message("BD", "seg=" + info + " segGen=" + segGen + " segDeletes=[" + packet + "]; coalesced deletes=[" + (coalescedDeletes == null ? "null" : coalescedDeletes) + "] newDelCount=" + delCount + (segAllDeletes ? " 100% deleted" : ""));
+          infoStream.message("BD", "seg=" + info + " segGen=" + segGen
+              + " segDeletes=[" + packet + "]; coalesced deletes=["
+              + (coalescedDeletes == null ? "null" : coalescedDeletes)
+              + "] newDelCount=" + delCount
+              + (segAllDeletes ? " 100% deleted" : ""));
         }
 
         if (coalescedDeletes == null) {
@@ -254,11 +287,6 @@
          */
         delIDX--;
         infosIDX--;
-        if (advanced == null) {
-          advanced = new HashSet<SegmentInfoPerCommit>();
-        }
-        advanced.add(info);
-
       } else if (packet != null && !packet.anyDeletes() && packet.anyUpdates()) {
         // ignore updates only packets
         delIDX--;
@@ -282,68 +310,58 @@
             rld.release(reader);
             readerPool.release(rld);
           }
-          anyNewDeletes |= delCount > 0;
-
+          if (delCount > 0) {
+            anyNewDeletes = true;
+          }
           if (segAllDeletes) {
-            if (allDeleted == null) {
-              allDeleted = new ArrayList<SegmentInfoPerCommit>();
-            }
             allDeleted.add(info);
           }
 
           if (infoStream.isEnabled("BD")) {
-            infoStream.message("BD", "seg=" + info + " segGen=" + segGen + " coalesced deletes=[" + (coalescedDeletes == null ? "null" : coalescedDeletes) + "] newDelCount=" + delCount + (segAllDeletes ? " 100% deleted" : ""));
+            infoStream.message("BD", "seg=" + info + " segGen=" + segGen
+                + " coalesced deletes=["
+                + (coalescedDeletes == null ? "null" : coalescedDeletes)
+                + "] newDelCount=" + delCount
+                + (segAllDeletes ? " 100% deleted" : ""));
           }
-        if (advanced == null) {
-          advanced = new HashSet<SegmentInfoPerCommit>();
-        }
-        advanced.add(info);
         }
 
         infosIDX--;
       }
     }
+    return anyNewDeletes;
+  }
+  
+  private boolean handleUpdates(IndexWriter.ReaderPool readerPool,
+      List<SegmentInfoPerCommit> infos2)
+      throws IOException {
+    boolean anyNewDeletes = false;
     
-    // go through deletes forward and apply updates
-    for (SegmentInfoPerCommit updateInfo : infos2) {
-      final long updateSegGen = updateInfo.getBufferedDeletesGen();
+    for (SegmentInfoPerCommit info : infos2) {
+      final long segGen = info.getBufferedDeletesGen();
       
-      for (FrozenBufferedDeletes updatePacket : deletes) {
-        if (updatePacket.anyUpdates() && updateSegGen <= updatePacket.delGen()) {
-          assert readerPool.infoIsLive(updateInfo);
+      for (int delIdx = 0; delIdx < deletes.size(); delIdx++) {
+        FrozenBufferedDeletes packet = deletes.get(delIdx);
+        assert readerPool.infoIsLive(info);
+        if (segGen <= packet.delGen() && packet.anyUpdates()) {
           // we need to reopen the reader every time, to include previous
-          // updates when applying new ones
-          final ReadersAndLiveDocs rld = readerPool.get(updateInfo, true);
+          // changes when applying new ones
+          final ReadersAndLiveDocs rld = readerPool.get(info, true);
           final SegmentReader reader = rld.getReader(IOContext.READ);
-          final boolean exactGen = updateSegGen == updatePacket.delGen();
           try {
-            anyNewDeletes |= applyTermUpdates(updatePacket.allUpdates, rld,
-                reader, exactGen);
+            final boolean exactGen = (segGen == packet.delGen());
+            if (applyTermUpdates(packet.allUpdates, rld, reader, exactGen)) {
+              anyNewDeletes = true;
+            }
           } finally {
             rld.release(reader);
             readerPool.release(rld);
           }
-          if (advanced == null) {
-            advanced = new HashSet<SegmentInfoPerCommit>();
-          }
-          advanced.add(updateInfo);
         }
       }
+      
     }
-
-    if (advanced != null) {
-      for (SegmentInfoPerCommit info : advanced) {
-        info.setBufferedDeletesGen(gen);
-      }
-    }
-    
-    assert checkDeleteStats();
-    if (infoStream.isEnabled("BD")) {
-      infoStream.message("BD", "applyDeletes took " + (System.currentTimeMillis()-t0) + " msec");
-    }
-    // assert infos != segmentInfos || !any() : "infos=" + infos + " segmentInfos=" + segmentInfos + " any=" + any;
-
-    return new ApplyDeletesResult(anyNewDeletes, gen, allDeleted);
+    return anyNewDeletes;
   }
 
   synchronized long getNextGen() {
@@ -467,7 +485,7 @@
 
     return delCount;
   }
-
+  
   private synchronized boolean applyTermUpdates(
       SortedSet<FieldsUpdate> packetUpdates, ReadersAndLiveDocs rld,
       SegmentReader reader, boolean exactSegment) throws IOException {
@@ -478,9 +496,9 @@
     }
     
     assert checkDeleteTerm(null);
-
+    
     UpdatedSegmentData updatedSegmentData = new UpdatedSegmentData(reader,
-        packetUpdates, exactSegment);
+        packetUpdates, exactSegment, infoStream);
     
     if (updatedSegmentData.hasUpdates()) {
       rld.setLiveUpdates(updatedSegmentData);
@@ -489,7 +507,7 @@
     
     return false;
   }
-
+  
   public static class QueryAndLimit {
     public final Query query;
     public final int limit;
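
To make the refactor above easier to follow, here is a minimal, self-contained sketch (hypothetical class name, simplified logic, not part of the patch) of the generation comparison that drives both new helpers: handleUpdates applies a packet to a segment when segGen <= packet.delGen(), while handleDeletes only accepts an exact generation match for a segment-private packet.

// Hypothetical illustration only: mirrors the segGen vs. delGen comparisons in the
// refactored handleDeletes/handleUpdates, with all Lucene types stripped out.
final class GenCheckSketch {

  // Global packets cover this segment and all older ones; segment-private
  // packets must match the segment's generation exactly.
  static boolean packetApplies(long segGen, long packetDelGen, boolean segmentPrivate) {
    if (segmentPrivate) {
      return segGen == packetDelGen;
    }
    return segGen <= packetDelGen;
  }

  public static void main(String[] args) {
    System.out.println(packetApplies(3, 5, false)); // true: packet is newer than the segment
    System.out.println(packetApplies(7, 5, false)); // false: segment already advanced past this packet
    System.out.println(packetApplies(5, 5, true));  // true: exact segment-private match
  }
}
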
diff --git a/lucene/core/src/java/org/apache/lucene/index/BufferedUpdates.java b/lucene/core/src/java/org/apache/lucene/index/BufferedUpdates.java
index b0b8bc1..b01b267 100644
--- a/lucene/core/src/java/org/apache/lucene/index/BufferedUpdates.java
+++ b/lucene/core/src/java/org/apache/lucene/index/BufferedUpdates.java
@@ -17,8 +17,9 @@
  * limitations under the License.
  */
 
-import java.util.SortedSet;
-import java.util.TreeSet;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ConcurrentSkipListMap;
 import java.util.concurrent.atomic.AtomicInteger;
 import java.util.concurrent.atomic.AtomicLong;
 
@@ -37,7 +38,7 @@
 class BufferedUpdates {
 
   final AtomicInteger numTermUpdates = new AtomicInteger();
-  final SortedFieldsUpdates terms = new SortedFieldsUpdates();
+  final ConcurrentSkipListMap<Term,List<FieldsUpdate>> terms = new ConcurrentSkipListMap<Term,List<FieldsUpdate>>();
 
   public static final Integer MAX_INT = Integer.valueOf(Integer.MAX_VALUE);
 
@@ -73,21 +74,11 @@
     }
   }
 
-  public void addTerm(Term term, FieldsUpdate update) {
-    SortedSet<FieldsUpdate> current = terms.get(term);
-    //if (current != null && update.docIDUpto < current.peek().docIDUpto) {
-      // Only record the new number if it's greater than the
-      // current one.  This is important because if multiple
-      // threads are replacing the same doc at nearly the
-      // same time, it's possible that one thread that got a
-      // higher docID is scheduled before the other
-      // threads.  If we blindly replace than we can
-      // incorrectly get both docs indexed.
-      //return;
-    //}
+  public synchronized void addTerm(Term term, FieldsUpdate update) {
+    List<FieldsUpdate> current = terms.get(term);
 
     if (current == null) {
-      current = new TreeSet<FieldsUpdate>();
+      current = new ArrayList<FieldsUpdate>(1);
       terms.put(term, current);
       bytesUsed.addAndGet(BufferedDeletes.BYTES_PER_DEL_TERM
           + term.bytes.length
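
The addTerm change above replaces a per-term SortedSet with a ConcurrentSkipListMap of plain lists. A small sketch of that data-structure choice (hypothetical generic class, not taken from the patch): the skip list keeps keys sorted for later iteration, while the synchronized add guards the lazily created per-key list.

// Hypothetical sketch: keys stay sorted for later iteration (skip list), while the
// synchronized add protects the lazily created per-key list, echoing the revised
// BufferedUpdates.addTerm above.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentSkipListMap;

final class SortedUpdateBuffer<K extends Comparable<K>, V> {
  private final ConcurrentSkipListMap<K, List<V>> updatesByKey =
      new ConcurrentSkipListMap<>();

  synchronized void add(K key, V update) {
    List<V> current = updatesByKey.get(key);
    if (current == null) {
      current = new ArrayList<>(1);   // most keys see a single update
      updatesByKey.put(key, current);
    }
    current.add(update);
  }

  public static void main(String[] args) {
    SortedUpdateBuffer<String, Integer> buffer = new SortedUpdateBuffer<>();
    buffer.add("title", 1);
    buffer.add("body", 2);
    buffer.add("title", 3);
    System.out.println(buffer.updatesByKey); // {body=[2], title=[1, 3]} -- sorted by key
  }
}
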
diff --git a/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java b/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java
index 6a2a771..f55df1d 100644
--- a/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java
+++ b/lucene/core/src/java/org/apache/lucene/index/CheckIndex.java
@@ -30,7 +30,8 @@
 
 import org.apache.lucene.codecs.BlockTreeTermsReader;
 import org.apache.lucene.codecs.Codec;
-import org.apache.lucene.codecs.PostingsFormat;
+import org.apache.lucene.codecs.PostingsFormat; // javadocs
+import org.apache.lucene.document.FieldType; // for javadocs
 import org.apache.lucene.index.FieldInfo.IndexOptions;
 import org.apache.lucene.search.DocIdSetIterator;
 import org.apache.lucene.store.Directory;
@@ -43,8 +44,6 @@
 import org.apache.lucene.util.FixedBitSet;
 import org.apache.lucene.util.OpenBitSet;
 import org.apache.lucene.util.StringHelper;
-// javadocs
-// for javadocs
 
 /**
  * Basic tool and API to check the health of an index and
@@ -464,11 +463,11 @@
 
     if (onlySegments != null) {
       result.partial = true;
-      if (infoStream != null)
+      if (infoStream != null) {
         infoStream.print("\nChecking only these segments:");
-      for (String s : onlySegments) {
-        if (infoStream != null)
+        for (String s : onlySegments) {
           infoStream.print(" " + s);
+        }
       }
       result.segmentsChecked.addAll(onlySegments);
       msg(infoStream, ":");
diff --git a/lucene/core/src/java/org/apache/lucene/index/CoalescedDeletes.java b/lucene/core/src/java/org/apache/lucene/index/CoalescedDeletes.java
index 7d55d53..a5c3a70 100644
--- a/lucene/core/src/java/org/apache/lucene/index/CoalescedDeletes.java
+++ b/lucene/core/src/java/org/apache/lucene/index/CoalescedDeletes.java
@@ -38,10 +38,12 @@
 
   void update(FrozenBufferedDeletes in) {
     iterables.add(in.termsIterable());
-
-    for(int queryIdx=0;queryIdx<in.queries.length;queryIdx++) {
-      final Query query = in.queries[queryIdx];
-      queries.put(query, BufferedDeletes.MAX_INT);
+    
+    if (in.queries != null) {
+      for (int queryIdx = 0; queryIdx < in.queries.length; queryIdx++) {
+        final Query query = in.queries[queryIdx];
+        queries.put(query, BufferedDeletes.MAX_INT);
+      }
     }
   }
 
diff --git a/lucene/core/src/java/org/apache/lucene/index/DocFieldConsumer.java b/lucene/core/src/java/org/apache/lucene/index/DocFieldConsumer.java
index 13832c8..be5c74c 100644
--- a/lucene/core/src/java/org/apache/lucene/index/DocFieldConsumer.java
+++ b/lucene/core/src/java/org/apache/lucene/index/DocFieldConsumer.java
@@ -24,8 +24,7 @@
 
 abstract class DocFieldConsumer {
   /** Called when DocumentsWriterPerThread decides to create a new
-   *  segment 
-   */
+   *  segment */
   abstract void flush(Map<String, DocFieldConsumerPerField> fieldsToFlush, SegmentWriteState state) throws IOException;
 
   /** Called when an aborting exception is hit */
diff --git a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java
index 99c2f1d..0400b2d 100644
--- a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriter.java
@@ -35,80 +35,96 @@
 import org.apache.lucene.index.DocumentsWriterPerThreadPool.ThreadState;
 import org.apache.lucene.index.FieldInfos.FieldNumbers;
 import org.apache.lucene.index.FieldsUpdate.Operation;
+import org.apache.lucene.search.MatchAllDocsQuery;
 import org.apache.lucene.search.Query;
 import org.apache.lucene.search.similarities.Similarity;
 import org.apache.lucene.store.AlreadyClosedException;
 import org.apache.lucene.store.Directory;
+import org.apache.lucene.store.FlushInfo;
 import org.apache.lucene.store.IOContext;
 import org.apache.lucene.store.MergeInfo;
 import org.apache.lucene.store.TrackingDirectoryWrapper;
 import org.apache.lucene.util.InfoStream;
+import org.apache.lucene.util.MutableBits;
 
 /**
- * This class accepts multiple added documents and directly writes segment
- * files.
- * 
- * Each added document is passed to the {@link DocConsumer}, which in turn
- * processes the document and interacts with other consumers in the indexing
- * chain. Certain consumers, like {@link StoredFieldsConsumer} and
- * {@link TermVectorsConsumer}, digest a document and immediately write bytes to
- * the "doc store" files (ie, they do not consume RAM per document, except while
- * they are processing the document).
- * 
- * Other consumers, eg {@link FreqProxTermsWriter} and {@link NormsConsumer},
- * buffer bytes in RAM and flush only when a new segment is produced.
- * 
- * Once we have used our allowed RAM buffer, or the number of added docs is
- * large enough (in the case we are flushing by doc count instead of RAM usage),
- * we create a real segment and flush it to the Directory.
- * 
+ * This class accepts multiple added documents and directly
+ * writes segment files.
+ *
+ * Each added document is passed to the {@link DocConsumer},
+ * which in turn processes the document and interacts with
+ * other consumers in the indexing chain.  Certain
+ * consumers, like {@link StoredFieldsConsumer} and {@link
+ * TermVectorsConsumer}, digest a document and
+ * immediately write bytes to the "doc store" files (ie,
+ * they do not consume RAM per document, except while they
+ * are processing the document).
+ *
+ * Other consumers, eg {@link FreqProxTermsWriter} and
+ * {@link NormsConsumer}, buffer bytes in RAM and flush only
+ * when a new segment is produced.
+ *
+ * Once we have used our allowed RAM buffer, or the number
+ * of added docs is large enough (in the case we are
+ * flushing by doc count instead of RAM usage), we create a
+ * real segment and flush it to the Directory.
+ *
  * Threads:
- * 
- * Multiple threads are allowed into addDocument at once. There is an initial
- * synchronized call to getThreadState which allocates a ThreadState for this
- * thread. The same thread will get the same ThreadState over time (thread
- * affinity) so that if there are consistent patterns (for example each thread
- * is indexing a different content source) then we make better use of RAM. Then
- * processDocument is called on that ThreadState without synchronization (most
- * of the "heavy lifting" is in this call). Finally the synchronized
- * "finishDocument" is called to flush changes to the directory.
- * 
- * When flush is called by IndexWriter we forcefully idle all threads and flush
- * only once they are all idle. This means you can call flush with a given
- * thread even while other threads are actively adding/deleting documents.
- * 
- * 
+ *
+ * Multiple threads are allowed into addDocument at once.
+ * There is an initial synchronized call to getThreadState
+ * which allocates a ThreadState for this thread.  The same
+ * thread will get the same ThreadState over time (thread
+ * affinity) so that if there are consistent patterns (for
+ * example each thread is indexing a different content
+ * source) then we make better use of RAM.  Then
+ * processDocument is called on that ThreadState without
+ * synchronization (most of the "heavy lifting" is in this
+ * call).  Finally the synchronized "finishDocument" is
+ * called to flush changes to the directory.
+ *
+ * When flush is called by IndexWriter we forcefully idle
+ * all threads and flush only once they are all idle.  This
+ * means you can call flush with a given thread even while
+ * other threads are actively adding/deleting documents.
+ *
+ *
  * Exceptions:
- * 
- * Because this class directly updates in-memory posting lists, and flushes
- * stored fields and term vectors directly to files in the directory, there are
- * certain limited times when an exception can corrupt this state. For example,
- * a disk full while flushing stored fields leaves this file in a corrupt state.
- * Or, an OOM exception while appending to the in-memory posting lists can
- * corrupt that posting list. We call such exceptions "aborting exceptions". In
- * these cases we must call abort() to discard all docs added since the last
- * flush.
- * 
- * All other exceptions ("non-aborting exceptions") can still partially update
- * the index structures. These updates are consistent, but, they represent only
- * a part of the document seen up until the exception was hit. When this
- * happens, we immediately mark the document as deleted so that the document is
- * always atomically ("all or none") added to the index.
+ *
+ * Because this class directly updates in-memory posting
+ * lists, and flushes stored fields and term vectors
+ * directly to files in the directory, there are certain
+ * limited times when an exception can corrupt this state.
+ * For example, a disk full while flushing stored fields
+ * leaves this file in a corrupt state.  Or, an OOM
+ * exception while appending to the in-memory posting lists
+ * can corrupt that posting list.  We call such exceptions
+ * "aborting exceptions".  In these cases we must call
+ * abort() to discard all docs added since the last flush.
+ *
+ * All other exceptions ("non-aborting exceptions") can
+ * still partially update the index structures.  These
+ * updates are consistent, but, they represent only a part
+ * of the document seen up until the exception was hit.
+ * When this happens, we immediately mark the document as
+ * deleted so that the document is always atomically ("all
+ * or none") added to the index.
  */
 
 final class DocumentsWriter {
   Directory directory;
-  
+
   private volatile boolean closed;
-  
+
   final InfoStream infoStream;
   Similarity similarity;
-  
+
   List<String> newFiles;
-  
+
   final IndexWriter indexWriter;
-  
+
   private AtomicInteger numDocsInRAM = new AtomicInteger(0);
+  private AtomicInteger numUpdates = new AtomicInteger(0);
   
   // TODO: cut over to BytesRefHash in BufferedDeletes
   volatile DocumentsWriterDeleteQueue deleteQueue = new DocumentsWriterDeleteQueue();
@@ -120,20 +136,17 @@
    * #anyChanges() & #flushAllThreads
    */
   private volatile boolean pendingChangesInCurrentFullFlush;
-  
-  private Collection<String> abortedFiles; // List of files that were written
-                                           // before last abort()
-  
+
+  private Collection<String> abortedFiles;               // List of files that were written before last abort()
+
   final IndexingChain chain;
-  
+
   final DocumentsWriterPerThreadPool perThreadPool;
   final FlushPolicy flushPolicy;
   final DocumentsWriterFlushControl flushControl;
   
   final Codec codec;
-  
-  DocumentsWriter(Codec codec, LiveIndexWriterConfig config,
-      Directory directory, IndexWriter writer, FieldNumbers globalFieldNumbers,
+  DocumentsWriter(Codec codec, LiveIndexWriterConfig config, Directory directory, IndexWriter writer, FieldNumbers globalFieldNumbers,
       BufferedDeletesStream bufferedDeletesStream) {
     this.codec = codec;
     this.directory = directory;
@@ -148,7 +161,7 @@
     flushPolicy.init(this);
     flushControl = new DocumentsWriterFlushControl(this, config);
   }
-  
+
   synchronized void deleteQueries(final Query... queries) throws IOException {
     deleteQueue.addDelete(queries);
     flushControl.doOnDelete();
@@ -156,7 +169,7 @@
       applyAllDeletes(deleteQueue);
     }
   }
-  
+
   // TODO: we could check w/ FreqProxTermsWriter: if the
   // term doesn't exist, don't bother buffering into the
   // per-DWPT map (but still must go into the global map)
@@ -168,49 +181,47 @@
       applyAllDeletes(deleteQueue);
     }
   }
-  
+
   DocumentsWriterDeleteQueue currentDeleteSession() {
     return deleteQueue;
   }
   
-  private void applyAllDeletes(DocumentsWriterDeleteQueue deleteQueue)
-      throws IOException {
+  private void applyAllDeletes(DocumentsWriterDeleteQueue deleteQueue) throws IOException {
     if (deleteQueue != null && !flushControl.isFullFlush()) {
       ticketQueue.addDeletesAndPurge(this, deleteQueue);
     }
     indexWriter.applyAllDeletes();
     indexWriter.flushCount.incrementAndGet();
   }
-  
+
   /** Returns how many docs are currently buffered in RAM. */
   int getNumDocs() {
     return numDocsInRAM.get();
   }
-  
+
   Collection<String> abortedFiles() {
     return abortedFiles;
   }
-  
+
   private void ensureOpen() throws AlreadyClosedException {
     if (closed) {
       throw new AlreadyClosedException("this IndexWriter is closed");
     }
   }
-  
-  /**
-   * Called if we hit an exception at a bad time (when updating the index files)
-   * and must discard all currently buffered docs. This resets our state,
-   * discarding any docs added since last flush.
-   */
+
+  /** Called if we hit an exception at a bad time (when
+   *  updating the index files) and must discard all
+   *  currently buffered docs.  This resets our state,
+   *  discarding any docs added since last flush. */
   synchronized void abort() {
     boolean success = false;
-    
+
     try {
       deleteQueue.clear();
       if (infoStream.isEnabled("DW")) {
         infoStream.message("DW", "abort");
       }
-      
+
       final int limit = perThreadPool.getActiveThreadState();
       for (int i = 0; i < limit; i++) {
         final ThreadState perThread = perThreadPool.getThreadState(i);
@@ -235,58 +246,110 @@
       success = true;
     } finally {
       if (infoStream.isEnabled("DW")) {
-        infoStream.message("DW", "done abort; abortedFiles=" + abortedFiles
-            + " success=" + success);
+        infoStream.message("DW", "done abort; abortedFiles=" + abortedFiles + " success=" + success);
       }
     }
   }
   
+  synchronized void lockAndAbortAll() {
+    assert indexWriter.holdsFullFlushLock();
+    if (infoStream.isEnabled("DW")) {
+      infoStream.message("DW", "lockAndAbortAll");
+    }
+    boolean success = false;
+    try {
+      deleteQueue.clear();
+      final int limit = perThreadPool.getMaxThreadStates();
+      for (int i = 0; i < limit; i++) {
+        final ThreadState perThread = perThreadPool.getThreadState(i);
+        perThread.lock();
+        if (perThread.isActive()) { // we might be closed or 
+          try {
+            perThread.dwpt.abort();
+          } finally {
+            perThread.dwpt.checkAndResetHasAborted();
+            flushControl.doOnAbort(perThread);
+          }
+        }
+      }
+      deleteQueue.clear();
+      flushControl.abortPendingFlushes();
+      flushControl.waitForFlush();
+      success = true;
+    } finally {
+      if (infoStream.isEnabled("DW")) {
+        infoStream.message("DW", "finished lockAndAbortAll success=" + success);
+      }
+      if (!success) {
+        // if something happens here we unlock all states again
+        unlockAllAfterAbortAll();
+      }
+    }
+  }
+  
+  final synchronized void unlockAllAfterAbortAll() {
+    assert indexWriter.holdsFullFlushLock();
+    if (infoStream.isEnabled("DW")) {
+      infoStream.message("DW", "unlockAll");
+    }
+    final int limit = perThreadPool.getMaxThreadStates();
+    for (int i = 0; i < limit; i++) {
+      try {
+        final ThreadState perThread = perThreadPool.getThreadState(i);
+        if (perThread.isHeldByCurrentThread()) {
+          perThread.unlock();
+        }
+      } catch(Throwable e) {
+        if (infoStream.isEnabled("DW")) {
+          infoStream.message("DW", "unlockAll: could not unlock state: " + i + " msg:" + e.getMessage());
+        }
+        // ignore & keep on unlocking
+      }
+    }
+  }
+
   boolean anyChanges() {
     if (infoStream.isEnabled("DW")) {
-      infoStream.message("DW",
-          "anyChanges? numDocsInRam=" + numDocsInRAM.get() + " deletes="
-              + anyDeletions() + " hasTickets:" + ticketQueue.hasTickets()
-              + " pendingChangesInFullFlush: "
-              + pendingChangesInCurrentFullFlush);
+      infoStream.message("DW", "anyChanges? numDocsInRam=" + numDocsInRAM.get()
+          + " deletes=" + anyDeletions() + " hasTickets:"
+          + ticketQueue.hasTickets() + " pendingChangesInFullFlush: "
+          + pendingChangesInCurrentFullFlush);
     }
     /*
-     * changes are either in a DWPT or in the deleteQueue. yet if we currently
-     * flush deletes and / or dwpt there could be a window where all changes are
-     * in the ticket queue before they are published to the IW. ie we need to
-     * check if the ticket queue has any tickets.
+     * changes are either in a DWPT or in the deleteQueue.
+     * yet if we currently flush deletes and / or dwpt there
+     * could be a window where all changes are in the ticket queue
+     * before they are published to the IW. ie we need to check if the 
+     * ticket queue has any tickets.
      */
-    return numDocsInRAM.get() != 0 || anyDeletions()
-        || ticketQueue.hasTickets() || pendingChangesInCurrentFullFlush;
+    return numDocsInRAM.get() != 0 || anyDeletions() || ticketQueue.hasTickets() || pendingChangesInCurrentFullFlush;
   }
   
   public int getBufferedDeleteTermsSize() {
     return deleteQueue.getBufferedDeleteTermsSize();
   }
-  
-  // for testing
+
+  //for testing
   public int getNumBufferedDeleteTerms() {
     return deleteQueue.numGlobalTermDeletes();
   }
-  
+
   public boolean anyDeletions() {
     return deleteQueue.anyChanges();
   }
-  
+
   void close() {
     closed = true;
     flushControl.setClosed();
   }
-  
+
   private boolean preUpdate() throws IOException {
     ensureOpen();
     boolean maybeMerge = false;
     if (flushControl.anyStalledThreads() || flushControl.numQueuedFlushes() > 0) {
       // Help out flushing any queued DWPTs so we can un-stall:
       if (infoStream.isEnabled("DW")) {
-        infoStream
-            .message(
-                "DW",
-                "DocumentsWriter has queued dwpt; will hijack this thread to flush pending segment(s)");
+        infoStream.message("DW", "DocumentsWriter has queued dwpt; will hijack this thread to flush pending segment(s)");
       }
       do {
         // Try pick up pending threads here if possible
@@ -295,58 +358,52 @@
           // Don't push the delete here since the update could fail!
           maybeMerge |= doFlush(flushingDWPT);
         }
-        
+  
         if (infoStream.isEnabled("DW")) {
           if (flushControl.anyStalledThreads()) {
-            infoStream.message("DW",
-                "WARNING DocumentsWriter has stalled threads; waiting");
+            infoStream.message("DW", "WARNING DocumentsWriter has stalled threads; waiting");
           }
         }
         
         flushControl.waitIfStalled(); // block if stalled
-      } while (flushControl.numQueuedFlushes() != 0); // still queued DWPTs try
-                                                      // help flushing
-      
+      } while (flushControl.numQueuedFlushes() != 0); // still queued DWPTs try help flushing
+
       if (infoStream.isEnabled("DW")) {
-        infoStream
-            .message("DW",
-                "continue indexing after helping out flushing DocumentsWriter is healthy");
+        infoStream.message("DW", "continue indexing after helping out flushing DocumentsWriter is healthy");
       }
     }
     return maybeMerge;
   }
-  
-  private boolean postUpdate(DocumentsWriterPerThread flushingDWPT,
-      boolean maybeMerge) throws IOException {
+
+  private boolean postUpdate(DocumentsWriterPerThread flushingDWPT, boolean maybeMerge) throws IOException {
     if (flushControl.doApplyAllDeletes()) {
       applyAllDeletes(deleteQueue);
     }
     if (flushingDWPT != null) {
       maybeMerge |= doFlush(flushingDWPT);
     } else {
-      final DocumentsWriterPerThread nextPendingFlush = flushControl
-          .nextPendingFlush();
+      final DocumentsWriterPerThread nextPendingFlush = flushControl.nextPendingFlush();
       if (nextPendingFlush != null) {
         maybeMerge |= doFlush(nextPendingFlush);
       }
     }
-    
+
     return maybeMerge;
   }
-  
-  boolean updateDocuments(final Iterable<? extends IndexDocument> docs,
-      final Analyzer analyzer, final Term delTerm) throws IOException {
+
+  boolean updateDocuments(final Iterable<? extends IndexDocument> docs, final Analyzer analyzer,
+                          final Term delTerm) throws IOException {
     boolean maybeMerge = preUpdate();
-    
+
     final ThreadState perThread = flushControl.obtainAndLock();
     final DocumentsWriterPerThread flushingDWPT;
     
     try {
       if (!perThread.isActive()) {
         ensureOpen();
-        assert false : "perThread is not active but we are still open";
+        assert false: "perThread is not active but we are still open";
       }
-      
+       
       final DocumentsWriterPerThread dwpt = perThread.dwpt;
       try {
         final int docCount = dwpt.updateDocuments(docs, analyzer, delTerm);
@@ -361,30 +418,29 @@
     } finally {
       perThread.unlock();
     }
-    
+
     return postUpdate(flushingDWPT, maybeMerge);
   }
-  
+
   boolean updateDocument(final IndexDocument doc, final Analyzer analyzer,
       final Term delTerm) throws IOException {
-    
+
     boolean maybeMerge = preUpdate();
-    
+
     final ThreadState perThread = flushControl.obtainAndLock();
-    
+
     final DocumentsWriterPerThread flushingDWPT;
     
     try {
-      
+
       if (!perThread.isActive()) {
         ensureOpen();
-        throw new IllegalStateException(
-            "perThread is not active but we are still open");
+        throw new IllegalStateException("perThread is not active but we are still open");
       }
-      
+       
       final DocumentsWriterPerThread dwpt = perThread.dwpt;
       try {
-        dwpt.updateDocument(doc, analyzer, delTerm);
+        dwpt.updateDocument(doc, analyzer, delTerm); 
         numDocsInRAM.incrementAndGet();
       } finally {
         if (dwpt.checkAndResetHasAborted()) {
@@ -421,7 +477,7 @@
         // create new fields update, which should affect previous docs in the
         // current segment
         FieldsUpdate fieldsUpdate = new FieldsUpdate(term, operation, fields, 
-            analyzer, numDocsInRAM.get() - 1, System.currentTimeMillis());
+            analyzer, numDocsInRAM.get() - 1, numUpdates.addAndGet(1));
         // invert the given fields and store in RAMDirectory
         dwpt.invertFieldsUpdate(fieldsUpdate, globalFieldNumberMap);
         dwpt.updateFields(term, fieldsUpdate);
@@ -573,10 +629,9 @@
          * might fail to delete documents in 'A'.
          */
         try {
-          // Each flush is assigned a ticket in the order they acquire the
-          // ticketQueue lock
+          // Each flush is assigned a ticket in the order they acquire the ticketQueue lock
           ticket = ticketQueue.addFlushTicket(flushingDWPT);
-          
+  
           // flush concurrently without locking
           final FlushedSegment newSegment = flushingDWPT.flush();
           if (newSegment == null) {
@@ -590,8 +645,7 @@
         } finally {
           if (!success && ticket != null) {
             // In the case of a failure make sure we are making progress and
-            // apply all the deletes since the segment flush failed since the
-            // flush
+            // apply all the deletes since the segment flush failed since the flush
             // ticket could hold global deletes see FlushTicket#canPublish()
             ticketQueue.markTicketFailed(ticket);
           }
@@ -600,38 +654,35 @@
          * Now we are done and try to flush the ticket queue if the head of the
          * queue has already finished the flush.
          */
-        if (ticketQueue.getTicketCount() >= perThreadPool
-            .getActiveThreadState()) {
+        if (ticketQueue.getTicketCount() >= perThreadPool.getActiveThreadState()) {
           // This means there is a backlog: the one
           // thread in innerPurge can't keep up with all
-          // other threads flushing segments. In this case
+          // other threads flushing segments.  In this case
           // we forcefully stall the producers.
           ticketQueue.forcePurge(this);
         } else {
           ticketQueue.tryPurge(this);
         }
-        
+
       } finally {
         flushControl.doAfterFlush(flushingDWPT);
         flushingDWPT.checkAndResetHasAborted();
         indexWriter.flushCount.incrementAndGet();
         indexWriter.doAfterFlush();
       }
-      
+     
       flushingDWPT = flushControl.nextPendingFlush();
     }
-    
+
     // If deletes alone are consuming > 1/2 our RAM
     // buffer, force them all to apply now. This is to
     // prevent too-frequent flushing of a long tail of
     // tiny segments:
     final double ramBufferSizeMB = indexWriter.getConfig().getRAMBufferSizeMB();
-    if (ramBufferSizeMB != IndexWriterConfig.DISABLE_AUTO_FLUSH
-        && flushControl.getDeleteBytesUsed() > (1024 * 1024 * ramBufferSizeMB / 2)) {
+    if (ramBufferSizeMB != IndexWriterConfig.DISABLE_AUTO_FLUSH &&
+        flushControl.getDeleteBytesUsed() > (1024*1024*ramBufferSizeMB/2)) {
       if (infoStream.isEnabled("DW")) {
-        infoStream.message("DW", "force apply deletes bytesUsed="
-            + flushControl.getDeleteBytesUsed() + " vs ramBuffer="
-            + (1024 * 1024 * ramBufferSizeMB));
+        infoStream.message("DW", "force apply deletes bytesUsed=" + flushControl.getDeleteBytesUsed() + " vs ramBuffer=" + (1024*1024*ramBufferSizeMB));
       }
       applyAllDeletes(deleteQueue);
     }
@@ -639,8 +690,9 @@
     return actualFlushes > 0;
   }
   
-  void finishFlush(FlushedSegment newSegment,
-      FrozenBufferedDeletes bufferedDeletes) throws IOException {
+
+  void finishFlush(FlushedSegment newSegment, FrozenBufferedDeletes bufferedDeletes)
+      throws IOException {
     // Finish the flushed segment and publish it to IndexWriter
     if (newSegment == null) {
       assert bufferedDeletes != null;
@@ -648,15 +700,14 @@
           && (bufferedDeletes.anyDeletes() || bufferedDeletes.anyUpdates())) {
         indexWriter.publishFrozenDeletes(bufferedDeletes);
         if (infoStream.isEnabled("DW")) {
-          infoStream.message("DW", "flush: push buffered deletes: "
-              + bufferedDeletes);
+          infoStream.message("DW", "flush: push buffered deletes: " + bufferedDeletes);
         }
       }
     } else {
-      publishFlushedSegment(newSegment, bufferedDeletes);
+      publishFlushedSegment(newSegment, bufferedDeletes);  
     }
   }
-  
+
   final void subtractFlushedNumDocs(int numFlushed) {
     int oldValue = numDocsInRAM.get();
     while (!numDocsInRAM.compareAndSet(oldValue, oldValue - numFlushed)) {
@@ -666,62 +717,55 @@
   
   /**
    * Publishes the flushed segment, segment private deletes (if any) and its
-   * associated global delete (if present) to IndexWriter. The actual publishing
-   * operation is synced on IW -> BDS so that the {@link SegmentInfo}'s delete
-   * generation is always GlobalPacket_deleteGeneration + 1
+   * associated global delete (if present) to IndexWriter.  The actual
+   * publishing operation is synced on IW -> BDS so that the {@link SegmentInfo}'s
+   * delete generation is always GlobalPacket_deleteGeneration + 1
    */
-  private void publishFlushedSegment(FlushedSegment newSegment,
-      FrozenBufferedDeletes globalPacket) throws IOException {
+  private void publishFlushedSegment(FlushedSegment newSegment, FrozenBufferedDeletes globalPacket)
+      throws IOException {
     assert newSegment != null;
     assert newSegment.segmentInfo != null;
     final FrozenBufferedDeletes segmentDeletes = newSegment.segmentDeletes;
-    // System.out.println("FLUSH: " + newSegment.segmentInfo.info.name);
+    //System.out.println("FLUSH: " + newSegment.segmentInfo.info.name);
     if (infoStream.isEnabled("DW")) {
-      infoStream.message("DW", "publishFlushedSegment seg-private deletes="
-          + segmentDeletes);
+      infoStream.message("DW", "publishFlushedSegment seg-private deletes=" + segmentDeletes);  
     }
+    
     if (segmentDeletes != null && infoStream.isEnabled("DW")) {
-      infoStream.message("DW", "flush: push buffered seg private deletes: "
-          + segmentDeletes);
+      infoStream.message("DW", "flush: push buffered seg private deletes: " + segmentDeletes);
     }
     // now publish!
-    indexWriter.publishFlushedSegment(newSegment.segmentInfo, segmentDeletes,
-        globalPacket);
+    indexWriter.publishFlushedSegment(newSegment.segmentInfo, segmentDeletes, globalPacket);
   }
   
   // for asserts
   private volatile DocumentsWriterDeleteQueue currentFullFlushDelQueue = null;
-  
+
   // for asserts
-  private synchronized boolean setFlushingDeleteQueue(
-      DocumentsWriterDeleteQueue session) {
+  private synchronized boolean setFlushingDeleteQueue(DocumentsWriterDeleteQueue session) {
     currentFullFlushDelQueue = session;
     return true;
   }
   
   /*
    * FlushAllThreads is synced by IW fullFlushLock. Flushing all threads is a
-   * two stage operation; the caller must ensure (in try/finally) that
-   * finishFlush is called after this method, to release the flush lock in
-   * DWFlushControl
+   * two stage operation; the caller must ensure (in try/finally) that finishFlush
+   * is called after this method, to release the flush lock in DWFlushControl
    */
-  final boolean flushAllThreads() throws IOException {
+  final boolean flushAllThreads()
+    throws IOException {
     final DocumentsWriterDeleteQueue flushingDeleteQueue;
     if (infoStream.isEnabled("DW")) {
-      infoStream.message("DW", Thread.currentThread().getName()
-          + " startFullFlush");
+      infoStream.message("DW", Thread.currentThread().getName() + " startFullFlush");
     }
     
     synchronized (this) {
       pendingChangesInCurrentFullFlush = anyChanges();
       flushingDeleteQueue = deleteQueue;
-      /*
-       * Cutover to a new delete queue. This must be synced on the flush control
+      /* Cutover to a new delete queue.  This must be synced on the flush control
        * otherwise a new DWPT could sneak into the loop with an already flushing
-       * delete queue
-       */
-      flushControl.markForFullFlush(); // swaps the delQueue synced on
-                                       // FlushControl
+       * delete queue */
+      flushControl.markForFullFlush(); // swaps the delQueue synced on FlushControl
       assert setFlushingDeleteQueue(flushingDeleteQueue);
     }
     assert currentFullFlushDelQueue != null;
@@ -735,15 +779,10 @@
         anythingFlushed |= doFlush(flushingDWPT);
       }
       // If a concurrent flush is still in flight wait for it
-      flushControl.waitForFlush();
-      if (!anythingFlushed && flushingDeleteQueue.anyChanges()) { // apply
-                                                                  // deletes if
-                                                                  // we did not
-                                                                  // flush any
-                                                                  // document
+      flushControl.waitForFlush();  
+      if (!anythingFlushed && flushingDeleteQueue.anyChanges()) { // apply deletes if we did not flush any document
         if (infoStream.isEnabled("DW")) {
-          infoStream.message("DW", Thread.currentThread().getName()
-              + ": flush naked frozen global deletes");
+          infoStream.message("DW", Thread.currentThread().getName() + ": flush naked frozen global deletes");
         }
         ticketQueue.addDeletesAndPurge(this, flushingDeleteQueue);
       } else {
@@ -759,8 +798,7 @@
   final void finishFullFlush(boolean success) {
     try {
       if (infoStream.isEnabled("DW")) {
-        infoStream.message("DW", Thread.currentThread().getName()
-            + " finishFullFlush success=" + success);
+        infoStream.message("DW", Thread.currentThread().getName() + " finishFullFlush success=" + success);
       }
       assert setFlushingDeleteQueue(null);
       if (success) {
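
The new lockAndAbortAll/unlockAllAfterAbortAll pair above follows a lock-everything, roll-back-on-failure pattern. A standalone sketch of that pattern (plain ReentrantLocks instead of Lucene's ThreadState, hypothetical class name):

import java.util.concurrent.locks.ReentrantLock;

final class LockAllSketch {
  private final ReentrantLock[] states = { new ReentrantLock(), new ReentrantLock() };

  void lockAndAbortAll() {
    boolean success = false;
    try {
      for (ReentrantLock state : states) {
        state.lock();               // take every state before doing the abort work
        // ... per-state abort work would run here ...
      }
      success = true;
    } finally {
      if (!success) {
        unlockAllAfterAbortAll();   // roll back any locks taken before the failure
      }
    }
  }

  void unlockAllAfterAbortAll() {
    for (ReentrantLock state : states) {
      if (state.isHeldByCurrentThread()) {
        state.unlock();             // only release what this thread actually holds
      }
    }
  }

  public static void main(String[] args) {
    LockAllSketch sketch = new LockAllSketch();
    sketch.lockAndAbortAll();          // on success the locks stay held...
    sketch.unlockAllAfterAbortAll();   // ...until the caller releases them explicitly
  }
}
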
diff --git a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
index 817d08f..cef4410 100644
--- a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
+++ b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterFlushControl.java
@@ -240,6 +240,7 @@
   }
   
   public synchronized void waitForFlush() {
+    assert !Thread.holdsLock(this.documentsWriter.indexWriter) : "IW lock should never be held when waiting on flush";
     while (flushingWriters.size() != 0) {
       try {
         this.wait();
@@ -606,9 +607,10 @@
       for (DocumentsWriterPerThread dwpt : flushQueue) {
         try {
           dwpt.abort();
-          doAfterFlush(dwpt);
         } catch (Throwable ex) {
           // ignore - keep on aborting the flush queue
+        } finally {
+          doAfterFlush(dwpt);
         }
       }
       for (BlockedFlush blockedFlush : blockedFlushes) {
@@ -616,9 +618,10 @@
           flushingWriters
               .put(blockedFlush.dwpt, Long.valueOf(blockedFlush.bytes));
           blockedFlush.dwpt.abort();
-          doAfterFlush(blockedFlush.dwpt);
         } catch (Throwable ex) {
           // ignore - keep on aborting the blocked queue
+        } finally {
+          doAfterFlush(blockedFlush.dwpt);
         }
       }
     } finally {
diff --git a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThreadPool.java b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThreadPool.java
index a7a208f..c2b9123 100644
--- a/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThreadPool.java
+++ b/lucene/core/src/java/org/apache/lucene/index/DocumentsWriterPerThreadPool.java
@@ -274,7 +274,7 @@
    *         given ord.
    */
   ThreadState getThreadState(int ord) {
-    assert ord < numThreadStatesActive;
+    //assert ord < numThreadStatesActive;
     return threadStates[ord];
   }
 
diff --git a/lucene/core/src/java/org/apache/lucene/index/FieldsUpdate.java b/lucene/core/src/java/org/apache/lucene/index/FieldsUpdate.java
index 87531dd..f0a1a6c 100644
--- a/lucene/core/src/java/org/apache/lucene/index/FieldsUpdate.java
+++ b/lucene/core/src/java/org/apache/lucene/index/FieldsUpdate.java
@@ -47,7 +47,7 @@
   final Set<String> replacedFields;
   final Analyzer analyzer;
   final int docIdUpto;
-  final long timeStamp;
+  final int updateNumber;
 
   IndexDocument fields;
   Directory directory;
@@ -64,11 +64,13 @@
    *          The fields to use in the update operation.
    * @param analyzer
    *          The analyzer to use in the update.
-   * @param docIDUpto
-   *          Document ID of the last document added before this field update
+   * @param docIdUpto
+   *          The doc ID of the last document added before this update.
+   * @param updateNumber
+   *          The running number of this update for the current segment.
    */
   public FieldsUpdate(Term term, Operation operation, IndexDocument fields,
-      Analyzer analyzer, int docIDUpto, long timeStamp) {
+      Analyzer analyzer, int docIdUpto, int updateNumber) {
     this.term = term;
     this.fields = fields;
     this.operation = operation;
@@ -84,8 +86,8 @@
       }
     }
     this.analyzer = analyzer;
-    this.docIdUpto = docIDUpto;
-    this.timeStamp = timeStamp;
+    this.docIdUpto = docIdUpto;
+    this.updateNumber = updateNumber;
   }
   
   /**
@@ -100,23 +102,20 @@
     this.replacedFields = other.replacedFields;
     this.analyzer = other.analyzer;
     this.docIdUpto = other.docIdUpto;
-    this.timeStamp = other.timeStamp;
+    this.updateNumber = other.updateNumber;
     this.directory = other.directory;
     this.segmentInfo = other.segmentInfo;
   }
   
-  /* Order FrieldsUpdate by increasing docIDUpto */
   @Override
   public int compareTo(FieldsUpdate other) {
-    int diff = this.docIdUpto - other.docIdUpto;
-    if (diff == 0) {
-      if (this.timeStamp < other.timeStamp) {
-        return -1;
-      } else if (this.timeStamp > other.timeStamp) {
-        return 1;
-      }
-    }
-    return diff;
+    return this.updateNumber - other.updateNumber;
   }
-  
+
+  @Override
+  public String toString() {
+    return "FieldsUpdate [term=" + term + ", operation=" + operation
+        + ", docIdUpto=" + docIdUpto + ", updateNumber=" + updateNumber + "]";
+  }
+
 }
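
Replacing the timeStamp tiebreak with a per-segment updateNumber, as in the FieldsUpdate change above, gives updates a total order that stays stable even when two updates share a docIdUpto or land in the same millisecond. A minimal sketch (hypothetical class, not the patch) of that ordering; Integer.compare is used here to sidestep the overflow risk of a raw subtraction, though with a per-segment counter that risk is largely theoretical.

import java.util.TreeSet;

final class OrderedUpdateSketch implements Comparable<OrderedUpdateSketch> {
  final int docIdUpto;
  final int updateNumber;   // running number handed out per segment

  OrderedUpdateSketch(int docIdUpto, int updateNumber) {
    this.docIdUpto = docIdUpto;
    this.updateNumber = updateNumber;
  }

  @Override
  public int compareTo(OrderedUpdateSketch other) {
    // Integer.compare avoids overflow, unlike a raw this.updateNumber - other.updateNumber.
    return Integer.compare(this.updateNumber, other.updateNumber);
  }

  @Override
  public String toString() {
    return "#" + updateNumber + "(docIdUpto=" + docIdUpto + ")";
  }

  public static void main(String[] args) {
    TreeSet<OrderedUpdateSketch> updates = new TreeSet<>();
    updates.add(new OrderedUpdateSketch(4, 2));
    updates.add(new OrderedUpdateSketch(4, 1));   // same docIdUpto, still totally ordered
    System.out.println(updates);                  // [#1(docIdUpto=4), #2(docIdUpto=4)]
  }
}
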
diff --git a/lucene/core/src/java/org/apache/lucene/index/FreqProxTermsWriterPerField.java b/lucene/core/src/java/org/apache/lucene/index/FreqProxTermsWriterPerField.java
index 8c4aff3..70d476f 100644
--- a/lucene/core/src/java/org/apache/lucene/index/FreqProxTermsWriterPerField.java
+++ b/lucene/core/src/java/org/apache/lucene/index/FreqProxTermsWriterPerField.java
@@ -359,12 +359,12 @@
     assert !writeOffsets || writePositions;
 
     final Map<Term,Integer> segDeletes;
-    if (state.segDeletes != null && state.segDeletes.terms.size() > 0) {
+    if (state.hasDeletesWithoutUpdates() && state.segDeletes.terms.size() > 0) {
       segDeletes = state.segDeletes.terms;
     } else {
       segDeletes = null;
     }
-    
+
     final int[] termIDs = termsHashPerField.sortPostings(termComp);
     final int numTerms = termsHashPerField.bytesHash.size();
     final BytesRef text = new BytesRef();
@@ -476,7 +476,7 @@
           if (state.liveDocs == null) {
             state.liveDocs = docState.docWriter.codec.liveDocsFormat().newLiveDocs(state.segmentInfo.getDocCount());
           }
-          if (state.liveDocs.get(docID)) {
+          if (state.hasDeletesWithoutUpdates() && state.liveDocs.get(docID)) {
             state.delCountOnFlush++;
             state.liveDocs.clear(docID);
           }
diff --git a/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedDeletes.java b/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedDeletes.java
index 450cae0..1a74691 100644
--- a/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedDeletes.java
+++ b/lucene/core/src/java/org/apache/lucene/index/FrozenBufferedDeletes.java
@@ -55,13 +55,17 @@
                                   // a segment private deletes. in that case it should
                                    // only have Queries 
   
-  // An sorted set of updates
+  // A sorted set of updates
   final SortedSet<FieldsUpdate> allUpdates;
 
-  public FrozenBufferedDeletes(BufferedDeletes deletes, BufferedUpdates updates, boolean isSegmentPrivate) {
+  public FrozenBufferedDeletes(BufferedDeletes deletes,
+      BufferedUpdates updates, boolean isSegmentPrivate) {
     this.isSegmentPrivate = isSegmentPrivate;
     int localBytesUsed = 0;
+
+    // freeze deletes
     if (deletes != null) {
+      // arrange terms and queries in arrays
       assert !isSegmentPrivate || deletes.terms.size() == 0 : "segment private packet should only have del queries";
       Term termsArray[] = deletes.terms.keySet().toArray(
           new Term[deletes.terms.size()]);
@@ -97,10 +101,10 @@
       allUpdates = null;
     } else {
       allUpdates = new TreeSet<>();
-      for (SortedSet<FieldsUpdate> list : updates.terms.values()) {
+      for (List<FieldsUpdate> list : updates.terms.values()) {
         allUpdates.addAll(list);
       }
-      localBytesUsed += 100;
+      localBytesUsed += updates.bytesUsed.get();
     }
     
     bytesUsed = localBytesUsed;
diff --git a/lucene/core/src/java/org/apache/lucene/index/IndexFileNames.java b/lucene/core/src/java/org/apache/lucene/index/IndexFileNames.java
index a67b51a..5bd51d5 100644
--- a/lucene/core/src/java/org/apache/lucene/index/IndexFileNames.java
+++ b/lucene/core/src/java/org/apache/lucene/index/IndexFileNames.java
@@ -242,7 +242,7 @@
    * All files created by codecs must match this pattern (checked in
    * SegmentInfo).
    */
-  public static final Pattern CODEC_FILE_PATTERN = Pattern.compile("_[a-z0-9]+(_.*)?\\..*");
+  public static final Pattern CODEC_FILE_PATTERN = Pattern.compile("_[_]?[a-z0-9]+(_.*)?\\..*");
 
   /** Returns true if the file denotes an updated segment. */
   public static boolean isUpdatedSegmentFile(String file) {
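
A quick illustration of the widened CODEC_FILE_PATTERN (the file names below are made up for the example): the extra optional underscore lets names beginning with a double underscore, presumably the updated-segment files checked by isUpdatedSegmentFile, match the pattern, while ordinary codec files still match as before.

import java.util.regex.Pattern;

final class CodecFilePatternSketch {
  static final Pattern CODEC_FILE_PATTERN =
      Pattern.compile("_[_]?[a-z0-9]+(_.*)?\\..*");

  public static void main(String[] args) {
    System.out.println(CODEC_FILE_PATTERN.matcher("_0.fdt").matches());     // true: regular codec file
    System.out.println(CODEC_FILE_PATTERN.matcher("__0_1.fdt").matches());  // true now, false with the old pattern
    System.out.println(CODEC_FILE_PATTERN.matcher("segments_1").matches()); // false: not a per-segment codec file
  }
}
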
diff --git a/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java b/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
index 92ef81a..7d57030 100644
--- a/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
@@ -31,6 +31,7 @@
 import java.util.Locale;
 import java.util.Map;
 import java.util.Set;
+import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 
 import org.apache.lucene.analysis.Analyzer;
@@ -57,133 +58,132 @@
 import org.apache.lucene.util.ThreadInterruptedException;
 
 /**
- * An <code>IndexWriter</code> creates and maintains an index.
- * 
- * <p>
- * The {@link OpenMode} option on
- * {@link IndexWriterConfig#setOpenMode(OpenMode)} determines whether a new
- * index is created, or whether an existing index is opened. Note that you can
- * open an index with {@link OpenMode#CREATE} even while readers are using the
- * index. The old readers will continue to search the "point in time" snapshot
- * they had opened, and won't see the newly created index until they re-open. If
- * {@link OpenMode#CREATE_OR_APPEND} is used IndexWriter will create a new index
- * if there is not already an index at the provided path and otherwise open the
- * existing index.
- * </p>
- * 
- * <p>
- * In either case, documents are added with {@link #addDocument(IndexDocument)
- * addDocument} and removed with {@link #deleteDocuments(Term)} or
- * {@link #deleteDocuments(Query)}. A document can be updated with
- * {@link #updateDocument(Term, IndexDocument) updateDocument} (which just
- * deletes and then adds the entire document). When finished adding, deleting
- * and updating documents, {@link #close() close} should be called.
- * </p>
- * 
- * <a name="flush"></a>
- * <p>
- * These changes are buffered in memory and periodically flushed to the
- * {@link Directory} (during the above method calls). A flush is triggered when
- * there are enough added documents since the last flush. Flushing is triggered
- * either by RAM usage of the documents (see
- * {@link IndexWriterConfig#setRAMBufferSizeMB}) or the number of added
- * documents (see {@link IndexWriterConfig#setMaxBufferedDocs(int)}). The
- * default is to flush when RAM usage hits
- * {@link IndexWriterConfig#DEFAULT_RAM_BUFFER_SIZE_MB} MB. For best indexing
- * speed you should flush by RAM usage with a large RAM buffer. Additionally, if
- * IndexWriter reaches the configured number of buffered deletes (see
- * {@link IndexWriterConfig#setMaxBufferedDeleteTerms}) the deleted terms and
- * queries are flushed and applied to existing segments. In contrast to the
- * other flush options {@link IndexWriterConfig#setRAMBufferSizeMB} and
- * {@link IndexWriterConfig#setMaxBufferedDocs(int)}, deleted terms won't
- * trigger a segment flush. Note that flushing just moves the internal buffered
- * state in IndexWriter into the index, but these changes are not visible to
- * IndexReader until either {@link #commit()} or {@link #close} is called. A
- * flush may also trigger one or more segment merges which by default run with a
- * background thread so as not to block the addDocument calls (see <a
- * href="#mergePolicy">below</a> for changing the {@link MergeScheduler}).
- * </p>
- * 
- * <p>
- * Opening an <code>IndexWriter</code> creates a lock file for the directory in
- * use. Trying to open another <code>IndexWriter</code> on the same directory
- * will lead to a {@link LockObtainFailedException}. The
- * {@link LockObtainFailedException} is also thrown if an IndexReader on the
- * same directory is used to delete documents from the index.
- * </p>
- * 
- * <a name="deletionPolicy"></a>
- * <p>
- * Expert: <code>IndexWriter</code> allows an optional
- * {@link IndexDeletionPolicy} implementation to be specified. You can use this
- * to control when prior commits are deleted from the index. The default policy
- * is {@link KeepOnlyLastCommitDeletionPolicy} which removes all prior commits
- * as soon as a new commit is done (this matches behavior before 2.2). Creating
- * your own policy can allow you to explicitly keep previous "point in time"
- * commits alive in the index for some time, to allow readers to refresh to the
- * new commit without having the old commit deleted out from under them. This is
- * necessary on filesystems like NFS that do not support "delete on last
- * close" semantics, which Lucene's "point in time" search normally relies on.
- * </p>
- * 
- * <a name="mergePolicy"></a>
- * <p>
- * Expert: <code>IndexWriter</code> allows you to separately change the
- * {@link MergePolicy} and the {@link MergeScheduler}. The {@link MergePolicy}
- * is invoked whenever there are changes to the segments in the index. Its role
- * is to select which merges to do, if any, and return a
- * {@link MergePolicy.MergeSpecification} describing the merges. The default is
- * {@link LogByteSizeMergePolicy}. Then, the {@link MergeScheduler} is invoked
- * with the requested merges and it decides when and how to run the merges. The
- * default is {@link ConcurrentMergeScheduler}.
- * </p>
- * 
- * <a name="OOME"></a>
- * <p>
- * <b>NOTE</b>: if you hit an OutOfMemoryError then IndexWriter will quietly
- * record this fact and block all future segment commits. This is a defensive
- * measure in case any internal state (buffered documents and deletions) were
- * corrupted. Any subsequent calls to {@link #commit()} will throw an
- * IllegalStateException. The only course of action is to call {@link #close()},
- * which internally will call {@link #rollback()}, to undo any changes to the
- * index since the last commit. You can also just call {@link #rollback()}
- * directly.
- * </p>
- * 
- * <a name="thread-safety"></a>
- * <p>
- * <b>NOTE</b>: {@link IndexWriter} instances are completely thread safe,
- * meaning multiple threads can call any of its methods, concurrently. If your
- * application requires external synchronization, you should <b>not</b>
- * synchronize on the <code>IndexWriter</code> instance as this may cause
- * deadlock; use your own (non-Lucene) objects instead.
- * </p>
- * 
- * <p>
- * <b>NOTE</b>: If you call <code>Thread.interrupt()</code> on a thread that's
- * within IndexWriter, IndexWriter will try to catch this (eg, if it's in a
- * wait() or Thread.sleep()), and will then throw the unchecked exception
- * {@link ThreadInterruptedException} and <b>clear</b> the interrupt status on
- * the thread.
- * </p>
- */
+  An <code>IndexWriter</code> creates and maintains an index.
+
+  <p>The {@link OpenMode} option on 
+  {@link IndexWriterConfig#setOpenMode(OpenMode)} determines 
+  whether a new index is created, or whether an existing index is
+  opened. Note that you can open an index with {@link OpenMode#CREATE}
+  even while readers are using the index. The old readers will 
+  continue to search the "point in time" snapshot they had opened, 
+  and won't see the newly created index until they re-open. If 
+  {@link OpenMode#CREATE_OR_APPEND} is used IndexWriter will create a 
+  new index if there is not already an index at the provided path
+  and otherwise open the existing index.</p>
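+
+  <p>As a minimal sketch of the above (the directory location, analyzer and
+  {@code Version} constant are illustrative assumptions, and imports are
+  omitted):</p>
+
+  <pre class="prettyprint">
+  Directory dir = FSDirectory.open(new File("/path/to/index"));   // assumed location
+  IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_CURRENT,
+      new StandardAnalyzer(Version.LUCENE_CURRENT));
+  conf.setOpenMode(OpenMode.CREATE_OR_APPEND);  // create if missing, else append
+  IndexWriter writer = new IndexWriter(dir, conf);
+  </pre>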
+
+  <p>In either case, documents are added with {@link #addDocument(IndexDocument)
+  addDocument} and removed with {@link #deleteDocuments(Term)} or {@link
+  #deleteDocuments(Query)}. A document can be updated with {@link
+  #updateDocument(Term, IndexDocument) updateDocument} (which just deletes
+  and then adds the entire document). When finished adding, deleting 
+  and updating documents, {@link #close() close} should be called.</p>
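+
+  <p>For example (a sketch only: the field names and values are made up, and
+  <code>Document</code> is assumed to be the {@link IndexDocument}
+  implementation in use):</p>
+
+  <pre class="prettyprint">
+  Document doc = new Document();
+  doc.add(new StringField("id", "42", Field.Store.YES));
+  doc.add(new TextField("body", "hello world", Field.Store.NO));
+  writer.addDocument(doc);                           // add a new document
+  writer.updateDocument(new Term("id", "42"), doc);  // delete-then-add by unique id
+  writer.deleteDocuments(new Term("id", "42"));      // delete by term
+  writer.close();                                    // flush, commit and release the write lock
+  </pre>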
+
+  <a name="flush"></a>
+  <p>These changes are buffered in memory and periodically
+  flushed to the {@link Directory} (during the above method
+  calls). A flush is triggered either by RAM usage of the
+  documents (see {@link IndexWriterConfig#setRAMBufferSizeMB}) or the
+  number of added documents (see {@link IndexWriterConfig#setMaxBufferedDocs(int)}).
+  The default is to flush when RAM usage hits
+  {@link IndexWriterConfig#DEFAULT_RAM_BUFFER_SIZE_MB} MB. For
+  best indexing speed you should flush by RAM usage with a
+  large RAM buffer. Additionally, if IndexWriter reaches the configured number of
+  buffered deletes (see {@link IndexWriterConfig#setMaxBufferedDeleteTerms})
+  the deleted terms and queries are flushed and applied to existing segments.
+  In contrast to the other flush options {@link IndexWriterConfig#setRAMBufferSizeMB} and 
+  {@link IndexWriterConfig#setMaxBufferedDocs(int)}, deleted terms
+  won't trigger a segment flush. Note that flushing just moves the
+  internal buffered state in IndexWriter into the index, but
+  these changes are not visible to IndexReader until either
+  {@link #commit()} or {@link #close} is called.  A flush may
+  also trigger one or more segment merges which by default
+  run with a background thread so as not to block the
+  addDocument calls (see <a href="#mergePolicy">below</a>
+  for changing the {@link MergeScheduler}).</p>
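+
+  <p>A possible flush configuration (the <code>analyzer</code> variable and the
+  concrete numbers are illustrative assumptions, not recommendations):</p>
+
+  <pre class="prettyprint">
+  IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_CURRENT, analyzer);
+  conf.setRAMBufferSizeMB(64.0);                                  // flush once ~64 MB of docs are buffered
+  conf.setMaxBufferedDocs(IndexWriterConfig.DISABLE_AUTO_FLUSH);  // don't flush by doc count
+  conf.setMaxBufferedDeleteTerms(1000);                           // apply deletes after 1000 buffered terms
+  </pre>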
+
+  <p>Opening an <code>IndexWriter</code> creates a lock file for the directory in use. Trying to open
+  another <code>IndexWriter</code> on the same directory will lead to a
+  {@link LockObtainFailedException}. The {@link LockObtainFailedException}
+  is also thrown if an IndexReader on the same directory is used to delete documents
+  from the index.</p>
+  
+  <a name="deletionPolicy"></a>
+  <p>Expert: <code>IndexWriter</code> allows an optional
+  {@link IndexDeletionPolicy} implementation to be
+  specified.  You can use this to control when prior commits
+  are deleted from the index.  The default policy is {@link
+  KeepOnlyLastCommitDeletionPolicy} which removes all prior
+  commits as soon as a new commit is done (this matches
+  behavior before 2.2).  Creating your own policy can allow
+  you to explicitly keep previous "point in time" commits
+  alive in the index for some time, to allow readers to
+  refresh to the new commit without having the old commit
+  deleted out from under them.  This is necessary on
+  filesystems like NFS that do not support "delete on last
+  close" semantics, which Lucene's "point in time" search
+  normally relies on. </p>
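+
+  <p>A custom policy is installed through the config; the sketch below uses
+  {@link NoDeletionPolicy} only to show the wiring, and any
+  {@link IndexDeletionPolicy} implementation is set the same way:</p>
+
+  <pre class="prettyprint">
+  IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_CURRENT, analyzer);
+  conf.setIndexDeletionPolicy(NoDeletionPolicy.INSTANCE);  // keep every commit point
+  IndexWriter writer = new IndexWriter(dir, conf);
+  </pre>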
+
+  <a name="mergePolicy"></a> <p>Expert:
+  <code>IndexWriter</code> allows you to separately change
+  the {@link MergePolicy} and the {@link MergeScheduler}.
+  The {@link MergePolicy} is invoked whenever there are
+  changes to the segments in the index.  Its role is to
+  select which merges to do, if any, and return a {@link
+  MergePolicy.MergeSpecification} describing the merges.
+  The default is {@link LogByteSizeMergePolicy}.  Then, the {@link
+  MergeScheduler} is invoked with the requested merges and
+  it decides when and how to run the merges.  The default is
+  {@link ConcurrentMergeScheduler}. </p>
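+
+  <p>Both components are configured on {@link IndexWriterConfig}; the choices
+  below simply restate the defaults as a sketch:</p>
+
+  <pre class="prettyprint">
+  IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_CURRENT, analyzer);
+  conf.setMergePolicy(new LogByteSizeMergePolicy());       // decides which merges to do
+  conf.setMergeScheduler(new ConcurrentMergeScheduler());  // decides when and how to run them
+  </pre>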
+
+  <a name="OOME"></a><p><b>NOTE</b>: if you hit an
+  OutOfMemoryError then IndexWriter will quietly record this
+  fact and block all future segment commits.  This is a
+  defensive measure in case any internal state (buffered
+  documents and deletions) was corrupted.  Any subsequent
+  calls to {@link #commit()} will throw an
+  IllegalStateException.  The only course of action is to
+  call {@link #close()}, which internally will call {@link
+  #rollback()}, to undo any changes to the index since the
+  last commit.  You can also just call {@link #rollback()}
+  directly.</p>
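+
+  <p>A defensive pattern along these lines (sketch only, assuming the enclosing
+  code already handles IOException):</p>
+
+  <pre class="prettyprint">
+  try {
+    writer.addDocument(doc);
+  } catch (OutOfMemoryError oom) {
+    writer.close();   // internally calls rollback(), dropping changes since the last commit
+    throw oom;
+  }
+  </pre>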
+
+  <a name="thread-safety"></a><p><b>NOTE</b>: {@link
+  IndexWriter} instances are completely thread
+  safe, meaning multiple threads can call any of its
+  methods, concurrently.  If your application requires
+  external synchronization, you should <b>not</b>
+  synchronize on the <code>IndexWriter</code> instance as
+  this may cause deadlock; use your own (non-Lucene) objects
+  instead. </p>
+  
+  <p><b>NOTE</b>: If you call
+  <code>Thread.interrupt()</code> on a thread that's within
+  IndexWriter, IndexWriter will try to catch this (eg, if
+  it's in a wait() or Thread.sleep()), and will then throw
+  the unchecked exception {@link ThreadInterruptedException}
+  and <b>clear</b> the interrupt status on the thread.</p>
+*/
 
 /*
- * Clarification: Check Points (and commits) IndexWriter writes new index files
- * to the directory without writing a new segments_N file which references these
- * new files. It also means that the state of the in memory SegmentInfos object
- * is different than the most recent segments_N file written to the directory.
- * 
- * Each time the SegmentInfos is changed, and matches the (possibly modified)
- * directory files, we have a new "check point". If the modified/new
- * SegmentInfos is written to disk - as a new (generation of) segments_N file -
- * this check point is also an IndexCommit.
- * 
- * A new checkpoint always replaces the previous checkpoint and becomes the new
- * "front" of the index. This allows the IndexFileDeleter to delete files that
- * are referenced only by stale checkpoints. (files that were created since the
- * last commit, but are no longer referenced by the "front" of the index). For
- * this, IndexFileDeleter keeps track of the last non commit checkpoint.
+ * Clarification: Check Points (and commits)
+ * IndexWriter writes new index files to the directory without writing a new segments_N
+ * file which references these new files. It also means that the state of
+ * the in memory SegmentInfos object is different than the most recent
+ * segments_N file written to the directory.
+ *
+ * Each time the SegmentInfos is changed, and matches the (possibly
+ * modified) directory files, we have a new "check point".
+ * If the modified/new SegmentInfos is written to disk - as a new
+ * (generation of) segments_N file - this check point is also an
+ * IndexCommit.
+ *
+ * A new checkpoint always replaces the previous checkpoint and
+ * becomes the new "front" of the index. This allows the IndexFileDeleter
+ * to delete files that are referenced only by stale checkpoints.
+ * (files that were created since the last commit, but are no longer
+ * referenced by the "front" of the index). For this, IndexFileDeleter
+ * keeps track of the last non commit checkpoint.
  */
 public class IndexWriter implements Closeable, TwoPhaseCommit {
   
@@ -193,7 +193,7 @@
    * Name of the write lock in the index.
    */
   public static final String WRITE_LOCK_NAME = "write.lock";
-  
+
   /** Key for the source of a segment in the {@link SegmentInfo#getDiagnostics() diagnostics}. */
   public static final String SOURCE = "source";
   /** Source of a segment which results from a merge of other segments. */
@@ -204,50 +204,50 @@
   public static final String SOURCE_ADDINDEXES_READERS = "addIndexes(IndexReader...)";
 
   /**
-   * Absolute hard maximum length for a term, in bytes once encoded as UTF8. If
-   * a term arrives from the analyzer longer than this length, it is skipped and
-   * a message is printed to infoStream, if set (see
-   * {@link IndexWriterConfig#setInfoStream(InfoStream)}).
+   * Absolute hard maximum length for a term, in bytes once
+   * encoded as UTF8.  If a term arrives from the analyzer
+   * longer than this length, it is skipped and a message is
+   * printed to infoStream, if set (see {@link
+   * IndexWriterConfig#setInfoStream(InfoStream)}).
    */
   public final static int MAX_TERM_LENGTH = DocumentsWriterPerThread.MAX_TERM_LENGTH_UTF8;
   volatile private boolean hitOOM;
-  
-  private final Directory directory; // where this index resides
-  private final Analyzer analyzer; // how to analyze text
-  
-  private volatile long changeCount; // increments every time a change is
-                                     // completed
+
+  private final Directory directory;  // where this index resides
+  private final Analyzer analyzer;    // how to analyze text
+
+  private volatile long changeCount; // increments every time a change is completed
   private long lastCommitChangeCount; // last changeCount that was committed
-  
-  private List<SegmentInfoPerCommit> rollbackSegments; // list of segmentInfo we
-                                                       // will fallback to if
-                                                       // the commit fails
-  
-  volatile SegmentInfos pendingCommit; // set when a commit is pending (after
-                                       // prepareCommit() & before commit())
+
+  private List<SegmentInfoPerCommit> rollbackSegments;      // list of segmentInfo we will fallback to if the commit fails
+
+  volatile SegmentInfos pendingCommit;            // set when a commit is pending (after prepareCommit() & before commit())
   volatile long pendingCommitChangeCount;
+
+  volatile AtomicBoolean deletesPending; // set when there are pending deletes
+                                         // to be flushed before adding updates
   
   private Collection<String> filesToCommit;
-  
-  final SegmentInfos segmentInfos; // the segments
+
+  final SegmentInfos segmentInfos;       // the segments
   final FieldNumbers globalFieldNumberMap;
-  
+
   private DocumentsWriter docWriter;
   final IndexFileDeleter deleter;
-  
+
   // used by forceMerge to note those needing merging
   private Map<SegmentInfoPerCommit,Boolean> segmentsToMerge = new HashMap<SegmentInfoPerCommit,Boolean>();
   private int mergeMaxNumSegments;
-  
+
   private Lock writeLock;
-  
+
   private volatile boolean closed;
   private volatile boolean closing;
-  
+
   // Holds all SegmentInfo instances currently involved in
   // merges
   private HashSet<SegmentInfoPerCommit> mergingSegments = new HashSet<SegmentInfoPerCommit>();
-  
+
   private MergePolicy mergePolicy;
   private final MergeScheduler mergeScheduler;
   private LinkedList<MergePolicy.OneMerge> pendingMerges = new LinkedList<MergePolicy.OneMerge>();
@@ -255,106 +255,95 @@
   private List<MergePolicy.OneMerge> mergeExceptions = new ArrayList<MergePolicy.OneMerge>();
   private long mergeGen;
   private boolean stopMerges;
-  
+
   final AtomicInteger flushCount = new AtomicInteger();
   final AtomicInteger flushDeletesCount = new AtomicInteger();
-  
+
   final ReaderPool readerPool = new ReaderPool();
   final BufferedDeletesStream bufferedDeletesStream;
-  
-  private boolean updatesPending;
-  
+
   // This is a "write once" variable (like the organic dye
   // on a DVD-R that may or may not be heated by a laser and
   // then cooled to permanently record the event): it's
   // false, until getReader() is called for the first time,
   // at which point it's switched to true and never changes
-  // back to false. Once this is true, we hold open and
+  // back to false.  Once this is true, we hold open and
   // reuse SegmentReader instances internally for applying
   // deletes, doing merges, and reopening near real-time
   // readers.
   private volatile boolean poolReaders;
-  
+
   // The instance that was passed to the constructor. It is saved only in order
   // to allow users to query an IndexWriter settings.
   private final LiveIndexWriterConfig config;
-  
+
   DirectoryReader getReader() throws IOException {
     return getReader(true);
   }
-  
+
   /**
-   * Expert: returns a readonly reader, covering all committed as well as
-   * un-committed changes to the index. This provides "near real-time"
-   * searching, in that changes made during an IndexWriter session can be
-   * quickly made available for searching without closing the writer nor calling
-   * {@link #commit}.
-   * 
-   * <p>
-   * Note that this is functionally equivalent to calling {#flush} and then
-   * opening a new reader. But the turnaround time of this method should be
-   * faster since it avoids the potentially costly {@link #commit}.
-   * </p>
-   * 
-   * <p>
-   * You must close the {@link IndexReader} returned by this method once you are
-   * done using it.
-   * </p>
-   * 
-   * <p>
-   * It's <i>near</i> real-time because there is no hard guarantee on how
-   * quickly you can get a new reader after making changes with IndexWriter.
-   * You'll have to experiment in your situation to determine if it's fast
-   * enough. As this is a new and experimental feature, please report back on
-   * your findings so we can learn, improve and iterate.
-   * </p>
-   * 
-   * <p>
-   * The resulting reader supports {@link DirectoryReader#openIfChanged}, but
-   * that call will simply forward back to this method (though this may change
-   * in the future).
-   * </p>
-   * 
-   * <p>
-   * The very first time this method is called, this writer instance will make
-   * every effort to pool the readers that it opens for doing merges, applying
-   * deletes, etc. This means additional resources (RAM, file descriptors, CPU
-   * time) will be consumed.
-   * </p>
-   * 
-   * <p>
-   * For lower latency on reopening a reader, you should call
-   * {@link IndexWriterConfig#setMergedSegmentWarmer} to pre-warm a newly merged
-   * segment before it's committed to the index. This is important for
-   * minimizing index-to-search delay after a large merge.
-   * </p>
-   * 
-   * <p>
-   * If an addIndexes* call is running in another thread, then this reader will
-   * only search those segments from the foreign index that have been
-   * successfully copied over, so far
-   * </p>
-   * .
-   * 
-   * <p>
-   * <b>NOTE</b>: Once the writer is closed, any outstanding readers may
-   * continue to be used. However, if you attempt to reopen any of those
-   * readers, you'll hit an {@link AlreadyClosedException}.
-   * </p>
-   * 
+   * Expert: returns a readonly reader, covering all
+   * committed as well as un-committed changes to the index.
+   * This provides "near real-time" searching, in that
+   * changes made during an IndexWriter session can be
+   * quickly made available for searching without closing
+   * the writer nor calling {@link #commit}.
+   *
+   * <p>Note that this is functionally equivalent to calling
+   * {@link #flush} and then opening a new reader.  But the turnaround time of this
+   * method should be faster since it avoids the potentially
+   * costly {@link #commit}.</p>
+   *
+   * <p>You must close the {@link IndexReader} returned by
+   * this method once you are done using it.</p>
+   *
+   * <p>It's <i>near</i> real-time because there is no hard
+   * guarantee on how quickly you can get a new reader after
+   * making changes with IndexWriter.  You'll have to
+   * experiment in your situation to determine if it's
+   * fast enough.  As this is a new and experimental
+   * feature, please report back on your findings so we can
+   * learn, improve and iterate.</p>
+   *
+   * <p>The resulting reader supports {@link
+   * DirectoryReader#openIfChanged}, but that call will simply forward
+   * back to this method (though this may change in the
+   * future).</p>
+   *
+   * <p>The very first time this method is called, this
+   * writer instance will make every effort to pool the
+   * readers that it opens for doing merges, applying
+   * deletes, etc.  This means additional resources (RAM,
+   * file descriptors, CPU time) will be consumed.</p>
+   *
+   * <p>For lower latency on reopening a reader, you should
+   * call {@link IndexWriterConfig#setMergedSegmentWarmer} to
+   * pre-warm a newly merged segment before it's committed
+   * to the index.  This is important for minimizing
+   * index-to-search delay after a large merge.  </p>
+   *
+   * <p>If an addIndexes* call is running in another thread,
+   * then this reader will only search those segments from
+   * the foreign index that have been successfully copied
+   * over, so far.</p>
+   *
+   * <p><b>NOTE</b>: Once the writer is closed, any
+   * outstanding readers may continue to be used.  However,
+   * if you attempt to reopen any of those readers, you'll
+   * hit an {@link AlreadyClosedException}.</p>
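+   *
+   * <p>A sketch of near real-time usage through the public API (which
+   * delegates to this method); the variable names are assumptions:</p>
+   *
+   * <pre class="prettyprint">
+   * DirectoryReader reader = DirectoryReader.open(writer, true); // NRT reader, deletes applied
+   * // ... index more documents with writer ...
+   * DirectoryReader newReader = DirectoryReader.openIfChanged(reader, writer, true);
+   * if (newReader != null) {
+   *   reader.close();
+   *   reader = newReader;
+   * }
+   * </pre>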
+   *
    * @lucene.experimental
-   * 
-   * @return IndexReader that covers entire index plus all changes made so far
-   *         by this IndexWriter instance
-   * 
-   * @throws IOException
-   *           If there is a low-level I/O error
+   *
+   * @return IndexReader that covers entire index plus all
+   * changes made so far by this IndexWriter instance
+   *
+   * @throws IOException If there is a low-level I/O error
    */
   DirectoryReader getReader(boolean applyAllDeletes) throws IOException {
     ensureOpen();
-    
+
     final long tStart = System.currentTimeMillis();
-    
+
     if (infoStream.isEnabled("IW")) {
       infoStream.message("IW", "flush at getReader");
     }
@@ -366,10 +355,11 @@
     doBeforeFlush();
     boolean anySegmentFlushed = false;
     /*
-     * for releasing a NRT reader we must ensure that DW doesn't add any
-     * segments or deletes until we are done with creating the NRT
-     * DirectoryReader. We release the two stage full flush after we are done
-     * opening the directory reader!
+     * for releasing a NRT reader we must ensure that 
+     * DW doesn't add any segments or deletes until we are
+     * done with creating the NRT DirectoryReader. 
+     * We release the two stage full flush after we are done opening the
+     * directory reader!
      */
     boolean success2 = false;
     try {
@@ -422,27 +412,26 @@
     }
     return r;
   }
-  
-  /**
-   * Holds shared SegmentReader instances. IndexWriter uses SegmentReaders for
-   * 1) applying deletes, 2) doing merges, 3) handing out a real-time reader.
-   * This pool reuses instances of the SegmentReaders in all these places if it
-   * is in "near real-time mode" (getReader() has been called on this instance).
-   */
-  
+
+  /** Holds shared SegmentReader instances. IndexWriter uses
+   *  SegmentReaders for 1) applying deletes, 2) doing
+   *  merges, 3) handing out a real-time reader.  This pool
+   *  reuses instances of the SegmentReaders in all these
+   *  places if it is in "near real-time mode" (getReader()
+   *  has been called on this instance). */
+
   class ReaderPool {
     
     private final Map<SegmentInfoPerCommit,ReadersAndLiveDocs> readerMap = new HashMap<SegmentInfoPerCommit,ReadersAndLiveDocs>();
-    
+
     // used only by asserts
     public synchronized boolean infoIsLive(SegmentInfoPerCommit info) {
       int idx = segmentInfos.indexOf(info);
-      assert idx != -1 : "info=" + info + " isn't live";
-      assert segmentInfos.info(idx) == info : "info=" + info
-          + " doesn't match live info in segmentInfos";
+      assert idx != -1: "info=" + info + " isn't live";
+      assert segmentInfos.info(idx) == info: "info=" + info + " doesn't match live info in segmentInfos";
       return true;
     }
-    
+
     public synchronized void drop(SegmentInfoPerCommit info) throws IOException {
       final ReadersAndLiveDocs rld = readerMap.get(info);
       if (rld != null) {
@@ -451,7 +440,7 @@
         rld.dropReaders();
       }
     }
-    
+
     public synchronized boolean anyPendingDeletes() {
       for(ReadersAndLiveDocs rld : readerMap.values()) {
         if (rld.getPendingDeleteCount() != 0) {
@@ -463,10 +452,10 @@
     }
 
     public synchronized void release(ReadersAndLiveDocs rld) throws IOException {
-      
+
       // Matches incRef in get:
       rld.decRef();
-      
+
       // Pool still holds a ref:
       assert rld.refCount() >= 1;
       
@@ -482,15 +471,14 @@
          // created new _X_N.del file.
           deleter.checkpoint(segmentInfos, false);
         }
-        
+
         rld.dropReaders();
         readerMap.remove(rld.info);
       }
     }
-    
-    /**
-     * Remove all our references to readers, and commits any pending changes.
-     */
+
+    /** Remove all our references to readers, and commit
+     *  any pending changes. */
     synchronized void dropAll(boolean doSave) throws IOException {
       Throwable priorE = null;
       final Iterator<Map.Entry<SegmentInfoPerCommit,ReadersAndLiveDocs>> it = readerMap.entrySet().iterator();
@@ -510,13 +498,13 @@
             priorE = t;
           }
         }
-        
+
         // Important to remove as-we-go, not with .clear()
         // in the end, in case we hit an exception;
         // otherwise we could over-decref if close() is
         // called again:
         it.remove();
-        
+
         // NOTE: it is allowed that these decRefs do not
         // actually close the SRs; this happens when a
         // near real-time reader is kept open after the
@@ -534,12 +522,12 @@
         throw new RuntimeException(priorE);
       }
     }
-    
+
     /**
-     * Commit live docs changes for the segment readers for the provided infos.
-     * 
-     * @throws IOException
-     *           If there is a low-level I/O error
+     * Commit live docs changes for the segment readers for
+     * the provided infos.
+     *
+     * @throws IOException If there is a low-level I/O error
      */
     public synchronized void commit(SegmentInfos infos) throws IOException {
       for (SegmentInfoPerCommit info : infos) {
@@ -556,17 +544,16 @@
         }
       }
     }
-    
+
     /**
-     * Obtain a ReadersAndLiveDocs instance from the readerPool. If create is
-     * true, you must later call {@link #release(ReadersAndLiveDocs)}.
+     * Obtain a ReadersAndLiveDocs instance from the
+     * readerPool.  If create is true, you must later call
+     * {@link #release(ReadersAndLiveDocs)}.
      */
-    public synchronized ReadersAndLiveDocs get(SegmentInfoPerCommit info,
-        boolean create) {
-      
-      assert info.info.dir == directory : "info.dir=" + info.info.dir + " vs "
-          + directory;
-      
+    public synchronized ReadersAndLiveDocs get(SegmentInfoPerCommit info, boolean create) {
+
+      assert info.info.dir == directory: "info.dir=" + info.info.dir + " vs " + directory;
+
       ReadersAndLiveDocs rld = readerMap.get(info);
       if (rld == null) {
         if (!create) {
@@ -576,15 +563,14 @@
         // Steal initial reference:
         readerMap.put(info, rld);
       } else {
-        assert rld.info == info : "rld.info=" + rld.info + " info=" + info
-            + " isLive?=" + infoIsLive(rld.info) + " vs " + infoIsLive(info);
+        assert rld.info == info: "rld.info=" + rld.info + " info=" + info + " isLive?=" + infoIsLive(rld.info) + " vs " + infoIsLive(info);
       }
-      
+
       if (create) {
         // Return ref to caller:
         rld.incRef();
       }
-      
+
       assert noDups();
 
       return rld;
@@ -601,22 +587,23 @@
       return true;
     }
   }
-  
+
   /**
-   * Obtain the number of deleted docs for a pooled reader. If the reader isn't
-   * being pooled, the segmentInfo's delCount is returned.
+   * Obtain the number of deleted docs for a pooled reader.
+   * If the reader isn't being pooled, the segmentInfo's 
+   * delCount is returned.
    */
   public int numDeletedDocs(SegmentInfoPerCommit info) {
     ensureOpen(false);
     int delCount = info.getDelCount();
-    
+
     final ReadersAndLiveDocs rld = readerPool.get(info, false);
     if (rld != null) {
       delCount += rld.getPendingDeleteCount();
     }
     return delCount;
   }
-  
+
   /**
    * Used internally to throw an {@link AlreadyClosedException} if this
    * IndexWriter has been closed or is in the process of closing.
@@ -628,34 +615,32 @@
    * @throws AlreadyClosedException
    *           if this IndexWriter is closed or in the process of closing
    */
-  protected final void ensureOpen(boolean failIfClosing)
-      throws AlreadyClosedException {
+  protected final void ensureOpen(boolean failIfClosing) throws AlreadyClosedException {
     if (closed || (failIfClosing && closing)) {
       throw new AlreadyClosedException("this IndexWriter is closed");
     }
   }
-  
+
   /**
-   * Used internally to throw an {@link AlreadyClosedException} if this
-   * IndexWriter has been closed ({@code closed=true}) or is in the process of
+   * Used internally to throw an {@link
+   * AlreadyClosedException} if this IndexWriter has been
+   * closed ({@code closed=true}) or is in the process of
    * closing ({@code closing=true}).
    * <p>
    * Calls {@link #ensureOpen(boolean) ensureOpen(true)}.
-   * 
-   * @throws AlreadyClosedException
-   *           if this IndexWriter is closed
+   * @throws AlreadyClosedException if this IndexWriter is closed
    */
   protected final void ensureOpen() throws AlreadyClosedException {
     ensureOpen(true);
   }
-  
+
   final Codec codec; // for writing new segments
-  
+
   /**
    * Constructs a new IndexWriter per the settings given in <code>conf</code>.
-   * Note that the passed in {@link IndexWriterConfig} is privately cloned; if
-   * you need to make subsequent "live" changes to the configuration use
-   * {@link #getConfig}.
+   * Note that the passed in {@link IndexWriterConfig} is
+   * privately cloned; if you need to make subsequent "live"
+   * changes to the configuration use {@link #getConfig}.
    * <p>
    * 
    * @param d
@@ -679,15 +664,16 @@
     mergePolicy.setIndexWriter(this);
     mergeScheduler = config.getMergeScheduler();
     codec = config.getCodec();
-    
+
     bufferedDeletesStream = new BufferedDeletesStream(infoStream);
     poolReaders = config.getReaderPooling();
+    deletesPending = new AtomicBoolean(false);
     
     writeLock = directory.makeLock(WRITE_LOCK_NAME);
-    
+
     if (!writeLock.obtain(config.getWriteLockTimeout())) // obtain write lock
-    throw new LockObtainFailedException("Index locked for write: " + writeLock);
-    
+      throw new LockObtainFailedException("Index locked for write: " + writeLock);
+
     boolean success = false;
     try {
       OpenMode mode = config.getOpenMode();
@@ -700,17 +686,17 @@
         // CREATE_OR_APPEND - create only if an index does not exist
         create = !DirectoryReader.indexExists(directory);
       }
-      
+
       // If index is too old, reading the segments will throw
       // IndexFormatTooOldException.
       segmentInfos = new SegmentInfos();
-      
+
       boolean initialIndexExists = true;
 
       if (create) {
-        // Try to read first. This is to allow create
+        // Try to read first.  This is to allow create
         // against an index that's currently open for
-        // searching. In this case we write the next
+        // searching.  In this case we write the next
         // segments_N file with no segments:
         try {
           segmentInfos.read(directory);
@@ -719,49 +705,47 @@
           // Likely this means it's a fresh directory
           initialIndexExists = false;
         }
-        
+
         // Record that we have a change (zero out all
         // segments) pending:
         changed();
       } else {
         segmentInfos.read(directory);
-        
+
         IndexCommit commit = config.getIndexCommit();
         if (commit != null) {
           // Swap out all segments, but, keep metadata in
           // SegmentInfos, like version & generation, to
-          // preserve write-once. This is important if
+          // preserve write-once.  This is important if
           // readers are open against the future commit
           // points.
-          if (commit.getDirectory() != directory) throw new IllegalArgumentException(
-              "IndexCommit's directory doesn't match my directory");
+          if (commit.getDirectory() != directory)
+            throw new IllegalArgumentException("IndexCommit's directory doesn't match my directory");
           SegmentInfos oldInfos = new SegmentInfos();
           oldInfos.read(directory, commit.getSegmentsFileName());
           segmentInfos.replace(oldInfos);
           changed();
           if (infoStream.isEnabled("IW")) {
-            infoStream.message("IW",
-                "init: loaded commit \"" + commit.getSegmentsFileName() + "\"");
+            infoStream.message("IW", "init: loaded commit \"" + commit.getSegmentsFileName() + "\"");
           }
         }
       }
-      
+
       rollbackSegments = segmentInfos.createBackupSegmentInfos();
-      
+
       // start with previous field numbers, but new FieldInfos
       globalFieldNumberMap = getFieldNumberMap();
-      docWriter = new DocumentsWriter(codec, config, directory, this,
-          globalFieldNumberMap, bufferedDeletesStream);
-      
+      docWriter = new DocumentsWriter(codec, config, directory, this, globalFieldNumberMap, bufferedDeletesStream);
+
       // Default deleter (for backwards compatibility) is
       // KeepOnlyLastCommitDeleter:
-      synchronized (this) {
+      synchronized(this) {
         deleter = new IndexFileDeleter(directory,
                                        config.getIndexDeletionPolicy(),
                                        segmentInfos, infoStream, this,
                                        initialIndexExists);
       }
-      
+
       if (deleter.startingCommitDeleted) {
         // Deletion policy deleted the "head" commit point.
         // We have to mark ourself as changed so that if we
@@ -769,19 +753,18 @@
         // segments_N file.
         changed();
       }
-      
+
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "init: create=" + create);
         messageState();
       }
-      
+
       success = true;
-      
+
     } finally {
       if (!success) {
         if (infoStream.isEnabled("IW")) {
-          infoStream.message("IW",
-              "init: hit exception on init; releasing write lock");
+          infoStream.message("IW", "init: hit exception on init; releasing write lock");
         }
         try {
           writeLock.release();
@@ -792,31 +775,31 @@
       }
     }
   }
-  
+
   private FieldInfos getFieldInfos(SegmentInfo info) throws IOException {
     Directory cfsDir = null;
     try {
       if (info.getUseCompoundFile()) {
         cfsDir = new CompoundFileDirectory(info.dir,
-            IndexFileNames.segmentFileName(info.name, "",
-                IndexFileNames.COMPOUND_FILE_EXTENSION), IOContext.READONCE,
-            false);
+                                           IndexFileNames.segmentFileName(info.name, "", IndexFileNames.COMPOUND_FILE_EXTENSION),
+                                           IOContext.READONCE,
+                                           false);
       } else {
         cfsDir = info.dir;
       }
-      return info.getCodec().fieldInfosFormat().getFieldInfosReader()
-          .read(cfsDir, info.name, IOContext.READONCE);
+      return info.getCodec().fieldInfosFormat().getFieldInfosReader().read(cfsDir,
+                                                                                info.name,
+                                                                                IOContext.READONCE);
     } finally {
       if (info.getUseCompoundFile() && cfsDir != null) {
         cfsDir.close();
       }
     }
   }
-  
+
   /**
-   * Loads or returns the already loaded the global field number map for this
-   * {@link SegmentInfos}. If this {@link SegmentInfos} has no global field
-   * number map the returned instance is empty
+   * Loads or returns the already loaded global field number map for this {@link SegmentInfos}.
+   * If this {@link SegmentInfos} has no global field number map, the returned instance is empty.
    */
   private FieldNumbers getFieldNumberMap() throws IOException {
     final FieldNumbers map = new FieldNumbers();
@@ -826,55 +809,57 @@
         map.addOrGet(fi.name, fi.number, fi.getDocValuesType());
       }
     }
-    
+
     return map;
   }
   
   /**
-   * Returns a {@link LiveIndexWriterConfig}, which can be used to query the
-   * IndexWriter current settings, as well as modify "live" ones.
+   * Returns a {@link LiveIndexWriterConfig}, which can be used to query the IndexWriter's
+   * current settings, as well as modify "live" ones.
    */
   public LiveIndexWriterConfig getConfig() {
     ensureOpen(false);
     return config;
   }
-  
+
   private void messageState() {
     if (infoStream.isEnabled("IW")) {
-      infoStream.message("IW", "\ndir=" + directory + "\n" + "index="
-          + segString() + "\n" + "version=" + Constants.LUCENE_VERSION + "\n"
-          + config.toString());
+      infoStream.message("IW", "\ndir=" + directory + "\n" +
+            "index=" + segString() + "\n" +
+            "version=" + Constants.LUCENE_VERSION + "\n" +
+            config.toString());
     }
   }
-  
+
   /**
-   * Commits all changes to an index, waits for pending merges to complete, and
-   * closes all associated files.
+   * Commits all changes to an index, waits for pending merges
+   * to complete, and closes all associated files.  
    * <p>
-   * This is a "slow graceful shutdown" which may take a long time especially if
-   * a big merge is pending: If you only want to close resources use
-   * {@link #rollback()}. If you only want to commit pending changes and close
-   * resources see {@link #close(boolean)}.
+   * This is a "slow graceful shutdown" which may take a long time
+   * especially if a big merge is pending: If you only want to close
+   * resources use {@link #rollback()}. If you only want to commit
+   * pending changes and close resources see {@link #close(boolean)}.
    * <p>
-   * Note that this may be a costly operation, so, try to re-use a single writer
-   * instead of closing and opening a new one. See {@link #commit()} for caveats
-   * about write caching done by some IO devices.
-   * 
-   * <p>
-   * If an Exception is hit during close, eg due to disk full or some other
-   * reason, then both the on-disk index and the internal state of the
-   * IndexWriter instance will be consistent. However, the close will not be
-   * complete even though part of it (flushing buffered documents) may have
-   * succeeded, so the write lock will still be held.
-   * </p>
-   * 
-   * <p>
-   * If you can correct the underlying cause (eg free up some disk space) then
-   * you can call close() again. Failing that, if you want to force the write
-   * lock to be released (dangerous, because you may then lose buffered docs in
-   * the IndexWriter instance) then you can do something like this:
-   * </p>
-   * 
+   * Note that this may be a costly
+   * operation, so, try to re-use a single writer instead of
+   * closing and opening a new one.  See {@link #commit()} for
+   * caveats about write caching done by some IO devices.
+   *
+   * <p> If an Exception is hit during close, eg due to disk
+   * full or some other reason, then both the on-disk index
+   * and the internal state of the IndexWriter instance will
+   * be consistent.  However, the close will not be complete
+   * even though part of it (flushing buffered documents)
+   * may have succeeded, so the write lock will still be
+   * held.</p>
+   *
+   * <p> If you can correct the underlying cause (eg free up
+   * some disk space) then you can call close() again.
+   * Failing that, if you want to force the write lock to be
+   * released (dangerous, because you may then lose buffered
+   * docs in the IndexWriter instance) then you can do
+   * something like this:</p>
+   *
    * <pre class="prettyprint">
    * try {
    *   writer.close();
@@ -884,50 +869,49 @@
    *   }
    * }
    * </pre>
-   * 
-   * after which, you must be certain not to use the writer instance
-   * anymore.</p>
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer, again. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
-   * @throws IOException
-   *           if there is a low-level IO error
+   *
+   * after which, you must be certain not to use the writer
+   * instance anymore.</p>
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer, again.  See <a
+   * href="#OOME">above</a> for details.</p>
+   *
+   * @throws IOException if there is a low-level IO error
    */
   @Override
   public void close() throws IOException {
     close(true);
   }
-  
+
   /**
-   * Closes the index with or without waiting for currently running merges to
-   * finish. This is only meaningful when using a MergeScheduler that runs
-   * merges in background threads.
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer, again. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
-   * <p>
-   * <b>NOTE</b>: it is dangerous to always call close(false), especially when
-   * IndexWriter is not open for very long, because this can result in "merge
-   * starvation" whereby long merges will never have a chance to finish. This
-   * will cause too many segments in your index over time.
-   * </p>
-   * 
-   * @param waitForMerges
-   *          if true, this call will block until all merges complete; else, it
-   *          will ask all running merges to abort, wait until those merges have
-   *          finished (which should be at most a few seconds), and then return.
+   * Closes the index with or without waiting for currently
+   * running merges to finish.  This is only meaningful when
+   * using a MergeScheduler that runs merges in background
+   * threads.
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer, again.  See <a
+   * href="#OOME">above</a> for details.</p>
+   *
+   * <p><b>NOTE</b>: it is dangerous to always call
+   * close(false), especially when IndexWriter is not open
+   * for very long, because this can result in "merge
+   * starvation" whereby long merges will never have a
+   * chance to finish.  This will cause too many segments in
+   * your index over time.</p>
+   *
+   * @param waitForMerges if true, this call will block
+   * until all merges complete; else, it will ask all
+   * running merges to abort, wait until those merges have
+   * finished (which should be at most a few seconds), and
+   * then return.
    */
   public void close(boolean waitForMerges) throws IOException {
-    
+
     // Ensure that only one thread actually gets to do the
     // closing, and make sure no commit is also in progress:
-    synchronized (commitLock) {
+    synchronized(commitLock) {
       if (shouldClose()) {
         // If any methods have hit OutOfMemoryError, then abort
         // on close, in case the internal state of IndexWriter
@@ -940,12 +924,12 @@
       }
     }
   }
-  
+
   // Returns true if this thread should attempt to close, or
   // false if IndexWriter is now closed; else, waits until
   // another thread finishes closing
   synchronized private boolean shouldClose() {
-    while (true) {
+    while(true) {
       if (!closed) {
         if (!closing) {
           closing = true;
@@ -961,39 +945,34 @@
       }
     }
   }
-  
-  private void closeInternal(boolean waitForMerges, boolean doFlush)
-      throws IOException {
+
+  private void closeInternal(boolean waitForMerges, boolean doFlush) throws IOException {
     boolean interrupted = false;
     try {
-      
+
       if (pendingCommit != null) {
-        throw new IllegalStateException(
-            "cannot close: prepareCommit was already called with no corresponding call to commit");
+        throw new IllegalStateException("cannot close: prepareCommit was already called with no corresponding call to commit");
       }
-      
+
       if (infoStream.isEnabled("IW")) {
-        infoStream.message("IW", "now flush at close waitForMerges="
-            + waitForMerges);
+        infoStream.message("IW", "now flush at close waitForMerges=" + waitForMerges);
       }
-      
+
       try {
         // Only allow a new merge to be triggered if we are
         // going to wait for merges:
         if (doFlush) {
           flush(waitForMerges, true);
+          docWriter.close();
         } else {
-          docWriter.abort(); // already closed
+          docWriter.abort(); // already closed -- never sync on IW 
         }
         
-        docWriter.close();
-        
       } finally {
         try {
-          // clean up merge scheduler in all cases, although flushing may have
-          // failed:
+          // clean up merge scheduler in all cases, although flushing may have failed:
           interrupted = Thread.interrupted();
-          
+        
           if (waitForMerges) {
             try {
               // Give merge scheduler last chance to run, in case
@@ -1003,13 +982,12 @@
               // ignore any interruption, does not matter
               interrupted = true;
               if (infoStream.isEnabled("IW")) {
-                infoStream.message("IW",
-                    "interrupted while waiting for final merges");
+                infoStream.message("IW", "interrupted while waiting for final merges");
               }
             }
           }
           
-          synchronized (this) {
+          synchronized(this) {
             for (;;) {
               try {
                 finishMerges(waitForMerges && !interrupted);
@@ -1020,8 +998,7 @@
                 // so it will not wait
                 interrupted = true;
                 if (infoStream.isEnabled("IW")) {
-                  infoStream.message("IW",
-                      "interrupted while waiting for merges to finish");
+                  infoStream.message("IW", "interrupted while waiting for merges to finish");
                 }
               }
             }
@@ -1029,44 +1006,42 @@
           }
           
         } finally {
-          // shutdown policy, scheduler and all threads (this call is not
-          // interruptible):
+          // shutdown policy, scheduler and all threads (this call is not interruptible):
           IOUtils.closeWhileHandlingException(mergePolicy, mergeScheduler);
         }
       }
-      
+
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "now call final commit()");
       }
-      
+
       if (doFlush) {
         commitInternal();
       }
-      
+
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "at close: " + segString());
       }
       // used by assert below
       final DocumentsWriter oldWriter = docWriter;
-      synchronized (this) {
+      synchronized(this) {
         readerPool.dropAll(true);
         docWriter = null;
         deleter.close();
       }
-      
+
       if (writeLock != null) {
-        writeLock.release(); // release write lock
+        writeLock.release();                          // release write lock
         writeLock = null;
       }
-      synchronized (this) {
+      synchronized(this) {
         closed = true;
       }
-      assert oldWriter.perThreadPool.numDeactivatedThreadStates() == oldWriter.perThreadPool
-          .getMaxThreadStates();
+      assert oldWriter.perThreadPool.numDeactivatedThreadStates() == oldWriter.perThreadPool.getMaxThreadStates();
     } catch (OutOfMemoryError oom) {
       handleOOM(oom, "closeInternal");
     } finally {
-      synchronized (this) {
+      synchronized(this) {
         closing = false;
         notifyAll();
         if (!closed) {
@@ -1079,54 +1054,54 @@
       if (interrupted) Thread.currentThread().interrupt();
     }
   }
-  
+
   /** Returns the Directory used by this index. */
   public Directory getDirectory() {
     return directory;
   }
-  
+
   /** Returns the analyzer used by this index. */
   public Analyzer getAnalyzer() {
     ensureOpen();
     return analyzer;
   }
-  
-  /**
-   * Returns total number of docs in this index, including docs not yet flushed
-   * (still in the RAM buffer), not counting deletions.
-   * 
-   * @see #numDocs
-   */
+
+  /** Returns total number of docs in this index, including
+   *  docs not yet flushed (still in the RAM buffer),
+   *  not counting deletions.
+   *  @see #numDocs */
   public synchronized int maxDoc() {
     ensureOpen();
     int count;
-    if (docWriter != null) count = docWriter.getNumDocs();
-    else count = 0;
-    
+    if (docWriter != null)
+      count = docWriter.getNumDocs();
+    else
+      count = 0;
+
     count += segmentInfos.totalDocCount();
     return count;
   }
-  
-  /**
-   * Returns total number of docs in this index, including docs not yet flushed
-   * (still in the RAM buffer), and including deletions. <b>NOTE:</b> buffered
-   * deletions are not counted. If you really need these to be counted you
-   * should call {@link #commit()} first.
-   * 
-   * @see #numDocs
-   */
+
+  /** Returns total number of docs in this index, including
+   *  docs not yet flushed (still in the RAM buffer), and
+   *  including deletions.  <b>NOTE:</b> buffered deletions
+   *  are not counted.  If you really need these to be
+   *  counted you should call {@link #commit()} first.
+   *  @see #numDocs */
   public synchronized int numDocs() {
     ensureOpen();
     int count;
-    if (docWriter != null) count = docWriter.getNumDocs();
-    else count = 0;
-    
+    if (docWriter != null)
+      count = docWriter.getNumDocs();
+    else
+      count = 0;
+
     for (final SegmentInfoPerCommit info : segmentInfos) {
       count += info.info.getDocCount() - numDeletedDocs(info);
     }
     return count;
   }
-  
+
   /**
    * Returns true if this index has deletions (including buffered deletions).
    */
@@ -1148,159 +1123,143 @@
     }
     return false;
   }
-  
+
   /**
    * Adds a document to this index.
-   * 
-   * <p>
-   * Note that if an Exception is hit (for example disk full) then the index
-   * will be consistent, but this document may not have been added. Furthermore,
-   * it's possible the index will have one segment in non-compound format even
-   * when using compound files (when a merge has partially succeeded).
-   * </p>
-   * 
-   * <p>
-   * This method periodically flushes pending documents to the Directory (see <a
-   * href="#flush">above</a>), and also periodically triggers segment merges in
-   * the index according to the {@link MergePolicy} in use.
-   * </p>
-   * 
-   * <p>
-   * Merges temporarily consume space in the directory. The amount of space
-   * required is up to 1X the size of all segments being merged, when no
-   * readers/searchers are open against the index, and up to 2X the size of all
-   * segments being merged when readers/searchers are open against the index
-   * (see {@link #forceMerge(int)} for details). The sequence of primitive merge
-   * operations performed is governed by the merge policy.
-   * 
-   * <p>
-   * Note that each term in the document can be no longer than 16383 characters,
-   * otherwise an IllegalArgumentException will be thrown.
-   * </p>
-   * 
-   * <p>
-   * Note that it's possible to create an invalid Unicode string in java if a
-   * UTF16 surrogate pair is malformed. In this case, the invalid characters are
-   * silently replaced with the Unicode replacement character U+FFFD.
-   * </p>
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
+   *
+   * <p> Note that if an Exception is hit (for example disk full)
+   * then the index will be consistent, but this document
+   * may not have been added.  Furthermore, it's possible
+   * the index will have one segment in non-compound format
+   * even when using compound files (when a merge has
+   * partially succeeded).</p>
+   *
+   * <p> This method periodically flushes pending documents
+   * to the Directory (see <a href="#flush">above</a>), and
+   * also periodically triggers segment merges in the index
+   * according to the {@link MergePolicy} in use.</p>
+   *
+   * <p>Merges temporarily consume space in the
+   * directory. The amount of space required is up to 1X the
+   * size of all segments being merged, when no
+   * readers/searchers are open against the index, and up to
+   * 2X the size of all segments being merged when
+   * readers/searchers are open against the index (see
+   * {@link #forceMerge(int)} for details). The sequence of
+   * primitive merge operations performed is governed by the
+   * merge policy.
+   *
+   * <p>Note that each term in the document can be no longer
+   * than 16383 characters, otherwise an
+   * IllegalArgumentException will be thrown.</p>
+   *
+   * <p>Note that it's possible to create an invalid Unicode
+   * string in java if a UTF16 surrogate pair is malformed.
+   * In this case, the invalid characters are silently
+   * replaced with the Unicode replacement character
+   * U+FFFD.</p>
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer.  See <a
+   * href="#OOME">above</a> for details.</p>
+   *
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
    */
   public void addDocument(IndexDocument doc) throws IOException {
     addDocument(doc, analyzer);
   }
-  
+
   /**
    * Adds a document to this index, using the provided analyzer instead of the
    * value of {@link #getAnalyzer()}.
-   * 
-   * <p>
-   * See {@link #addDocument(IndexDocument)} for details on index and
-   * IndexWriter state after an Exception, and flushing/merging temporary free
-   * space requirements.
-   * </p>
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
+   *
+   * <p>See {@link #addDocument(IndexDocument)} for details on
+   * index and IndexWriter state after an Exception, and
+   * flushing/merging temporary free space requirements.</p>
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer.  See <a
+   * href="#OOME">above</a> for details.</p>
+   *
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
    */
-  public void addDocument(IndexDocument doc, Analyzer analyzer)
-      throws IOException {
-    replaceDocument(null, doc, analyzer);
+  public void addDocument(IndexDocument doc, Analyzer analyzer) throws IOException {
+    updateDocument(null, doc, analyzer);
   }
-  
+
   /**
-   * Atomically adds a block of documents with sequentially assigned document
-   * IDs, such that an external reader will see all or none of the documents.
-   * 
-   * <p>
-   * <b>WARNING</b>: the index does not currently record which documents were
-   * added as a block. Today this is fine, because merging will preserve a
-   * block. The order of documents within a segment will be preserved, even when
-   * child documents within a block are deleted. Most search features (like
-   * result grouping and block joining) require you to mark documents; when
-   * these documents are deleted these search features will not work as
-   * expected. Obviously adding documents to an existing block will require you
-   * the reindex the entire block.
-   * 
-   * <p>
-   * However it's possible that in the future Lucene may merge more aggressively
-   * re-order documents (for example, perhaps to obtain better index
-   * compression), in which case you may need to fully re-index your documents
-   * at that time.
-   * 
-   * <p>
-   * See {@link #addDocument(IndexDocument)} for details on index and
-   * IndexWriter state after an Exception, and flushing/merging temporary free
-   * space requirements.
-   * </p>
-   * 
-   * <p>
-   * <b>NOTE</b>: tools that do offline splitting of an index (for example,
-   * IndexSplitter in contrib) or re-sorting of documents (for example,
-   * IndexSorter in contrib) are not aware of these atomically added documents
-   * and will likely break them up. Use such tools at your own risk!
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
-   * 
+   * Atomically adds a block of documents with sequentially
+   * assigned document IDs, such that an external reader
+   * will see all or none of the documents.
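+   *
+   * <p>A sketch (the child/parent documents and the parent-last ordering are
+   * illustrative assumptions):</p>
+   *
+   * <pre class="prettyprint">
+   * List&lt;Document&gt; block = new ArrayList&lt;Document&gt;();
+   * block.add(childDoc1);
+   * block.add(childDoc2);
+   * block.add(parentDoc);        // parent added last, by convention
+   * writer.addDocuments(block);  // all docs get consecutive IDs, or none are added
+   * </pre>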
+   *
+   * <p><b>WARNING</b>: the index does not currently record
+   * which documents were added as a block.  Today this is
+   * fine, because merging will preserve a block. The order of
+   * documents within a segment will be preserved, even when child
+   * documents within a block are deleted. Most search features
+   * (like result grouping and block joining) require you to
+   * mark documents; when these documents are deleted these
+   * search features will not work as expected. Obviously adding
+   * documents to an existing block will require you to reindex
+   * the entire block.
+   *
+   * <p>However, it's possible that in the future Lucene may
+   * more aggressively merge and re-order documents (for example,
+   * perhaps to obtain better index compression), in which case
+   * you may need to fully re-index your documents at that time.
+   *
+   * <p>See {@link #addDocument(IndexDocument)} for details on
+   * index and IndexWriter state after an Exception, and
+   * flushing/merging temporary free space requirements.</p>
+   *
+   * <p><b>NOTE</b>: tools that do offline splitting of an index
+   * (for example, IndexSplitter in contrib) or
+   * re-sorting of documents (for example, IndexSorter in
+   * contrib) are not aware of these atomically added documents
+   * and will likely break them up.  Use such tools at your
+   * own risk!
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer.  See <a
+   * href="#OOME">above</a> for details.</p>
+   *
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
+   *
    * @lucene.experimental
    */
-  public void addDocuments(Iterable<? extends IndexDocument> docs)
-      throws IOException {
+  public void addDocuments(Iterable<? extends IndexDocument> docs) throws IOException {
     addDocuments(docs, analyzer);
   }
-  
+
   /**
-   * Atomically adds a block of documents, analyzed using the provided analyzer,
-   * with sequentially assigned document IDs, such that an external reader will
-   * see all or none of the documents.
-   * 
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
-   * 
+   * Atomically adds a block of documents, analyzed using the
+   * provided analyzer, with sequentially assigned document
+   * IDs, such that an external reader will see all or none
+   * of the documents. 
+   *
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
+   *
    * @lucene.experimental
    */
-  public void addDocuments(Iterable<? extends IndexDocument> docs,
-      Analyzer analyzer) throws IOException {
+  public void addDocuments(Iterable<? extends IndexDocument> docs, Analyzer analyzer) throws IOException {
     updateDocuments(null, docs, analyzer);
   }
-  
+
   /**
-   * Atomically deletes documents matching the provided delTerm and adds a block
-   * of documents with sequentially assigned document IDs, such that an external
-   * reader will see all or none of the documents.
-   * 
+   * Atomically deletes documents matching the provided
+   * delTerm and adds a block of documents with sequentially
+   * assigned document IDs, such that an external reader
+   * will see all or none of the documents. 
+   *
    * See {@link #addDocuments(Iterable)}.
-   * 
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
-   * 
+   *
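+   * <p>For example, a sketch of atomically re-indexing a previously
+   * added block, keyed by a (hypothetical) unique id term; it assumes
+   * the new child and parent documents have already been rebuilt:
+   *
+   * <pre class="prettyprint">
+   * List&lt;IndexDocument&gt; newBlock = new ArrayList&lt;IndexDocument&gt;();
+   * newBlock.add(newChildDoc);
+   * newBlock.add(newParentDoc);    // parent last, as with addDocuments
+   * // deletes the old block and adds the new one atomically:
+   * writer.replaceDocuments(new Term("blockId", "42"), newBlock);
+   * </pre>
+   *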
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
+   *
    * @lucene.experimental
    */
   public void replaceDocuments(Term delTerm,
@@ -1318,20 +1277,19 @@
       Iterable<? extends IndexDocument> docs) throws IOException {
     replaceDocuments(delTerm, docs, analyzer);
   }
-  
+
   /**
-   * Atomically deletes documents matching the provided delTerm and adds a block
-   * of documents, analyzed using the provided analyzer, with sequentially
-   * assigned document IDs, such that an external reader will see all or none of
-   * the documents.
-   * 
+   * Atomically deletes documents matching the provided
+   * delTerm and adds a block of documents, analyzed using
+   * the provided analyzer, with sequentially
+   * assigned document IDs, such that an external reader
+   * will see all or none of the documents. 
+   *
    * See {@link #addDocuments(Iterable)}.
-   * 
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
-   * 
+   *
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
+   *
    * @lucene.experimental
    */
   public void replaceDocuments(Term delTerm,
@@ -1344,6 +1302,9 @@
       try {
         anySegmentFlushed = docWriter.updateDocuments(docs, analyzer, delTerm);
         success = true;
+        if (delTerm != null) {
+          deletesPending.set(true);
+        }
       } finally {
         if (!success) {
           if (infoStream.isEnabled("IW")) {
@@ -1435,6 +1396,12 @@
   public void updateFields(FieldsUpdate.Operation operation, Term term,
       IndexDocument fields, Analyzer analyzer) throws IOException {
     ensureOpen();
+
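+    // if deletes were buffered since the last commit (see deletesPending),
+    // commit them before applying this field update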
+    if (deletesPending.get()) {
+      commit();
+      deletesPending.set(false);
+    }
+    
     try {
       boolean success = false;
       boolean anySegmentFlushed = false;
@@ -1442,7 +1409,6 @@
         anySegmentFlushed = docWriter.updateFields(term, operation, fields,
             analyzer, globalFieldNumberMap);
         success = true;
-        updatesPending = true;
       } finally {
         if (!success) {
           if (infoStream.isEnabled("IW")) {
@@ -1461,44 +1427,43 @@
   
   /**
    * Deletes the document(s) containing <code>term</code>.
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
-   * @param term
-   *          the term to identify the documents to be deleted
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer.  See <a
+   * href="#OOME">above</a> for details.</p>
+   *
+   * @param term the term to identify the documents to be deleted
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
    */
   public void deleteDocuments(Term term) throws IOException {
     ensureOpen();
     try {
       docWriter.deleteTerms(term);
+      if (term != null) {
+        deletesPending.set(true);
+      }
     } catch (OutOfMemoryError oom) {
       handleOOM(oom, "deleteDocuments(Term)");
     }
   }
-  
-  /**
-   * Expert: attempts to delete by document ID, as long as the provided reader
-   * is a near-real-time reader (from
-   * {@link DirectoryReader#open(IndexWriter,boolean)}). If the provided reader
-   * is an NRT reader obtained from this writer, and its segment has not been
-   * merged away, then the delete succeeds and this method returns true; else,
-   * it returns false the caller must then separately delete by Term or Query.
-   * 
-   * <b>NOTE</b>: this method can only delete documents visible to the currently
-   * open NRT reader. If you need to delete documents indexed after opening the
-   * NRT reader you must use the other deleteDocument methods (e.g.,
-   * {@link #deleteDocuments(Term)}).
-   */
-  public synchronized boolean tryDeleteDocument(IndexReader readerIn, int docID)
-      throws IOException {
-    
+
+  /** Expert: attempts to delete by document ID, as long as
+   *  the provided reader is a near-real-time reader (from {@link
+   *  DirectoryReader#open(IndexWriter,boolean)}).  If the
+   *  provided reader is an NRT reader obtained from this
+   *  writer, and its segment has not been merged away, then
+   *  the delete succeeds and this method returns true; else, it
+   *  returns false and the caller must then separately delete by
+   *  Term or Query.
+   *
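+   *  <p>For example, a sketch that tries the fast per-docID delete
+   *  for the top hit of a query, falling back to a regular
+   *  delete-by-term (the {@code query}, {@code idTerm} and NRT
+   *  {@code reader} are assumed to already exist):
+   *
+   *  <pre class="prettyprint">
+   *  IndexSearcher searcher = new IndexSearcher(reader);
+   *  TopDocs hits = searcher.search(query, 1);
+   *  if (hits.totalHits &gt; 0
+   *      &amp;&amp; !writer.tryDeleteDocument(reader, hits.scoreDocs[0].doc)) {
+   *    writer.deleteDocuments(idTerm);   // segment was merged away: delete by term instead
+   *  }
+   *  </pre>
+   *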
+   *  <b>NOTE</b>: this method can only delete documents
+   *  visible to the currently open NRT reader.  If you need
+   *  to delete documents indexed after opening the NRT
+   *  reader you must use the other deleteDocument methods
+   *  (e.g., {@link #deleteDocuments(Term)}). */
+  public synchronized boolean tryDeleteDocument(IndexReader readerIn, int docID) throws IOException {
+
     final AtomicReader reader;
     if (readerIn instanceof AtomicReader) {
       // Reader is already atomic: use the incoming docID:
@@ -1512,27 +1477,25 @@
       assert docID >= 0;
       assert docID < reader.maxDoc();
     }
-    
+
     if (!(reader instanceof SegmentReader)) {
-      throw new IllegalArgumentException(
-          "the reader must be a SegmentReader or composite reader containing only SegmentReaders");
+      throw new IllegalArgumentException("the reader must be a SegmentReader or composite reader containing only SegmentReaders");
     }
-    
+      
     final SegmentInfoPerCommit info = ((SegmentReader) reader).getSegmentInfo();
-    
+
     // TODO: this is a slow linear search, but, number of
     // segments should be contained unless something is
     // seriously wrong w/ the index, so it should be a minor
     // cost:
-    
+
     if (segmentInfos.indexOf(info) != -1) {
       ReadersAndLiveDocs rld = readerPool.get(info, false);
       if (rld != null) {
-        synchronized (bufferedDeletesStream) {
+        synchronized(bufferedDeletesStream) {
           rld.initWritableLiveDocs();
           if (rld.delete(docID)) {
-            final int fullDelCount = rld.info.getDelCount()
-                + rld.getPendingDeleteCount();
+            final int fullDelCount = rld.info.getDelCount() + rld.getPendingDeleteCount();
             if (fullDelCount == rld.info.info.getDocCount()) {
               // If a merge has already registered for this
               // segment, we leave it in the readerPool; the
@@ -1544,92 +1507,92 @@
                 checkpoint();
               }
             }
-            
+
             // Must bump changeCount so if no other changes
             // happened, we still commit this change:
             changed();
           }
-          // System.out.println("  yes " + info.info.name + " " + docID);
+          //System.out.println("  yes " + info.info.name + " " + docID);
           return true;
         }
       } else {
-        // System.out.println("  no rld " + info.info.name + " " + docID);
+        //System.out.println("  no rld " + info.info.name + " " + docID);
       }
     } else {
-      // System.out.println("  no seg " + info.info.name + " " + docID);
+      //System.out.println("  no seg " + info.info.name + " " + docID);
     }
     return false;
   }
-  
+
   /**
-   * Deletes the document(s) containing any of the terms. All given deletes are
-   * applied and flushed atomically at the same time.
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
-   * @param terms
-   *          array of terms to identify the documents to be deleted
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
+   * Deletes the document(s) containing any of the
+   * terms. All given deletes are applied and flushed atomically
+   * at the same time.
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer.  See <a
+   * href="#OOME">above</a> for details.</p>
+   *
+   * @param terms array of terms to identify the documents
+   * to be deleted
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
    */
   public void deleteDocuments(Term... terms) throws IOException {
     ensureOpen();
     try {
       docWriter.deleteTerms(terms);
+      if (terms != null && terms.length > 0) {
+        deletesPending.set(true);
+      }
     } catch (OutOfMemoryError oom) {
       handleOOM(oom, "deleteDocuments(Term..)");
     }
   }
-  
+
   /**
    * Deletes the document(s) matching the provided query.
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
-   * @param query
-   *          the query to identify the documents to be deleted
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer.  See <a
+   * href="#OOME">above</a> for details.</p>
+   *
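+   * <p>For example, a sketch that deletes every document whose numeric
+   * {@code timestamp} field is older than some cutoff (the field name
+   * and cutoff are hypothetical):
+   *
+   * <pre class="prettyprint">
+   * Query tooOld = NumericRangeQuery.newLongRange("timestamp",
+   *     null, cutoffMillis, true, false);   // open-ended lower bound, exclusive upper
+   * writer.deleteDocuments(tooOld);
+   * </pre>
+   *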
+   * @param query the query to identify the documents to be deleted
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
    */
   public void deleteDocuments(Query query) throws IOException {
     ensureOpen();
     try {
       docWriter.deleteQueries(query);
+      if (query != null) {
+        deletesPending.set(true);
+      }
     } catch (OutOfMemoryError oom) {
       handleOOM(oom, "deleteDocuments(Query)");
     }
   }
-  
+
   /**
-   * Deletes the document(s) matching any of the provided queries. All given
-   * deletes are applied and flushed atomically at the same time.
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
-   * @param queries
-   *          array of queries to identify the documents to be deleted
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
+   * Deletes the document(s) matching any of the provided queries.
+   * All given deletes are applied and flushed atomically at the same time.
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer.  See <a
+   * href="#OOME">above</a> for details.</p>
+   *
+   * @param queries array of queries to identify the documents
+   * to be deleted
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
    */
   public void deleteDocuments(Query... queries) throws IOException {
     ensureOpen();
     try {
       docWriter.deleteQueries(queries);
+      if (queries != null && queries.length > 0) {
+        deletesPending.set(true);
+      }
     } catch (OutOfMemoryError oom) {
       handleOOM(oom, "deleteDocuments(Query..)");
     }
@@ -1673,28 +1636,24 @@
     ensureOpen();
     replaceDocument(term, doc, analyzer);
   }
-  
+
   /**
-   * Updates a document by first deleting the document(s) containing
-   * <code>term</code> and then adding the new document. The delete and then add
-   * are atomic as seen by a reader on the same index (flush may happen only
-   * after the add).
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
-   * @param term
-   *          the term to identify the document(s) to be deleted
-   * @param doc
-   *          the document to be added
-   * @param analyzer
-   *          the analyzer to use when analyzing the document
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
+   * Updates a document by first deleting the document(s)
+   * containing <code>term</code> and then adding the new
+   * document.  The delete and then add are atomic as seen
+   * by a reader on the same index (flush may happen only after
+   * the add).
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer.  See <a
+   * href="#OOME">above</a> for details.</p>
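+   *
+   * <p>For example, a minimal sketch that replaces the document whose
+   * (hypothetical) unique {@code id} field is "42"; it assumes
+   * {@code writer} and {@code analyzer} already exist and that a
+   * {@link Document} is used as the IndexDocument implementation:
+   *
+   * <pre class="prettyprint">
+   * Document doc = new Document();
+   * doc.add(new StringField("id", "42", Field.Store.YES));
+   * doc.add(new TextField("body", "the updated contents", Field.Store.NO));
+   * writer.replaceDocument(new Term("id", "42"), doc, analyzer);
+   * </pre>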
+   *
+   * @param term the term to identify the document(s) to be
+   * deleted
+   * @param doc the document to be added
+   * @param analyzer the analyzer to use when analyzing the document
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
    */
   public void replaceDocument(Term term, IndexDocument doc, Analyzer analyzer)
       throws IOException {
@@ -1705,6 +1664,9 @@
       try {
         anySegmentFlushed = docWriter.updateDocument(doc, analyzer, term);
         success = true;
+        if (term != null) {
+          deletesPending.set(true);
+        }
       } finally {
         if (!success) {
           if (infoStream.isEnabled("IW")) {
@@ -1712,7 +1674,7 @@
           }
         }
       }
-      
+
       if (anySegmentFlushed) {
         maybeMerge(MergeTrigger.SEGMENT_FLUSH, UNBOUNDED_MAX_MERGE_SEGMENTS);
       }
@@ -1738,20 +1700,20 @@
   }
   
   // for test purpose
-  final synchronized int getSegmentCount() {
+  final synchronized int getSegmentCount(){
     return segmentInfos.size();
   }
-  
+
   // for test purpose
-  final synchronized int getNumBufferedDocuments() {
+  final synchronized int getNumBufferedDocuments(){
     return docWriter.getNumDocs();
   }
-  
+
   // for test purpose
   final synchronized Collection<String> getIndexFileNames() throws IOException {
     return segmentInfos.files(directory, true);
   }
-  
+
   // for test purpose
   final synchronized int getDocCount(int i) {
     if (i >= 0 && i < segmentInfos.size()) {
@@ -1760,407 +1722,392 @@
       return -1;
     }
   }
-  
+
   // for test purpose
   final int getFlushCount() {
     return flushCount.get();
   }
-  
+
   // for test purpose
   final int getFlushDeletesCount() {
     return flushDeletesCount.get();
   }
-  
+
   final String newSegmentName() {
     // Cannot synchronize on IndexWriter because that causes
     // deadlock
-    synchronized (segmentInfos) {
+    synchronized(segmentInfos) {
       // Important to increment changeCount so that the
-      // segmentInfos is written on close. Otherwise we
+      // segmentInfos is written on close.  Otherwise we
       // could close, re-open and re-return the same segment
       // name that was previously returned which can cause
       // problems at least with ConcurrentMergeScheduler.
       changeCount++;
       segmentInfos.changed();
-      return "_"
-          + Integer.toString(segmentInfos.counter++, Character.MAX_RADIX);
+      return "_" + Integer.toString(segmentInfos.counter++, Character.MAX_RADIX);
     }
   }
-  
-  /**
-   * If non-null, information about merges will be printed to this.
+
+  /** If non-null, information about merges will be printed to this.
    */
   final InfoStream infoStream;
-  
+
   /**
-   * Forces merge policy to merge segments until there are <= maxNumSegments.
-   * The actual merges to be executed are determined by the {@link MergePolicy}.
+   * Forces merge policy to merge segments until there are <=
+   * maxNumSegments.  The actual merges to be
+   * executed are determined by the {@link MergePolicy}.
+   *
+   * <p>This is a horribly costly operation, especially when
+   * you pass a small {@code maxNumSegments}; usually you
+   * should only call this if the index is static (will no
+   * longer be changed).</p>
+   *
+   * <p>Note that this requires up to 2X the index size free
+   * space in your Directory (3X if you're using compound
+   * file format).  For example, if your index size is 10 MB
+   * then you need up to 20 MB free for this to complete (30
+   * MB if you're using compound file format).  Also,
+   * it's best to call {@link #commit()} afterwards,
+   * to allow IndexWriter to free up disk space.</p>
+   *
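+   * <p>For example, a typical sketch for a static index (it assumes
+   * {@code writer} is the only writer open on the index):
+   *
+   * <pre class="prettyprint">
+   * writer.forceMerge(1);   // merge down to a single segment
+   * writer.commit();        // lets IndexWriter free the old segments' files
+   * </pre>
+   *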
+   * <p>If some but not all readers re-open while merging
+   * is underway, this will cause > 2X temporary
+   * space to be consumed as those new readers will then
+   * hold open the temporary segments at that time.  It is
+   * best not to re-open readers while merging is running.</p>
+   *
+   * <p>The actual temporary usage could be much less than
+   * these figures (it depends on many factors).</p>
+   *
+   * <p>In general, once this completes, the total size of the
+   * index will be less than the size of the starting index.
+   * It could be quite a bit smaller (if there were many
+   * pending deletes) or just slightly smaller.</p>
+   *
+   * <p>If an Exception is hit, for example
+   * due to disk full, the index will not be corrupted and no
+   * documents will be lost.  However, it may have
+   * been partially merged (some segments were merged but
+   * not all), and it's possible that one of the segments in
+   * the index will be in non-compound format even when
+   * using compound file format.  This will occur when the
+   * Exception is hit during conversion of the segment into
+   * compound format.</p>
+   *
+   * <p>This call will merge those segments present in
+   * the index when the call started.  If other threads are
+   * still adding documents and flushing segments, those
+   * newly created segments will not be merged unless you
+   * call forceMerge again.</p>
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer.  See <a
+   * href="#OOME">above</a> for details.</p>
+   *
+   * <p><b>NOTE</b>: if you call {@link #close(boolean)}
+   * with <tt>false</tt>, which aborts all running merges,
+   * then any thread still running this method might hit a
+   * {@link MergePolicy.MergeAbortedException}.
+   *
+   * @param maxNumSegments maximum number of segments left
+   * in the index after merging finishes
    * 
-   * <p>
-   * This is a horribly costly operation, especially when you pass a small
-   * {@code maxNumSegments}; usually you should only call this if the index is
-   * static (will no longer be changed).
-   * </p>
-   * 
-   * <p>
-   * Note that this requires up to 2X the index size free space in your
-   * Directory (3X if you're using compound file format). For example, if your
-   * index size is 10 MB then you need up to 20 MB free for this to complete (30
-   * MB if you're using compound file format). Also, it's best to call
-   * {@link #commit()} afterwards, to allow IndexWriter to free up disk space.
-   * </p>
-   * 
-   * <p>
-   * If some but not all readers re-open while merging is underway, this will
-   * cause > 2X temporary space to be consumed as those new readers will then
-   * hold open the temporary segments at that time. It is best not to re-open
-   * readers while merging is running.
-   * </p>
-   * 
-   * <p>
-   * The actual temporary usage could be much less than these figures (it
-   * depends on many factors).
-   * </p>
-   * 
-   * <p>
-   * In general, once this completes, the total size of the index will be less
-   * than the size of the starting index. It could be quite a bit smaller (if
-   * there were many pending deletes) or just slightly smaller.
-   * </p>
-   * 
-   * <p>
-   * If an Exception is hit, for example due to disk full, the index will not be
-   * corrupted and no documents will be lost. However, it may have been
-   * partially merged (some segments were merged but not all), and it's possible
-   * that one of the segments in the index will be in non-compound format even
-   * when using compound file format. This will occur when the Exception is hit
-   * during conversion of the segment into compound format.
-   * </p>
-   * 
-   * <p>
-   * This call will merge those segments present in the index when the call
-   * started. If other threads are still adding documents and flushing segments,
-   * those newly created segments will not be merged unless you call forceMerge
-   * again.
-   * </p>
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
-   * <p>
-   * <b>NOTE</b>: if you call {@link #close(boolean)} with <tt>false</tt>, which
-   * aborts all running merges, then any thread still running this method might
-   * hit a {@link MergePolicy.MergeAbortedException}.
-   * 
-   * @param maxNumSegments
-   *          maximum number of segments left in the index after merging
-   *          finishes
-   * 
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
    * @see MergePolicy#findMerges
-   * 
-   */
+   *
+   */
   public void forceMerge(int maxNumSegments) throws IOException {
     forceMerge(maxNumSegments, true);
   }
-  
-  /**
-   * Just like {@link #forceMerge(int)}, except you can specify whether the call
-   * should block until all merging completes. This is only meaningful with a
-   * {@link MergeScheduler} that is able to run merges in background threads.
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
+
+  /** Just like {@link #forceMerge(int)}, except you can
+   *  specify whether the call should block until
+   *  all merging completes.  This is only meaningful with a
+   *  {@link MergeScheduler} that is able to run merges in
+   *  background threads.
+   *
+   *  <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   *  you should immediately close the writer.  See <a
+   *  href="#OOME">above</a> for details.</p>
    */
   public void forceMerge(int maxNumSegments, boolean doWait) throws IOException {
     ensureOpen();
-    
-    if (maxNumSegments < 1) throw new IllegalArgumentException(
-        "maxNumSegments must be >= 1; got " + maxNumSegments);
-    
+
+    if (maxNumSegments < 1)
+      throw new IllegalArgumentException("maxNumSegments must be >= 1; got " + maxNumSegments);
+
     if (infoStream.isEnabled("IW")) {
       infoStream.message("IW", "forceMerge: index now " + segString());
       infoStream.message("IW", "now flush at forceMerge");
     }
-    
+
     flush(true, true);
-    
-    synchronized (this) {
+
+    synchronized(this) {
       resetMergeExceptions();
       segmentsToMerge.clear();
-      for (SegmentInfoPerCommit info : segmentInfos) {
+      for(SegmentInfoPerCommit info : segmentInfos) {
         segmentsToMerge.put(info, Boolean.TRUE);
       }
       mergeMaxNumSegments = maxNumSegments;
-      
+
       // Now mark all pending & running merges for forced
       // merge:
-      for (final MergePolicy.OneMerge merge : pendingMerges) {
+      for(final MergePolicy.OneMerge merge  : pendingMerges) {
         merge.maxNumSegments = maxNumSegments;
         segmentsToMerge.put(merge.info, Boolean.TRUE);
       }
-      
-      for (final MergePolicy.OneMerge merge : runningMerges) {
+
+      for (final MergePolicy.OneMerge merge: runningMerges) {
         merge.maxNumSegments = maxNumSegments;
         segmentsToMerge.put(merge.info, Boolean.TRUE);
       }
     }
-    
+
     maybeMerge(MergeTrigger.EXPLICIT, maxNumSegments);
-    
+
     if (doWait) {
-      synchronized (this) {
-        while (true) {
-          
+      synchronized(this) {
+        while(true) {
+
           if (hitOOM) {
-            throw new IllegalStateException(
-                "this writer hit an OutOfMemoryError; cannot complete forceMerge");
+            throw new IllegalStateException("this writer hit an OutOfMemoryError; cannot complete forceMerge");
           }
-          
+
           if (mergeExceptions.size() > 0) {
             // Forward any exceptions in background merge
             // threads to the current thread:
             final int size = mergeExceptions.size();
-            for (int i = 0; i < size; i++) {
+            for(int i=0;i<size;i++) {
               final MergePolicy.OneMerge merge = mergeExceptions.get(i);
               if (merge.maxNumSegments != -1) {
-                IOException err = new IOException(
-                    "background merge hit exception: "
-                        + merge.segString(directory));
+                IOException err = new IOException("background merge hit exception: " + merge.segString(directory));
                 final Throwable t = merge.getException();
-                if (t != null) err.initCause(t);
+                if (t != null)
+                  err.initCause(t);
                 throw err;
               }
             }
           }
-          
-          if (maxNumSegmentsMergesPending()) doWait();
-          else break;
+
+          if (maxNumSegmentsMergesPending())
+            doWait();
+          else
+            break;
         }
       }
-      
+
       // If close is called while we are still
       // running, throw an exception so the calling
       // thread will know merging did not
       // complete
       ensureOpen();
     }
-    
+
     // NOTE: in the ConcurrentMergeScheduler case, when
     // doWait is false, we can return immediately while
     // background threads accomplish the merging
   }
-  
-  /**
-   * Returns true if any merges in pendingMerges or runningMerges are
-   * maxNumSegments merges.
-   */
+
+  /** Returns true if any merges in pendingMerges or
+   *  runningMerges are maxNumSegments merges. */
   private synchronized boolean maxNumSegmentsMergesPending() {
     for (final MergePolicy.OneMerge merge : pendingMerges) {
-      if (merge.maxNumSegments != -1) return true;
+      if (merge.maxNumSegments != -1)
+        return true;
     }
-    
+
     for (final MergePolicy.OneMerge merge : runningMerges) {
-      if (merge.maxNumSegments != -1) return true;
+      if (merge.maxNumSegments != -1)
+        return true;
     }
-    
+
     return false;
   }
-  
-  /**
-   * Just like {@link #forceMergeDeletes()}, except you can specify whether the
-   * call should block until the operation completes. This is only meaningful
-   * with a {@link MergeScheduler} that is able to run merges in background
-   * threads.
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
-   * <p>
-   * <b>NOTE</b>: if you call {@link #close(boolean)} with <tt>false</tt>, which
-   * aborts all running merges, then any thread still running this method might
-   * hit a {@link MergePolicy.MergeAbortedException}.
+
+  /** Just like {@link #forceMergeDeletes()}, except you can
+   *  specify whether the call should block until the
+   *  operation completes.  This is only meaningful with a
+   *  {@link MergeScheduler} that is able to run merges in
+   *  background threads.
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer.  See <a
+   * href="#OOME">above</a> for details.</p>
+   *
+   * <p><b>NOTE</b>: if you call {@link #close(boolean)}
+   * with <tt>false</tt>, which aborts all running merges,
+   * then any thread still running this method might hit a
+   * {@link MergePolicy.MergeAbortedException}.
    */
-  public void forceMergeDeletes(boolean doWait) throws IOException {
+  public void forceMergeDeletes(boolean doWait)
+    throws IOException {
     ensureOpen();
-    
+
     flush(true, true);
-    
+
     if (infoStream.isEnabled("IW")) {
       infoStream.message("IW", "forceMergeDeletes: index now " + segString());
     }
-    
+
     MergePolicy.MergeSpecification spec;
-    
-    synchronized (this) {
+
+    synchronized(this) {
       spec = mergePolicy.findForcedDeletesMerges(segmentInfos);
       if (spec != null) {
         final int numMerges = spec.merges.size();
-        for (int i = 0; i < numMerges; i++)
+        for(int i=0;i<numMerges;i++)
           registerMerge(spec.merges.get(i));
       }
     }
-    
+
     mergeScheduler.merge(this);
-    
+
     if (spec != null && doWait) {
       final int numMerges = spec.merges.size();
-      synchronized (this) {
+      synchronized(this) {
         boolean running = true;
-        while (running) {
-          
+        while(running) {
+
           if (hitOOM) {
-            throw new IllegalStateException(
-                "this writer hit an OutOfMemoryError; cannot complete forceMergeDeletes");
+            throw new IllegalStateException("this writer hit an OutOfMemoryError; cannot complete forceMergeDeletes");
           }
-          
+
           // Check each merge that MergePolicy asked us to
           // do, to see if any of them are still running and
           // if any of them have hit an exception.
           running = false;
-          for (int i = 0; i < numMerges; i++) {
+          for(int i=0;i<numMerges;i++) {
             final MergePolicy.OneMerge merge = spec.merges.get(i);
             if (pendingMerges.contains(merge) || runningMerges.contains(merge)) {
               running = true;
             }
             Throwable t = merge.getException();
             if (t != null) {
-              IOException ioe = new IOException(
-                  "background merge hit exception: "
-                      + merge.segString(directory));
+              IOException ioe = new IOException("background merge hit exception: " + merge.segString(directory));
               ioe.initCause(t);
               throw ioe;
             }
           }
-          
+
           // If any of our merges are still running, wait:
-          if (running) doWait();
+          if (running)
+            doWait();
         }
       }
     }
-    
+
     // NOTE: in the ConcurrentMergeScheduler case, when
     // doWait is false, we can return immediately while
     // background threads accomplish the merging
   }
-  
+
+
   /**
-   * Forces merging of all segments that have deleted documents. The actual
-   * merges to be executed are determined by the {@link MergePolicy}. For
-   * example, the default {@link TieredMergePolicy} will only pick a segment if
-   * the percentage of deleted docs is over 10%.
-   * 
-   * <p>
-   * This is often a horribly costly operation; rarely is it warranted.
-   * </p>
-   * 
-   * <p>
-   * To see how many deletions you have pending in your index, call
-   * {@link IndexReader#numDeletedDocs}.
-   * </p>
-   * 
-   * <p>
-   * <b>NOTE</b>: this method first flushes a new segment (if there are indexed
-   * documents), and applies all buffered deletes.
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
+   *  Forces merging of all segments that have deleted
+   *  documents.  The actual merges to be executed are
+   *  determined by the {@link MergePolicy}.  For example,
+   *  the default {@link TieredMergePolicy} will only
+   *  pick a segment if the percentage of
+   *  deleted docs is over 10%.
+   *
+   *  <p>This is often a horribly costly operation; rarely
+   *  is it warranted.</p>
+   *
+   *  <p>To see how
+   *  many deletions you have pending in your index, call
+   *  {@link IndexReader#numDeletedDocs}.</p>
+   *
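+   *  <p>For example, a sketch that only reclaims deletes when more
+   *  than 10% of the index is deleted (the threshold and {@code dir}
+   *  are assumptions of this sketch):
+   *
+   *  <pre class="prettyprint">
+   *  DirectoryReader reader = DirectoryReader.open(dir);
+   *  try {
+   *    if (reader.numDeletedDocs() &gt; reader.maxDoc() / 10) {
+   *      writer.forceMergeDeletes();
+   *    }
+   *  } finally {
+   *    reader.close();
+   *  }
+   *  </pre>
+   *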
+   *  <p><b>NOTE</b>: this method first flushes a new
+   *  segment (if there are indexed documents), and applies
+   *  all buffered deletes.
+   *
+   *  <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   *  you should immediately close the writer.  See <a
+   *  href="#OOME">above</a> for details.</p>
    */
   public void forceMergeDeletes() throws IOException {
     forceMergeDeletes(true);
   }
-  
+
   /**
-   * Expert: asks the mergePolicy whether any merges are necessary now and if
-   * so, runs the requested merges and then iterate (test again if merges are
-   * needed) until no more merges are returned by the mergePolicy.
-   * 
-   * Explicit calls to maybeMerge() are usually not necessary. The most common
-   * case is when merge policy parameters have changed.
+   * Expert: asks the mergePolicy whether any merges are
+   * necessary now and if so, runs the requested merges and
+   * then iterates (tests again if merges are needed) until no
+   * more merges are returned by the mergePolicy.
+   *
+   * Explicit calls to maybeMerge() are usually not
+   * necessary. The most common case is when merge policy
+   * parameters have changed.
    * 
    * This method will call the {@link MergePolicy} with
    * {@link MergeTrigger#EXPLICIT}.
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer.  See <a
+   * href="#OOME">above</a> for details.</p>
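+   *
+   * <p>For example, a sketch that asks for merges after loosening the
+   * merge policy (it assumes the writer was configured with a
+   * {@link TieredMergePolicy}):
+   *
+   * <pre class="prettyprint">
+   * TieredMergePolicy tmp = (TieredMergePolicy) writer.getConfig().getMergePolicy();
+   * tmp.setSegmentsPerTier(5.0);     // fewer segments per tier
+   * writer.maybeMerge();             // re-evaluate merges under the new settings
+   * </pre>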
    */
   public final void maybeMerge() throws IOException {
     maybeMerge(MergeTrigger.EXPLICIT, UNBOUNDED_MAX_MERGE_SEGMENTS);
   }
-  
-  private final void maybeMerge(MergeTrigger trigger, int maxNumSegments)
-      throws IOException {
+
+  private final void maybeMerge(MergeTrigger trigger, int maxNumSegments) throws IOException {
     ensureOpen(false);
     updatePendingMerges(trigger, maxNumSegments);
     mergeScheduler.merge(this);
   }
-  
-  private synchronized void updatePendingMerges(MergeTrigger trigger,
-      int maxNumSegments) throws IOException {
+
+  private synchronized void updatePendingMerges(MergeTrigger trigger, int maxNumSegments)
+    throws IOException {
     assert maxNumSegments == -1 || maxNumSegments > 0;
     assert trigger != null;
     if (stopMerges) {
       return;
     }
-    
+
     // Do not start new merges if we've hit OOME
     if (hitOOM) {
       return;
     }
-    
+
     final MergePolicy.MergeSpecification spec;
     if (maxNumSegments != UNBOUNDED_MAX_MERGE_SEGMENTS) {
-      assert trigger == MergeTrigger.EXPLICIT
-          || trigger == MergeTrigger.MERGE_FINISHED : "Expected EXPLICT or MERGE_FINISHED as trigger even with maxNumSegments set but was: "
-          + trigger.name();
-      spec = mergePolicy.findForcedMerges(segmentInfos, maxNumSegments,
-          Collections.unmodifiableMap(segmentsToMerge));
+      assert trigger == MergeTrigger.EXPLICIT || trigger == MergeTrigger.MERGE_FINISHED :
+        "Expected EXPLICT or MERGE_FINISHED as trigger even with maxNumSegments set but was: " + trigger.name();
+      spec = mergePolicy.findForcedMerges(segmentInfos, maxNumSegments, Collections.unmodifiableMap(segmentsToMerge));
       if (spec != null) {
         final int numMerges = spec.merges.size();
-        for (int i = 0; i < numMerges; i++) {
+        for(int i=0;i<numMerges;i++) {
           final MergePolicy.OneMerge merge = spec.merges.get(i);
           merge.maxNumSegments = maxNumSegments;
         }
       }
-      
+
     } else {
       spec = mergePolicy.findMerges(trigger, segmentInfos);
     }
-    
+
     if (spec != null) {
       final int numMerges = spec.merges.size();
-      for (int i = 0; i < numMerges; i++) {
+      for(int i=0;i<numMerges;i++) {
         registerMerge(spec.merges.get(i));
       }
     }
   }
-  
-  /**
-   * Expert: to be used by a {@link MergePolicy} to avoid selecting merges for
-   * segments already being merged. The returned collection is not cloned, and
-   * thus is only safe to access if you hold IndexWriter's lock (which you do
-   * when IndexWriter invokes the MergePolicy).
-   * 
-   * <p>
-   * Do not alter the returned collection!
-   */
+
+  /** Expert: to be used by a {@link MergePolicy} to avoid
+   *  selecting merges for segments already being merged.
+   *  The returned collection is not cloned, and thus is
+   *  only safe to access if you hold IndexWriter's lock
+   *  (which you do when IndexWriter invokes the
+   *  MergePolicy).
+   *
+   *  <p>Do not alter the returned collection! */
   public synchronized Collection<SegmentInfoPerCommit> getMergingSegments() {
     return mergingSegments;
   }
-  
+
   /**
    * Expert: the {@link MergeScheduler} calls this method to retrieve the next
    * merge requested by the MergePolicy
@@ -2177,7 +2124,7 @@
       return merge;
     }
   }
-  
+
   /**
    * Expert: returns true if there are merges waiting to be scheduled.
    * 
@@ -2186,97 +2133,96 @@
   public synchronized boolean hasPendingMerges() {
     return pendingMerges.size() != 0;
   }
-  
+
   /**
-   * Close the <code>IndexWriter</code> without committing any changes that have
-   * occurred since the last commit (or since it was opened, if commit hasn't
-   * been called). This removes any temporary files that had been created, after
-   * which the state of the index will be the same as it was when commit() was
-   * last called or when this writer was first opened. This also clears a
-   * previous call to {@link #prepareCommit}.
-   * 
-   * @throws IOException
-   *           if there is a low-level IO error
+   * Close the <code>IndexWriter</code> without committing
+   * any changes that have occurred since the last commit
+   * (or since it was opened, if commit hasn't been called).
+   * This removes any temporary files that had been created,
+   * after which the state of the index will be the same as
+   * it was when commit() was last called or when this
+   * writer was first opened.  This also clears a previous
+   * call to {@link #prepareCommit}.
+   * @throws IOException if there is a low-level IO error
    */
   @Override
   public void rollback() throws IOException {
     ensureOpen();
-    
+
     // Ensure that only one thread actually gets to do the
     // closing, and make sure no commit is also in progress:
-    synchronized (commitLock) {
+    synchronized(commitLock) {
       if (shouldClose()) {
         rollbackInternal();
       }
     }
   }
-  
+
   private void rollbackInternal() throws IOException {
-    
+
     boolean success = false;
-    
+
     if (infoStream.isEnabled("IW")) {
       infoStream.message("IW", "rollback");
     }
     
     try {
-      synchronized (this) {
+      synchronized(this) {
         finishMerges(false);
         stopMerges = true;
       }
-      
+
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "rollback: done finish merges");
       }
-      
+
       // Must pre-close these two, in case they increment
       // changeCount so that we can then set it to false
       // before calling closeInternal
       mergePolicy.close();
       mergeScheduler.close();
-      
+
       bufferedDeletesStream.clear();
-      docWriter.close(); // mark it as closed first to prevent subsequent
-                         // indexing actions/flushes
-      docWriter.abort();
-      synchronized (this) {
-        
+      docWriter.close(); // mark it as closed first to prevent subsequent indexing actions/flushes 
+      docWriter.abort(); // don't sync on IW here
+      synchronized(this) {
+
         if (pendingCommit != null) {
           pendingCommit.rollbackCommit(directory);
           deleter.decRef(pendingCommit);
           pendingCommit = null;
           notifyAll();
         }
-        
+
         // Don't bother saving any changes in our segmentInfos
         readerPool.dropAll(false);
-        
+
         // Keep the same segmentInfos instance but replace all
-        // of its SegmentInfo instances. This is so the next
+        // of its SegmentInfo instances.  This is so the next
         // attempt to commit using this instance of IndexWriter
         // will always write to a new generation ("write
         // once").
         segmentInfos.rollbackSegmentInfos(rollbackSegments);
-        if (infoStream.isEnabled("IW")) {
-          infoStream
-              .message("IW", "rollback: infos=" + segString(segmentInfos));
+        if (infoStream.isEnabled("IW") ) {
+          infoStream.message("IW", "rollback: infos=" + segString(segmentInfos));
         }
         
+
         assert testPoint("rollback before checkpoint");
-        
+
         // Ask deleter to locate unreferenced files & remove
         // them:
         deleter.checkpoint(segmentInfos, false);
         deleter.refresh();
-        
+
         lastCommitChangeCount = changeCount;
       }
-      
+
       success = true;
     } catch (OutOfMemoryError oom) {
       handleOOM(oom, "rollbackInternal");
     } finally {
-      synchronized (this) {
+      synchronized(this) {
         if (!success) {
           closing = false;
           notifyAll();
@@ -2286,114 +2232,131 @@
         }
       }
     }
-    
+
     closeInternal(false, false);
   }
-  
+
   /**
    * Delete all documents in the index.
-   * 
-   * <p>
-   * This method will drop all buffered documents and will remove all segments
-   * from the index. This change will not be visible until a {@link #commit()}
-   * has been called. This method can be rolled back using {@link #rollback()}.
-   * </p>
-   * 
-   * <p>
-   * NOTE: this method is much faster than using deleteDocuments( new
-   * MatchAllDocsQuery() ).
-   * </p>
-   * 
-   * <p>
-   * NOTE: this method will forcefully abort all merges in progress. If other
-   * threads are running {@link #forceMerge}, {@link #addIndexes(IndexReader[])}
-   * or {@link #forceMergeDeletes} methods, they may receive
-   * {@link MergePolicy.MergeAbortedException}s.
+   *
+   * <p>This method will drop all buffered documents and will
+   *    remove all segments from the index. This change will not be
+   *    visible until a {@link #commit()} has been called. This method
+   *    can be rolled back using {@link #rollback()}.</p>
+   *
+   * <p>NOTE: this method is much faster than using deleteDocuments( new MatchAllDocsQuery() ).
+   *    Yet, this method also has different semantics compared to {@link #deleteDocuments(Query)}
+   *    / {@link #deleteDocuments(Query...)}: internal data-structures are cleared, all segment
+   *    information is forcefully dropped, anti-viral semantics like omitting norms are reset,
+   *    and doc value types are cleared. Essentially, a call to {@link #deleteAll()} is equivalent
+   *    to creating a new {@link IndexWriter} with {@link OpenMode#CREATE}, whereas a delete query
+   *    only marks documents as deleted.</p>
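+   *
+   * <p>For example, a sketch of rebuilding an index in place (it assumes
+   * {@code writer} is the only writer open on the index):
+   *
+   * <pre class="prettyprint">
+   * writer.deleteAll();        // drops buffered docs and all segments
+   * // ... re-add the full document collection here ...
+   * writer.commit();           // only now does the rebuilt index become visible
+   * </pre>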
+   *
+   * <p>NOTE: this method will forcefully abort all merges
+   *    in progress.  If other threads are running {@link
+   *    #forceMerge}, {@link #addIndexes(IndexReader[])} or
+   *    {@link #forceMergeDeletes} methods, they may receive
+   *    {@link MergePolicy.MergeAbortedException}s.
    */
-  public synchronized void deleteAll() throws IOException {
+  public void deleteAll() throws IOException {
     ensureOpen();
+    // Remove any buffered docs
     boolean success = false;
-    try {
-      
-      // Abort any running merges
-      finishMerges(false);
-      
-      // Remove any buffered docs
-      docWriter.abort();
-      
-      // Remove all segments
-      segmentInfos.clear();
-      
-      // Ask deleter to locate unreferenced files & remove them:
-      deleter.checkpoint(segmentInfos, false);
-      deleter.refresh();
-      
-      globalFieldNumberMap.clear();
-
-      // Don't bother saving any changes in our segmentInfos
-      readerPool.dropAll(false);
-      
-      // Mark that the index has changed
-      ++changeCount;
-      segmentInfos.changed();
-      success = true;
-    } catch (OutOfMemoryError oom) {
-      handleOOM(oom, "deleteAll");
-    } finally {
-      if (!success) {
-        if (infoStream.isEnabled("IW")) {
-          infoStream.message("IW", "hit exception during deleteAll");
+    /* hold the full flush lock to prevent concurrent commits / NRT reopens from
+     * getting in our way and doing unnecessary work -- if we don't lock this here
+     * we might get in trouble. */
+    synchronized (fullFlushLock) { 
+      /*
+       * We first abort and trash everything we have in-memory and keep the
+       * thread-states locked; the lockAndAbortAll operation also guarantees
+       * "point in time" semantics, i.e. the checkpoint that we need in terms of
+       * the logical happens-before relationship in the DW, so we abort all
+       * in-memory structures. We also drop the global field numbering during
+       * abort to make sure it's just like a fresh index.
+       */
+      try {
+        docWriter.lockAndAbortAll();
+        synchronized (this) {
+          try {
+            // Abort any running merges
+            finishMerges(false);
+            // Remove all segments
+            segmentInfos.clear();
+            // Ask deleter to locate unreferenced files & remove them:
+            deleter.checkpoint(segmentInfos, false);
+            /* don't refresh the deleter here since there might
+             * be concurrent indexing requests coming in opening
+             * files on the directory after we called DW#abort();
+             * if we refreshed now, those indexing requests might hit FNF exceptions.
+             * We will remove the files incrementally as we go...
+             */
+            // Don't bother saving any changes in our segmentInfos
+            readerPool.dropAll(false);
+            // Mark that the index has changed
+            ++changeCount;
+            segmentInfos.changed();
+            globalFieldNumberMap.clear();
+            success = true;
+          } catch (OutOfMemoryError oom) {
+            handleOOM(oom, "deleteAll");
+          } finally {
+            if (!success) {
+              if (infoStream.isEnabled("IW")) {
+                infoStream.message("IW", "hit exception during deleteAll");
+              }
+            }
+          }
         }
+      } finally {
+        docWriter.unlockAllAfterAbortAll();
       }
     }
   }
-  
+
   private synchronized void finishMerges(boolean waitForMerges) {
     if (!waitForMerges) {
-      
+
       stopMerges = true;
-      
+
       // Abort all pending & running merges:
       for (final MergePolicy.OneMerge merge : pendingMerges) {
         if (infoStream.isEnabled("IW")) {
-          infoStream.message("IW", "now abort pending merge "
-              + segString(merge.segments));
+          infoStream.message("IW", "now abort pending merge " + segString(merge.segments));
         }
         merge.abort();
         mergeFinish(merge);
       }
       pendingMerges.clear();
-      
+
       for (final MergePolicy.OneMerge merge : runningMerges) {
         if (infoStream.isEnabled("IW")) {
-          infoStream.message("IW", "now abort running merge "
-              + segString(merge.segments));
+          infoStream.message("IW", "now abort running merge " + segString(merge.segments));
         }
         merge.abort();
       }
-      
+
       // These merges periodically check whether they have
-      // been aborted, and stop if so. We wait here to make
-      // sure they all stop. It should not take very long
+      // been aborted, and stop if so.  We wait here to make
+      // sure they all stop.  It should not take very long
       // because the merge threads periodically check if
       // they are aborted.
-      while (runningMerges.size() > 0) {
+      while(runningMerges.size() > 0) {
         if (infoStream.isEnabled("IW")) {
-          infoStream.message("IW", "now wait for " + runningMerges.size()
-              + " running merge/s to abort");
+          infoStream.message("IW", "now wait for " + runningMerges.size() + " running merge/s to abort");
         }
         doWait();
       }
-      
+
       stopMerges = false;
       notifyAll();
-      
+
       assert 0 == mergingSegments.size();
-      
+
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "all running merges have aborted");
       }
-      
+
     } else {
       // waitForMerges() will ensure any running addIndexes finishes.
       // It's fine if a new one attempts to start because from our
@@ -2403,35 +2366,34 @@
       waitForMerges();
     }
   }
-  
+
   /**
    * Wait for any currently outstanding merges to finish.
-   * 
-   * <p>
-   * It is guaranteed that any merges started prior to calling this method will
-   * have completed once this method completes.
-   * </p>
+   *
+   * <p>It is guaranteed that any merges started prior to calling this method
+   *    will have completed once this method completes.</p>
    */
   public synchronized void waitForMerges() {
     ensureOpen(false);
     if (infoStream.isEnabled("IW")) {
       infoStream.message("IW", "waitForMerges");
     }
-    while (pendingMerges.size() > 0 || runningMerges.size() > 0) {
+    while(pendingMerges.size() > 0 || runningMerges.size() > 0) {
       doWait();
     }
-    
+
     // sanity check
     assert 0 == mergingSegments.size();
-    
+
     if (infoStream.isEnabled("IW")) {
       infoStream.message("IW", "waitForMerges done");
     }
   }
-  
+
   /**
-   * Called whenever the SegmentInfos has been updated and the index files
-   * referenced exist (correctly) in the index directory.
+   * Called whenever the SegmentInfos has been updated and
+   * the index files referenced exist (correctly) in the
+   * index directory.
    */
   synchronized void checkpoint() throws IOException {
     changed();
@@ -2468,8 +2430,7 @@
    * segments SegmentInfo to the index writer.
    */
   synchronized void publishFlushedSegment(SegmentInfoPerCommit newSegment,
-      FrozenBufferedDeletes packet, FrozenBufferedDeletes globalPacket)
-      throws IOException {
+      FrozenBufferedDeletes packet, FrozenBufferedDeletes globalPacket) throws IOException {
     // Lock order IW -> BDS
     synchronized (bufferedDeletesStream) {
       if (infoStream.isEnabled("IW")) {
@@ -2479,7 +2440,7 @@
       if (globalPacket != null
           && (globalPacket.anyDeletes() || globalPacket.anyUpdates())) {
         bufferedDeletesStream.push(globalPacket);
-      }
+      } 
       // Publishing the segment must be synched on IW -> BDS to make the sure
       // that no merge prunes away the seg. private delete packet
       final long nextGen;
@@ -2491,8 +2452,7 @@
         nextGen = bufferedDeletesStream.getNextGen();
       }
       if (infoStream.isEnabled("IW")) {
-        infoStream.message("IW", "publish sets newSegment delGen=" + nextGen
-            + " seg=" + segString(newSegment));
+        infoStream.message("IW", "publish sets newSegment delGen=" + nextGen + " seg=" + segString(newSegment));
       }
       newSegment.setBufferedDeletesGen(nextGen);
       segmentInfos.add(newSegment);
@@ -2507,82 +2467,81 @@
     }
     return mergePolicy.useCompoundFile(segmentInfos, segmentInfo);
   }
-  
+
   private synchronized void resetMergeExceptions() {
     mergeExceptions = new ArrayList<MergePolicy.OneMerge>();
     mergeGen++;
   }
-  
+
   private void noDupDirs(Directory... dirs) {
     HashSet<Directory> dups = new HashSet<Directory>();
-    for (int i = 0; i < dirs.length; i++) {
-      if (dups.contains(dirs[i])) throw new IllegalArgumentException(
-          "Directory " + dirs[i] + " appears more than once");
-      if (dirs[i] == directory) throw new IllegalArgumentException(
-          "Cannot add directory to itself");
+    for(int i=0;i<dirs.length;i++) {
+      if (dups.contains(dirs[i]))
+        throw new IllegalArgumentException("Directory " + dirs[i] + " appears more than once");
+      if (dirs[i] == directory)
+        throw new IllegalArgumentException("Cannot add directory to itself");
       dups.add(dirs[i]);
     }
   }
-  
+
   /**
    * Adds all segments from an array of indexes into this index.
-   * 
+   *
+   * <p>This may be used to parallelize batch indexing. A large document
+   * collection can be broken into sub-collections. Each sub-collection can be
+   * indexed in parallel, on a different thread, process or machine. The
+   * complete index can then be created by merging sub-collection indexes
+   * with this method.
+   *
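+   * <p>For example, a sketch that merges two separately built
+   * sub-indexes into this writer's index (the paths are hypothetical):
+   *
+   * <pre class="prettyprint">
+   * Directory part1 = FSDirectory.open(new File("/indexes/part1"));
+   * Directory part2 = FSDirectory.open(new File("/indexes/part2"));
+   * writer.addIndexes(part1, part2);   // copies their segments into this index
+   * writer.commit();
+   * </pre>
+   *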
    * <p>
-   * This may be used to parallelize batch indexing. A large document collection
-   * can be broken into sub-collections. Each sub-collection can be indexed in
-   * parallel, on a different thread, process or machine. The complete index can
-   * then be created by merging sub-collection indexes with this method.
-   * 
-   * <p>
-   * <b>NOTE:</b> the index in each {@link Directory} must not be changed
-   * (opened by a writer) while this method is running. This method does not
-   * acquire a write lock in each input Directory, so it is up to the caller to
+   * <b>NOTE:</b> the index in each {@link Directory} must not be
+   * changed (opened by a writer) while this method is
+   * running.  This method does not acquire a write lock in
+   * each input Directory, so it is up to the caller to
    * enforce this.
-   * 
-   * <p>
-   * This method is transactional in how Exceptions are handled: it does not
-   * commit a new segments_N file until all indexes are added. This means if an
-   * Exception occurs (for example disk full), then either no indexes will have
-   * been added or they all will have been.
-   * 
-   * <p>
-   * Note that this requires temporary free space in the {@link Directory} up to
-   * 2X the sum of all input indexes (including the starting index). If
-   * readers/searchers are open against the starting index, then temporary free
-   * space required will be higher by the size of the starting index (see
-   * {@link #forceMerge(int)} for details).
-   * 
+   *
+   * <p>This method is transactional in how Exceptions are
+   * handled: it does not commit a new segments_N file until
+   * all indexes are added.  This means if an Exception
+   * occurs (for example disk full), then either no indexes
+   * will have been added or they all will have been.
+   *
+   * <p>Note that this requires temporary free space in the
+   * {@link Directory} up to 2X the sum of all input indexes
+   * (including the starting index). If readers/searchers
+   * are open against the starting index, then temporary
+   * free space required will be higher by the size of the
+   * starting index (see {@link #forceMerge(int)} for details).
+   *
    * <p>
    * <b>NOTE:</b> this method only copies the segments of the incoming indexes
    * and does not merge them. Therefore deleted documents are not removed and
    * the new segments are not merged with the existing ones.
-   * 
+   *
+   * <p>This requires this index not be among those to be added.
+   *
    * <p>
-   * This requires this index not be among those to be added.
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * 
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
+   * <b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer. See <a
+   * href="#OOME">above</a> for details.
+   *
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
    */
   public void addIndexes(Directory... dirs) throws IOException {
     ensureOpen();
-    
+
     noDupDirs(dirs);
-    
+
     try {
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "flush at addIndexes(Directory...)");
       }
-      
+
       flush(false, true);
-      
+
       List<SegmentInfoPerCommit> infos = new ArrayList<SegmentInfoPerCommit>();
-      
+
       boolean success = false;
       try {
         for (Directory dir : dirs) {
@@ -2591,18 +2550,14 @@
           }
           SegmentInfos sis = new SegmentInfos(); // read infos from dir
           sis.read(dir);
-          
+
           for (SegmentInfoPerCommit info : sis) {
-            assert !infos.contains(info) : "dup info dir=" + info.info.dir
-                + " name=" + info.info.name;
-            
+            assert !infos.contains(info): "dup info dir=" + info.info.dir + " name=" + info.info.name;
+
             String newSegName = newSegmentName();
-            
+
             if (infoStream.isEnabled("IW")) {
-              infoStream
-                  .message("IW", "addIndexes: process segment origName="
-                      + info.info.name + " newName=" + newSegName + " info="
-                      + info);
+              infoStream.message("IW", "addIndexes: process segment origName=" + info.info.name + " newName=" + newSegName + " info=" + info);
             }
 
             IOContext context = new IOContext(new MergeInfo(info.info.getDocCount(), info.sizeInBytes(), true, -1));
@@ -2616,16 +2571,17 @@
         success = true;
       } finally {
         if (!success) {
-          for (SegmentInfoPerCommit sipc : infos) {
-            for (String file : sipc.files()) {
+          for(SegmentInfoPerCommit sipc : infos) {
+            for(String file : sipc.files()) {
               try {
                 directory.deleteFile(file);
-              } catch (Throwable t) {}
+              } catch (Throwable t) {
+              }
             }
           }
         }
       }
-      
+
       synchronized (this) {
         success = false;
         try {
@@ -2633,11 +2589,12 @@
           success = true;
         } finally {
           if (!success) {
-            for (SegmentInfoPerCommit sipc : infos) {
-              for (String file : sipc.files()) {
+            for(SegmentInfoPerCommit sipc : infos) {
+              for(String file : sipc.files()) {
                 try {
                   directory.deleteFile(file);
-                } catch (Throwable t) {}
+                } catch (Throwable t) {
+                }
               }
             }
           }
@@ -2645,7 +2602,7 @@
         segmentInfos.addAll(infos);
         checkpoint();
       }
-      
+
     } catch (OutOfMemoryError oom) {
       handleOOM(oom, "addIndexes(Directory...)");
     }
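
A minimal usage sketch of addIndexes(Directory...), assuming a 4.x-era setup; the paths, analyzer and Version constant below are illustrative assumptions, not part of this change. The commit() call is what makes the copied segments durable.

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class MergeSubIndexes {
  public static void main(String[] args) throws Exception {
    Directory target = FSDirectory.open(new File("/tmp/target-index"));   // illustrative path
    IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_43,
                                                   new StandardAnalyzer(Version.LUCENE_43));
    IndexWriter writer = new IndexWriter(target, conf);
    try {
      // Each sub-index was built independently (e.g. on another machine);
      // addIndexes copies their segments in without merging or expunging deletes.
      writer.addIndexes(FSDirectory.open(new File("/tmp/part-1")),
                        FSDirectory.open(new File("/tmp/part-2")));
      writer.commit();   // transactional: either all inputs were added or none were
    } finally {
      writer.close();
    }
  }
}
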
@@ -2689,13 +2646,13 @@
   public void addIndexes(IndexReader... readers) throws IOException {
     ensureOpen();
     int numDocs = 0;
-    
+
     try {
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "flush at addIndexes(IndexReader...)");
       }
       flush(false, true);
-      
+
       String mergedName = newSegmentName();
       final List<AtomicReader> mergeReaders = new ArrayList<AtomicReader>();
       for (IndexReader indexReader : readers) {
@@ -2705,7 +2662,7 @@
         }
       }
       final IOContext context = new IOContext(new MergeInfo(numDocs, -1, true, -1));
-      
+
       // TODO: somehow we should fix this merge so it's
       // abortable so that IW.close(false) is able to stop it
       TrackingDirectoryWrapper trackingDir = new TrackingDirectoryWrapper(directory);
@@ -2719,11 +2676,11 @@
       MergeState mergeState;
       boolean success = false;
       try {
-        mergeState = merger.merge(); // merge 'em
+        mergeState = merger.merge();                // merge 'em
         success = true;
       } finally {
-        if (!success) {
-          synchronized (this) {
+        if (!success) { 
+          synchronized(this) {
             deleter.refresh(info.name);
           }
         }
@@ -2734,20 +2691,19 @@
       
       info.setFiles(new HashSet<String>(trackingDir.getCreatedFiles()));
       trackingDir.getCreatedFiles().clear();
-      
+
       setDiagnostics(info, SOURCE_ADDINDEXES_READERS);
-      
+
       boolean useCompoundFile;
-      synchronized (this) { // Guard segmentInfos
+      synchronized(this) { // Guard segmentInfos
         if (stopMerges) {
           deleter.deleteNewFiles(infoPerCommit.files());
           return;
         }
         ensureOpen();
-        useCompoundFile = mergePolicy.useCompoundFile(segmentInfos,
-            infoPerCommit);
+        useCompoundFile = mergePolicy.useCompoundFile(segmentInfos, infoPerCommit);
       }
-      
+
       // Now create the compound file if needed
       if (useCompoundFile) {
         Collection<String> filesToDelete = infoPerCommit.files();
@@ -2757,14 +2713,14 @@
         } finally {
           // delete new non cfs files directly: they were never
           // registered with IFD
-          synchronized (this) {
+          synchronized(this) {
             deleter.deleteNewFiles(filesToDelete);
           }
         }
         info.setUseCompoundFile(true);
       }
-      
-      // Have codec write SegmentInfo. Must do this after
+
+      // Have codec write SegmentInfo.  Must do this after
       // creating CFS so that 1) .si isn't slurped into CFS,
       // and 2) .si reflects useCompoundFile=true change
       // above:
@@ -2777,16 +2733,16 @@
         success = true;
       } finally {
         if (!success) {
-          synchronized (this) {
+          synchronized(this) {
             deleter.refresh(info.name);
           }
         }
       }
-      
+
       info.addFiles(trackingDir.getCreatedFiles());
-      
+
       // Register the new segment
-      synchronized (this) {
+      synchronized(this) {
         if (stopMerges) {
           deleter.deleteNewFiles(info.files());
           return;
@@ -2799,10 +2755,10 @@
       handleOOM(oom, "addIndexes(IndexReader...)");
     }
   }
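
A corresponding sketch for addIndexes(IndexReader...), assuming an already-open writer; the helper class and source directories are illustrative.

import java.io.IOException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;

class AddReadersSketch {
  // Merges the contents of the given source directories into 'writer'.
  static void addViaReaders(IndexWriter writer, Directory... sources) throws IOException {
    DirectoryReader[] readers = new DirectoryReader[sources.length];
    try {
      for (int i = 0; i < sources.length; i++) {
        readers[i] = DirectoryReader.open(sources[i]);
      }
      // Unlike addIndexes(Directory...), this path runs a real merge, so deleted
      // documents are dropped and a single new segment is produced.
      writer.addIndexes(readers);
      writer.commit();
    } finally {
      for (DirectoryReader r : readers) {
        if (r != null) r.close();
      }
    }
  }
}
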
-  
+
   /** Copies the segment files as-is into the IndexWriter's directory. */
-  private SegmentInfoPerCommit copySegmentAsIs(SegmentInfoPerCommit info,
-      String segName, IOContext context) throws IOException {
+  private SegmentInfoPerCommit copySegmentAsIs(SegmentInfoPerCommit info, String segName, IOContext context)
+      throws IOException {
     
     // note: we don't really need this fis (its copied), but we load it up
     // so we don't pass a null value to the si writer
@@ -2810,40 +2766,35 @@
     
     final Map<String,String> attributes;
     // copy the attributes map, we might modify it below.
-    // also we need to ensure its read-write, since we will invoke the SIwriter
-    // (which might want to set something).
+    // also we need to ensure it's read-write, since we will invoke the SIwriter (which might want to set something).
     if (info.info.attributes() == null) {
       attributes = new HashMap<String,String>();
     } else {
       attributes = new HashMap<String,String>(info.info.attributes());
     }
-    
-    // System.out.println("copy seg=" + info.info.name + " version=" +
-    // info.info.getVersion());
+
+    //System.out.println("copy seg=" + info.info.name + " version=" + info.info.getVersion());
     // Same SI as before but we change directory and name
-    SegmentInfo newInfo = new SegmentInfo(directory, info.info.getVersion(),
-        segName, info.info.getDocCount(), info.info.getUseCompoundFile(),
-        info.info.getCodec(), info.info.getDiagnostics(), attributes);
-    SegmentInfoPerCommit newInfoPerCommit = new SegmentInfoPerCommit(newInfo,
-        info.getDelCount(), info.getDelGen(), info.getUpdateGen());
-    
+    SegmentInfo newInfo = new SegmentInfo(directory, info.info.getVersion(), segName, info.info.getDocCount(),
+                                          info.info.getUseCompoundFile(),
+                                          info.info.getCodec(), info.info.getDiagnostics(), attributes);
+    SegmentInfoPerCommit newInfoPerCommit = new SegmentInfoPerCommit(newInfo, info.getDelCount(), info.getDelGen(), info.getUpdateGen());
+
     Set<String> segFiles = new HashSet<String>();
-    
-    // Build up new segment's file names. Must do this
+
+    // Build up new segment's file names.  Must do this
     // before writing SegmentInfo:
     for (String file : info.files()) {
       final String newFileName = getNewFileName(file, segName);
       segFiles.add(newFileName);
     }
     newInfo.setFiles(segFiles);
-    
-    // We must rewrite the SI file because it references segment name in its
-    // list of files, etc
-    TrackingDirectoryWrapper trackingDir = new TrackingDirectoryWrapper(
-        directory);
-    
+
+    // We must rewrite the SI file because it references segment name in its list of files, etc
+    TrackingDirectoryWrapper trackingDir = new TrackingDirectoryWrapper(directory);
+
     boolean success = false;
-    
+
     try {
       
       SegmentInfoWriter segmentInfoWriter = newInfo.getCodec()
@@ -2851,27 +2802,29 @@
       segmentInfoWriter.write(trackingDir, newInfo, fis, context);
       
       final Collection<String> siFiles = trackingDir.getCreatedFiles();
-      
+
       // Copy the segment's files
-      for (String file : info.files()) {
+      for (String file: info.files()) {
+
         final String newFileName = getNewFileName(file, segName);
+
         if (siFiles.contains(newFileName)) {
           // We already rewrote this above
           continue;
         }
-        
-        assert !directory.fileExists(newFileName) : "file \"" + newFileName
-            + "\" already exists; siFiles=" + siFiles;
-        
+
+        assert !directory.fileExists(newFileName): "file \"" + newFileName + "\" already exists; siFiles=" + siFiles;
+
         info.info.dir.copy(directory, file, newFileName, context);
       }
       success = true;
     } finally {
       if (!success) {
-        for (String file : newInfo.files()) {
+        for(String file : newInfo.files()) {
           try {
             directory.deleteFile(file);
-          } catch (Throwable t) {}
+          } catch (Throwable t) {
+          }
         }
       }
     }
@@ -2898,99 +2851,95 @@
    * is committed (new segments_N file written).
    */
   protected void doAfterFlush() throws IOException {}
-  
+
   /**
    * A hook for extending classes to execute operations before pending added and
    * deleted documents are flushed to the Directory.
    */
   protected void doBeforeFlush() throws IOException {}
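
A minimal sketch of how these two flush hooks can be used from an extending class; the subclass name and log messages are invented for illustration.

import java.io.IOException;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;

class InstrumentedIndexWriter extends IndexWriter {
  InstrumentedIndexWriter(Directory dir, IndexWriterConfig conf) throws IOException {
    super(dir, conf);
  }

  @Override
  protected void doBeforeFlush() throws IOException {
    System.out.println("about to flush buffered docs/deletes");
  }

  @Override
  protected void doAfterFlush() throws IOException {
    System.out.println("flush finished (segments written, not yet committed)");
  }
}
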
-  
-  /**
-   * <p>
-   * Expert: prepare for commit. This does the first phase of 2-phase commit.
-   * This method does all steps necessary to commit changes since this writer
-   * was opened: flushes pending added and deleted docs, syncs the index files,
-   * writes most of next segments_N file. After calling this you must call
-   * either {@link #commit()} to finish the commit, or {@link #rollback()} to
-   * revert the commit and undo all changes done since the writer was opened.
-   * </p>
-   * 
-   * <p>
-   * You can also just call {@link #commit()} directly without prepareCommit
-   * first in which case that method will internally call prepareCommit.
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
+
+  /** <p>Expert: prepare for commit.  This does the
+   *  first phase of 2-phase commit. This method does all
+   *  steps necessary to commit changes since this writer
+   *  was opened: flushes pending added and deleted docs,
+   *  syncs the index files, writes most of next segments_N
+   *  file.  After calling this you must call either {@link
+   *  #commit()} to finish the commit, or {@link
+   *  #rollback()} to revert the commit and undo all changes
+   *  done since the writer was opened.</p>
+   *
+   * <p>You can also just call {@link #commit()} directly
+   *  without prepareCommit first, in which case that method
+   *  will internally call prepareCommit.
+   *
+   *  <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   *  you should immediately close the writer.  See <a
+   *  href="#OOME">above</a> for details.</p>
    */
   @Override
   public final void prepareCommit() throws IOException {
     ensureOpen();
     prepareCommitInternal();
   }
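
A sketch of the intended prepareCommit()/commit()/rollback() call order when coordinating the index with a second resource; OtherResource is a hypothetical stand-in, not a Lucene API. Note that rollback() also closes the writer.

import java.io.IOException;
import org.apache.lucene.index.IndexWriter;

class TwoPhaseCommitSketch {
  // Hypothetical second participant in the transaction (not a Lucene API).
  interface OtherResource {
    void prepare() throws IOException;
    void commit() throws IOException;
    void rollback() throws IOException;
  }

  static void commitBoth(IndexWriter writer, OtherResource other) throws IOException {
    writer.prepareCommit();        // phase 1: flush, sync files, stage the next segments_N
    boolean done = false;
    try {
      other.prepare();
      writer.commit();             // phase 2: finish the prepared Lucene commit
      other.commit();
      done = true;
    } finally {
      if (!done) {
        writer.rollback();         // drops the prepared commit; this closes the writer
        other.rollback();
      }
    }
  }
}
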
-  
+
   private void prepareCommitInternal() throws IOException {
-    synchronized (commitLock) {
+    synchronized(commitLock) {
       ensureOpen(false);
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "prepareCommit: flush");
         infoStream.message("IW", "  index before flush " + segString());
       }
-      
+
       if (hitOOM) {
-        throw new IllegalStateException(
-            "this writer hit an OutOfMemoryError; cannot commit");
+        throw new IllegalStateException("this writer hit an OutOfMemoryError; cannot commit");
       }
-      
+
       if (pendingCommit != null) {
-        throw new IllegalStateException(
-            "prepareCommit was already called with no corresponding call to commit");
+        throw new IllegalStateException("prepareCommit was already called with no corresponding call to commit");
       }
-      
+
       doBeforeFlush();
       assert testPoint("startDoFlush");
       SegmentInfos toCommit = null;
       boolean anySegmentsFlushed = false;
-      
+
       // This is copied from doFlush, except it's modified to
       // clone & incRef the flushed SegmentInfos inside the
       // sync block:
-      
+
       try {
-        
+
         synchronized (fullFlushLock) {
           boolean flushSuccess = false;
           boolean success = false;
           try {
             anySegmentsFlushed = docWriter.flushAllThreads();
             if (!anySegmentsFlushed) {
-              // prevent double increment since docWriter#doFlush increments the
-              // flushcount
+              // prevent double increment since docWriter#doFlush increments the flushcount
               // if we flushed anything.
               flushCount.incrementAndGet();
             }
             flushSuccess = true;
-            
-            synchronized (this) {
+
+            synchronized(this) {
               maybeApplyDeletes(true);
-              
+
               readerPool.commit(segmentInfos);
-              
+
               // Must clone the segmentInfos while we still
               // hold fullFlushLock and while sync'd so that
               // no partial changes (eg a delete w/o
               // corresponding add from an updateDocument) can
               // sneak into the commit point:
               toCommit = segmentInfos.clone();
-              
+
               pendingCommitChangeCount = changeCount;
-              
+
               // This protects the segmentInfos we are now going
-              // to commit. This is important in case, eg, while
+              // to commit.  This is important in case, eg, while
               // we are trying to sync all referenced files, a
               // merge completes which would otherwise have
-              // removed the files we are now syncing.
+              // removed the files we are now syncing.    
               filesToCommit = toCommit.files(directory, false);
               deleter.incRef(filesToCommit);
             }
@@ -3009,7 +2958,7 @@
       } catch (OutOfMemoryError oom) {
         handleOOM(oom, "prepareCommit");
       }
-      
+ 
       boolean success = false;
       try {
         if (anySegmentsFlushed) {
@@ -3024,17 +2973,17 @@
           }
         }
       }
-      
+
       startCommit(toCommit);
     }
   }
   
   /**
    * Sets the commit user data map. That method is considered a transaction by
-   * {@link IndexWriter} and will be {@link #commit() committed} even if no
-   * other changes were made to the writer instance. Note that you must call
-   * this method before {@link #prepareCommit()}, or otherwise it won't be
-   * included in the follow-on {@link #commit()}.
+   * {@link IndexWriter} and will be {@link #commit() committed} even if no other
+   * changes were made to the writer instance. Note that you must call this method
+   * before {@link #prepareCommit()}, otherwise it won't be included in the
+   * follow-on {@link #commit()}.
    * <p>
    * <b>NOTE:</b> the map is cloned internally, therefore altering the map's
    * contents after calling this method has no effect.
@@ -3055,34 +3004,34 @@
   // Used only by commit and prepareCommit, below; lock
   // order is commitLock -> IW
   private final Object commitLock = new Object();
-  
+
   /**
-   * <p>
-   * Commits all pending changes (added & deleted documents, segment merges,
-   * added indexes, etc.) to the index, and syncs all referenced index files,
-   * such that a reader will see the changes and the index updates will survive
-   * an OS or machine crash or power loss. Note that this does not wait for any
-   * running background merges to finish. This may be a costly operation, so you
-   * should test the cost in your application and do it only when really
-   * necessary.
-   * </p>
-   * 
-   * <p>
-   * Note that this operation calls Directory.sync on the index files. That call
-   * should not return until the file contents & metadata are on stable storage.
-   * For FSDirectory, this calls the OS's fsync. But, beware: some hardware
-   * devices may in fact cache writes even during fsync, and return before the
-   * bits are actually on stable storage, to give the appearance of faster
-   * performance. If you have such a device, and it does not have a battery
-   * backup (for example) then on power loss it may still lose data. Lucene
-   * cannot guarantee consistency on such devices.
-   * </p>
-   * 
-   * <p>
-   * <b>NOTE</b>: if this method hits an OutOfMemoryError you should immediately
-   * close the writer. See <a href="#OOME">above</a> for details.
-   * </p>
-   * 
+   * <p>Commits all pending changes (added & deleted
+   * documents, segment merges, added
+   * indexes, etc.) to the index, and syncs all referenced
+   * index files, such that a reader will see the changes
+   * and the index updates will survive an OS or machine
+   * crash or power loss.  Note that this does not wait for
+   * any running background merges to finish.  This may be a
+   * costly operation, so you should test the cost in your
+   * application and do it only when really necessary.</p>
+   *
+   * <p> Note that this operation calls Directory.sync on
+   * the index files.  That call should not return until the
+   * file contents & metadata are on stable storage.  For
+   * FSDirectory, this calls the OS's fsync.  But, beware:
+   * some hardware devices may in fact cache writes even
+   * during fsync, and return before the bits are actually
+   * on stable storage, to give the appearance of faster
+   * performance.  If you have such a device, and it does
+   * not have a battery backup (for example) then on power
+   * loss it may still lose data.  Lucene cannot guarantee
+   * consistency on such devices.  </p>
+   *
+   * <p><b>NOTE</b>: if this method hits an OutOfMemoryError
+   * you should immediately close the writer.  See <a
+   * href="#OOME">above</a> for details.</p>
+   *
    * @see #prepareCommit
    */
   @Override
@@ -3090,20 +3039,20 @@
     ensureOpen();
     commitInternal();
   }
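
A small sketch of the commit contract described above, assuming an open writer and its Directory; the field name and value are arbitrary.

import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;

class CommitSketch {
  static void addAndCommit(IndexWriter writer, Directory dir) throws IOException {
    Document doc = new Document();
    doc.add(new TextField("body", "hello commit", Field.Store.NO));
    writer.addDocument(doc);
    writer.commit();                                      // syncs referenced files; survives a crash
    DirectoryReader reader = DirectoryReader.open(dir);   // a reader opened now sees the document
    try {
      System.out.println("numDocs=" + reader.numDocs());
    } finally {
      reader.close();
    }
  }
}
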
-  
+
   private final void commitInternal() throws IOException {
-    
+
     if (infoStream.isEnabled("IW")) {
       infoStream.message("IW", "commit: start");
     }
-    
-    synchronized (commitLock) {
+
+    synchronized(commitLock) {
       ensureOpen(false);
-      
+
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "commit: enter lock");
       }
-      
+
       if (pendingCommit == null) {
         if (infoStream.isEnabled("IW")) {
           infoStream.message("IW", "commit: now prepare");
@@ -3114,13 +3063,13 @@
           infoStream.message("IW", "commit: already prepared");
         }
       }
-      
+
       finishCommit();
     }
   }
-  
+
   private synchronized final void finishCommit() throws IOException {
-    
+
     if (pendingCommit != null) {
       try {
         if (infoStream.isEnabled("IW")) {
@@ -3128,8 +3077,7 @@
         }
         pendingCommit.finishCommit(directory);
         if (infoStream.isEnabled("IW")) {
-          infoStream.message("IW", "commit: wrote segments file \""
-              + pendingCommit.getSegmentsFileName() + "\"");
+          infoStream.message("IW", "commit: wrote segments file \"" + pendingCommit.getSegmentsFileName() + "\"");
         }
         lastCommitChangeCount = pendingCommitChangeCount;
         segmentInfos.updateGeneration(pendingCommit);
@@ -3140,72 +3088,70 @@
         deleter.decRef(filesToCommit);
         filesToCommit = null;
         pendingCommit = null;
-        updatesPending = false;
         notifyAll();
       }
-      
+
     } else {
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "commit: pendingCommit == null; skip");
       }
     }
-    
+
     if (infoStream.isEnabled("IW")) {
       infoStream.message("IW", "commit: done");
     }
   }
-  
+
   // Ensures only one flush() is actually flushing segments
   // at a time:
   private final Object fullFlushLock = new Object();
   
+  // for assert
+  boolean holdsFullFlushLock() {
+    return Thread.holdsLock(fullFlushLock);
+  }
+
   /**
-   * Flush all in-memory buffered updates (adds and deletes) to the Directory.
-   * 
-   * @param triggerMerge
-   *          if true, we may merge segments (if deletes or docs were flushed)
-   *          if necessary
-   * @param applyAllDeletes
-   *          whether pending deletes should also
+   * Flush all in-memory buffered updates (adds and deletes)
+   * to the Directory.
+   * @param triggerMerge if true, we may merge segments (if
+   *  deletes or docs were flushed) if necessary
+   * @param applyAllDeletes whether pending deletes should also be applied
    */
-  protected final void flush(boolean triggerMerge, boolean applyAllDeletes)
-      throws IOException {
-    
+  protected final void flush(boolean triggerMerge, boolean applyAllDeletes) throws IOException {
+
     // NOTE: this method cannot be sync'd because
     // maybeMerge() in turn calls mergeScheduler.merge which
     // in turn can take a long time to run and we don't want
-    // to hold the lock for that. In the case of
+    // to hold the lock for that.  In the case of
     // ConcurrentMergeScheduler this can lead to deadlock
     // when it stalls due to too many running merges.
-    
-    // We can be called during close, when closing==true, so we must pass false
-    // to ensureOpen:
+
+    // We can be called during close, when closing==true, so we must pass false to ensureOpen:
     ensureOpen(false);
     if (doFlush(applyAllDeletes) && triggerMerge) {
       maybeMerge(MergeTrigger.FULL_FLUSH, UNBOUNDED_MAX_MERGE_SEGMENTS);
     }
   }
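
flush() itself is protected and normally not called by applications; buffered docs and deletes are flushed automatically according to IndexWriterConfig, or as part of commit(). A sketch of the relevant knobs, with arbitrary example values.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.util.Version;

class FlushConfigSketch {
  static IndexWriterConfig ramOnlyFlushing() {
    IndexWriterConfig conf = new IndexWriterConfig(Version.LUCENE_43,
                                                   new StandardAnalyzer(Version.LUCENE_43));
    conf.setRAMBufferSizeMB(48.0);                                   // flush once ~48 MB of docs/deletes is buffered
    conf.setMaxBufferedDocs(IndexWriterConfig.DISABLE_AUTO_FLUSH);   // don't also flush by document count
    return conf;
  }
}
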
-  
+
   private boolean doFlush(boolean applyAllDeletes) throws IOException {
     if (hitOOM) {
-      throw new IllegalStateException(
-          "this writer hit an OutOfMemoryError; cannot flush");
+      throw new IllegalStateException("this writer hit an OutOfMemoryError; cannot flush");
     }
-    
+
     doBeforeFlush();
     assert testPoint("startDoFlush");
     boolean success = false;
     try {
-      
+
       if (infoStream.isEnabled("IW")) {
-        infoStream.message("IW", "  start flush: applyAllDeletes="
-            + applyAllDeletes);
+        infoStream.message("IW", "  start flush: applyAllDeletes=" + applyAllDeletes);
         infoStream.message("IW", "  index before flush " + segString());
       }
       final boolean anySegmentFlushed;
       
       synchronized (fullFlushLock) {
-        boolean flushSuccess = false;
+        boolean flushSuccess = false;
         try {
           anySegmentFlushed = docWriter.flushAllThreads();
           flushSuccess = true;
@@ -3213,7 +3159,7 @@
           docWriter.finishFullFlush(flushSuccess);
         }
       }
-      synchronized (this) {
+      synchronized(this) {
         maybeApplyDeletes(applyAllDeletes);
         doAfterFlush();
         if (!anySegmentFlushed) {
@@ -3236,32 +3182,27 @@
     }
   }
   
-  final synchronized void maybeApplyDeletes(boolean applyAllDeletes)
-      throws IOException {
+  final synchronized void maybeApplyDeletes(boolean applyAllDeletes) throws IOException {
     if (applyAllDeletes) {
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "apply all deletes during flush");
       }
       applyAllDeletes();
     } else if (infoStream.isEnabled("IW")) {
-      infoStream.message("IW", "don't apply deletes now delTermCount="
-          + bufferedDeletesStream.numTerms() + " bytesUsed="
-          + bufferedDeletesStream.bytesUsed());
+      infoStream.message("IW", "don't apply deletes now delTermCount=" + bufferedDeletesStream.numTerms() + " bytesUsed=" + bufferedDeletesStream.bytesUsed());
     }
   }
   
   final synchronized void applyAllDeletes() throws IOException {
     flushDeletesCount.incrementAndGet();
     final BufferedDeletesStream.ApplyDeletesResult result;
-    result = bufferedDeletesStream.applyDeletes(readerPool,
-        segmentInfos.asList());
+    result = bufferedDeletesStream.applyDeletes(readerPool, segmentInfos.asList());
     if (result.anyDeletes) {
       checkpoint();
     }
     if (!keepFullyDeletedSegments && result.allDeleted != null) {
       if (infoStream.isEnabled("IW")) {
-        infoStream.message("IW", "drop 100% deleted segments: "
-            + segString(result.allDeleted));
+        infoStream.message("IW", "drop 100% deleted segments: " + segString(result.allDeleted));
       }
       for (SegmentInfoPerCommit info : result.allDeleted) {
         // If a merge has already registered for this
@@ -3277,15 +3218,13 @@
     }
     bufferedDeletesStream.prune(segmentInfos);
   }
-  
-  /**
-   * Expert: Return the total size of all index files currently cached in
-   * memory. Useful for size management with flushRamDocs()
+
+  /** Expert:  Return the total size of all index files currently cached in memory.
+   * Useful for size management with flushRamDocs()
    */
   public final long ramSizeInBytes() {
     ensureOpen();
-    return docWriter.flushControl.netBytes()
-        + bufferedDeletesStream.bytesUsed();
+    return docWriter.flushControl.netBytes() + bufferedDeletesStream.bytesUsed();
   }
   
   // for testing only
@@ -3294,21 +3233,18 @@
     assert test = true;
     return test ? docWriter : null;
   }
-  
-  /**
-   * Expert: Return the number of documents currently buffered in RAM.
-   */
+
+  /** Expert:  Return the number of documents currently
+   *  buffered in RAM. */
   public final synchronized int numRamDocs() {
     ensureOpen();
     return docWriter.getNumDocs();
   }
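
A sketch of how these two expert accessors might be polled for diagnostics; the helper class is illustrative.

import org.apache.lucene.index.IndexWriter;

class RamStatsSketch {
  // Logs the writer's buffered state; figures are approximate and change as flushes run.
  static void logBufferedState(IndexWriter writer) {
    long bytes = writer.ramSizeInBytes();     // buffered docs + buffered deletes, in bytes
    int docs = writer.numRamDocs();           // documents not yet flushed to a segment
    System.out.println("buffered: " + docs + " docs, " + bytes + " bytes");
  }
}
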
-  
+
   private synchronized void ensureValidMerge(MergePolicy.OneMerge merge) {
-    for (SegmentInfoPerCommit info : merge.segments) {
+    for(SegmentInfoPerCommit info : merge.segments) {
       if (!segmentInfos.contains(info)) {
-        throw new MergePolicy.MergeException("MergePolicy selected a segment ("
-            + info.info.name + ") that is not in the current index "
-            + segString(), directory);
+        throw new MergePolicy.MergeException("MergePolicy selected a segment (" + info.info.name + ") that is not in the current index " + segString(), directory);
       }
     }
   }
@@ -3325,19 +3261,18 @@
   synchronized private ReadersAndLiveDocs commitMergedDeletes(MergePolicy.OneMerge merge, MergeState mergeState) throws IOException {
 
     assert testPoint("startCommitMergeDeletes");
-    
+
     final List<SegmentInfoPerCommit> sourceSegments = merge.segments;
-    
+
     if (infoStream.isEnabled("IW")) {
-      infoStream.message("IW", "commitMergeDeletes "
-          + segString(merge.segments));
+      infoStream.message("IW", "commitMergeDeletes " + segString(merge.segments));
     }
-    
+
     // Carefully merge deletes that occurred after we
     // started merging:
     int docUpto = 0;
     long minGen = Long.MAX_VALUE;
-    
+
     // Lazy init (only when we find a delete to carry over):
     ReadersAndLiveDocs mergedDeletes = null;
     MergePolicy.DocMap docMap = null;
@@ -3350,35 +3285,35 @@
       final Bits currentLiveDocs;
       final ReadersAndLiveDocs rld = readerPool.get(info, false);
       // We hold a ref so it should still be in the pool:
-      assert rld != null : "seg=" + info.info.name;
+      assert rld != null: "seg=" + info.info.name;
       currentLiveDocs = rld.getLiveDocs();
-      
+
       if (prevLiveDocs != null) {
-        
+
         // If we had deletions on starting the merge we must
         // still have deletions now:
         assert currentLiveDocs != null;
         assert prevLiveDocs.length() == docCount;
         assert currentLiveDocs.length() == docCount;
-        
+
         // There were deletes on this segment when the merge
-        // started. The merge has collapsed away those
+        // started.  The merge has collapsed away those
         // deletes, but, if new deletes were flushed since
         // the merge started, we must now carefully keep any
         // newly flushed deletes but mapping them to the new
         // docIDs.
-        
+
         // Since we copy-on-write, if any new deletes were
         // applied after merging has started, we can just
         // check if the before/after liveDocs have changed.
         // If so, we must carefully merge the liveDocs one
         // doc at a time:
         if (currentLiveDocs != prevLiveDocs) {
-          
+
           // This means this segment received new deletes
           // since we started the merge, so we
           // must merge them:
-          for (int j = 0; j < docCount; j++) {
+          for(int j=0;j<docCount;j++) {
             if (!prevLiveDocs.get(j)) {
               assert !currentLiveDocs.get(j);
             } else {
@@ -3395,14 +3330,13 @@
             }
           }
         } else {
-          docUpto += info.info.getDocCount() - info.getDelCount()
-              - rld.getPendingDeleteCount();
+          docUpto += info.info.getDocCount() - info.getDelCount() - rld.getPendingDeleteCount();
         }
       } else if (currentLiveDocs != null) {
         assert currentLiveDocs.length() == docCount;
         // This segment had no deletes before but now it
         // does:
-        for (int j = 0; j < docCount; j++) {
+        for(int j=0; j<docCount; j++) {
           if (!currentLiveDocs.get(j)) {
             if (mergedDeletes == null) {
               mergedDeletes = readerPool.get(merge.info, true);
@@ -3419,39 +3353,36 @@
         docUpto += info.info.getDocCount();
       }
     }
-    
+
     assert docUpto == merge.info.info.getDocCount();
-    
+
     if (infoStream.isEnabled("IW")) {
       if (mergedDeletes == null) {
         infoStream.message("IW", "no new deletes since merge started");
       } else {
-        infoStream.message("IW", mergedDeletes.getPendingDeleteCount()
-            + " new deletes since merge started");
+        infoStream.message("IW", mergedDeletes.getPendingDeleteCount() + " new deletes since merge started");
       }
     }
-    
+
     merge.info.setBufferedDeletesGen(minGen);
-    
+
     return mergedDeletes;
   }
 
   synchronized private boolean commitMerge(MergePolicy.OneMerge merge, MergeState mergeState) throws IOException {
 
     assert testPoint("startCommitMerge");
-    
+
     if (hitOOM) {
-      throw new IllegalStateException(
-          "this writer hit an OutOfMemoryError; cannot complete merge");
+      throw new IllegalStateException("this writer hit an OutOfMemoryError; cannot complete merge");
     }
-    
+
     if (infoStream.isEnabled("IW")) {
-      infoStream.message("IW", "commitMerge: " + segString(merge.segments)
-          + " index=" + segString());
+      infoStream.message("IW", "commitMerge: " + segString(merge.segments) + " index=" + segString());
     }
-    
+
     assert merge.registerDone;
-    
+
     // If merge was explicitly aborted, or, if rollback() or
     // rollbackTransaction() had been called since our merge
     // started (which results in an unqualified
@@ -3469,51 +3400,48 @@
     final ReadersAndLiveDocs mergedDeletes =  merge.info.info.getDocCount() == 0 ? null : commitMergedDeletes(merge, mergeState);
 
     assert mergedDeletes == null || mergedDeletes.getPendingDeleteCount() != 0;
-    
+
     // If the doc store we are using has been closed and
     // is in now compound format (but wasn't when we
     // started), then we will switch to the compound
     // format as well:
-    
+
     assert !segmentInfos.contains(merge.info);
-    
-    final boolean allDeleted = merge.segments.size() == 0
-        || merge.info.info.getDocCount() == 0
-        || (mergedDeletes != null && mergedDeletes.getPendingDeleteCount() == merge.info.info
-            .getDocCount());
-    
+
+    final boolean allDeleted = merge.segments.size() == 0 ||
+      merge.info.info.getDocCount() == 0 ||
+      (mergedDeletes != null &&
+       mergedDeletes.getPendingDeleteCount() == merge.info.info.getDocCount());
+
     if (infoStream.isEnabled("IW")) {
       if (allDeleted) {
-        infoStream.message("IW", "merged segment " + merge.info
-            + " is 100% deleted"
-            + (keepFullyDeletedSegments ? "" : "; skipping insert"));
+        infoStream.message("IW", "merged segment " + merge.info + " is 100% deleted" +  (keepFullyDeletedSegments ? "" : "; skipping insert"));
       }
     }
-    
+
     final boolean dropSegment = allDeleted && !keepFullyDeletedSegments;
-    
+
     // If we merged no segments then we better be dropping
     // the new segment:
     assert merge.segments.size() > 0 || dropSegment;
-    
-    assert merge.info.info.getDocCount() != 0 || keepFullyDeletedSegments
-        || dropSegment;
-    
+
+    assert merge.info.info.getDocCount() != 0 || keepFullyDeletedSegments || dropSegment;
+
     segmentInfos.applyMergeChanges(merge, dropSegment);
-    
+
     if (mergedDeletes != null) {
       if (dropSegment) {
         mergedDeletes.dropChanges();
       }
       readerPool.release(mergedDeletes);
     }
-    
+
     if (dropSegment) {
       assert !segmentInfos.contains(merge.info);
       readerPool.drop(merge.info);
       deleter.deleteNewFiles(merge.info.files());
     }
-    
+
     boolean success = false;
     try {
       // Must close before checkpoint, otherwise IFD won't be
@@ -3535,38 +3463,35 @@
         }
       }
     }
-    
+
     deleter.deletePendingFiles();
-    deleter.deletePendingFiles();
-    
+
     if (infoStream.isEnabled("IW")) {
       infoStream.message("IW", "after commitMerge: " + segString());
     }
-    
+
     if (merge.maxNumSegments != -1 && !dropSegment) {
       // cascade the forceMerge:
       if (!segmentsToMerge.containsKey(merge.info)) {
         segmentsToMerge.put(merge.info, Boolean.FALSE);
       }
     }
-    
+
     return true;
   }
-  
-  final private void handleMergeException(Throwable t,
-      MergePolicy.OneMerge merge) throws IOException {
-    
+
+  final private void handleMergeException(Throwable t, MergePolicy.OneMerge merge) throws IOException {
+
     if (infoStream.isEnabled("IW")) {
-      infoStream.message("IW", "handleMergeException: merge="
-          + segString(merge.segments) + " exc=" + t);
+      infoStream.message("IW", "handleMergeException: merge=" + segString(merge.segments) + " exc=" + t);
     }
-    
+
     // Set the exception on the merge, so if
     // forceMerge is waiting on us it sees the root
     // cause exception:
     merge.setException(t);
     addMergeException(merge);
-    
+
     if (t instanceof MergePolicy.MergeAbortedException) {
       // We can ignore this exception (it happens when
       // close(false) or rollback is called), unless the
@@ -3574,40 +3499,43 @@
       // in which case we must throw it so, for example, the
       // rollbackTransaction code in addIndexes* is
       // executed.
-      if (merge.isExternal) throw (MergePolicy.MergeAbortedException) t;
-    } else if (t instanceof IOException) throw (IOException) t;
-    else if (t instanceof RuntimeException) throw (RuntimeException) t;
-    else if (t instanceof Error) throw (Error) t;
+      if (merge.isExternal)
+        throw (MergePolicy.MergeAbortedException) t;
+    } else if (t instanceof IOException)
+      throw (IOException) t;
+    else if (t instanceof RuntimeException)
+      throw (RuntimeException) t;
+    else if (t instanceof Error)
+      throw (Error) t;
     else
-    // Should not get here
-    throw new RuntimeException(t);
+      // Should not get here
+      throw new RuntimeException(t);
   }
-  
+
   /**
-   * Merges the indicated segments, replacing them in the stack with a single
-   * segment.
+   * Merges the indicated segments, replacing them in the stack with a
+   * single segment.
    * 
    * @lucene.experimental
    */
   public void merge(MergePolicy.OneMerge merge) throws IOException {
-    
+
     boolean success = false;
-    
+
     final long t0 = System.currentTimeMillis();
-    
+
     try {
       try {
         try {
           mergeInit(merge);
-          // if (merge.info != null) {
-          // System.out.println("MERGE: " + merge.info.info.name);
-          // }
-          
+          //if (merge.info != null) {
+          //System.out.println("MERGE: " + merge.info.info.name);
+          //}
+
           if (infoStream.isEnabled("IW")) {
-            infoStream.message("IW", "now merge\n  merge="
-                + segString(merge.segments) + "\n  index=" + segString());
+            infoStream.message("IW", "now merge\n  merge=" + segString(merge.segments) + "\n  index=" + segString());
           }
-          
+
           mergeMiddle(merge);
           mergeSuccess(merge);
           success = true;
@@ -3615,9 +3543,9 @@
           handleMergeException(t, merge);
         }
       } finally {
-        synchronized (this) {
+        synchronized(this) {
           mergeFinish(merge);
-          
+
           if (!success) {
             if (infoStream.isEnabled("IW")) {
               infoStream.message("IW", "hit exception during merge");
@@ -3626,14 +3554,12 @@
               deleter.refresh(merge.info.info.name);
             }
           }
-          
+
           // This merge (and, generally, any change to the
           // segments) may now enable new merges, so we call
           // merge policy & update pending merges.
-          if (success && !merge.isAborted()
-              && (merge.maxNumSegments != -1 || (!closed && !closing))) {
-            updatePendingMerges(MergeTrigger.MERGE_FINISHED,
-                merge.maxNumSegments);
+          if (success && !merge.isAborted() && (merge.maxNumSegments != -1 || (!closed && !closing))) {
+            updatePendingMerges(MergeTrigger.MERGE_FINISHED, merge.maxNumSegments);
           }
         }
       }
@@ -3642,52 +3568,44 @@
     }
     if (merge.info != null && !merge.isAborted()) {
       if (infoStream.isEnabled("IW")) {
-        infoStream.message("IW", "merge time "
-            + (System.currentTimeMillis() - t0) + " msec for "
-            + merge.info.info.getDocCount() + " docs");
+        infoStream.message("IW", "merge time " + (System.currentTimeMillis()-t0) + " msec for " + merge.info.info.getDocCount() + " docs");
       }
     }
   }
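
For orientation, roughly the drain loop a serial merge scheduler runs around merge(OneMerge) (in the spirit of SerialMergeScheduler); this sketch assumes the package-private getNextMerge() accessor, so it only compiles inside org.apache.lucene.index, and it is not part of this change.

package org.apache.lucene.index;   // getNextMerge() is package-private

import java.io.IOException;

final class MergeDrainSketch {
  static void drainPendingMerges(IndexWriter writer) throws IOException {
    while (true) {
      MergePolicy.OneMerge merge = writer.getNextMerge();   // next registered, pending merge (or null)
      if (merge == null) {
        break;
      }
      writer.merge(merge);   // does the actual (slow) merge work on the calling thread
    }
  }
}
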
-  
+
   /** Hook that's called when the specified merge is complete. */
-  void mergeSuccess(MergePolicy.OneMerge merge) {}
-  
-  /**
-   * Checks whether this merge involves any segments already participating in a
-   * merge. If not, this merge is "registered", meaning we record that its
-   * segments are now participating in a merge, and true is returned. Else (the
-   * merge conflicts) false is returned.
-   */
-  final synchronized boolean registerMerge(MergePolicy.OneMerge merge)
-      throws IOException {
-    
+  void mergeSuccess(MergePolicy.OneMerge merge) {
+  }
+
+  /** Checks whether this merge involves any segments
+   *  already participating in a merge.  If not, this merge
+   *  is "registered", meaning we record that its segments
+   *  are now participating in a merge, and true is
+   *  returned.  Else (the merge conflicts) false is
+   *  returned. */
+  final synchronized boolean registerMerge(MergePolicy.OneMerge merge) throws IOException {
+
     if (merge.registerDone) {
       return true;
     }
     assert merge.segments.size() > 0;
-    
+
     if (stopMerges) {
       merge.abort();
-      throw new MergePolicy.MergeAbortedException("merge is aborted: "
-          + segString(merge.segments));
+      throw new MergePolicy.MergeAbortedException("merge is aborted: " + segString(merge.segments));
     }
-    
+
     boolean isExternal = false;
-    for (SegmentInfoPerCommit info : merge.segments) {
+    for(SegmentInfoPerCommit info : merge.segments) {
       if (mergingSegments.contains(info)) {
         if (infoStream.isEnabled("IW")) {
-          infoStream
-              .message("IW", "reject merge " + segString(merge.segments)
-                  + ": segment " + segString(info)
-                  + " is already marked for merge");
+          infoStream.message("IW", "reject merge " + segString(merge.segments) + ": segment " + segString(info) + " is already marked for merge");
         }
         return false;
       }
       if (!segmentInfos.contains(info)) {
         if (infoStream.isEnabled("IW")) {
-          infoStream.message("IW", "reject merge " + segString(merge.segments)
-              + ": segment " + segString(info)
-              + " does not exist in live infos");
+          infoStream.message("IW", "reject merge " + segString(merge.segments) + ": segment " + segString(info) + " does not exist in live infos");
         }
         return false;
       }
@@ -3698,20 +3616,18 @@
         merge.maxNumSegments = mergeMaxNumSegments;
       }
     }
-    
+
     ensureValidMerge(merge);
-    
+
     pendingMerges.add(merge);
-    
+
     if (infoStream.isEnabled("IW")) {
-      infoStream.message("IW", "add merge to pendingMerges: "
-          + segString(merge.segments) + " [total " + pendingMerges.size()
-          + " pending]");
+      infoStream.message("IW", "add merge to pendingMerges: " + segString(merge.segments) + " [total " + pendingMerges.size() + " pending]");
     }
-    
+
     merge.mergeGen = mergeGen;
     merge.isExternal = isExternal;
-    
+
     // OK it does not conflict; now record that this merge
     // is running (while synchronized) to avoid race
     // condition where two conflicting merges from different
@@ -3719,23 +3635,22 @@
     if (infoStream.isEnabled("IW")) {
       StringBuilder builder = new StringBuilder("registerMerge merging= [");
       for (SegmentInfoPerCommit info : mergingSegments) {
-        builder.append(info.info.name).append(", ");
+        builder.append(info.info.name).append(", ");  
       }
       builder.append("]");
-      // don't call mergingSegments.toString() could lead to
-      // ConcurrentModException
+      // don't call mergingSegments.toString(): it could lead to ConcurrentModException
       // since merge updates the segments FieldInfos
       if (infoStream.isEnabled("IW")) {
-        infoStream.message("IW", builder.toString());
+        infoStream.message("IW", builder.toString());  
       }
     }
-    for (SegmentInfoPerCommit info : merge.segments) {
+    for(SegmentInfoPerCommit info : merge.segments) {
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "registerMerge info=" + segString(info));
       }
       mergingSegments.add(info);
     }
-    
+
     assert merge.estimatedMergeBytes == 0;
     assert merge.totalMergeBytes == 0;
     for(SegmentInfoPerCommit info : merge.segments) {
@@ -3750,16 +3665,13 @@
 
     // Merge is now registered
     merge.registerDone = true;
-    
+
     return true;
   }
-  
-  /**
-   * Does initial setup for a merge, which is fast but holds the synchronized
-   * lock on IndexWriter instance.
-   */
-  final synchronized void mergeInit(MergePolicy.OneMerge merge)
-      throws IOException {
+
+  /** Does initial setup for a merge, which is fast but holds
+   *  the synchronized lock on IndexWriter instance.  */
+  final synchronized void mergeInit(MergePolicy.OneMerge merge) throws IOException {
     boolean success = false;
     try {
       _mergeInit(merge);
@@ -3773,48 +3685,44 @@
       }
     }
   }
-  
-  synchronized private void _mergeInit(MergePolicy.OneMerge merge)
-      throws IOException {
-    
+
+  synchronized private void _mergeInit(MergePolicy.OneMerge merge) throws IOException {
+
     assert testPoint("startMergeInit");
-    
+
     assert merge.registerDone;
     assert merge.maxNumSegments == -1 || merge.maxNumSegments > 0;
-    
+
     if (hitOOM) {
-      throw new IllegalStateException(
-          "this writer hit an OutOfMemoryError; cannot merge");
+      throw new IllegalStateException("this writer hit an OutOfMemoryError; cannot merge");
     }
-    
+
     if (merge.info != null) {
       // mergeInit already done
       return;
     }
-    
+
     if (merge.isAborted()) {
       return;
     }
-    
+
     // TODO: in the non-pool'd case this is somewhat
     // wasteful, because we open these readers, close them,
-    // and then open them again for merging. Maybe we
+    // and then open them again for merging.  Maybe we
     // could pre-pool them somehow in that case...
-    
+
     // Lock order: IW -> BD
-    final BufferedDeletesStream.ApplyDeletesResult result = bufferedDeletesStream
-        .applyDeletes(readerPool, merge.segments);
-    
+    final BufferedDeletesStream.ApplyDeletesResult result = bufferedDeletesStream.applyDeletes(readerPool, merge.segments);
+
     if (result.anyDeletes) {
       checkpoint();
     }
-    
+
     if (!keepFullyDeletedSegments && result.allDeleted != null) {
       if (infoStream.isEnabled("IW")) {
-        infoStream.message("IW", "drop 100% deleted segments: "
-            + result.allDeleted);
+        infoStream.message("IW", "drop 100% deleted segments: " + result.allDeleted);
       }
-      for (SegmentInfoPerCommit info : result.allDeleted) {
+      for(SegmentInfoPerCommit info : result.allDeleted) {
         segmentInfos.remove(info);
         if (merge.segments.contains(info)) {
           mergingSegments.remove(info);
@@ -3824,7 +3732,7 @@
       }
       checkpoint();
     }
-    
+
     // Bind a new segment name here so even with
     // ConcurrentMergePolicy we keep deterministic segment
     // names.
@@ -3838,19 +3746,17 @@
 
     // Lock order: IW -> BD
     bufferedDeletesStream.prune(segmentInfos);
-    
+
     if (infoStream.isEnabled("IW")) {
-      infoStream.message("IW", "merge seg=" + merge.info.info.name + " "
-          + segString(merge.segments));
+      infoStream.message("IW", "merge seg=" + merge.info.info.name + " " + segString(merge.segments));
     }
   }
-  
+
   static void setDiagnostics(SegmentInfo info, String source) {
     setDiagnostics(info, source, null);
   }
-  
-  private static void setDiagnostics(SegmentInfo info, String source,
-      Map<String,String> details) {
+
+  private static void setDiagnostics(SegmentInfo info, String source, Map<String,String> details) {
     Map<String,String> diagnostics = new HashMap<String,String>();
     diagnostics.put("source", source);
     diagnostics.put("lucene.version", Constants.LUCENE_VERSION);
@@ -3865,43 +3771,39 @@
     }
     info.setDiagnostics(diagnostics);
   }
-  
-  /**
-   * Does fininishing for a merge, which is fast but holds the synchronized lock
-   * on IndexWriter instance.
-   */
+
+  /** Does finishing for a merge, which is fast but holds
+   *  the synchronized lock on IndexWriter instance. */
   final synchronized void mergeFinish(MergePolicy.OneMerge merge) {
-    
+
     // forceMerge, addIndexes or finishMerges may be waiting
     // on merges to finish.
     notifyAll();
-    
+
     // It's possible we are called twice, eg if there was an
     // exception inside mergeInit
     if (merge.registerDone) {
       final List<SegmentInfoPerCommit> sourceSegments = merge.segments;
-      for (SegmentInfoPerCommit info : sourceSegments) {
+      for(SegmentInfoPerCommit info : sourceSegments) {
         mergingSegments.remove(info);
       }
       merge.registerDone = false;
     }
-    
+
     runningMerges.remove(merge);
   }
-  
-  private final synchronized void closeMergeReaders(MergePolicy.OneMerge merge,
-      boolean suppressExceptions) throws IOException {
+
+  private final synchronized void closeMergeReaders(MergePolicy.OneMerge merge, boolean suppressExceptions) throws IOException {
     final int numSegments = merge.readers.size();
     Throwable th = null;
-    
+
     boolean drop = !suppressExceptions;
     
     for (int i = 0; i < numSegments; i++) {
       final SegmentReader sr = merge.readers.get(i);
       if (sr != null) {
         try {
-          final ReadersAndLiveDocs rld = readerPool.get(sr.getSegmentInfo(),
-              false);
+          final ReadersAndLiveDocs rld = readerPool.get(sr.getSegmentInfo(), false);
           // We still hold a ref so it should not have been removed:
           assert rld != null;
           if (drop) {
@@ -3929,17 +3831,16 @@
       throw new RuntimeException(th);
     }
   }
-  
-  /**
-   * Does the actual (time-consuming) work of the merge, but without holding
-   * synchronized lock on IndexWriter instance
-   */
+
+  /** Does the actual (time-consuming) work of the merge,
+   *  but without holding synchronized lock on IndexWriter
+   *  instance */
   private int mergeMiddle(MergePolicy.OneMerge merge) throws IOException {
-    
+
     merge.checkAborted(directory);
-    
+
     final String mergedName = merge.info.info.name;
-    
+
     List<SegmentInfoPerCommit> sourceSegments = merge.segments;
     
     IOContext context = new IOContext(merge.getMergeInfo());
@@ -3950,52 +3851,48 @@
     if (infoStream.isEnabled("IW")) {
       infoStream.message("IW", "merging " + segString(merge.segments));
     }
-    
+
     merge.readers = new ArrayList<SegmentReader>();
-    
+
     // This is try/finally to make sure merger's readers are
     // closed:
     boolean success = false;
     try {
       int segUpto = 0;
-      while (segUpto < sourceSegments.size()) {
-        
+      while(segUpto < sourceSegments.size()) {
+
         final SegmentInfoPerCommit info = sourceSegments.get(segUpto);
-        
+
         // Hold onto the "live" reader; we will use this to
         // commit merged deletes
         final ReadersAndLiveDocs rld = readerPool.get(info, true);
         SegmentReader reader = rld.getMergeReader(context);
         assert reader != null;
-        
+
         // Carefully pull the most recent live docs:
         final Bits liveDocs;
         final int delCount;
-        
-        synchronized (this) {
+
+        synchronized(this) {
           // Must sync to ensure BufferedDeletesStream
           // cannot change liveDocs/pendingDeleteCount while
           // we pull a copy:
           liveDocs = rld.getReadOnlyLiveDocs();
           delCount = rld.getPendingDeleteCount() + info.getDelCount();
-          
+
           assert rld.verifyDocCounts();
-          
+
           if (infoStream.isEnabled("IW")) {
             if (rld.getPendingDeleteCount() != 0) {
-              infoStream.message("IW",
-                  "seg=" + segString(info) + " delCount=" + info.getDelCount()
-                      + " pendingDelCount=" + rld.getPendingDeleteCount());
+              infoStream.message("IW", "seg=" + segString(info) + " delCount=" + info.getDelCount() + " pendingDelCount=" + rld.getPendingDeleteCount());
             } else if (info.getDelCount() != 0) {
-              infoStream.message("IW", "seg=" + segString(info) + " delCount="
-                  + info.getDelCount());
+              infoStream.message("IW", "seg=" + segString(info) + " delCount=" + info.getDelCount());
             } else {
-              infoStream
-                  .message("IW", "seg=" + segString(info) + " no deletes");
+              infoStream.message("IW", "seg=" + segString(info) + " no deletes");
             }
           }
         }
-        
+
         // Deletes might have happened after we pulled the merge reader and
         // before we got a read-only copy of the segment's actual live docs
         // (taking pending deletes into account). In that case we need to
@@ -4015,18 +3912,15 @@
               newReader.decRef();
             }
           }
-          
+
           reader = newReader;
         }
-        
+
         merge.readers.add(reader);
-        assert delCount <= info.info.getDocCount() : "delCount=" + delCount
-            + " info.docCount=" + info.info.getDocCount()
-            + " rld.pendingDeleteCount=" + rld.getPendingDeleteCount()
-            + " info.getDelCount()=" + info.getDelCount();
+        assert delCount <= info.info.getDocCount(): "delCount=" + delCount + " info.docCount=" + info.info.getDocCount() + " rld.pendingDeleteCount=" + rld.getPendingDeleteCount() + " info.getDelCount()=" + info.getDelCount();
         segUpto++;
       }
-      
+
       // we pass merge.getMergeReaders() instead of merge.readers to allow the
       // OneMerge to return a view over the actual segments to merge
       final SegmentMerger merger = new SegmentMerger(merge.getMergeReaders(),
@@ -4034,7 +3928,7 @@
           checkAbort, globalFieldNumberMap, context);
 
       merge.checkAborted(directory);
-      
+
       // This is where all the work happens:
       MergeState mergeState;
       boolean success3 = false;
@@ -4043,54 +3937,45 @@
         success3 = true;
       } finally {
         if (!success3) {
-          synchronized (this) {
+          synchronized(this) {  
             deleter.refresh(merge.info.info.name);
           }
         }
       }
       assert mergeState.segmentInfo == merge.info.info;
-      merge.info.info
-          .setFiles(new HashSet<String>(dirWrapper.getCreatedFiles()));
-      
+      merge.info.info.setFiles(new HashSet<String>(dirWrapper.getCreatedFiles()));
+
       // Record which codec was used to write the segment
-      
+
       if (infoStream.isEnabled("IW")) {
-        infoStream.message("IW", "merge codec="
-            + codec
-            + " docCount="
-            + merge.info.info.getDocCount()
-            + "; merged segment has "
-            + (mergeState.fieldInfos.hasVectors() ? "vectors" : "no vectors")
-            + "; "
-            + (mergeState.fieldInfos.hasNorms() ? "norms" : "no norms")
-            + "; "
-            + (mergeState.fieldInfos.hasDocValues() ? "docValues"
-                : "no docValues") + "; "
-            + (mergeState.fieldInfos.hasProx() ? "prox" : "no prox") + "; "
-            + (mergeState.fieldInfos.hasProx() ? "freqs" : "no freqs"));
+        infoStream.message("IW", "merge codec=" + codec + " docCount=" + merge.info.info.getDocCount() + "; merged segment has " +
+                           (mergeState.fieldInfos.hasVectors() ? "vectors" : "no vectors") + "; " +
+                           (mergeState.fieldInfos.hasNorms() ? "norms" : "no norms") + "; " + 
+                           (mergeState.fieldInfos.hasDocValues() ? "docValues" : "no docValues") + "; " + 
+                           (mergeState.fieldInfos.hasProx() ? "prox" : "no prox") + "; " + 
+                           (mergeState.fieldInfos.hasProx() ? "freqs" : "no freqs"));
       }
-      
+
       // Very important to do this before opening the reader
       // because codec must know if prox was written for
       // this segment:
-      // System.out.println("merger set hasProx=" + merger.hasProx() + " seg=" +
-      // merge.info.name);
+      //System.out.println("merger set hasProx=" + merger.hasProx() + " seg=" + merge.info.name);
       boolean useCompoundFile;
       synchronized (this) { // Guard segmentInfos
         useCompoundFile = mergePolicy.useCompoundFile(segmentInfos, merge.info);
       }
-      
+
       if (useCompoundFile) {
         success = false;
-        
+
         Collection<String> filesToRemove = merge.info.files();
-        
+
         try {
           filesToRemove = createCompoundFile(infoStream, directory, checkAbort,
               merge.info.info, context, -1);
           success = true;
         } catch (IOException ioe) {
-          synchronized (this) {
+          synchronized(this) {
             if (merge.isAborted()) {
               // This can happen if rollback or close(false)
               // is called -- fall through to logic below to
@@ -4104,43 +3989,38 @@
         } finally {
           if (!success) {
             if (infoStream.isEnabled("IW")) {
-              infoStream.message("IW",
-                  "hit exception creating compound file during merge");
+              infoStream.message("IW", "hit exception creating compound file during merge");
             }
-            
-            synchronized (this) {
-              deleter.deleteFile(IndexFileNames.segmentFileName(mergedName, "",
-                  IndexFileNames.COMPOUND_FILE_EXTENSION));
-              deleter.deleteFile(IndexFileNames.segmentFileName(mergedName, "",
-                  IndexFileNames.COMPOUND_FILE_ENTRIES_EXTENSION));
+
+            synchronized(this) {
+              deleter.deleteFile(IndexFileNames.segmentFileName(mergedName, "", IndexFileNames.COMPOUND_FILE_EXTENSION));
+              deleter.deleteFile(IndexFileNames.segmentFileName(mergedName, "", IndexFileNames.COMPOUND_FILE_ENTRIES_EXTENSION));
               deleter.deleteNewFiles(merge.info.files());
             }
           }
         }
-        
+
         // So that, if we hit exc in deleteNewFiles (next)
         // or in commitMerge (later), we close the
         // per-segment readers in the finally clause below:
         success = false;
-        
-        synchronized (this) {
-          
+
+        synchronized(this) {
+
           // delete new non cfs files directly: they were never
           // registered with IFD
           deleter.deleteNewFiles(filesToRemove);
-          
+
           if (merge.isAborted()) {
             if (infoStream.isEnabled("IW")) {
               infoStream.message("IW", "abort merge after building CFS");
             }
-            deleter.deleteFile(IndexFileNames.segmentFileName(mergedName, "",
-                IndexFileNames.COMPOUND_FILE_EXTENSION));
-            deleter.deleteFile(IndexFileNames.segmentFileName(mergedName, "",
-                IndexFileNames.COMPOUND_FILE_ENTRIES_EXTENSION));
+            deleter.deleteFile(IndexFileNames.segmentFileName(mergedName, "", IndexFileNames.COMPOUND_FILE_EXTENSION));
+            deleter.deleteFile(IndexFileNames.segmentFileName(mergedName, "", IndexFileNames.COMPOUND_FILE_ENTRIES_EXTENSION));
             return 0;
           }
         }
-        
+
         merge.info.info.setUseCompoundFile(true);
       } else {
         // So that, if we hit exc in commitMerge (later),
@@ -4148,8 +4028,8 @@
         // clause below:
         success = false;
       }
-      
-      // Have codec write SegmentInfo. Must do this after
+
+      // Have codec write SegmentInfo.  Must do this after
       // creating CFS so that 1) .si isn't slurped into CFS,
       // and 2) .si reflects useCompoundFile=true change
       // above:
@@ -4162,48 +4042,43 @@
         success2 = true;
       } finally {
         if (!success2) {
-          synchronized (this) {
+          synchronized(this) {
             deleter.deleteNewFiles(merge.info.files());
           }
         }
       }
-      
+
       // TODO: ideally we would freeze merge.info here!!
       // because any changes after writing the .si will be
-      // lost...
-      
+      // lost...
+
       if (infoStream.isEnabled("IW")) {
-        infoStream.message("IW", String.format(Locale.ROOT,
-            "merged segment size=%.3f MB vs estimate=%.3f MB",
-            merge.info.sizeInBytes() / 1024. / 1024.,
-            merge.estimatedMergeBytes / 1024 / 1024.));
+        infoStream.message("IW", String.format(Locale.ROOT, "merged segment size=%.3f MB vs estimate=%.3f MB", merge.info.sizeInBytes()/1024./1024., merge.estimatedMergeBytes/1024/1024.));
       }
-      
-      final IndexReaderWarmer mergedSegmentWarmer = config
-          .getMergedSegmentWarmer();
-      if (poolReaders && mergedSegmentWarmer != null
-          && merge.info.info.getDocCount() != 0) {
+
+      final IndexReaderWarmer mergedSegmentWarmer = config.getMergedSegmentWarmer();
+      if (poolReaders && mergedSegmentWarmer != null && merge.info.info.getDocCount() != 0) {
         final ReadersAndLiveDocs rld = readerPool.get(merge.info, true);
         final SegmentReader sr = rld.getReader(IOContext.READ);
         try {
           mergedSegmentWarmer.warm(sr);
         } finally {
-          synchronized (this) {
+          synchronized(this) {
             rld.release(sr);
             readerPool.release(rld);
           }
         }
       }
-      
+
       // Force READ context because we merge deletes onto
       // this reader:
       if (!commitMerge(merge, mergeState)) {
         // commitMerge will return false if this merge was aborted
         return 0;
       }
-      
+
       success = true;
-      
+
     } finally {
       // Readers are already closed in commitMerge if we didn't hit
       // an exc:
@@ -4211,50 +4086,47 @@
         closeMergeReaders(merge, true);
       }
     }
-    
+
     return merge.info.info.getDocCount();
   }
-  
+
   synchronized void addMergeException(MergePolicy.OneMerge merge) {
     assert merge.getException() != null;
     if (!mergeExceptions.contains(merge) && mergeGen == merge.mergeGen) {
       mergeExceptions.add(merge);
     }
   }
-  
+
   // For test purposes.
   final int getBufferedDeleteTermsSize() {
     return docWriter.getBufferedDeleteTermsSize();
   }
-  
+
   // For test purposes.
   final int getNumBufferedDeleteTerms() {
     return docWriter.getNumBufferedDeleteTerms();
   }
-  
+
   // utility routines for tests
   synchronized SegmentInfoPerCommit newestSegment() {
-    return segmentInfos.size() > 0 ? segmentInfos.info(segmentInfos.size() - 1)
-        : null;
+    return segmentInfos.size() > 0 ? segmentInfos.info(segmentInfos.size()-1) : null;
   }
-  
-  /**
-   * Returns a string description of all segments, for debugging.
-   * 
-   * @lucene.internal
-   */
+
+  /** Returns a string description of all segments, for
+   *  debugging.
+   *
+   * @lucene.internal */
   public synchronized String segString() {
     return segString(segmentInfos);
   }
-  
-  /**
-   * Returns a string description of the specified segments, for debugging.
-   * 
-   * @lucene.internal
-   */
+
+  /** Returns a string description of the specified
+   *  segments, for debugging.
+   *
+   * @lucene.internal */
   public synchronized String segString(Iterable<SegmentInfoPerCommit> infos) {
     final StringBuilder buffer = new StringBuilder();
-    for (final SegmentInfoPerCommit info : infos) {
+    for(final SegmentInfoPerCommit info : infos) {
       if (buffer.length() > 0) {
         buffer.append(' ');
       }
@@ -4262,17 +4134,15 @@
     }
     return buffer.toString();
   }
-  
-  /**
-   * Returns a string description of the specified segment, for debugging.
-   * 
-   * @lucene.internal
-   */
+
+  /** Returns a string description of the specified
+   *  segment, for debugging.
+   *
+   * @lucene.internal */
   public synchronized String segString(SegmentInfoPerCommit info) {
-    return info.toString(info.info.dir,
-        numDeletedDocs(info) - info.getDelCount());
+    return info.toString(info.info.dir, numDeletedDocs(info) - info.getDelCount());
   }
-  
+
   private synchronized void doWait() {
     // NOTE: the callers of this method should in theory
     // be able to do simply wait(), but, as a defense
@@ -4286,128 +4156,120 @@
       throw new ThreadInterruptedException(ie);
     }
   }
-  
+
   private boolean keepFullyDeletedSegments;
-  
-  /**
-   * Only for testing.
-   * 
-   * @lucene.internal
-   */
+
+  /** Only for testing.
+   *
+   * @lucene.internal */
   void keepFullyDeletedSegments() {
     keepFullyDeletedSegments = true;
   }
-  
+
   boolean getKeepFullyDeletedSegments() {
     return keepFullyDeletedSegments;
   }
-  
+
   // called only from assert
   private boolean filesExist(SegmentInfos toSync) throws IOException {
     
     Collection<String> files = toSync.files(directory, false);
-    for (final String fileName : files) {
-      assert directory.fileExists(fileName) : "file " + fileName
-          + " does not exist";
+    for(final String fileName: files) {
+      assert directory.fileExists(fileName): "file " + fileName + " does not exist";
       // If this trips it means we are missing a call to
       // .checkpoint somewhere, because by the time we
       // are called, deleter should know about every
       // file referenced by the current head
       // segmentInfos:
-      assert deleter.exists(fileName) : "IndexFileDeleter doesn't know about file "
-          + fileName;
+      assert deleter.exists(fileName): "IndexFileDeleter doesn't know about file " + fileName;
     }
     return true;
   }
-  
+
   // For infoStream output
   synchronized SegmentInfos toLiveInfos(SegmentInfos sis) {
     final SegmentInfos newSIS = new SegmentInfos();
-    final Map<SegmentInfoPerCommit,SegmentInfoPerCommit> liveSIS = new HashMap<SegmentInfoPerCommit,SegmentInfoPerCommit>();
-    for (SegmentInfoPerCommit info : segmentInfos) {
+    final Map<SegmentInfoPerCommit,SegmentInfoPerCommit> liveSIS = new HashMap<SegmentInfoPerCommit,SegmentInfoPerCommit>();
+    for(SegmentInfoPerCommit info : segmentInfos) {
       liveSIS.put(info, info);
     }
-    for (SegmentInfoPerCommit info : sis) {
+    for(SegmentInfoPerCommit info : sis) {
       SegmentInfoPerCommit liveInfo = liveSIS.get(info);
       if (liveInfo != null) {
         info = liveInfo;
       }
       newSIS.add(info);
     }
-    
+
     return newSIS;
   }
-  
-  /**
-   * Walk through all files referenced by the current segmentInfos and ask the
-   * Directory to sync each file, if it wasn't already. If that succeeds, then
-   * we prepare a new segments_N file but do not fully commit it.
-   */
+
+  /** Walk through all files referenced by the current
+   *  segmentInfos and ask the Directory to sync each file,
+   *  if it wasn't already.  If that succeeds, then we
+   *  prepare a new segments_N file but do not fully commit
+   *  it. */
   private void startCommit(final SegmentInfos toSync) throws IOException {
-    
+
     assert testPoint("startStartCommit");
     assert pendingCommit == null;
-    
+
     if (hitOOM) {
-      throw new IllegalStateException(
-          "this writer hit an OutOfMemoryError; cannot commit");
+      throw new IllegalStateException("this writer hit an OutOfMemoryError; cannot commit");
     }
-    
+
     try {
-      
+
       if (infoStream.isEnabled("IW")) {
         infoStream.message("IW", "startCommit(): start");
       }
-      
-      synchronized (this) {
-        
-        assert lastCommitChangeCount <= changeCount : "lastCommitChangeCount="
-            + lastCommitChangeCount + " changeCount=" + changeCount;
-        
+
+      synchronized(this) {
+
+        assert lastCommitChangeCount <= changeCount: "lastCommitChangeCount=" + lastCommitChangeCount + " changeCount=" + changeCount;
+
         if (pendingCommitChangeCount == lastCommitChangeCount) {
           if (infoStream.isEnabled("IW")) {
-            infoStream
-                .message("IW", "  skip startCommit(): no changes pending");
+            infoStream.message("IW", "  skip startCommit(): no changes pending");
           }
           deleter.decRef(filesToCommit);
           filesToCommit = null;
           return;
         }
-        
+
         if (infoStream.isEnabled("IW")) {
-          infoStream.message("IW", "startCommit index="
-              + segString(toLiveInfos(toSync)) + " changeCount=" + changeCount);
+          infoStream.message("IW", "startCommit index=" + segString(toLiveInfos(toSync)) + " changeCount=" + changeCount);
         }
-        
+
         assert filesExist(toSync);
       }
-      
+
       assert testPoint("midStartCommit");
-      
+
       boolean pendingCommitSet = false;
-      
+
       try {
-        
+
         assert testPoint("midStartCommit2");
-        
-        synchronized (this) {
-          
+
+        synchronized(this) {
+
           assert pendingCommit == null;
-          
+
           assert segmentInfos.getGeneration() == toSync.getGeneration();
-          
+
           // Exception here means nothing is prepared
           // (this method unwinds everything it did on
           // an exception)
           toSync.prepareCommit(directory);
-          // System.out.println("DONE prepareCommit");
-          
+          //System.out.println("DONE prepareCommit");
+
           pendingCommitSet = true;
           pendingCommit = toSync;
         }
-        
+
         // This call can take a long time -- 10s of seconds
-        // or more. We do it without syncing on this:
+        // or more.  We do it without syncing on this:
         boolean success = false;
         final Collection<String> filesToSync;
         try {
@@ -4421,27 +4283,26 @@
             toSync.rollbackCommit(directory);
           }
         }
-        
+
         if (infoStream.isEnabled("IW")) {
           infoStream.message("IW", "done all syncs: " + filesToSync);
         }
-        
+
         assert testPoint("midStartCommitSuccess");
-        
+
       } finally {
-        synchronized (this) {
+        synchronized(this) {
           // Have our master segmentInfos record the
-          // generations we just prepared. We do this
+          // generations we just prepared.  We do this
           // on error or success so we don't
           // double-write a segments_N file.
           segmentInfos.updateGeneration(toSync);
-          
+
           if (!pendingCommitSet) {
             if (infoStream.isEnabled("IW")) {
-              infoStream
-                  .message("IW", "hit exception committing segments file");
+              infoStream.message("IW", "hit exception committing segments file");
             }
-            
+
             // Hit exception
             deleter.decRef(filesToCommit);
             filesToCommit = null;
@@ -4453,60 +4314,54 @@
     }
     assert testPoint("finishStartCommit");
   }
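
startCommit() above is the internal half of IndexWriter's two-phase commit: it syncs every referenced file and prepares a pending segments_N without publishing it. At the public API level the same split is exposed as prepareCommit()/commit(); a minimal sketch of the caller-side pattern (the coordination step is a placeholder, not something this patch prescribes):

    import java.io.IOException;
    import org.apache.lucene.index.IndexWriter;

    class TwoPhaseCommitSketch {
      // prepareCommit() does the expensive sync work and records a pending commit;
      // commit() then publishes it, while rollback() would discard it instead.
      static void commitInTwoSteps(IndexWriter writer) throws IOException {
        writer.prepareCommit();
        // ... coordinate with any external resource (e.g. a database transaction) here ...
        writer.commit();
      }
    }
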
-  
+
   /**
-   * Returns <code>true</code> iff the index in the named directory is currently
-   * locked.
-   * 
-   * @param directory
-   *          the directory to check for a lock
-   * @throws IOException
-   *           if there is a low-level IO error
+   * Returns <code>true</code> iff the index in the named directory is
+   * currently locked.
+   * @param directory the directory to check for a lock
+   * @throws IOException if there is a low-level IO error
    */
   public static boolean isLocked(Directory directory) throws IOException {
     return directory.makeLock(WRITE_LOCK_NAME).isLocked();
   }
-  
+
   /**
    * Forcibly unlocks the index in the named directory.
    * <P>
-   * Caution: this should only be used by failure recovery code, when it is
-   * known that no other process nor thread is in fact currently accessing this
-   * index.
+   * Caution: this should only be used by failure recovery code,
+   * when it is known that no other process nor thread is in fact
+   * currently accessing this index.
    */
   public static void unlock(Directory directory) throws IOException {
     directory.makeLock(IndexWriter.WRITE_LOCK_NAME).release();
   }
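
As the caution above says, unlock() belongs only in failure-recovery code. A minimal sketch of that pattern, assuming an FSDirectory-based index path (a placeholder, not part of this patch):

    import java.io.File;
    import java.io.IOException;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    class UnlockSketch {
      // only safe when it is known that no other process or thread holds the write lock
      static void clearStaleLock(File indexPath) throws IOException {
        Directory dir = FSDirectory.open(indexPath);
        try {
          if (IndexWriter.isLocked(dir)) {
            IndexWriter.unlock(dir); // forcibly release the stale write lock
          }
        } finally {
          dir.close();
        }
      }
    }
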
-  
-  /**
-   * If {@link DirectoryReader#open(IndexWriter,boolean)} has been called (ie,
-   * this writer is in near real-time mode), then after a merge completes, this
-   * class can be invoked to warm the reader on the newly merged segment, before
-   * the merge commits. This is not required for near real-time search, but will
-   * reduce search latency on opening a new near real-time reader after a merge
-   * completes.
-   * 
+
+  /** If {@link DirectoryReader#open(IndexWriter,boolean)} has
+   *  been called (ie, this writer is in near real-time
+   *  mode), then after a merge completes, this class can be
+   *  invoked to warm the reader on the newly merged
+   *  segment, before the merge commits.  This is not
+   *  required for near real-time search, but will reduce
+   *  search latency on opening a new near real-time reader
+   *  after a merge completes.
+   *
    * @lucene.experimental
-   * 
-   *                      <p>
-   *                      <b>NOTE</b>: warm is called before any deletes have
-   *                      been carried over to the merged segment.
-   */
+   *
+   * <p><b>NOTE</b>: warm is called before any deletes have
+   * been carried over to the merged segment. */
   public static abstract class IndexReaderWarmer {
-    
-    /**
-     * Sole constructor. (For invocation by subclass constructors, typically
-     * implicit.)
-     */
-    protected IndexReaderWarmer() {}
-    
-    /**
-     * Invoked on the {@link AtomicReader} for the newly merged segment, before
-     * that segment is made visible to near-real-time readers.
-     */
+
+    /** Sole constructor. (For invocation by subclass
+     *  constructors, typically implicit.) */
+    protected IndexReaderWarmer() {
+    }
+
+    /** Invoked on the {@link AtomicReader} for the newly
+     *  merged segment, before that segment is made visible
+     *  to near-real-time readers. */
     public abstract void warm(AtomicReader reader) throws IOException;
   }
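
A warmer is installed through IndexWriterConfig before the writer is created. A minimal sketch; the Version constant, analyzer, and warm-up body are placeholders rather than anything this patch prescribes:

    import java.io.IOException;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.index.AtomicReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.util.Version;

    class WarmerSketch {
      static IndexWriterConfig configWithWarmer() {
        IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_43, new StandardAnalyzer(Version.LUCENE_43));
        iwc.setMergedSegmentWarmer(new IndexWriter.IndexReaderWarmer() {
          @Override
          public void warm(AtomicReader reader) throws IOException {
            // touch the newly merged segment so the first near-real-time search after
            // the merge commits does not pay the warm-up cost; a real warmer might
            // preload FieldCache entries or run representative queries instead
            reader.fields();
          }
        });
        return iwc;
      }
    }
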
-  
+
   private void handleOOM(OutOfMemoryError oom, String location) {
     if (infoStream.isEnabled("IW")) {
       infoStream.message("IW", "hit OutOfMemoryError inside " + location);
@@ -4514,72 +4369,66 @@
     hitOOM = true;
     throw oom;
   }
-  
-  // Used only by assert for testing. Current points:
-  // startDoFlush
-  // startCommitMerge
-  // startStartCommit
-  // midStartCommit
-  // midStartCommit2
-  // midStartCommitSuccess
-  // finishStartCommit
-  // startCommitMergeDeletes
-  // startMergeInit
-  // DocumentsWriter.ThreadState.init start
+
+  // Used only by assert for testing.  Current points:
+  //   startDoFlush
+  //   startCommitMerge
+  //   startStartCommit
+  //   midStartCommit
+  //   midStartCommit2
+  //   midStartCommitSuccess
+  //   finishStartCommit
+  //   startCommitMergeDeletes
+  //   startMergeInit
+  //   DocumentsWriter.ThreadState.init start
   boolean testPoint(String name) {
     return true;
   }
-  
+
   synchronized boolean nrtIsCurrent(SegmentInfos infos) {
-    // System.out.println("IW.nrtIsCurrent " + (infos.version ==
-    // segmentInfos.version && !docWriter.anyChanges() &&
-    // !bufferedDeletesStream.any()));
+    //System.out.println("IW.nrtIsCurrent " + (infos.version == segmentInfos.version && !docWriter.anyChanges() && !bufferedDeletesStream.any()));
     ensureOpen();
     if (infoStream.isEnabled("IW")) {
-      infoStream.message("IW",
-          "nrtIsCurrent: infoVersion matches: "
-              + (infos.version == segmentInfos.version) + " DW changes: "
-              + docWriter.anyChanges() + " BD changes: "
-              + bufferedDeletesStream.any());
+      infoStream.message("IW", "nrtIsCurrent: infoVersion matches: " + (infos.version == segmentInfos.version) + "; DW changes: " + docWriter.anyChanges() + "; BD changes: "+ bufferedDeletesStream.any());
     }
-    return infos.version == segmentInfos.version && !docWriter.anyChanges()
-        && !bufferedDeletesStream.any();
+    return infos.version == segmentInfos.version && !docWriter.anyChanges() && !bufferedDeletesStream.any();
   }
-  
+
   synchronized boolean isClosed() {
     return closed;
   }
-  
-  /**
-   * Expert: remove any index files that are no longer used.
-   * 
-   * <p>
-   * IndexWriter normally deletes unused files itself, during indexing. However,
-   * on Windows, which disallows deletion of open files, if there is a reader
-   * open on the index then those files cannot be deleted. This is fine, because
-   * IndexWriter will periodically retry the deletion.
-   * </p>
-   * 
-   * <p>
-   * However, IndexWriter doesn't try that often: only on open, close, flushing
-   * a new segment, and finishing a merge. If you don't do any of these actions
-   * with your IndexWriter, you'll see the unused files linger. If that's a
-   * problem, call this method to delete them (once you've closed the open
-   * readers that were preventing their deletion).
-   * 
-   * <p>
-   * In addition, you can call this method to delete unreferenced index commits.
-   * This might be useful if you are using an {@link IndexDeletionPolicy} which
-   * holds onto index commits until some criteria are met, but those commits are
-   * no longer needed. Otherwise, those commits will be deleted the next time
-   * commit() is called.
+
+  /** Expert: remove any index files that are no longer
+   *  used.
+   *
+   *  <p> IndexWriter normally deletes unused files itself,
+   *  during indexing.  However, on Windows, which disallows
+   *  deletion of open files, if there is a reader open on
+   *  the index then those files cannot be deleted.  This is
+   *  fine, because IndexWriter will periodically retry
+   *  the deletion.</p>
+   *
+   *  <p> However, IndexWriter doesn't try that often: only
+   *  on open, close, flushing a new segment, and finishing
+   *  a merge.  If you don't do any of these actions with your
+   *  IndexWriter, you'll see the unused files linger.  If
+   *  that's a problem, call this method to delete them
+   *  (once you've closed the open readers that were
+   *  preventing their deletion).
+   *
+   *  <p> In addition, you can call this method to delete
+   *  unreferenced index commits. This might be useful if you
+   *  are using an {@link IndexDeletionPolicy} which holds
+   *  onto index commits until some criteria are met, but those
+   *  commits are no longer needed. Otherwise, those commits will
+   *  be deleted the next time commit() is called.
    */
   public synchronized void deleteUnusedFiles() throws IOException {
     ensureOpen(false);
     deleter.deletePendingFiles();
     deleter.revisitPolicy();
   }
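
A sketch of the scenario the javadoc describes: close the reader that was pinning the old files, then ask the writer to retry the deletions (the writer and reader are assumed to already exist):

    import java.io.IOException;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;

    class DeleteUnusedSketch {
      static void dropLingeringFiles(IndexWriter writer, DirectoryReader reader) throws IOException {
        reader.close();             // release the files the reader kept open (e.g. on Windows)
        writer.deleteUnusedFiles(); // retry deletion of the now-unreferenced index files
      }
    }
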
-  
+
   // Called by DirectoryReader.doClose
   synchronized void deletePendingFiles() throws IOException {
     deleter.deletePendingFiles();
@@ -4657,7 +4506,7 @@
         directory.copy(cfsDir, file, file, context);
         checkAbort.work(directory.fileLength(file));
       }
-    } catch (IOException ex) {
+    } catch(IOException ex) {
       prior = ex;
     } finally {
       boolean success = false;
@@ -4668,14 +4517,15 @@
         if (!success) {
           try {
             directory.deleteFile(fileName);
-          } catch (Throwable t) {}
+          } catch (Throwable t) {
+          }
           try {
             directory.deleteFile(cfeFileName);
           } catch (Throwable t) {}
         }
       }
     }
-    
+
     // Replace all previous files with the CFS/CFE files:
     Set<String> siFiles = new HashSet<String>();
     siFiles.addAll(info.files());
@@ -4683,29 +4533,23 @@
     siFiles.add(fileName);
     siFiles.add(cfeFileName);
     info.setFiles(siFiles);
-    
+
     return files;
   }
   
   /**
    * Tries to delete the given files if unreferenced
-   * 
-   * @param files
-   *          the files to delete
-   * @throws IOException
-   *           if an {@link IOException} occurs
+   * @param files the files to delete
+   * @throws IOException if an {@link IOException} occurs
    * @see IndexFileDeleter#deleteNewFiles(Collection)
    */
-  synchronized final void deleteNewFiles(Collection<String> files)
-      throws IOException {
+  synchronized final void deleteNewFiles(Collection<String> files) throws IOException {
     deleter.deleteNewFiles(files);
   }
   
   /**
-   * Cleans up residuals from a segment that could not be entirely flushed due
-   * to an error
-   * 
-   * @see IndexFileDeleter#refresh(String)
+   * Cleans up residuals from a segment that could not be entirely flushed due to an error.
+   * @see IndexFileDeleter#refresh(String)
    */
   synchronized final void flushFailed(SegmentInfo info) throws IOException {
     deleter.refresh(info.name);
diff --git a/lucene/core/src/java/org/apache/lucene/index/SegmentReader.java b/lucene/core/src/java/org/apache/lucene/index/SegmentReader.java
index 132809b..8a3b8e3 100644
--- a/lucene/core/src/java/org/apache/lucene/index/SegmentReader.java
+++ b/lucene/core/src/java/org/apache/lucene/index/SegmentReader.java
@@ -33,23 +33,22 @@
 // javadocs
 
 /**
- * IndexReader implementation over a single segment.
+ * IndexReader implementation over a single segment.
  * <p>
- * Instances pointing to the same segment (but with different deletes, etc) may
- * share the same core data.
- * 
+ * Instances pointing to the same segment (but with different deletes, etc)
+ * may share the same core data.
  * @lucene.experimental
  */
 public final class SegmentReader extends AtomicReader {
-  
+
   private final SegmentInfoPerCommit si;
   private final Bits liveDocs;
-  
+
   // Normally set to si.docCount - si.delDocCount, unless we
   // were created as an NRT reader from IW, in which case IW
   // tells us the docCount:
   private final int numDocs;
-  
+
   final SegmentCoreReaders core;
   final SegmentCoreReaders[] updates;
   
@@ -62,15 +61,11 @@
   
   /**
    * Constructs a new SegmentReader with a new core.
-   * 
-   * @throws CorruptIndexException
-   *           if the index is corrupt
-   * @throws IOException
-   *           if there is a low-level IO error
+   * @throws CorruptIndexException if the index is corrupt
+   * @throws IOException if there is a low-level IO error
    */
   // TODO: why is this public?
-  public SegmentReader(SegmentInfoPerCommit si, int termInfosIndexDivisor,
-      IOContext context) throws IOException {
+  public SegmentReader(SegmentInfoPerCommit si, int termInfosIndexDivisor, IOContext context) throws IOException {
     this.si = si;
     this.context = context;
     core = new SegmentCoreReaders(this, si.info, -1, context, termInfosIndexDivisor);
@@ -79,8 +74,7 @@
     try {
       if (si.hasDeletions()) {
         // NOTE: the bitvector is stored using the regular directory, not cfs
-        liveDocs = si.info.getCodec().liveDocsFormat()
-            .readLiveDocs(directory(), si, new IOContext(IOContext.READ, true));
+        liveDocs = si.info.getCodec().liveDocsFormat().readLiveDocs(directory(), si, new IOContext(IOContext.READ, true));
       } else {
         assert si.getDelCount() == 0;
         liveDocs = null;
@@ -89,7 +83,7 @@
       success = true;
     } finally {
       // With lock-less commits, it's entirely possible (and
-      // fine) to hit a FileNotFound exception above. In
+      // fine) to hit a FileNotFound exception above.  In
       // this case, we want to explicitly close any subset
       // of things that were opened so that we don't have to
       // wait for a GC to do so.
@@ -131,7 +125,7 @@
     
     assert liveDocs != null;
     this.liveDocs = liveDocs;
-    
+
     this.numDocs = numDocs;
   }
   
@@ -154,10 +148,10 @@
     ensureOpen();
     return liveDocs;
   }
-  
+
   @Override
   protected void doClose() throws IOException {
-    // System.out.println("SR.close seg=" + si);
+    //System.out.println("SR.close seg=" + si);
     core.decRef();
     if (updates != null) {
       for (int i = 0; i < updates.length; i++) {
@@ -165,7 +159,7 @@
       }
     }
   }
-  
+
   @Override
   public FieldInfos getFieldInfos() {
     ensureOpen();
@@ -245,12 +239,11 @@
   }
   
   @Override
-  public void document(int docID, StoredFieldVisitor visitor)
-      throws IOException {
+  public void document(int docID, StoredFieldVisitor visitor) throws IOException {
     checkBounds(docID);
     getFieldsReader().visitDocument(docID, visitor, null);
   }
-  
+
   @Override
   public Fields fields() throws IOException {
     ensureOpen();
@@ -275,13 +268,13 @@
     }
     return fields;
   }
-  
+
   @Override
   public int numDocs() {
     // Don't call ensureOpen() here (it could affect performance)
     return numDocs;
   }
-  
+
   @Override
   public int maxDoc() {
     // Don't call ensureOpen() here (it could affect performance)
@@ -365,13 +358,12 @@
     
     return new StackedFields(fields, replacementsMap, docID);
   }
-  
+
   @Override
   public String toString() {
     // SegmentInfo.toString takes dir and number of
     // *pending* deletions; so we reverse compute that here:
-    return si.toString(si.info.dir,
-        si.info.getDocCount() - numDocs - si.getDelCount());
+    return si.toString(si.info.dir, si.info.getDocCount() - numDocs - si.getDelCount());
   }
   
   /**
@@ -387,7 +379,7 @@
   public SegmentInfoPerCommit getSegmentInfo() {
     return si;
   }
-  
+
   /** Returns the directory this index resides in. */
   public Directory directory() {
     // Don't ensureOpen here -- in certain cases, when a
@@ -395,30 +387,29 @@
     // this method on the closed original reader
     return si.info.dir;
   }
-  
+
   // This is necessary so that cloned SegmentReaders (which
   // share the underlying postings data) will map to the
-  // same entry in the FieldCache. See LUCENE-1579.
+  // same entry in the FieldCache.  See LUCENE-1579.
   @Override
   public Object getCoreCacheKey() {
     return core;
   }
-  
+
   @Override
   public Object getCombinedCoreAndDeletesKey() {
     return this;
   }
-  
-  /**
-   * Returns term infos index divisor originally passed to
-   * {@link #SegmentReader(SegmentInfoPerCommit, int, IOContext)}.
-   */
+
+  /** Returns term infos index divisor originally passed to
+   *  {@link #SegmentReader(SegmentInfoPerCommit, int, IOContext)}. */
   public int getTermInfosIndexDivisor() {
     return core.termsIndexDivisor;
   }
 
   @Override
   public NumericDocValues getNumericDocValues(String field) throws IOException {
+    ensureOpen();
     return core.getNumericDocValues(field);
   }
 
@@ -427,7 +418,7 @@
     ensureOpen();
     return core.getBinaryDocValues(field);
   }
-  
+
   @Override
   public SortedDocValues getSortedDocValues(String field) throws IOException {
     ensureOpen();
@@ -456,21 +447,21 @@
   }
 
   /**
-   * Called when the shared core for this SegmentReader is closed.
+   * Called when the shared core for this SegmentReader
+   * is closed.
    * <p>
-   * This listener is called only once all SegmentReaders sharing the same core
-   * are closed. At this point it is safe for apps to evict this reader from any
-   * caches keyed on {@link #getCoreCacheKey}. This is the same interface that
-   * {@link FieldCache} uses, internally, to evict entries.
-   * </p>
+   * This listener is called only once all SegmentReaders
+   * sharing the same core are closed.  At this point it
+   * is safe for apps to evict this reader from any caches
+   * keyed on {@link #getCoreCacheKey}.  This is the same
+   * interface that {@link FieldCache} uses, internally,
+   * to evict entries.</p>
    * 
    * @lucene.experimental
    */
   public static interface CoreClosedListener {
-    /**
-     * Invoked when the shared core of the provided {@link SegmentReader} has
-     * closed.
-     */
+    /** Invoked when the shared core of the provided {@link
+     *  SegmentReader} has closed. */
     public void onClose(SegmentReader owner);
   }
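
A sketch of an application cache keyed on getCoreCacheKey() that evicts its entry when the shared core closes; it assumes registration via SegmentReader.addCoreClosedListener, as in stock 4.x:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.apache.lucene.index.SegmentReader;

    class CoreKeyedCache {
      private final Map<Object,Object> cache = new ConcurrentHashMap<Object,Object>();

      void track(SegmentReader reader) {
        reader.addCoreClosedListener(new SegmentReader.CoreClosedListener() {
          @Override
          public void onClose(SegmentReader owner) {
            // every SegmentReader sharing this core is now closed, so the
            // cached entry can never be used again
            cache.remove(owner.getCoreCacheKey());
          }
        });
      }
    }
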
   
diff --git a/lucene/core/src/java/org/apache/lucene/index/SegmentWriteState.java b/lucene/core/src/java/org/apache/lucene/index/SegmentWriteState.java
index 1d1661d..8ddfa48 100644
--- a/lucene/core/src/java/org/apache/lucene/index/SegmentWriteState.java
+++ b/lucene/core/src/java/org/apache/lucene/index/SegmentWriteState.java
@@ -119,4 +119,17 @@
     segUpdates = state.segUpdates;
     delCountOnFlush = state.delCountOnFlush;
   }
+  
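+  /** Returns true if this flush state carries buffered deletes ({@code segDeletes})
+   *  but no buffered field updates ({@code segUpdates} is null or empty). */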
+  public boolean hasDeletesWithoutUpdates() {
+    if (segDeletes == null) {
+      return false;
+    }
+    if (segUpdates == null) {
+      return true;
+    }
+    if (segUpdates.any()) {
+      return false;
+    }
+    return true;
+  }
 }
diff --git a/lucene/core/src/java/org/apache/lucene/index/SortedFieldsUpdates.java b/lucene/core/src/java/org/apache/lucene/index/SortedFieldsUpdates.java
deleted file mode 100644
index fc6c70f..0000000
--- a/lucene/core/src/java/org/apache/lucene/index/SortedFieldsUpdates.java
+++ /dev/null
@@ -1,25 +0,0 @@
-package org.apache.lucene.index;
-
-import java.util.SortedSet;
-import java.util.TreeMap;
-
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-public class SortedFieldsUpdates extends TreeMap<Term,SortedSet<FieldsUpdate>> {
-  
-}
diff --git a/lucene/core/src/java/org/apache/lucene/index/StoredFieldsConsumer.java b/lucene/core/src/java/org/apache/lucene/index/StoredFieldsConsumer.java
index 9c8c246..42ce237 100644
--- a/lucene/core/src/java/org/apache/lucene/index/StoredFieldsConsumer.java
+++ b/lucene/core/src/java/org/apache/lucene/index/StoredFieldsConsumer.java
@@ -22,9 +22,9 @@
 import org.apache.lucene.store.Directory;
 
 abstract class StoredFieldsConsumer {
-	  abstract void addField(int docID, StorableField field, FieldInfo fieldInfo) throws IOException;
-	  abstract void flush(SegmentWriteState state) throws IOException;
-	  abstract void abort() throws IOException;
-	  abstract void startDocument() throws IOException;
-	  abstract void finishDocument(Directory directory, SegmentInfo info) throws IOException;
+  abstract void addField(int docID, StorableField field, FieldInfo fieldInfo) throws IOException;
+  abstract void flush(SegmentWriteState state) throws IOException;
+  abstract void abort() throws IOException;
+  abstract void startDocument() throws IOException;
+  abstract void finishDocument(Directory directory, SegmentInfo info) throws IOException;
 }
diff --git a/lucene/core/src/java/org/apache/lucene/index/UpdatedSegmentData.java b/lucene/core/src/java/org/apache/lucene/index/UpdatedSegmentData.java
index 6c990a3..d980ca1 100644
--- a/lucene/core/src/java/org/apache/lucene/index/UpdatedSegmentData.java
+++ b/lucene/core/src/java/org/apache/lucene/index/UpdatedSegmentData.java
@@ -19,6 +19,8 @@
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.store.IOContext;
 import org.apache.lucene.util.Bits;
+import org.apache.lucene.util.IOUtils;
+import org.apache.lucene.util.InfoStream;
 
 /*
  * Licensed to the Apache Software Foundation (ASF) under one or more
@@ -45,12 +47,13 @@
   static final FieldInfos EMPTY_FIELD_INFOS = new FieldInfos(new FieldInfo[0]);
   
   /** Updates mapped by doc ID, for each do sorted list of updates. */
-  private TreeMap<Integer,TreeMap<FieldsUpdate, Set<String>>> docIdToUpdatesMap;
-  private HashMap<FieldsUpdate, List<Integer>> updatesToDocIdMap;
-  private LinkedHashMap<FieldsUpdate,UpdateAtomicReader> allApplied;
+  private final TreeMap<Integer,TreeMap<FieldsUpdate,Set<String>>> docIdToUpdatesMap;
+  private final HashMap<FieldsUpdate,List<Integer>> updatesToDocIdMap;
+  private final LinkedHashMap<FieldsUpdate,UpdateAtomicReader> allApplied;
+  private final boolean exactSegment;
+  private final InfoStream infoStream;
   
   private long generation;
-  private boolean exactSegment;
   
   private Map<String,FieldGenerationReplacements> fieldGenerationReplacments;
   
@@ -62,15 +65,18 @@
   private Analyzer analyzer;
   
   UpdatedSegmentData(SegmentReader reader,
-      SortedSet<FieldsUpdate> packetUpdates, boolean exactSegment)
-      throws IOException {
+      SortedSet<FieldsUpdate> packetUpdates, boolean exactSegment,
+      InfoStream infoStream) throws IOException {
     docIdToUpdatesMap = new TreeMap<>();
     updatesToDocIdMap = new HashMap<>();
-    this.exactSegment = exactSegment;
-    
     allApplied = new LinkedHashMap<>();
+    this.exactSegment = exactSegment;
+    this.infoStream = infoStream;
     
     for (FieldsUpdate update : packetUpdates) {
+      if (infoStream.isEnabled("USD")) {
+        infoStream.message("USD", "update: " + update);
+      }
       // add updates according to the base reader
       DocsEnum docsEnum = reader.termDocsEnum(update.term);
       if (docsEnum != null) {
@@ -101,34 +107,51 @@
       allApplied.put(update, new UpdateAtomicReader(update.directory,
           update.segmentInfo, IOContext.DEFAULT));
     }
-    
+    if (infoStream.isEnabled("USD")) {
+      infoStream.message("USD", "done init");
+    }
   }
   
   private void addUpdate(int docId, FieldsUpdate fieldsUpdate) {
     if (exactSegment && docId > fieldsUpdate.docIdUpto) {
       return;
     }
-    TreeMap<FieldsUpdate,Set<String>> prevUpdates = docIdToUpdatesMap.get(docId);
-    if (prevUpdates == null) {
-      prevUpdates = new TreeMap<>();
-      docIdToUpdatesMap.put(docId, prevUpdates);
-    } else if (fieldsUpdate.operation == Operation.REPLACE_FIELDS) {
-      // set ignored fields in previous updates
-      for (Entry<FieldsUpdate,Set<String>> addIgnore : prevUpdates.entrySet()) {
-        if (addIgnore.getValue() == null) {
-          prevUpdates.put(addIgnore.getKey(), new HashSet<>(fieldsUpdate.replacedFields));
-        } else {
-          addIgnore.getValue().addAll(fieldsUpdate.replacedFields);
+    synchronized (docIdToUpdatesMap) {
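+      // docIdToUpdatesMap and updatesToDocIdMap are mutated together, so both
+      // are guarded by the same lock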
+      TreeMap<FieldsUpdate,Set<String>> prevUpdates = docIdToUpdatesMap
+          .get(docId);
+      if (prevUpdates == null) {
+        prevUpdates = new TreeMap<>();
+        docIdToUpdatesMap.put(docId, prevUpdates);
+        if (infoStream.isEnabled("USD")) { 
+          infoStream.message("USD", "adding to doc " + docId);
+        }
+      } else if (fieldsUpdate.operation == Operation.REPLACE_FIELDS) {
+        // set ignored fields in previous updates
+        for (Entry<FieldsUpdate,Set<String>> prev : prevUpdates.entrySet()) {
+          if (prev.getValue() == null) {
+            prevUpdates.put(prev.getKey(), new HashSet<>(
+                fieldsUpdate.replacedFields));
+            if (infoStream.isEnabled("USD")) {
+              infoStream.message("USD", "new ignored fields "
+                  + fieldsUpdate.replacedFields);
+            }
+          } else {
+            prev.getValue().addAll(fieldsUpdate.replacedFields);
+            if (infoStream.isEnabled("USD")) {
+              infoStream.message("USD", "adding ignored fields "
+                  + fieldsUpdate.replacedFields);
+            }
+          }
         }
       }
+      prevUpdates.put(fieldsUpdate, null);
+      List<Integer> prevDocIds = updatesToDocIdMap.get(fieldsUpdate);
+      if (prevDocIds == null) {
+        prevDocIds = new ArrayList<Integer>();
+        updatesToDocIdMap.put(fieldsUpdate, prevDocIds);
+      }
+      prevDocIds.add(docId);
     }
-    prevUpdates.put(fieldsUpdate, null);
-    List<Integer> prevDocIds = updatesToDocIdMap.get(fieldsUpdate);
-    if (prevDocIds == null) {
-      prevDocIds = new ArrayList<Integer>();
-      updatesToDocIdMap.put(fieldsUpdate, prevDocIds);
-    }
-    prevDocIds.add(docId);
   }
   
   boolean hasUpdates() {
@@ -158,7 +181,8 @@
    */
   private void nextDocUpdate() {
     if (updatesIterator.hasNext()) {
-      Entry<Integer,TreeMap<FieldsUpdate,Set<String>>> docUpdates = updatesIterator.next();
+      Entry<Integer,TreeMap<FieldsUpdate,Set<String>>> docUpdates = updatesIterator
+          .next();
       nextDocID = docUpdates.getKey();
       nextUpdate = docUpdates.getValue();
     } else {
@@ -177,42 +201,50 @@
   
   AtomicReader nextReader() throws IOException {
     AtomicReader toReturn = null;
-    if (currDocID < nextDocID) {
-      // empty documents reader required
-      toReturn = new UpdateAtomicReader(nextDocID - currDocID);
-      currDocID = nextDocID;
-    } else if (currDocID < numDocs) {
-      // get the an actual updates reader...
-      FieldsUpdate update = nextUpdate.firstEntry().getKey();
-      Set<String> ignore = nextUpdate.remove(update);
-      toReturn = allApplied.get(update);
-      
-      // ... and if done for this document remove from updates map
-      if (nextUpdate.isEmpty()) {
-        updatesIterator.remove();
-      }
-      
-      // add generation replacements if exist
-      if (update.replacedFields != null) {
-        if (fieldGenerationReplacments == null) {
-          fieldGenerationReplacments = new HashMap<String,FieldGenerationReplacements>();
+    boolean success = false;
+    try {
+      if (currDocID < nextDocID) {
+        // empty documents reader required
+        toReturn = new UpdateAtomicReader(nextDocID - currDocID);
+        currDocID = nextDocID;
+      } else if (currDocID < numDocs) {
+        // get the actual updates reader...
+        FieldsUpdate update = nextUpdate.firstEntry().getKey();
+        nextUpdate.remove(update);
+        toReturn = allApplied.get(update);
+        
+        // ... and if done for this document remove from updates map
+        if (nextUpdate.isEmpty()) {
+          updatesIterator.remove();
         }
-        for (String fieldName : update.replacedFields) {
-          FieldGenerationReplacements fieldReplacement = fieldGenerationReplacments
-              .get(fieldName);
-          if (fieldReplacement == null) {
-            fieldReplacement = new FieldGenerationReplacements();
-            fieldGenerationReplacments.put(fieldName, fieldReplacement);
+        
+        // add generation replacements if exist
+        if (update.replacedFields != null) {
+          if (fieldGenerationReplacments == null) {
+            fieldGenerationReplacments = new HashMap<String,FieldGenerationReplacements>();
           }
-          fieldReplacement.set(currDocID, generation);
+          for (String fieldName : update.replacedFields) {
+            FieldGenerationReplacements fieldReplacement = fieldGenerationReplacments
+                .get(fieldName);
+            if (fieldReplacement == null) {
+              fieldReplacement = new FieldGenerationReplacements();
+              fieldGenerationReplacments.put(fieldName, fieldReplacement);
+            }
+            fieldReplacement.set(currDocID, generation);
+          }
         }
+        // move to next doc id
+        nextDocUpdate();
+        currDocID++;
       }
-      // move to next doc id
-      nextDocUpdate();
-      currDocID++;
+      success = true;
+      return toReturn;
+    } finally {
+      if (!success) {
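+        // something above threw: close whatever reader was obtained so it is not leaked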
+        IOUtils.closeWhileHandlingException(toReturn);
+      }
     }
     
-    return toReturn;
   }
   
   boolean isEmpty() {
@@ -238,7 +270,7 @@
      */
     UpdateAtomicReader(Directory fieldsDir, SegmentInfo segmentInfo,
         IOContext context) throws IOException {
-      core = new SegmentCoreReaders(null, segmentInfo, -1, context, -1);
+      core = new SegmentCoreReaders(null, segmentInfo, -1, context, 1);
       numDocs = 1;
     }
     
@@ -254,13 +286,13 @@
       if (core == null) {
         return false;
       }
-      DocsEnum termDocsEnum = termDocsEnum(term);
-      if (termDocsEnum == null) {
+      Terms terms = terms(term.field);
+      if (terms == null) {
         return false;
       }
-      return termDocsEnum.nextDoc() != DocIdSetIterator.NO_MORE_DOCS;
+      return terms.iterator(null).seekExact(term.bytes(), false);
     }
-
+    
     @Override
     public Fields fields() throws IOException {
       if (core == null) {
diff --git a/lucene/core/src/java/org/apache/lucene/search/ExactPhraseScorer.java b/lucene/core/src/java/org/apache/lucene/search/ExactPhraseScorer.java
index 4850b04..909cfe0 100644
--- a/lucene/core/src/java/org/apache/lucene/search/ExactPhraseScorer.java
+++ b/lucene/core/src/java/org/apache/lucene/search/ExactPhraseScorer.java
@@ -56,10 +56,10 @@
   private int docID = -1;
   private int freq;
 
-  private final Similarity.ExactSimScorer docScorer;
+  private final Similarity.SimScorer docScorer;
   
   ExactPhraseScorer(Weight weight, PhraseQuery.PostingsAndFreq[] postings,
-                    Similarity.ExactSimScorer docScorer) throws IOException {
+                    Similarity.SimScorer docScorer) throws IOException {
     super(weight);
     this.docScorer = docScorer;
 
diff --git a/lucene/core/src/java/org/apache/lucene/search/FieldCacheImpl.java b/lucene/core/src/java/org/apache/lucene/search/FieldCacheImpl.java
index b94ad6e..89ab855 100644
--- a/lucene/core/src/java/org/apache/lucene/search/FieldCacheImpl.java
+++ b/lucene/core/src/java/org/apache/lucene/search/FieldCacheImpl.java
@@ -45,6 +45,7 @@
 import org.apache.lucene.util.FixedBitSet;
 import org.apache.lucene.util.PagedBytes;
 import org.apache.lucene.util.packed.GrowableWriter;
+import org.apache.lucene.util.packed.MonotonicAppendingLongBuffer;
 import org.apache.lucene.util.packed.PackedInts;
 
 /**
@@ -1069,11 +1070,11 @@
 
   public static class SortedDocValuesImpl extends SortedDocValues {
     private final PagedBytes.Reader bytes;
-    private final PackedInts.Reader termOrdToBytesOffset;
+    private final MonotonicAppendingLongBuffer termOrdToBytesOffset;
     private final PackedInts.Reader docToTermOrd;
     private final int numOrd;
 
-    public SortedDocValuesImpl(PagedBytes.Reader bytes, PackedInts.Reader termOrdToBytesOffset, PackedInts.Reader docToTermOrd, int numOrd) {
+    public SortedDocValuesImpl(PagedBytes.Reader bytes, MonotonicAppendingLongBuffer termOrdToBytesOffset, PackedInts.Reader docToTermOrd, int numOrd) {
       this.bytes = bytes;
       this.docToTermOrd = docToTermOrd;
       this.termOrdToBytesOffset = termOrdToBytesOffset;
@@ -1144,7 +1145,6 @@
 
       final PagedBytes bytes = new PagedBytes(15);
 
-      int startBytesBPV;
       int startTermsBPV;
       int startNumUniqueTerms;
 
@@ -1169,22 +1169,19 @@
             numUniqueTerms = termCountHardLimit;
           }
 
-          startBytesBPV = PackedInts.bitsRequired(numUniqueTerms*4);
           startTermsBPV = PackedInts.bitsRequired(numUniqueTerms);
 
           startNumUniqueTerms = (int) numUniqueTerms;
         } else {
-          startBytesBPV = 1;
           startTermsBPV = 1;
           startNumUniqueTerms = 1;
         }
       } else {
-        startBytesBPV = 1;
         startTermsBPV = 1;
         startNumUniqueTerms = 1;
       }
 
-      GrowableWriter termOrdToBytesOffset = new GrowableWriter(startBytesBPV, 1+startNumUniqueTerms, acceptableOverheadRatio);
+      MonotonicAppendingLongBuffer termOrdToBytesOffset = new MonotonicAppendingLongBuffer();
       final GrowableWriter docToTermOrd = new GrowableWriter(startTermsBPV, maxDoc, acceptableOverheadRatio);
 
       int termOrd = 0;
@@ -1204,13 +1201,7 @@
             break;
           }
 
-          if (termOrd == termOrdToBytesOffset.size()) {
-            // NOTE: this code only runs if the incoming
-            // reader impl doesn't implement
-            // size (which should be uncommon)
-            termOrdToBytesOffset = termOrdToBytesOffset.resize(ArrayUtil.oversize(1+termOrd, 1));
-          }
-          termOrdToBytesOffset.set(termOrd, bytes.copyUsingLengthPrefix(term));
+          termOrdToBytesOffset.add(bytes.copyUsingLengthPrefix(term));
           docs = termsEnum.docs(null, docs, DocsEnum.FLAG_NONE);
           while (true) {
             final int docID = docs.nextDoc();
@@ -1222,14 +1213,10 @@
           }
           termOrd++;
         }
-
-        if (termOrdToBytesOffset.size() > termOrd) {
-          termOrdToBytesOffset = termOrdToBytesOffset.resize(termOrd);
-        }
       }
 
       // maybe an int-only impl?
-      return new SortedDocValuesImpl(bytes.freeze(true), termOrdToBytesOffset.getMutable(), docToTermOrd.getMutable(), termOrd);
+      return new SortedDocValuesImpl(bytes.freeze(true), termOrdToBytesOffset, docToTermOrd.getMutable(), termOrd);
     }
   }
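
The hunk above replaces a pre-sized GrowableWriter with MonotonicAppendingLongBuffer for the term-ord to bytes-offset table, which simply appends monotonically increasing offsets as terms are enumerated in order. A minimal sketch of that append/read pattern, with made-up offsets:

    import org.apache.lucene.util.packed.MonotonicAppendingLongBuffer;

    class MonotonicOffsetsSketch {
      public static void main(String[] args) {
        MonotonicAppendingLongBuffer termOrdToBytesOffset = new MonotonicAppendingLongBuffer();
        // offsets returned by PagedBytes.copyUsingLengthPrefix only ever grow, so
        // they pack compactly without a backing array sized up front
        termOrdToBytesOffset.add(0L);   // term ord 0
        termOrdToBytesOffset.add(12L);  // term ord 1
        termOrdToBytesOffset.add(27L);  // term ord 2
        System.out.println(termOrdToBytesOffset.get(1)); // prints 12
      }
    }
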
 
diff --git a/lucene/core/src/java/org/apache/lucene/search/MultiPhraseQuery.java b/lucene/core/src/java/org/apache/lucene/search/MultiPhraseQuery.java
index ce446a8..92837ec 100644
--- a/lucene/core/src/java/org/apache/lucene/search/MultiPhraseQuery.java
+++ b/lucene/core/src/java/org/apache/lucene/search/MultiPhraseQuery.java
@@ -31,7 +31,7 @@
 import org.apache.lucene.index.TermState;
 import org.apache.lucene.index.Terms;
 import org.apache.lucene.index.TermsEnum;
-import org.apache.lucene.search.similarities.Similarity.SloppySimScorer;
+import org.apache.lucene.search.similarities.Similarity.SimScorer;
 import org.apache.lucene.search.similarities.Similarity;
 import org.apache.lucene.util.ArrayUtil;
 import org.apache.lucene.util.Bits;
@@ -245,14 +245,14 @@
       }
 
       if (slop == 0) {
-        ExactPhraseScorer s = new ExactPhraseScorer(this, postingsFreqs, similarity.exactSimScorer(stats, context));
+        ExactPhraseScorer s = new ExactPhraseScorer(this, postingsFreqs, similarity.simScorer(stats, context));
         if (s.noDocs) {
           return null;
         } else {
           return s;
         }
       } else {
-        return new SloppyPhraseScorer(this, postingsFreqs, slop, similarity.sloppySimScorer(stats, context));
+        return new SloppyPhraseScorer(this, postingsFreqs, slop, similarity.simScorer(stats, context));
       }
     }
 
@@ -263,7 +263,7 @@
         int newDoc = scorer.advance(doc);
         if (newDoc == doc) {
           float freq = slop == 0 ? scorer.freq() : ((SloppyPhraseScorer)scorer).sloppyFreq();
-          SloppySimScorer docScorer = similarity.sloppySimScorer(stats, context);
+          SimScorer docScorer = similarity.simScorer(stats, context);
           ComplexExplanation result = new ComplexExplanation();
           result.setDescription("weight("+getQuery()+" in "+doc+") [" + similarity.getClass().getSimpleName() + "], result of:");
           Explanation scoreExplanation = docScorer.explain(doc, new Explanation(freq, "phraseFreq=" + freq));
diff --git a/lucene/core/src/java/org/apache/lucene/search/PhraseQuery.java b/lucene/core/src/java/org/apache/lucene/search/PhraseQuery.java
index 0911af4..b48a1dc 100644
--- a/lucene/core/src/java/org/apache/lucene/search/PhraseQuery.java
+++ b/lucene/core/src/java/org/apache/lucene/search/PhraseQuery.java
@@ -33,7 +33,7 @@
 import org.apache.lucene.index.TermState;
 import org.apache.lucene.index.Terms;
 import org.apache.lucene.index.TermsEnum;
-import org.apache.lucene.search.similarities.Similarity.SloppySimScorer;
+import org.apache.lucene.search.similarities.Similarity.SimScorer;
 import org.apache.lucene.search.similarities.Similarity;
 import org.apache.lucene.util.ArrayUtil;
 import org.apache.lucene.util.Bits;
@@ -282,7 +282,7 @@
       }
 
       if (slop == 0) {  // optimize exact case
-        ExactPhraseScorer s = new ExactPhraseScorer(this, postingsFreqs, similarity.exactSimScorer(stats, context));
+        ExactPhraseScorer s = new ExactPhraseScorer(this, postingsFreqs, similarity.simScorer(stats, context));
         if (s.noDocs) {
           return null;
         } else {
@@ -290,7 +290,7 @@
         }
       } else {
         return
-          new SloppyPhraseScorer(this, postingsFreqs, slop, similarity.sloppySimScorer(stats, context));
+          new SloppyPhraseScorer(this, postingsFreqs, slop, similarity.simScorer(stats, context));
       }
     }
     
@@ -306,7 +306,7 @@
         int newDoc = scorer.advance(doc);
         if (newDoc == doc) {
           float freq = slop == 0 ? scorer.freq() : ((SloppyPhraseScorer)scorer).sloppyFreq();
-          SloppySimScorer docScorer = similarity.sloppySimScorer(stats, context);
+          SimScorer docScorer = similarity.simScorer(stats, context);
           ComplexExplanation result = new ComplexExplanation();
           result.setDescription("weight("+getQuery()+" in "+doc+") [" + similarity.getClass().getSimpleName() + "], result of:");
           Explanation scoreExplanation = docScorer.explain(doc, new Explanation(freq, "phraseFreq=" + freq));
diff --git a/lucene/core/src/java/org/apache/lucene/search/SloppyPhraseScorer.java b/lucene/core/src/java/org/apache/lucene/search/SloppyPhraseScorer.java
index a4ad72a..0667d8b 100644
--- a/lucene/core/src/java/org/apache/lucene/search/SloppyPhraseScorer.java
+++ b/lucene/core/src/java/org/apache/lucene/search/SloppyPhraseScorer.java
@@ -34,7 +34,7 @@
 
   private float sloppyFreq; //phrase frequency in current doc as computed by phraseFreq().
 
-  private final Similarity.SloppySimScorer docScorer;
+  private final Similarity.SimScorer docScorer;
   
   private final int slop;
   private final int numPostings;
@@ -52,7 +52,7 @@
   private final long cost;
   
   SloppyPhraseScorer(Weight weight, PhraseQuery.PostingsAndFreq[] postings,
-      int slop, Similarity.SloppySimScorer docScorer) {
+      int slop, Similarity.SimScorer docScorer) {
     super(weight);
     this.docScorer = docScorer;
     this.slop = slop;
diff --git a/lucene/core/src/java/org/apache/lucene/search/TermQuery.java b/lucene/core/src/java/org/apache/lucene/search/TermQuery.java
index fb5bfcc..099e90b 100644
--- a/lucene/core/src/java/org/apache/lucene/search/TermQuery.java
+++ b/lucene/core/src/java/org/apache/lucene/search/TermQuery.java
@@ -29,7 +29,7 @@
 import org.apache.lucene.index.TermContext;
 import org.apache.lucene.index.TermState;
 import org.apache.lucene.index.TermsEnum;
-import org.apache.lucene.search.similarities.Similarity.ExactSimScorer;
+import org.apache.lucene.search.similarities.Similarity.SimScorer;
 import org.apache.lucene.search.similarities.Similarity;
 import org.apache.lucene.util.Bits;
 import org.apache.lucene.util.ToStringUtils;
@@ -84,7 +84,7 @@
       }
       DocsEnum docs = termsEnum.docs(acceptDocs, null);
       assert docs != null;
-      return new TermScorer(this, docs, similarity.exactSimScorer(stats, context));
+      return new TermScorer(this, docs, similarity.simScorer(stats, context));
     }
     
     /**
@@ -116,7 +116,7 @@
         int newDoc = scorer.advance(doc);
         if (newDoc == doc) {
           float freq = scorer.freq();
-          ExactSimScorer docScorer = similarity.exactSimScorer(stats, context);
+          SimScorer docScorer = similarity.simScorer(stats, context);
           ComplexExplanation result = new ComplexExplanation();
           result.setDescription("weight("+getQuery()+" in "+doc+") [" + similarity.getClass().getSimpleName() + "], result of:");
           Explanation scoreExplanation = docScorer.explain(doc, new Explanation(freq, "termFreq=" + freq));
diff --git a/lucene/core/src/java/org/apache/lucene/search/TermScorer.java b/lucene/core/src/java/org/apache/lucene/search/TermScorer.java
index 7623914..6697524 100644
--- a/lucene/core/src/java/org/apache/lucene/search/TermScorer.java
+++ b/lucene/core/src/java/org/apache/lucene/search/TermScorer.java
@@ -26,7 +26,7 @@
  */
 final class TermScorer extends Scorer {
   private final DocsEnum docsEnum;
-  private final Similarity.ExactSimScorer docScorer;
+  private final Similarity.SimScorer docScorer;
   
   /**
    * Construct a <code>TermScorer</code>.
@@ -36,10 +36,10 @@
    * @param td
    *          An iterator over the documents matching the <code>Term</code>.
    * @param docScorer
-   *          The </code>Similarity.ExactSimScorer</code> implementation 
+   *          The <code>Similarity.SimScorer</code> implementation
    *          to be used for score computations.
    */
-  TermScorer(Weight weight, DocsEnum td, Similarity.ExactSimScorer docScorer) {
+  TermScorer(Weight weight, DocsEnum td, Similarity.SimScorer docScorer) {
     super(weight);
     this.docScorer = docScorer;
     this.docsEnum = td;
diff --git a/lucene/core/src/java/org/apache/lucene/search/package.html b/lucene/core/src/java/org/apache/lucene/search/package.html
index 53ebf87..4be5eba 100644
--- a/lucene/core/src/java/org/apache/lucene/search/package.html
+++ b/lucene/core/src/java/org/apache/lucene/search/package.html
@@ -441,9 +441,8 @@
                   explain(AtomicReaderContext context, int doc)} &mdash; Provide a means for explaining why a given document was
                 scored the way it was.
                 Typically a weight such as TermWeight
-                that scores via a {@link org.apache.lucene.search.similarities.Similarity Similarity} will make use of the Similarity's implementations:
-                {@link org.apache.lucene.search.similarities.Similarity.ExactSimScorer#explain(int, Explanation) ExactSimScorer#explain(int doc, Explanation freq)},
-                and {@link org.apache.lucene.search.similarities.Similarity.SloppySimScorer#explain(int, Explanation) SloppySimScorer#explain(int doc, Explanation freq)}
+                that scores via a {@link org.apache.lucene.search.similarities.Similarity Similarity} will make use of the Similarity's implementation:
+                {@link org.apache.lucene.search.similarities.Similarity.SimScorer#explain(int, Explanation) SimScorer#explain(int doc, Explanation freq)}.
                 </li>
              </li>
         </ol>
@@ -468,7 +467,7 @@
                 {@link org.apache.lucene.search.Scorer#score score()} &mdash; Return the score of the
                 current document. This value can be determined in any appropriate way for an application. For instance, the
                 {@link org.apache.lucene.search.TermScorer TermScorer} simply defers to the configured Similarity:
-                {@link org.apache.lucene.search.similarities.Similarity.ExactSimScorer#score(int, int) ExactSimScorer.score(int doc, int freq)}.
+                {@link org.apache.lucene.search.similarities.Similarity.SimScorer#score(int, float) SimScorer.score(int doc, float freq)}.
             </li>
             <li>
                 {@link org.apache.lucene.search.Scorer#freq freq()} &mdash; Returns the number of matches
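
The two package-level bullets above both funnel into the unified scorer. As a minimal illustration of the contract they describe (the class and method names below are invented for this sketch, they are not part of the patch), score() and explain() end up as two calls on the same SimScorer:

    import org.apache.lucene.search.Explanation;
    import org.apache.lucene.search.similarities.Similarity.SimScorer;

    // Invented helper, only to show the two calls the package docs refer to.
    final class SimScorerUsageSketch {
      static float scoreOneDoc(SimScorer docScorer, int docID, float freq) {
        return docScorer.score(docID, freq);                                          // search time
      }
      static Explanation explainOneDoc(SimScorer docScorer, int docID, float freq) {
        return docScorer.explain(docID, new Explanation(freq, "termFreq=" + freq));   // explain time
      }
    }
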
diff --git a/lucene/core/src/java/org/apache/lucene/search/payloads/PayloadNearQuery.java b/lucene/core/src/java/org/apache/lucene/search/payloads/PayloadNearQuery.java
index ef2c6e5..31034ea 100644
--- a/lucene/core/src/java/org/apache/lucene/search/payloads/PayloadNearQuery.java
+++ b/lucene/core/src/java/org/apache/lucene/search/payloads/PayloadNearQuery.java
@@ -25,7 +25,7 @@
 import org.apache.lucene.search.Weight;
 import org.apache.lucene.search.similarities.DefaultSimilarity;
 import org.apache.lucene.search.similarities.Similarity;
-import org.apache.lucene.search.similarities.Similarity.SloppySimScorer;
+import org.apache.lucene.search.similarities.Similarity.SimScorer;
 import org.apache.lucene.search.spans.NearSpansOrdered;
 import org.apache.lucene.search.spans.NearSpansUnordered;
 import org.apache.lucene.search.spans.SpanNearQuery;
@@ -53,7 +53,7 @@
  * <p/>
  * Payload scores are aggregated using a pluggable {@link PayloadFunction}.
  * 
- * @see org.apache.lucene.search.similarities.Similarity.SloppySimScorer#computePayloadFactor(int, int, int, BytesRef)
+ * @see org.apache.lucene.search.similarities.Similarity.SimScorer#computePayloadFactor(int, int, int, BytesRef)
  */
 public class PayloadNearQuery extends SpanNearQuery {
   protected String fieldName;
@@ -151,7 +151,7 @@
     public Scorer scorer(AtomicReaderContext context, boolean scoreDocsInOrder,
         boolean topScorer, Bits acceptDocs) throws IOException {
       return new PayloadNearSpanScorer(query.getSpans(context, acceptDocs, termContexts), this,
-          similarity, similarity.sloppySimScorer(stats, context));
+          similarity, similarity.simScorer(stats, context));
     }
     
     @Override
@@ -161,7 +161,7 @@
         int newDoc = scorer.advance(doc);
         if (newDoc == doc) {
           float freq = scorer.freq();
-          SloppySimScorer docScorer = similarity.sloppySimScorer(stats, context);
+          SimScorer docScorer = similarity.simScorer(stats, context);
           Explanation expl = new Explanation();
           expl.setDescription("weight("+getQuery()+" in "+doc+") [" + similarity.getClass().getSimpleName() + "], result of:");
           Explanation scoreExplanation = docScorer.explain(doc, new Explanation(freq, "phraseFreq=" + freq));
@@ -190,7 +190,7 @@
     private int payloadsSeen;
 
     protected PayloadNearSpanScorer(Spans spans, Weight weight,
-        Similarity similarity, Similarity.SloppySimScorer docScorer) throws IOException {
+        Similarity similarity, Similarity.SimScorer docScorer) throws IOException {
       super(spans, weight, docScorer);
       this.spans = spans;
     }
diff --git a/lucene/core/src/java/org/apache/lucene/search/payloads/PayloadTermQuery.java b/lucene/core/src/java/org/apache/lucene/search/payloads/PayloadTermQuery.java
index cab55df..b263999 100644
--- a/lucene/core/src/java/org/apache/lucene/search/payloads/PayloadTermQuery.java
+++ b/lucene/core/src/java/org/apache/lucene/search/payloads/PayloadTermQuery.java
@@ -27,7 +27,7 @@
 import org.apache.lucene.search.ComplexExplanation;
 import org.apache.lucene.search.similarities.DefaultSimilarity;
 import org.apache.lucene.search.similarities.Similarity;
-import org.apache.lucene.search.similarities.Similarity.SloppySimScorer;
+import org.apache.lucene.search.similarities.Similarity.SimScorer;
 import org.apache.lucene.search.spans.SpanQuery;
 import org.apache.lucene.search.spans.TermSpans;
 import org.apache.lucene.search.spans.SpanTermQuery;
@@ -49,7 +49,7 @@
  * which returns 1 by default.
  * <p/>
  * Payload scores are aggregated using a pluggable {@link PayloadFunction}.
- * @see org.apache.lucene.search.similarities.Similarity.SloppySimScorer#computePayloadFactor(int, int, int, BytesRef)
+ * @see org.apache.lucene.search.similarities.Similarity.SimScorer#computePayloadFactor(int, int, int, BytesRef)
  **/
 public class PayloadTermQuery extends SpanTermQuery {
   protected PayloadFunction function;
@@ -82,7 +82,7 @@
     public Scorer scorer(AtomicReaderContext context, boolean scoreDocsInOrder,
         boolean topScorer, Bits acceptDocs) throws IOException {
       return new PayloadTermSpanScorer((TermSpans) query.getSpans(context, acceptDocs, termContexts),
-          this, similarity.sloppySimScorer(stats, context));
+          this, similarity.simScorer(stats, context));
     }
 
     protected class PayloadTermSpanScorer extends SpanScorer {
@@ -91,7 +91,7 @@
       protected int payloadsSeen;
       private final TermSpans termSpans;
 
-      public PayloadTermSpanScorer(TermSpans spans, Weight weight, Similarity.SloppySimScorer docScorer) throws IOException {
+      public PayloadTermSpanScorer(TermSpans spans, Weight weight, Similarity.SimScorer docScorer) throws IOException {
         super(spans, weight, docScorer);
         termSpans = spans;
       }
@@ -182,7 +182,7 @@
         int newDoc = scorer.advance(doc);
         if (newDoc == doc) {
           float freq = scorer.sloppyFreq();
-          SloppySimScorer docScorer = similarity.sloppySimScorer(stats, context);
+          SimScorer docScorer = similarity.simScorer(stats, context);
           Explanation expl = new Explanation();
           expl.setDescription("weight("+getQuery()+" in "+doc+") [" + similarity.getClass().getSimpleName() + "], result of:");
           Explanation scoreExplanation = docScorer.explain(doc, new Explanation(freq, "phraseFreq=" + freq));
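
Both payload queries now point at SimScorer#computePayloadFactor instead of the sloppy-only variant. For the TF/IDF family that factor is, as far as this editor recalls, delegated to TFIDFSimilarity.scorePayload (the hook "which returns 1 by default" mentioned in the javadoc above), so payload-aware ranking can be sketched as a DefaultSimilarity subclass. The payload layout assumed here (a 4-byte big-endian float) is an assumption of the sketch, not something the patch prescribes:

    import org.apache.lucene.search.similarities.DefaultSimilarity;
    import org.apache.lucene.util.BytesRef;

    // Sketch: treat the term's payload as a per-position boost.
    public class PayloadBoostSimilarity extends DefaultSimilarity {
      @Override
      public float scorePayload(int doc, int start, int end, BytesRef payload) {
        if (payload == null || payload.length < 4) {
          return 1f;                                   // no payload: behave like the default
        }
        final byte[] b = payload.bytes;
        final int o = payload.offset;
        final int bits = ((b[o] & 0xFF) << 24) | ((b[o + 1] & 0xFF) << 16)
                       | ((b[o + 2] & 0xFF) << 8) | (b[o + 3] & 0xFF);
        return Float.intBitsToFloat(bits);             // assumed big-endian float payload
      }
    }
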
diff --git a/lucene/core/src/java/org/apache/lucene/search/similarities/BM25Similarity.java b/lucene/core/src/java/org/apache/lucene/search/similarities/BM25Similarity.java
index e612aa4..d062015 100644
--- a/lucene/core/src/java/org/apache/lucene/search/similarities/BM25Similarity.java
+++ b/lucene/core/src/java/org/apache/lucene/search/similarities/BM25Similarity.java
@@ -212,80 +212,18 @@
   }
 
   @Override
-  public final ExactSimScorer exactSimScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
+  public final SimScorer simScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
     BM25Stats bm25stats = (BM25Stats) stats;
-    final NumericDocValues norms = context.reader().getNormValues(bm25stats.field);
-    return norms == null 
-      ? new ExactBM25DocScorerNoNorms(bm25stats)
-      : new ExactBM25DocScorer(bm25stats, norms);
-  }
-
-  @Override
-  public final SloppySimScorer sloppySimScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
-    BM25Stats bm25stats = (BM25Stats) stats;
-    return new SloppyBM25DocScorer(bm25stats, context.reader().getNormValues(bm25stats.field));
+    return new BM25DocScorer(bm25stats, context.reader().getNormValues(bm25stats.field));
   }
   
-  private class ExactBM25DocScorer extends ExactSimScorer {
-    private final BM25Stats stats;
-    private final float weightValue;
-    private final NumericDocValues norms;
-    private final float[] cache;
-    
-    ExactBM25DocScorer(BM25Stats stats, NumericDocValues norms) throws IOException {
-      assert norms != null;
-      this.stats = stats;
-      this.weightValue = stats.weight * (k1 + 1); // boost * idf * (k1 + 1)
-      this.cache = stats.cache;
-      this.norms = norms;
-    }
-    
-    @Override
-    public float score(int doc, int freq) {
-      return weightValue * freq / (freq + cache[(byte)norms.get(doc) & 0xFF]);
-    }
-    
-    @Override
-    public Explanation explain(int doc, Explanation freq) {
-      return explainScore(doc, freq, stats, norms);
-    }
-  }
-  
-  /** there are no norms, we act as if b=0 */
-  private class ExactBM25DocScorerNoNorms extends ExactSimScorer {
-    private final BM25Stats stats;
-    private final float weightValue;
-    private static final int SCORE_CACHE_SIZE = 32;
-    private float[] scoreCache = new float[SCORE_CACHE_SIZE];
-
-    ExactBM25DocScorerNoNorms(BM25Stats stats) {
-      this.stats = stats;
-      this.weightValue = stats.weight * (k1 + 1); // boost * idf * (k1 + 1)
-      for (int i = 0; i < SCORE_CACHE_SIZE; i++)
-        scoreCache[i] = weightValue * i / (i + k1);
-    }
-    
-    @Override
-    public float score(int doc, int freq) {
-      // TODO: maybe score cache is more trouble than its worth?
-      return freq < SCORE_CACHE_SIZE        // check cache
-        ? scoreCache[freq]                  // cache hit
-        : weightValue * freq / (freq + k1); // cache miss
-    }
-    
-    @Override
-    public Explanation explain(int doc, Explanation freq) {
-      return explainScore(doc, freq, stats, null);
-    }
-  }
-  
-  private class SloppyBM25DocScorer extends SloppySimScorer {
+  private class BM25DocScorer extends SimScorer {
     private final BM25Stats stats;
     private final float weightValue; // boost * idf * (k1 + 1)
     private final NumericDocValues norms;
     private final float[] cache;
     
-    SloppyBM25DocScorer(BM25Stats stats, NumericDocValues norms) throws IOException {
+    BM25DocScorer(BM25Stats stats, NumericDocValues norms) throws IOException {
       this.stats = stats;
       this.weightValue = stats.weight * (k1 + 1);
       this.cache = stats.cache;
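
With the exact and no-norms specializations folded into a single BM25DocScorer, every document goes through the same formula; only the k1 * (1 - b + b * |d| / avgdl) part is precomputed per encoded norm byte in the cache array shown above. A standalone restatement of that arithmetic (plain Java, not the Lucene class):

    // weight is assumed to already contain boost * idf, as BM25Stats.weight does.
    final class Bm25Sketch {
      static float bm25Term(float weight, float freq, float k1, float b,
                            float docLen, float avgDocLen) {
        final float norm = k1 * (1 - b + b * docLen / avgDocLen);  // what cache[norm byte] holds
        return weight * (k1 + 1) * freq / (freq + norm);           // weightValue * freq / (freq + cache[...])
      }
    }
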
diff --git a/lucene/core/src/java/org/apache/lucene/search/similarities/MultiSimilarity.java b/lucene/core/src/java/org/apache/lucene/search/similarities/MultiSimilarity.java
index 28c6d80..507c568 100644
--- a/lucene/core/src/java/org/apache/lucene/search/similarities/MultiSimilarity.java
+++ b/lucene/core/src/java/org/apache/lucene/search/similarities/MultiSimilarity.java
@@ -57,60 +57,25 @@
   }
 
   @Override
-  public ExactSimScorer exactSimScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
-    ExactSimScorer subScorers[] = new ExactSimScorer[sims.length];
+  public SimScorer simScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
+    SimScorer subScorers[] = new SimScorer[sims.length];
     for (int i = 0; i < subScorers.length; i++) {
-      subScorers[i] = sims[i].exactSimScorer(((MultiStats)stats).subStats[i], context);
+      subScorers[i] = sims[i].simScorer(((MultiStats)stats).subStats[i], context);
     }
-    return new MultiExactDocScorer(subScorers);
-  }
-
-  @Override
-  public SloppySimScorer sloppySimScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
-    SloppySimScorer subScorers[] = new SloppySimScorer[sims.length];
-    for (int i = 0; i < subScorers.length; i++) {
-      subScorers[i] = sims[i].sloppySimScorer(((MultiStats)stats).subStats[i], context);
-    }
-    return new MultiSloppyDocScorer(subScorers);
+    return new MultiSimScorer(subScorers);
   }
   
-  static class MultiExactDocScorer extends ExactSimScorer {
-    private final ExactSimScorer subScorers[];
+  static class MultiSimScorer extends SimScorer {
+    private final SimScorer subScorers[];
     
-    MultiExactDocScorer(ExactSimScorer subScorers[]) {
-      this.subScorers = subScorers;
-    }
-    
-    @Override
-    public float score(int doc, int freq) {
-      float sum = 0.0f;
-      for (ExactSimScorer subScorer : subScorers) {
-        sum += subScorer.score(doc, freq);
-      }
-      return sum;
-    }
-
-    @Override
-    public Explanation explain(int doc, Explanation freq) {
-      Explanation expl = new Explanation(score(doc, (int)freq.getValue()), "sum of:");
-      for (ExactSimScorer subScorer : subScorers) {
-        expl.addDetail(subScorer.explain(doc, freq));
-      }
-      return expl;
-    }
-  }
-  
-  static class MultiSloppyDocScorer extends SloppySimScorer {
-    private final SloppySimScorer subScorers[];
-    
-    MultiSloppyDocScorer(SloppySimScorer subScorers[]) {
+    MultiSimScorer(SimScorer subScorers[]) {
       this.subScorers = subScorers;
     }
     
     @Override
     public float score(int doc, float freq) {
       float sum = 0.0f;
-      for (SloppySimScorer subScorer : subScorers) {
+      for (SimScorer subScorer : subScorers) {
         sum += subScorer.score(doc, freq);
       }
       return sum;
@@ -119,7 +84,7 @@
     @Override
     public Explanation explain(int doc, Explanation freq) {
       Explanation expl = new Explanation(score(doc, freq.getValue()), "sum of:");
-      for (SloppySimScorer subScorer : subScorers) {
+      for (SimScorer subScorer : subScorers) {
         expl.addDetail(subScorer.explain(doc, freq));
       }
       return expl;
diff --git a/lucene/core/src/java/org/apache/lucene/search/similarities/PerFieldSimilarityWrapper.java b/lucene/core/src/java/org/apache/lucene/search/similarities/PerFieldSimilarityWrapper.java
index 7856be9..17a461e 100644
--- a/lucene/core/src/java/org/apache/lucene/search/similarities/PerFieldSimilarityWrapper.java
+++ b/lucene/core/src/java/org/apache/lucene/search/similarities/PerFieldSimilarityWrapper.java
@@ -54,15 +54,9 @@
   }
 
   @Override
-  public final ExactSimScorer exactSimScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
+  public final SimScorer simScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
     PerFieldSimWeight perFieldWeight = (PerFieldSimWeight) weight;
-    return perFieldWeight.delegate.exactSimScorer(perFieldWeight.delegateWeight, context);
-  }
-
-  @Override
-  public final SloppySimScorer sloppySimScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
-    PerFieldSimWeight perFieldWeight = (PerFieldSimWeight) weight;
-    return perFieldWeight.delegate.sloppySimScorer(perFieldWeight.delegateWeight, context);
+    return perFieldWeight.delegate.simScorer(perFieldWeight.delegateWeight, context);
   }
   
   /** 
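
PerFieldSimilarityWrapper now forwards the single simScorer() call to whatever get(field) returns, so per-field scoring needs no scorer-type plumbing at all. A typical subclass, sketched with illustrative field names and model choices:

    import org.apache.lucene.search.similarities.BM25Similarity;
    import org.apache.lucene.search.similarities.DefaultSimilarity;
    import org.apache.lucene.search.similarities.PerFieldSimilarityWrapper;
    import org.apache.lucene.search.similarities.Similarity;

    // Field names and model choices are examples only.
    public class PerFieldSimilaritySketch extends PerFieldSimilarityWrapper {
      private final Similarity titleSim = new BM25Similarity();
      private final Similarity defaultSim = new DefaultSimilarity();

      @Override
      public Similarity get(String field) {
        return "title".equals(field) ? titleSim : defaultSim;
      }
    }
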
diff --git a/lucene/core/src/java/org/apache/lucene/search/similarities/Similarity.java b/lucene/core/src/java/org/apache/lucene/search/similarities/Similarity.java
index 16435e5..b4ff8bb 100644
--- a/lucene/core/src/java/org/apache/lucene/search/similarities/Similarity.java
+++ b/lucene/core/src/java/org/apache/lucene/search/similarities/Similarity.java
@@ -88,10 +88,8 @@
  *       is called for each query leaf node, {@link Similarity#queryNorm(float)} is called for the top-level
  *       query, and finally {@link Similarity.SimWeight#normalize(float, float)} passes down the normalization value
  *       and any top-level boosts (e.g. from enclosing {@link BooleanQuery}s).
- *   <li>For each segment in the index, the Query creates a {@link #exactSimScorer(SimWeight, AtomicReaderContext)}
- *       (for queries with exact frequencies such as TermQuerys and exact PhraseQueries) or a 
- *       {@link #sloppySimScorer(SimWeight, AtomicReaderContext)} (for queries with sloppy frequencies such as
- *       SpanQuerys and sloppy PhraseQueries). The score() method is called for each matching document.
+ *   <li>For each segment in the index, the Query creates a {@link #simScorer(SimWeight, AtomicReaderContext)}.
+ *       The score() method is called for each matching document.
  * </ol>
  * <p>
  * <a name="explaintime"/>
@@ -166,76 +164,31 @@
    * @return SimWeight object with the information this Similarity needs to score a query.
    */
   public abstract SimWeight computeWeight(float queryBoost, CollectionStatistics collectionStats, TermStatistics... termStats);
-  
+
   /**
-   * Creates a new {@link Similarity.ExactSimScorer} to score matching documents from a segment of the inverted index.
-   * @param weight collection information from {@link #computeWeight(float, CollectionStatistics, TermStatistics...)}
-   * @param context segment of the inverted index to be scored.
-   * @return ExactSimScorer for scoring documents across <code>context</code>
-   * @throws IOException if there is a low-level I/O error
-   */
-  public abstract ExactSimScorer exactSimScorer(SimWeight weight, AtomicReaderContext context) throws IOException;
-  
-  /**
-   * Creates a new {@link Similarity.SloppySimScorer} to score matching documents from a segment of the inverted index.
+   * Creates a new {@link Similarity.SimScorer} to score matching documents from a segment of the inverted index.
    * @param weight collection information from {@link #computeWeight(float, CollectionStatistics, TermStatistics...)}
    * @param context segment of the inverted index to be scored.
-   * @return SloppySimScorer for scoring documents across <code>context</code>
+   * @return SimScorer for scoring documents across <code>context</code>
    * @throws IOException if there is a low-level I/O error
    */
-  public abstract SloppySimScorer sloppySimScorer(SimWeight weight, AtomicReaderContext context) throws IOException;
+  public abstract SimScorer simScorer(SimWeight weight, AtomicReaderContext context) throws IOException;
   
   /**
-   * API for scoring exact queries such as {@link TermQuery} and 
-   * exact {@link PhraseQuery}.
-   * <p>
-   * Frequencies are integers (the term or phrase frequency within the document)
-   */
-  public static abstract class ExactSimScorer {
-    
-    /**
-     * Sole constructor. (For invocation by subclass 
-     * constructors, typically implicit.)
-     */
-    public ExactSimScorer() {}
-
-    /**
-     * Score a single document
-     * @param doc document id
-     * @param freq term frequency
-     * @return document's score
-     */
-    public abstract float score(int doc, int freq);
-    
-    /**
-     * Explain the score for a single document
-     * @param doc document id
-     * @param freq Explanation of how the term frequency was computed
-     * @return document's score
-     */
-    public Explanation explain(int doc, Explanation freq) {
-      Explanation result = new Explanation(score(doc, (int)freq.getValue()), 
-          "score(doc=" + doc + ",freq=" + freq.getValue() +"), with freq of:");
-      result.addDetail(freq);
-      return result;
-    }
-  }
-  
-  /**
-   * API for scoring "sloppy" queries such as {@link SpanQuery} and 
-   * sloppy {@link PhraseQuery}.
+   * API for scoring "sloppy" queries such as {@link TermQuery},
+   * {@link SpanQuery}, and {@link PhraseQuery}.
    * <p>
    * Frequencies are floating-point values: an approximate 
    * within-document frequency adjusted for "sloppiness" by 
-   * {@link SloppySimScorer#computeSlopFactor(int)}.
+   * {@link SimScorer#computeSlopFactor(int)}.
    */
-  public static abstract class SloppySimScorer {
+  public static abstract class SimScorer {
     
     /**
      * Sole constructor. (For invocation by subclass 
      * constructors, typically implicit.)
      */
-    public SloppySimScorer() {}
+    public SimScorer() {}
 
     /**
      * Score a single document
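
The lifecycle spelled out in the class javadoc (computeWeight per query leaf, normalization, then one simScorer per segment, score per matching document) now requires implementing a single scorer type. A compressed skeleton against the post-LUCENE-4933 signatures; the constant scoring is only a placeholder and the class name is invented:

    import java.io.IOException;

    import org.apache.lucene.index.AtomicReaderContext;
    import org.apache.lucene.index.FieldInvertState;
    import org.apache.lucene.search.CollectionStatistics;
    import org.apache.lucene.search.TermStatistics;
    import org.apache.lucene.search.similarities.Similarity;
    import org.apache.lucene.util.BytesRef;

    // Skeleton only: constant scoring, to show which hooks remain after the merge.
    public class SkeletonSimilarity extends Similarity {

      @Override
      public long computeNorm(FieldInvertState state) {
        return 1; // no length normalization in this sketch
      }

      @Override
      public SimWeight computeWeight(float queryBoost, CollectionStatistics collectionStats,
                                     TermStatistics... termStats) {
        return new SimWeight() {
          @Override
          public float getValueForNormalization() { return 1f; }
          @Override
          public void normalize(float queryNorm, float topLevelBoost) { /* ignored in the sketch */ }
        };
      }

      @Override
      public SimScorer simScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
        return new SimScorer() {
          @Override
          public float score(int doc, float freq) { return freq; }   // one scorer for exact and sloppy freqs
          @Override
          public float computeSlopFactor(int distance) { return 1f / (distance + 1); }
          @Override
          public float computePayloadFactor(int doc, int start, int end, BytesRef payload) { return 1f; }
        };
      }
    }
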
diff --git a/lucene/core/src/java/org/apache/lucene/search/similarities/SimilarityBase.java b/lucene/core/src/java/org/apache/lucene/search/similarities/SimilarityBase.java
index 4f4f678..c1ccff4 100644
--- a/lucene/core/src/java/org/apache/lucene/search/similarities/SimilarityBase.java
+++ b/lucene/core/src/java/org/apache/lucene/search/similarities/SimilarityBase.java
@@ -190,38 +190,20 @@
   }
   
   @Override
-  public ExactSimScorer exactSimScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
+  public SimScorer simScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
     if (stats instanceof MultiSimilarity.MultiStats) {
       // a multi term query (e.g. phrase). return the summation, 
       // scoring almost as if it were boolean query
       SimWeight subStats[] = ((MultiSimilarity.MultiStats) stats).subStats;
-      ExactSimScorer subScorers[] = new ExactSimScorer[subStats.length];
+      SimScorer subScorers[] = new SimScorer[subStats.length];
       for (int i = 0; i < subScorers.length; i++) {
         BasicStats basicstats = (BasicStats) subStats[i];
-        subScorers[i] = new BasicExactDocScorer(basicstats, context.reader().getNormValues(basicstats.field));
+        subScorers[i] = new BasicSimScorer(basicstats, context.reader().getNormValues(basicstats.field));
       }
-      return new MultiSimilarity.MultiExactDocScorer(subScorers);
+      return new MultiSimilarity.MultiSimScorer(subScorers);
     } else {
       BasicStats basicstats = (BasicStats) stats;
-      return new BasicExactDocScorer(basicstats, context.reader().getNormValues(basicstats.field));
-    }
-  }
-  
-  @Override
-  public SloppySimScorer sloppySimScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
-    if (stats instanceof MultiSimilarity.MultiStats) {
-      // a multi term query (e.g. phrase). return the summation, 
-      // scoring almost as if it were boolean query
-      SimWeight subStats[] = ((MultiSimilarity.MultiStats) stats).subStats;
-      SloppySimScorer subScorers[] = new SloppySimScorer[subStats.length];
-      for (int i = 0; i < subScorers.length; i++) {
-        BasicStats basicstats = (BasicStats) subStats[i];
-        subScorers[i] = new BasicSloppyDocScorer(basicstats, context.reader().getNormValues(basicstats.field));
-      }
-      return new MultiSimilarity.MultiSloppyDocScorer(subScorers);
-    } else {
-      BasicStats basicstats = (BasicStats) stats;
-      return new BasicSloppyDocScorer(basicstats, context.reader().getNormValues(basicstats.field));
+      return new BasicSimScorer(basicstats, context.reader().getNormValues(basicstats.field));
     }
   }
   
@@ -277,46 +259,17 @@
   
   // --------------------------------- Classes ---------------------------------
   
-  /** Delegates the {@link #score(int, int)} and
-   * {@link #explain(int, Explanation)} methods to
-   * {@link SimilarityBase#score(BasicStats, float, float)} and
-   * {@link SimilarityBase#explain(BasicStats, int, Explanation, float)},
-   * respectively.
-   */
-  private class BasicExactDocScorer extends ExactSimScorer {
-    private final BasicStats stats;
-    private final NumericDocValues norms;
-    
-    BasicExactDocScorer(BasicStats stats, NumericDocValues norms) throws IOException {
-      this.stats = stats;
-      this.norms = norms;
-    }
-    
-    @Override
-    public float score(int doc, int freq) {
-      // We have to supply something in case norms are omitted
-      return SimilarityBase.this.score(stats, freq,
-          norms == null ? 1F : decodeNormValue((byte)norms.get(doc)));
-    }
-    
-    @Override
-    public Explanation explain(int doc, Explanation freq) {
-      return SimilarityBase.this.explain(stats, doc, freq,
-          norms == null ? 1F : decodeNormValue((byte)norms.get(doc)));
-    }
-  }
-  
   /** Delegates the {@link #score(int, float)} and
    * {@link #explain(int, Explanation)} methods to
    * {@link SimilarityBase#score(BasicStats, float, float)} and
    * {@link SimilarityBase#explain(BasicStats, int, Explanation, float)},
    * respectively.
    */
-  private class BasicSloppyDocScorer extends SloppySimScorer {
+  private class BasicSimScorer extends SimScorer {
     private final BasicStats stats;
     private final NumericDocValues norms;
     
-    BasicSloppyDocScorer(BasicStats stats, NumericDocValues norms) throws IOException {
+    BasicSimScorer(BasicStats stats, NumericDocValues norms) throws IOException {
       this.stats = stats;
       this.norms = norms;
     }
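
SimilarityBase subclasses are untouched by the merge: BasicSimScorer feeds both exact and sloppy frequencies into the same score(stats, freq, docLen). A toy subclass under that contract; the formula is deliberately simplistic, while BasicStats still carries the corpus statistics a real model would use:

    import org.apache.lucene.search.similarities.BasicStats;
    import org.apache.lucene.search.similarities.SimilarityBase;

    // Toy model: reward within-document frequency, damp by (decoded) document length.
    public class ToySimilarity extends SimilarityBase {
      @Override
      protected float score(BasicStats stats, float freq, float docLen) {
        return freq / (freq + docLen);
      }

      @Override
      public String toString() {
        return "ToySimilarity";
      }
    }
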
diff --git a/lucene/core/src/java/org/apache/lucene/search/similarities/TFIDFSimilarity.java b/lucene/core/src/java/org/apache/lucene/search/similarities/TFIDFSimilarity.java
index 1a61477..2ecae36 100644
--- a/lucene/core/src/java/org/apache/lucene/search/similarities/TFIDFSimilarity.java
+++ b/lucene/core/src/java/org/apache/lucene/search/similarities/TFIDFSimilarity.java
@@ -572,25 +572,6 @@
    * when <code>freq</code> is large, and smaller values when <code>freq</code>
    * is small.
    *
-   * <p>The default implementation calls {@link #tf(float)}.
-   *
-   * @param freq the frequency of a term within a document
-   * @return a score factor based on a term's within-document frequency
-   */
-  public float tf(int freq) {
-    return tf((float)freq);
-  }
-
-  /** Computes a score factor based on a term or phrase's frequency in a
-   * document.  This value is multiplied by the {@link #idf(long, long)}
-   * factor for each term in the query and these products are then summed to
-   * form the initial score for a document.
-   *
-   * <p>Terms and phrases repeated in a document indicate the topic of the
-   * document, so implementations of this method usually return larger values
-   * when <code>freq</code> is large, and smaller values when <code>freq</code>
-   * is small.
-   *
    * @param freq the frequency of a term within a document
    * @return a score factor based on a term's within-document frequency
    */
@@ -655,7 +636,7 @@
 
   /** Computes a score factor based on a term's document frequency (the number
    * of documents which contain the term).  This value is multiplied by the
-   * {@link #tf(int)} factor for each term in the query and these products are
+   * {@link #tf(float)} factor for each term in the query and these products are
    * then summed to form the initial score for a document.
    *
    * <p>Terms that occur in fewer documents are better indicators of topic, so
@@ -755,49 +736,17 @@
   }
 
   @Override
-  public final ExactSimScorer exactSimScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
+  public final SimScorer simScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
     IDFStats idfstats = (IDFStats) stats;
-    return new ExactTFIDFDocScorer(idfstats, context.reader().getNormValues(idfstats.field));
-  }
-
-  @Override
-  public final SloppySimScorer sloppySimScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
-    IDFStats idfstats = (IDFStats) stats;
-    return new SloppyTFIDFDocScorer(idfstats, context.reader().getNormValues(idfstats.field));
+    return new TFIDFSimScorer(idfstats, context.reader().getNormValues(idfstats.field));
   }
   
-  // TODO: we can specialize these for omitNorms up front, but we should test that it doesn't confuse stupid hotspot.
-
-  private final class ExactTFIDFDocScorer extends ExactSimScorer {
+  private final class TFIDFSimScorer extends SimScorer {
     private final IDFStats stats;
     private final float weightValue;
     private final NumericDocValues norms;
     
-    ExactTFIDFDocScorer(IDFStats stats, NumericDocValues norms) throws IOException {
-      this.stats = stats;
-      this.weightValue = stats.value;
-      this.norms = norms; 
-    }
-    
-    @Override
-    public float score(int doc, int freq) {
-      final float raw = tf(freq)*weightValue;  // compute tf(f)*weight
-
-      return norms == null ? raw : raw * decodeNormValue((byte)norms.get(doc)); // normalize for field
-    }
-
-    @Override
-    public Explanation explain(int doc, Explanation freq) {
-      return explainScore(doc, freq, stats, norms);
-    }
-  }
-  
-  private final class SloppyTFIDFDocScorer extends SloppySimScorer {
-    private final IDFStats stats;
-    private final float weightValue;
-    private final NumericDocValues norms;
-    
-    SloppyTFIDFDocScorer(IDFStats stats, NumericDocValues norms) throws IOException {
+    TFIDFSimScorer(IDFStats stats, NumericDocValues norms) throws IOException {
       this.stats = stats;
       this.weightValue = stats.value;
       this.norms = norms;
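
With tf(int) removed, tf(float) is the single override point for term-frequency saturation in the TF/IDF family. For instance, a sublinear tf (a sketch, not part of the patch):

    import org.apache.lucene.search.similarities.DefaultSimilarity;

    // Replace the default sqrt(freq) with 1 + ln(freq).
    public class SublinearTfSimilarity extends DefaultSimilarity {
      @Override
      public float tf(float freq) {
        return freq <= 0 ? 0f : 1f + (float) Math.log(freq);
      }
    }
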
diff --git a/lucene/core/src/java/org/apache/lucene/search/spans/SpanScorer.java b/lucene/core/src/java/org/apache/lucene/search/spans/SpanScorer.java
index a362763..74a098d 100644
--- a/lucene/core/src/java/org/apache/lucene/search/spans/SpanScorer.java
+++ b/lucene/core/src/java/org/apache/lucene/search/spans/SpanScorer.java
@@ -34,9 +34,9 @@
   protected int doc;
   protected float freq;
   protected int numMatches;
-  protected final Similarity.SloppySimScorer docScorer;
+  protected final Similarity.SimScorer docScorer;
   
-  protected SpanScorer(Spans spans, Weight weight, Similarity.SloppySimScorer docScorer)
+  protected SpanScorer(Spans spans, Weight weight, Similarity.SimScorer docScorer)
   throws IOException {
     super(weight);
     this.docScorer = docScorer;
diff --git a/lucene/core/src/java/org/apache/lucene/search/spans/SpanWeight.java b/lucene/core/src/java/org/apache/lucene/search/spans/SpanWeight.java
index 6057308..8e428f1 100644
--- a/lucene/core/src/java/org/apache/lucene/search/spans/SpanWeight.java
+++ b/lucene/core/src/java/org/apache/lucene/search/spans/SpanWeight.java
@@ -23,7 +23,7 @@
 import org.apache.lucene.index.TermContext;
 import org.apache.lucene.search.*;
 import org.apache.lucene.search.similarities.Similarity;
-import org.apache.lucene.search.similarities.Similarity.SloppySimScorer;
+import org.apache.lucene.search.similarities.Similarity.SimScorer;
 import org.apache.lucene.util.Bits;
 
 import java.io.IOException;
@@ -86,7 +86,7 @@
     if (stats == null) {
       return null;
     } else {
-      return new SpanScorer(query.getSpans(context, acceptDocs, termContexts), this, similarity.sloppySimScorer(stats, context));
+      return new SpanScorer(query.getSpans(context, acceptDocs, termContexts), this, similarity.simScorer(stats, context));
     }
   }
 
@@ -97,7 +97,7 @@
       int newDoc = scorer.advance(doc);
       if (newDoc == doc) {
         float freq = scorer.sloppyFreq();
-        SloppySimScorer docScorer = similarity.sloppySimScorer(stats, context);
+        SimScorer docScorer = similarity.simScorer(stats, context);
         ComplexExplanation result = new ComplexExplanation();
         result.setDescription("weight("+getQuery()+" in "+doc+") [" + similarity.getClass().getSimpleName() + "], result of:");
         Explanation scoreExplanation = docScorer.explain(doc, new Explanation(freq, "phraseFreq=" + freq));
diff --git a/lucene/core/src/java/org/apache/lucene/util/BytesRef.java b/lucene/core/src/java/org/apache/lucene/util/BytesRef.java
index a3eddab..97310d8 100644
--- a/lucene/core/src/java/org/apache/lucene/util/BytesRef.java
+++ b/lucene/core/src/java/org/apache/lucene/util/BytesRef.java
@@ -119,11 +119,17 @@
     }
   }
 
+  /**
+   * Returns a shallow clone of this instance (the underlying bytes are
+   * <b>not</b> copied and will be shared by both the returned object and this
+   * object).
+   * 
+   * @see #deepCopyOf
+   */
   @Override
   public BytesRef clone() {
     return new BytesRef(bytes, offset, length);
   }
-
   
   /** Calculates the hash code as required by TermsHash during indexing.
    * <p>It is defined as:
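
The new javadoc makes the shallowness of clone() explicit; BytesRef.deepCopyOf remains the way to get an independent copy. A small demonstration (the printed results in the comments are what one would expect, assuming a single-byte UTF-8 edit):

    import org.apache.lucene.util.BytesRef;

    public class BytesRefCloneDemo {
      public static void main(String[] args) {
        BytesRef original = new BytesRef("abc");
        BytesRef shallow = original.clone();            // shares original.bytes
        BytesRef deep = BytesRef.deepCopyOf(original);  // copies the bytes

        original.bytes[original.offset] = (byte) 'z';
        System.out.println(shallow.utf8ToString());     // zbc (shared storage)
        System.out.println(deep.utf8ToString());        // abc (independent copy)
      }
    }
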
diff --git a/lucene/core/src/java/org/apache/lucene/util/CharsRef.java b/lucene/core/src/java/org/apache/lucene/util/CharsRef.java
index a57e73e..4eb6646 100644
--- a/lucene/core/src/java/org/apache/lucene/util/CharsRef.java
+++ b/lucene/core/src/java/org/apache/lucene/util/CharsRef.java
@@ -71,6 +71,13 @@
     this.length = chars.length;
   }
 
+  /**
+   * Returns a shallow clone of this instance (the underlying characters are
+   * <b>not</b> copied and will be shared by both the returned object and this
+   * object).
+   * 
+   * @see #deepCopyOf
+   */  
   @Override
   public CharsRef clone() {
     return new CharsRef(chars, offset, length);
diff --git a/lucene/core/src/java/org/apache/lucene/util/IntsRef.java b/lucene/core/src/java/org/apache/lucene/util/IntsRef.java
index 5ea2aef..f69e105 100644
--- a/lucene/core/src/java/org/apache/lucene/util/IntsRef.java
+++ b/lucene/core/src/java/org/apache/lucene/util/IntsRef.java
@@ -56,6 +56,13 @@
     assert isValid();
   }
 
+  /**
+   * Returns a shallow clone of this instance (the underlying ints are
+   * <b>not</b> copied and will be shared by both the returned object and this
+   * object).
+   * 
+   * @see #deepCopyOf
+   */  
   @Override
   public IntsRef clone() {
     return new IntsRef(ints, offset, length);
diff --git a/lucene/core/src/java/org/apache/lucene/util/LongsRef.java b/lucene/core/src/java/org/apache/lucene/util/LongsRef.java
index 62f15b0..52ad1f1 100644
--- a/lucene/core/src/java/org/apache/lucene/util/LongsRef.java
+++ b/lucene/core/src/java/org/apache/lucene/util/LongsRef.java
@@ -55,6 +55,13 @@
     assert isValid();
   }
 
+  /**
+   * Returns a shallow clone of this instance (the underlying longs are
+   * <b>not</b> copied and will be shared by both the returned object and this
+   * object).
+   * 
+   * @see #deepCopyOf
+   */  
   @Override
   public LongsRef clone() {
     return new LongsRef(longs, offset, length);
diff --git a/lucene/core/src/java/org/apache/lucene/util/RollingBuffer.java b/lucene/core/src/java/org/apache/lucene/util/RollingBuffer.java
index d31bb4c..4cf03f5 100644
--- a/lucene/core/src/java/org/apache/lucene/util/RollingBuffer.java
+++ b/lucene/core/src/java/org/apache/lucene/util/RollingBuffer.java
@@ -17,9 +17,6 @@
  * limitations under the License.
  */
 
-// TODO: probably move this to core at some point (eg,
-// cutover kuromoji, synfilter, LookaheadTokenFilter)
-
 /** Acts like forever growing T[], but internally uses a
  *  circular buffer to reuse instances of T.
  * 
diff --git a/lucene/core/src/java/org/apache/lucene/util/Sorter.java b/lucene/core/src/java/org/apache/lucene/util/Sorter.java
index 12c53a9..6ae43c8 100644
--- a/lucene/core/src/java/org/apache/lucene/util/Sorter.java
+++ b/lucene/core/src/java/org/apache/lucene/util/Sorter.java
@@ -72,7 +72,7 @@
       first_cut = upper(from, mid, second_cut);
       len11 = first_cut - from;
     }
-    rotate( first_cut, mid, second_cut);
+    rotate(first_cut, mid, second_cut);
     final int new_mid = first_cut + len22;
     mergeInPlace(from, first_cut, new_mid);
     mergeInPlace(new_mid, second_cut, to);
@@ -142,7 +142,15 @@
     }
   }
 
-  void rotate(int lo, int mid, int hi) {
+  final void rotate(int lo, int mid, int hi) {
+    assert lo <= mid && mid <= hi;
+    if (lo == mid || mid == hi) {
+      return;
+    }
+    doRotate(lo, mid, hi);
+  }
+
+  void doRotate(int lo, int mid, int hi) {
     if (mid - lo == hi - mid) {
       // happens rarely but saves n/2 swaps
       while (mid < hi) {
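
rotate() is now a final guard that asserts lo <= mid <= hi and skips degenerate calls before handing off to the overridable doRotate(). For readers unfamiliar with the operation itself, an in-place rotation can also be written with the classic three-reversal trick; this standalone sketch on an int[] is not Lucene's swap-based implementation:

    // Rotate a[lo..hi) so that the block starting at mid comes first.
    final class RotateSketch {
      static void rotate(int[] a, int lo, int mid, int hi) {
        reverse(a, lo, mid);
        reverse(a, mid, hi);
        reverse(a, lo, hi);
      }

      private static void reverse(int[] a, int from, int to) {  // reverses [from, to)
        for (int i = from, j = to - 1; i < j; i++, j--) {
          final int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
      }
    }
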
diff --git a/lucene/core/src/java/org/apache/lucene/util/TimSorter.java b/lucene/core/src/java/org/apache/lucene/util/TimSorter.java
index 57e2f8d..d8b40be 100644
--- a/lucene/core/src/java/org/apache/lucene/util/TimSorter.java
+++ b/lucene/core/src/java/org/apache/lucene/util/TimSorter.java
@@ -205,9 +205,9 @@
   }
 
   @Override
-  void rotate(int lo, int mid, int hi) {
-    int len1 = mid - lo;
-    int len2 = hi - mid;
+  void doRotate(int lo, int mid, int hi) {
+    final int len1 = mid - lo;
+    final int len2 = hi - mid;
     if (len1 == len2) {
       while (mid < hi) {
         swap(lo++, mid++);
diff --git a/lucene/core/src/java/org/apache/lucene/util/fst/Builder.java b/lucene/core/src/java/org/apache/lucene/util/fst/Builder.java
index cc5c870..7a2ee75 100644
--- a/lucene/core/src/java/org/apache/lucene/util/fst/Builder.java
+++ b/lucene/core/src/java/org/apache/lucene/util/fst/Builder.java
@@ -117,9 +117,9 @@
    * 
    * @param doShareSuffix 
    *    If <code>true</code>, the shared suffixes will be compacted into unique paths.
-   *    This requires an additional hash map for lookups in memory. Setting this parameter to
-   *    <code>false</code> creates a single path for all input sequences. This will result in a larger
-   *    graph, but may require less memory and will speed up construction.  
+   *    This requires an additional RAM-intensive hash map for suffix lookups. Setting this parameter to
+   *    <code>false</code> creates a separate suffix path for each input sequence. This will result in a larger
+   *    FST, but requires substantially less memory and CPU during building.  
    *
    * @param doShareNonSingletonNodes
    *    Only used if doShareSuffix is true.  Set this to
diff --git a/lucene/core/src/java/org/apache/lucene/util/fst/NodeHash.java b/lucene/core/src/java/org/apache/lucene/util/fst/NodeHash.java
index 7e09a42..7b6d787 100644
--- a/lucene/core/src/java/org/apache/lucene/util/fst/NodeHash.java
+++ b/lucene/core/src/java/org/apache/lucene/util/fst/NodeHash.java
@@ -19,21 +19,21 @@
 
 import java.io.IOException;
 
-import org.apache.lucene.util.packed.GrowableWriter;
 import org.apache.lucene.util.packed.PackedInts;
+import org.apache.lucene.util.packed.PagedGrowableWriter;
 
 // Used to dedup states (lookup already-frozen states)
 final class NodeHash<T> {
 
-  private GrowableWriter table;
-  private int count;
-  private int mask;
+  private PagedGrowableWriter table;
+  private long count;
+  private long mask;
   private final FST<T> fst;
   private final FST.Arc<T> scratchArc = new FST.Arc<T>();
   private final FST.BytesReader in;
 
   public NodeHash(FST<T> fst, FST.BytesReader in) {
-    table = new GrowableWriter(8, 16, PackedInts.COMPACT);
+    table = new PagedGrowableWriter(16, 1<<30, 8, PackedInts.COMPACT);
     mask = 15;
     this.fst = fst;
     this.in = in;
@@ -69,10 +69,10 @@
 
   // hash code for an unfrozen node.  This must be identical
   // to the un-frozen case (below)!!
-  private int hash(Builder.UnCompiledNode<T> node) {
+  private long hash(Builder.UnCompiledNode<T> node) {
     final int PRIME = 31;
     //System.out.println("hash unfrozen");
-    int h = 0;
+    long h = 0;
     // TODO: maybe if number of arcs is high we can safely subsample?
     for(int arcIdx=0;arcIdx<node.numArcs;arcIdx++) {
       final Builder.Arc<T> arc = node.arcs[arcIdx];
@@ -87,14 +87,14 @@
       }
     }
     //System.out.println("  ret " + (h&Integer.MAX_VALUE));
-    return h & Integer.MAX_VALUE;
+    return h & Long.MAX_VALUE;
   }
 
   // hash code for a frozen node
-  private int hash(long node) throws IOException {
+  private long hash(long node) throws IOException {
     final int PRIME = 31;
     //System.out.println("hash frozen node=" + node);
-    int h = 0;
+    long h = 0;
     fst.readFirstRealTargetArc(node, scratchArc, in);
     while(true) {
       //System.out.println("  label=" + scratchArc.label + " target=" + scratchArc.target + " h=" + h + " output=" + fst.outputs.outputToString(scratchArc.output) + " next?=" + scratchArc.flag(4) + " final?=" + scratchArc.isFinal() + " pos=" + in.getPosition());
@@ -111,13 +111,13 @@
       fst.readNextRealArc(scratchArc, in);
     }
     //System.out.println("  ret " + (h&Integer.MAX_VALUE));
-    return h & Integer.MAX_VALUE;
+    return h & Long.MAX_VALUE;
   }
 
   public long add(Builder.UnCompiledNode<T> nodeIn) throws IOException {
-    // System.out.println("hash: add count=" + count + " vs " + table.size());
-    final int h = hash(nodeIn);
-    int pos = h & mask;
+    //System.out.println("hash: add count=" + count + " vs " + table.size() + " mask=" + mask);
+    final long h = hash(nodeIn);
+    long pos = h & mask;
     int c = 0;
     while(true) {
       final long v = table.get(pos);
@@ -128,7 +128,8 @@
         assert hash(node) == h : "frozenHash=" + hash(node) + " vs h=" + h;
         count++;
         table.set(pos, node);
-        if (table.size() < 2*count) {
+        // Rehash at 2/3 occupancy:
+        if (count > 2*table.size()/3) {
           rehash();
         }
         return node;
@@ -144,7 +145,7 @@
 
   // called only by rehash
   private void addNew(long address) throws IOException {
-    int pos = hash(address) & mask;
+    long pos = hash(address) & mask;
     int c = 0;
     while(true) {
       if (table.get(pos) == 0) {
@@ -158,23 +159,15 @@
   }
 
   private void rehash() throws IOException {
-    final GrowableWriter oldTable = table;
+    final PagedGrowableWriter oldTable = table;
 
-    if (oldTable.size() >= Integer.MAX_VALUE/2) {
-      throw new IllegalStateException("FST too large (> 2.1 GB)");
-    }
-
-    table = new GrowableWriter(oldTable.getBitsPerValue(), 2*oldTable.size(), PackedInts.COMPACT);
+    table = new PagedGrowableWriter(2*oldTable.size(), 1<<30, PackedInts.bitsRequired(count), PackedInts.COMPACT);
     mask = table.size()-1;
-    for(int idx=0;idx<oldTable.size();idx++) {
+    for(long idx=0;idx<oldTable.size();idx++) {
       final long address = oldTable.get(idx);
       if (address != 0) {
         addNew(address);
       }
     }
   }
-
-  public int count() {
-    return count;
-  }
 }
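
The node-dedup table is now a PagedGrowableWriter addressed with long positions, and it rehashes at two-thirds occupancy rather than one-half. The probing itself is unchanged; the same pattern, reduced to a plain long[] with 0 marking an empty slot (a simplified analogue for illustration, not the FST code):

    // Sketch of the NodeHash-style insert loop: power-of-two table, empty slots
    // encoded as 0, increasing probe step, rehash at 2/3 full.
    final class LongHashSetSketch {
      private long[] table = new long[16];
      private int count;

      void add(long value) {                             // value must be non-zero (0 marks an empty slot)
        int pos = hash(value) & (table.length - 1);
        int c = 0;
        while (table[pos] != 0) {
          if (table[pos] == value) {
            return;                                      // already present
          }
          pos = (pos + (++c)) & (table.length - 1);      // same probe step as NodeHash
        }
        table[pos] = value;
        if (++count > 2 * table.length / 3) {            // rehash at 2/3 occupancy
          rehash();
        }
      }

      private void rehash() {
        final long[] old = table;
        table = new long[2 * old.length];
        count = 0;
        for (long v : old) {
          if (v != 0) {
            add(v);
          }
        }
      }

      private static int hash(long v) {
        final long h = v * 0x9E3779B97F4A7C15L;          // any reasonable mixing constant
        return (int) (h ^ (h >>> 32)) & Integer.MAX_VALUE;
      }
    }
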
diff --git a/lucene/core/src/java/org/apache/lucene/util/fst/PositiveIntOutputs.java b/lucene/core/src/java/org/apache/lucene/util/fst/PositiveIntOutputs.java
index 2460f25..d13648a 100644
--- a/lucene/core/src/java/org/apache/lucene/util/fst/PositiveIntOutputs.java
+++ b/lucene/core/src/java/org/apache/lucene/util/fst/PositiveIntOutputs.java
@@ -33,26 +33,13 @@
   
   private final static Long NO_OUTPUT = new Long(0);
 
-  private final boolean doShare;
+  private final static PositiveIntOutputs singleton = new PositiveIntOutputs();
 
-  private final static PositiveIntOutputs singletonShare = new PositiveIntOutputs(true);
-  private final static PositiveIntOutputs singletonNoShare = new PositiveIntOutputs(false);
-
-  private PositiveIntOutputs(boolean doShare) {
-    this.doShare = doShare;
+  private PositiveIntOutputs() {
   }
 
-  /** Returns the instance of PositiveIntOutputs. */
   public static PositiveIntOutputs getSingleton() {
-    return getSingleton(true);
-  }
-
-  /** Expert: pass doShare=false to disable output sharing.
-   *  In some cases this may result in a smaller FST,
-   *  however it will also break methods like {@link
-   *  Util#getByOutput} and {@link Util#shortestPaths}. */
-  public static PositiveIntOutputs getSingleton(boolean doShare) {
-    return doShare ? singletonShare : singletonNoShare;
+    return singleton;
   }
 
   @Override
@@ -61,14 +48,10 @@
     assert valid(output2);
     if (output1 == NO_OUTPUT || output2 == NO_OUTPUT) {
       return NO_OUTPUT;
-    } else if (doShare) {
+    } else {
       assert output1 > 0;
       assert output2 > 0;
       return Math.min(output1, output2);
-    } else if (output1.equals(output2)) {
-      return output1;
-    } else {
-      return NO_OUTPUT;
     }
   }
 
@@ -134,6 +117,6 @@
 
   @Override
   public String toString() {
-    return "PositiveIntOutputs(doShare=" + doShare + ")";
+    return "PositiveIntOutputs";
   }
 }
diff --git a/lucene/core/src/java/org/apache/lucene/util/fst/Util.java b/lucene/core/src/java/org/apache/lucene/util/fst/Util.java
index 26aa69a..ed7452e 100644
--- a/lucene/core/src/java/org/apache/lucene/util/fst/Util.java
+++ b/lucene/core/src/java/org/apache/lucene/util/fst/Util.java
@@ -93,9 +93,7 @@
    *
    *  <p>NOTE: this only works with {@code FST<Long>}, only
    *  works when the outputs are ascending in order with
-   *  the inputs and only works when you shared
-   *  the outputs (pass doShare=true to {@link
-   *  PositiveIntOutputs#getSingleton}).
+   *  the inputs.
    *  For example, simple ordinals (0, 1,
 *   2, ...), or file offsets (when appending to a file)
    *  fit this. */
@@ -517,11 +515,7 @@
   }
 
   /** Starting from node, find the top N min cost 
-   *  completions to a final node.
-   *
-   *  <p>NOTE: you must share the outputs when you build the
-   *  FST (pass doShare=true to {@link
-   *  PositiveIntOutputs#getSingleton}). */
+   *  completions to a final node. */
   public static <T> MinResult<T>[] shortestPaths(FST<T> fst, FST.Arc<T> fromNode, T startOutput, Comparator<T> comparator, int topN,
                                                  boolean allowEmptyString) throws IOException {
 
diff --git a/lucene/core/src/java/org/apache/lucene/util/fst/package.html b/lucene/core/src/java/org/apache/lucene/util/fst/package.html
index 93c16e1..dfd42a3 100644
--- a/lucene/core/src/java/org/apache/lucene/util/fst/package.html
+++ b/lucene/core/src/java/org/apache/lucene/util/fst/package.html
@@ -43,7 +43,7 @@
     String inputValues[] = {"cat", "dog", "dogs"};
     long outputValues[] = {5, 7, 12};
     
-    PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+    PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
     Builder&lt;Long&gt; builder = new Builder&lt;Long&gt;(INPUT_TYPE.BYTE1, outputs);
     BytesRef scratchBytes = new BytesRef();
     IntsRef scratchInts = new IntsRef();
@@ -60,8 +60,7 @@
 </pre>
 Retrieval by value:
 <pre class="prettyprint">
-    // Only works because outputs are also in sorted order, and
-    // we passed 'true' for sharing to PositiveIntOutputs.getSingleton
+    // Only works because outputs are also in sorted order
     IntsRef key = Util.getByOutput(fst, 12);
     System.out.println(Util.toBytesRef(key, scratchBytes).utf8ToString()); // dogs
 </pre>
@@ -77,7 +76,6 @@
 </pre>
 N-shortest paths by weight:
 <pre class="prettyprint">
-    // Only works because we passed 'true' for sharing to PositiveIntOutputs.getSingleton
     Comparator&lt;Long&gt; comparator = new Comparator&lt;Long&gt;() {
       public int compare(Long left, Long right) {
         return left.compareTo(right);
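
Since output sharing is no longer optional, the by-output lookup and shortest-path examples above lose their caveats. For contrast, lookup by key never depended on output sharing; in the same example it is roughly:

    Long value = Util.get(fst, new BytesRef("dog"));
    System.out.println(value); // 7
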
diff --git a/lucene/core/src/java/org/apache/lucene/util/packed/AbstractAppendingLongBuffer.java b/lucene/core/src/java/org/apache/lucene/util/packed/AbstractAppendingLongBuffer.java
index 087154d..78381e9 100644
--- a/lucene/core/src/java/org/apache/lucene/util/packed/AbstractAppendingLongBuffer.java
+++ b/lucene/core/src/java/org/apache/lucene/util/packed/AbstractAppendingLongBuffer.java
@@ -17,6 +17,8 @@
  * limitations under the License.
  */
 
+import static org.apache.lucene.util.packed.PackedInts.checkBlockSize;
+
 import java.util.Arrays;
 
 import org.apache.lucene.util.ArrayUtil;
@@ -25,33 +27,37 @@
 /** Common functionality shared by {@link AppendingLongBuffer} and {@link MonotonicAppendingLongBuffer}. */
 abstract class AbstractAppendingLongBuffer {
 
-  static final int BLOCK_BITS = 10;
-  static final int MAX_PENDING_COUNT = 1 << BLOCK_BITS;
-  static final int BLOCK_MASK = MAX_PENDING_COUNT - 1;
+  static final int MIN_PAGE_SIZE = 64;
+  // More than 1M doesn't really make sense with these appending buffers
+  // since their goal is to try to have small numbers of bits per value
+  static final int MAX_PAGE_SIZE = 1 << 20;
 
+  final int pageShift, pageMask;
   long[] minValues;
   PackedInts.Reader[] deltas;
   private long deltasBytes;
   int valuesOff;
-  long[] pending;
+  final long[] pending;
   int pendingOff;
 
-  AbstractAppendingLongBuffer(int initialBlockCount) {
-    minValues = new long[16];
-    deltas = new PackedInts.Reader[16];
-    pending = new long[MAX_PENDING_COUNT];
+  AbstractAppendingLongBuffer(int initialBlockCount, int pageSize) {
+    minValues = new long[initialBlockCount];
+    deltas = new PackedInts.Reader[initialBlockCount];
+    pending = new long[pageSize];
+    pageShift = checkBlockSize(pageSize, MIN_PAGE_SIZE, MAX_PAGE_SIZE);
+    pageMask = pageSize - 1;
     valuesOff = 0;
     pendingOff = 0;
   }
 
   /** Get the number of values that have been added to the buffer. */
   public final long size() {
-    return valuesOff * (long) MAX_PENDING_COUNT + pendingOff;
+    return valuesOff * (long) pending.length + pendingOff;
   }
 
   /** Append a value to this buffer. */
   public final void add(long l) {
-    if (pendingOff == MAX_PENDING_COUNT) {
+    if (pendingOff == pending.length) {
       // check size
       if (deltas.length == valuesOff) {
         final int newLength = ArrayUtil.oversize(valuesOff + 1, 8);
@@ -80,8 +86,8 @@
     if (index < 0 || index >= size()) {
       throw new IndexOutOfBoundsException("" + index);
     }
-    int block = (int) (index >> BLOCK_BITS);
-    int element = (int) (index & BLOCK_MASK);
+    final int block = (int) (index >> pageShift);
+    final int element = (int) (index & pageMask);
     return get(block, element);
   }
 
@@ -99,7 +105,7 @@
       if (valuesOff == 0) {
         currentValues = pending;
       } else {
-        currentValues = new long[MAX_PENDING_COUNT];
+        currentValues = new long[pending.length];
         fillValues();
       }
     }
@@ -115,7 +121,7 @@
     public final long next() {
       assert hasNext();
       long result = currentValues[pOff++];
-      if (pOff == MAX_PENDING_COUNT) {
+      if (pOff == pending.length) {
         vOff += 1;
         pOff = 0;
         if (vOff <= valuesOff) {
@@ -139,6 +145,7 @@
   public long ramBytesUsed() {
     // TODO: this is called per-doc-per-norms/dv-field, can we optimize this?
     long bytesUsed = RamUsageEstimator.alignObjectSize(baseRamBytesUsed())
+        + 2 * RamUsageEstimator.NUM_BYTES_INT // pageShift, pageMask
         + RamUsageEstimator.NUM_BYTES_LONG // valuesBytes
         + RamUsageEstimator.sizeOf(pending)
         + RamUsageEstimator.sizeOf(minValues)
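
The fixed 1024-value block is replaced by a configurable power-of-two page, so element addressing becomes a shift and a mask (pageShift comes back from checkBlockSize, pageMask is pageSize - 1). The arithmetic, stated on its own:

    // For pageSize = 1024: index 2500 -> page 2, slot 452.
    final class PageMathSketch {
      static long[] locate(long index, int pageSize) {       // pageSize must be a power of two
        final int pageShift = Integer.numberOfTrailingZeros(pageSize);
        final int pageMask = pageSize - 1;
        final long page = index >> pageShift;                // which packed page
        final long slot = index & pageMask;                  // position inside that page
        return new long[] { page, slot };
      }
    }
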
diff --git a/lucene/core/src/java/org/apache/lucene/util/packed/AbstractBlockPackedWriter.java b/lucene/core/src/java/org/apache/lucene/util/packed/AbstractBlockPackedWriter.java
index 6b16c86..67c8d4b 100644
--- a/lucene/core/src/java/org/apache/lucene/util/packed/AbstractBlockPackedWriter.java
+++ b/lucene/core/src/java/org/apache/lucene/util/packed/AbstractBlockPackedWriter.java
@@ -17,6 +17,8 @@
  * limitations under the License.
  */
 
+import static org.apache.lucene.util.packed.PackedInts.checkBlockSize;
+
 import java.io.IOException;
 import java.util.Arrays;
 
@@ -24,22 +26,11 @@
 
 abstract class AbstractBlockPackedWriter {
 
+  static final int MIN_BLOCK_SIZE = 64;
   static final int MAX_BLOCK_SIZE = 1 << (30 - 3);
   static final int MIN_VALUE_EQUALS_0 = 1 << 0;
   static final int BPV_SHIFT = 1;
 
-  static void checkBlockSize(int blockSize) {
-    if (blockSize <= 0 || blockSize > MAX_BLOCK_SIZE) {
-      throw new IllegalArgumentException("blockSize must be > 0 and < " + MAX_BLOCK_SIZE + ", got " + blockSize);
-    }
-    if (blockSize < 64) {
-      throw new IllegalArgumentException("blockSize must be >= 64, got " + blockSize);
-    }
-    if ((blockSize & (blockSize - 1)) != 0) {
-      throw new IllegalArgumentException("blockSize must be a power of two, got " + blockSize);
-    }
-  }
-
   static long zigZagEncode(long n) {
     return (n >> 63) ^ (n << 1);
   }
@@ -66,7 +57,7 @@
    * @param blockSize the number of values of a single block, must be a multiple of <tt>64</tt>
    */
   public AbstractBlockPackedWriter(DataOutput out, int blockSize) {
-    checkBlockSize(blockSize);
+    checkBlockSize(blockSize, MIN_BLOCK_SIZE, MAX_BLOCK_SIZE);
     reset(out);
     values = new long[blockSize];
   }
diff --git a/lucene/core/src/java/org/apache/lucene/util/packed/AppendingLongBuffer.java b/lucene/core/src/java/org/apache/lucene/util/packed/AppendingLongBuffer.java
index 978fc32..2c29729 100644
--- a/lucene/core/src/java/org/apache/lucene/util/packed/AppendingLongBuffer.java
+++ b/lucene/core/src/java/org/apache/lucene/util/packed/AppendingLongBuffer.java
@@ -27,9 +27,16 @@
  */
 public final class AppendingLongBuffer extends AbstractAppendingLongBuffer {
 
-  /** Sole constructor. */
+  /** @param initialPageCount the initial number of pages
+   *  @param pageSize         the size of a single page */
+  public AppendingLongBuffer(int initialPageCount, int pageSize) {
+    super(initialPageCount, pageSize);
+  }
+
+  /** Create an {@link AppendingLongBuffer} with initialPageCount=16 and
+   *  pageSize=1024. */
   public AppendingLongBuffer() {
-    super(16);
+    this(16, 1024);
   }
 
   @Override
@@ -43,8 +50,9 @@
     }
   }
 
+  @Override
   void packPendingValues() {
-    assert pendingOff == MAX_PENDING_COUNT;
+    assert pendingOff == pending.length;
 
     // compute max delta
     long minValue = pending[0];
@@ -71,6 +79,7 @@
   }
 
   /** Return an iterator over the values of this buffer. */
+  @Override
   public Iterator iterator() {
     return new Iterator();
   }
@@ -78,20 +87,21 @@
   /** A long iterator. */
   public final class Iterator extends AbstractAppendingLongBuffer.Iterator {
 
-    private Iterator() {
+    Iterator() {
       super();
     }
 
+    @Override
     void fillValues() {
       if (vOff == valuesOff) {
         currentValues = pending;
       } else if (deltas[vOff] == null) {
         Arrays.fill(currentValues, minValues[vOff]);
       } else {
-        for (int k = 0; k < MAX_PENDING_COUNT; ) {
-          k += deltas[vOff].get(k, currentValues, k, MAX_PENDING_COUNT - k);
+        for (int k = 0; k < pending.length; ) {
+          k += deltas[vOff].get(k, currentValues, k, pending.length - k);
         }
-        for (int k = 0; k < MAX_PENDING_COUNT; ++k) {
+        for (int k = 0; k < pending.length; ++k) {
           currentValues[k] += minValues[vOff];
         }
       }
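
AppendingLongBuffer now exposes its page geometry; the no-arg constructor keeps the previous 16-page, 1024-value defaults. A usage sketch, assuming add/get/size/iterator remain the public surface as they appear to be in this patch:

    import org.apache.lucene.util.packed.AppendingLongBuffer;

    public class AppendingLongBufferDemo {
      public static void main(String[] args) {
        // Smaller pages pack sooner; 256 is just an example, it must be a power of two >= 64.
        AppendingLongBuffer buffer = new AppendingLongBuffer(16, 256);
        for (long i = 0; i < 1000; i++) {
          buffer.add(3 * i);
        }
        System.out.println(buffer.size());   // 1000
        System.out.println(buffer.get(10));  // 30

        long sum = 0;
        AppendingLongBuffer.Iterator it = buffer.iterator();
        while (it.hasNext()) {
          sum += it.next();                  // values come back in insertion order
        }
        System.out.println(sum);
      }
    }
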
diff --git a/lucene/core/src/java/org/apache/lucene/util/packed/BlockPackedReader.java b/lucene/core/src/java/org/apache/lucene/util/packed/BlockPackedReader.java
index a33da95..ff35ec1 100644
--- a/lucene/core/src/java/org/apache/lucene/util/packed/BlockPackedReader.java
+++ b/lucene/core/src/java/org/apache/lucene/util/packed/BlockPackedReader.java
@@ -17,11 +17,14 @@
  * limitations under the License.
  */
 
+import static org.apache.lucene.util.packed.AbstractBlockPackedWriter.BPV_SHIFT;
+import static org.apache.lucene.util.packed.AbstractBlockPackedWriter.MAX_BLOCK_SIZE;
+import static org.apache.lucene.util.packed.AbstractBlockPackedWriter.MIN_BLOCK_SIZE;
+import static org.apache.lucene.util.packed.AbstractBlockPackedWriter.MIN_VALUE_EQUALS_0;
 import static org.apache.lucene.util.packed.BlockPackedReaderIterator.readVLong;
 import static org.apache.lucene.util.packed.BlockPackedReaderIterator.zigZagDecode;
-import static org.apache.lucene.util.packed.BlockPackedWriter.BPV_SHIFT;
-import static org.apache.lucene.util.packed.BlockPackedWriter.MIN_VALUE_EQUALS_0;
-import static org.apache.lucene.util.packed.BlockPackedWriter.checkBlockSize;
+import static org.apache.lucene.util.packed.PackedInts.checkBlockSize;
+import static org.apache.lucene.util.packed.PackedInts.numBlocks;
 
 import java.io.IOException;
 
@@ -40,14 +43,10 @@
 
   /** Sole constructor. */
   public BlockPackedReader(IndexInput in, int packedIntsVersion, int blockSize, long valueCount, boolean direct) throws IOException {
-    checkBlockSize(blockSize);
     this.valueCount = valueCount;
-    blockShift = Integer.numberOfTrailingZeros(blockSize);
+    blockShift = checkBlockSize(blockSize, MIN_BLOCK_SIZE, MAX_BLOCK_SIZE);
     blockMask = blockSize - 1;
-    final int numBlocks = (int) (valueCount / blockSize) + (valueCount % blockSize == 0 ? 0 : 1);
-    if ((long) numBlocks * blockSize < valueCount) {
-      throw new IllegalArgumentException("valueCount is too large for this block size");
-    }
+    final int numBlocks = numBlocks(valueCount, blockSize);
     long[] minValues = null;
     subReaders = new PackedInts.Reader[numBlocks];
     for (int i = 0; i < numBlocks; ++i) {
diff --git a/lucene/core/src/java/org/apache/lucene/util/packed/BlockPackedReaderIterator.java b/lucene/core/src/java/org/apache/lucene/util/packed/BlockPackedReaderIterator.java
index 288518d..7d8bbd3 100644
--- a/lucene/core/src/java/org/apache/lucene/util/packed/BlockPackedReaderIterator.java
+++ b/lucene/core/src/java/org/apache/lucene/util/packed/BlockPackedReaderIterator.java
@@ -17,9 +17,13 @@
  * limitations under the License.
  */
 
-import static org.apache.lucene.util.packed.BlockPackedWriter.BPV_SHIFT;
-import static org.apache.lucene.util.packed.BlockPackedWriter.MIN_VALUE_EQUALS_0;
-import static org.apache.lucene.util.packed.BlockPackedWriter.checkBlockSize;
+import static org.apache.lucene.util.packed.AbstractBlockPackedWriter.BPV_SHIFT;
+import static org.apache.lucene.util.packed.AbstractBlockPackedWriter.MAX_BLOCK_SIZE;
+import static org.apache.lucene.util.packed.AbstractBlockPackedWriter.MIN_BLOCK_SIZE;
+import static org.apache.lucene.util.packed.AbstractBlockPackedWriter.MIN_VALUE_EQUALS_0;
+import static org.apache.lucene.util.packed.BlockPackedReaderIterator.readVLong;
+import static org.apache.lucene.util.packed.BlockPackedReaderIterator.zigZagDecode;
+import static org.apache.lucene.util.packed.PackedInts.checkBlockSize;
 
 import java.io.EOFException;
 import java.io.IOException;
@@ -87,7 +91,7 @@
    *                  been used to write the stream
    */
   public BlockPackedReaderIterator(DataInput in, int packedIntsVersion, int blockSize, long valueCount) {
-    checkBlockSize(blockSize);
+    checkBlockSize(blockSize, MIN_BLOCK_SIZE, MAX_BLOCK_SIZE);
     this.packedIntsVersion = packedIntsVersion;
     this.blockSize = blockSize;
     this.values = new long[blockSize];
diff --git a/lucene/core/src/java/org/apache/lucene/util/packed/MonotonicAppendingLongBuffer.java b/lucene/core/src/java/org/apache/lucene/util/packed/MonotonicAppendingLongBuffer.java
index 4b00994..abac58d 100644
--- a/lucene/core/src/java/org/apache/lucene/util/packed/MonotonicAppendingLongBuffer.java
+++ b/lucene/core/src/java/org/apache/lucene/util/packed/MonotonicAppendingLongBuffer.java
@@ -37,14 +37,22 @@
     return (n >> 63) ^ (n << 1);
   }
 
-  private float[] averages;
+  float[] averages;
 
-  /** Sole constructor. */
-  public MonotonicAppendingLongBuffer() {
-    super(16);
-    averages = new float[16];
+  /** @param initialPageCount the initial number of pages
+   *  @param pageSize         the size of a single page */
+  public MonotonicAppendingLongBuffer(int initialPageCount, int pageSize) {
+    super(initialPageCount, pageSize);
+    averages = new float[pending.length];
   }
-  
+
+  /** Create a {@link MonotonicAppendingLongBuffer} with initialPageCount=16
+   *  and pageSize=1024. */
+  public MonotonicAppendingLongBuffer() {
+    this(16, 1024);
+  }
+
+  @Override
   long get(int block, int element) {
     if (block == valuesOff) {
       return pending[element];
@@ -66,16 +74,16 @@
 
   @Override
   void packPendingValues() {
-    assert pendingOff == MAX_PENDING_COUNT;
+    assert pendingOff == pending.length;
 
     minValues[valuesOff] = pending[0];
-    averages[valuesOff] = (float) (pending[BLOCK_MASK] - pending[0]) / BLOCK_MASK;
+    averages[valuesOff] = (float) (pending[pending.length - 1] - pending[0]) / (pending.length - 1);
 
-    for (int i = 0; i < MAX_PENDING_COUNT; ++i) {
+    for (int i = 0; i < pending.length; ++i) {
       pending[i] = zigZagEncode(pending[i] - minValues[valuesOff] - (long) (averages[valuesOff] * (long) i));
     }
     long maxDelta = 0;
-    for (int i = 0; i < MAX_PENDING_COUNT; ++i) {
+    for (int i = 0; i < pending.length; ++i) {
       if (pending[i] < 0) {
         maxDelta = -1;
         break;
@@ -94,6 +102,7 @@
   }
 
   /** Return an iterator over the values of this buffer. */
+  @Override
   public Iterator iterator() {
     return new Iterator();
   }
@@ -105,18 +114,19 @@
       super();
     }
 
+    @Override
     void fillValues() {
       if (vOff == valuesOff) {
         currentValues = pending;
       } else if (deltas[vOff] == null) {
-        for (int k = 0; k < MAX_PENDING_COUNT; ++k) {
+        for (int k = 0; k < pending.length; ++k) {
           currentValues[k] = minValues[vOff] + (long) (averages[vOff] * (long) k);
         }
       } else {
-        for (int k = 0; k < MAX_PENDING_COUNT; ) {
-          k += deltas[vOff].get(k, currentValues, k, MAX_PENDING_COUNT - k);
+        for (int k = 0; k < pending.length; ) {
+          k += deltas[vOff].get(k, currentValues, k, pending.length - k);
         }
-        for (int k = 0; k < MAX_PENDING_COUNT; ++k) {
+        for (int k = 0; k < pending.length; ++k) {
           currentValues[k] = minValues[vOff] + (long) (averages[vOff] * (long) k) + zigZagDecode(currentValues[k]);
         }
       }
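The hunk above makes the page geometry of MonotonicAppendingLongBuffer configurable (initialPageCount/pageSize) instead of hard-wiring 16-value pages through MAX_PENDING_COUNT. A minimal usage sketch, assuming the pre-existing add(long) and iterator() API of the appending buffers is unchanged by this patch; the concrete sizes are arbitrary:

    // Illustrative only, not part of the patch; relies on the two-argument
    // constructor added above and the existing add()/iterator() API.
    MonotonicAppendingLongBuffer offsets = new MonotonicAppendingLongBuffer(16, 1024);
    for (long i = 0; i < 1000000; ++i) {
      offsets.add(i * 7);                        // values must be non-decreasing
    }
    MonotonicAppendingLongBuffer.Iterator it = offsets.iterator();
    long sum = 0;
    while (it.hasNext()) {
      sum += it.next();
    }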
diff --git a/lucene/core/src/java/org/apache/lucene/util/packed/MonotonicBlockPackedReader.java b/lucene/core/src/java/org/apache/lucene/util/packed/MonotonicBlockPackedReader.java
index 27b14dd..f7f6e44 100644
--- a/lucene/core/src/java/org/apache/lucene/util/packed/MonotonicBlockPackedReader.java
+++ b/lucene/core/src/java/org/apache/lucene/util/packed/MonotonicBlockPackedReader.java
@@ -17,8 +17,11 @@
  * limitations under the License.
  */
 
-import static org.apache.lucene.util.packed.AbstractBlockPackedWriter.checkBlockSize;
+import static org.apache.lucene.util.packed.AbstractBlockPackedWriter.MAX_BLOCK_SIZE;
+import static org.apache.lucene.util.packed.AbstractBlockPackedWriter.MIN_BLOCK_SIZE;
 import static org.apache.lucene.util.packed.BlockPackedReaderIterator.zigZagDecode;
+import static org.apache.lucene.util.packed.PackedInts.checkBlockSize;
+import static org.apache.lucene.util.packed.PackedInts.numBlocks;
 
 import java.io.IOException;
 
@@ -39,14 +42,10 @@
 
   /** Sole constructor. */
   public MonotonicBlockPackedReader(IndexInput in, int packedIntsVersion, int blockSize, long valueCount, boolean direct) throws IOException {
-    checkBlockSize(blockSize);
     this.valueCount = valueCount;
-    blockShift = Integer.numberOfTrailingZeros(blockSize);
+    blockShift = checkBlockSize(blockSize, MIN_BLOCK_SIZE, MAX_BLOCK_SIZE);
     blockMask = blockSize - 1;
-    final int numBlocks = (int) (valueCount / blockSize) + (valueCount % blockSize == 0 ? 0 : 1);
-    if ((long) numBlocks * blockSize < valueCount) {
-      throw new IllegalArgumentException("valueCount is too large for this block size");
-    }
+    final int numBlocks = numBlocks(valueCount, blockSize);
     minValues = new long[numBlocks];
     averages = new float[numBlocks];
     subReaders = new PackedInts.Reader[numBlocks];
diff --git a/lucene/core/src/java/org/apache/lucene/util/packed/PackedInts.java b/lucene/core/src/java/org/apache/lucene/util/packed/PackedInts.java
index b6db582..d26bd0a 100644
--- a/lucene/core/src/java/org/apache/lucene/util/packed/PackedInts.java
+++ b/lucene/core/src/java/org/apache/lucene/util/packed/PackedInts.java
@@ -213,6 +213,11 @@
       this.format = format;
       this.bitsPerValue = bitsPerValue;
     }
+
+    @Override
+    public String toString() {
+      return "FormatAndBits(format=" + format + " bitsPerValue=" + bitsPerValue + ")";
+    }
   }
 
   /**
@@ -1198,33 +1203,39 @@
       for (int i = 0; i < len; ++i) {
         dest.set(destPos++, src.get(srcPos++));
       }
-    } else {
+    } else if (len > 0) {
       // use bulk operations
-      long[] buf = new long[Math.min(capacity, len)];
-      int remaining = 0;
-      while (len > 0) {
-        final int read = src.get(srcPos, buf, remaining, Math.min(len, buf.length - remaining));
-        assert read > 0;
-        srcPos += read;
-        len -= read;
-        remaining += read;
-        final int written = dest.set(destPos, buf, 0, remaining);
-        assert written > 0;
-        destPos += written;
-        if (written < remaining) {
-          System.arraycopy(buf, written, buf, 0, remaining - written);
-        }
-        remaining -= written;
-      }
-      while (remaining > 0) {
-        final int written = dest.set(destPos, buf, 0, remaining);
-        destPos += written;
-        remaining -= written;
-        System.arraycopy(buf, written, buf, 0, remaining);
-      }
+      final long[] buf = new long[Math.min(capacity, len)];
+      copy(src, srcPos, dest, destPos, len, buf);
     }
   }
-  
+
+  /** Same as {@link #copy(Reader, int, Mutable, int, int, int)} but using a pre-allocated buffer. */
+  static void copy(Reader src, int srcPos, Mutable dest, int destPos, int len, long[] buf) {
+    assert buf.length > 0;
+    int remaining = 0;
+    while (len > 0) {
+      final int read = src.get(srcPos, buf, remaining, Math.min(len, buf.length - remaining));
+      assert read > 0;
+      srcPos += read;
+      len -= read;
+      remaining += read;
+      final int written = dest.set(destPos, buf, 0, remaining);
+      assert written > 0;
+      destPos += written;
+      if (written < remaining) {
+        System.arraycopy(buf, written, buf, 0, remaining - written);
+      }
+      remaining -= written;
+    }
+    while (remaining > 0) {
+      final int written = dest.set(destPos, buf, 0, remaining);
+      destPos += written;
+      remaining -= written;
+      System.arraycopy(buf, written, buf, 0, remaining);
+    }
+  }
+
   /**
    * Expert: reads only the metadata from a stream. This is useful to later
    * restore a stream or open a direct reader via 
@@ -1261,4 +1272,26 @@
     }    
   }
 
-}
\ No newline at end of file
+  /** Check that the block size is a power of 2, in the right bounds, and return
+   *  its log in base 2. */
+  static int checkBlockSize(int blockSize, int minBlockSize, int maxBlockSize) {
+    if (blockSize < minBlockSize || blockSize > maxBlockSize) {
+      throw new IllegalArgumentException("blockSize must be >= " + minBlockSize + " and <= " + maxBlockSize + ", got " + blockSize);
+    }
+    if ((blockSize & (blockSize - 1)) != 0) {
+      throw new IllegalArgumentException("blockSize must be a power of two, got " + blockSize);
+    }
+    return Integer.numberOfTrailingZeros(blockSize);
+  }
+
+  /** Return the number of blocks required to store <code>size</code> values on
+   *  <code>blockSize</code>. */
+  static int numBlocks(long size, int blockSize) {
+    final int numBlocks = (int) (size / blockSize) + (size % blockSize == 0 ? 0 : 1);
+    if ((long) numBlocks * blockSize < size) {
+      throw new IllegalArgumentException("size is too large for this block size");
+    }
+    return numBlocks;
+  }
+
+}
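The checkBlockSize/numBlocks helpers above centralize validation that BlockPackedReader, MonotonicBlockPackedReader and the new PagedGrowableWriter previously duplicated inline. Since both helpers are package-private, the standalone sketch below merely restates their arithmetic so the intended contract is easy to check; the wrapper class and main method are illustrative only:

    // Standalone restatement of the new helpers' contract; illustrative only.
    public class BlockMathSketch {
      static int checkBlockSize(int blockSize, int minBlockSize, int maxBlockSize) {
        if (blockSize < minBlockSize || blockSize > maxBlockSize) {
          throw new IllegalArgumentException("blockSize must be >= " + minBlockSize + " and <= " + maxBlockSize + ", got " + blockSize);
        }
        if ((blockSize & (blockSize - 1)) != 0) {
          throw new IllegalArgumentException("blockSize must be a power of two, got " + blockSize);
        }
        return Integer.numberOfTrailingZeros(blockSize); // the shift later used to compute block indices
      }
      static int numBlocks(long size, int blockSize) {
        final int numBlocks = (int) (size / blockSize) + (size % blockSize == 0 ? 0 : 1);
        if ((long) numBlocks * blockSize < size) {       // guards against int overflow of the block count
          throw new IllegalArgumentException("size is too large for this block size");
        }
        return numBlocks;
      }
      public static void main(String[] args) {
        System.out.println(checkBlockSize(1024, 64, 1 << 30)); // prints 10
        System.out.println(numBlocks(2500, 1024));             // prints 3: two full blocks plus one partial
      }
    }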
diff --git a/lucene/core/src/java/org/apache/lucene/util/packed/PagedGrowableWriter.java b/lucene/core/src/java/org/apache/lucene/util/packed/PagedGrowableWriter.java
new file mode 100644
index 0000000..1588a34
--- /dev/null
+++ b/lucene/core/src/java/org/apache/lucene/util/packed/PagedGrowableWriter.java
@@ -0,0 +1,136 @@
+package org.apache.lucene.util.packed;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import static org.apache.lucene.util.packed.PackedInts.checkBlockSize;
+import static org.apache.lucene.util.packed.PackedInts.numBlocks;
+
+/**
+ * A {@link PagedGrowableWriter}. This class slices data into fixed-size blocks
+ * which have independent numbers of bits per value and grow on-demand.
+ * <p>You should use this class instead of {@link AppendingLongBuffer} only when
+ * you need random write-access. Otherwise this class will likely be slower and
+ * less memory-efficient.
+ * @lucene.internal
+ */
+public final class PagedGrowableWriter {
+
+  static final int MIN_BLOCK_SIZE = 1 << 6;
+  static final int MAX_BLOCK_SIZE = 1 << 30;
+
+  final long size;
+  final int pageShift;
+  final int pageMask;
+  final GrowableWriter[] subWriters;
+  final int startBitsPerValue;
+  final float acceptableOverheadRatio;
+
+  /**
+   * Create a new {@link PagedGrowableWriter} instance.
+   *
+   * @param size the number of values to store.
+   * @param pageSize the number of values per page
+   * @param startBitsPerValue the initial number of bits per value
+   * @param acceptableOverheadRatio an acceptable overhead ratio
+   */
+  public PagedGrowableWriter(long size, int pageSize,
+      int startBitsPerValue, float acceptableOverheadRatio) {
+    this(size, pageSize, startBitsPerValue, acceptableOverheadRatio, true);
+  }
+
+  PagedGrowableWriter(long size, int pageSize, int startBitsPerValue, float acceptableOverheadRatio, boolean fillPages) {
+    this.size = size;
+    this.startBitsPerValue = startBitsPerValue;
+    this.acceptableOverheadRatio = acceptableOverheadRatio;
+    pageShift = checkBlockSize(pageSize, MIN_BLOCK_SIZE, MAX_BLOCK_SIZE);
+    pageMask = pageSize - 1;
+    final int numPages = numBlocks(size, pageSize);
+    subWriters = new GrowableWriter[numPages];
+    if (fillPages) {
+      for (int i = 0; i < numPages; ++i) {
+        // do not allocate for more entries than necessary on the last page
+        final int valueCount = i == numPages - 1 ? lastPageSize(size) : pageSize;
+        subWriters[i] = new GrowableWriter(startBitsPerValue, valueCount, acceptableOverheadRatio);
+      }
+    }
+  }
+
+  private int lastPageSize(long size) {
+    final int sz = indexInPage(size);
+    return sz == 0 ? pageSize() : sz;
+  }
+
+  private int pageSize() {
+    return pageMask + 1;
+  }
+
+  /** The number of values. */
+  public long size() {
+    return size;
+  }
+
+  int pageIndex(long index) {
+    return (int) (index >>> pageShift);
+  }
+
+  int indexInPage(long index) {
+    return (int) index & pageMask;
+  }
+
+  /** Get value at <code>index</code>. */
+  public long get(long index) {
+    assert index >= 0 && index < size;
+    final int pageIndex = pageIndex(index);
+    final int indexInPage = indexInPage(index);
+    return subWriters[pageIndex].get(indexInPage);
+  }
+
+  /** Set value at <code>index</code>. */
+  public void set(long index, long value) {
+    assert index >= 0 && index < size;
+    final int pageIndex = pageIndex(index);
+    final int indexInPage = indexInPage(index);
+    subWriters[pageIndex].set(indexInPage, value);
+  }
+
+  /** Create a new {@link PagedGrowableWriter} of size <code>newSize</code>
+   *  based on the content of this buffer. This method is much more efficient
+   *  than creating a new {@link PagedGrowableWriter} and copying values one by
+   *  one. */
+  public PagedGrowableWriter resize(long newSize) {
+    final PagedGrowableWriter newWriter = new PagedGrowableWriter(newSize, pageSize(), startBitsPerValue, acceptableOverheadRatio, false);
+    final int numCommonPages = Math.min(newWriter.subWriters.length, subWriters.length);
+    final long[] copyBuffer = new long[1024];
+    for (int i = 0; i < newWriter.subWriters.length; ++i) {
+      final int valueCount = i == newWriter.subWriters.length - 1 ? lastPageSize(newSize) : pageSize();
+      final int bpv = i < numCommonPages ? subWriters[i].getBitsPerValue() : startBitsPerValue;
+      newWriter.subWriters[i] = new GrowableWriter(bpv, valueCount, acceptableOverheadRatio);
+      if (i < numCommonPages) {
+        final int copyLength = Math.min(valueCount, subWriters[i].size());
+        PackedInts.copy(subWriters[i], 0, newWriter.subWriters[i].getMutable(), 0, copyLength, copyBuffer);
+      }
+    }
+    return newWriter;
+  }
+
+  @Override
+  public String toString() {
+    return getClass().getSimpleName() + "(size=" + size() + ",pageSize=" + pageSize() + ")";
+  }
+
+}
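As a quick orientation for reviewers, a hedged usage sketch of the class added above; it only relies on the constructor, set, get, size and resize members shown in this hunk, and the concrete sizes are arbitrary:

    // Illustrative only, not part of the patch.
    PagedGrowableWriter writer = new PagedGrowableWriter(
        3000000000L,          // supports more than 2B values, hence long indices
        1 << 10,              // pageSize: a power of two in [MIN_BLOCK_SIZE, MAX_BLOCK_SIZE]
        2,                    // startBitsPerValue: each page grows its bpv on demand
        PackedInts.COMPACT);  // acceptableOverheadRatio
    writer.set(2999999999L, 42L);
    long v = writer.get(2999999999L);                        // 42
    PagedGrowableWriter grown = writer.resize(4000000000L);  // keeps the values the two sizes have in common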
diff --git a/lucene/core/src/java/org/apache/lucene/util/packed/package.html b/lucene/core/src/java/org/apache/lucene/util/packed/package.html
index 1696033..50470dd 100644
--- a/lucene/core/src/java/org/apache/lucene/util/packed/package.html
+++ b/lucene/core/src/java/org/apache/lucene/util/packed/package.html
@@ -47,6 +47,11 @@
         <li>Same as PackedInts.Mutable but grows the number of bits per values when needed.</li>
         <li>Useful to build a PackedInts.Mutable from a read-once stream of longs.</li>
     </ul></li>
+    <li><b>{@link org.apache.lucene.util.packed.PagedGrowableWriter}</b><ul>
+        <li>Slices data into fixed-size blocks stored in GrowableWriters.</li>
+        <li>Supports more than 2B values.</li>
+        <li>You should use AppendingLongBuffer instead if you don't need random write access.</li>
+    </ul></li>
     <li><b>{@link org.apache.lucene.util.packed.AppendingLongBuffer}</b><ul>
         <li>Can store any sequence of longs.</li>
         <li>Compression is good when values are close to each other.</li>
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestAtomicUpdate.java b/lucene/core/src/test/org/apache/lucene/index/TestAtomicUpdate.java
index dad7361..644477e 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestAtomicUpdate.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestAtomicUpdate.java
@@ -25,18 +25,7 @@
 import org.apache.lucene.util.*;
 
 public class TestAtomicUpdate extends LuceneTestCase {
-  private static final class MockIndexWriter extends IndexWriter {
-    public MockIndexWriter(Directory dir, IndexWriterConfig conf) throws IOException {
-      super(dir, conf);
-    }
-
-    @Override
-    boolean testPoint(String name) {
-      if (LuceneTestCase.random().nextInt(4) == 2)
-        Thread.yield();
-      return true;
-    }
-  }
+  
 
   private static abstract class TimedThread extends Thread {
     volatile boolean failed;
@@ -124,7 +113,7 @@
         TEST_VERSION_CURRENT, new MockAnalyzer(random()))
         .setMaxBufferedDocs(7);
     ((TieredMergePolicy) conf.getMergePolicy()).setMaxMergeAtOnce(3);
-    IndexWriter writer = new MockIndexWriter(directory, conf);
+    IndexWriter writer = RandomIndexWriter.mockIndexWriter(directory, conf, random());
 
     // Establish a base index of 100 docs:
     for(int i=0;i<100;i++) {
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestCodecs.java b/lucene/core/src/test/org/apache/lucene/index/TestCodecs.java
index 794474b..da49331 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestCodecs.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestCodecs.java
@@ -658,8 +658,7 @@
     final int termIndexInterval = _TestUtil.nextInt(random(), 13, 27);
     final Codec codec = Codec.getDefault();
     final SegmentInfo si = new SegmentInfo(dir, Constants.LUCENE_MAIN_VERSION, SEGMENT, 10000, false, codec, null, null);
-    final SegmentWriteState state = 
-        new SegmentWriteState(InfoStream.getDefault(), dir, si, 0, fieldInfos, termIndexInterval, null, null, newIOContext(random()));
+    final SegmentWriteState state = new SegmentWriteState(InfoStream.getDefault(), dir, si, termIndexInterval, fieldInfos, termIndexInterval, null, null, newIOContext(random()));
 
     final FieldsConsumer consumer = codec.postingsFormat().fieldsConsumer(state);
     Arrays.sort(fields);
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestConcurrentMergeScheduler.java b/lucene/core/src/test/org/apache/lucene/index/TestConcurrentMergeScheduler.java
index 0c76427..5a09f34 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestConcurrentMergeScheduler.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestConcurrentMergeScheduler.java
@@ -58,6 +58,9 @@
         boolean isClose = false;
         StackTraceElement[] trace = new Exception().getStackTrace();
         for (int i = 0; i < trace.length; i++) {
+          if (isDoFlush && isClose) {
+            break;
+          }
           if ("flush".equals(trace[i].getMethodName())) {
             isDoFlush = true;
           }
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestCustomNorms.java b/lucene/core/src/test/org/apache/lucene/index/TestCustomNorms.java
index eec412c..2a0528e 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestCustomNorms.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestCustomNorms.java
@@ -112,12 +112,7 @@
     }
 
     @Override
-    public ExactSimScorer exactSimScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
-      throw new UnsupportedOperationException();
-    }
-
-    @Override
-    public SloppySimScorer sloppySimScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
+    public SimScorer simScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
       throw new UnsupportedOperationException();
     }
   }
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestDocumentsWriterStallControl.java b/lucene/core/src/test/org/apache/lucene/index/TestDocumentsWriterStallControl.java
index 4069223..319d7bc 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestDocumentsWriterStallControl.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestDocumentsWriterStallControl.java
@@ -339,6 +339,7 @@
       for (Thread thread : threads) {
         if (thread.getState() != state) {
           done = false;
+          break;
         }
       }
       if (done) {
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestFieldReplacements.java b/lucene/core/src/test/org/apache/lucene/index/TestFieldReplacements.java
index e8fb76e..5e3f01b 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestFieldReplacements.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestFieldReplacements.java
@@ -773,8 +773,9 @@
   }
   
   public void testReplaceLayers() throws IOException {
-    IndexWriter writer = new IndexWriter(dir, newIndexWriterConfig(
-        TEST_VERSION_CURRENT, new MockAnalyzer(random())));
+    IndexWriterConfig indexWriterConfig = newIndexWriterConfig(
+        TEST_VERSION_CURRENT, new MockAnalyzer(random()));
+    IndexWriter writer = new IndexWriter(dir, indexWriterConfig);
     
     FieldType fieldType = new FieldType();
     fieldType.setIndexed(true);
@@ -784,6 +785,7 @@
     
     Document doc0 = new Document();
     doc0.add(new StoredField("f1", "a", fieldType));
+    doc0.add(new StoredField("f2", "a", fieldType));
     writer.addDocument(doc0);
 
     // add f2:b
@@ -791,7 +793,7 @@
     fields1.add(new StoredField("f2", "b", fieldType));
     writer.updateFields(Operation.ADD_FIELDS, new Term("f1", "a"), fields1);
     
-    // remove f2:b and add f2:c
+    // remove f2:a and f2:b, add f2:c
     Document fields2 = new Document();
     fields2.add(new StoredField("f2", "c", fieldType));
     writer.updateFields(Operation.REPLACE_FIELDS, new Term("f2", "b"), fields2);
@@ -801,12 +803,17 @@
     fields3.add(new StoredField("f2", "d", fieldType));
     writer.updateFields(Operation.ADD_FIELDS, new Term("f2", "b"), fields3);
     
+    // do nothing since f2:a was removed
+    writer.deleteDocuments(new Term("f2", "a"));
+    
     writer.close();
     
     DirectoryReader directoryReader = DirectoryReader.open(dir);
     final AtomicReader atomicReader = directoryReader.leaves().get(0).reader();
     printField(atomicReader, "f1");
     
+    assertEquals("wrong number of documents", 1, directoryReader.numDocs());
+    
     // check indexed fields
     final DocsAndPositionsEnum termPositionsA = atomicReader
         .termPositionsEnum(new Term("f1", "a"));
@@ -816,6 +823,12 @@
     assertEquals("wrong doc id", DocIdSetIterator.NO_MORE_DOCS,
         termPositionsA.nextDoc());
     
+    final DocsAndPositionsEnum termPositionsA2 = atomicReader
+        .termPositionsEnum(new Term("f2", "a"));
+    assertNotNull("no positions for term", termPositionsA2);
+    assertEquals("wrong doc id", DocIdSetIterator.NO_MORE_DOCS,
+        termPositionsA2.nextDoc());
+    
     final DocsAndPositionsEnum termPositionsB = atomicReader
         .termPositionsEnum(new Term("f2", "b"));
     assertNotNull("no positions for term", termPositionsB);
@@ -826,6 +839,7 @@
         .termPositionsEnum(new Term("f2", "c"));
     assertNotNull("no positions for term", termPositionsC);
     assertEquals("wrong doc id", 0, termPositionsC.nextDoc());
+    // 100000 == 2 * StackedDocsEnum.STACKED_SEGMENT_POSITION_INCREMENT
     assertEquals("wrong position", 100000, termPositionsC.nextPosition());
     assertEquals("wrong doc id", DocIdSetIterator.NO_MORE_DOCS,
         termPositionsC.nextDoc());
@@ -872,7 +886,7 @@
   }
   
   public void printIndexes() throws IOException {
-    File outDir = new File("D:/temp/ifu/compare/scenario/b");
+    File outDir = new File("D:/temp/ifu/compare/scenario/a");
     outDir.mkdirs();
     
     for (int i = 0; i < 42; i++) {
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java
index 02b6abb..67defdd 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriter.java
@@ -1183,6 +1183,46 @@
     t.join();
     assertFalse(t.failed);
   }
+  
+  /** testThreadInterruptDeadlock but with 2 indexer threads */
+  public void testTwoThreadsInterruptDeadlock() throws Exception {
+    IndexerThreadInterrupt t1 = new IndexerThreadInterrupt();
+    t1.setDaemon(true);
+    t1.start();
+    
+    IndexerThreadInterrupt t2 = new IndexerThreadInterrupt();
+    t2.setDaemon(true);
+    t2.start();
+
+    // Force class loader to load ThreadInterruptedException
+    // up front... else we can see a false failure if 2nd
+    // interrupt arrives while class loader is trying to
+    // init this class (in servicing a first interrupt):
+    assertTrue(new ThreadInterruptedException(new InterruptedException()).getCause() instanceof InterruptedException);
+
+    // issue 300 interrupts to child thread
+    final int numInterrupts = atLeast(300);
+    int i = 0;
+    while(i < numInterrupts) {
+      // TODO: would be nice to also sometimes interrupt the
+      // CMS merge threads too ...
+      Thread.sleep(10);
+      IndexerThreadInterrupt t = random().nextBoolean() ? t1 : t2;
+      if (t.allowInterrupt) {
+        i++;
+        t.interrupt();
+      }
+      if (!t1.isAlive() && !t2.isAlive()) {
+        break;
+      }
+    }
+    t1.finish = true;
+    t2.finish = true;
+    t1.join();
+    t2.join();
+    assertFalse(t1.failed);
+    assertFalse(t2.failed);
+  }
 
 
   public void testIndexStoreCombos() throws Exception {
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
index cd5cee9..bcb154d 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
@@ -25,6 +25,8 @@
 import java.util.Collections;
 import java.util.List;
 import java.util.Random;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 
@@ -302,6 +304,69 @@
     modifier.close();
     dir.close();
   }
+  
+  
+  public void testDeleteAllNoDeadLock() throws IOException, InterruptedException {
+    Directory dir = newDirectory();
+    final RandomIndexWriter modifier = new RandomIndexWriter(random(), dir); 
+    int numThreads = atLeast(2);
+    Thread[] threads = new Thread[numThreads];
+    final CountDownLatch latch = new CountDownLatch(1);
+    final CountDownLatch doneLatch = new CountDownLatch(numThreads);
+    for (int i = 0; i < numThreads; i++) {
+      final int offset = i;
+      threads[i] = new Thread() {
+        @Override
+        public void run() {
+          int id = offset * 1000;
+          int value = 100;
+          try {
+            latch.await();
+            for (int i = 0; i < 1000; i++) {
+              Document doc = new Document();
+              doc.add(newTextField("content", "aaa", Field.Store.NO));
+              doc.add(newStringField("id", String.valueOf(id++), Field.Store.YES));
+              doc.add(newStringField("value", String.valueOf(value), Field.Store.NO));
+              doc.add(new NumericDocValuesField("dv", value));
+              modifier.addDocument(doc);
+              if (VERBOSE) {
+                System.out.println("\tThread["+offset+"]: add doc: " + id);
+              }
+            }
+          } catch (Exception e) {
+            throw new RuntimeException(e);
+          } finally {
+            doneLatch.countDown();
+            if (VERBOSE) {
+              System.out.println("\tThread["+offset+"]: done indexing" );
+            }
+          }
+        }
+      };
+      threads[i].start();
+    }
+    latch.countDown();
+    while(!doneLatch.await(1, TimeUnit.MILLISECONDS)) {
+      modifier.deleteAll();
+      if (VERBOSE) {
+        System.out.println("del all");
+      }
+    }
+    
+    modifier.deleteAll();
+    for (Thread thread : threads) {
+      thread.join();
+    }
+    
+    modifier.close();
+    DirectoryReader reader = DirectoryReader.open(dir);
+    assertEquals(reader.maxDoc(), 0);
+    assertEquals(reader.numDocs(), 0);
+    assertEquals(reader.numDeletedDocs(), 0);
+    reader.close();
+
+    dir.close();
+  }
 
   // test rollback of deleteAll()
   public void testDeleteAllRollback() throws IOException {
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java
index 10280bf..4d80262 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions.java
@@ -210,15 +210,10 @@
 
   ThreadLocal<Thread> doFail = new ThreadLocal<Thread>();
 
-  private class MockIndexWriter extends IndexWriter {
+  private class TestPoint1 implements RandomIndexWriter.TestPoint {
     Random r = new Random(random().nextLong());
-
-    public MockIndexWriter(Directory dir, IndexWriterConfig conf) throws IOException {
-      super(dir, conf);
-    }
-
     @Override
-    boolean testPoint(String name) {
+    public void apply(String name) {
       if (doFail.get() != null && !name.equals("startDoFlush") && r.nextInt(40) == 17) {
         if (VERBOSE) {
           System.out.println(Thread.currentThread().getName() + ": NOW FAIL: " + name);
@@ -226,7 +221,6 @@
         }
         throw new RuntimeException(Thread.currentThread().getName() + ": intentionally failing at " + name);
       }
-      return true;
     }
   }
 
@@ -238,8 +232,9 @@
 
     MockAnalyzer analyzer = new MockAnalyzer(random());
     analyzer.setEnableChecks(false); // disable workflow checking as we forcefully close() in exceptional cases.
-    MockIndexWriter writer  = new MockIndexWriter(dir, newIndexWriterConfig( TEST_VERSION_CURRENT, analyzer)
-        .setRAMBufferSizeMB(0.1).setMergeScheduler(new ConcurrentMergeScheduler()));
+    
+    IndexWriter writer  = RandomIndexWriter.mockIndexWriter(dir, newIndexWriterConfig( TEST_VERSION_CURRENT, analyzer)
+        .setRAMBufferSizeMB(0.1).setMergeScheduler(new ConcurrentMergeScheduler()), new TestPoint1());
     ((ConcurrentMergeScheduler) writer.getConfig().getMergeScheduler()).setSuppressExceptions();
     //writer.setMaxBufferedDocs(10);
     if (VERBOSE) {
@@ -281,8 +276,8 @@
     Directory dir = newDirectory();
     MockAnalyzer analyzer = new MockAnalyzer(random());
     analyzer.setEnableChecks(false); // disable workflow checking as we forcefully close() in exceptional cases.
-    MockIndexWriter writer  = new MockIndexWriter(dir, newIndexWriterConfig( TEST_VERSION_CURRENT, analyzer)
-        .setRAMBufferSizeMB(0.2).setMergeScheduler(new ConcurrentMergeScheduler()));
+    IndexWriter writer  = RandomIndexWriter.mockIndexWriter(dir, newIndexWriterConfig( TEST_VERSION_CURRENT, analyzer)
+        .setRAMBufferSizeMB(0.2).setMergeScheduler(new ConcurrentMergeScheduler()), new TestPoint1());
     ((ConcurrentMergeScheduler) writer.getConfig().getMergeScheduler()).setSuppressExceptions();
     //writer.setMaxBufferedDocs(10);
     writer.commit();
@@ -324,19 +319,13 @@
   }
 
   // LUCENE-1198
-  private static final class MockIndexWriter2 extends IndexWriter {
-
-    public MockIndexWriter2(Directory dir, IndexWriterConfig conf) throws IOException {
-      super(dir, conf);
-    }
-
+  private static final class TestPoint2 implements RandomIndexWriter.TestPoint {
     boolean doFail;
 
     @Override
-    boolean testPoint(String name) {
+    public void apply(String name) {
       if (doFail && name.equals("DocumentsWriterPerThread addDocument start"))
         throw new RuntimeException("intentionally failing");
-      return true;
     }
   }
 
@@ -367,11 +356,12 @@
 
   public void testExceptionDocumentsWriterInit() throws IOException {
     Directory dir = newDirectory();
-    MockIndexWriter2 w = new MockIndexWriter2(dir, newIndexWriterConfig( TEST_VERSION_CURRENT, new MockAnalyzer(random())));
+    TestPoint2 testPoint = new TestPoint2();
+    IndexWriter w = RandomIndexWriter.mockIndexWriter(dir, newIndexWriterConfig( TEST_VERSION_CURRENT, new MockAnalyzer(random())), testPoint);
     Document doc = new Document();
     doc.add(newTextField("field", "a field", Field.Store.YES));
     w.addDocument(doc);
-    w.doFail = true;
+    testPoint.doFail = true;
     try {
       w.addDocument(doc);
       fail("did not hit exception");
@@ -385,7 +375,7 @@
   // LUCENE-1208
   public void testExceptionJustBeforeFlush() throws IOException {
     Directory dir = newDirectory();
-    MockIndexWriter w = new MockIndexWriter(dir, newIndexWriterConfig( TEST_VERSION_CURRENT, new MockAnalyzer(random())).setMaxBufferedDocs(2));
+    IndexWriter w = RandomIndexWriter.mockIndexWriter(dir, newIndexWriterConfig( TEST_VERSION_CURRENT, new MockAnalyzer(random())).setMaxBufferedDocs(2), new TestPoint1());
     Document doc = new Document();
     doc.add(newTextField("field", "a field", Field.Store.YES));
     w.addDocument(doc);
@@ -412,22 +402,15 @@
     dir.close();
   }
 
-  private static final class MockIndexWriter3 extends IndexWriter {
-
-    public MockIndexWriter3(Directory dir, IndexWriterConfig conf) throws IOException {
-      super(dir, conf);
-    }
-
+  private static final class TestPoint3 implements RandomIndexWriter.TestPoint {
     boolean doFail;
     boolean failed;
-
     @Override
-    boolean testPoint(String name) {
+    public void apply(String name) {
       if (doFail && name.equals("startMergeInit")) {
         failed = true;
         throw new RuntimeException("intentionally failing");
       }
-      return true;
     }
   }
 
@@ -441,8 +424,9 @@
     cms.setSuppressExceptions();
     conf.setMergeScheduler(cms);
     ((LogMergePolicy) conf.getMergePolicy()).setMergeFactor(2);
-    MockIndexWriter3 w = new MockIndexWriter3(dir, conf);
-    w.doFail = true;
+    TestPoint3 testPoint = new TestPoint3();
+    IndexWriter w = RandomIndexWriter.mockIndexWriter(dir, conf, testPoint);
+    testPoint.doFail = true;
     Document doc = new Document();
     doc.add(newTextField("field", "a field", Field.Store.YES));
     for(int i=0;i<10;i++)
@@ -453,7 +437,7 @@
       }
 
     ((ConcurrentMergeScheduler) w.getConfig().getMergeScheduler()).sync();
-    assertTrue(w.failed);
+    assertTrue(testPoint.failed);
     w.close();
     dir.close();
   }
@@ -555,10 +539,15 @@
         boolean sawAppend = false;
         boolean sawFlush = false;
         for (int i = 0; i < trace.length; i++) {
-          if (FreqProxTermsWriterPerField.class.getName().equals(trace[i].getClassName()) && "flush".equals(trace[i].getMethodName()))
+          if (sawAppend && sawFlush) {
+            break;
+          }
+          if (FreqProxTermsWriterPerField.class.getName().equals(trace[i].getClassName()) && "flush".equals(trace[i].getMethodName())) {
             sawAppend = true;
-          if ("flush".equals(trace[i].getMethodName()))
+          }
+          if ("flush".equals(trace[i].getMethodName())) {
             sawFlush = true;
+          }
         }
 
         if (sawAppend && sawFlush && count++ >= 30) {
@@ -892,12 +881,18 @@
       boolean isDelete = false;
       boolean isInGlobalFieldMap = false;
       for (int i = 0; i < trace.length; i++) {
-        if (SegmentInfos.class.getName().equals(trace[i].getClassName()) && stage.equals(trace[i].getMethodName()))
+        if (isCommit && isDelete && isInGlobalFieldMap) {
+          break;
+        }
+        if (SegmentInfos.class.getName().equals(trace[i].getClassName()) && stage.equals(trace[i].getMethodName())) {
           isCommit = true;
-        if (MockDirectoryWrapper.class.getName().equals(trace[i].getClassName()) && "deleteFile".equals(trace[i].getMethodName()))
+        }
+        if (MockDirectoryWrapper.class.getName().equals(trace[i].getClassName()) && "deleteFile".equals(trace[i].getMethodName())) {
           isDelete = true;
-        if (SegmentInfos.class.getName().equals(trace[i].getClassName()) && "writeGlobalFieldMap".equals(trace[i].getMethodName()))
+        }
+        if (SegmentInfos.class.getName().equals(trace[i].getClassName()) && "writeGlobalFieldMap".equals(trace[i].getMethodName())) {
           isInGlobalFieldMap = true;
+        }
           
       }
       if (isInGlobalFieldMap && dontFailDuringGlobalFieldMap) {
@@ -1014,29 +1009,26 @@
   }
 
   // LUCENE-1347
-  private static final class MockIndexWriter4 extends IndexWriter {
-
-    public MockIndexWriter4(Directory dir, IndexWriterConfig conf) throws IOException {
-      super(dir, conf);
-    }
+  private static final class TestPoint4 implements RandomIndexWriter.TestPoint {
 
     boolean doFail;
 
     @Override
-    boolean testPoint(String name) {
+    public void apply(String name) {
       if (doFail && name.equals("rollback before checkpoint"))
         throw new RuntimeException("intentionally failing");
-      return true;
     }
   }
 
   // LUCENE-1347
   public void testRollbackExceptionHang() throws Throwable {
     Directory dir = newDirectory();
-    MockIndexWriter4 w = new MockIndexWriter4(dir, newIndexWriterConfig( TEST_VERSION_CURRENT, new MockAnalyzer(random())));
+    TestPoint4 testPoint = new TestPoint4();
+    IndexWriter w = RandomIndexWriter.mockIndexWriter(dir, newIndexWriterConfig( TEST_VERSION_CURRENT, new MockAnalyzer(random())), testPoint);
+    
 
     addDoc(w);
-    w.doFail = true;
+    testPoint.doFail = true;
     try {
       w.rollback();
       fail("did not hit intentional RuntimeException");
@@ -1044,7 +1036,7 @@
       // expected
     }
 
-    w.doFail = false;
+    testPoint.doFail = false;
     w.rollback();
     dir.close();
   }
@@ -1342,6 +1334,7 @@
       for (int i = 0; i < trace.length; i++) {
         if (TermVectorsConsumer.class.getName().equals(trace[i].getClassName()) && stage.equals(trace[i].getMethodName())) {
           fail = true;
+          break;
         }
       }
       
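The test changes above replace the per-test MockIndexWriter subclasses with RandomIndexWriter.TestPoint implementations handed to RandomIndexWriter.mockIndexWriter. A hedged sketch of the pattern as it is used in these hunks; the class name and failure condition below are illustrative, while apply(String), mockIndexWriter(Directory, IndexWriterConfig, TestPoint) and the "startDoFlush" test point all appear in this patch:

    // Illustrative sketch of the new test-point hook; not part of the patch.
    final class FailAtFlushTestPoint implements RandomIndexWriter.TestPoint {
      @Override
      public void apply(String name) {
        // Throwing from a named test point simulates a failure at that stage of IndexWriter.
        if ("startDoFlush".equals(name)) {
          throw new RuntimeException("intentionally failing at " + name);
        }
      }
    }
    IndexWriter w = RandomIndexWriter.mockIndexWriter(dir,
        newIndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random())),
        new FailAtFlushTestPoint());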
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnJRECrash.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnJRECrash.java
index 535f9e3..75b393f 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnJRECrash.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterOnJRECrash.java
@@ -18,10 +18,11 @@
  *
  */
 
-import java.io.BufferedInputStream;
 import java.io.File;
 import java.io.IOException;
 import java.io.InputStream;
+import java.io.OutputStream;
+import java.io.PrintStream;
 import java.lang.reflect.Field;
 import java.lang.reflect.Method;
 import java.util.ArrayList;
@@ -49,10 +50,6 @@
   
   @Override @Nightly
   public void testNRTThreads() throws Exception {
-    String vendor = Constants.JAVA_VENDOR;
-    assumeTrue(vendor + " JRE not supported.", 
-        vendor.startsWith("Oracle") || vendor.startsWith("Sun") || vendor.startsWith("Apple"));
-    
     // if we are not the fork
     if (System.getProperty("tests.crashmode") == null) {
       // try up to 10 times to create an index
@@ -112,18 +109,40 @@
     pb.directory(tempDir);
     pb.redirectErrorStream(true);
     Process p = pb.start();
-    InputStream is = p.getInputStream();
-    BufferedInputStream isl = new BufferedInputStream(is);
-    byte buffer[] = new byte[1024];
-    int len = 0;
-    if (VERBOSE) System.err.println(">>> Begin subprocess output");
-    while ((len = isl.read(buffer)) != -1) {
-      if (VERBOSE) {
-        System.err.write(buffer, 0, len);
-      }
-    }
-    if (VERBOSE) System.err.println("<<< End subprocess output");
+
+    // We pump everything to stderr.
+    PrintStream childOut = System.err; 
+    Thread stdoutPumper = ThreadPumper.start(p.getInputStream(), childOut);
+    Thread stderrPumper = ThreadPumper.start(p.getErrorStream(), childOut);
+    if (VERBOSE) childOut.println(">>> Begin subprocess output");
     p.waitFor();
+    stdoutPumper.join();
+    stderrPumper.join();
+    if (VERBOSE) childOut.println("<<< End subprocess output");
+  }
+
+  /** A pipe thread. It'd be nice to reuse guava's implementation for this... */
+  static class ThreadPumper {
+    public static Thread start(final InputStream from, final OutputStream to) {
+      Thread t = new Thread() {
+        @Override
+        public void run() {
+          try {
+            byte [] buffer = new byte [1024];
+            int len;
+            while ((len = from.read(buffer)) != -1) {
+              if (VERBOSE) {
+                to.write(buffer, 0, len);
+              }
+            }
+          } catch (IOException e) {
+            System.err.println("Couldn't pipe from the forked process: " + e.toString());
+          }
+        }
+      };
+      t.start();
+      return t;
+    }
   }
   
   /**
@@ -155,20 +174,40 @@
     }
     return false;
   }
-  
+
   /**
    * currently, this only works/tested on Sun and IBM.
    */
   public void crashJRE() {
-    try {
-      Class<?> clazz = Class.forName("sun.misc.Unsafe");
-      // we should use getUnsafe instead, harmony implements it, etc.
-      Field field = clazz.getDeclaredField("theUnsafe");
-      field.setAccessible(true);
-      Object o = field.get(null);
-      Method m = clazz.getMethod("putAddress", long.class, long.class);
-      m.invoke(o, 0L, 0L);
-    } catch (Exception e) { e.printStackTrace(); }
-    fail();
+    final String vendor = Constants.JAVA_VENDOR;
+    final boolean supportsUnsafeNpeDereference = 
+        vendor.startsWith("Oracle") || 
+        vendor.startsWith("Sun") || 
+        vendor.startsWith("Apple");
+
+      try {
+        if (supportsUnsafeNpeDereference) {
+          try {
+            Class<?> clazz = Class.forName("sun.misc.Unsafe");
+            Field field = clazz.getDeclaredField("theUnsafe");
+            field.setAccessible(true);
+            Object o = field.get(null);
+            Method m = clazz.getMethod("putAddress", long.class, long.class);
+            m.invoke(o, 0L, 0L);
+          } catch (Throwable e) {
+            System.out.println("Couldn't kill the JVM via Unsafe.");
+            e.printStackTrace(System.out); 
+          }
+        }
+
+        // Fallback attempt to Runtime.halt();
+        Runtime.getRuntime().halt(-1);
+      } catch (Exception e) {
+        System.out.println("Couldn't kill the JVM.");
+        e.printStackTrace(System.out); 
+      }
+
+      // We couldn't get the JVM to crash for some reason.
+      fail();
   }
 }
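The ThreadPumper added above drains the forked JVM's stdout and stderr on separate threads, so a full pipe buffer can no longer stall the child before waitFor() returns; note that output is only forwarded while VERBOSE is set. The pattern repeated in isolation, mirroring the calls already present in this hunk:

    // Illustrative recap of the pumping pattern; not additional patch content.
    Process p = pb.start();
    PrintStream childOut = System.err;                        // everything is pumped to stderr
    Thread stdoutPumper = ThreadPumper.start(p.getInputStream(), childOut);
    Thread stderrPumper = ThreadPumper.start(p.getErrorStream(), childOut);
    p.waitFor();
    stdoutPumper.join();                                      // join both pumpers so no trailing output is lost
    stderrPumper.join();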
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterWithThreads.java b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterWithThreads.java
index cc646b1..9e4e5a3 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterWithThreads.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterWithThreads.java
@@ -358,6 +358,9 @@
         boolean sawClose = false;
         boolean sawMerge = false;
         for (int i = 0; i < trace.length; i++) {
+          if (sawAbortOrFlushDoc && sawMerge && sawClose) {
+            break;
+          }
           if ("abort".equals(trace[i].getMethodName()) ||
               "finishDocument".equals(trace[i].getMethodName())) {
             sawAbortOrFlushDoc = true;
@@ -370,8 +373,9 @@
           }
         }
         if (sawAbortOrFlushDoc && !sawClose && !sawMerge) {
-          if (onlyOnce)
+          if (onlyOnce) {
             doFail = false;
+          }
           //System.out.println(Thread.currentThread().getName() + ": now fail");
           //new Throwable().printStackTrace(System.out);
           throw new IOException("now failing on purpose");
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestNorms.java b/lucene/core/src/test/org/apache/lucene/index/TestNorms.java
index bf7f13c..51e921c 100755
--- a/lucene/core/src/test/org/apache/lucene/index/TestNorms.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestNorms.java
@@ -179,12 +179,7 @@
     }
 
     @Override
-    public ExactSimScorer exactSimScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
-      throw new UnsupportedOperationException();
-    }
-
-    @Override
-    public SloppySimScorer sloppySimScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
+    public SimScorer simScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
       throw new UnsupportedOperationException();
     }
   } 
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestStressIndexing2.java b/lucene/core/src/test/org/apache/lucene/index/TestStressIndexing2.java
index 906f80cf..271711f 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestStressIndexing2.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestStressIndexing2.java
@@ -47,21 +47,16 @@
   static int maxBufferedDocs=3;
   static int seed=0;
 
-  public class MockIndexWriter extends IndexWriter {
-
-    public MockIndexWriter(Directory dir, IndexWriterConfig conf) throws IOException {
-      super(dir, conf);
-    }
+  public final class YieldTestPoint implements RandomIndexWriter.TestPoint {
 
     @Override
-    boolean testPoint(String name) {
+    public void apply(String name) {
       //      if (name.equals("startCommit")) {
       if (random().nextInt(4) == 2)
         Thread.yield();
-      return true;
     }
   }
-  
+//  
   public void testRandomIWReader() throws Throwable {
     Directory dir = newDirectory();
     
@@ -151,9 +146,9 @@
   
   public DocsAndWriter indexRandomIWReader(int nThreads, int iterations, int range, Directory dir) throws IOException, InterruptedException {
     Map<String,Document> docs = new HashMap<String,Document>();
-    IndexWriter w = new MockIndexWriter(dir, newIndexWriterConfig(
+    IndexWriter w = RandomIndexWriter.mockIndexWriter(dir, newIndexWriterConfig(
         TEST_VERSION_CURRENT, new MockAnalyzer(random())).setOpenMode(OpenMode.CREATE).setRAMBufferSizeMB(
-                                                                                                  0.1).setMaxBufferedDocs(maxBufferedDocs).setMergePolicy(newLogMergePolicy()));
+            0.1).setMaxBufferedDocs(maxBufferedDocs).setMergePolicy(newLogMergePolicy()), new YieldTestPoint());
     w.commit();
     LogMergePolicy lmp = (LogMergePolicy) w.getConfig().getMergePolicy();
     lmp.setUseCompoundFile(false);
@@ -202,10 +197,10 @@
   public Map<String,Document> indexRandom(int nThreads, int iterations, int range, Directory dir, int maxThreadStates,
                                           boolean doReaderPooling) throws IOException, InterruptedException {
     Map<String,Document> docs = new HashMap<String,Document>();
-    IndexWriter w = new MockIndexWriter(dir, newIndexWriterConfig(
+    IndexWriter w = RandomIndexWriter.mockIndexWriter(dir, newIndexWriterConfig(
         TEST_VERSION_CURRENT, new MockAnalyzer(random())).setOpenMode(OpenMode.CREATE)
              .setRAMBufferSizeMB(0.1).setMaxBufferedDocs(maxBufferedDocs).setIndexerThreadPool(new ThreadAffinityDocumentsWriterThreadPool(maxThreadStates))
-             .setReaderPooling(doReaderPooling).setMergePolicy(newLogMergePolicy()));
+             .setReaderPooling(doReaderPooling).setMergePolicy(newLogMergePolicy()), new YieldTestPoint());
     LogMergePolicy lmp = (LogMergePolicy) w.getConfig().getMergePolicy();
     lmp.setUseCompoundFile(false);
     lmp.setMergeFactor(mergeFactor);
diff --git a/lucene/core/src/test/org/apache/lucene/index/TestUniqueTermCount.java b/lucene/core/src/test/org/apache/lucene/index/TestUniqueTermCount.java
index 5e1da90..3afa9a2 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestUniqueTermCount.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestUniqueTermCount.java
@@ -110,12 +110,7 @@
     }
 
     @Override
-    public ExactSimScorer exactSimScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
-      throw new UnsupportedOperationException();
-    }
-
-    @Override
-    public SloppySimScorer sloppySimScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
+    public SimScorer simScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
       throw new UnsupportedOperationException();
     }
   }
diff --git a/lucene/core/src/test/org/apache/lucene/search/JustCompileSearch.java b/lucene/core/src/test/org/apache/lucene/search/JustCompileSearch.java
index 7171cb1..b4bf0a4 100644
--- a/lucene/core/src/test/org/apache/lucene/search/JustCompileSearch.java
+++ b/lucene/core/src/test/org/apache/lucene/search/JustCompileSearch.java
@@ -270,12 +270,7 @@
     }
 
     @Override
-    public ExactSimScorer exactSimScorer(SimWeight stats, AtomicReaderContext context) {
-      throw new UnsupportedOperationException(UNSUPPORTED_MSG);
-    }
-
-    @Override
-    public SloppySimScorer sloppySimScorer(SimWeight stats, AtomicReaderContext context) {
+    public SimScorer simScorer(SimWeight stats, AtomicReaderContext context) {
       throw new UnsupportedOperationException(UNSUPPORTED_MSG);
     }
 
diff --git a/lucene/core/src/test/org/apache/lucene/search/TestConjunctions.java b/lucene/core/src/test/org/apache/lucene/search/TestConjunctions.java
index a326e78..1c9497f 100644
--- a/lucene/core/src/test/org/apache/lucene/search/TestConjunctions.java
+++ b/lucene/core/src/test/org/apache/lucene/search/TestConjunctions.java
@@ -109,18 +109,8 @@
     }
 
     @Override
-    public ExactSimScorer exactSimScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
-      return new ExactSimScorer() {
-        @Override
-        public float score(int doc, int freq) {
-          return freq;
-        }
-      };
-    }
-
-    @Override
-    public SloppySimScorer sloppySimScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
-      return new SloppySimScorer() {
+    public SimScorer simScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
+      return new SimScorer() {
         @Override
         public float score(int doc, float freq) {
           return freq;
diff --git a/lucene/core/src/test/org/apache/lucene/search/TestDocValuesScoring.java b/lucene/core/src/test/org/apache/lucene/search/TestDocValuesScoring.java
index 5df6a43..e3a5369 100644
--- a/lucene/core/src/test/org/apache/lucene/search/TestDocValuesScoring.java
+++ b/lucene/core/src/test/org/apache/lucene/search/TestDocValuesScoring.java
@@ -156,34 +156,11 @@
     }
 
     @Override
-    public ExactSimScorer exactSimScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
-      final ExactSimScorer sub = sim.exactSimScorer(stats, context);
-      final FieldCache.Floats values = FieldCache.DEFAULT.getFloats(context.reader(), boostField, false);
-
-      return new ExactSimScorer() {
-        @Override
-        public float score(int doc, int freq) {
-          return values.get(doc) * sub.score(doc, freq);
-        }
-
-        @Override
-        public Explanation explain(int doc, Explanation freq) {
-          Explanation boostExplanation = new Explanation(values.get(doc), "indexDocValue(" + boostField + ")");
-          Explanation simExplanation = sub.explain(doc, freq);
-          Explanation expl = new Explanation(boostExplanation.getValue() * simExplanation.getValue(), "product of:");
-          expl.addDetail(boostExplanation);
-          expl.addDetail(simExplanation);
-          return expl;
-        }
-      };
-    }
-
-    @Override
-    public SloppySimScorer sloppySimScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
-      final SloppySimScorer sub = sim.sloppySimScorer(stats, context);
+    public SimScorer simScorer(SimWeight stats, AtomicReaderContext context) throws IOException {
+      final SimScorer sub = sim.simScorer(stats, context);
       final FieldCache.Floats values = FieldCache.DEFAULT.getFloats(context.reader(), boostField, false);
       
-      return new SloppySimScorer() {
+      return new SimScorer() {
         @Override
         public float score(int doc, float freq) {
           return values.get(doc) * sub.score(doc, freq);
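The SimScorer hunks in this and the neighboring test files track LUCENE-4933: ExactSimScorer and SloppySimScorer collapse into a single SimScorer, so per-document boosting only needs one delegating wrapper instead of two. A hedged sketch of such a wrapper; the slop/payload delegation assumes the abstract methods SimScorer carries over from the former SloppySimScorer, and boostValue stands in for the per-document boost source used in the hunk above:

    // Illustrative only; `sub` would come from sim.simScorer(stats, context) as in the hunk above.
    static SimScorer boosted(final SimScorer sub, final float boostValue) {
      return new SimScorer() {
        @Override
        public float score(int doc, float freq) {
          return boostValue * sub.score(doc, freq);   // scale the wrapped score
        }
        @Override
        public float computeSlopFactor(int distance) {
          return sub.computeSlopFactor(distance);     // delegate sloppy-phrase handling
        }
        @Override
        public float computePayloadFactor(int doc, int start, int end, BytesRef payload) {
          return sub.computePayloadFactor(doc, start, end, payload);
        }
      };
    }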
diff --git a/lucene/core/src/test/org/apache/lucene/search/TestMinShouldMatch2.java b/lucene/core/src/test/org/apache/lucene/search/TestMinShouldMatch2.java
index 2d294c7..57db46a 100644
--- a/lucene/core/src/test/org/apache/lucene/search/TestMinShouldMatch2.java
+++ b/lucene/core/src/test/org/apache/lucene/search/TestMinShouldMatch2.java
@@ -37,7 +37,7 @@
 import org.apache.lucene.index.TermContext;
 import org.apache.lucene.search.BooleanQuery.BooleanWeight;
 import org.apache.lucene.search.similarities.DefaultSimilarity;
-import org.apache.lucene.search.similarities.Similarity.ExactSimScorer;
+import org.apache.lucene.search.similarities.Similarity.SimScorer;
 import org.apache.lucene.search.similarities.Similarity.SimWeight;
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.util.BytesRef;
@@ -274,7 +274,7 @@
     final int maxDoc;
 
     final Set<Long> ords = new HashSet<Long>();
-    final ExactSimScorer[] sims;
+    final SimScorer[] sims;
     final int minNrShouldMatch;
     
     double score = Float.NaN;
@@ -285,7 +285,7 @@
       this.maxDoc = reader.maxDoc();
       BooleanQuery bq = (BooleanQuery) weight.getQuery();
       this.minNrShouldMatch = bq.getMinimumNumberShouldMatch();
-      this.sims = new ExactSimScorer[(int)dv.getValueCount()];
+      this.sims = new SimScorer[(int)dv.getValueCount()];
       for (BooleanClause clause : bq.getClauses()) {
         assert !clause.isProhibited();
         assert !clause.isRequired();
@@ -300,7 +300,7 @@
                         searcher.termStatistics(term, context));
           w.getValueForNormalization(); // ignored
           w.normalize(1F, 1F);
-          sims[(int)ord] = weight.similarity.exactSimScorer(w, reader.getContext());
+          sims[(int)ord] = weight.similarity.simScorer(w, reader.getContext());
         }
       }
     }
diff --git a/lucene/core/src/test/org/apache/lucene/search/spans/JustCompileSearchSpans.java b/lucene/core/src/test/org/apache/lucene/search/spans/JustCompileSearchSpans.java
index 9cb4376..177874f 100644
--- a/lucene/core/src/test/org/apache/lucene/search/spans/JustCompileSearchSpans.java
+++ b/lucene/core/src/test/org/apache/lucene/search/spans/JustCompileSearchSpans.java
@@ -148,7 +148,7 @@
   static final class JustCompileSpanScorer extends SpanScorer {
 
     protected JustCompileSpanScorer(Spans spans, Weight weight,
-        Similarity.SloppySimScorer docScorer) throws IOException {
+        Similarity.SimScorer docScorer) throws IOException {
       super(spans, weight, docScorer);
     }
 
diff --git a/lucene/core/src/test/org/apache/lucene/search/spans/TestPayloadSpans.java b/lucene/core/src/test/org/apache/lucene/search/spans/TestPayloadSpans.java
index 40172ba..f4b3535 100644
--- a/lucene/core/src/test/org/apache/lucene/search/spans/TestPayloadSpans.java
+++ b/lucene/core/src/test/org/apache/lucene/search/spans/TestPayloadSpans.java
@@ -379,11 +379,11 @@
     PayloadSpanUtil psu = new PayloadSpanUtil(searcher.getTopReaderContext());
     
     Collection<byte[]> payloads = psu.getPayloadsForQuery(new TermQuery(new Term(PayloadHelper.FIELD, "rr")));
-    if(VERBOSE)
+    if(VERBOSE) {
       System.out.println("Num payloads:" + payloads.size());
-    for (final byte [] bytes : payloads) {
-      if(VERBOSE)
+      for (final byte [] bytes : payloads) {
         System.out.println(new String(bytes, "UTF-8"));
+      }
     }
     reader.close();
     directory.close();
@@ -451,12 +451,12 @@
         System.out.println("\nSpans Dump --");
       if (spans.isPayloadAvailable()) {
         Collection<byte[]> payload = spans.getPayload();
-        if(VERBOSE)
+        if(VERBOSE) {
           System.out.println("payloads for span:" + payload.size());
-        for (final byte [] bytes : payload) {
-          if(VERBOSE)
+          for (final byte [] bytes : payload) {
             System.out.println("doc:" + spans.doc() + " s:" + spans.start() + " e:" + spans.end() + " "
               + new String(bytes, "UTF-8"));
+          }
         }
 
         assertEquals(numPayloads[cnt],payload.size());
diff --git a/lucene/core/src/test/org/apache/lucene/store/TestMockDirectoryWrapper.java b/lucene/core/src/test/org/apache/lucene/store/TestMockDirectoryWrapper.java
index 5a1e3a9..347fbda 100644
--- a/lucene/core/src/test/org/apache/lucene/store/TestMockDirectoryWrapper.java
+++ b/lucene/core/src/test/org/apache/lucene/store/TestMockDirectoryWrapper.java
@@ -55,7 +55,7 @@
   public void testDiskFull() throws IOException {
     // test writeBytes
     MockDirectoryWrapper dir = newMockDirectory();
-    dir.setMaxSizeInBytes(2);
+    dir.setMaxSizeInBytes(3);
     final byte[] bytes = new byte[] { 1, 2};
     IndexOutput out = dir.createOutput("foo", IOContext.DEFAULT);
     out.writeBytes(bytes, bytes.length); // first write should succeed
@@ -73,7 +73,7 @@
     
     // test copyBytes
     dir = newMockDirectory();
-    dir.setMaxSizeInBytes(2);
+    dir.setMaxSizeInBytes(3);
     out = dir.createOutput("foo", IOContext.DEFAULT);
     out.copyBytes(new ByteArrayDataInput(bytes), bytes.length); // first copy should succeed
     // flush() to ensure the written bytes are not buffered and counted
diff --git a/lucene/core/src/test/org/apache/lucene/util/TestTimSorter.java b/lucene/core/src/test/org/apache/lucene/util/TestTimSorter.java
index df18996..456d36d 100644
--- a/lucene/core/src/test/org/apache/lucene/util/TestTimSorter.java
+++ b/lucene/core/src/test/org/apache/lucene/util/TestTimSorter.java
@@ -25,7 +25,7 @@
 
   @Override
   public Sorter newSorter(Entry[] arr) {
-    return new ArrayTimSorter<Entry>(arr, ArrayUtil.<Entry>naturalComparator(), random().nextInt(arr.length));
+    return new ArrayTimSorter<Entry>(arr, ArrayUtil.<Entry>naturalComparator(), _TestUtil.nextInt(random(), 0, arr.length));
   }
 
 }
diff --git a/lucene/core/src/test/org/apache/lucene/util/fst/Test2BFST.java b/lucene/core/src/test/org/apache/lucene/util/fst/Test2BFST.java
index 701e921..a149ed6 100644
--- a/lucene/core/src/test/org/apache/lucene/util/fst/Test2BFST.java
+++ b/lucene/core/src/test/org/apache/lucene/util/fst/Test2BFST.java
@@ -34,7 +34,7 @@
 import org.junit.Ignore;
 import com.carrotsearch.randomizedtesting.annotations.TimeoutSuite;
 
-@Ignore("Requires tons of heap to run (10G works)")
+@Ignore("Requires tons of heap to run (420G works)")
 @TimeoutSuite(millis = 100 * TimeUnits.HOUR)
 public class Test2BFST extends LuceneTestCase {
 
@@ -50,12 +50,12 @@
     for(int doPackIter=0;doPackIter<2;doPackIter++) {
       boolean doPack = doPackIter == 1;
 
-      // Build FST w/ NoOutputs and stop when nodeCount > 3B
+      // Build FST w/ NoOutputs and stop when nodeCount > 2.2B
       if (!doPack) {
         System.out.println("\nTEST: 3B nodes; doPack=false output=NO_OUTPUTS");
         Outputs<Object> outputs = NoOutputs.getSingleton();
         Object NO_OUTPUT = outputs.getNoOutput();
-        final Builder<Object> b = new Builder<Object>(FST.INPUT_TYPE.BYTE1, 0, 0, false, false, Integer.MAX_VALUE, outputs,
+        final Builder<Object> b = new Builder<Object>(FST.INPUT_TYPE.BYTE1, 0, 0, true, true, Integer.MAX_VALUE, outputs,
                                                       null, doPack, PackedInts.COMPACT, true, 15);
 
         int count = 0;
@@ -72,7 +72,7 @@
           if (count % 100000 == 0) {
             System.out.println(count + ": " + b.fstSizeInBytes() + " bytes; " + b.getTotStateCount() + " nodes");
           }
-          if (b.getTotStateCount() > LIMIT) {
+          if (b.getTotStateCount() > Integer.MAX_VALUE + 100L * 1024 * 1024) {
             break;
           }
           nextInput(r, ints2);
diff --git a/lucene/core/src/test/org/apache/lucene/util/fst/TestFSTs.java b/lucene/core/src/test/org/apache/lucene/util/fst/TestFSTs.java
index fd7e8ac..fe21e0a 100644
--- a/lucene/core/src/test/org/apache/lucene/util/fst/TestFSTs.java
+++ b/lucene/core/src/test/org/apache/lucene/util/fst/TestFSTs.java
@@ -126,7 +126,7 @@
 
       // FST ord pos int
       {
-        final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+        final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
         final List<FSTTester.InputOutput<Long>> pairs = new ArrayList<FSTTester.InputOutput<Long>>(terms2.length);
         for(int idx=0;idx<terms2.length;idx++) {
           pairs.add(new FSTTester.InputOutput<Long>(terms2[idx], (long) idx));
@@ -171,7 +171,7 @@
 
     // PositiveIntOutput (ord)
     {
-      final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+      final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
       final List<FSTTester.InputOutput<Long>> pairs = new ArrayList<FSTTester.InputOutput<Long>>(terms.length);
       for(int idx=0;idx<terms.length;idx++) {
         pairs.add(new FSTTester.InputOutput<Long>(terms[idx], (long) idx));
@@ -181,8 +181,7 @@
 
     // PositiveIntOutput (random monotonically increasing positive number)
     {
-      final boolean doShare = random().nextBoolean();
-      final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(doShare);
+      final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
       final List<FSTTester.InputOutput<Long>> pairs = new ArrayList<FSTTester.InputOutput<Long>>(terms.length);
       long lastOutput = 0;
       for(int idx=0;idx<terms.length;idx++) {
@@ -190,12 +189,12 @@
         lastOutput = value;
         pairs.add(new FSTTester.InputOutput<Long>(terms[idx], value));
       }
-      new FSTTester<Long>(random(), dir, inputMode, pairs, outputs, doShare).doTest(true);
+      new FSTTester<Long>(random(), dir, inputMode, pairs, outputs, true).doTest(true);
     }
 
     // PositiveIntOutput (random positive number)
     {
-      final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(random().nextBoolean());
+      final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
       final List<FSTTester.InputOutput<Long>> pairs = new ArrayList<FSTTester.InputOutput<Long>>(terms.length);
       for(int idx=0;idx<terms.length;idx++) {
         pairs.add(new FSTTester.InputOutput<Long>(terms[idx], _TestUtil.nextLong(random(), 0, Long.MAX_VALUE)));
@@ -205,8 +204,8 @@
 
     // Pair<ord, (random monotonically increasing positive number>
     {
-      final PositiveIntOutputs o1 = PositiveIntOutputs.getSingleton(random().nextBoolean());
-      final PositiveIntOutputs o2 = PositiveIntOutputs.getSingleton(random().nextBoolean());
+      final PositiveIntOutputs o1 = PositiveIntOutputs.getSingleton();
+      final PositiveIntOutputs o2 = PositiveIntOutputs.getSingleton();
       final PairOutputs<Long,Long> outputs = new PairOutputs<Long,Long>(o1, o2);
       final List<FSTTester.InputOutput<PairOutputs.Pair<Long,Long>>> pairs = new ArrayList<FSTTester.InputOutput<PairOutputs.Pair<Long,Long>>>(terms.length);
       long lastOutput = 0;
@@ -306,7 +305,7 @@
     }
     IndexReader r = DirectoryReader.open(writer, true);
     writer.close();
-    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(random().nextBoolean());
+    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
 
     final boolean doRewrite = random().nextBoolean();
 
@@ -653,8 +652,8 @@
 
     if (storeOrds && storeDocFreqs) {
       // Store both ord & docFreq:
-      final PositiveIntOutputs o1 = PositiveIntOutputs.getSingleton(true);
-      final PositiveIntOutputs o2 = PositiveIntOutputs.getSingleton(false);
+      final PositiveIntOutputs o1 = PositiveIntOutputs.getSingleton();
+      final PositiveIntOutputs o2 = PositiveIntOutputs.getSingleton();
       final PairOutputs<Long,Long> outputs = new PairOutputs<Long,Long>(o1, o2);
       new VisitTerms<PairOutputs.Pair<Long,Long>>(dirOut, wordsFileIn, inputMode, prune, outputs, doPack, noArcArrays) {
         Random rand;
@@ -669,7 +668,7 @@
       }.run(limit, verify, false);
     } else if (storeOrds) {
       // Store only ords
-      final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+      final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
       new VisitTerms<Long>(dirOut, wordsFileIn, inputMode, prune, outputs, doPack, noArcArrays) {
         @Override
         public Long getOutput(IntsRef input, int ord) {
@@ -678,7 +677,7 @@
       }.run(limit, verify, true);
     } else if (storeDocFreqs) {
       // Store only docFreq
-      final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(false);
+      final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
       new VisitTerms<Long>(dirOut, wordsFileIn, inputMode, prune, outputs, doPack, noArcArrays) {
         Random rand;
         @Override
@@ -781,7 +780,7 @@
     // smaller FST if the outputs grow monotonically.  But
     // if numbers are "random", false should give smaller
     // final size:
-    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
 
     // Build an FST mapping BytesRef -> Long
     final Builder<Long> builder = new Builder<Long>(FST.INPUT_TYPE.BYTE1, outputs);
@@ -1100,7 +1099,7 @@
   }
 
   public void testFinalOutputOnEndState() throws Exception {
-    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
 
     final Builder<Long> builder = new Builder<Long>(FST.INPUT_TYPE.BYTE4, 2, 0, true, true, Integer.MAX_VALUE, outputs, null, random().nextBoolean(), PackedInts.DEFAULT, true, 15);
     builder.add(Util.toUTF32("stat", new IntsRef()), 17L);
@@ -1115,7 +1114,7 @@
   }
 
   public void testInternalFinalState() throws Exception {
-    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
     final boolean willRewrite = random().nextBoolean();
     final Builder<Long> builder = new Builder<Long>(FST.INPUT_TYPE.BYTE1, 0, 0, true, true, Integer.MAX_VALUE, outputs, null, willRewrite, PackedInts.DEFAULT, true, 15);
     builder.add(Util.toIntsRef(new BytesRef("stat"), new IntsRef()), outputs.getNoOutput());
@@ -1136,7 +1135,7 @@
   // Make sure raw FST can differentiate between final vs
   // non-final end nodes
   public void testNonFinalStopNode() throws Exception {
-    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
     final Long nothing = outputs.getNoOutput();
     final Builder<Long> b = new Builder<Long>(FST.INPUT_TYPE.BYTE1, outputs);
 
@@ -1216,7 +1215,7 @@
   };
 
   public void testShortestPaths() throws Exception {
-    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
     final Builder<Long> builder = new Builder<Long>(FST.INPUT_TYPE.BYTE1, outputs);
 
     final IntsRef scratch = new IntsRef();
@@ -1258,8 +1257,8 @@
   public void testShortestPathsWFST() throws Exception {
 
     PairOutputs<Long,Long> outputs = new PairOutputs<Long,Long>(
-        PositiveIntOutputs.getSingleton(true), // weight
-        PositiveIntOutputs.getSingleton(true)  // output
+        PositiveIntOutputs.getSingleton(), // weight
+        PositiveIntOutputs.getSingleton()  // output
     );
     
     final Builder<Pair<Long,Long>> builder = new Builder<Pair<Long,Long>>(FST.INPUT_TYPE.BYTE1, outputs);
@@ -1301,7 +1300,7 @@
     final TreeMap<String,Long> slowCompletor = new TreeMap<String,Long>();
     final TreeSet<String> allPrefixes = new TreeSet<String>();
     
-    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+    final PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
     final Builder<Long> builder = new Builder<Long>(FST.INPUT_TYPE.BYTE1, outputs);
     final IntsRef scratch = new IntsRef();
     
@@ -1416,8 +1415,8 @@
     final TreeSet<String> allPrefixes = new TreeSet<String>();
     
     PairOutputs<Long,Long> outputs = new PairOutputs<Long,Long>(
-        PositiveIntOutputs.getSingleton(true), // weight
-        PositiveIntOutputs.getSingleton(true)  // output
+        PositiveIntOutputs.getSingleton(), // weight
+        PositiveIntOutputs.getSingleton()  // output
     );
     final Builder<Pair<Long,Long>> builder = new Builder<Pair<Long,Long>>(FST.INPUT_TYPE.BYTE1, outputs);
     final IntsRef scratch = new IntsRef();
diff --git a/lucene/core/src/test/org/apache/lucene/util/junitcompat/TestFailOnFieldCacheInsanity.java b/lucene/core/src/test/org/apache/lucene/util/junitcompat/TestFailOnFieldCacheInsanity.java
index 202e9fe..20c8e47 100644
--- a/lucene/core/src/test/org/apache/lucene/util/junitcompat/TestFailOnFieldCacheInsanity.java
+++ b/lucene/core/src/test/org/apache/lucene/util/junitcompat/TestFailOnFieldCacheInsanity.java
@@ -69,6 +69,7 @@
     for(Failure f : r.getFailures()) {
       if (f.getMessage().indexOf("Insane") != -1) {
         insane = true;
+        break;
       }
     }
     Assert.assertTrue(insane);
diff --git a/lucene/core/src/test/org/apache/lucene/util/packed/TestPackedInts.java b/lucene/core/src/test/org/apache/lucene/util/packed/TestPackedInts.java
index 10e7b01..dbd65da 100644
--- a/lucene/core/src/test/org/apache/lucene/util/packed/TestPackedInts.java
+++ b/lucene/core/src/test/org/apache/lucene/util/packed/TestPackedInts.java
@@ -659,6 +659,61 @@
     assertEquals(1 << 10, wrt.get(valueCount - 1));
   }
 
+  public void testPagedGrowableWriter() {
+    int pageSize = 1 << (_TestUtil.nextInt(random(), 6, 30));
+    // supports 0 values?
+    PagedGrowableWriter writer = new PagedGrowableWriter(0, pageSize, _TestUtil.nextInt(random(), 1, 64), random().nextFloat());
+    assertEquals(0, writer.size());
+
+    // compare against AppendingLongBuffer
+    AppendingLongBuffer buf = new AppendingLongBuffer();
+    int size = random().nextInt(1000000);
+    long max = 5;
+    for (int i = 0; i < size; ++i) {
+      buf.add(_TestUtil.nextLong(random(), 0, max));
+      if (rarely()) {
+        max = PackedInts.maxValue(rarely() ? _TestUtil.nextInt(random(), 0, 63) : _TestUtil.nextInt(random(), 0, 31));
+      }
+    }
+    writer = new PagedGrowableWriter(size, pageSize, _TestUtil.nextInt(random(), 1, 64), random().nextFloat());
+    assertEquals(size, writer.size());
+    for (int i = size - 1; i >= 0; --i) {
+      writer.set(i, buf.get(i));
+    }
+    for (int i = 0; i < size; ++i) {
+      assertEquals(buf.get(i), writer.get(i));
+    }
+
+    // test copy
+    PagedGrowableWriter copy = writer.resize(_TestUtil.nextLong(random(), writer.size() / 2, writer.size() * 3 / 2));
+    for (long i = 0; i < copy.size(); ++i) {
+      if (i < writer.size()) {
+        assertEquals(writer.get(i), copy.get(i));
+      } else {
+        assertEquals(0, copy.get(i));
+      }
+    }
+  }
+
+  // memory hole
+  @Ignore
+  public void testPagedGrowableWriterOverflow() {
+    final long size = _TestUtil.nextLong(random(), 2 * (long) Integer.MAX_VALUE, 3 * (long) Integer.MAX_VALUE);
+    final int pageSize = 1 << (_TestUtil.nextInt(random(), 16, 30));
+    final PagedGrowableWriter writer = new PagedGrowableWriter(size, pageSize, 1, random().nextFloat());
+    final long index = _TestUtil.nextLong(random(), (long) Integer.MAX_VALUE, size - 1);
+    writer.set(index, 2);
+    assertEquals(2, writer.get(index));
+    for (int i = 0; i < 1000000; ++i) {
+      final long idx = _TestUtil.nextLong(random(), 0, size);
+      if (idx == index) {
+        assertEquals(2, writer.get(idx));
+      } else {
+        assertEquals(0, writer.get(idx));
+      }
+    }
+  }
+
   public void testSave() throws IOException {
     final int valueCount = _TestUtil.nextInt(random(), 1, 2048);
     for (int bpv = 1; bpv <= 64; ++bpv) {
@@ -808,13 +863,15 @@
     final long[] arr = new long[RandomInts.randomIntBetween(random(), 1, 1000000)];
     for (int bpv : new int[] {0, 1, 63, 64, RandomInts.randomIntBetween(random(), 2, 62)}) {
       for (boolean monotonic : new boolean[] {true, false}) {
+        final int pageSize = 1 << _TestUtil.nextInt(random(), 6, 20);
+        final int initialPageCount = _TestUtil.nextInt(random(), 0, 16);
         AbstractAppendingLongBuffer buf;
         final int inc;
         if (monotonic) {
-          buf = new MonotonicAppendingLongBuffer();
+          buf = new MonotonicAppendingLongBuffer(initialPageCount, pageSize);
           inc = _TestUtil.nextInt(random(), -1000, 1000);
         } else {
-          buf = new AppendingLongBuffer();
+          buf = new AppendingLongBuffer(initialPageCount, pageSize);
           inc = 0;
         }
         if (bpv == 0) {
diff --git a/lucene/facet/src/java/org/apache/lucene/facet/range/RangeAccumulator.java b/lucene/facet/src/java/org/apache/lucene/facet/range/RangeAccumulator.java
index 1ab3b73..5cdf33e 100644
--- a/lucene/facet/src/java/org/apache/lucene/facet/range/RangeAccumulator.java
+++ b/lucene/facet/src/java/org/apache/lucene/facet/range/RangeAccumulator.java
@@ -64,7 +64,7 @@
         throw new IllegalArgumentException("only flat (dimension only) CategoryPath is allowed");
       }
 
-      RangeFacetRequest<?> rfr = (RangeFacetRequest) fr;
+      RangeFacetRequest<?> rfr = (RangeFacetRequest<?>) fr;
 
       requests.add(new RangeSet(rfr.ranges, rfr.categoryPath.components[0]));
     }
@@ -86,8 +86,11 @@
       RangeSet ranges = requests.get(i);
 
       int[] counts = new int[ranges.ranges.length];
-      for(MatchingDocs hits : matchingDocs) {
+      for (MatchingDocs hits : matchingDocs) {
         NumericDocValues ndv = hits.context.reader().getNumericDocValues(ranges.field);
+        if (ndv == null) {
+          continue; // no numeric values for this field in this reader
+        }
         final int length = hits.bits.length();
         int doc = 0;
         while (doc < length && (doc = hits.bits.nextSetBit(doc)) != -1) {
diff --git a/lucene/facet/src/java/org/apache/lucene/facet/sampling/SampleFixer.java b/lucene/facet/src/java/org/apache/lucene/facet/sampling/SampleFixer.java
index a3305f8..d72752c 100644
--- a/lucene/facet/src/java/org/apache/lucene/facet/sampling/SampleFixer.java
+++ b/lucene/facet/src/java/org/apache/lucene/facet/sampling/SampleFixer.java
@@ -3,6 +3,7 @@
 import java.io.IOException;
 
 import org.apache.lucene.facet.search.FacetResult;
+import org.apache.lucene.facet.search.FacetResultNode;
 import org.apache.lucene.facet.search.ScoredDocIDs;
 
 /*
@@ -23,22 +24,50 @@
  */
 
 /**
- * Fixer of sample facet accumulation results
+ * Fixer of sample facet accumulation results.
  * 
  * @lucene.experimental
  */
-public interface SampleFixer {
+public abstract class SampleFixer {
   
   /**
    * Alter the input result, fixing it to account for the sampling. This
-   * implementation can compute accurate or estimated counts for the sampled facets. 
-   * For example, a faster correction could just multiply by a compensating factor.
+   * implementation can compute accurate or estimated counts for the sampled
+   * facets. For example, a faster correction could just multiply by a
+   * compensating factor.
    * 
    * @param origDocIds
    *          full set of matching documents.
    * @param fres
    *          sample result to be fixed.
-   * @throws IOException If there is a low-level I/O error.
+   * @throws IOException
+   *           If there is a low-level I/O error.
    */
-  public void fixResult(ScoredDocIDs origDocIds, FacetResult fres) throws IOException; 
+  public void fixResult(ScoredDocIDs origDocIds, FacetResult fres, double samplingRatio) throws IOException {
+    FacetResultNode topRes = fres.getFacetResultNode();
+    fixResultNode(topRes, origDocIds, samplingRatio);
+  }
+  
+  /**
+   * Fix result node count, and, recursively, fix all its children
+   * 
+   * @param facetResNode
+   *          result node to be fixed
+   * @param docIds
+   *          docids in effect
+   * @throws IOException
+   *           If there is a low-level I/O error.
+   */
+  protected void fixResultNode(FacetResultNode facetResNode, ScoredDocIDs docIds, double samplingRatio) 
+      throws IOException {
+    singleNodeFix(facetResNode, docIds, samplingRatio);
+    for (FacetResultNode frn : facetResNode.subResults) {
+      fixResultNode(frn, docIds, samplingRatio);
+    }
+  }
+  
+  /** Fix the given node's value. */
+  protected abstract void singleNodeFix(FacetResultNode facetResNode, ScoredDocIDs docIds, double samplingRatio) 
+      throws IOException;
+  
 }
\ No newline at end of file
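SampleFixer is now an abstract class: fixResult() walks the whole result tree and delegates the per-node work to the abstract singleNodeFix(). As a rough illustration only (not part of this patch), a minimal amortized fixer could look like the sketch below; RatioSampleFixer is a hypothetical name, and the sketch assumes FacetResultNode.value is a writable double, as the exact-recounting fixer relies on.

    import java.io.IOException;

    import org.apache.lucene.facet.sampling.SampleFixer;
    import org.apache.lucene.facet.search.FacetResultNode;
    import org.apache.lucene.facet.search.ScoredDocIDs;

    /** Hypothetical amortized fixer: scales each sampled count by 1/samplingRatio. */
    class RatioSampleFixer extends SampleFixer {
      @Override
      protected void singleNodeFix(FacetResultNode facetResNode, ScoredDocIDs docIds,
          double samplingRatio) throws IOException {
        // fixResult() calls this once per node in the result tree; here we simply
        // scale the sampled value up to estimate the full-set count.
        facetResNode.value /= samplingRatio;
      }
    }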
diff --git a/lucene/facet/src/java/org/apache/lucene/facet/sampling/Sampler.java b/lucene/facet/src/java/org/apache/lucene/facet/sampling/Sampler.java
index 85306b4..ec39ef7 100644
--- a/lucene/facet/src/java/org/apache/lucene/facet/sampling/Sampler.java
+++ b/lucene/facet/src/java/org/apache/lucene/facet/sampling/Sampler.java
@@ -12,7 +12,6 @@
 import org.apache.lucene.facet.search.FacetResultNode;
 import org.apache.lucene.facet.search.ScoredDocIDs;
 import org.apache.lucene.facet.taxonomy.TaxonomyReader;
-import org.apache.lucene.index.IndexReader;
 
 /*
  * Licensed to the Apache Software Foundation (ASF) under one or more
@@ -111,16 +110,6 @@
       throws IOException;
 
   /**
-   * Get a fixer of sample facet accumulation results. Default implementation
-   * returns a <code>TakmiSampleFixer</code> which is adequate only for
-   * counting. For any other accumulator, provide a different fixer.
-   */
-  public SampleFixer getSampleFixer(IndexReader indexReader, TaxonomyReader taxonomyReader,
-      FacetSearchParams searchParams) {
-    return new TakmiSampleFixer(indexReader, taxonomyReader, searchParams);
-  }
-  
-  /**
    * Result of sample computation
    */
   public final static class SampleResult {
@@ -220,7 +209,7 @@
       super(orig.categoryPath, num);
       this.orig = orig;
       setDepth(orig.getDepth());
-      setNumLabel(orig.getNumLabel());
+      setNumLabel(0); // don't label anything as we're over-sampling
       setResultMode(orig.getResultMode());
       setSortOrder(orig.getSortOrder());
     }
diff --git a/lucene/facet/src/java/org/apache/lucene/facet/sampling/SamplingAccumulator.java b/lucene/facet/src/java/org/apache/lucene/facet/sampling/SamplingAccumulator.java
index 54329e6..2a04394 100644
--- a/lucene/facet/src/java/org/apache/lucene/facet/sampling/SamplingAccumulator.java
+++ b/lucene/facet/src/java/org/apache/lucene/facet/sampling/SamplingAccumulator.java
@@ -79,30 +79,43 @@
   public List<FacetResult> accumulate(ScoredDocIDs docids) throws IOException {
     // Replacing the original searchParams with the over-sampled
     FacetSearchParams original = searchParams;
-    searchParams = sampler.overSampledSearchParams(original);
+    SampleFixer samplerFixer = sampler.samplingParams.getSampleFixer();
+    final boolean shouldOversample = sampler.samplingParams.shouldOverSample();
+    if (shouldOversample) {
+      searchParams = sampler.overSampledSearchParams(original);
+    }
     
     List<FacetResult> sampleRes = super.accumulate(docids);
     
-    List<FacetResult> fixedRes = new ArrayList<FacetResult>();
+    List<FacetResult> results = new ArrayList<FacetResult>();
     for (FacetResult fres : sampleRes) {
       // for sure fres is not null because this is guaranteed by the delegee.
       PartitionsFacetResultsHandler frh = createFacetResultsHandler(fres.getFacetRequest());
-      // fix the result of current request
-      sampler.getSampleFixer(indexReader, taxonomyReader, searchParams).fixResult(docids, fres);
+      if (samplerFixer != null) {
+        // fix the result of current request
+        samplerFixer.fixResult(docids, fres, samplingRatio);
+        
+        fres = frh.rearrangeFacetResult(fres); // let delegee's handler do any arranging it needs to
+
+        if (shouldOversample) {
+          // Using the sampler to trim the extra (over-sampled) results
+          fres = sampler.trimResult(fres);
+        }
+      }
       
-      fres = frh.rearrangeFacetResult(fres); // let delegee's handler do any arranging it needs to
-
-      // Using the sampler to trim the extra (over-sampled) results
-      fres = sampler.trimResult(fres);
-
       // final labeling if allowed (because labeling is a costly operation)
-      frh.labelResult(fres);
-      fixedRes.add(fres); // add to final results
+      if (fres.getFacetResultNode().ordinal == TaxonomyReader.INVALID_ORDINAL) {
+        // category does not exist, add an empty result
+        results.add(emptyResult(fres.getFacetResultNode().ordinal, fres.getFacetRequest()));
+      } else {
+        frh.labelResult(fres);
+        results.add(fres);
+      }
     }
     
     searchParams = original; // Back to original params
     
-    return fixedRes; 
+    return results; 
   }
 
   @Override
diff --git a/lucene/facet/src/java/org/apache/lucene/facet/sampling/SamplingParams.java b/lucene/facet/src/java/org/apache/lucene/facet/sampling/SamplingParams.java
index 4366d57..464b593 100644
--- a/lucene/facet/src/java/org/apache/lucene/facet/sampling/SamplingParams.java
+++ b/lucene/facet/src/java/org/apache/lucene/facet/sampling/SamplingParams.java
@@ -28,7 +28,7 @@
    * Default factor by which more results are requested over the sample set.
    * @see SamplingParams#getOversampleFactor()
    */
-  public static final double DEFAULT_OVERSAMPLE_FACTOR = 2d;
+  public static final double DEFAULT_OVERSAMPLE_FACTOR = 1d;
   
   /**
    * Default ratio between size of sample to original size of document set.
@@ -59,6 +59,8 @@
   private double sampleRatio = DEFAULT_SAMPLE_RATIO;
   private int samplingThreshold = DEFAULT_SAMPLING_THRESHOLD;
   private double oversampleFactor = DEFAULT_OVERSAMPLE_FACTOR;
+
+  private SampleFixer sampleFixer = null;
   
   /**
    * Return the maxSampleSize.
@@ -166,4 +168,29 @@
     this.oversampleFactor = oversampleFactor;
   }
 
-}
\ No newline at end of file
+  /**
+   * @return the {@link SampleFixer} to be used while fixing the sampled results;
+   *         if <code>null</code>, no fixing will be performed
+   */
+  public SampleFixer getSampleFixer() {
+    return sampleFixer;
+  }
+
+  /**
+   * Set a {@link SampleFixer} to be used while fixing the sampled results.
+   * {@code null} means no fixing will be performed
+   */
+  public void setSampleFixer(SampleFixer sampleFixer) {
+    this.sampleFixer = sampleFixer;
+  }
+
+  /**
+   * Returns whether over-sampling should be done. By default returns
+   * {@code true} when {@link #getSampleFixer()} is not {@code null} and
+   * {@link #getOversampleFactor()} &gt; 1, {@code false} otherwise.
+   */
+  public boolean shouldOverSample() {
+    return sampleFixer != null && oversampleFactor > 1d;
+  }
+  
+}
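A minimal sketch of the new SamplingParams knobs added above (the class name is illustrative). It only uses the accessors introduced in this hunk and shows that over-sampling stays off unless a fixer is also set, since the default oversample factor is now 1.

    import org.apache.lucene.facet.sampling.SamplingParams;

    final class SamplingParamsDefaults {
      static void show() {
        SamplingParams params = new SamplingParams();
        // By default no fixer is set, so sampled counts are returned unfixed...
        assert params.getSampleFixer() == null;
        // ...and no over-sampling happens: it requires both a fixer and a factor > 1.
        params.setOversampleFactor(3d);
        assert !params.shouldOverSample();
      }
    }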
diff --git a/lucene/facet/src/java/org/apache/lucene/facet/sampling/SamplingWrapper.java b/lucene/facet/src/java/org/apache/lucene/facet/sampling/SamplingWrapper.java
index 829c671..a6cdeeb 100644
--- a/lucene/facet/src/java/org/apache/lucene/facet/sampling/SamplingWrapper.java
+++ b/lucene/facet/src/java/org/apache/lucene/facet/sampling/SamplingWrapper.java
@@ -10,6 +10,7 @@
 import org.apache.lucene.facet.search.FacetResult;
 import org.apache.lucene.facet.search.ScoredDocIDs;
 import org.apache.lucene.facet.search.StandardFacetsAccumulator;
+import org.apache.lucene.facet.taxonomy.TaxonomyReader;
 
 /*
  * Licensed to the Apache Software Foundation (ASF) under one or more
@@ -52,31 +53,48 @@
   public List<FacetResult> accumulate(ScoredDocIDs docids) throws IOException {
     // Replacing the original searchParams with the over-sampled (and without statistics-compute)
     FacetSearchParams original = delegee.searchParams;
-    delegee.searchParams = sampler.overSampledSearchParams(original);
+    boolean shouldOversample = sampler.samplingParams.shouldOverSample();
+   
+    if (shouldOversample) {
+      delegee.searchParams = sampler.overSampledSearchParams(original);
+    }
     
     SampleResult sampleSet = sampler.getSampleSet(docids);
 
     List<FacetResult> sampleRes = delegee.accumulate(sampleSet.docids);
 
-    List<FacetResult> fixedRes = new ArrayList<FacetResult>();
+    List<FacetResult> results = new ArrayList<FacetResult>();
+    SampleFixer sampleFixer = sampler.samplingParams.getSampleFixer();
+    
     for (FacetResult fres : sampleRes) {
       // for sure fres is not null because this is guaranteed by the delegee.
       PartitionsFacetResultsHandler frh = createFacetResultsHandler(fres.getFacetRequest());
-      // fix the result of current request
-      sampler.getSampleFixer(indexReader, taxonomyReader, searchParams).fixResult(docids, fres); 
-      fres = frh.rearrangeFacetResult(fres); // let delegee's handler do any
+      if (sampleFixer != null) {
+        // fix the result of current request
+        sampleFixer.fixResult(docids, fres, sampleSet.actualSampleRatio); 
+        fres = frh.rearrangeFacetResult(fres); // let delegee's handler do any
+      }
       
-      // Using the sampler to trim the extra (over-sampled) results
-      fres = sampler.trimResult(fres);
+      if (shouldOversample) {
+        // Using the sampler to trim the extra (over-sampled) results
+        fres = sampler.trimResult(fres);
+      }
       
       // final labeling if allowed (because labeling is a costly operation)
-      frh.labelResult(fres);
-      fixedRes.add(fres); // add to final results
+      if (fres.getFacetResultNode().ordinal == TaxonomyReader.INVALID_ORDINAL) {
+        // category does not exist, add an empty result
+        results.add(emptyResult(fres.getFacetResultNode().ordinal, fres.getFacetRequest()));
+      } else {
+        frh.labelResult(fres);
+        results.add(fres);
+      }
     }
 
-    delegee.searchParams = original; // Back to original params
+    if (shouldOversample) {
+      delegee.searchParams = original; // Back to original params
+    }
     
-    return fixedRes; 
+    return results; 
   }
 
   @Override
diff --git a/lucene/facet/src/java/org/apache/lucene/facet/sampling/TakmiSampleFixer.java b/lucene/facet/src/java/org/apache/lucene/facet/sampling/TakmiSampleFixer.java
index 83536e2..ade148c 100644
--- a/lucene/facet/src/java/org/apache/lucene/facet/sampling/TakmiSampleFixer.java
+++ b/lucene/facet/src/java/org/apache/lucene/facet/sampling/TakmiSampleFixer.java
@@ -2,21 +2,19 @@
 
 import java.io.IOException;
 
-import org.apache.lucene.index.IndexReader;
-import org.apache.lucene.index.MultiFields;
-import org.apache.lucene.index.Term;
-import org.apache.lucene.index.DocsEnum;
-import org.apache.lucene.search.DocIdSetIterator;
-import org.apache.lucene.util.Bits;
-
 import org.apache.lucene.facet.params.FacetSearchParams;
 import org.apache.lucene.facet.search.DrillDownQuery;
-import org.apache.lucene.facet.search.FacetResult;
 import org.apache.lucene.facet.search.FacetResultNode;
 import org.apache.lucene.facet.search.ScoredDocIDs;
 import org.apache.lucene.facet.search.ScoredDocIDsIterator;
 import org.apache.lucene.facet.taxonomy.CategoryPath;
 import org.apache.lucene.facet.taxonomy.TaxonomyReader;
+import org.apache.lucene.index.DocsEnum;
+import org.apache.lucene.index.IndexReader;
+import org.apache.lucene.index.MultiFields;
+import org.apache.lucene.index.Term;
+import org.apache.lucene.search.DocIdSetIterator;
+import org.apache.lucene.util.Bits;
 
 /*
  * Licensed to the Apache Software Foundation (ASF) under one or more
@@ -36,16 +34,21 @@
  */
 
 /**
- * Fix sampling results by counting the intersection between two lists: a
- * TermDocs (list of documents in a certain category) and a DocIdSetIterator
- * (list of documents matching the query).
- * 
+ * Fix sampling results to their correct counts, by counting the intersection between
+ * two lists: a TermDocs (list of documents in a certain category) and a
+ * DocIdSetIterator (list of documents matching the query).
+ * <p>
+ * This fixer is suitable for scenarios which prioritize accuracy over
+ * performance. 
+ * <p>
+ * <b>Note:</b> for statistically more accurate top-k selection, set
+ * {@link SamplingParams#setOversampleFactor(double) oversampleFactor} to at
+ * least 2, so that the top-k categories have a better chance of showing up
+ * in the sampled top-cK results (see {@link SamplingParams#getOversampleFactor()}).
  * 
  * @lucene.experimental
  */
-// TODO (Facet): implement also an estimated fixing by ratio (taking into
-// account "translation" of counts!)
-class TakmiSampleFixer implements SampleFixer {
+public class TakmiSampleFixer extends SampleFixer {
   
   private TaxonomyReader taxonomyReader;
   private IndexReader indexReader;
@@ -59,29 +62,11 @@
   }
 
   @Override
-  public void fixResult(ScoredDocIDs origDocIds, FacetResult fres)
-      throws IOException {
-    FacetResultNode topRes = fres.getFacetResultNode();
-    fixResultNode(topRes, origDocIds);
+  public void singleNodeFix(FacetResultNode facetResNode, ScoredDocIDs docIds, double samplingRatio) throws IOException {
+    recount(facetResNode, docIds);
   }
   
   /**
-   * Fix result node count, and, recursively, fix all its children
-   * 
-   * @param facetResNode
-   *          result node to be fixed
-   * @param docIds
-   *          docids in effect
-   * @throws IOException If there is a low-level I/O error.
-   */
-  private void fixResultNode(FacetResultNode facetResNode, ScoredDocIDs docIds) throws IOException {
-    recount(facetResNode, docIds);
-    for (FacetResultNode frn : facetResNode.subResults) {
-      fixResultNode(frn, docIds);
-    }
-  }
-
-  /**
    * Internal utility: recount for a facet result node
    * 
    * @param fresNode
@@ -179,4 +164,5 @@
     }
     return false; // exhausted
   }
+
 }
\ No newline at end of file
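After this change, exact (Takmi) fixing has to be requested explicitly by setting the fixer on SamplingParams. A hedged configuration sketch, assuming an open IndexReader/TaxonomyReader and a FacetSearchParams are at hand; the helper class and method names are illustrative only.

    import org.apache.lucene.facet.params.FacetSearchParams;
    import org.apache.lucene.facet.sampling.SamplingParams;
    import org.apache.lucene.facet.sampling.TakmiSampleFixer;
    import org.apache.lucene.facet.taxonomy.TaxonomyReader;
    import org.apache.lucene.index.IndexReader;

    final class ExactSamplingSetup {
      static SamplingParams exactFixing(IndexReader indexReader, TaxonomyReader taxoReader,
          FacetSearchParams fsp) {
        SamplingParams params = new SamplingParams();
        params.setSampleRatio(0.1);           // sample roughly 10% of the matching docs
        params.setSamplingThreshold(10000);   // only sample large result sets
        // Exact fixing: recount the sampled categories against the full doc set.
        params.setSampleFixer(new TakmiSampleFixer(indexReader, taxoReader, fsp));
        // Per the note above, oversample by at least 2 so the true top-k has a good
        // chance of surviving the sampled accumulation before trimming.
        params.setOversampleFactor(2d);
        return params;
      }
    }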
diff --git a/lucene/facet/src/java/org/apache/lucene/facet/search/DrillSideways.java b/lucene/facet/src/java/org/apache/lucene/facet/search/DrillSideways.java
index a67d4e3..4c530e7 100644
--- a/lucene/facet/src/java/org/apache/lucene/facet/search/DrillSideways.java
+++ b/lucene/facet/src/java/org/apache/lucene/facet/search/DrillSideways.java
@@ -402,16 +402,20 @@
       query = new DrillDownQuery(filter, query);
     }
     if (sort != null) {
+      int limit = searcher.getIndexReader().maxDoc();
+      if (limit == 0) {
+        limit = 1; // the collector does not allow numHits = 0
+      }
+      topN = Math.min(topN, limit);
       final TopFieldCollector hitCollector = TopFieldCollector.create(sort,
-                                                                      Math.min(topN, searcher.getIndexReader().maxDoc()),
+                                                                      topN,
                                                                       after,
                                                                       true,
                                                                       doDocScores,
                                                                       doMaxScore,
                                                                       true);
       DrillSidewaysResult r = search(query, hitCollector, fsp);
-      r.hits = hitCollector.topDocs();
-      return r;
+      return new DrillSidewaysResult(r.facetResults, hitCollector.topDocs());
     } else {
       return search(after, query, topN, fsp);
     }
@@ -423,10 +427,14 @@
    */
   public DrillSidewaysResult search(ScoreDoc after,
                                     DrillDownQuery query, int topN, FacetSearchParams fsp) throws IOException {
-    TopScoreDocCollector hitCollector = TopScoreDocCollector.create(Math.min(topN, searcher.getIndexReader().maxDoc()), after, true);
+    int limit = searcher.getIndexReader().maxDoc();
+    if (limit == 0) {
+      limit = 1; // the collector does not allow numHits = 0
+    }
+    topN = Math.min(topN, limit);
+    TopScoreDocCollector hitCollector = TopScoreDocCollector.create(topN, after, true);
     DrillSidewaysResult r = search(query, hitCollector, fsp);
-    r.hits = hitCollector.topDocs();
-    return r;
+    return new DrillSidewaysResult(r.facetResults, hitCollector.topDocs());
   }
 
   /** Override this to use a custom drill-down {@link
@@ -454,16 +462,20 @@
     return false;
   }
 
-  /** Represents the returned result from a drill sideways
-   *  search. */
+  /**
+   * Represents the returned result from a drill sideways search. Note that if
+   * you called
+   * {@link DrillSideways#search(DrillDownQuery, Collector, FacetSearchParams)},
+   * then {@link #hits} will be {@code null}.
+   */
   public static class DrillSidewaysResult {
     /** Combined drill down & sideways results. */
     public final List<FacetResult> facetResults;
 
     /** Hits. */
-    public TopDocs hits;
+    public final TopDocs hits;
 
-    DrillSidewaysResult(List<FacetResult> facetResults, TopDocs hits) {
+    public DrillSidewaysResult(List<FacetResult> facetResults, TopDocs hits) {
       this.facetResults = facetResults;
       this.hits = hits;
     }
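A small sketch of the two result shapes described in the DrillSidewaysResult javadoc above, assuming a searcher, taxonomy reader, DrillDownQuery and FacetSearchParams already exist (names are illustrative):

    import java.io.IOException;

    import org.apache.lucene.facet.params.FacetSearchParams;
    import org.apache.lucene.facet.search.DrillDownQuery;
    import org.apache.lucene.facet.search.DrillSideways;
    import org.apache.lucene.facet.search.DrillSideways.DrillSidewaysResult;
    import org.apache.lucene.facet.taxonomy.TaxonomyReader;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.search.TopScoreDocCollector;

    final class DrillSidewaysHitsExample {
      static void run(IndexSearcher searcher, TaxonomyReader taxoReader,
          DrillDownQuery ddq, FacetSearchParams fsp) throws IOException {
        DrillSideways ds = new DrillSideways(searcher, taxoReader);

        // Top-N variant: hits is populated (now a final field, set via the public constructor).
        DrillSidewaysResult topN = ds.search(null, ddq, 10, fsp);
        TopDocs fromResult = topN.hits;

        // Collector variant: hits is null, so read the docs from your own collector.
        TopScoreDocCollector collector = TopScoreDocCollector.create(10, true);
        DrillSidewaysResult withCollector = ds.search(ddq, collector, fsp);
        assert withCollector.hits == null;
        TopDocs fromCollector = collector.topDocs();

        System.out.println(fromResult.totalHits + " vs " + fromCollector.totalHits);
      }
    }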
diff --git a/lucene/facet/src/java/org/apache/lucene/facet/search/FacetResult.java b/lucene/facet/src/java/org/apache/lucene/facet/search/FacetResult.java
index 9a010fa..d21fbf6 100644
--- a/lucene/facet/src/java/org/apache/lucene/facet/search/FacetResult.java
+++ b/lucene/facet/src/java/org/apache/lucene/facet/search/FacetResult.java
@@ -1,5 +1,16 @@
 package org.apache.lucene.facet.search;
 
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.lucene.facet.taxonomy.CategoryPath;
+import org.apache.lucene.facet.taxonomy.TaxonomyReader;
+import org.apache.lucene.util.CollectionUtil;
+
 /*
  * Licensed to the Apache Software Foundation (ASF) under one or more
  * contributor license agreements.  See the NOTICE file distributed with
@@ -24,6 +35,140 @@
  */
 public class FacetResult {
   
+  private static FacetResultNode addIfNotExist(Map<CategoryPath, FacetResultNode> nodes, FacetResultNode node) {
+    FacetResultNode n = nodes.get(node.label);
+    if (n == null) {
+      nodes.put(node.label, node);
+      n = node;
+    }
+    return n;
+  }
+
+  /**
+   * A utility for merging multiple {@link FacetResult} of the same
+   * (hierarchical) dimension into a single {@link FacetResult}, to reconstruct
+   * the hierarchy. The results are merged according to the following rules:
+   * <ul>
+   * <li>If two results share the same dimension (first component in their
+   * {@link CategoryPath}), they are merged.
+   * <li>If a result is missing ancestors in the other results, e.g. A/B/C but
+   * no corresponding A or A/B, these nodes are 'filled' with their label,
+   * ordinal and value (obtained from the respective {@link FacetArrays}).
+   * <li>If a result does not share a dimension with other results, it is
+   * returned as is.
+   * </ul>
+   * <p>
+   * <b>NOTE:</b> the returned results are not guaranteed to be in the same
+   * order of the input ones.
+   * 
+   * @param results
+   *          the results to merge
+   * @param taxoReader
+   *          the {@link TaxonomyReader} to use when creating missing ancestor
+   *          nodes
+   * @param dimArrays
+   *          a mapping from a dimension to the respective {@link FacetArrays}
+   *          from which to pull the nodes' values
+   */
+  public static List<FacetResult> mergeHierarchies(List<FacetResult> results, TaxonomyReader taxoReader,
+      Map<String, FacetArrays> dimArrays) throws IOException {
+    final Map<String, List<FacetResult>> dims = new HashMap<String,List<FacetResult>>();
+    for (FacetResult fr : results) {
+      String dim = fr.getFacetRequest().categoryPath.components[0];
+      List<FacetResult> frs = dims.get(dim);
+      if (frs == null) {
+        frs = new ArrayList<FacetResult>();
+        dims.put(dim, frs);
+      }
+      frs.add(fr);
+    }
+
+    final List<FacetResult> res = new ArrayList<FacetResult>();
+    for (List<FacetResult> frs : dims.values()) {
+      FacetResult mergedResult = frs.get(0);
+      if (frs.size() > 1) {
+        CollectionUtil.introSort(frs, new Comparator<FacetResult>() {
+          @Override
+          public int compare(FacetResult fr1, FacetResult fr2) {
+            return fr1.getFacetRequest().categoryPath.compareTo(fr2.getFacetRequest().categoryPath);
+          }
+        });
+        Map<CategoryPath, FacetResultNode> mergedNodes = new HashMap<CategoryPath,FacetResultNode>();
+        FacetArrays arrays = dimArrays != null ? dimArrays.get(frs.get(0).getFacetRequest().categoryPath.components[0]) : null;
+        for (FacetResult fr : frs) {
+          FacetResultNode frn = fr.getFacetResultNode();
+          FacetResultNode merged = mergedNodes.get(frn.label);
+          if (merged == null) {
+            CategoryPath parent = frn.label.subpath(frn.label.length - 1);
+            FacetResultNode childNode = frn;
+            FacetResultNode parentNode = null;
+            while (parent.length > 0 && (parentNode = mergedNodes.get(parent)) == null) {
+              int parentOrd = taxoReader.getOrdinal(parent);
+              double parentValue = arrays != null ? fr.getFacetRequest().getValueOf(arrays, parentOrd) : -1;
+              parentNode = new FacetResultNode(parentOrd, parentValue);
+              parentNode.label = parent;
+              parentNode.subResults = new ArrayList<FacetResultNode>();
+              parentNode.subResults.add(childNode);
+              mergedNodes.put(parent, parentNode);
+              childNode = parentNode;
+              parent = parent.subpath(parent.length - 1);
+            }
+
+            // at least one parent was added, so link the final (existing)
+            // parent with the child
+            if (parent.length > 0) {
+              if (!(parentNode.subResults instanceof ArrayList)) {
+                parentNode.subResults = new ArrayList<FacetResultNode>(parentNode.subResults);
+              }
+              parentNode.subResults.add(childNode);
+            }
+
+            // for missing FRNs, add new ones with label and value=-1
+            // first time encountered this label, add it and all its children to
+            // the map.
+            mergedNodes.put(frn.label, frn);
+            for (FacetResultNode child : frn.subResults) {
+              addIfNotExist(mergedNodes, child);
+            }
+          } else {
+            if (!(merged.subResults instanceof ArrayList)) {
+              merged.subResults = new ArrayList<FacetResultNode>(merged.subResults);
+            }
+            for (FacetResultNode sub : frn.subResults) {
+              // make sure sub wasn't already added
+              sub = addIfNotExist(mergedNodes, sub);
+              if (!merged.subResults.contains(sub)) {
+                merged.subResults.add(sub);
+              }
+            }
+          }
+        }
+        
+        // find the 'first' node to put on the FacetResult root
+        CategoryPath min = null;
+        for (CategoryPath cp : mergedNodes.keySet()) {
+          if (min == null || cp.compareTo(min) < 0) {
+            min = cp;
+          }
+        }
+        FacetRequest dummy = new FacetRequest(min, frs.get(0).getFacetRequest().numResults) {
+          @Override
+          public double getValueOf(FacetArrays arrays, int idx) {
+            throw new UnsupportedOperationException("not supported by this request");
+          }
+          
+          @Override
+          public FacetArraysSource getFacetArraysSource() {
+            throw new UnsupportedOperationException("not supported by this request");
+          }
+        };
+        mergedResult = new FacetResult(dummy, mergedNodes.get(min), -1);
+      }
+      res.add(mergedResult);
+    }
+    return res;
+  }
+
   private final FacetRequest facetRequest;
   private final FacetResultNode rootNode;
   private final int numValidDescendants;
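A hedged sketch of how mergeHierarchies() is meant to be driven, mirroring what the new FacetResultTest later in this patch does: capture each dimension's FacetArrays from the drill-sideways accumulators so missing ancestors get real values rather than -1, then merge. The helper class and method names are illustrative.

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import org.apache.lucene.facet.params.FacetSearchParams;
    import org.apache.lucene.facet.search.DrillDownQuery;
    import org.apache.lucene.facet.search.DrillSideways;
    import org.apache.lucene.facet.search.DrillSideways.DrillSidewaysResult;
    import org.apache.lucene.facet.search.FacetArrays;
    import org.apache.lucene.facet.search.FacetResult;
    import org.apache.lucene.facet.search.FacetsAccumulator;
    import org.apache.lucene.facet.taxonomy.TaxonomyReader;
    import org.apache.lucene.search.IndexSearcher;

    final class MergeHierarchiesExample {
      static List<FacetResult> mergedDims(IndexSearcher searcher, final TaxonomyReader taxoReader,
          DrillDownQuery ddq, FacetSearchParams fsp) throws IOException {
        // Capture each dimension's FacetArrays so missing ancestor nodes can be filled
        // with their real values when the hierarchy is reconstructed.
        final Map<String,FacetArrays> dimArrays = new HashMap<String,FacetArrays>();
        DrillSideways ds = new DrillSideways(searcher, taxoReader) {
          @Override
          protected FacetsAccumulator getDrillSidewaysAccumulator(String dim, FacetSearchParams params)
              throws IOException {
            FacetsAccumulator fa = super.getDrillSidewaysAccumulator(dim, params);
            dimArrays.put(dim, fa.facetArrays);
            return fa;
          }
        };
        DrillSidewaysResult r = ds.search(null, ddq, 10, fsp);
        // One FacetResult per dimension, with ancestors filled in from dimArrays.
        return FacetResult.mergeHierarchies(r.facetResults, taxoReader, dimArrays);
      }
    }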
diff --git a/lucene/facet/src/java/org/apache/lucene/facet/search/StandardFacetsAccumulator.java b/lucene/facet/src/java/org/apache/lucene/facet/search/StandardFacetsAccumulator.java
index 25769db..97e57cb 100644
--- a/lucene/facet/src/java/org/apache/lucene/facet/search/StandardFacetsAccumulator.java
+++ b/lucene/facet/src/java/org/apache/lucene/facet/search/StandardFacetsAccumulator.java
@@ -94,7 +94,7 @@
 
   private Object accumulateGuard;
 
-  private double complementThreshold;
+  private double complementThreshold = DEFAULT_COMPLEMENT_THRESHOLD;
   
   public StandardFacetsAccumulator(FacetSearchParams searchParams, IndexReader indexReader, 
       TaxonomyReader taxonomyReader) {
diff --git a/lucene/facet/src/test/org/apache/lucene/facet/complements/TestFacetsAccumulatorWithComplement.java b/lucene/facet/src/test/org/apache/lucene/facet/complements/TestFacetsAccumulatorWithComplement.java
index 8cb229a..f3de1ca 100644
--- a/lucene/facet/src/test/org/apache/lucene/facet/complements/TestFacetsAccumulatorWithComplement.java
+++ b/lucene/facet/src/test/org/apache/lucene/facet/complements/TestFacetsAccumulatorWithComplement.java
@@ -121,8 +121,8 @@
     
     // Results are ready, printing them...
     int i = 0;
-    for (FacetResult facetResult : res) {
-      if (VERBOSE) {
+    if (VERBOSE) {
+      for (FacetResult facetResult : res) {
         System.out.println("Res "+(i++)+": "+facetResult);
       }
     }
diff --git a/lucene/facet/src/test/org/apache/lucene/facet/sampling/BaseSampleTestTopK.java b/lucene/facet/src/test/org/apache/lucene/facet/sampling/BaseSampleTestTopK.java
index 8214160..152a40d 100644
--- a/lucene/facet/src/test/org/apache/lucene/facet/sampling/BaseSampleTestTopK.java
+++ b/lucene/facet/src/test/org/apache/lucene/facet/sampling/BaseSampleTestTopK.java
@@ -94,7 +94,7 @@
         for (int nTrial = 0; nTrial < RETRIES; nTrial++) {
           try {
             // complement with sampling!
-            final Sampler sampler = createSampler(nTrial, useRandomSampler);
+            final Sampler sampler = createSampler(nTrial, useRandomSampler, samplingSearchParams);
             
             assertSampling(expectedResults, q, sampler, samplingSearchParams, false);
             assertSampling(expectedResults, q, sampler, samplingSearchParams, true);
@@ -128,14 +128,20 @@
     return FacetsCollector.create(sfa);
   }
   
-  private Sampler createSampler(int nTrial, boolean useRandomSampler) {
+  private Sampler createSampler(int nTrial, boolean useRandomSampler, FacetSearchParams sParams) {
     SamplingParams samplingParams = new SamplingParams();
     
+    /*
+     * Use exact fixing with TakmiSampleFixer, as it is not easy to
+     * validate amortized (estimated) results.
+     */
+    samplingParams.setSampleFixer(new TakmiSampleFixer(indexReader, taxoReader, sParams));
+        
     final double retryFactor = Math.pow(1.01, nTrial);
+    samplingParams.setOversampleFactor(5.0 * retryFactor); // Oversampling 
     samplingParams.setSampleRatio(0.8 * retryFactor);
     samplingParams.setMinSampleSize((int) (100 * retryFactor));
     samplingParams.setMaxSampleSize((int) (10000 * retryFactor));
-    samplingParams.setOversampleFactor(5.0 * retryFactor);
     samplingParams.setSamplingThreshold(11000); //force sampling
 
     Sampler sampler = useRandomSampler ? 
diff --git a/lucene/facet/src/test/org/apache/lucene/facet/sampling/SamplerTest.java b/lucene/facet/src/test/org/apache/lucene/facet/sampling/SamplerTest.java
new file mode 100644
index 0000000..b029e72
--- /dev/null
+++ b/lucene/facet/src/test/org/apache/lucene/facet/sampling/SamplerTest.java
@@ -0,0 +1,111 @@
+package org.apache.lucene.facet.sampling;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.lucene.facet.FacetTestBase;
+import org.apache.lucene.facet.params.FacetIndexingParams;
+import org.apache.lucene.facet.params.FacetSearchParams;
+import org.apache.lucene.facet.search.CountFacetRequest;
+import org.apache.lucene.facet.search.FacetResultNode;
+import org.apache.lucene.facet.search.FacetsCollector;
+import org.apache.lucene.facet.search.StandardFacetsAccumulator;
+import org.apache.lucene.facet.taxonomy.CategoryPath;
+import org.apache.lucene.search.MatchAllDocsQuery;
+import org.junit.After;
+import org.junit.Before;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+public class SamplerTest extends FacetTestBase {
+  
+  private FacetIndexingParams fip;
+  
+  @Override
+  @Before
+  public void setUp() throws Exception {
+    super.setUp();
+    fip = getFacetIndexingParams(Integer.MAX_VALUE);
+    initIndex(fip);
+  }
+  
+  @Override
+  protected int numDocsToIndex() {
+    return 100;
+  }
+  
+  @Override
+  protected List<CategoryPath> getCategories(final int doc) {
+    return new ArrayList<CategoryPath>() {
+      {
+        add(new CategoryPath("root", "a", Integer.toString(doc % 10)));
+      }
+    };
+  }
+  
+  @Override
+  protected String getContent(int doc) {
+    return "";
+  }
+  
+  @Override
+  @After
+  public void tearDown() throws Exception {
+    closeAll();
+    super.tearDown();
+  }
+  
+  public void testDefaultFixer() throws Exception {
+    RandomSampler randomSampler = new RandomSampler();
+    SampleFixer fixer = randomSampler.samplingParams.getSampleFixer();
+    assertEquals(null, fixer);
+  }
+  
+  public void testCustomFixer() throws Exception {
+    SamplingParams sp = new SamplingParams();
+    sp.setSampleFixer(new TakmiSampleFixer(null, null, null));
+    assertEquals(TakmiSampleFixer.class, sp.getSampleFixer().getClass());
+  }
+  
+  public void testNoFixing() throws Exception {
+    SamplingParams sp = new SamplingParams();
+    sp.setMaxSampleSize(10);
+    sp.setMinSampleSize(5);
+    sp.setSampleRatio(0.01d);
+    sp.setSamplingThreshold(50);
+    sp.setOversampleFactor(5d);
+    
+    assertNull("Fixer should be null as the test is for no-fixing",
+        sp.getSampleFixer());
+    FacetSearchParams fsp = new FacetSearchParams(fip, new CountFacetRequest(
+        new CategoryPath("root", "a"), 1));
+    SamplingAccumulator accumulator = new SamplingAccumulator(
+        new RandomSampler(sp, random()), fsp, indexReader, taxoReader);
+    
+    // Make sure no complements are in action
+    accumulator
+        .setComplementThreshold(StandardFacetsAccumulator.DISABLE_COMPLEMENT);
+    
+    FacetsCollector fc = FacetsCollector.create(accumulator);
+    
+    searcher.search(new MatchAllDocsQuery(), fc);
+    FacetResultNode node = fc.getFacetResults().get(0).getFacetResultNode();
+    
+    assertTrue(node.value < numDocsToIndex());
+  }
+}
diff --git a/lucene/facet/src/test/org/apache/lucene/facet/search/FacetRequestTest.java b/lucene/facet/src/test/org/apache/lucene/facet/search/FacetRequestTest.java
index c68ca4e..c9f3a7c 100644
--- a/lucene/facet/src/test/org/apache/lucene/facet/search/FacetRequestTest.java
+++ b/lucene/facet/src/test/org/apache/lucene/facet/search/FacetRequestTest.java
@@ -23,7 +23,7 @@
  */
 
 public class FacetRequestTest extends FacetTestCase {
-
+  
   @Test(expected=IllegalArgumentException.class)
   public void testIllegalNumResults() throws Exception {
     assertNotNull(new CountFacetRequest(new CategoryPath("a", "b"), 0));
@@ -33,7 +33,7 @@
   public void testIllegalCategoryPath() throws Exception {
     assertNotNull(new CountFacetRequest(null, 1));
   }
-
+  
   @Test
   public void testHashAndEquals() {
     CountFacetRequest fr1 = new CountFacetRequest(new CategoryPath("a"), 8);
diff --git a/lucene/facet/src/test/org/apache/lucene/facet/search/FacetResultTest.java b/lucene/facet/src/test/org/apache/lucene/facet/search/FacetResultTest.java
new file mode 100644
index 0000000..ddf257c
--- /dev/null
+++ b/lucene/facet/src/test/org/apache/lucene/facet/search/FacetResultTest.java
@@ -0,0 +1,204 @@
+package org.apache.lucene.facet.search;
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.lucene.analysis.MockAnalyzer;
+import org.apache.lucene.document.Document;
+import org.apache.lucene.facet.FacetTestCase;
+import org.apache.lucene.facet.FacetTestUtils;
+import org.apache.lucene.facet.index.FacetFields;
+import org.apache.lucene.facet.params.FacetIndexingParams;
+import org.apache.lucene.facet.params.FacetSearchParams;
+import org.apache.lucene.facet.search.DrillSideways.DrillSidewaysResult;
+import org.apache.lucene.facet.taxonomy.CategoryPath;
+import org.apache.lucene.facet.taxonomy.TaxonomyReader;
+import org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader;
+import org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter;
+import org.apache.lucene.index.DirectoryReader;
+import org.apache.lucene.index.IndexWriter;
+import org.apache.lucene.index.IndexWriterConfig;
+import org.apache.lucene.search.IndexSearcher;
+import org.apache.lucene.search.MatchAllDocsQuery;
+import org.apache.lucene.store.Directory;
+import org.apache.lucene.store.RAMDirectory;
+import org.apache.lucene.util.CollectionUtil;
+import org.apache.lucene.util.IOUtils;
+import org.junit.Test;
+
+public class FacetResultTest extends FacetTestCase {
+  
+  private Document newDocument(FacetFields facetFields, String... categories) throws IOException {
+    Document doc = new Document();
+    List<CategoryPath> cats = new ArrayList<CategoryPath>();
+    for (String cat : categories) {
+      cats.add(new CategoryPath(cat, '/'));
+    }
+    facetFields.addFields(doc, cats);
+    return doc;
+  }
+  
+  private void initIndex(Directory indexDir, Directory taxoDir) throws IOException {
+    IndexWriterConfig conf = new IndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random()));
+    IndexWriter indexWriter = new IndexWriter(indexDir, conf);
+    DirectoryTaxonomyWriter taxoWriter = new DirectoryTaxonomyWriter(taxoDir);
+    FacetFields facetFields = new FacetFields(taxoWriter);
+    indexWriter.addDocument(newDocument(facetFields, "Date/2010/March/12", "A/1"));
+    indexWriter.addDocument(newDocument(facetFields, "Date/2010/March/23", "A/2"));
+    indexWriter.addDocument(newDocument(facetFields, "Date/2010/April/17", "A/3"));
+    indexWriter.addDocument(newDocument(facetFields, "Date/2010/May/18", "A/1"));
+    indexWriter.addDocument(newDocument(facetFields, "Date/2011/January/1", "A/3"));
+    indexWriter.addDocument(newDocument(facetFields, "Date/2011/February/12", "A/1"));
+    indexWriter.addDocument(newDocument(facetFields, "Date/2011/February/18", "A/4"));
+    indexWriter.addDocument(newDocument(facetFields, "Date/2012/August/15", "A/1"));
+    indexWriter.addDocument(newDocument(facetFields, "Date/2012/July/5", "A/2"));
+    indexWriter.addDocument(newDocument(facetFields, "Date/2013/September/13", "A/1"));
+    indexWriter.addDocument(newDocument(facetFields, "Date/2013/September/25", "A/4"));
+    IOUtils.close(indexWriter, taxoWriter);
+  }
+  
+  private void searchIndex(TaxonomyReader taxoReader, IndexSearcher searcher, boolean fillMissingCounts, String[] exp,
+      String[][] drillDowns, int[] numResults) throws IOException {
+    CategoryPath[][] cps = new CategoryPath[drillDowns.length][];
+    for (int i = 0; i < cps.length; i++) {
+      cps[i] = new CategoryPath[drillDowns[i].length];
+      for (int j = 0; j < cps[i].length; j++) {
+        cps[i][j] = new CategoryPath(drillDowns[i][j], '/');
+      }
+    }
+    DrillDownQuery ddq = new DrillDownQuery(FacetIndexingParams.DEFAULT, new MatchAllDocsQuery());
+    for (CategoryPath[] cats : cps) {
+      ddq.add(cats);
+    }
+    
+    List<FacetRequest> facetRequests = new ArrayList<FacetRequest>();
+    for (CategoryPath[] cats : cps) {
+      for (int i = 0; i < cats.length; i++) {
+        CategoryPath cp = cats[i];
+        int numres = numResults == null ? 2 : numResults[i];
+        // for each drill-down, add itself as well as its parent as requests, so
+        // we get the drill-sideways
+        facetRequests.add(new CountFacetRequest(cp, numres));
+        CountFacetRequest parent = new CountFacetRequest(cp.subpath(cp.length - 1), numres);
+        if (!facetRequests.contains(parent) && parent.categoryPath.length > 0) {
+          facetRequests.add(parent);
+        }
+      }
+    }
+    
+    FacetSearchParams fsp = new FacetSearchParams(facetRequests);
+    final DrillSideways ds;
+    final Map<String,FacetArrays> dimArrays;
+    if (fillMissingCounts) {
+      dimArrays = new HashMap<String,FacetArrays>();
+      ds = new DrillSideways(searcher, taxoReader) {
+        @Override
+        protected FacetsAccumulator getDrillSidewaysAccumulator(String dim, FacetSearchParams fsp) throws IOException {
+          FacetsAccumulator fa = super.getDrillSidewaysAccumulator(dim, fsp);
+          dimArrays.put(dim, fa.facetArrays);
+          return fa;
+        }
+      };
+    } else {
+      ds = new DrillSideways(searcher, taxoReader);
+      dimArrays = null;
+    }
+    
+    final DrillSidewaysResult sidewaysRes = ds.search(null, ddq, 5, fsp);
+    List<FacetResult> facetResults = FacetResult.mergeHierarchies(sidewaysRes.facetResults, taxoReader, dimArrays);
+    CollectionUtil.introSort(facetResults, new Comparator<FacetResult>() {
+      @Override
+      public int compare(FacetResult o1, FacetResult o2) {
+        return o1.getFacetRequest().categoryPath.compareTo(o2.getFacetRequest().categoryPath);
+      }
+    });
+    assertEquals(exp.length, facetResults.size()); // A + single one for date
+    for (int i = 0; i < facetResults.size(); i++) {
+      assertEquals(exp[i], FacetTestUtils.toSimpleString(facetResults.get(i)));
+    }
+  }
+  
+  @Test
+  public void testMergeHierarchies() throws Exception {
+    Directory indexDir = new RAMDirectory(), taxoDir = new RAMDirectory();
+    initIndex(indexDir, taxoDir);
+    
+    DirectoryReader indexReader = DirectoryReader.open(indexDir);
+    TaxonomyReader taxoReader = new DirectoryTaxonomyReader(taxoDir);
+    IndexSearcher searcher = new IndexSearcher(indexReader);
+    
+    String[] exp = new String[] { "Date (0)\n  2010 (4)\n  2011 (3)\n" };
+    searchIndex(taxoReader, searcher, false, exp, new String[][] { new String[] { "Date" } }, null);
+    
+    // two dimensions
+    exp = new String[] { "A (0)\n  1 (5)\n  4 (2)\n", "Date (0)\n  2010 (4)\n  2011 (3)\n" };
+    searchIndex(taxoReader, searcher, false, exp, new String[][] { new String[] { "Date" }, new String[] { "A" } }, null);
+    
+    // both parent and child are OR'd
+    exp = new String[] { "Date (-1)\n  2010 (4)\n    March (2)\n      23 (1)\n      12 (1)\n    May (1)\n" };
+    searchIndex(taxoReader, searcher, false, exp, new String[][] { new String[] { "Date/2010/March", "Date/2010/March/23" }}, null);

+    

+    // both parent and child are OR'd (fill counts)

+    exp = new String[] { "Date (0)\n  2010 (4)\n    March (2)\n      23 (1)\n      12 (1)\n    May (1)\n" };

+    searchIndex(taxoReader, searcher, true, exp, new String[][] { new String[] { "Date/2010/March", "Date/2010/March/23" }}, null);

+    

+    // same DD twice

+    exp = new String[] { "Date (0)\n  2010 (4)\n    March (2)\n    May (1)\n  2011 (3)\n" };

+    searchIndex(taxoReader, searcher, false, exp, new String[][] { new String[] { "Date/2010", "Date/2010" }}, null);

+    

+    exp = new String[] { "Date (0)\n  2010 (4)\n    March (2)\n    May (1)\n  2011 (3)\n" };

+    searchIndex(taxoReader, searcher, false, exp, new String[][] { new String[] { "Date/2010" }}, null);

+    

+    exp = new String[] { "Date (0)\n  2010 (4)\n    March (2)\n    May (1)\n  2011 (3)\n    February (2)\n    January (1)\n" };

+    searchIndex(taxoReader, searcher, false, exp, new String[][] { new String[] { "Date/2010", "Date/2011" }}, null);

+    

+    exp = new String[] { "Date (0)\n  2010 (4)\n    March (2)\n      23 (1)\n      12 (1)\n    May (1)\n  2011 (3)\n    February (2)\n    January (1)\n" };

+    searchIndex(taxoReader, searcher, false, exp, new String[][] { new String[] { "Date/2010/March", "Date/2011" }}, null);

+    

+    // Date/2010/April not in top-2 of Date/2010

+    exp = new String[] { "Date (0)\n  2010 (4)\n    March (2)\n      23 (1)\n      12 (1)\n    May (1)\n    April (1)\n      17 (1)\n  2011 (3)\n    February (2)\n    January (1)\n" };

+    searchIndex(taxoReader, searcher, false, exp, new String[][] { new String[] { "Date/2010/March", "Date/2010/April", "Date/2011" }}, null);

+    

+    // missing ancestors

+    exp = new String[] { "Date (-1)\n  2010 (4)\n    March (2)\n    May (1)\n    April (1)\n      17 (1)\n  2011 (-1)\n    January (1)\n      1 (1)\n" };

+    searchIndex(taxoReader, searcher, false, exp, new String[][] { new String[] { "Date/2011/January/1", "Date/2010/April" }}, null);

+    

+    // missing ancestors (fill counts)

+    exp = new String[] { "Date (0)\n  2010 (4)\n    March (2)\n    May (1)\n    April (1)\n      17 (1)\n  2011 (3)\n    January (1)\n      1 (1)\n" };

+    searchIndex(taxoReader, searcher, true, exp, new String[][] { new String[] { "Date/2011/January/1", "Date/2010/April" }}, null);

+    

+    // non-hierarchical dimension with both parent and child

+    exp = new String[] { "A (0)\n  1 (5)\n  4 (2)\n  3 (2)\n" };

+    searchIndex(taxoReader, searcher, INFOSTREAM, exp, new String[][] { new String[] { "A", "A/3" }}, null);

+    

+    // non-hierarchical dimension with same request but different numResults

+    exp = new String[] { "A (0)\n  1 (5)\n  4 (2)\n  3 (2)\n  2 (2)\n" };

+    searchIndex(taxoReader, searcher, INFOSTREAM, exp, new String[][] { new String[] { "A", "A" }}, new int[] { 2, 4 });

+    

+    IOUtils.close(indexReader, taxoReader);

+    

+    IOUtils.close(indexDir, taxoDir);

+  }

+  

+}
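
For context, a minimal sketch of how FacetResult.mergeHierarchies() is meant to be driven from DrillSideways, following the same calls the new test above makes. This is not part of the patch; the class name is a placeholder and indexDir/taxoDir are assumed to have been populated as in initIndex().

package org.apache.lucene.facet.search;

import java.util.List;

import org.apache.lucene.facet.params.FacetIndexingParams;
import org.apache.lucene.facet.params.FacetSearchParams;
import org.apache.lucene.facet.search.DrillSideways.DrillSidewaysResult;
import org.apache.lucene.facet.taxonomy.CategoryPath;
import org.apache.lucene.facet.taxonomy.TaxonomyReader;
import org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.IOUtils;

public class MergeHierarchiesExample {

  // Drill down on a child category, request counts for it and its parent, and
  // merge the per-request FacetResults into a single hierarchy per dimension.
  public static List<FacetResult> mergedDateFacets(Directory indexDir, Directory taxoDir) throws Exception {
    DirectoryReader indexReader = DirectoryReader.open(indexDir);
    TaxonomyReader taxoReader = new DirectoryTaxonomyReader(taxoDir);
    IndexSearcher searcher = new IndexSearcher(indexReader);

    CategoryPath child = new CategoryPath("Date/2010/March", '/');
    CategoryPath parent = new CategoryPath("Date/2010", '/');

    DrillDownQuery ddq = new DrillDownQuery(FacetIndexingParams.DEFAULT, new MatchAllDocsQuery());
    ddq.add(child);

    FacetSearchParams fsp = new FacetSearchParams(
        new CountFacetRequest(child, 10), new CountFacetRequest(parent, 10));

    DrillSideways ds = new DrillSideways(searcher, taxoReader);
    DrillSidewaysResult result = ds.search(null, ddq, 10, fsp);

    // null dimArrays: counts of ancestors that were not requested are left at -1,
    // as in the "missing ancestors" case of the test above
    List<FacetResult> merged = FacetResult.mergeHierarchies(result.facetResults, taxoReader, null);

    IOUtils.close(indexReader, taxoReader);
    return merged;
  }
}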

diff --git a/lucene/facet/src/test/org/apache/lucene/facet/search/TestDrillSideways.java b/lucene/facet/src/test/org/apache/lucene/facet/search/TestDrillSideways.java
index 79b62c7..846d911 100644
--- a/lucene/facet/src/test/org/apache/lucene/facet/search/TestDrillSideways.java
+++ b/lucene/facet/src/test/org/apache/lucene/facet/search/TestDrillSideways.java
@@ -59,15 +59,18 @@
 import org.apache.lucene.search.Scorer;
 import org.apache.lucene.search.Sort;
 import org.apache.lucene.search.SortField;
+import org.apache.lucene.search.SortField.Type;
 import org.apache.lucene.search.TermQuery;
 import org.apache.lucene.search.TopDocs;
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.util.Bits;
 import org.apache.lucene.util.BytesRef;
 import org.apache.lucene.util.FixedBitSet;
+import org.apache.lucene.util.IOUtils;
 import org.apache.lucene.util.InPlaceMergeSorter;
 import org.apache.lucene.util.InfoStream;
 import org.apache.lucene.util._TestUtil;
+import org.junit.Test;
 
 public class TestDrillSideways extends FacetTestCase {
 
@@ -1144,5 +1147,34 @@
     }
     return b.toString();
   }
+  
+  @Test
+  public void testEmptyIndex() throws Exception {
+    // LUCENE-5045: make sure DrillSideways works with an empty index
+    Directory dir = newDirectory();
+    Directory taxoDir = newDirectory();
+    writer = new RandomIndexWriter(random(), dir);
+    taxoWriter = new DirectoryTaxonomyWriter(taxoDir, IndexWriterConfig.OpenMode.CREATE);
+    IndexSearcher searcher = newSearcher(writer.getReader());
+    writer.close();
+    TaxonomyReader taxoReader = new DirectoryTaxonomyReader(taxoWriter);
+    taxoWriter.close();
+
+    // Count "Author"
+    FacetSearchParams fsp = new FacetSearchParams(new CountFacetRequest(new CategoryPath("Author"), 10));
+
+    DrillSideways ds = new DrillSideways(searcher, taxoReader);
+    DrillDownQuery ddq = new DrillDownQuery(fsp.indexingParams, new MatchAllDocsQuery());
+    ddq.add(new CategoryPath("Author", "Lisa"));
+    
+    DrillSidewaysResult r = ds.search(null, ddq, 10, fsp); // this used to fail on IllegalArgEx
+    assertEquals(0, r.hits.totalHits);
+
+    r = ds.search(ddq, null, null, 10, new Sort(new SortField("foo", Type.INT)), false, false, fsp); // this used to fail on IllegalArgEx
+    assertEquals(0, r.hits.totalHits);
+    
+    IOUtils.close(searcher.getIndexReader(), taxoReader, dir, taxoDir);
+  }
+  
 }
 
diff --git a/lucene/facet/src/test/org/apache/lucene/facet/search/TestFacetsCollector.java b/lucene/facet/src/test/org/apache/lucene/facet/search/TestFacetsCollector.java
index 525d2d2..bf011b4 100644
--- a/lucene/facet/src/test/org/apache/lucene/facet/search/TestFacetsCollector.java
+++ b/lucene/facet/src/test/org/apache/lucene/facet/search/TestFacetsCollector.java
@@ -17,8 +17,20 @@
 import org.apache.lucene.facet.params.FacetIndexingParams;
 import org.apache.lucene.facet.params.FacetSearchParams;
 import org.apache.lucene.facet.params.PerDimensionIndexingParams;
+import org.apache.lucene.facet.range.LongRange;
+import org.apache.lucene.facet.range.RangeAccumulator;
+import org.apache.lucene.facet.range.RangeFacetRequest;
+import org.apache.lucene.facet.sampling.RandomSampler;
+import org.apache.lucene.facet.sampling.Sampler;
+import org.apache.lucene.facet.sampling.SamplingAccumulator;
+import org.apache.lucene.facet.sampling.SamplingParams;
+import org.apache.lucene.facet.sampling.SamplingWrapper;
+import org.apache.lucene.facet.sampling.TakmiSampleFixer;
 import org.apache.lucene.facet.search.FacetRequest.ResultMode;
+import org.apache.lucene.facet.sortedset.SortedSetDocValuesAccumulator;
+import org.apache.lucene.facet.sortedset.SortedSetDocValuesReaderState;
 import org.apache.lucene.facet.taxonomy.CategoryPath;
+import org.apache.lucene.facet.taxonomy.TaxonomyReader;
 import org.apache.lucene.facet.taxonomy.TaxonomyWriter;
 import org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyReader;
 import org.apache.lucene.facet.taxonomy.directory.DirectoryTaxonomyWriter;
@@ -384,5 +396,72 @@
     
     IOUtils.close(taxo, taxoDir, r, indexDir);
   }
-  
+
+  @Test
+  public void testLabeling() throws Exception {
+    Directory indexDir = newDirectory(), taxoDir = newDirectory();
+
+    // create the index
+    IndexWriter indexWriter = new IndexWriter(indexDir, newIndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random())));
+    DirectoryTaxonomyWriter taxoWriter = new DirectoryTaxonomyWriter(taxoDir);
+    FacetFields facetFields = new FacetFields(taxoWriter);
+    Document doc = new Document();
+    facetFields.addFields(doc, Arrays.asList(new CategoryPath("A/1", '/')));
+    indexWriter.addDocument(doc);
+    IOUtils.close(indexWriter, taxoWriter);
+    
+    DirectoryReader indexReader = DirectoryReader.open(indexDir);
+    TaxonomyReader taxoReader = new DirectoryTaxonomyReader(taxoDir);
+    IndexSearcher searcher = new IndexSearcher(indexReader);
+    // ask to count a non-existing category to test labeling
+    FacetSearchParams fsp = new FacetSearchParams(new CountFacetRequest(new CategoryPath("B"), 5));
+    
+    final SamplingParams sampleParams = new SamplingParams();
+    sampleParams.setMaxSampleSize(100);
+    sampleParams.setMinSampleSize(100);
+    sampleParams.setSamplingThreshold(100);
+    sampleParams.setOversampleFactor(1.0d);
+    if (random().nextBoolean()) {
+      sampleParams.setSampleFixer(new TakmiSampleFixer(indexReader, taxoReader, fsp));
+    }
+    final Sampler sampler = new RandomSampler(sampleParams, random());
+    
+    FacetsAccumulator[] accumulators = new FacetsAccumulator[] {
+      new FacetsAccumulator(fsp, indexReader, taxoReader),
+      new StandardFacetsAccumulator(fsp, indexReader, taxoReader),
+      new SamplingAccumulator(sampler, fsp, indexReader, taxoReader),
+      new AdaptiveFacetsAccumulator(fsp, indexReader, taxoReader),
+      new SamplingWrapper(new StandardFacetsAccumulator(fsp, indexReader, taxoReader), sampler)
+    };
+    
+    for (FacetsAccumulator fa : accumulators) {
+      FacetsCollector fc = FacetsCollector.create(fa);
+      searcher.search(new MatchAllDocsQuery(), fc);
+      List<FacetResult> facetResults = fc.getFacetResults();
+      assertNotNull(facetResults);
+      assertEquals("incorrect label returned for " + fa, fsp.facetRequests.get(0).categoryPath, facetResults.get(0).getFacetResultNode().label);
+    }
+    
+    try {
+      // SortedSetDocValuesAccumulator cannot even be created in such state
+      assertNull(new SortedSetDocValuesAccumulator(fsp, new SortedSetDocValuesReaderState(indexReader)));
+      // if this ever changes, make sure FacetResultNode is labeled correctly 
+      fail("should not have succeeded to execute a request over a category which wasn't indexed as SortedSetDVField");
+    } catch (IllegalArgumentException e) {
+      // expected
+    }
+
+    fsp = new FacetSearchParams(new RangeFacetRequest<LongRange>("f", new LongRange("grr", 0, true, 1, true)));
+    RangeAccumulator ra = new RangeAccumulator(fsp, indexReader);
+    FacetsCollector fc = FacetsCollector.create(ra);
+    searcher.search(new MatchAllDocsQuery(), fc);
+    List<FacetResult> facetResults = fc.getFacetResults();
+    assertNotNull(facetResults);
+    assertEquals("incorrect label returned for RangeAccumulator", fsp.facetRequests.get(0).categoryPath, facetResults.get(0).getFacetResultNode().label);
+
+    IOUtils.close(indexReader, taxoReader);
+
+    IOUtils.close(indexDir, taxoDir);
+  }
+
 }
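
Related, a minimal sketch (not part of this patch) of wiring a SamplingAccumulator with the TakmiSampleFixer that testLabeling() above exercises; the class name, the "A" category and the threshold value are placeholder assumptions.

package org.apache.lucene.facet.search;

import java.util.List;
import java.util.Random;

import org.apache.lucene.facet.params.FacetSearchParams;
import org.apache.lucene.facet.sampling.RandomSampler;
import org.apache.lucene.facet.sampling.Sampler;
import org.apache.lucene.facet.sampling.SamplingAccumulator;
import org.apache.lucene.facet.sampling.SamplingParams;
import org.apache.lucene.facet.sampling.TakmiSampleFixer;
import org.apache.lucene.facet.taxonomy.CategoryPath;
import org.apache.lucene.facet.taxonomy.TaxonomyReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;

public class SampledCountsExample {

  // Count "A" over a sample of the matching documents; TakmiSampleFixer then
  // corrects the sampled counts (slower, but exact).
  public static List<FacetResult> sampledCounts(IndexReader indexReader, TaxonomyReader taxoReader)
      throws Exception {
    FacetSearchParams fsp = new FacetSearchParams(new CountFacetRequest(new CategoryPath("A"), 10));

    SamplingParams params = new SamplingParams();
    params.setSamplingThreshold(10000); // only sample when more documents than this match
    params.setSampleFixer(new TakmiSampleFixer(indexReader, taxoReader, fsp));
    Sampler sampler = new RandomSampler(params, new Random(42));

    FacetsAccumulator fa = new SamplingAccumulator(sampler, fsp, indexReader, taxoReader);
    FacetsCollector fc = FacetsCollector.create(fa);
    new IndexSearcher(indexReader).search(new MatchAllDocsQuery(), fc);
    return fc.getFacetResults();
  }
}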
diff --git a/lucene/grouping/src/test/org/apache/lucene/search/grouping/AllGroupHeadsCollectorTest.java b/lucene/grouping/src/test/org/apache/lucene/search/grouping/AllGroupHeadsCollectorTest.java
index a7a23a4..1c5efcf 100644
--- a/lucene/grouping/src/test/org/apache/lucene/search/grouping/AllGroupHeadsCollectorTest.java
+++ b/lucene/grouping/src/test/org/apache/lucene/search/grouping/AllGroupHeadsCollectorTest.java
@@ -72,74 +72,78 @@
         dir,
         newIndexWriterConfig(TEST_VERSION_CURRENT,
             new MockAnalyzer(random())).setMergePolicy(newLogMergePolicy()));
-    boolean canUseIDV = true;
     DocValuesType valueType = vts[random().nextInt(vts.length)];
 
     // 0
     Document doc = new Document();
-    addGroupField(doc, groupField, "author1", canUseIDV, valueType);
-    doc.add(newTextField("content", "random text", Field.Store.YES));
-    doc.add(newStringField("id", "1", Field.Store.YES));
+    addGroupField(doc, groupField, "author1", valueType);
+    doc.add(newTextField("content", "random text", Field.Store.NO));
+    doc.add(newStringField("id_1", "1", Field.Store.NO));
+    doc.add(newStringField("id_2", "1", Field.Store.NO));
     w.addDocument(doc);
 
     // 1
     doc = new Document();
-    addGroupField(doc, groupField, "author1", canUseIDV, valueType);
-    doc.add(newTextField("content", "some more random text blob", Field.Store.YES));
-    doc.add(newStringField("id", "2", Field.Store.YES));
+    addGroupField(doc, groupField, "author1", valueType);
+    doc.add(newTextField("content", "some more random text blob", Field.Store.NO));
+    doc.add(newStringField("id_1", "2", Field.Store.NO));
+    doc.add(newStringField("id_2", "2", Field.Store.NO));
     w.addDocument(doc);
 
     // 2
     doc = new Document();
-    addGroupField(doc, groupField, "author1", canUseIDV, valueType);
-    doc.add(newTextField("content", "some more random textual data", Field.Store.YES));
-    doc.add(newStringField("id", "3", Field.Store.YES));
+    addGroupField(doc, groupField, "author1", valueType);
+    doc.add(newTextField("content", "some more random textual data", Field.Store.NO));
+    doc.add(newStringField("id_1", "3", Field.Store.NO));
+    doc.add(newStringField("id_2", "3", Field.Store.NO));
     w.addDocument(doc);
     w.commit(); // To ensure a second segment
 
     // 3
     doc = new Document();
-    addGroupField(doc, groupField, "author2", canUseIDV, valueType);
-    doc.add(newTextField("content", "some random text", Field.Store.YES));
-    doc.add(newStringField("id", "4", Field.Store.YES));
+    addGroupField(doc, groupField, "author2", valueType);
+    doc.add(newTextField("content", "some random text", Field.Store.NO));
+    doc.add(newStringField("id_1", "4", Field.Store.NO));
+    doc.add(newStringField("id_2", "4", Field.Store.NO));
     w.addDocument(doc);
 
     // 4
     doc = new Document();
-    addGroupField(doc, groupField, "author3", canUseIDV, valueType);
-    doc.add(newTextField("content", "some more random text", Field.Store.YES));
-    doc.add(newStringField("id", "5", Field.Store.YES));
+    addGroupField(doc, groupField, "author3", valueType);
+    doc.add(newTextField("content", "some more random text", Field.Store.NO));
+    doc.add(newStringField("id_1", "5", Field.Store.NO));
+    doc.add(newStringField("id_2", "5", Field.Store.NO));
     w.addDocument(doc);
 
     // 5
     doc = new Document();
-    addGroupField(doc, groupField, "author3", canUseIDV, valueType);
-    doc.add(newTextField("content", "random blob", Field.Store.YES));
-    doc.add(newStringField("id", "6", Field.Store.YES));
+    addGroupField(doc, groupField, "author3", valueType);
+    doc.add(newTextField("content", "random blob", Field.Store.NO));
+    doc.add(newStringField("id_1", "6", Field.Store.NO));
+    doc.add(newStringField("id_2", "6", Field.Store.NO));
     w.addDocument(doc);
 
     // 6 -- no author field
     doc = new Document();
-    doc.add(newTextField("content", "random word stuck in alot of other text", Field.Store.YES));
-    doc.add(newStringField("id", "6", Field.Store.YES));
+    doc.add(newTextField("content", "random word stuck in alot of other text", Field.Store.NO));
+    doc.add(newStringField("id_1", "6", Field.Store.NO));
+    doc.add(newStringField("id_2", "6", Field.Store.NO));
     w.addDocument(doc);
 
     // 7 -- no author field
     doc = new Document();
-    doc.add(newTextField("content", "random word stuck in alot of other text", Field.Store.YES));
-    doc.add(newStringField("id", "7", Field.Store.YES));
+    doc.add(newTextField("content", "random word stuck in alot of other text", Field.Store.NO));
+    doc.add(newStringField("id_1", "7", Field.Store.NO));
+    doc.add(newStringField("id_2", "7", Field.Store.NO));
     w.addDocument(doc);
 
     IndexReader reader = w.getReader();
     IndexSearcher indexSearcher = newSearcher(reader);
-    if (SlowCompositeReaderWrapper.class.isAssignableFrom(reader.getClass())) {
-      canUseIDV = false;
-    }
 
     w.close();
     int maxDoc = reader.maxDoc();
 
-    Sort sortWithinGroup = new Sort(new SortField("id", SortField.Type.INT, true));
+    Sort sortWithinGroup = new Sort(new SortField("id_1", SortField.Type.INT, true));
     AbstractAllGroupHeadsCollector<?> allGroupHeadsCollector = createRandomCollector(groupField, sortWithinGroup);
     indexSearcher.search(new TermQuery(new Term("content", "random")), allGroupHeadsCollector);
     assertTrue(arrayContains(new int[]{2, 3, 5, 7}, allGroupHeadsCollector.retrieveGroupHeads()));
@@ -156,13 +160,13 @@
     assertTrue(openBitSetContains(new int[]{1, 5}, allGroupHeadsCollector.retrieveGroupHeads(maxDoc), maxDoc));
 
     // STRING sort type triggers different implementation
-    Sort sortWithinGroup2 = new Sort(new SortField("id", SortField.Type.STRING, true));
+    Sort sortWithinGroup2 = new Sort(new SortField("id_2", SortField.Type.STRING, true));
     allGroupHeadsCollector = createRandomCollector(groupField, sortWithinGroup2);
     indexSearcher.search(new TermQuery(new Term("content", "random")), allGroupHeadsCollector);
     assertTrue(arrayContains(new int[]{2, 3, 5, 7}, allGroupHeadsCollector.retrieveGroupHeads()));
     assertTrue(openBitSetContains(new int[]{2, 3, 5, 7}, allGroupHeadsCollector.retrieveGroupHeads(maxDoc), maxDoc));
 
-    Sort sortWithinGroup3 = new Sort(new SortField("id", SortField.Type.STRING, false));
+    Sort sortWithinGroup3 = new Sort(new SortField("id_2", SortField.Type.STRING, false));
     allGroupHeadsCollector = createRandomCollector(groupField, sortWithinGroup3);
     indexSearcher.search(new TermQuery(new Term("content", "random")), allGroupHeadsCollector);
     // 7 b/c higher doc id wins, even if order of field is in not in reverse.
@@ -402,6 +406,7 @@
       for (int a : actual) {
         if (e == a) {
           found = true;
+          break;
         }
       }
 
@@ -539,11 +544,10 @@
     return collector;
   }
 
-  private void addGroupField(Document doc, String groupField, String value, boolean canUseIDV, DocValuesType valueType) {
-    doc.add(new TextField(groupField, value, Field.Store.YES));
-    if (canUseIDV) {
-      Field valuesField = null;
-      switch(valueType) {
+  private void addGroupField(Document doc, String groupField, String value, DocValuesType valueType) {
+    doc.add(new TextField(groupField, value, Field.Store.NO));
+    Field valuesField = null;
+    switch(valueType) {
       case BINARY:
         valuesField = new BinaryDocValuesField(groupField + "_dv", new BytesRef(value));
         break;
@@ -552,9 +556,8 @@
         break;
       default:
         fail("unhandled type");
-      }
-      doc.add(valuesField);
     }
+    doc.add(valuesField);
   }
 
   private static class GroupDoc {
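
For reference, the refactored helper in full as a standalone sketch (not part of the patch): the SORTED branch sits outside the hunk above, so its body here is an assumption modeled on the BINARY branch, and the class name is a placeholder.

package org.apache.lucene.search.grouping;

import org.apache.lucene.document.BinaryDocValuesField;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.SortedDocValuesField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.FieldInfo.DocValuesType;
import org.apache.lucene.util.BytesRef;

public class GroupFieldExample {

  // The group value is always indexed as text and always gets a parallel
  // doc-values field named "<groupField>_dv".
  public static void addGroupField(Document doc, String groupField, String value, DocValuesType valueType) {
    doc.add(new TextField(groupField, value, Field.Store.NO));
    switch (valueType) {
      case BINARY:
        doc.add(new BinaryDocValuesField(groupField + "_dv", new BytesRef(value)));
        break;
      case SORTED:
        doc.add(new SortedDocValuesField(groupField + "_dv", new BytesRef(value)));
        break;
      default:
        throw new IllegalArgumentException("unhandled type: " + valueType);
    }
  }
}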
diff --git a/lucene/grouping/src/test/org/apache/lucene/search/grouping/TestGrouping.java b/lucene/grouping/src/test/org/apache/lucene/search/grouping/TestGrouping.java
index 7f435d4..5ddf60f 100644
--- a/lucene/grouping/src/test/org/apache/lucene/search/grouping/TestGrouping.java
+++ b/lucene/grouping/src/test/org/apache/lucene/search/grouping/TestGrouping.java
@@ -827,12 +827,14 @@
           for(SortField sf : docSort.getSort()) {
             if (sf.getType() == SortField.Type.SCORE) {
               getScores = true;
+              break;
             }
           }
 
           for(SortField sf : groupSort.getSort()) {
             if (sf.getType() == SortField.Type.SCORE) {
               getScores = true;
+              break;
             }
           }
 
diff --git a/lucene/highlighter/src/java/org/apache/lucene/search/postingshighlight/PostingsHighlighter.java b/lucene/highlighter/src/java/org/apache/lucene/search/postingshighlight/PostingsHighlighter.java
index 850c77a..b52fcbe 100644
--- a/lucene/highlighter/src/java/org/apache/lucene/search/postingshighlight/PostingsHighlighter.java
+++ b/lucene/highlighter/src/java/org/apache/lucene/search/postingshighlight/PostingsHighlighter.java
@@ -669,7 +669,7 @@
     public void stringField(FieldInfo fieldInfo, String value) throws IOException {
       assert currentField >= 0;
       StringBuilder builder = builders[currentField];
-      if (builder.length() > 0) {
+      if (builder.length() > 0 && builder.length() < maxLength) {
         builder.append(' '); // for the offset gap, TODO: make this configurable
       }
       if (builder.length() + value.length() > maxLength) {
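
The rest of stringField() is outside this hunk, so the following standalone sketch (not Lucene code; names are placeholders) only illustrates why the added builder.length() < maxLength guard matters: once earlier values have already filled the builder to maxLength, appending the offset-gap space would push the accumulated text past the limit.

// Append multiple values into a builder capped at maxLength, inserting a single
// ' ' gap between values only while there is still room for it.
public class CappedConcat {

  public static String concat(String[] values, int maxLength) {
    StringBuilder builder = new StringBuilder();
    for (String value : values) {
      if (builder.length() > 0 && builder.length() < maxLength) {
        builder.append(' '); // offset gap between values
      }
      if (builder.length() + value.length() > maxLength) {
        builder.append(value, 0, maxLength - builder.length()); // truncate the last value
        break;
      }
      builder.append(value);
    }
    return builder.toString();
  }

  public static void main(String[] args) {
    String[] values = { "This is a multivalued field", "This is a multivalued field", "This is a multivalued field" };
    System.out.println(concat(values, 40).length()); // prints 40: the cap is never exceeded
  }
}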
diff --git a/lucene/highlighter/src/test/org/apache/lucene/search/postingshighlight/TestPostingsHighlighter.java b/lucene/highlighter/src/test/org/apache/lucene/search/postingshighlight/TestPostingsHighlighter.java
index 699950e..8308bed 100644
--- a/lucene/highlighter/src/test/org/apache/lucene/search/postingshighlight/TestPostingsHighlighter.java
+++ b/lucene/highlighter/src/test/org/apache/lucene/search/postingshighlight/TestPostingsHighlighter.java
@@ -123,6 +123,43 @@
     dir.close();
   }
   
+  // simple test with multiple values that make a result longer than maxLength.
+  public void testMaxLengthWithMultivalue() throws Exception {
+    Directory dir = newDirectory();
+    // use simpleanalyzer for more natural tokenization (else "test." is a token)
+    IndexWriterConfig iwc = newIndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random(), MockTokenizer.SIMPLE, true));
+    iwc.setMergePolicy(newLogMergePolicy());
+    RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwc);
+    
+    FieldType offsetsType = new FieldType(TextField.TYPE_STORED);
+    offsetsType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS);
+    Document doc = new Document();
+    
+    for(int i = 0; i < 3 ; i++) {
+      Field body = new Field("body", "", offsetsType);
+      body.setStringValue("This is a multivalued field");
+      doc.add(body);
+    }
+    
+    iw.addDocument(doc);
+    
+    IndexReader ir = iw.getReader();
+    iw.close();
+    
+    IndexSearcher searcher = newSearcher(ir);
+    PostingsHighlighter highlighter = new PostingsHighlighter(40);
+    Query query = new TermQuery(new Term("body", "field"));
+    TopDocs topDocs = searcher.search(query, null, 10, Sort.INDEXORDER);
+    assertEquals(1, topDocs.totalHits);
+    String snippets[] = highlighter.highlight("body", query, searcher, topDocs);
+    assertEquals(1, snippets.length);
+    assertTrue("Snippet should have maximum 40 characters plus the pre and post tags",
+        snippets[0].length() == (40 + "<b></b>".length()));
+    
+    ir.close();
+    dir.close();
+  }
+  
   public void testMultipleFields() throws Exception {
     Directory dir = newDirectory();
     IndexWriterConfig iwc = newIndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random(), MockTokenizer.SIMPLE, true));
diff --git a/lucene/join/src/java/org/apache/lucene/search/join/TermsQuery.java b/lucene/join/src/java/org/apache/lucene/search/join/TermsQuery.java
index f4f2eb1..81d5ddd 100644
--- a/lucene/join/src/java/org/apache/lucene/search/join/TermsQuery.java
+++ b/lucene/join/src/java/org/apache/lucene/search/join/TermsQuery.java
@@ -38,6 +38,7 @@
 class TermsQuery extends MultiTermQuery {
 
   private final BytesRefHash terms;
+  private final int[] ords;
   private final Query fromQuery; // Used for equals() only
 
   /**
@@ -48,6 +49,7 @@
     super(field);
     this.fromQuery = fromQuery;
     this.terms = terms;
+    ords = terms.sort(BytesRef.getUTF8SortedAsUnicodeComparator());
   }
 
   @Override
@@ -56,7 +58,7 @@
       return TermsEnum.EMPTY;
     }
 
-    return new SeekingTermSetTermsEnum(terms.iterator(null), this.terms);
+    return new SeekingTermSetTermsEnum(terms.iterator(null), this.terms, ords);
   }
 
   @Override
@@ -104,12 +106,12 @@
     private BytesRef seekTerm;
     private int upto = 0;
 
-    SeekingTermSetTermsEnum(TermsEnum tenum, BytesRefHash terms) {
+    SeekingTermSetTermsEnum(TermsEnum tenum, BytesRefHash terms, int[] ords) {
       super(tenum);
       this.terms = terms;
-
+      this.ords = ords;
+      comparator = BytesRef.getUTF8SortedAsUnicodeComparator();
       lastElement = terms.size() - 1;
-      ords = terms.sort(comparator = tenum.getComparator());
       lastTerm = terms.get(ords[lastElement], new BytesRef());
       seekTerm = terms.get(ords[upto], spare);
     }
diff --git a/lucene/licenses/commons-logging-1.1.1.jar.sha1 b/lucene/licenses/commons-logging-1.1.1.jar.sha1
new file mode 100644
index 0000000..f585b03
--- /dev/null
+++ b/lucene/licenses/commons-logging-1.1.1.jar.sha1
@@ -0,0 +1 @@
+5043bfebc3db072ed80fbd362e7caf00e885d8ae

diff --git a/lucene/licenses/commons-logging-LICENSE-ASL.txt b/lucene/licenses/commons-logging-LICENSE-ASL.txt
new file mode 100644
index 0000000..75b5248
--- /dev/null
+++ b/lucene/licenses/commons-logging-LICENSE-ASL.txt
@@ -0,0 +1,202 @@
+

+                                 Apache License

+                           Version 2.0, January 2004

+                        http://www.apache.org/licenses/

+

+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

+

+   1. Definitions.

+

+      "License" shall mean the terms and conditions for use, reproduction,

+      and distribution as defined by Sections 1 through 9 of this document.

+

+      "Licensor" shall mean the copyright owner or entity authorized by

+      the copyright owner that is granting the License.

+

+      "Legal Entity" shall mean the union of the acting entity and all

+      other entities that control, are controlled by, or are under common

+      control with that entity. For the purposes of this definition,

+      "control" means (i) the power, direct or indirect, to cause the

+      direction or management of such entity, whether by contract or

+      otherwise, or (ii) ownership of fifty percent (50%) or more of the

+      outstanding shares, or (iii) beneficial ownership of such entity.

+

+      "You" (or "Your") shall mean an individual or Legal Entity

+      exercising permissions granted by this License.

+

+      "Source" form shall mean the preferred form for making modifications,

+      including but not limited to software source code, documentation

+      source, and configuration files.

+

+      "Object" form shall mean any form resulting from mechanical

+      transformation or translation of a Source form, including but

+      not limited to compiled object code, generated documentation,

+      and conversions to other media types.

+

+      "Work" shall mean the work of authorship, whether in Source or

+      Object form, made available under the License, as indicated by a

+      copyright notice that is included in or attached to the work

+      (an example is provided in the Appendix below).

+

+      "Derivative Works" shall mean any work, whether in Source or Object

+      form, that is based on (or derived from) the Work and for which the

+      editorial revisions, annotations, elaborations, or other modifications

+      represent, as a whole, an original work of authorship. For the purposes

+      of this License, Derivative Works shall not include works that remain

+      separable from, or merely link (or bind by name) to the interfaces of,

+      the Work and Derivative Works thereof.

+

+      "Contribution" shall mean any work of authorship, including

+      the original version of the Work and any modifications or additions

+      to that Work or Derivative Works thereof, that is intentionally

+      submitted to Licensor for inclusion in the Work by the copyright owner

+      or by an individual or Legal Entity authorized to submit on behalf of

+      the copyright owner. For the purposes of this definition, "submitted"

+      means any form of electronic, verbal, or written communication sent

+      to the Licensor or its representatives, including but not limited to

+      communication on electronic mailing lists, source code control systems,

+      and issue tracking systems that are managed by, or on behalf of, the

+      Licensor for the purpose of discussing and improving the Work, but

+      excluding communication that is conspicuously marked or otherwise

+      designated in writing by the copyright owner as "Not a Contribution."

+

+      "Contributor" shall mean Licensor and any individual or Legal Entity

+      on behalf of whom a Contribution has been received by Licensor and

+      subsequently incorporated within the Work.

+

+   2. Grant of Copyright License. Subject to the terms and conditions of

+      this License, each Contributor hereby grants to You a perpetual,

+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable

+      copyright license to reproduce, prepare Derivative Works of,

+      publicly display, publicly perform, sublicense, and distribute the

+      Work and such Derivative Works in Source or Object form.

+

+   3. Grant of Patent License. Subject to the terms and conditions of

+      this License, each Contributor hereby grants to You a perpetual,

+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable

+      (except as stated in this section) patent license to make, have made,

+      use, offer to sell, sell, import, and otherwise transfer the Work,

+      where such license applies only to those patent claims licensable

+      by such Contributor that are necessarily infringed by their

+      Contribution(s) alone or by combination of their Contribution(s)

+      with the Work to which such Contribution(s) was submitted. If You

+      institute patent litigation against any entity (including a

+      cross-claim or counterclaim in a lawsuit) alleging that the Work

+      or a Contribution incorporated within the Work constitutes direct

+      or contributory patent infringement, then any patent licenses

+      granted to You under this License for that Work shall terminate

+      as of the date such litigation is filed.

+

+   4. Redistribution. You may reproduce and distribute copies of the

+      Work or Derivative Works thereof in any medium, with or without

+      modifications, and in Source or Object form, provided that You

+      meet the following conditions:

+

+      (a) You must give any other recipients of the Work or

+          Derivative Works a copy of this License; and

+

+      (b) You must cause any modified files to carry prominent notices

+          stating that You changed the files; and

+

+      (c) You must retain, in the Source form of any Derivative Works

+          that You distribute, all copyright, patent, trademark, and

+          attribution notices from the Source form of the Work,

+          excluding those notices that do not pertain to any part of

+          the Derivative Works; and

+

+      (d) If the Work includes a "NOTICE" text file as part of its

+          distribution, then any Derivative Works that You distribute must

+          include a readable copy of the attribution notices contained

+          within such NOTICE file, excluding those notices that do not

+          pertain to any part of the Derivative Works, in at least one

+          of the following places: within a NOTICE text file distributed

+          as part of the Derivative Works; within the Source form or

+          documentation, if provided along with the Derivative Works; or,

+          within a display generated by the Derivative Works, if and

+          wherever such third-party notices normally appear. The contents

+          of the NOTICE file are for informational purposes only and

+          do not modify the License. You may add Your own attribution

+          notices within Derivative Works that You distribute, alongside

+          or as an addendum to the NOTICE text from the Work, provided

+          that such additional attribution notices cannot be construed

+          as modifying the License.

+

+      You may add Your own copyright statement to Your modifications and

+      may provide additional or different license terms and conditions

+      for use, reproduction, or distribution of Your modifications, or

+      for any such Derivative Works as a whole, provided Your use,

+      reproduction, and distribution of the Work otherwise complies with

+      the conditions stated in this License.

+

+   5. Submission of Contributions. Unless You explicitly state otherwise,

+      any Contribution intentionally submitted for inclusion in the Work

+      by You to the Licensor shall be under the terms and conditions of

+      this License, without any additional terms or conditions.

+      Notwithstanding the above, nothing herein shall supersede or modify

+      the terms of any separate license agreement you may have executed

+      with Licensor regarding such Contributions.

+

+   6. Trademarks. This License does not grant permission to use the trade

+      names, trademarks, service marks, or product names of the Licensor,

+      except as required for reasonable and customary use in describing the

+      origin of the Work and reproducing the content of the NOTICE file.

+

+   7. Disclaimer of Warranty. Unless required by applicable law or

+      agreed to in writing, Licensor provides the Work (and each

+      Contributor provides its Contributions) on an "AS IS" BASIS,

+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or

+      implied, including, without limitation, any warranties or conditions

+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A

+      PARTICULAR PURPOSE. You are solely responsible for determining the

+      appropriateness of using or redistributing the Work and assume any

+      risks associated with Your exercise of permissions under this License.

+

+   8. Limitation of Liability. In no event and under no legal theory,

+      whether in tort (including negligence), contract, or otherwise,

+      unless required by applicable law (such as deliberate and grossly

+      negligent acts) or agreed to in writing, shall any Contributor be

+      liable to You for damages, including any direct, indirect, special,

+      incidental, or consequential damages of any character arising as a

+      result of this License or out of the use or inability to use the

+      Work (including but not limited to damages for loss of goodwill,

+      work stoppage, computer failure or malfunction, or any and all

+      other commercial damages or losses), even if such Contributor

+      has been advised of the possibility of such damages.

+

+   9. Accepting Warranty or Additional Liability. While redistributing

+      the Work or Derivative Works thereof, You may choose to offer,

+      and charge a fee for, acceptance of support, warranty, indemnity,

+      or other liability obligations and/or rights consistent with this

+      License. However, in accepting such obligations, You may act only

+      on Your own behalf and on Your sole responsibility, not on behalf

+      of any other Contributor, and only if You agree to indemnify,

+      defend, and hold each Contributor harmless for any liability

+      incurred by, or claims asserted against, such Contributor by reason

+      of your accepting any such warranty or additional liability.

+

+   END OF TERMS AND CONDITIONS

+

+   APPENDIX: How to apply the Apache License to your work.

+

+      To apply the Apache License to your work, attach the following

+      boilerplate notice, with the fields enclosed by brackets "[]"

+      replaced with your own identifying information. (Don't include

+      the brackets!)  The text should be enclosed in the appropriate

+      comment syntax for the file format. We also recommend that a

+      file or class name and description of purpose be included on the

+      same "printed page" as the copyright notice for easier

+      identification within third-party archives.

+

+   Copyright [yyyy] [name of copyright owner]

+

+   Licensed under the Apache License, Version 2.0 (the "License");

+   you may not use this file except in compliance with the License.

+   You may obtain a copy of the License at

+

+       http://www.apache.org/licenses/LICENSE-2.0

+

+   Unless required by applicable law or agreed to in writing, software

+   distributed under the License is distributed on an "AS IS" BASIS,

+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

+   See the License for the specific language governing permissions and

+   limitations under the License.

diff --git a/lucene/licenses/commons-logging-NOTICE.txt b/lucene/licenses/commons-logging-NOTICE.txt
new file mode 100644
index 0000000..9c695f2
--- /dev/null
+++ b/lucene/licenses/commons-logging-NOTICE.txt
@@ -0,0 +1,6 @@
+Apache Commons Logging
+Copyright 2003-2013 The Apache Software Foundation
+
+This product includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
diff --git a/lucene/licenses/javax.servlet-3.0.0.v201112011016.jar.sha1 b/lucene/licenses/javax.servlet-3.0.0.v201112011016.jar.sha1
new file mode 100644
index 0000000..5914ca6
--- /dev/null
+++ b/lucene/licenses/javax.servlet-3.0.0.v201112011016.jar.sha1
@@ -0,0 +1 @@
+0aaaa85845fb5c59da00193f06b8e5278d8bf3f8

diff --git a/lucene/licenses/javax.servlet-LICENSE-CDDL.txt b/lucene/licenses/javax.servlet-LICENSE-CDDL.txt
new file mode 100644
index 0000000..cade048
--- /dev/null
+++ b/lucene/licenses/javax.servlet-LICENSE-CDDL.txt
@@ -0,0 +1,263 @@
+COMMON DEVELOPMENT AND DISTRIBUTION LICENSE (CDDL) Version 1.0

+

+1. Definitions.

+

+   1.1. Contributor. means each individual or entity that creates or contributes to the creation of Modifications.

+

+   1.2. Contributor Version. means the combination of the Original Software, prior Modifications used by a Contributor (if any), and the Modifications made by that particular Contributor.

+

+   1.3. Covered Software. means (a) the Original Software, or (b) Modifications, or (c) the combination of files containing Original Software with files containing Modifications, in each case including portions thereof.

+

+   1.4. Executable. means the Covered Software in any form other than Source Code.

+

+   1.5. Initial Developer. means the individual or entity that first makes Original Software available under this License.

+

+   1.6. Larger Work. means a work which combines Covered Software or portions thereof with code not governed by the terms of this License.

+

+   1.7. License. means this document.

+

+   1.8. Licensable. means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently acquired, any and all of the rights conveyed herein.

+

+   1.9. Modifications. means the Source Code and Executable form of any of the following:

+

+        A. Any file that results from an addition to, deletion from or modification of the contents of a file containing Original Software or previous Modifications;

+

+        B. Any new file that contains any part of the Original Software or previous Modification; or

+

+        C. Any new file that is contributed or otherwise made available under the terms of this License.

+

+   1.10. Original Software. means the Source Code and Executable form of computer software code that is originally released under this License.

+

+   1.11. Patent Claims. means any patent claim(s), now owned or hereafter acquired, including without limitation, method, process, and apparatus claims, in any patent Licensable by grantor.

+

+   1.12. Source Code. means (a) the common form of computer software code in which modifications are made and (b) associated documentation included in or with such code.

+

+   1.13. You. (or .Your.) means an individual or a legal entity exercising rights under, and complying with all of the terms of, this License. For legal entities, .You. includes any entity which controls, is controlled by, or is under common control with You. For purposes of this definition, .control. means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity.

+

+2. License Grants.

+

+      2.1. The Initial Developer Grant.

+

+      Conditioned upon Your compliance with Section 3.1 below and subject to third party intellectual property claims, the Initial Developer hereby grants You a world-wide, royalty-free, non-exclusive license:

+

+         (a) under intellectual property rights (other than patent or trademark) Licensable by Initial Developer, to use, reproduce, modify, display, perform, sublicense and distribute the Original Software (or portions thereof), with or without Modifications, and/or as part of a Larger Work; and

+

+         (b) under Patent Claims infringed by the making, using or selling of Original Software, to make, have made, use, practice, sell, and offer for sale, and/or otherwise dispose of the Original Software (or portions thereof).

+

+        (c) The licenses granted in Sections 2.1(a) and (b) are effective on the date Initial Developer first distributes or otherwise makes the Original Software available to a third party under the terms of this License.

+

+        (d) Notwithstanding Section 2.1(b) above, no patent license is granted: (1) for code that You delete from the Original Software, or (2) for infringements caused by: (i) the modification of the Original Software, or (ii) the combination of the Original Software with other software or devices.

+

+    2.2. Contributor Grant.

+

+    Conditioned upon Your compliance with Section 3.1 below and subject to third party intellectual property claims, each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:

+

+        (a) under intellectual property rights (other than patent or trademark) Licensable by Contributor to use, reproduce, modify, display, perform, sublicense and distribute the Modifications created by such Contributor (or portions thereof), either on an unmodified basis, with other Modifications, as Covered Software and/or as part of a Larger Work; and

+

+        (b) under Patent Claims infringed by the making, using, or selling of Modifications made by that Contributor either alone and/or in combination with its Contributor Version (or portions of such combination), to make, use, sell, offer for sale, have made, and/or otherwise dispose of: (1) Modifications made by that Contributor (or portions thereof); and (2) the combination of Modifications made by that Contributor with its Contributor Version (or portions of such combination).

+

+        (c) The licenses granted in Sections 2.2(a) and 2.2(b) are effective on the date Contributor first distributes or otherwise makes the Modifications available to a third party.

+

+        (d) Notwithstanding Section 2.2(b) above, no patent license is granted: (1) for any code that Contributor has deleted from the Contributor Version; (2) for infringements caused by: (i) third party modifications of Contributor Version, or (ii) the combination of Modifications made by that Contributor with other software (except as part of the Contributor Version) or other devices; or (3) under Patent Claims infringed by Covered Software in the absence of Modifications made by that Contributor.

+

+3. Distribution Obligations.

+

+      3.1. Availability of Source Code.

+      Any Covered Software that You distribute or otherwise make available in Executable form must also be made available in Source Code form and that Source Code form must be distributed only under the terms of this License. You must include a copy of this License with every copy of the Source Code form of the Covered Software You distribute or otherwise make available. You must inform recipients of any such Covered Software in Executable form as to how they can obtain such Covered Software in Source Code form in a reasonable manner on or through a medium customarily used for software exchange.

+

+      3.2. Modifications.

+      The Modifications that You create or to which You contribute are governed by the terms of this License. You represent that You believe Your Modifications are Your original creation(s) and/or You have sufficient rights to grant the rights conveyed by this License.

+

+      3.3. Required Notices.

+      You must include a notice in each of Your Modifications that identifies You as the Contributor of the Modification. You may not remove or alter any copyright, patent or trademark notices contained within the Covered Software, or any notices of licensing or any descriptive text giving attribution to any Contributor or the Initial Developer.

+

+      3.4. Application of Additional Terms.

+      You may not offer or impose any terms on any Covered Software in Source Code form that alters or restricts the applicable version of this License or the recipients. rights hereunder. You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, you may do so only on Your own behalf, and not on behalf of the Initial Developer or any Contributor. You must make it absolutely clear that any such warranty, support, indemnity or liability obligation is offered by You alone, and You hereby agree to indemnify the Initial Developer and every Contributor for any liability incurred by the Initial Developer or such Contributor as a result of warranty, support, indemnity or liability terms You offer.

+

+      3.5. Distribution of Executable Versions.

+      You may distribute the Executable form of the Covered Software under the terms of this License or under the terms of a license of Your choice, which may contain terms different from this License, provided that You are in compliance with the terms of this License and that the license for the Executable form does not attempt to limit or alter the recipient.s rights in the Source Code form from the rights set forth in this License. If You distribute the Covered Software in Executable form under a different license, You must make it absolutely clear that any terms which differ from this License are offered by You alone, not by the Initial Developer or Contributor. You hereby agree to indemnify the Initial Developer and every Contributor for any liability incurred by the Initial Developer or such Contributor as a result of any such terms You offer.

+

+      3.6. Larger Works.

+      You may create a Larger Work by combining Covered Software with other code not governed by the terms of this License and distribute the Larger Work as a single product. In such a case, You must make sure the requirements of this License are fulfilled for the Covered Software.

+

+4. Versions of the License.

+

+      4.1. New Versions.

+      Sun Microsystems, Inc. is the initial license steward and may publish revised and/or new versions of this License from time to time. Each version will be given a distinguishing version number. Except as provided in Section 4.3, no one other than the license steward has the right to modify this License.

+

+      4.2. Effect of New Versions.

+      You may always continue to use, distribute or otherwise make the Covered Software available under the terms of the version of the License under which You originally received the Covered Software. If the Initial Developer includes a notice in the Original Software prohibiting it from being distributed or otherwise made available under any subsequent version of the License, You must distribute and make the Covered Software available under the terms of the version of the License under which You originally received the Covered Software. Otherwise, You may also choose to use, distribute or otherwise make the Covered Software available under the terms of any subsequent version of the License published by the license steward.

+

+      4.3. Modified Versions.

+      When You are an Initial Developer and You want to create a new license for Your Original Software, You may create and use a modified version of this License if You: (a) rename the license and remove any references to the name of the license steward (except to note that the license differs from this License); and (b) otherwise make it clear that the license contains terms which differ from this License.

+

+5. DISCLAIMER OF WARRANTY.

+

+   COVERED SOFTWARE IS PROVIDED UNDER THIS LICENSE ON AN .AS IS. BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT THE COVERED SOFTWARE IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR NON-INFRINGING. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE COVERED SOFTWARE IS WITH YOU. SHOULD ANY COVERED SOFTWARE PROVE DEFECTIVE IN ANY RESPECT, YOU (NOT THE INITIAL DEVELOPER OR ANY OTHER CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER OF WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF ANY COVERED SOFTWARE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER.

+

+6. TERMINATION.

+

+      6.1. This License and the rights granted hereunder will terminate automatically if You fail to comply with terms herein and fail to cure such breach within 30 days of becoming aware of the breach. Provisions which, by their nature, must remain in effect beyond the termination of this License shall survive.

+

+      6.2. If You assert a patent infringement claim (excluding declaratory judgment actions) against Initial Developer or a Contributor (the Initial Developer or Contributor against whom You assert such claim is referred to as .Participant.) alleging that the Participant Software (meaning the Contributor Version where the Participant is a Contributor or the Original Software where the Participant is the Initial Developer) directly or indirectly infringes any patent, then any and all rights granted directly or indirectly to You by such Participant, the Initial Developer (if the Initial Developer is not the Participant) and all Contributors under Sections 2.1 and/or 2.2 of this License shall, upon 60 days notice from Participant terminate prospectively and automatically at the expiration of such 60 day notice period, unless if within such 60 day period You withdraw Your claim with respect to the Participant Software against such Participant either unilaterally or pursuant to a written agreement with Participant.

+

+      6.3. In the event of termination under Sections 6.1 or 6.2 above, all end user licenses that have been validly granted by You or any distributor hereunder prior to termination (excluding licenses granted to You by any distributor) shall survive termination.

+

+7. LIMITATION OF LIABILITY.

+

+   UNDER NO CIRCUMSTANCES AND UNDER NO LEGAL THEORY, WHETHER TORT (INCLUDING NEGLIGENCE), CONTRACT, OR OTHERWISE, SHALL YOU, THE INITIAL DEVELOPER, ANY OTHER CONTRIBUTOR, OR ANY DISTRIBUTOR OF COVERED SOFTWARE, OR ANY SUPPLIER OF ANY OF SUCH PARTIES, BE LIABLE TO ANY PERSON FOR ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES OF ANY CHARACTER INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOST PROFITS, LOSS OF GOODWILL, WORK STOPPAGE, COMPUTER FAILURE OR MALFUNCTION, OR ANY AND ALL OTHER COMMERCIAL DAMAGES OR LOSSES, EVEN IF SUCH PARTY SHALL HAVE BEEN INFORMED OF THE POSSIBILITY OF SUCH DAMAGES. THIS LIMITATION OF LIABILITY SHALL NOT APPLY TO LIABILITY FOR DEATH OR PERSONAL INJURY RESULTING FROM SUCH PARTY.S NEGLIGENCE TO THE EXTENT APPLICABLE LAW PROHIBITS SUCH LIMITATION. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THIS EXCLUSION AND LIMITATION MAY NOT APPLY TO YOU.

+

+8. U.S. GOVERNMENT END USERS.

+

+   The Covered Software is a .commercial item,. as that term is defined in 48 C.F.R. 2.101 (Oct. 1995), consisting of .commercial computer software. (as that term is defined at 48 C.F.R. ? 252.227-7014(a)(1)) and .commercial computer software documentation. as such terms are used in 48 C.F.R. 12.212 (Sept. 1995). Consistent with 48 C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4 (June 1995), all U.S. Government End Users acquire Covered Software with only those rights set forth herein. This U.S. Government Rights clause is in lieu of, and supersedes, any other FAR, DFAR, or other clause or provision that addresses Government rights in computer software under this License.

+

+9. MISCELLANEOUS.

+

+   This License represents the complete agreement concerning subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. This License shall be governed by the law of the jurisdiction specified in a notice contained within the Original Software (except to the extent applicable law, if any, provides otherwise), excluding such jurisdiction.s conflict-of-law provisions. Any litigation relating to this License shall be subject to the jurisdiction of the courts located in the jurisdiction and venue specified in a notice contained within the Original Software, with the losing party responsible for costs, including, without limitation, court costs and reasonable attorneys. fees and expenses. The application of the United Nations Convention on Contracts for the International Sale of Goods is expressly excluded. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not apply to this License. You agree that You alone are responsible for compliance with the United States export administration regulations (and the export control laws and regulation of any other countries) when You use, distribute or otherwise make available any Covered Software.

+

+10. RESPONSIBILITY FOR CLAIMS.

+

+   As between Initial Developer and the Contributors, each party is responsible for claims and damages arising, directly or indirectly, out of its utilization of rights under this License and You agree to work with Initial Developer and Contributors to distribute such responsibility on an equitable basis. Nothing herein is intended or shall be deemed to constitute any admission of liability.

+

+   NOTICE PURSUANT TO SECTION 9 OF THE COMMON DEVELOPMENT AND DISTRIBUTION LICENSE (CDDL)

+

+   The code released under the CDDL shall be governed by the laws of the State of California (excluding conflict-of-law provisions). Any litigation relating to this License shall be subject to the jurisdiction of the Federal Courts of the Northern District of California and the state courts of the State of California, with venue lying in Santa Clara County, California.

+

+

+The GNU General Public License (GPL) Version 2, June 1991

+

+

+Copyright (C) 1989, 1991 Free Software Foundation, Inc. 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

+

+Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

+

+Preamble

+

+The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Library General Public License instead.) You can apply it to your programs, too.

+

+When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.

+

+To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.

+

+For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.

+

+We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.

+

+Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.

+

+Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.

+

+The precise terms and conditions for copying, distribution and modification follow.

+

+

+TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION

+

+0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you".

+

+Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does.

+

+1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program.

+

+You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.

+

+2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:

+

+   a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change.

+

+   b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License.

+

+   c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.)

+

+These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.

+

+Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program.

+

+In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.

+

+3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:

+

+   a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

+

+   b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

+

+   c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)

+

+The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.

+

+If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.

+

+4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

+

+5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.

+

+6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.

+

+7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.

+

+If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.

+

+It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.

+

+This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.

+

+8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.

+

+9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

+

+Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.

+

+10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.

+

+NO WARRANTY

+

+11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

+

+12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

+

+END OF TERMS AND CONDITIONS

+

+

+How to Apply These Terms to Your New Programs

+

+If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.

+

+To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

+

+   One line to give the program's name and a brief idea of what it does.

+

+   Copyright (C)

+

+   This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

+

+   This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

+

+   You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

+

+Also add information on how to contact you by electronic and paper mail.

+

+If the program is interactive, make it output a short notice like this when it starts in an interactive mode:

+

+   Gnomovision version 69, Copyright (C) year name of author

+   Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.

+

+The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.

+

+You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:

+

+   Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker.

+

+   signature of Ty Coon, 1 April 1989

+   Ty Coon, President of Vice

+

+This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Library General Public License instead of this License.

+

+

+"CLASSPATH" EXCEPTION TO THE GPL VERSION 2

+

+Certain source files distributed by Sun Microsystems, Inc. are subject to the following clarification and special exception to the GPL Version 2, but only where Sun has expressly included in the particular source file's header the words

+

+"Sun designates this particular file as subject to the "Classpath" exception as provided by Sun in the License file that accompanied this code."

+

+Linking this library statically or dynamically with other modules is making a combined work based on this library. Thus, the terms and conditions of the GNU General Public License Version 2 cover the whole combination.

+

+As a special exception, the copyright holders of this library give you permission to link this library with independent modules to produce an executable, regardless of the license terms of these independent modules, and to copy and distribute the resulting executable under terms of your choice, provided that you also meet, for each linked independent module, the terms and conditions of the license of that module. An independent module is a module which is not derived from or based on this library. If you modify this library, you may extend this exception to your version of the library, but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version.

diff --git a/lucene/licenses/javax.servlet-NOTICE.txt b/lucene/licenses/javax.servlet-NOTICE.txt
new file mode 100644
index 0000000..a79ae9c
--- /dev/null
+++ b/lucene/licenses/javax.servlet-NOTICE.txt
@@ -0,0 +1,2 @@
+javax.servlet-*.jar is under the CDDL license; the original source

+code for this can be found at http://www.eclipse.org/jetty/downloads.php

diff --git a/lucene/misc/src/java/org/apache/lucene/index/IndexSplitter.java b/lucene/misc/src/java/org/apache/lucene/index/IndexSplitter.java
index 6413079..36304ea 100644
--- a/lucene/misc/src/java/org/apache/lucene/index/IndexSplitter.java
+++ b/lucene/misc/src/java/org/apache/lucene/index/IndexSplitter.java
@@ -141,7 +141,7 @@
       SegmentInfo newInfo = new SegmentInfo(destFSDir, info.getVersion(), info.name, info.getDocCount(), 
                                             info.getUseCompoundFile(),
                                             info.getCodec(), info.getDiagnostics(), info.attributes());
-      destInfos.add(new SegmentInfoPerCommit(newInfo, infoPerCommit.getDelCount(), infoPerCommit.getDelGen(), -1L));
+      destInfos.add(new SegmentInfoPerCommit(newInfo, infoPerCommit.getDelCount(), infoPerCommit.getDelGen(), infoPerCommit.getUpdateGen()));
       // now copy files over
       Collection<String> files = infoPerCommit.files();
       for (final String srcName : files) {
diff --git a/lucene/misc/src/java/org/apache/lucene/misc/SweetSpotSimilarity.java b/lucene/misc/src/java/org/apache/lucene/misc/SweetSpotSimilarity.java
index 06f3cab..5ebff53 100644
--- a/lucene/misc/src/java/org/apache/lucene/misc/SweetSpotSimilarity.java
+++ b/lucene/misc/src/java/org/apache/lucene/misc/SweetSpotSimilarity.java
@@ -158,7 +158,7 @@
    * @see #baselineTf
    */
   @Override
-  public float tf(int freq) {
+  public float tf(float freq) {
     return baselineTf(freq);
   }
   
diff --git a/lucene/misc/src/java/org/apache/lucene/util/fst/ListOfOutputs.java b/lucene/misc/src/java/org/apache/lucene/util/fst/ListOfOutputs.java
index 8db654a..99b2f3f 100644
--- a/lucene/misc/src/java/org/apache/lucene/util/fst/ListOfOutputs.java
+++ b/lucene/misc/src/java/org/apache/lucene/util/fst/ListOfOutputs.java
@@ -38,6 +38,15 @@
  * more than one output, as this requires pushing all
  * multi-output values to a final state.
  *
+ * <p>NOTE: the only way to create multiple outputs is to
+ * add the same input to the FST multiple times in a row.  This is
+ * how the FST maps a single input to multiple outputs (e.g. you
+ * cannot pass a List&lt;Object&gt; to {@link Builder#add}).  If
+ * your outputs are longs, and you need at most 2, then use
+ * {@link UpToTwoPositiveIntOutputs} instead since it stores
+ * the outputs more compactly (by stealing a bit from each
+ * long value).
+ *
  * <p>NOTE: this cannot wrap itself (ie you cannot make an
  * FST with List&lt;List&lt;Object&gt;&gt; outputs using this).
  *
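
(Editorial sketch, not part of the patch: the ListOfOutputs note above says multiple outputs come from adding the same input to the Builder more than once in a row. A minimal, hedged illustration of that pattern follows; the key "foo" and the long values are made up, and the wiring assumes the FST Builder/Util APIs on this branch.)

    import org.apache.lucene.util.BytesRef;
    import org.apache.lucene.util.IntsRef;
    import org.apache.lucene.util.fst.Builder;
    import org.apache.lucene.util.fst.FST;
    import org.apache.lucene.util.fst.ListOfOutputs;
    import org.apache.lucene.util.fst.PositiveIntOutputs;
    import org.apache.lucene.util.fst.Util;

    // Sketch only: build an FST where the key "foo" carries two long outputs.
    static Object twoOutputsForFoo() throws java.io.IOException {
      // Wrap a single-valued Outputs so one input may map to several values.
      ListOfOutputs<Long> outputs = new ListOfOutputs<Long>(PositiveIntOutputs.getSingleton());
      Builder<Object> builder = new Builder<Object>(FST.INPUT_TYPE.BYTE1, outputs);

      IntsRef scratch = new IntsRef();
      IntsRef key = Util.toIntsRef(new BytesRef("foo"), scratch);

      // Adding the same input twice in a row is what creates the multiple outputs.
      builder.add(key, 17L);
      builder.add(key, 42L);
      FST<Object> fst = builder.finish();

      // The lookup result is the single value, or a list when the key was added more than once.
      return Util.get(fst, key);
    }

(Per the note, UpToTwoPositiveIntOutputs is the more compact choice when the outputs are longs and at most two per input are needed.)
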
diff --git a/lucene/misc/src/java/org/apache/lucene/util/fst/UpToTwoPositiveIntOutputs.java b/lucene/misc/src/java/org/apache/lucene/util/fst/UpToTwoPositiveIntOutputs.java
index 04cbbf1..78e2715 100644
--- a/lucene/misc/src/java/org/apache/lucene/util/fst/UpToTwoPositiveIntOutputs.java
+++ b/lucene/misc/src/java/org/apache/lucene/util/fst/UpToTwoPositiveIntOutputs.java
@@ -17,21 +17,6 @@
  * limitations under the License.
  */
 
-/**
- * An FST {@link Outputs} implementation where each output
- * is one or two non-negative long values.  If it's a
- * single output, Long is returned; else, TwoLongs.  Order
- * is preserved in the TwoLongs case, ie .first is the first
- * input/output added to Builder, and .second is the
- * second.  You cannot store 0 output with this (that's
- * reserved to mean "no output")!
- *
- * NOTE: the resulting FST is not guaranteed to be minimal!
- * See {@link Builder}.
- *
- * @lucene.experimental
- */
-
 import java.io.IOException;
 
 import org.apache.lucene.store.DataInput;
@@ -46,11 +31,21 @@
  * second.  You cannot store 0 output with this (that's
  * reserved to mean "no output")!
  *
- * NOTE: the resulting FST is not guaranteed to be minimal!
+ * <p>NOTE: the only way to create a TwoLongs output is to
+ * add the same input to the FST twice in a row.  This is
+ * how the FST maps a single input to two outputs (e.g. you
+ * cannot pass a TwoLongs to {@link Builder#add}).  If you
+ * need more than two then use {@link ListOfOutputs}, but if
+ * you only have at most 2 then this implementation will
+ * require fewer bytes as it steals one bit from each long
+ * value.
+ *
+ * <p>NOTE: the resulting FST is not guaranteed to be minimal!
  * See {@link Builder}.
  *
  * @lucene.experimental
  */
+
 public final class UpToTwoPositiveIntOutputs extends Outputs<Object> {
 
   /** Holds two long outputs. */
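
(Editorial sketch, not part of the patch: the reworked UpToTwoPositiveIntOutputs note says a key added twice in a row yields a TwoLongs, while a key added once yields a plain Long. A hedged illustration of reading a value back, assuming TwoLongs exposes its two values as .first/.second in insertion order as the javadoc describes; `fst` stands for an FST already built with this Outputs, and the key "foo" is made up.)

    // Illustrative only: distinguish a single output from a pair.
    Object out = Util.get(fst, Util.toIntsRef(new BytesRef("foo"), new IntsRef()));
    if (out instanceof UpToTwoPositiveIntOutputs.TwoLongs) {
      UpToTwoPositiveIntOutputs.TwoLongs two = (UpToTwoPositiveIntOutputs.TwoLongs) out;
      long first = two.first;    // first value added for the key
      long second = two.second;  // second value added for the key
    } else {
      long only = (Long) out;    // the key was added only once
    }
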
diff --git a/lucene/misc/src/test/org/apache/lucene/index/sorter/SorterTestBase.java b/lucene/misc/src/test/org/apache/lucene/index/sorter/SorterTestBase.java
index 1597c703..38456a5 100644
--- a/lucene/misc/src/test/org/apache/lucene/index/sorter/SorterTestBase.java
+++ b/lucene/misc/src/test/org/apache/lucene/index/sorter/SorterTestBase.java
@@ -98,13 +98,8 @@
     }
     
     @Override
-    public ExactSimScorer exactSimScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
-      return in.exactSimScorer(weight, context);
-    }
-    
-    @Override
-    public SloppySimScorer sloppySimScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
-      return in.sloppySimScorer(weight, context);
+    public SimScorer simScorer(SimWeight weight, AtomicReaderContext context) throws IOException {
+      return in.simScorer(weight, context);
     }
     
   }
diff --git a/lucene/misc/src/test/org/apache/lucene/misc/SweetSpotSimilarityTest.java b/lucene/misc/src/test/org/apache/lucene/misc/SweetSpotSimilarityTest.java
index f238ba7..f5e35f2 100644
--- a/lucene/misc/src/test/org/apache/lucene/misc/SweetSpotSimilarityTest.java
+++ b/lucene/misc/src/test/org/apache/lucene/misc/SweetSpotSimilarityTest.java
@@ -246,7 +246,7 @@
   
     SweetSpotSimilarity ss = new SweetSpotSimilarity() {
         @Override
-        public float tf(int freq) {
+        public float tf(float freq) {
           return hyperbolicTf(freq);
         }
       };
diff --git a/lucene/queries/src/java/org/apache/lucene/queries/CustomScoreQuery.java b/lucene/queries/src/java/org/apache/lucene/queries/CustomScoreQuery.java
index 5cf2c74..602fa8b 100755
--- a/lucene/queries/src/java/org/apache/lucene/queries/CustomScoreQuery.java
+++ b/lucene/queries/src/java/org/apache/lucene/queries/CustomScoreQuery.java
@@ -58,7 +58,7 @@
    * @param subQuery the sub query whose scored is being customized. Must not be null. 
    */
   public CustomScoreQuery(Query subQuery) {
-    this(subQuery, new Query[0]);
+    this(subQuery, new FunctionQuery[0]);
   }
 
   /**
@@ -67,9 +67,9 @@
    * @param scoringQuery a value source query whose scores are used in the custom score
    * computation.  This parameter is optional - it can be null.
    */
-  public CustomScoreQuery(Query subQuery, Query scoringQuery) {
+  public CustomScoreQuery(Query subQuery, FunctionQuery scoringQuery) {
     this(subQuery, scoringQuery!=null ? // don't want an array that contains a single null..
-        new Query[] {scoringQuery} : new Query[0]);
+        new FunctionQuery[] {scoringQuery} : new FunctionQuery[0]);
   }
 
   /**
@@ -78,7 +78,7 @@
    * @param scoringQueries value source queries whose scores are used in the custom score
    * computation.  This parameter is optional - it can be null or even an empty array.
    */
-  public CustomScoreQuery(Query subQuery, Query... scoringQueries) {
+  public CustomScoreQuery(Query subQuery, FunctionQuery... scoringQueries) {
     this.subQuery = subQuery;
     this.scoringQueries = scoringQueries !=null?
         scoringQueries : new Query[0];
diff --git a/lucene/queries/src/java/org/apache/lucene/queries/function/valuesource/TFValueSource.java b/lucene/queries/src/java/org/apache/lucene/queries/function/valuesource/TFValueSource.java
index 0fc4fc9..12554be 100755
--- a/lucene/queries/src/java/org/apache/lucene/queries/function/valuesource/TFValueSource.java
+++ b/lucene/queries/src/java/org/apache/lucene/queries/function/valuesource/TFValueSource.java
@@ -29,7 +29,7 @@
 import java.util.Map;
 
 /** 
- * Function that returns {@link TFIDFSimilarity#tf(int)}
+ * Function that returns {@link TFIDFSimilarity#tf(float)}
  * for every document.
  * <p>
  * Note that the configured Similarity for the field must be
diff --git a/lucene/queryparser/src/java/org/apache/lucene/queryparser/flexible/standard/CommonQueryParserConfiguration.java b/lucene/queryparser/src/java/org/apache/lucene/queryparser/flexible/standard/CommonQueryParserConfiguration.java
index 3f688e6..7c305f3 100644
--- a/lucene/queryparser/src/java/org/apache/lucene/queryparser/flexible/standard/CommonQueryParserConfiguration.java
+++ b/lucene/queryparser/src/java/org/apache/lucene/queryparser/flexible/standard/CommonQueryParserConfiguration.java
@@ -33,13 +33,9 @@
 public interface CommonQueryParserConfiguration {
   
   /**
-   * Set to <code>true</code> to allow leading wildcard characters.
-   * <p>
-   * When set, <code>*</code> or <code>?</code> are allowed as the first
-   * character of a PrefixQuery and WildcardQuery. Note that this can produce
-   * very slow queries on big indexes.
-   * <p>
-   * Default: false.
+   * Whether terms of multi-term queries (e.g., wildcard,
+   * prefix, fuzzy and range) should be automatically
+   * lower-cased or not.  Default is <code>true</code>.
    */
   public void setLowercaseExpandedTerms(boolean lowercaseExpandedTerms);
   
diff --git a/lucene/queryparser/src/java/org/apache/lucene/queryparser/flexible/standard/config/StandardQueryConfigHandler.java b/lucene/queryparser/src/java/org/apache/lucene/queryparser/flexible/standard/config/StandardQueryConfigHandler.java
index b65e1a7..a7fd34a 100644
--- a/lucene/queryparser/src/java/org/apache/lucene/queryparser/flexible/standard/config/StandardQueryConfigHandler.java
+++ b/lucene/queryparser/src/java/org/apache/lucene/queryparser/flexible/standard/config/StandardQueryConfigHandler.java
@@ -58,7 +58,7 @@
     final public static ConfigurationKey<Boolean> ENABLE_POSITION_INCREMENTS = ConfigurationKey.newInstance();
     
     /**
-     * Key used to set whether expanded terms should be expanded
+     * Key used to set whether expanded terms should be lower-cased
      * 
      * @see StandardQueryParser#setLowercaseExpandedTerms(boolean)
      * @see StandardQueryParser#getLowercaseExpandedTerms()
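
(Editorial sketch, not part of the patch: the two javadoc corrections above describe what setLowercaseExpandedTerms controls; a small hedged example of the setter in use. The helper name is made up, and it assumes the flexible StandardQueryParser, which implements CommonQueryParserConfiguration.)

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.queryparser.flexible.core.QueryNodeException;
    import org.apache.lucene.queryparser.flexible.standard.StandardQueryParser;
    import org.apache.lucene.search.Query;

    // Hypothetical helper: parse without lower-casing wildcard/prefix/fuzzy/range terms.
    static Query parseKeepingCase(Analyzer analyzer, String userQuery, String defaultField)
        throws QueryNodeException {
      StandardQueryParser parser = new StandardQueryParser(analyzer);
      parser.setLowercaseExpandedTerms(false); // default is true, per the javadoc above
      return parser.parse(userQuery, defaultField);
    }
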
diff --git a/lucene/replicator/build.xml b/lucene/replicator/build.xml
index 3786902..8413495 100644
--- a/lucene/replicator/build.xml
+++ b/lucene/replicator/build.xml
@@ -31,8 +31,8 @@
 
 	<target name="resolve" depends="common.resolve">
 		<sequential>
-	    <!-- servlet-api.jar -->
-	    <ivy:retrieve conf="servlet" log="download-only" type="orbit" pattern="lib/servlet-api-3.0.jar"/>
+	    <!-- javax.servlet jar -->
+	    <ivy:retrieve conf="servlet" log="download-only" type="orbit"/>
 		</sequential>
 	</target>
 
diff --git a/lucene/replicator/ivy.xml b/lucene/replicator/ivy.xml
index fe3bd34..0fa8dd7 100644
--- a/lucene/replicator/ivy.xml
+++ b/lucene/replicator/ivy.xml
@@ -39,8 +39,7 @@
     <dependency org="org.eclipse.jetty" name="jetty-io" rev="&jetty.version;" transitive="false" conf="jetty->default"/>
     <dependency org="org.eclipse.jetty" name="jetty-continuation" rev="&jetty.version;" transitive="false" conf="jetty->default"/>
     <dependency org="org.eclipse.jetty" name="jetty-http" rev="&jetty.version;" transitive="false" conf="jetty->default"/>
-    <dependency org="org.slf4j" name="slf4j-api" rev="1.6.6" transitive="false" conf="logging->default"/>
-    <dependency org="org.slf4j" name="jcl-over-slf4j" rev="1.6.6" transitive="false" conf="logging->default"/>
+    <dependency org="commons-logging" name="commons-logging" rev="1.1.1" transitive="false" conf="logging->default"/>
     <dependency org="org.eclipse.jetty.orbit" name="javax.servlet" rev="3.0.0.v201112011016" transitive="false" conf="servlet->default">
       <artifact name="javax.servlet" type="orbit" ext="jar"/>
     </dependency>
diff --git a/lucene/replicator/src/test/org/apache/lucene/replicator/LocalReplicatorTest.java b/lucene/replicator/src/test/org/apache/lucene/replicator/LocalReplicatorTest.java
index 1fb9152..c55f9c3 100755
--- a/lucene/replicator/src/test/org/apache/lucene/replicator/LocalReplicatorTest.java
+++ b/lucene/replicator/src/test/org/apache/lucene/replicator/LocalReplicatorTest.java
@@ -19,6 +19,7 @@
 
 import java.io.FileNotFoundException;
 import java.io.IOException;
+import java.nio.file.NoSuchFileException;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map.Entry;
@@ -156,6 +157,8 @@
       fail("should have failed obtaining an unrecognized file");
     } catch (FileNotFoundException e) {
       // expected
+    } catch (NoSuchFileException e) {
+      // expected (only java 1.7)
     }
   }
   
diff --git a/lucene/replicator/src/test/org/apache/lucene/replicator/ReplicatorTestCase.java b/lucene/replicator/src/test/org/apache/lucene/replicator/ReplicatorTestCase.java
index 704578a..f621408 100755
--- a/lucene/replicator/src/test/org/apache/lucene/replicator/ReplicatorTestCase.java
+++ b/lucene/replicator/src/test/org/apache/lucene/replicator/ReplicatorTestCase.java
@@ -17,7 +17,7 @@
  * limitations under the License.
  */
 
-import java.net.SocketException;
+import java.util.Random;
 
 import org.apache.http.conn.ClientConnectionManager;
 import org.apache.http.impl.conn.PoolingClientConnectionManager;
@@ -26,17 +26,17 @@
 import org.eclipse.jetty.server.Connector;
 import org.eclipse.jetty.server.Handler;
 import org.eclipse.jetty.server.Server;
+import org.eclipse.jetty.server.bio.SocketConnector;
+import org.eclipse.jetty.server.nio.SelectChannelConnector;
+import org.eclipse.jetty.server.session.HashSessionIdManager;
+import org.eclipse.jetty.server.ssl.SslSelectChannelConnector;
+import org.eclipse.jetty.server.ssl.SslSocketConnector;
+import org.eclipse.jetty.util.ssl.SslContextFactory;
 import org.eclipse.jetty.util.thread.QueuedThreadPool;
 import org.junit.AfterClass;
 
 @SuppressCodecs("Lucene3x")
-public class ReplicatorTestCase extends LuceneTestCase {
-  
-  private static final int BASE_PORT = 7000;
-  
-  // if a test calls newServer() multiple times, or some ports already failed,
-  // don't start from BASE_PORT again
-  private static int lastPortUsed = -1;
+public abstract class ReplicatorTestCase extends LuceneTestCase {
   
   private static ClientConnectionManager clientConnectionManager;
   
@@ -53,39 +53,85 @@
    * {@link #serverPort(Server)}.
    */
   public static synchronized Server newHttpServer(Handler handler) throws Exception {
-    int port = lastPortUsed == -1 ? BASE_PORT : lastPortUsed + 1;
-    Server server = null;
-    while (true) {
-      try {
-        server = new Server(port);
-        
-        server.setHandler(handler);
-        
-        QueuedThreadPool threadPool = new QueuedThreadPool();
-        threadPool.setDaemon(true);
-        threadPool.setMaxIdleTimeMs(0);
-        server.setThreadPool(threadPool);
-        
-        // this will test the port
-        server.start();
-        
-        // if here, port is available
-        lastPortUsed = port;
-        return server;
-      } catch (SocketException e) {
-        stopHttpServer(server);
-        // this is ok, we'll try the next port until successful.
-        ++port;
+    Server server = new Server(0);
+    
+    server.setHandler(handler);
+    
+    final String connectorName = System.getProperty("tests.jettyConnector", "SelectChannel");
+    
+    // if this property is true, then jetty will be configured to use SSL
+    // leveraging the same system properties as java to specify
+    // the keystore/truststore if they are set
+    //
+    // This means we will use the same truststore, keystore (and keys) for
+    // the server as well as any client actions taken by this JVM in
+    // talking to that server, but for the purposes of testing that should 
+    // be good enough
+    final boolean useSsl = Boolean.getBoolean("tests.jettySsl");
+    final SslContextFactory sslcontext = new SslContextFactory(false);
+    
+    if (useSsl) {
+      if (null != System.getProperty("javax.net.ssl.keyStore")) {
+        sslcontext.setKeyStorePath
+        (System.getProperty("javax.net.ssl.keyStore"));
       }
+      if (null != System.getProperty("javax.net.ssl.keyStorePassword")) {
+        sslcontext.setKeyStorePassword
+        (System.getProperty("javax.net.ssl.keyStorePassword"));
+      }
+      if (null != System.getProperty("javax.net.ssl.trustStore")) {
+        sslcontext.setTrustStore
+        (System.getProperty("javax.net.ssl.trustStore"));
+      }
+      if (null != System.getProperty("javax.net.ssl.trustStorePassword")) {
+        sslcontext.setTrustStorePassword
+        (System.getProperty("javax.net.ssl.trustStorePassword"));
+      }
+      sslcontext.setNeedClientAuth(Boolean.getBoolean("tests.jettySsl.clientAuth"));
     }
+    
+    final Connector connector;
+    final QueuedThreadPool threadPool;
+    if ("SelectChannel".equals(connectorName)) {
+      final SelectChannelConnector c = useSsl ? new SslSelectChannelConnector(sslcontext) : new SelectChannelConnector();
+      c.setReuseAddress(true);
+      c.setLowResourcesMaxIdleTime(1500);
+      connector = c;
+      threadPool = (QueuedThreadPool) c.getThreadPool();
+    } else if ("Socket".equals(connectorName)) {
+      final SocketConnector c = useSsl ? new SslSocketConnector(sslcontext) : new SocketConnector();
+      c.setReuseAddress(true);
+      connector = c;
+      threadPool = (QueuedThreadPool) c.getThreadPool();
+    } else {
+      throw new IllegalArgumentException("Illegal value for system property 'tests.jettyConnector': " + connectorName);
+    }
+    
+    connector.setPort(0);
+    connector.setHost("127.0.0.1");
+    if (threadPool != null) {
+      threadPool.setDaemon(true);
+      threadPool.setMaxThreads(10000);
+      threadPool.setMaxIdleTimeMs(5000);
+      threadPool.setMaxStopTimeMs(30000);
+    }
+    
+    server.setConnectors(new Connector[] {connector});
+    server.setSessionIdManager(new HashSessionIdManager(new Random(random().nextLong())));
+    
+    server.start();
+    
+    return server;
   }
   
-  /**
-   * Returns a {@link Server}'s port. This method assumes that no
-   * {@link Connector}s were added to the Server besides the default one.
-   */
-  public static int serverPort(Server httpServer) {
-    return httpServer.getConnectors()[0].getPort();
+  /** Returns a {@link Server}'s port. */
+  public static int serverPort(Server server) {
+    return server.getConnectors()[0].getLocalPort();
+  }
+  
+  /** Returns a {@link Server}'s host. */
+  public static String serverHost(Server server) {
+    return server.getConnectors()[0].getHost();
   }
   
   /**
diff --git a/lucene/replicator/src/test/org/apache/lucene/replicator/http/HttpReplicatorTest.java b/lucene/replicator/src/test/org/apache/lucene/replicator/http/HttpReplicatorTest.java
index 28499ff..46b5942 100755
--- a/lucene/replicator/src/test/org/apache/lucene/replicator/http/HttpReplicatorTest.java
+++ b/lucene/replicator/src/test/org/apache/lucene/replicator/http/HttpReplicatorTest.java
@@ -50,6 +50,7 @@
   private DirectoryReader reader;
   private Server server;
   private int port;
+  private String host;
   private Directory serverIndexDir, handlerIndexDir;
   
   private void startServer() throws Exception {
@@ -59,12 +60,14 @@
     replicationHandler.addServletWithMapping(servlet, ReplicationService.REPLICATION_CONTEXT + "/*");
     server = newHttpServer(replicationHandler);
     port = serverPort(server);
+    host = serverHost(server);
   }
   
   @Before
   @Override
   public void setUp() throws Exception {
     super.setUp();
+    System.setProperty("org.eclipse.jetty.LEVEL", "DEBUG"); // sets stderr logging to DEBUG level
     clientWorkDir = _TestUtil.getTempDir("httpReplicatorTest");
     handlerIndexDir = newDirectory();
     serverIndexDir = newDirectory();
@@ -81,6 +84,7 @@
   public void tearDown() throws Exception {
     stopHttpServer(server);
     IOUtils.close(reader, writer, handlerIndexDir, serverIndexDir);
+    System.clearProperty("org.eclipse.jetty.LEVEL");
     super.tearDown();
   }
   
@@ -101,7 +105,7 @@
   
   @Test
   public void testBasic() throws Exception {
-    Replicator replicator = new HttpReplicator("localhost", port, ReplicationService.REPLICATION_CONTEXT + "/s1", 
+    Replicator replicator = new HttpReplicator(host, port, ReplicationService.REPLICATION_CONTEXT + "/s1", 
         getClientConnectionManager());
     ReplicationClient client = new ReplicationClient(replicator, new IndexReplicationHandler(handlerIndexDir, null), 
         new PerSessionDirectoryFactory(clientWorkDir));
diff --git a/lucene/sandbox/src/java/org/apache/lucene/sandbox/queries/SlowFuzzyTermsEnum.java b/lucene/sandbox/src/java/org/apache/lucene/sandbox/queries/SlowFuzzyTermsEnum.java
index de8539e..f63c1a1 100644
--- a/lucene/sandbox/src/java/org/apache/lucene/sandbox/queries/SlowFuzzyTermsEnum.java
+++ b/lucene/sandbox/src/java/org/apache/lucene/sandbox/queries/SlowFuzzyTermsEnum.java
@@ -31,9 +31,12 @@
 import org.apache.lucene.util.StringHelper;
 import org.apache.lucene.util.UnicodeUtil;
 
-/** Classic fuzzy TermsEnum for enumerating all terms that are similar
+/** Potentially slow fuzzy TermsEnum for enumerating all terms that are similar
  * to the specified filter term.
- *
+ * <p> If the minSimilarity or maxEdits is greater than the Automaton's
+ * allowable range, this backs off to the classic (brute force)
+ * fuzzy terms enum method by calling FuzzyTermsEnum's getAutomatonEnum.
+ * </p>
  * <p>Term enumerations are always ordered by
  * {@link #getComparator}.  Each term in the enumeration is
  * greater than all that precede it.</p>
@@ -103,18 +106,43 @@
     private final IntsRef utf32 = new IntsRef(20);
     
     /**
-     * The termCompare method in FuzzyTermEnum uses Levenshtein distance to 
+     * <p>The termCompare method in FuzzyTermEnum uses Levenshtein distance to 
      * calculate the distance between the given term and the comparing term. 
+     * </p>
+     * <p>If the minSimilarity is >= 1.0, this uses the maxEdits as the comparison.
+     * Otherwise, this method uses the following logic to calculate similarity.
+     * <pre>
+     *   similarity = 1 - ((float)distance / (float) (prefixLength + Math.min(textlen, targetlen)));
+     *   </pre>
+     * where distance is the Levenshtein distance for the two words.
+     * </p>
+     * 
      */
     @Override
     protected final AcceptStatus accept(BytesRef term) {
       if (StringHelper.startsWith(term, prefixBytesRef)) {
         UnicodeUtil.UTF8toUTF32(term, utf32);
-        final float similarity = similarity(utf32.ints, realPrefixLength, utf32.length - realPrefixLength);
-        if (similarity > minSimilarity) {
+        final int distance = calcDistance(utf32.ints, realPrefixLength, utf32.length - realPrefixLength);
+       
+        //Integer.MIN_VALUE is the sentinel that Levenshtein stopped early
+        if (distance == Integer.MIN_VALUE){
+           return AcceptStatus.NO;
+        }
+        //no need to calc similarity, if raw is true and distance > maxEdits
+        if (raw == true && distance > maxEdits){
+              return AcceptStatus.NO;
+        } 
+        final float similarity = calcSimilarity(distance, (utf32.length - realPrefixLength), text.length);
+        
+        //if raw is true, then distance must also be <= maxEdits by now
+        //given the previous if statement
+        if (raw == true ||
+              (raw == false && similarity > minSimilarity)) {
           boostAtt.setBoost((similarity - minSimilarity) * scale_factor);
           return AcceptStatus.YES;
-        } else return AcceptStatus.NO;
+        } else {
+           return AcceptStatus.NO;
+        }
       } else {
         return AcceptStatus.END;
       }
@@ -125,52 +153,34 @@
      ******************************/
     
     /**
-     * <p>Similarity returns a number that is 1.0f or less (including negative numbers)
-     * based on how similar the Term is compared to a target term.  It returns
-     * exactly 0.0f when
-     * <pre>
-     *    editDistance &gt; maximumEditDistance</pre>
-     * Otherwise it returns:
-     * <pre>
-     *    1 - (editDistance / length)</pre>
-     * where length is the length of the shortest term (text or target) including a
-     * prefix that are identical and editDistance is the Levenshtein distance for
-     * the two words.</p>
-     *
+     * <p>calcDistance returns the Levenshtein distance between the query term
+     * and the target term.</p>
+     * 
      * <p>Embedded within this algorithm is a fail-fast Levenshtein distance
      * algorithm.  The fail-fast algorithm differs from the standard Levenshtein
      * distance algorithm in that it is aborted if it is discovered that the
      * minimum distance between the words is greater than some threshold.
-     *
-     * <p>To calculate the maximum distance threshold we use the following formula:
-     * <pre>
-     *     (1 - minimumSimilarity) * length</pre>
-     * where length is the shortest term including any prefix that is not part of the
-     * similarity comparison.  This formula was derived by solving for what maximum value
-     * of distance returns false for the following statements:
-     * <pre>
-     *   similarity = 1 - ((float)distance / (float) (prefixLength + Math.min(textlen, targetlen)));
-     *   return (similarity > minimumSimilarity);</pre>
-     * where distance is the Levenshtein distance for the two words.
-     * </p>
+
      * <p>Levenshtein distance (also known as edit distance) is a measure of similarity
      * between two strings where the distance is measured as the number of character
      * deletions, insertions or substitutions required to transform one string to
      * the other string.
      * @param target the target word or phrase
-     * @return the similarity,  0.0 or less indicates that it matches less than the required
-     * threshold and 1.0 indicates that the text and target are identical
+     * @param offset the offset at which to start the comparison
+     * @param length the length of what's left of the string to compare
+     * @return the number of edits or Integer.MIN_VALUE if the edit distance is
+     * greater than maxDistance.
      */
-    private final float similarity(final int[] target, int offset, int length) {
+    private final int calcDistance(final int[] target, int offset, int length) {
       final int m = length;
       final int n = text.length;
       if (n == 0)  {
         //we don't have anything to compare.  That means if we just add
         //the letters for m we get the new word
-        return realPrefixLength == 0 ? 0.0f : 1.0f - ((float) m / realPrefixLength);
+        return m;
       }
       if (m == 0) {
-        return realPrefixLength == 0 ? 0.0f : 1.0f - ((float) n / realPrefixLength);
+        return n;
       }
       
       final int maxDistance = calculateMaxDistance(m);
@@ -183,7 +193,7 @@
         //which is 8-3 or more precisely Math.abs(3-8).
         //if our maximum edit distance is 4, then we can discard this word
         //without looking at it.
-        return Float.NEGATIVE_INFINITY;
+        return Integer.MIN_VALUE;
       }
       
       // init matrix d
@@ -214,7 +224,7 @@
         if (j > maxDistance && bestPossibleEditDistance > maxDistance) {  //equal is okay, but not greater
           //the closest the target can be to the text is just too far away.
           //this target is leaving the party early.
-          return Float.NEGATIVE_INFINITY;
+          return Integer.MIN_VALUE;
         }
 
         // copy current distance counts to 'previous row' distance counts: swap p and d
@@ -226,12 +236,17 @@
       // our last action in the above loop was to switch d and p, so p now
       // actually has the most recent cost counts
 
+      return p[n];
+    }
+    
+    private float calcSimilarity(int edits, int m, int n){
       // this will return less than 0.0 when the edit distance is
       // greater than the number of characters in the shorter word.
       // but this was the formula that was previously used in FuzzyTermEnum,
       // so it has not been changed (even though minimumSimilarity must be
       // greater than 0.0)
-      return 1.0f - ((float)p[n] / (float) (realPrefixLength + Math.min(n, m)));
+      
+      return 1.0f - ((float)edits / (float) (realPrefixLength + Math.min(n, m)));
     }
     
     /**
diff --git a/lucene/sandbox/src/test/org/apache/lucene/sandbox/queries/TestSlowFuzzyQuery.java b/lucene/sandbox/src/test/org/apache/lucene/sandbox/queries/TestSlowFuzzyQuery.java
index a4a125d..c823807 100644
--- a/lucene/sandbox/src/test/org/apache/lucene/sandbox/queries/TestSlowFuzzyQuery.java
+++ b/lucene/sandbox/src/test/org/apache/lucene/sandbox/queries/TestSlowFuzzyQuery.java
@@ -43,6 +43,9 @@
 public class TestSlowFuzzyQuery extends LuceneTestCase {
 
   public void testFuzziness() throws Exception {
+    //every test with SlowFuzzyQuery.defaultMinSimilarity
+    //is exercising the Automaton, not the brute force linear method
+    
     Directory directory = newDirectory();
     RandomIndexWriter writer = new RandomIndexWriter(random(), directory);
     addDoc("aaaaa", writer);
@@ -194,6 +197,30 @@
     directory.close();
   }
 
+  public void testFuzzinessLong2() throws Exception {
+     //Lucene-5033
+     Directory directory = newDirectory();
+     RandomIndexWriter writer = new RandomIndexWriter(random(), directory);
+     addDoc("abcdef", writer);
+     addDoc("segment", writer);
+
+     IndexReader reader = writer.getReader();
+     IndexSearcher searcher = newSearcher(reader);
+     writer.close();
+
+     SlowFuzzyQuery query;
+     
+     query = new SlowFuzzyQuery(new Term("field", "abcxxxx"), 3f, 0);   
+     ScoreDoc[] hits = searcher.search(query, null, 1000).scoreDocs;
+     assertEquals(0, hits.length);
+     
+     query = new SlowFuzzyQuery(new Term("field", "abcxxxx"), 4f, 0);   
+     hits = searcher.search(query, null, 1000).scoreDocs;
+     assertEquals(1, hits.length);
+     reader.close();
+     directory.close();
+  }
+  
   public void testFuzzinessLong() throws Exception {
     Directory directory = newDirectory();
     RandomIndexWriter writer = new RandomIndexWriter(random(), directory);
@@ -385,7 +412,6 @@
   
   public void testGiga() throws Exception {
 
-    MockAnalyzer analyzer = new MockAnalyzer(random());
     Directory index = newDirectory();
     RandomIndexWriter w = new RandomIndexWriter(random(), index);
 
@@ -440,25 +466,21 @@
     assertEquals(1, hits.length);
     assertEquals("foobar", searcher.doc(hits[0].doc).get("field"));
     
-    // TODO: cannot really be supported given the legacy scoring
-    // system which scores negative, if the distance > min term len,
-    // so such matches were always impossible with lucene 3.x, etc
-    //
-    //q = new SlowFuzzyQuery(new Term("field", "t"), 3);
-    //hits = searcher.search(q, 10).scoreDocs;
-    //assertEquals(1, hits.length);
-    //assertEquals("test", searcher.doc(hits[0].doc).get("field"));
+    q = new SlowFuzzyQuery(new Term("field", "t"), 3);
+    hits = searcher.search(q, 10).scoreDocs;
+    assertEquals(1, hits.length);
+    assertEquals("test", searcher.doc(hits[0].doc).get("field"));
     
-    // q = new SlowFuzzyQuery(new Term("field", "a"), 4f, 0, 50);
-    // hits = searcher.search(q, 10).scoreDocs;
-    // assertEquals(1, hits.length);
-    // assertEquals("test", searcher.doc(hits[0].doc).get("field"));
+    q = new SlowFuzzyQuery(new Term("field", "a"), 4f, 0, 50);
+    hits = searcher.search(q, 10).scoreDocs;
+    assertEquals(1, hits.length);
+    assertEquals("test", searcher.doc(hits[0].doc).get("field"));
     
-    // q = new SlowFuzzyQuery(new Term("field", "a"), 6f, 0, 50);
-    // hits = searcher.search(q, 10).scoreDocs;
-    // assertEquals(2, hits.length);
-    // assertEquals("test", searcher.doc(hits[0].doc).get("field"));
-    // assertEquals("foobar", searcher.doc(hits[1].doc).get("field"));
+    q = new SlowFuzzyQuery(new Term("field", "a"), 6f, 0, 50);
+    hits = searcher.search(q, 10).scoreDocs;
+    assertEquals(2, hits.length);
+    assertEquals("test", searcher.doc(hits[0].doc).get("field"));
+    assertEquals("foobar", searcher.doc(hits[1].doc).get("field"));
     
     reader.close();
     index.close();
diff --git a/lucene/spatial/src/test/org/apache/lucene/spatial/prefix/SpatialOpRecursivePrefixTreeTest.java b/lucene/spatial/src/test/org/apache/lucene/spatial/prefix/SpatialOpRecursivePrefixTreeTest.java
index 96cc51c..8c6a317 100644
--- a/lucene/spatial/src/test/org/apache/lucene/spatial/prefix/SpatialOpRecursivePrefixTreeTest.java
+++ b/lucene/spatial/src/test/org/apache/lucene/spatial/prefix/SpatialOpRecursivePrefixTreeTest.java
@@ -58,6 +58,8 @@
 
 public class SpatialOpRecursivePrefixTreeTest extends StrategyTestCase {
 
+  static final int ITERATIONS = 10;//Test Iterations
+
   private SpatialPrefixTree grid;
 
   @Before
@@ -81,28 +83,28 @@
   }
 
   @Test
-  @Repeat(iterations = 10)
+  @Repeat(iterations = ITERATIONS)
   public void testIntersects() throws IOException {
     mySetup(-1);
     doTest(SpatialOperation.Intersects);
   }
 
   @Test
-  @Repeat(iterations = 10)
+  @Repeat(iterations = ITERATIONS)
   public void testWithin() throws IOException {
     mySetup(-1);
     doTest(SpatialOperation.IsWithin);
   }
 
   @Test
-  @Repeat(iterations = 10)
+  @Repeat(iterations = ITERATIONS)
   public void testContains() throws IOException {
     mySetup(-1);
     doTest(SpatialOperation.Contains);
   }
 
   @Test
-  @Repeat(iterations = 10)
+  @Repeat(iterations = ITERATIONS)
   public void testDisjoint() throws IOException {
     mySetup(-1);
     doTest(SpatialOperation.IsDisjointTo);
@@ -334,9 +336,10 @@
     @Override
     public SpatialRelation relate(Shape other) {
       SpatialRelation r = relateApprox(other);
-      if (r != INTERSECTS)
+      if (r != INTERSECTS && !(r == WITHIN && biasContainsThenWithin))
         return r;
-      //See if the correct answer is actually Contains
+      //See if the correct answer is actually Contains, when the indexed shapes are adjacent,
+      // creating a larger shape that contains the input shape.
       Rectangle oRect = (Rectangle)other;
       boolean pairTouches = shape1.relate(shape2).intersects();
       if (!pairTouches)
diff --git a/lucene/suggest/src/java/org/apache/lucene/search/suggest/analyzing/AnalyzingSuggester.java b/lucene/suggest/src/java/org/apache/lucene/search/suggest/analyzing/AnalyzingSuggester.java
index 66c25d7..6e797ad 100644
--- a/lucene/suggest/src/java/org/apache/lucene/search/suggest/analyzing/AnalyzingSuggester.java
+++ b/lucene/suggest/src/java/org/apache/lucene/search/suggest/analyzing/AnalyzingSuggester.java
@@ -512,7 +512,7 @@
 
       reader = new Sort.ByteSequencesReader(tempSorted);
      
-      PairOutputs<Long,BytesRef> outputs = new PairOutputs<Long,BytesRef>(PositiveIntOutputs.getSingleton(true), ByteSequenceOutputs.getSingleton());
+      PairOutputs<Long,BytesRef> outputs = new PairOutputs<Long,BytesRef>(PositiveIntOutputs.getSingleton(), ByteSequenceOutputs.getSingleton());
       Builder<Pair<Long,BytesRef>> builder = new Builder<Pair<Long,BytesRef>>(FST.INPUT_TYPE.BYTE1, outputs);
 
       // Build FST:
@@ -634,7 +634,7 @@
   public boolean load(InputStream input) throws IOException {
     DataInput dataIn = new InputStreamDataInput(input);
     try {
-      this.fst = new FST<Pair<Long,BytesRef>>(dataIn, new PairOutputs<Long,BytesRef>(PositiveIntOutputs.getSingleton(true), ByteSequenceOutputs.getSingleton()));
+      this.fst = new FST<Pair<Long,BytesRef>>(dataIn, new PairOutputs<Long,BytesRef>(PositiveIntOutputs.getSingleton(), ByteSequenceOutputs.getSingleton()));
       maxAnalyzedPathsForOneInput = dataIn.readVInt();
       hasPayloads = dataIn.readByte() == 1;
     } finally {
diff --git a/lucene/suggest/src/java/org/apache/lucene/search/suggest/fst/WFSTCompletionLookup.java b/lucene/suggest/src/java/org/apache/lucene/search/suggest/fst/WFSTCompletionLookup.java
index 7b8d782..f634bee 100644
--- a/lucene/suggest/src/java/org/apache/lucene/search/suggest/fst/WFSTCompletionLookup.java
+++ b/lucene/suggest/src/java/org/apache/lucene/search/suggest/fst/WFSTCompletionLookup.java
@@ -101,7 +101,7 @@
     TermFreqIterator iter = new WFSTTermFreqIteratorWrapper(iterator);
     IntsRef scratchInts = new IntsRef();
     BytesRef previous = null;
-    PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton(true);
+    PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
     Builder<Long> builder = new Builder<Long>(FST.INPUT_TYPE.BYTE1, outputs);
     while ((scratch = iter.next()) != null) {
       long cost = iter.weight();
@@ -136,7 +136,7 @@
   @Override
   public boolean load(InputStream input) throws IOException {
     try {
-      this.fst = new FST<Long>(new InputStreamDataInput(input), PositiveIntOutputs.getSingleton(true));
+      this.fst = new FST<Long>(new InputStreamDataInput(input), PositiveIntOutputs.getSingleton());
     } finally {
       IOUtils.close(input);
     }
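The two suggester hunks above drop the boolean argument from PositiveIntOutputs.getSingleton(). A hedged sketch (hypothetical class; the keys and weights are made up) of building and querying a tiny FST<Long> with the no-arg singleton on this branch:

```java
import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.IntsRef;
import org.apache.lucene.util.fst.Builder;
import org.apache.lucene.util.fst.FST;
import org.apache.lucene.util.fst.PositiveIntOutputs;
import org.apache.lucene.util.fst.Util;

public class NoArgSingletonExample {
  public static void main(String[] args) throws Exception {
    // new no-arg form; the old getSingleton(true) overload is gone
    PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
    Builder<Long> builder = new Builder<Long>(FST.INPUT_TYPE.BYTE1, outputs);

    IntsRef scratch = new IntsRef();
    // inputs must be added in sorted order
    builder.add(Util.toIntsRef(new BytesRef("lucene"), scratch), 42L);
    builder.add(Util.toIntsRef(new BytesRef("solr"), scratch), 7L);

    FST<Long> fst = builder.finish();
    System.out.println(Util.get(fst, new BytesRef("lucene"))); // 42
  }
}
```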
diff --git a/lucene/test-framework/src/java/org/apache/lucene/codecs/asserting/AssertingStoredFieldsFormat.java b/lucene/test-framework/src/java/org/apache/lucene/codecs/asserting/AssertingStoredFieldsFormat.java
index e4c7e99..3bb711e 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/codecs/asserting/AssertingStoredFieldsFormat.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/codecs/asserting/AssertingStoredFieldsFormat.java
@@ -63,7 +63,8 @@
     }
 
     @Override
-    public void visitDocument(int n, StoredFieldVisitor visitor, Set<String> ignoreFields) throws IOException {
+    public void visitDocument(int n, StoredFieldVisitor visitor,
+        Set<String> ignoreFields) throws IOException {
       assert n >= 0 && n < maxDoc;
       in.visitDocument(n, visitor, ignoreFields);
     }
diff --git a/lucene/test-framework/src/java/org/apache/lucene/index/BaseStoredFieldsFormatTestCase.java b/lucene/test-framework/src/java/org/apache/lucene/index/BaseStoredFieldsFormatTestCase.java
index f77980b..6884db7 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/index/BaseStoredFieldsFormatTestCase.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/index/BaseStoredFieldsFormatTestCase.java
@@ -53,6 +53,7 @@
 import org.apache.lucene.search.TermQuery;
 import org.apache.lucene.search.TopDocs;
 import org.apache.lucene.store.Directory;
+import org.apache.lucene.store.MMapDirectory;
 import org.apache.lucene.store.MockDirectoryWrapper;
 import org.apache.lucene.store.MockDirectoryWrapper.Throttling;
 import org.apache.lucene.util.BytesRef;
@@ -594,7 +595,9 @@
   public void testBigDocuments() throws IOException {
     // "big" as "much bigger than the chunk size"
     // for this test we force a FS dir
-    Directory dir = newFSDirectory(_TestUtil.getTempDir(getClass().getSimpleName()));
+    // We can't just use newFSDirectory, because this test doesn't really index anything.
+    // So if we get NRTCachingDir+SimpleText, we create massive stored fields and OOM (LUCENE-4484).
+    Directory dir = new MockDirectoryWrapper(random(), new MMapDirectory(_TestUtil.getTempDir("testBigDocuments")));
     IndexWriterConfig iwConf = newIndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random()));
     iwConf.setMaxBufferedDocs(RandomInts.randomIntBetween(random(), 2, 30));
     RandomIndexWriter iw = new RandomIndexWriter(random(), dir, iwConf);
diff --git a/lucene/test-framework/src/java/org/apache/lucene/index/RandomIndexWriter.java b/lucene/test-framework/src/java/org/apache/lucene/index/RandomIndexWriter.java
index a507a68..3fea86c 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/index/RandomIndexWriter.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/index/RandomIndexWriter.java
@@ -25,17 +25,12 @@
 import org.apache.lucene.analysis.Analyzer;
 import org.apache.lucene.analysis.MockAnalyzer;
 import org.apache.lucene.codecs.Codec;
-import org.apache.lucene.document.BinaryDocValuesField;
-import org.apache.lucene.document.Document;
-import org.apache.lucene.document.Field;
-import org.apache.lucene.document.NumericDocValuesField; 
-import org.apache.lucene.document.SortedDocValuesField; 
-import org.apache.lucene.index.FieldInfo.DocValuesType;
 import org.apache.lucene.index.IndexWriter; // javadoc
 import org.apache.lucene.search.Query;
 import org.apache.lucene.store.Directory;
-import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.InfoStream;
 import org.apache.lucene.util.LuceneTestCase;
+import org.apache.lucene.util.NullInfoStream;
 import org.apache.lucene.util.Version;
 import org.apache.lucene.util._TestUtil;
 
@@ -55,23 +50,22 @@
   private boolean getReaderCalled;
   private final Codec codec; // sugar
 
-  // Randomly calls Thread.yield so we mixup thread scheduling
-  private static final class MockIndexWriter extends IndexWriter {
-
-    private final Random r;
-
-    public MockIndexWriter(Random r, Directory dir, IndexWriterConfig conf) throws IOException {
-      super(dir, conf);
-      // TODO: this should be solved in a different way; Random should not be shared (!).
-      this.r = new Random(r.nextLong());
-    }
-
-    @Override
-    boolean testPoint(String name) {
-      if (r.nextInt(4) == 2)
-        Thread.yield();
-      return true;
-    }
+  
+  public static IndexWriter mockIndexWriter(Directory dir, IndexWriterConfig conf, Random r) throws IOException {
+    // Randomly calls Thread.yield so we mix up thread scheduling
+    final Random random = new Random(r.nextLong());
+    return mockIndexWriter(dir, conf,  new TestPoint() {
+      @Override
+      public void apply(String message) {
+        if (random.nextInt(4) == 2)
+          Thread.yield();
+      }
+    });
+  }
+  
+  public static IndexWriter mockIndexWriter(Directory dir, IndexWriterConfig conf, TestPoint testPoint) throws IOException {
+    conf.setInfoStream(new TestPointInfoStream(conf.getInfoStream(), testPoint));
+    return new IndexWriter(dir, conf);
   }
 
   /** create a RandomIndexWriter with a random config: Uses TEST_VERSION_CURRENT and MockAnalyzer */
@@ -93,7 +87,7 @@
   public RandomIndexWriter(Random r, Directory dir, IndexWriterConfig c) throws IOException {
     // TODO: this should be solved in a different way; Random should not be shared (!).
     this.r = new Random(r.nextLong());
-    w = new MockIndexWriter(r, dir, c);
+    w = mockIndexWriter(dir, c, r);
     flushAt = _TestUtil.nextInt(r, 10, 1000);
     codec = w.getConfig().getCodec();
     if (LuceneTestCase.VERBOSE) {
@@ -345,4 +339,42 @@
   public void forceMerge(int maxSegmentCount) throws IOException {
     w.forceMerge(maxSegmentCount);
   }
+  
+  private static final class TestPointInfoStream extends InfoStream {
+    private final InfoStream delegate;
+    private final TestPoint testPoint;
+    
+    public TestPointInfoStream(InfoStream delegate, TestPoint testPoint) {
+      this.delegate = delegate == null ? new NullInfoStream() : delegate;
+      this.testPoint = testPoint;
+    }
+
+    @Override
+    public void close() throws IOException {
+      delegate.close();
+    }
+
+    @Override
+    public void message(String component, String message) {
+      if ("TP".equals(component)) {
+        testPoint.apply(message);
+      }
+      if (delegate.isEnabled(component)) {
+        delegate.message(component, message);
+      }
+    }
+    
+    @Override
+    public boolean isEnabled(String component) {
+      return "TP".equals(component) || delegate.isEnabled(component);
+    }
+  }
+  
+  /**
+   * Simple callback that is invoked for each <tt>TP</tt> {@link InfoStream} component
+   * message. See also {@link RandomIndexWriter#mockIndexWriter(Directory, IndexWriterConfig, TestPoint)}.
+   */
+  public static interface TestPoint {
+    public abstract void apply(String message);
+  }
 }
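The MockIndexWriter subclass is replaced by factory methods that route IndexWriter test points through a "TP" InfoStream component. A hedged usage sketch (hypothetical test, not part of this patch) of installing a custom TestPoint via the new mockIndexWriter(Directory, IndexWriterConfig, TestPoint) overload; how many test points actually fire depends on IndexWriter internals, so the sketch only counts them:

```java
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.lucene.analysis.MockAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.RandomIndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.util.LuceneTestCase;

public class TestPointHookExample extends LuceneTestCase {
  public void testCountTestPoints() throws Exception {
    Directory dir = newDirectory();
    final AtomicInteger calls = new AtomicInteger();
    IndexWriterConfig conf = newIndexWriterConfig(TEST_VERSION_CURRENT, new MockAnalyzer(random()));
    IndexWriter w = RandomIndexWriter.mockIndexWriter(dir, conf, new RandomIndexWriter.TestPoint() {
      @Override
      public void apply(String message) {
        calls.incrementAndGet(); // fires once per "TP" InfoStream message
      }
    });
    w.addDocument(new Document());
    w.close();
    dir.close();
    // the exact count depends on IndexWriter internals, so we only report it
    System.out.println("test points seen: " + calls.get());
  }
}
```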
diff --git a/lucene/test-framework/src/java/org/apache/lucene/store/MockIndexOutputWrapper.java b/lucene/test-framework/src/java/org/apache/lucene/store/MockIndexOutputWrapper.java
index e5d61e0..0989d9e 100644
--- a/lucene/test-framework/src/java/org/apache/lucene/store/MockIndexOutputWrapper.java
+++ b/lucene/test-framework/src/java/org/apache/lucene/store/MockIndexOutputWrapper.java
@@ -55,14 +55,14 @@
     long realUsage = 0;
 
     // Enforce disk full:
-    if (dir.maxSize != 0 && freeSpace < len) {
+    if (dir.maxSize != 0 && freeSpace <= len) {
       // Compute the real disk free.  This will greatly slow
       // down our test but makes it more accurate:
       realUsage = dir.getRecomputedActualSizeInBytes();
       freeSpace = dir.maxSize - realUsage;
     }
 
-    if (dir.maxSize != 0 && freeSpace < len) {
+    if (dir.maxSize != 0 && freeSpace <= len) {
       if (freeSpace > 0) {
         realUsage += freeSpace;
         if (b != null) {
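Changing < to <= means a write that exactly consumes the remaining quota is now treated as disk full as well. A hedged sketch (hypothetical test; the file name and sizes are made up) of driving this code path through MockDirectoryWrapper.setMaxSizeInBytes:

```java
import java.io.IOException;

import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexOutput;
import org.apache.lucene.store.MockDirectoryWrapper;
import org.apache.lucene.util.LuceneTestCase;

public class DiskFullExample extends LuceneTestCase {
  public void testSimulatedDiskFull() throws Exception {
    MockDirectoryWrapper dir = newMockDirectory();
    dir.setMaxSizeInBytes(1024); // pretend the disk only has 1 KB left
    IndexOutput out = dir.createOutput("big.bin", IOContext.DEFAULT);
    byte[] block = new byte[4096];
    try {
      out.writeBytes(block, block.length); // exceeds the fake quota
      fail("expected a simulated disk-full IOException");
    } catch (IOException expected) {
      // MockIndexOutputWrapper throws once maxSize would be exceeded
    } finally {
      out.close();
      dir.close();
    }
  }
}
```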
diff --git a/lucene/tools/junit4/tests.policy b/lucene/tools/junit4/tests.policy
index d17d3c5..f8c4002 100644
--- a/lucene/tools/junit4/tests.policy
+++ b/lucene/tools/junit4/tests.policy
@@ -54,6 +54,7 @@
 
   // Solr needs those:
   permission java.net.NetPermission "*";
+  permission java.sql.SQLPermission "*";
   permission java.util.logging.LoggingPermission "control";
   permission javax.management.MBeanPermission "*", "*";
   permission javax.management.MBeanServerPermission "*";
diff --git a/solr/core/src/java/org/apache/solr/search/similarities/SweetSpotSimilarityFactory.java b/solr/core/src/java/org/apache/solr/search/similarities/SweetSpotSimilarityFactory.java
index 6a6c582..7094e6a 100644
--- a/solr/core/src/java/org/apache/solr/search/similarities/SweetSpotSimilarityFactory.java
+++ b/solr/core/src/java/org/apache/solr/search/similarities/SweetSpotSimilarityFactory.java
@@ -180,7 +180,7 @@
   private static final class HyperbolicSweetSpotSimilarity 
     extends SweetSpotSimilarity {
     @Override
-    public float tf(int freq) {
+    public float tf(float freq) {
       return hyperbolicTf(freq);
     }
   };
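The override now takes a float because term frequency is passed as a float on this branch. A hedged sketch (hypothetical subclass, not part of this patch; the 2.0f cap is an arbitrary example value) of a custom SweetSpotSimilarity written against the new signature:

```java
import org.apache.lucene.misc.SweetSpotSimilarity;

public class CappedSweetSpotSimilarity extends SweetSpotSimilarity {
  @Override
  public float tf(float freq) {
    // delegate to the inherited tf curve, but cap its contribution
    return Math.min(super.tf(freq), 2.0f);
  }
}
```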